Chapter 2: In-Database Processing

Organizations are collecting more structured and semi-structured data than ever before, and this presents both great opportunities and great challenges in analyzing all of that complex data. In a volatile and competitive economy, there has never been a greater need for proactive and agile strategies that meet these challenges by applying the analytics directly to the data rather than shuffling data around. Two key technologies dramatically improve performance when analyzing big data: in-database and in-memory analytics. I will focus on in-database analytics (processing) in this chapter.
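To make the distinction concrete, here is a minimal sketch of "moving the analytics to the data." It uses Python's built-in sqlite3 module as a stand-in for an enterprise data warehouse; the sales table and its columns are hypothetical and chosen only for illustration, not taken from any particular vendor's platform.

    # A minimal sketch of in-database processing, assuming a hypothetical
    # "sales" table. sqlite3 here stands in for an enterprise data warehouse.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO sales VALUES (?, ?)",
        [("east", 120.0), ("east", 80.0), ("west", 200.0), ("west", 150.0)],
    )

    # Approach 1: move the data to the analytics. Every row is shipped to the
    # application, which then computes the averages itself.
    rows = conn.execute("SELECT region, amount FROM sales").fetchall()
    totals, counts = {}, {}
    for region, amount in rows:
        totals[region] = totals.get(region, 0.0) + amount
        counts[region] = counts.get(region, 0) + 1
    app_side = {r: totals[r] / counts[r] for r in totals}

    # Approach 2: move the analytics to the data. The aggregation is pushed
    # down into the database engine, and only the small result set comes back.
    db_side = dict(
        conn.execute("SELECT region, AVG(amount) FROM sales GROUP BY region")
    )

    print(app_side)  # {'east': 100.0, 'west': 175.0}
    print(db_side)   # {'east': 100.0, 'west': 175.0}

Both approaches produce the same answer; the difference is that the second one avoids moving every row out of the database, which is the whole point when the table holds billions of rows rather than four.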

BACKGROUND

The concept of in-database processing was introduced in the mid-1990s, when vendors such as Teradata, IBM, and Oracle made it commercially available in their object-relational database systems. In-database capabilities were still in their infancy, however, and did not really catch on with customers until the mid-2000s. The idea of migrating analytics from analytical workstations and personal computers into a centralized enterprise data warehouse sounded promising, but customers were wary of how it would work within their processes and cultures. In addition, IT and business users in organizations considering the technology questioned what capabilities it offered to add value. At the same time, big data entered the industry and became the ...
