
In-memory technology gets the relational treatment

Leading RDBMS makers are bringing in-memory traits to their flagship offerings. Faster analytics and operations are in the offing.

As if overnight, in-memory technology has crept out of the rare worlds of high-performance computing and Wall Street trading and entered into the mainstream.

In-memory technology that bypasses disk drives and resides in main semiconductor memory got a big boost in recent years from SAP AG, which loudly trumpeted its HANA in-memory database management system -- and its use continues to widen.

The technique is seen in analytics appliances, as well as in Hadoop, NoSQL and NewSQL territories. The activity is hard to overlook.

Incumbent relational database makers have also taken notice -- adding in-memory technology to their leading SQL products to improve performance. IBM, Oracle and Microsoft have added in-memory traits to their flagship offerings, in no small part to keep up with the high velocity of business today.

Speed increases of 10 times or more for transaction processing have been reported, with data warehouse analytics speed boosts going even higher.

High-performance applications are the "sweet spot" for in-memory offerings generally, and for in-memory relational database offerings specifically, according to William McKnight, president of Plano, Texas-based McKnight Consulting Group. The usefulness of faster in-memory performance comes into play in both analytical and operational applications, he said.

High performance gets the nod

Speed-sensitive applications are a good fit for in-memory relational databases, said Andrew Mendelsohn, executive vice president of server technologies at Oracle, especially ones that "require access to large amounts of data in order to answer business-driving questions."

Oracle's in-memory lineage is deep. Since 2006, it has offered the TimesTen in-memory database, a technology that originated at HP Labs. Also, beginning in 2007, Oracle fielded Coherence, a Java-based in-memory data grid for middleware object persistence. Last year at Oracle OpenWorld 2013, the company announced the Oracle Database In-Memory option for Oracle Database 12c, which is currently in beta.

Like McKnight, Mendelsohn sees benefits for both operations and analytics. New classes of "hybrid applications" that combine analytics with transactions for real-time commerce can drive immediate returns, he said.

Am I BLU and in-memory too?

IBM's DB2 BLU Acceleration software also got a notable in-memory refresher in 2013. Like Oracle, IBM has offered a variety of in-memory technologies across its middleware and data processing portfolios. Now, in-memory data handling is one of the many enhancements that BLU Acceleration brings to IBM's mainstay relational database.

"Anything that needs online analytical processing or [data] 'cubing' is a beneficiary of in-memory," said Nancy Kopp, director of database software and systems at IBM. "Reporting, data mining and data discovery all benefit."

What some observers describe as "real-time analytics" has been something of a holy grail, Kopp admitted, and it comes closer with the application of in-memory methods. Often, data applications have been limited by I/O latency, and that in turn may have limited what Kopp calls "the speed of thought" for human analysts.

In-memory has special value where "latency is critical and the number of users is really high," she said.

"People want to get answers as fast as they can ask the questions. With in-memory [technology], we can get more toward operational BI [business intelligence]."

Like others, she sees in-memory capabilities bringing a new blend of applications to the relational database. Eventually, there will be less of a line between the transactional world and the analytical world, she said.

Batch is out the window

People used to waiting for overnight batch jobs will quickly become accustomed to real-time execution as in-memory finds greater use in relational databases, according to Tiffany Wissner, director of product marketing for SQL Server at Microsoft. Moreover, she said, such capabilities prepare customers for a move to larger-scale, cloud-style processing.

She said Microsoft has included in-memory technology of sorts as part of the basic SQL Server database offering since 2008, when PowerPivot support allowed people to analyze billions of rows of Excel data in memory. "With SQL Server 2012, we expanded the footprint with an in-memory columnar store," she said.

This week, SQL Server 2014, which adds in-memory transaction-processing support, became generally available. Wissner emphasized that, as part of the core offering, SQL Server 2014 jobs can be optimized for online transaction processing (OLTP) with high numbers of read/write operations, or can run against a data warehouse-style column store that is fine-tuned for high query speed.
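The OLTP-versus-columnstore distinction Wissner describes can be shown with a toy sketch. Everything below -- the table, the `lookup` and `revenue` helpers -- is invented for illustration and says nothing about SQL Server's internals: a row store keeps each record together, which suits transactional point reads and writes, while a column store keeps each column contiguous, so an analytic aggregate touches only the columns it needs.

```python
# Toy illustration of row-store vs. column-store layouts (not SQL Server code).

rows = [
    {"id": 1, "product": "widget", "qty": 3, "price": 2.50},
    {"id": 2, "product": "gadget", "qty": 1, "price": 9.99},
    {"id": 3, "product": "widget", "qty": 5, "price": 2.50},
]

def lookup(row_store, row_id):
    # Row store: one record is stored together, so a point read
    # fetches the whole row at once -- good for OLTP workloads.
    return next(r for r in row_store if r["id"] == row_id)

# Column store: each column is stored contiguously.
columns = {key: [r[key] for r in rows] for key in rows[0]}

def revenue(col_store):
    # An aggregate scans only the two columns it needs,
    # never touching "id" or "product" -- good for analytics.
    return sum(q * p for q, p in zip(col_store["qty"], col_store["price"]))
```

In a real engine the columnar layout also compresses well and scans sequentially, which is where much of the reported speedup for warehouse-style queries comes from.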

Placing a bet on in-memory technology

In-memory adaptations to relational database performance can reduce stress on large-scale transactional systems, according to Wolfgang "Rick" Kutschera, manager for database engineering at an online gaming company in Vienna, Austria, whose team has gone into full production with Microsoft's latest SQL Server incarnation.

Kutschera's data group was a beta user of "Hekaton," the pre-release code name for SQL Server 2014's in-memory OLTP feature. For the company -- which offers online gaming for soccer and tennis, as well as poker and other casino games -- Microsoft's latest SQL Server version helped meet the need for transaction scalability and data consistency. The transition was fairly straightforward, he said.

For more on in-memory databases

See Oracle TimesTen stacked against Sybase ASE

View a video on the IBM BLU Acceleration portfolio

Dive deep into SQL Server 2014 in-memory OLTP

"We started on an application that had hit an actual performance limit -- it could not scale up or out in an easy way. With Hekaton, it took us a day or two to convert to the in-memory technology, and once we did, we could scale to a factor 20 times [faster] than what we had before," he said. "A lot of performance-critical [application] parts were converted." Now that it is established, people are finding more things to do with it.

Like other high-transaction websites, the company has looked at in-memory NoSQL alternatives to established relational systems, Kutschera said. But there is a difference between a tweet and a bet.

"The main problem is the websites that use NoSQL in most cases have no problem if they lose one record. If you lose, for example, one Twitter message, nobody cares, but," he continued, "if you lose a bet that might be a $20,000 or $30,000 return, it is a big deal."

The in-memory technology trend is like a catchy song heard everywhere of late. In-memory approaches are appearing in analytics engines of all kinds. Their appearance in relational databases may soon turn out to be one of the most influential of these uses.

Jack Vaughan is SearchDataManagement's news and site editor. Follow us on Twitter: @sDataManagement.


Join the conversation



Do you think in-memory improvements to top-line RDBMSs can forestall incursions by newer data technologies?
An RDBMS stores data in a collection of tables, which can be related by common fields (database table columns), and provides relational operators to manipulate the data stored in those tables. Most RDBMSs use SQL as the database query language.
I think NoSQL and the Big Data stuff (Splunk, Hadoop) solve different problems than old-school RDBMSs. Oracle, MSSQL, MySQL, and Postgres ain't going away anytime soon, though the smaller RDBMS players (Sybase) will continue to struggle to stay relevant.
As analyst James Governor points out, you can get NoSQL and in-memory advantages without replacing your SQL database by pairing it with an in-memory data grid.
Governor knows best!
Contrast these with database management systems that employ a disk storage mechanism: main-memory databases are faster than disk-optimized databases because the internal optimization algorithms are simpler and execute fewer CPU instructions.
It's about time. I worked with a medium-sized, one-state-only insurance company a few years ago, and I remember having to build materialized views to create two virtual "tables," then join them just to get the query to run. And that company only had 400,000 members with about 20 years of claims data!

With the constantly decreasing cost of memory, I wonder what in-memory would have done there. (Though some of that was probably CPU, I suspect the main problem was I/O.)