In-Memory Processing

From Wikipedia, the free encyclopedia

Definition

As businesses demand faster and easier access to information in order to make reliable decisions, in-memory processing has emerged as a technology gaining attention. It gives users immediate access to the right information, which results in more informed decisions. Traditional business intelligence (BI) technology loads data onto disk in the form of tables and multi-dimensional cubes against which queries are run. With in-memory processing, data is instead loaded into memory (random-access memory (RAM) or flash memory), so information technology (IT) staff spend less development time on data modeling, query analysis, cube building and table design.[1]

Traditional BI

Historically, every computer has had two types of data storage: disk (hard disk) and RAM (random-access memory). Modern computers have far more disk storage than RAM, but reading data from disk is much slower (possibly hundreds of times) than reading the same data from RAM. Performance suffers especially when analyzing large volumes of data. With traditional disk-based technology, a query retrieves information from multiple tables stored on a server's hard disk. Traditional disk-based technologies here means relational database management systems (RDBMS) such as SQL Server, MySQL, Oracle and many others. An RDBMS is designed with transactional processing in mind; a single database that handles inserts and updates well while also performing the aggregations and joins typical of BI solutions is difficult to achieve. Moreover, structured query language (SQL) is designed to fetch whole rows of data efficiently, while BI queries usually read only a few columns across many rows and involve heavy calculations.
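
The row-versus-column distinction can be sketched in a few lines of Python. The data and field names below are purely illustrative; the point is that a row-oriented layout touches every field of every row to answer an aggregate that needs only one column, while a column-oriented layout scans a single array.

```python
# Hypothetical order data, stored two ways.
row_store = [
    (1, "EU", "2011-01-03", 120.0),
    (2, "US", "2011-01-04", 75.5),
    (3, "EU", "2011-01-05", 33.0),
]
column_store = {
    "order_id": [1, 2, 3],
    "region": ["EU", "US", "EU"],
    "date": ["2011-01-03", "2011-01-04", "2011-01-05"],
    "amount": [120.0, 75.5, 33.0],
}

# Row store: each whole row is fetched just to read its "amount" field.
total_from_rows = sum(row[3] for row in row_store)

# Column store: only the "amount" column is read.
total_from_columns = sum(column_store["amount"])

print(total_from_rows, total_from_columns)  # 228.5 228.5
```

On disk the difference matters even more: the row layout forces the unneeded fields through the I/O path as well.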

Although SQL is a very powerful tool, complex queries took a very long time to execute against these systems and often slowed transactional processing. To improve query performance, multidimensional databases, or cubes, also called multidimensional online analytical processing (MOLAP), were developed. Designing a cube was an elaborate and lengthy process that took a significant amount of IT staff time, and changing a cube's structure to adapt to dynamically changing business needs was cumbersome. Cubes are pre-populated with data to answer specific queries; although this increases performance, cubes still fail to answer ad hoc queries.[2]
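
A toy sketch of the MOLAP trade-off, with made-up sales data: pre-aggregating along chosen dimensions makes the anticipated queries instant, but any dimension that was aggregated away can no longer be queried without rebuilding the cube.

```python
from collections import defaultdict

# Hypothetical fact rows: (year, region, product, amount).
sales = [
    ("2011", "EU", "widgets", 100.0),
    ("2011", "US", "widgets", 250.0),
    ("2012", "EU", "gadgets", 80.0),
]

# Pre-populate a "cube" of totals by (year, region).
cube = defaultdict(float)
for year, region, product, amount in sales:
    cube[(year, region)] += amount

# Queries the cube was designed for are simple lookups...
print(cube[("2011", "EU")])  # 100.0

# ...but an ad hoc question such as "total by product" cannot be
# answered from this cube: the product dimension was aggregated away,
# so the cube must be redesigned and rebuilt.
```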

Disadvantages of traditional BI

To avoid performance issues and provide faster query processing over large volumes of data, organizations needed database optimizations such as creating indexes, using specialized data structures and building aggregate tables. The point of having a data warehouse is to be able to get results for any query at any time; but to achieve better response times for users, many data marts are designed to pre-calculate summaries and answer specific queries, defeating that purpose. Optimized aggregation algorithms were needed to increase performance. Traditional BI tools could not keep up with ever-growing BI requirements and were unable to deliver real-time data to end users.[3]

How does in-memory processing work?

The arrival of column-centric databases, which store similar information together, allowed data to be stored more efficiently and with greater compression. This in turn allows large amounts of data to be stored in the same physical space, reducing the amount of memory needed to perform a query and increasing processing speed. With an in-memory database, all information is loaded into memory up front, eliminating the need for database optimizations such as creating indexes and aggregates or designing cubes and star schemas.
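
One reason columnar layouts compress well is that a column holds values of a single type, often with long runs of repeats once clustered. A minimal sketch, using simple run-length encoding on a hypothetical `region` column (real column stores use more sophisticated schemes such as dictionary and bit-packed encodings):

```python
def rle_encode(column):
    """Run-length encode a column: [value, repeat_count] pairs."""
    encoded = []
    for value in column:
        if encoded and encoded[-1][0] == value:
            encoded[-1][1] += 1  # extend the current run
        else:
            encoded.append([value, 1])  # start a new run
    return encoded

# Hypothetical column data; sorting clusters equal values into runs.
region_column = ["EU", "EU", "EU", "US", "US", "EU", "EU"]
compressed = rle_encode(sorted(region_column))
print(compressed)  # [['EU', 5], ['US', 2]]
```

Seven stored values shrink to two run entries; in a row-oriented layout the region values would be interleaved with unrelated fields and no such runs would exist.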

Most in-memory tools use compression algorithms that reduce in-memory data to a fraction of the size it would occupy on disk. Users query the data loaded into the system's memory, avoiding slower database access and performance bottlenecks. This differs from caching, a widely used method of speeding up query performance, in that caches hold subsets of very specific, pre-defined data. With in-memory tools, the data available for analysis can be as large as a data mart or a small data warehouse held entirely in memory, which can be accessed within seconds by multiple concurrent users at a detailed level and offers the potential for excellent analytics. In theory, data access from memory is 10,000 to 1,000,000 times faster than from disk. In-memory processing also minimizes the need for performance tuning by IT staff and provides faster service for end users.
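
The querying pattern can be illustrated with Python's built-in SQLite, used here only as a stand-in for a commercial in-memory engine: the table lives entirely in RAM (the `:memory:` database), and ad hoc aggregations run against it without any disk access. Table and column names are invented for the example.

```python
import sqlite3

# An in-memory database: nothing is written to or read from disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("EU", 120.0), ("US", 75.5), ("EU", 33.0)],
)

# An ad hoc aggregation, answered directly from RAM -- no pre-built
# cube or aggregate table is required.
totals = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(totals)  # [('EU', 153.0), ('US', 75.5)]
```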

Factors driving in-memory products

Cheaper and higher-performing hardware: According to Moore's law, computing power doubles every two to three years while costs decline. CPU processing, memory and disk storage are all subject to some variation of this law. Hardware innovations such as multi-core architecture, NAND flash memory, parallel servers and increased memory processing capability, together with software innovations such as column-centric databases, compression techniques and aggregate-table handling, have all contributed to the demand for in-memory products.[4]

64-bit operating systems: Although the idea of in-memory technology is not new, it has only recently become practical thanks to widely available and affordable 64-bit processors and declining memory prices. 64-bit operating systems allow access to far more RAM (100 GB or more) than the 2 or 4 GB accessible on 32-bit systems. By making terabytes (1 TB = 1,024 GB) of space available for storage and analysis, 64-bit operating systems make in-memory processing scalable, and the use of flash memory enables systems to scale to many terabytes more economically.
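
A back-of-envelope sizing calculation shows why 64-bit addressing matters. The row count and per-row size below are assumed figures, not measurements:

```python
# Hypothetical fact table: 500 million rows at 40 bytes per row
# (after columnar compression -- an assumed figure).
rows = 500_000_000
bytes_per_row = 40

required_gb = rows * bytes_per_row / 1024**3
print(round(required_gb, 1))  # 18.6

# ~18.6 GB: far beyond the 4 GB ceiling of a 32-bit address space,
# but comfortable in RAM on a commodity 64-bit server.
```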

Data volumes: As the data used by organizations grew, traditional data warehouses could no longer deliver timely, accurate, real-time data. The extract, transform, load (ETL) process that periodically updates data warehouses with operational data can take anywhere from a few hours to weeks to complete, so at any given point the data is at least a day old. In-memory processing makes it easy to have instant access to terabytes of data for real-time reporting.

Reduced costs: In-memory processing comes at a lower cost and can be deployed and maintained more easily than traditional BI tools. According to a Gartner survey, deploying traditional BI tools can take as long as 17 months, and many data warehouse vendors are choosing in-memory technology over traditional BI to speed up implementation times.

Advantages of in-memory BI

Several in-memory vendors provide the ability to connect to existing data sources and access visually rich interactive dashboards. This allows business analysts and end users to create custom reports and queries without much training or expertise. Easy navigation and the ability to modify queries on the fly appeal to many users. Because these dashboards can be populated with fresh data, users have access to real-time data and can create reports within minutes, a critical factor in any business intelligence application.

With in-memory processing, the source database is queried only once, instead of being accessed every time a query is run, eliminating repetitive processing and reducing the burden on database servers. By scheduling the in-memory database to be populated overnight, the database servers can be left free for operational purposes during peak hours.
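
The query-once pattern can be sketched with Python's built-in SQLite as a stand-in for the operational store and the in-memory copy; the function name is hypothetical. The refresh runs once per cycle (e.g. overnight), after which every user query is served from the RAM copy.

```python
import sqlite3

def refresh_in_memory_copy(source_path):
    """Bulk-copy the operational database into RAM in one pass."""
    memory_db = sqlite3.connect(":memory:")
    source_db = sqlite3.connect(source_path)
    source_db.backup(memory_db)  # the single query against the source
    source_db.close()
    return memory_db

# All subsequent user queries run against the returned in-memory
# connection; the source database is not touched again until the
# next scheduled refresh.
```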

In-memory processing can be a boon for operational workers, such as call center representatives or warehouse managers, who need instant and accurate data to make fast decisions.[5]

Disadvantages of in-memory BI

In any typical BI solution, a large number of users need access to the data. As the number of users and the data volumes increase, so does the amount of RAM required, which drives up hardware costs. Many users and software vendors have integrated flash memory into their systems to let them scale to larger datasets more economically. Oracle has been integrating flash memory into its Exadata products for increased performance, and Microsoft SQL Server 2012 BI/data warehousing software has been coupled with Violin Memory flash memory arrays to enable in-memory processing of datasets greater than 20 TB.[6]

Who is it for?

While in-memory processing has great potential for end users, it is not the answer for everyone. The important question organizations need to ask is whether slower query response times are preventing users from making important decisions. For a slow-moving business where conditions rarely change, an in-memory solution is not effective; organizations with significant growth in data volume and rising demand for reporting functionality that opens new opportunities are the right scenario for deploying in-memory BI.

Security needs to be the first and foremost concern when deploying in-memory tools, as they expose huge amounts of data to end users. Care should be taken over who has access to the data and how and where it is stored. End users download huge amounts of data onto their desktops, and there is a danger of the data being compromised, lost or stolen. Measures should be taken to provide access to the data only to authorized users.[7]

References

  1. ^ Earls, A. (2011). "Tips on evaluating, deploying and managing in-memory analytics tools". Tableau.
  2. ^ Gill, John (Second Quarter 2007). "Shifting the BI Paradigm with In-Memory Database Technologies". Business Intelligence Journal 12 (2): 58–62.
  3. ^ "In-memory Analytics". Yellowfin. p. 6.
  4. ^ Kote, Sparjan. "In-memory computing in Business Intelligence".
  5. ^ "In-memory Analytics". Yellowfin. p. 9.
  6. ^ "SQL Server 2012 with Violin Memory". Microsoft.
  7. ^ "In-memory Analytics". Yellowfin. p. 12.