IBM BLU Acceleration

From Wikipedia, the free encyclopedia

IBM BLU Acceleration is a collection of technologies from the IBM Research and Development Labs for analytical database workloads. BLU Acceleration integrates a number of different technologies, including in-memory processing of columnar data, Actionable Compression (which uses approximate Huffman encoding to compress and pack data tightly), CPU Acceleration (which exploits SIMD technology and provides parallel vector processing), and Data Skipping (which allows data that is of no use to the current active workload to be ignored).[1] The term "BLU" does not stand for anything in particular, though it is an indirect play on IBM's traditional corporate nickname, Big Blue. (Ten IBM Research and Development facilities around the world filed more than 25 patents while working on the Blink Ultra project, which resulted in BLU Acceleration.)[2] BLU Acceleration does not require indexes, aggregates, or tuning. BLU Acceleration is integrated in Version 10.5 of IBM DB2 for Linux, UNIX and Windows (DB2 for LUW)[3] and uses the same storage and memory constructs (i.e., storage groups, table spaces, and buffer pools), SQL language interfaces, and administration tools as traditional DB2 for LUW databases.[4] BLU Acceleration is available on both IBM POWER and x86 processor architectures.[5]

History

BLU Acceleration is the second generation of the technology that originated in the Blink project, which was started at the IBM Almaden Research Center in 2006. Aimed primarily at "read-mostly" business intelligence (BI) query processing, Blink combined the scale-out of multi-core processors with dynamic random-access memory (DRAM) to store a copy of a data mart completely in memory. It also used proprietary compression techniques and algorithms that allowed most SQL queries to be performed directly against compressed data (as opposed to requiring data to be decompressed before processing could take place).[6] Eventually, Blink was incorporated into two IBM products: the IBM Smart Analytics Optimizer for DB2 for z/OS (the mainframe version of DB2), which was released in November 2010, and the Informix Warehouse accelerator, which was released in March 2011.

BLU Acceleration was perfected and integrated with DB2 through a collaboration between DB2 product development, the IBM Systems Optimization Competency Center, and IBM Research; this collaboration resulted in the addition of columnar processing, broader SQL support, I/O and CPU efficiencies, and integration with the DB2 SQL compiler, query optimizer, and storage layer.[7]

Technical Information

BLU Acceleration's design incorporates four main advances:

  1. In-memory performance not limited to data that fits into RAM
  2. Actionable Compression
  3. Data Skipping
  4. CPU Acceleration

In-memory performance not limited to data that fits into RAM

BLU Acceleration has been optimized for accessing data from RAM. However, even if the data grows to the point that it no longer fits in RAM, intermediate results may spill to disk.[1]

Actionable Compression

Order-preserving, frequency-based compression (referred to as actionable compression) in BLU Acceleration allows a wide variety of comparative operations to be performed without decompression, and with efficient use of CPU memory (cache) and registers. With actionable compression, values that appear more frequently are compressed at a higher level than values that appear less often. (Actionable compression uses, as its base, an entropy encoding algorithm for lossless data compression that was developed by David A. Huffman while he was a Ph.D. student at MIT.)[4][8] Offset coding is another compression optimization technique that is used in BLU Acceleration. Offset coding is very useful with numeric data; instead of trying to compress the values 100, 101, 102, and 103, for example, DB2 will store a single value (100) and just the offsets to that value (1, 2, 3, etc.). This is very similar to the way in which DB2 compresses index record IDs (RIDs), one of three autonomic index compression algorithms that DB2 can dynamically apply to indexes.[4]

With BLU Acceleration, values are compressed such that their order is preserved, which means they can be compared to each other while they are compressed. This allows the most common comparisons in SQL predicates to be performed on encoded values without needing to decompress the data, thereby accelerating evaluations, reducing memory requirements, and lowering processing needs for queries at runtime.[1][5]
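The order-preserving property can be pictured with a toy dictionary code, in which codes are assigned in sort order so that comparing codes mirrors comparing the original values. This is purely illustrative; DB2's actual encoding is frequency-based and considerably more elaborate:

```python
# Toy order-preserving dictionary code: assigning codes in sort order
# means range predicates can be evaluated on the codes directly.

values = ["Berlin", "Chicago", "Madrid", "Toronto"]
code = {v: i for i, v in enumerate(sorted(values))}

# A predicate such as city < 'Madrid' is evaluated on encoded data:
threshold = code["Madrid"]
encoded_column = [code[v] for v in ["Toronto", "Berlin", "Chicago"]]
matches = [c < threshold for c in encoded_column]
# matches mirrors the result of comparing the uncompressed strings.
```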

Once encoded, data is packed as tightly as possible into a collection of bits that matches the register width of the server's CPU. This results in fewer I/Os (because the data is smaller), better memory utilization (because more data can be stored in memory), and fewer CPU cycles (because the data is "register aligned").[4]
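Register alignment can be pictured with a small bit-packing sketch. The 16-bit code width and 64-bit word size below are assumptions chosen for clarity, not DB2's actual layout:

```python
# Minimal bit-packing sketch: several small encoded values share one
# 64-bit word, so a single register holds several codes at once.

BITS = 16                 # assumed code width
PER_WORD = 64 // BITS     # 4 codes fit in one 64-bit word

def pack(codes):
    """Pack up to PER_WORD 16-bit codes into one 64-bit integer."""
    word = 0
    for i, c in enumerate(codes[:PER_WORD]):
        word |= (c & 0xFFFF) << (i * BITS)
    return word

def unpack(word):
    """Extract the PER_WORD 16-bit codes from a packed word."""
    return [(word >> (i * BITS)) & 0xFFFF for i in range(PER_WORD)]

w = pack([7, 42, 1, 9])
```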

Data Skipping

Data skipping enables DB2 to detect ranges of column values that are not needed to satisfy a query and avoid reading pages containing those values from disk. Data skipping utilizes a secondary object called a synopsis table, which is a tiny, column-organized table that is created and maintained automatically.[4] In this table, BLU Acceleration keeps metadata that describes the minimum and maximum range of data values on "chunks" of data (about 1,000 records). This metadata is automatically maintained during insert, update, and delete operations, and it is what allows DB2 with BLU Acceleration to automatically detect large sections of data that are not needed during query processing and to effectively ignore them.[4]

Conceptually, BLU Acceleration's data skipping is similar to the Zone Map technology found in the PureData System for Analytics family. However, unlike Zone Maps, the metadata stored in the synopsis table isn't tied to any particular page or extent boundary; instead, it's tied to a specific "chunk" of data records. Data skipping can deliver an order of magnitude in savings across compute resources (CPU, RAM, and I/O).[4]
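The mechanics of a synopsis table can be sketched as follows. The chunk size matches the rough figure given above, but the structure and function names are illustrative only:

```python
# Simplified data-skipping sketch: per-chunk (min, max) metadata lets a
# scan ignore chunks that cannot possibly satisfy a range predicate.

CHUNK = 1000  # approximate chunk granularity described in the article

def build_synopsis(column):
    """One (min, max) entry per chunk of the column."""
    return [(min(column[i:i + CHUNK]), max(column[i:i + CHUNK]))
            for i in range(0, len(column), CHUNK)]

def chunks_to_scan(synopsis, lo, hi):
    """Indices of chunks whose [min, max] range overlaps [lo, hi]."""
    return [i for i, (mn, mx) in enumerate(synopsis)
            if mx >= lo and mn <= hi]

column = list(range(10_000))          # 10 chunks of 1,000 values
synopsis = build_synopsis(column)
# A predicate like BETWEEN 2500 AND 2600 touches only one chunk;
# the other nine are skipped without being read at all.
```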

CPU Acceleration

BLU Acceleration takes advantage of single instruction, multiple data (SIMD) processing, if it is available on the hardware being used. By exploiting SIMD instructions, which are low-level, CPU-specific instructions, BLU Acceleration can perform the same operation on multiple data elements simultaneously.[4] Consequently, DB2 with BLU Acceleration can use a single SIMD instruction to get results from multiple data elements (for example, to perform equality predicate processing), provided they are in the same register. DB2 can also put 128 bits into a SIMD register and evaluate that data with a single instruction.[4]

The level of performance achieved will ultimately be determined by the hardware resources that BLU Acceleration has to work with.[4] That said, even if a server isn't SIMD enabled, BLU Acceleration can emulate SIMD hardware in software (using bit masking to achieve some parallelism) to deliver some of the benefits that SIMD has to offer.[4]
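The bit-masking style of emulation mentioned above is often called SWAR ("SIMD within a register"). The toy sketch below shows the general idea, one 64-bit integer operation acting on eight packed 8-bit lanes at once; it uses the classic zero-byte test and is illustrative only, not DB2's implementation:

```python
# Toy SWAR sketch: equality predicate over eight byte lanes using only
# plain 64-bit integer arithmetic and bit masks.

LANES = 8
M64 = (1 << 64) - 1
LO = 0x0101010101010101   # low bit of every byte
HI = 0x8080808080808080   # high bit of every byte

def pack_bytes(vals):
    """Pack eight 8-bit values into one 64-bit word."""
    word = 0
    for i, v in enumerate(vals):
        word |= (v & 0xFF) << (8 * i)
    return word

def equal_lanes(word, key):
    """One boolean per byte lane that equals `key`, via the classic
    zero-byte test. (That test can mis-flag a 0x01 byte directly above
    a zero byte; the edge case is ignored in this sketch.)"""
    x = word ^ pack_bytes([key] * LANES)   # zero byte where lanes match
    zero = ((x - LO) & M64) & ~x & HI      # high bit set on zero bytes
    return [bool((zero >> (8 * i + 7)) & 1) for i in range(LANES)]

w = pack_bytes([5, 9, 5, 0, 5, 7, 5, 1])
# equal_lanes(w, 5) flags the four lanes holding the value 5.
```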

In addition, BLU Acceleration is engineered so that the majority of memory accesses occur in the CPU cache, rather than by repeatedly fetching data from RAM.[4] By operating almost exclusively on data in the CPU cache, BLU Acceleration minimizes latency and is able to keep the CPUs busy.[4]

Designed to process data that is substantially larger than memory at in-memory speeds, BLU Acceleration prefetches and streams data into the processing engine, advancing beyond system memory to in-CPU memory optimization.[5] It uses a specialized in-memory-optimized columnar prefetching algorithm to determine, a few milliseconds in advance, what data should be loaded into RAM; every algorithm has been designed to minimize access to RAM and maximize processing time in the L2 and L3 caches, which are an order of magnitude faster than RAM.[9][10]

References

  1. ^ Raman, Attaluri, Barber, Chainani, et al. (August 2013). "DB2 with BLU Acceleration: So Much More than Just a Column Store". Proceedings of the VLDB Endowment, Volume 6, Issue 11, pp. 1080–1091. Retrieved February 1, 2014.
  2. ^ "IBM BLU Acceleration speeds analytics with dynamic in-memory computing". Retrieved February 1, 2014.
  3. ^ DB2 for Linux, UNIX and Windows.
  4. ^ Zikopoulos, Lightstone, Huras, Sachedina, Baklarz. DB2 10.5 with BLU Acceleration: New Dynamic In-Memory Analytics for the Era of Big Data. McGraw-Hill Education. ISBN 9780071823494. Retrieved February 1, 2014.
  5. ^ "BLU Acceleration Changes the Game". IBM Software White Paper (July 2013). Retrieved February 1, 2014.
  6. ^ Lightstone, Lohman, Schiefer (April 2013). "Super Analytics, Super Easy. Introducing IBM DB2 10.5 with BLU Acceleration". IBM Data Magazine. Retrieved February 1, 2014.
  7. ^ Zikopoulos, Vincent (August 2013). "Ask the Experts: DB2 10.5 with BLU Acceleration". Retrieved February 1, 2014.
  8. ^ Huffman, David A. (September 1952). "A Method for the Construction of Minimum-Redundancy Codes". Retrieved February 6, 2014.
  9. ^ Lightstone, Sam. "When RAM Is Too Slow: How Dynamic In-Memory Processing Changes the Game for Analytics". SoftwareTradecraft.com. Retrieved February 1, 2014.
  10. ^ Howard, Philip (December 2013). "In-memory? That's so yesterday!". IT-Analysis.com. Retrieved February 1, 2014.
