|Initial release||13 March 2013|
|Stable release||2.6.0 / 27 September 2018|
|Written in||Java (reference implementation)|
|License||Apache License 2.0|
Apache Parquet is a free and open-source column-oriented data storage format in the Apache Hadoop ecosystem. It is similar to the other columnar-storage file formats available in Hadoop, namely RCFile and ORC. It is compatible with most of the data-processing frameworks in the Hadoop environment, and it provides efficient data compression and encoding schemes with enhanced performance for handling complex data in bulk.
The open-source project to build Apache Parquet began as a joint effort between Twitter and Cloudera. Parquet was designed as an improvement on the Trevni columnar storage format created by Hadoop creator Doug Cutting. The first version, Apache Parquet 1.0, was released in July 2013. Since April 27, 2015, Apache Parquet has been a top-level Apache Software Foundation (ASF) project.
Apache Parquet is implemented using the record-shredding and assembly algorithm, which accommodates the complex data structures that can be used to store the data. The values in each column are stored contiguously, and this columnar storage provides the following benefits:
- Column-wise compression is efficient and saves storage space
- Compression techniques specific to a type can be applied as the column values tend to be of the same type
- Queries that fetch specific columns need not read entire rows, improving performance
- Different encoding techniques can be applied to different columns
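The benefits above can be illustrated with a minimal Python sketch (not Parquet's actual on-disk format; the records and column names are invented for the example): the same data laid out row by row versus column by column, where a query touching a single column reads one contiguous list instead of every record.

```python
# Invented sample records for illustration only.
rows = [
    {"id": 1, "name": "a", "score": 10},
    {"id": 2, "name": "b", "score": 20},
    {"id": 3, "name": "c", "score": 30},
]

# Row-oriented storage keeps each record together.
row_store = [tuple(r.values()) for r in rows]

# Column-oriented storage keeps each column's values contiguous, so values of
# the same type sit together and can be compressed with a type-specific scheme.
column_store = {key: [r[key] for r in rows] for key in rows[0]}

# A query that needs only "score" touches a single list, not every record.
scores = column_store["score"]
```

In a row store, the same query would have to scan every record just to extract one field from each.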
Compression and encoding
In Parquet, compression is performed column by column, which enables different encoding schemes to be used for text and integer data. This strategy also keeps the door open for newer and better encoding schemes to be implemented as they are invented.
Parquet enables dictionary encoding automatically and dynamically for columns with a small number of distinct values (fewer than 10^5), which yields significant compression and boosts processing speed.
Bit packing
Storage of integers is usually done with a dedicated 32 or 64 bits per integer. For small integers, packing multiple integers into the same space makes storage more efficient.
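A minimal bit-packing sketch in Python (illustrative only; Parquet's actual byte layout differs): integers known to fit in a given bit width are packed back to back into bytes, so four 3-bit values occupy two bytes instead of sixteen.

```python
def bit_pack(values, width):
    """Pack non-negative integers, `width` bits each, into bytes."""
    buffer, bits, out = 0, 0, bytearray()
    for v in values:
        buffer |= v << bits       # append the value's bits to the buffer
        bits += width
        while bits >= 8:          # flush full bytes as they accumulate
            out.append(buffer & 0xFF)
            buffer >>= 8
            bits -= 8
    if bits:                      # flush any trailing partial byte
        out.append(buffer & 0xFF)
    return bytes(out)

def bit_unpack(data, width, count):
    """Inverse of bit_pack: recover `count` integers of `width` bits each."""
    buffer, bits, values = 0, 0, []
    it = iter(data)
    for _ in range(count):
        while bits < width:       # refill the buffer a byte at a time
            buffer |= next(it) << bits
            bits += 8
        values.append(buffer & ((1 << width) - 1))
        buffer >>= width
        bits -= width
    return values
```

With `width=3`, the list `[1, 2, 3, 4]` packs into 12 bits (two bytes) rather than four 32-bit words.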
Run-length encoding (RLE)
To optimize storage of multiple occurrences of the same value, a single value is stored once along with the number of occurrences.
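This scheme can be sketched in Python (a simplified illustration of the general technique, not Parquet's wire format): each run of repeated values is collapsed into a (value, count) pair.

```python
def rle_encode(values):
    """Collapse each run of repeats into a (value, run_length) pair."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the original sequence."""
    return [v for v, n in runs for _ in range(n)]
```

Long runs of a single value, which are common in sorted or low-cardinality columns, shrink to a single pair.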
Parquet implements a hybrid of bit packing and RLE, in which the encoding switches based on which produces the best compression results. This strategy works well for certain types of integer data and combines well with dictionary encoding.
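The switching idea can be sketched in Python. This is a loose illustration of the run/literal split, not Parquet's actual hybrid encoder (the real format bit-packs literal groups and uses a length-prefixed byte layout); the `min_run` threshold here is an invented parameter for the example.

```python
def hybrid_encode(values, min_run=8):
    """Emit RLE runs for long repeats and literal groups otherwise."""
    out, i = [], 0
    while i < len(values):
        j = i
        while j < len(values) and values[j] == values[i]:
            j += 1                    # measure the run starting at i
        if j - i >= min_run:
            out.append(("run", values[i], j - i))
        elif out and out[-1][0] == "literal":
            out[-1][1].extend(values[i:j])   # grow the open literal group
        else:
            out.append(("literal", list(values[i:j])))
        i = j
    return out

def hybrid_decode(encoded):
    """Expand runs and literal groups back into the original sequence."""
    values = []
    for item in encoded:
        if item[0] == "run":
            _, v, n = item
            values.extend([v] * n)
        else:
            values.extend(item[1])
    return values
```

Long stretches of one value become cheap runs, while noisy stretches fall back to literals, which is why the hybrid suits dictionary indices and repetition/definition levels.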
Comparison
Apache Parquet is comparable to the RCFile and Optimized Row Columnar (ORC) file formats; all three fall under the category of columnar data storage within the Hadoop ecosystem. All offer better compression and encoding with improved read performance, at the cost of slower writes. In addition to these features, Apache Parquet supports limited schema evolution: the schema can be modified in response to changes in the data. It also provides the ability to add new columns and to merge schemas that do not conflict.
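The merge rule can be sketched in Python. This is a simplified model, not Parquet's schema machinery: schemas are represented here as plain name-to-type mappings, and the type names are invented for the example.

```python
def merge_schemas(a, b):
    """Union two {column_name: type_name} schemas.

    A column present in both must agree on its type; otherwise the
    schemas conflict and cannot be merged.
    """
    merged = dict(a)
    for name, typ in b.items():
        if name in merged and merged[name] != typ:
            raise ValueError(f"conflicting types for column {name!r}")
        merged[name] = typ
    return merged

old = {"id": "int64", "name": "binary"}
new = {"id": "int64", "name": "binary", "email": "binary"}  # column added later
merged = merge_schemas(old, new)
```

Adding a column is always safe under this rule, while redefining an existing column to a different type is rejected, mirroring the "non-conflicting" restriction described above.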
See also
- Pig (programming tool)
- Apache Hive
- Apache Impala
- Apache Drill
- Apache Kudu
- Apache Spark
- Apache Thrift