Apache Arrow

From Wikipedia, the free encyclopedia

Apache Arrow
Developer(s): Apache Software Foundation
Initial release: October 10, 2016
Stable release: v0.8.0[1] / December 17, 2017
Repository: https://github.com/apache/arrow
Written in: C++, Java, Python
Type: Data analytics, machine learning algorithms
License: Apache License 2.0
Website: arrow.apache.org

Apache Arrow is open source software for columnar in-memory data structures and processing.[2][3][4]

Arrow is sponsored by the nonprofit Apache Software Foundation[5] and was announced by Cloudera in 2016.[6] Arrow is a component rather than a standalone piece of software, and as such it is used by many popular projects, including Apache Spark and pandas.[7]

It defines a language-independent physical memory layout that enables zero-copy, zero-deserialization interchange of flat and nested columnar data among a variety of systems, such as Python, R, Apache Spark, ODBC protocols, and proprietary systems that use the open source components.[8][9] Apache Arrow complements on-disk columnar data formats such as Apache Parquet and Apache ORC by organizing data for efficient in-memory processing by CPUs and GPUs.
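As a minimal sketch of this interchange, the example below uses the pyarrow Python bindings to convert a pandas DataFrame to an Arrow table and back; the column names and values are made up for illustration, and other Arrow-aware systems can consume the same columnar representation.

  import pandas as pd
  import pyarrow as pa

  # A small pandas DataFrame (example data only).
  df = pd.DataFrame({"city": ["Oslo", "Lima"], "population": [709000, 9752000]})

  # Convert it to an Arrow Table, Arrow's columnar in-memory structure.
  table = pa.Table.from_pandas(df)
  print(table.schema)    # language-independent columnar schema
  print(table.num_rows)  # 2

  # Convert back to pandas; Arrow's columnar buffers serve as the
  # common in-memory representation shared across systems.
  df_roundtrip = table.to_pandas()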

Comparisons

Apache Parquet and Apache ORC are popular examples of on-disk columnar data formats. Arrow is designed as a complement to these formats for processing data in memory.[10] The hardware resource engineering trade-offs for in-memory processing differ from those associated with on-disk storage.[11] The Arrow and Parquet projects include libraries that allow reading and writing data between the two formats.[12]
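As a brief sketch of that interoperability, the example below uses the pyarrow library's Parquet module to write an in-memory Arrow table to the on-disk Parquet format and read it back; the file name example.parquet and the column data are only illustrative.

  import pyarrow as pa
  import pyarrow.parquet as pq

  # An in-memory Arrow table (example data only).
  table = pa.table({"id": [1, 2, 3], "score": [0.5, 0.9, 0.7]})

  # Write the in-memory columnar data to the on-disk Parquet format ...
  pq.write_table(table, "example.parquet")

  # ... and read it back into Arrow's in-memory representation.
  table_from_disk = pq.read_table("example.parquet")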

References

  1. ^ "Github releases".
  2. ^ "Apache Arrow aims to speed access to big data: Apache's new project leverages columnar storage to speed data access not only for Hadoop but potentially for every language and project with big data needs".
  3. ^ "The first release of Apache Arrow".
  4. ^ "Julien Le Dem on the Future of Column-Oriented Data Processing with Apache Arrow".
  5. ^ "Apache Foundation rushes out Apache Arrow as top-level project".
  6. ^ "Introducing Apache Arrow".
  7. ^ "Apache Arrow unifies in-memory Big Data systems: Leaders from 13 existing open source projects band together to solve a common problem: how to represent Big Data in memory for maximum performance and interoperability".
  8. ^ "Big data gets a new open-source project, Apache Arrow: It offers performance improvements of more than 100x on analytical workloads, the foundation says".
  9. ^ "Apache Foundation rushes out Arrow as 'Top-Level Project'".
  10. ^ "Apache Arrow and Apache Parquet: Why We Needed Different Projects for Columnar Data, On Disk and In-Memory".
  11. ^ "Apache Arrow vs. Parquet and ORC: Do we really need a third Apache project for columnar data representation?".
  12. ^ "PyArrow:Reading and Writing the Apache Parquet Format".