Apache Kafka

From Wikipedia, the free encyclopedia

Apache Kafka[1]
Developer(s): Apache Software Foundation
Initial release: January 2011 (2011-01)[2]
Stable release: 0.10.1 / October 2016 (2016-10)
Repository
Written in: Scala, Java
Operating system: Cross-platform
Type: Stream processing, Message broker
License: Apache License 2.0
Website: kafka.apache.org

Apache Kafka is an open-source stream processing platform developed by the Apache Software Foundation and written in Scala and Java. The project aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds. Its storage layer is essentially a "massively scalable pub/sub message queue architected as a distributed transaction log,"[3] making it highly valuable for enterprise infrastructures that process streaming data. Additionally, Kafka connects to external systems (for data import/export) via Kafka Connect and provides Kafka Streams, a Java stream processing library.

The design is heavily influenced by transaction logs.[4]
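As an illustration of the publish/subscribe model described above, the following is a minimal sketch of a Java producer that appends records to a Kafka topic using the standard producer client. The broker address localhost:9092 and the topic name "events" are placeholder assumptions for illustration, not values taken from this article.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class SimpleProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Address of one broker in the cluster (assumed default listener).
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");

            try (Producer<String, String> producer = new KafkaProducer<>(props)) {
                // Appends a key/value record to the "events" topic; the broker
                // persists it in its partitioned commit log, from which any number
                // of subscribed consumers can read it.
                producer.send(new ProducerRecord<>("events", "key-1", "hello, kafka"));
            }
        }
    }

In keeping with the transaction-log design, the record is not delivered to consumers directly; it is durably appended to the topic's log and read by consumers at their own pace.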

History

Apache Kafka was originally developed by LinkedIn, and was subsequently open sourced in early 2011. Graduation from the Apache Incubator occurred on 23 October 2012. In November 2014, several engineers who worked on Kafka at LinkedIn created a new company named Confluent[5] with a focus on Kafka.

Enterprises that use Kafka

The following is a list of notable enterprises that have used or are using Kafka:

Kafka performance

Due to its widespread integration into enterprise-level infrastructures, monitoring Kafka performance at scale has become an increasingly important issue. Monitoring end-to-end performance requires tracking metrics from brokers, consumers, and producers, in addition to monitoring ZooKeeper, which Kafka uses for coordination among consumers.[17][18] There are currently several monitoring platforms to track Kafka performance, both open-source, such as LinkedIn's Burrow, and paid, such as Datadog. In addition to these platforms, Kafka data can also be collected using tools commonly bundled with Java, including JConsole.[19]
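Tools such as JConsole work because Kafka brokers expose their metrics as JMX MBeans. The following is a minimal sketch of reading one broker metric programmatically over remote JMX; it assumes a broker started with remote JMX enabled on port 9999 (for example via the JMX_PORT environment variable), and the choice of the MessagesInPerSec meter and its OneMinuteRate attribute is an illustrative assumption rather than a recommendation from this article.

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class KafkaJmxSample {
        public static void main(String[] args) throws Exception {
            // Connect to a broker's JMX endpoint (assumed to be localhost:9999).
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbsc = connector.getMBeanServerConnection();
                // Broker-wide incoming message rate, exposed as a meter MBean.
                ObjectName messagesIn = new ObjectName(
                        "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec");
                Object oneMinuteRate = mbsc.getAttribute(messagesIn, "OneMinuteRate");
                System.out.println("MessagesInPerSec (1-min rate): " + oneMinuteRate);
            }
        }
    }

Monitoring platforms and agents typically poll such MBeans on brokers, producers, and consumers and forward the values to a dashboard, alongside ZooKeeper metrics, to give the end-to-end view described above.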

See also

References

  1. ^ Repository Mirror at GitHub
  2. ^ "Open-sourcing Kafka, LinkedIn's distributed message queue". Retrieved 27 October 2016.
  3. ^ Monitoring Kafka performance metrics, Datadog Engineering Blog, accessed 23 May 2016
  4. ^ The Log: What every software engineer should know about real-time data's unifying abstraction, LinkedIn Engineering Blog, accessed 5 May 2014
  5. ^ Primack, Dan. "LinkedIn engineers spin out to launch 'Kafka' startup Confluent". fortune.com. Retrieved 10 February 2015.
  6. ^ "OpenSOC: An Open Commitment to Security". Cisco blog. Retrieved 2016-02-03.
  7. ^ Doyung Yoon. "S2Graph : A Large-Scale Graph Database with HBase".
  8. ^ Cheolsoo Park and Ashwin Shankar. "Netflix: Integrating Spark at Petabyte Scale".
  9. ^ Shibi Sudhakaran of PayPal. "PayPal: Creating a Central Data Backbone: Couchbase Server to Kafka to Hadoop and Back (talk at Couchbase Connect 2015)". Couchbase. Retrieved 2016-02-03.
  10. ^ Josh Baer. "How Apache Drives Spotify's Music Recommendations".
  11. ^ "Stream Processing in Uber". InfoQ. Retrieved 2015-12-06.
  12. ^ "Shopify - Sarama is a Go library for Apache Kafka".
  13. ^ "Exchange Market Data Streaming with Kafka".
  14. ^ "Concurrency and At Least Once Semantics with the New Kafka Consumer".
  15. ^ "Kafka at HubSpot: Critical Consumer Metrics".
  16. ^ "More data, more data".
  17. ^ "Monitoring Kafka performance metrics". 2016-04-06. Retrieved 2016-10-05.
  18. ^ "Monitoring Kafka performance metrics". 2016-04-06. Retrieved 2016-10-05.
  19. ^ "Collecting Kafka performance metrics - Datadog". 2016-04-06. Retrieved 2016-10-05.