HyperLogLog is an algorithm for the count-distinct problem, approximating the number of distinct elements in a multiset. Calculating the exact cardinality of a multiset requires an amount of memory proportional to the cardinality, which is impractical for very large data sets. Probabilistic cardinality estimators, such as the HyperLogLog algorithm, use significantly less memory than this, at the cost of obtaining only an approximation of the cardinality. The HyperLogLog algorithm is able to estimate cardinalities of > 10^9 with a typical accuracy of 2%, using 1.5 kB of memory. HyperLogLog is an extension of the earlier LogLog algorithm, itself deriving from the 1984 Flajolet–Martin algorithm.
In the original paper by Flajolet et al. and in related literature on the count-distinct problem, the term "cardinality" is used to mean the number of distinct elements in a data stream with repeated elements. However, in the theory of multisets the term refers to the sum of the multiplicities of each member of a multiset. This article uses Flajolet's definition for consistency with the sources.
The basis of the HyperLogLog algorithm is the observation that the cardinality of a multiset of uniformly distributed random numbers can be estimated by calculating the maximum number of leading zeros in the binary representation of each number in the set. If the maximum number of leading zeros observed is n, an estimate for the number of distinct elements in the set is 2n.
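This core observation can be sketched in a few lines of Python. The sketch below is illustrative only, not the full algorithm; the names `leading_zeros` and `crude_estimate` and the 32-bit width are choices made here for the example.

```python
import random

W = 32  # bit width of the uniformly distributed random numbers (arbitrary choice)

def leading_zeros(x: int, width: int = W) -> int:
    """Count the leading zero bits in the width-bit representation of x."""
    return width - x.bit_length()

def crude_estimate(values) -> int:
    """2 raised to the maximum leading-zero count: a rough cardinality estimate."""
    return 2 ** max(leading_zeros(v) for v in values)

random.seed(42)
sample = [random.getrandbits(W) for _ in range(10_000)]
est = crude_estimate(sample)
print(est)  # a power of two in the rough vicinity of 10,000
```

Because the estimate is always a power of two and depends on a single maximum, its variance is large, which motivates the register-splitting described below.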
In the HyperLogLog algorithm, a hash function is applied to each element in the original multiset to obtain a multiset of uniformly distributed random numbers with the same cardinality as the original multiset. The cardinality of this randomly distributed set can then be estimated using the algorithm above.
The simple estimate of cardinality obtained using the algorithm above has the disadvantage of a large variance. In the HyperLogLog algorithm, the variance is minimised by splitting the multiset into numerous subsets, calculating the maximum number of leading zeros in the numbers in each of these subsets, and using a harmonic mean to combine these estimates for each subset into an estimate of the cardinality of the whole set.
The HyperLogLog has three main operations: add to add a new element to the set, count to obtain the cardinality of the set and merge to obtain the union of two sets. Some derived operations can be computed using the Inclusion–exclusion principle like the cardinality of the intersection or the cardinality of the difference between two HyperLogLogs combining the merge and count operations.
The data of the HyperLogLog is stored in an array M of counters called registers with size m that are set to 0 in their initial state.
The add operation consists of computing the hash of the input data v with a hash function h, getting the first b bits (where b = log₂(m)), and adding 1 to them to obtain the address of the register to modify. With the remaining bits compute ρ(w), which returns the position of the leftmost 1-bit. The new value of the register will be the maximum between the current value of the register and ρ(w):

x = h(v)
j = 1 + ⟨x₁ x₂ … x_b⟩₂   (the first b bits of x, read as a binary integer)
w = x_{b+1} x_{b+2} …    (the remaining bits of x)
M[j] = max(M[j], ρ(w))
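A minimal sketch of the add operation, under stated assumptions: m = 64 registers (so b = 6 address bits), and the low 32 bits of a SHA-256 digest standing in for the hash function h (the original paper uses a 32-bit hash; SHA-256 here is an illustrative choice). The code uses 0-based register indices rather than the paper's 1-based ones.

```python
import hashlib

M = 64           # m, the number of registers (a power of two)
B = 6            # b = log2(m) bits used for the register address
HASH_BITS = 32   # bits of the hash value used

def rho(w: int, width: int) -> int:
    """1-based position of the leftmost 1-bit in the width-bit word w."""
    return width - w.bit_length() + 1

def add(registers: list, v: bytes) -> None:
    x = int.from_bytes(hashlib.sha256(v).digest()[:4], "big")  # h(v), 32 bits
    j = x >> (HASH_BITS - B)              # first b bits -> register index (0-based)
    w = x & ((1 << (HASH_BITS - B)) - 1)  # the remaining 26 bits
    registers[j] = max(registers[j], rho(w, HASH_BITS - B))

registers = [0] * M
for item in [b"alice", b"bob", b"carol", b"alice"]:
    add(registers, item)
```

Note that adding a duplicate element hashes to the same register and the same ρ(w), so the `max` leaves the register unchanged; this is what makes the sketch insensitive to repetitions.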
The count algorithm consists in computing the harmonic mean of the m registers, and using a constant to derive an estimate E of the count:

Z = ( Σ_{j=1}^{m} 2^(−M[j]) )^(−1)

α_m = ( m ∫₀^∞ ( log₂( (2 + u) / (1 + u) ) )^m du )^(−1)

E = α_m · m² · Z

The intuition is that, n being the unknown cardinality of M, each subset M_j will have n/m elements. Then max_{x∈M_j} ρ(x) should be close to log₂(n/m). The harmonic mean of 2 raised to these quantities is mZ, which should be near n/m. Thus, m²Z should be approximately n. Finally, the constant α_m is introduced to correct a systematic multiplicative bias present in m²Z due to hash collisions.
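A sketch of the count estimator in Python. The closed form used for α_m below is the standard approximation valid for m ≥ 128; the original paper uses tabulated constants for smaller m (e.g. α_64 = 0.709).

```python
def count(registers: list) -> float:
    """Estimate cardinality as alpha_m * m^2 * Z (harmonic-mean estimator)."""
    m = len(registers)
    alpha = 0.7213 / (1 + 1.079 / m)              # approximation for m >= 128
    z = 1.0 / sum(2.0 ** -r for r in registers)   # the harmonic-mean term Z
    return alpha * m * m * z

# With all registers at zero, Z = 1/m, so the raw estimate is alpha * m
# rather than 0 -- this nonzero floor is one reason practical
# implementations switch to linear counting for small cardinalities.
print(count([0] * 128))  # ~ 0.7213 / (1 + 1.079/128) * 128, about 91.6
```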
The merge operation for two HLLs (hll₁, hll₂) consists in obtaining the maximum for each pair of registers j = 1, …, m:

hll_union[j] = max(hll₁[j], hll₂[j])
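The register-wise maximum can be sketched directly; it produces exactly the sketch that would have resulted from adding both streams to a single HLL, because each register already holds the maximum ρ value seen for its substream.

```python
def merge(hll1: list, hll2: list) -> list:
    """Combine two HLL register arrays into the sketch of the union."""
    assert len(hll1) == len(hll2), "both sketches must use the same m"
    return [max(a, b) for a, b in zip(hll1, hll2)]

print(merge([1, 0, 3], [2, 0, 1]))  # [2, 0, 3]
```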
To analyze the complexity, the data streaming model is used, which analyzes the space necessary to get a (1 ± ε)-approximation with a fixed success probability 1 − δ. The relative error of HLL is 1.04/√m and it needs O(ε⁻² log log n + log n) space, where n is the set cardinality and m is the number of registers (usually less than one byte in size).
The add operation depends on the size of the output of the hash function. As this size is fixed, we can consider the running time for the add operation to be O(1).
The count and merge operations depend on the number of registers m and have a theoretical cost of O(m). In some implementations (e.g. Redis) the number of registers is fixed, and the cost is considered to be O(1) in the documentation.
The HLL++ is a practical improvement of the HyperLogLog with some modifications to make it useful for real environments. Among other improvements it proposes:
- Use a 64-bit hash function instead of the 32-bit one used in the original paper. This reduces hash collisions for large cardinalities, making it possible to remove the large-range correction factor.
- Some bias is found for small cardinalities when switching from linear counting to HLL counting; an empirical bias correction is proposed to compensate for it.
- Normally, the order of magnitude of the cardinality of the set one wants to estimate is not known in advance. Sometimes the cardinalities are very small, making the size of the HLL similar to or larger than the size of the set itself. In that case many registers are never used and memory is wasted. For this reason, HLL++ proposes a sparse representation of the registers (using a dictionary to store the value M[j] for each non-zero register j), which can later be transformed to a dense representation if the cardinality grows.
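A hypothetical sketch of this sparse-to-dense idea: keep only the non-zero registers in a dict and switch to a flat array once the dict stops saving memory. The class name, method names, and the switch threshold below are all assumptions made for illustration, not the HLL++ paper's encoding (which packs (index, value) pairs more compactly).

```python
class SparseRegisters:
    """Registers stored sparsely until enough of them are non-zero."""

    def __init__(self, m: int):
        self.m = m
        self.sparse = {}   # register index j -> M[j], non-zero entries only
        self.dense = None  # flat array, allocated on conversion

    def update(self, j: int, value: int) -> None:
        if self.dense is not None:
            self.dense[j] = max(self.dense[j], value)
            return
        self.sparse[j] = max(self.sparse.get(j, 0), value)
        if len(self.sparse) > self.m // 4:  # assumed switch point
            self.dense = [0] * self.m       # convert to dense representation
            for idx, val in self.sparse.items():
                self.dense[idx] = val
            self.sparse = None

regs = SparseRegisters(m=1024)
regs.update(7, 3)
regs.update(7, 2)  # the register keeps the maximum, 3
```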
HLL-TailCut+ is an improvement of HyperLogLog that uses 45% less memory.
- Flajolet, Philippe; Fusy, Éric; Gandouet, Olivier; Meunier, Frédéric (2007). "HyperLogLog: the analysis of a near-optimal cardinality estimation algorithm" (PDF). Discrete Mathematics and Theoretical Computer Science Proceedings. Nancy, France. AH: 127–146. Retrieved 2016-12-11.
- Durand, M.; Flajolet, P. (2003). "LogLog counting of large cardinalities." (PDF). In G. Di Battista and U. Zwick. Lecture Notes in Computer Science. Annual European Symposium on Algorithms (ESA03). 2832. Springer. pp. 605–617.
- Flajolet, Philippe; Martin, G. Nigel (1985). "Probabilistic counting algorithms for data base applications" (PDF). Journal of Computer and System Sciences. 31 (2): 182–209. doi:10.1016/0022-0000(85)90041-8.
- Heule, S.; Nunkesser, M.; Hall, A. (2013). "HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm" (PDF). Section 4.
- Whang, Kyu-Young; Vander-Zanden, Brad T; Taylor, Howard M (1990). "A linear-time probabilistic counting algorithm for database applications". ACM Transactions on Database Systems (TODS). 15 (2): 208–229.
- "HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm". research.google.com. Retrieved 2014-04-19.
- Xiao, Q.; Zhou, Y.; Chen, S. (May 2017). "Better with fewer bits: Improving the performance of cardinality estimation of large data streams". IEEE INFOCOM 2017 - IEEE Conference on Computer Communications: 1–9. doi:10.1109/INFOCOM.2017.8057088.
- "Probabilistic Data Structures for Web Analytics and Data Mining | Highly Scalable Blog". highlyscalable.wordpress.com. Retrieved 2014-04-19.
- "New cardinality estimation algorithms for HyperLogLog sketches" (PDF). Retrieved 2016-10-29.