Heap (data structure)
In computer science, a heap is a specialized tree-based data structure that satisfies the heap property: if B is a child node of A, then key(A) ≥ key(B). This implies that an element with the greatest key is always in the root node, and so such a heap is sometimes called a max heap. (Alternatively, if the comparison is reversed, the smallest element is always in the root node, which results in a min heap.) This is why heaps are used to implement priority queues. The efficiency of heap operations is crucial in several graph algorithms.
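For example, the array {100, 19, 36, 17, 3, 25, 1}, read level by level as a complete binary tree, satisfies the max-heap property. A quick check using the C++ standard library (the values here are arbitrary and only for illustration):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

int main() {
    // Read level by level, this array encodes a complete binary tree:
    // root 100; children 19 and 36; grandchildren 17, 3 (under 19) and 25, 1 (under 36).
    std::vector<int> a = {100, 19, 36, 17, 3, 25, 1};

    // Every parent key is >= the keys of its children, so this is a valid max-heap.
    assert(std::is_heap(a.begin(), a.end()));
    return 0;
}
```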
The operations commonly performed with a heap are
- delete-max or delete-min: removing the root node of a max- or min-heap, respectively
- increase-key or decrease-key: updating a key within a max- or min-heap, respectively
- insert: adding a new key to the heap
- merge: joining two heaps to form a valid new heap containing all the elements of both.
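A minimal sketch of insert and delete-max using the C++ standard library's std::priority_queue, which is a binary max-heap by default (the values are arbitrary):

```cpp
#include <functional>
#include <iostream>
#include <queue>
#include <vector>

int main() {
    std::priority_queue<int> maxHeap;  // std::priority_queue is a max-heap by default

    // insert: add new keys to the heap
    std::vector<int> keys = {3, 1, 4, 1, 5, 9, 2, 6};
    for (int key : keys)
        maxHeap.push(key);

    // find-max / delete-max: the root always holds the greatest key
    while (!maxHeap.empty()) {
        std::cout << maxHeap.top() << ' ';  // prints 9 6 5 4 3 2 1 1
        maxHeap.pop();
    }
    std::cout << '\n';

    // Reversing the comparison yields a min-heap instead.
    std::priority_queue<int, std::vector<int>, std::greater<int>> minHeap(keys.begin(), keys.end());
    std::cout << minHeap.top() << '\n';  // prints 1
    return 0;
}
```

Note that std::priority_queue exposes only insert, find-max/min and delete-max/min; decrease-key and merge require a different heap implementation.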
Heaps are used in the sorting algorithm heapsort.
Variants
- Binary heap
- Binomial heap
- D-ary heap
- Fibonacci heap
- Pairing heap
- Leftist heap
- Soft heap
- 2-3 heap
- Ternary heap
- Treap
- Beap
- Skew heap
Comparison of theoretical bounds for variants
The following complexities[1] are worst-case bounds for binary and binomial heaps, and amortized bounds for the Fibonacci heap. O(f) denotes an asymptotic upper bound and Θ(f) an asymptotically tight bound (see Big O notation). Function names assume a min-heap.
| Operation   | Binary  | Binomial        | Fibonacci |
|-------------|---------|-----------------|-----------|
| createHeap  | Θ(n)    | Θ(1)            | Θ(1)      |
| findMin     | Θ(1)    | O(lg n) or Θ(1) | Θ(1)      |
| deleteMin   | Θ(lg n) | Θ(lg n)         | O(lg n)   |
| insert      | Θ(lg n) | O(lg n)         | Θ(1)      |
| decreaseKey | Θ(lg n) | Θ(lg n)         | Θ(1)      |
| merge       | Θ(n)    | O(lg n)         | Θ(1)      |
For pairing heaps, the insert and merge operations are conjectured[citation needed] to have O(1) amortized complexity, but this has not yet been proven. decreaseKey does not have O(1) amortized complexity.[1][2]
Heap applications
Heaps are a favorite data structure for many applications.
- Heapsort: One of the best sorting methods, being in-place and having no quadratic worst case.
- Selection algorithms: Finding the min, the max, both of them, the median, or even the k-th largest element can be done in sublinear time on data that is kept in a heap.
- Graph algorithms: By using heaps as internal traversal data structures, running time is reduced substantially; for example, with a binary heap both Prim's minimum-spanning-tree algorithm and Dijkstra's shortest-path algorithm run in O((E + V) log V) time rather than the O(V²) of a simple array-based implementation.
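As an illustration of the last point, here is a sketch of Dijkstra's algorithm that uses the standard library's binary heap (std::priority_queue) as its priority queue. The adjacency-list representation and the names Edge and dijkstra are illustrative choices, and decrease-key is replaced by reinsertion with lazy deletion of stale entries:

```cpp
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// One weighted, directed edge: (target vertex, edge weight).
using Edge = std::pair<int, int>;

// Single-source shortest paths using a binary min-heap as the priority queue.
std::vector<long long> dijkstra(const std::vector<std::vector<Edge>>& graph, int source) {
    const long long INF = std::numeric_limits<long long>::max();
    std::vector<long long> dist(graph.size(), INF);
    dist[source] = 0;

    using Entry = std::pair<long long, int>;  // (tentative distance, vertex)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> heap;  // min-heap
    heap.push({0, source});

    while (!heap.empty()) {
        auto [d, u] = heap.top();
        heap.pop();
        if (d > dist[u]) continue;            // stale entry: a shorter path was already found
        for (auto [v, w] : graph[u]) {
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                heap.push({dist[v], v});      // stand-in for decrease-key: reinsert
            }
        }
    }
    return dist;
}
```

Reinserting instead of decreasing a key keeps the code simple at the cost of a larger heap; with a Fibonacci heap and a true decreaseKey, the bound improves to O(E + V log V).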
Binary heaps may be represented using an array alone. The first (or last) element of the array contains the root. The next two elements contain its children, the next four contain the four children of those two child nodes, and so on. Thus the children of the node at position n are at positions 2n and 2n+1 in a one-based array, or at positions 2n+1 and 2n+2 in a zero-based array. Balancing a heap is done by swapping elements that are out of order. Because a heap can be built from an array without requiring extra memory (for the nodes, for example), heapsort can be used to sort an array in place.
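A minimal sketch of this index arithmetic for a zero-based array (the helper names are ours):

```cpp
#include <cstddef>

// Children and parent of the node stored at index i in a zero-based array.
inline std::size_t leftChild(std::size_t i)  { return 2 * i + 1; }
inline std::size_t rightChild(std::size_t i) { return 2 * i + 2; }
inline std::size_t parent(std::size_t i)     { return (i - 1) / 2; }  // undefined for the root (i == 0)
```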
One more advantage of heaps over trees in some applications is that a heap can be constructed from n elements in linear time, using Floyd's bottom-up heap-construction algorithm.
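A sketch of that bottom-up construction together with the in-place heapsort mentioned above, for a zero-based array of ints; the names siftDown, buildMaxHeap, and heapSort are ours:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Restore the max-heap property for the subtree rooted at index i,
// assuming both child subtrees are already valid heaps.
void siftDown(std::vector<int>& a, std::size_t i, std::size_t size) {
    while (true) {
        std::size_t largest = i;
        std::size_t left = 2 * i + 1, right = 2 * i + 2;
        if (left < size && a[left] > a[largest]) largest = left;
        if (right < size && a[right] > a[largest]) largest = right;
        if (largest == i) return;          // heap property holds here
        std::swap(a[i], a[largest]);
        i = largest;                       // continue one level down
    }
}

// Bottom-up heap construction: sift down every internal node,
// starting from the last one.
void buildMaxHeap(std::vector<int>& a) {
    for (std::size_t i = a.size() / 2; i-- > 0; )
        siftDown(a, i, a.size());
}

// In-place heapsort: repeatedly move the current maximum (the root)
// to the end of the array and shrink the heap by one.
void heapSort(std::vector<int>& a) {
    buildMaxHeap(a);
    for (std::size_t end = a.size(); end > 1; --end) {
        std::swap(a[0], a[end - 1]);
        siftDown(a, 0, end - 1);
    }
}
```

The construction is linear because each siftDown costs time proportional to the height of its subtree, and these heights sum to O(n) over all internal nodes.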
Heap implementations
- The C++ Standard Template Library provides the make_heap, push_heap and pop_heap algorithms for binary heaps, which operate on arbitrary random-access iterators. These algorithms treat the iterator range as an array and use the array-to-heap conversion detailed above.
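A brief usage sketch of these algorithms on a std::vector (the values are arbitrary):

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v = {3, 1, 4, 1, 5, 9};

    // Rearrange the vector in place so it satisfies the (max-)heap property.
    std::make_heap(v.begin(), v.end());

    // Insert: append the new key, then push_heap restores the heap property.
    v.push_back(7);
    std::push_heap(v.begin(), v.end());

    // Delete-max: pop_heap moves the root to the back, then shrink the vector.
    std::pop_heap(v.begin(), v.end());
    std::cout << "max was " << v.back() << '\n';  // prints "max was 9"
    v.pop_back();
    return 0;
}
```

std::sort_heap can then be used to turn the heap back into a sorted range, which is essentially heapsort.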
Notes
- ^ Thomas H. Cormen, Charles E. Leiserson, and Ronald L. Rivest (1990). Introduction to Algorithms. MIT Press / McGraw-Hill.
External links
- Heap Applet by Kubo Kovac
- Priority Queues by Lee Killough
- Heap Source-Code (C++) with documentation