Binary heap


Binary Heap
Type: tree

Time complexity in big O notation
Operation  Average   Worst case
Search     O(n)      O(n)
Insert     O(1)      O(log n)
Delete     O(log n)  O(log n)
Peek       O(1)      O(1)

Space complexity
Space      O(n)      O(n)
Example of a complete binary max heap
Example of a complete binary min heap

A binary heap is a heap data structure that takes the form of a binary tree. Binary heaps are a common way of implementing priority queues.[1]: 162–163  The binary heap was introduced by J. W. J. Williams in 1964, as a data structure for the heapsort.[2]

A binary heap is defined as a binary tree with two additional constraints:[3]

  • Shape property: a binary heap is a complete binary tree; that is, all levels of the tree, except possibly the last one (deepest) are fully filled, and, if the last level of the tree is not complete, the nodes of that level are filled from left to right.
  • Heap property: the key stored in each node is either greater than or equal to or less than or equal to the keys in the node's children, according to some total order.

Heaps where the parent key is greater than or equal to (≥) the child keys are called max-heaps; those where it is less than or equal to (≤) are called min-heaps. Efficient (logarithmic time) algorithms are known for the two operations needed to implement a priority queue on a binary heap: inserting an element, and removing the smallest (largest) element from a min-heap (max-heap). Binary heaps are also commonly employed in the heapsort sorting algorithm, which is an in-place algorithm because binary heaps can be implemented as an implicit data structure, storing keys in an array and using their relative positions within that array to represent child–parent relationships.

Heap operations

Both the insert and remove operations modify the heap to conform to the shape property first, by adding or removing from the end of the heap. Then the heap property is restored by traversing up or down the heap. Both operations take O(log n) time.

Insert

To add an element to a heap we must perform an up-heap operation (also known as bubble-up, percolate-up, sift-up, trickle-up, heapify-up, or cascade-up), by following this algorithm:

  1. Add the element to the bottom level of the heap.
  2. Compare the added element with its parent; if they are in the correct order, stop.
  3. If not, swap the element with its parent and return to the previous step.

The number of operations required depends only on the number of levels the new element must rise to satisfy the heap property, thus the insertion operation has a worst-case time complexity of O(log n) but an average-case complexity of O(1).[4]
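
As a minimal sketch, the procedure above might look as follows in Python, for a max-heap stored in a list with its root at index 0 (the names sift_up and insert are illustrative, not from any standard library):

def sift_up(a, i):
    # Steps 2-3: move a[i] up until its parent is at least as large.
    while i > 0:
        parent = (i - 1) // 2
        if a[parent] >= a[i]:
            break                       # correct order reached; stop
        a[i], a[parent] = a[parent], a[i]
        i = parent

def insert(a, key):
    a.append(key)                       # step 1: add at the bottom level
    sift_up(a, len(a) - 1)              # steps 2-3: restore the heap property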

As an example of binary heap insertion, say we have a max-heap whose root, 11, has the children 5 and 8, and we want to add the number 15 to the heap. We first place the 15 in the first open position on the bottom level, as a child of the 8. However, the heap property is violated, since 15 > 8, so we need to swap the 15 and the 8.

After this first swap, the heap property is still violated, since 15 > 11, so we need to swap again, bringing the 15 to the root of what is now a valid max-heap. There is no need to check the left child after this final step: at the start, the max-heap was valid, meaning 11 > 5; if 15 > 11 and 11 > 5, then 15 > 5, because of the transitive relation.

Extract

The procedure for deleting the root from the heap (effectively extracting the maximum element in a max-heap or the minimum element in a min-heap) and restoring the properties is called down-heap (also known as bubble-down, percolate-down, sift-down, trickle down, heapify-down, cascade-down, and extract-min/max).

  1. Replace the root of the heap with the last element on the last level.
  2. Compare the new root with its children; if they are in the correct order, stop.
  3. If not, swap the element with one of its children and return to the previous step. (Swap with its smaller child in a min-heap and its larger child in a max-heap.)

So, if we have the same max-heap as before (with 11 at the root), we remove the 11 and replace it with the 4, the last element on the last level.

Now the heap property is violated, since 8 is greater than 4. In this case, swapping the two elements, 4 and 8, is enough to restore the heap property and we need not swap elements further.

The downward-moving node is swapped with the larger of its children in a max-heap (in a min-heap it would be swapped with its smaller child), until it satisfies the heap property in its new position. This functionality is achieved by the Max-Heapify function as defined below in pseudocode for an array-backed heap A of length heap_length[A]. Note that "A" is indexed starting at 1, not 0 as is common in many real programming languages[citation needed].

Max-Heapify (A, i):
    left ← 2*i   // ← means "assignment"
    right ← 2*i + 1
    largest ← i

    if left ≤ heap_length[A] and A[left] > A[largest] then:
        largest ← left

    if right ≤ heap_length[A] and A[right] > A[largest] then:
        largest ← right

    if largest ≠ i then:
        swap A[i] and A[largest]
        Max-Heapify(A, largest)

For the above algorithm to correctly re-heapify the array, only the node at index i and its two direct children may violate the heap property; if they do not violate it, the algorithm falls through with no change to the array. The down-heap operation (without the preceding swap) can also be used to modify the value of the root, even when an element is not being deleted. In the pseudocode above, text following // is a comment, and A is an array (or list) indexed from 1 up to length(A).
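
For comparison, here is a minimal runnable counterpart in Python, using the 0-based indexing common in real languages (a sketch; the names max_heapify and extract_max are illustrative):

def max_heapify(a, i, n):
    # Sift a[i] down within the first n elements of the list a.
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and a[left] > a[largest]:
        largest = left
    if right < n and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, n)

def extract_max(a):
    # Remove and return the root of a max-heap stored in a list.
    root = a[0]
    a[0] = a[-1]                        # replace the root with the last element
    a.pop()
    if a:
        max_heapify(a, 0, len(a))       # sift the new root down
    return root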

In the worst case, the new root has to be swapped with its child on each level until it reaches the bottom level of the heap, meaning that the delete operation has a time complexity relative to the height of the tree, or O(log n).

Building a heap

Building a heap from an array of n input elements can be done by starting with an empty heap, then successively inserting each element. This approach, called Williams' method after the inventor of binary heaps, is easily seen to run in O(n log n) time: it performs n insertions at O(log n) cost each.[a]

However, Williams' method is suboptimal. A faster method (due to Floyd[5]) starts by arbitrarily putting the elements on a binary tree, respecting the shape property (the tree could be represented by an array, see below). Then starting from the lowest level and moving upwards, sift the root of each subtree downward as in the deletion algorithm until the heap property is restored. More specifically, if all the subtrees starting at some height h (measured from the bottom) have already been "heapified", the trees at height h + 1 can be heapified by sending their root down along the path of maximum valued children when building a max-heap, or minimum valued children when building a min-heap. This process takes O(h) operations (swaps) per node. In this method most of the heapification takes place in the lower levels. Since the height of the heap is floor(log n), the number of nodes at height h is at most ceil(n / 2^(h+1)). Therefore, the cost of heapifying all subtrees is:

\sum_{h=0}^{\lceil \log n \rceil} \left\lceil \frac{n}{2^{h+1}} \right\rceil O(h) = O\left( n \sum_{h=0}^{\lceil \log n \rceil} \frac{h}{2^{h+1}} \right) = O\left( n \sum_{h=0}^{\infty} \frac{h}{2^{h}} \right) = O(n)

This uses the fact that the infinite series Σ h/2^h converges to 2.

The exact value of the above (the worst-case number of comparisons during the heap construction) is known to be equal to:

2n − 2 s2(n) − e2(n),[6]

where s2(n) is the sum of all digits of the binary representation of n and e2(n) is the exponent of 2 in the prime factorization of n.
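
For example, for n = 12 (binary 1100) we have s2(12) = 1 + 1 + 0 + 0 = 2 and e2(12) = 2 (since 12 = 2^2 · 3), giving a worst case of 2·12 − 2·2 − 2 = 18 comparisons.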

The Build-Max-Heap function that follows converts an array A which stores a complete binary tree with n nodes to a max-heap by repeatedly using Max-Heapify in a bottom-up manner. It is based on the observation that the array elements indexed by floor(n/2) + 1, floor(n/2) + 2, ..., n are all leaves of the tree (assuming, as before, that indices start at 1), thus each is a one-element heap. Build-Max-Heap runs Max-Heapify on each of the remaining tree nodes.

Build-Max-Heap (A):
    heap_length[A] ← length[A]
    for each index i from floor(length[A]/2) downto 1 do:
        Max-Heapify(A, i)
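
As an aside, Python's standard heapq module implements exactly this bottom-up construction for a min-heap, so the linear-time build can be observed directly:

import heapq

data = [9, 3, 7, 1, 8, 2]
heapq.heapify(data)       # Floyd's bottom-up construction, O(n)
print(data[0])            # prints 1: the minimum is now at the root

A max-heap version would instead loop i from floor(n/2) − 1 down to 0 (with 0-based indexing), calling a max-heapify routine such as the Python sketch given earlier.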

Heap implementation

A small complete binary tree stored in an array
Comparison between a binary heap and an array implementation.

Heaps are commonly implemented with an array. Any binary tree can be stored in an array, but because a binary heap is always a complete binary tree, it can be stored compactly. No space is required for pointers; instead, the parent and children of each node can be found by arithmetic on array indices. These properties make this heap implementation a simple example of an implicit data structure or Ahnentafel list. Details depend on the root position, which in turn may depend on constraints of a programming language used for implementation, or programmer preference. Specifically, sometimes the root is placed at index 1, sacrificing space in order to simplify arithmetic.

Let n be the number of elements in the heap and i be an arbitrary valid index of the array storing the heap. If the tree root is at index 0, with valid indices 0 through n − 1, then each element a at index i has

  • children at indices 2i + 1 and 2i + 2
  • its parent at index floor((i − 1) ∕ 2).

Alternatively, if the tree root is at index 1, with valid indices 1 through n, then each element a at index i has

  • children at indices 2i and 2i + 1
  • its parent at index floor(i ∕ 2).
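
In code, these relations reduce to one-line index functions; a minimal sketch in Python for the 0-based convention (the names are illustrative):

def left(i):
    return 2 * i + 1      # index of the left child

def right(i):
    return 2 * i + 2      # index of the right child

def parent(i):
    return (i - 1) // 2   # floor((i - 1) / 2)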

This implementation is used in the heapsort algorithm, where it allows the space in the input array to be reused to store the heap (i.e. the algorithm is done in-place). It is also useful as a priority queue, where use of a dynamic array allows insertion of an unbounded number of items.

The upheap/downheap operations can then be stated in terms of an array as follows: suppose that the heap property holds for the indices b, b+1, ..., e. The sift-down function extends the heap property to b−1, b, b+1, ..., e. Only index i = b−1 can violate the heap property. Let j be the index of the largest child of a[i] (for a max-heap, or the smallest child for a min-heap) within the range b, ..., e. (If no such index exists because 2i > e then the heap property holds for the newly extended range and nothing needs to be done.) By swapping the values a[i] and a[j] the heap property for position i is established. At this point, the only problem is that the heap property might not hold for index j. The sift-down function is applied tail-recursively to index j until the heap property is established for all elements.

The sift-down function is fast. In each step it only needs two comparisons and one swap. The index value where it is working doubles in each iteration, so that at most log2 e steps are required.
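
Here is a sketch of this range-limited sift-down in Python for a max-heap, using 0-based indexing (so a child of i is at 2i + 1 rather than the 2i of the 1-based convention above; the name sift_down is illustrative):

def sift_down(a, i, e):
    # Extend the heap property from a[i+1..e] to a[i..e]; e is the last valid index.
    while 2 * i + 1 <= e:
        j = 2 * i + 1                   # left child
        if j + 1 <= e and a[j + 1] > a[j]:
            j += 1                      # j is now the larger child
        if a[i] >= a[j]:
            break                       # heap property already holds
        a[i], a[j] = a[j], a[i]         # swap with the larger child
        i = j                           # continue from the child's position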

For big heaps and using virtual memory, storing elements in an array according to the above scheme is inefficient: (almost) every level is in a different page. B-heaps are binary heaps that keep subtrees in a single page, reducing the number of pages accessed by up to a factor of ten.[7]

The operation of merging two binary heaps takes Θ(n) for equal-sized heaps. The best that can be done (in the case of an array implementation) is simply to concatenate the two heap arrays and build a heap of the result.[8] A heap on n elements can be merged with a heap on k elements using O(log n log k) key comparisons, or, in the case of a pointer-based implementation, in O(log n log k) time.[9] An algorithm for splitting a heap on n elements into two heaps on k and n−k elements, respectively, based on a new view of heaps as ordered collections of subheaps, was presented in [10]. The algorithm requires O(log n * log n) comparisons. The view also presents a new and conceptually simple algorithm for merging heaps. When merging is a common task, a different heap implementation is recommended, such as binomial heaps, which can be merged in O(log n) time.
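
In the array case, the concatenate-and-rebuild merge is only a few lines; a sketch in Python using the standard heapq min-heap:

import heapq

def merge_heaps(h1, h2):
    merged = h1 + h2          # concatenate the two heap arrays
    heapq.heapify(merged)     # rebuild the heap property in Θ(n)
    return merged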

Additionally, a binary heap can be implemented with a traditional binary tree data structure, but there is an issue with finding the adjacent element on the last level on the binary heap when adding an element. This element can be determined algorithmically or by adding extra data to the nodes, called "threading" the tree—instead of merely storing references to the children, we store the inorder successor of the node as well.

It is possible to modify the heap structure to allow extraction of both the smallest and largest element in O(log n) time.[11] To do this, the rows alternate between min heap and max heap. The algorithms are roughly the same, but, in each step, one must consider the alternating rows with alternating comparisons. The performance is roughly the same as a normal single direction heap. This idea can be generalised to a min-max-median heap.

Derivation of index equations

In an array-based heap, the children and parent of a node can be located via simple arithmetic on the node's index. This section derives the relevant equations for heaps with their root at index 0, with additional notes on heaps with their root at index 1.

To avoid confusion, we'll define the level of a node as its distance from the root, such that the root itself occupies level 0.

Child nodes

For a general node located at index i (beginning from 0), we will first derive the index of its right child, right = 2i + 2.

Let node i be located in level L, and note that any level l contains exactly 2^l nodes. Furthermore, there are exactly 2^(l+1) − 1 nodes contained in the layers up to and including layer l (think of binary arithmetic; 0111...111 = 1000...000 − 1). Because the root is stored at index 0, the kth node will be stored at index k − 1. Putting these observations together yields the following expression for the index of the last node in layer l:

last(l) = (2^(l+1) − 1) − 1 = 2^(l+1) − 2

Let there be j nodes after node i in layer L, so that

i = last(L) − j = (2^(L+1) − 2) − j

Each of these j nodes must have exactly 2 children, so there must be 2j nodes separating i's right child from the end of its layer (L + 1):

right = last(L + 1) − 2j = (2^(L+2) − 2) − 2j = 2((2^(L+1) − 2) − j) + 2 = 2i + 2

As required.

Noting that the left child of any node is always 1 place before its right child, we get left = 2i + 1.

If the root is located at index 1 instead of 0, the last node in each level is instead at index 2^(l+1) − 1. Using this throughout yields left = 2i and right = 2i + 1 for heaps with their root at 1.

Parent node

Every node is either the left or right child of its parent, so we know that either of the following is true:

i = 2 × (parent) + 1 (if it is a left child), or
i = 2 × (parent) + 2 (if it is a right child).

Hence, parent = (i − 1) ∕ 2 or parent = (i − 2) ∕ 2.

Now consider the expression floor((i − 1) ∕ 2).

If node i is a left child, this gives the result immediately; however, it also gives the correct result if node i is a right child. In this case, (i − 2) must be even, and hence (i − 1) must be odd, so floor((i − 1) ∕ 2) = (i − 2) ∕ 2 = parent.

Therefore, irrespective of whether a node is a left or right child, its parent can be found by the expression:

parent = floor((i − 1) ∕ 2)
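
As a quick check with the 0-based formulas: the node at index i = 12 has parent floor((12 − 1) ∕ 2) = 5 and children at 2·12 + 1 = 25 and 2·12 + 2 = 26; consistently, 12 is indeed the right child of node 5, since 2·5 + 2 = 12.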

Since the ordering of siblings in a heap is not specified by the heap property, a single node's two children can be freely interchanged unless doing so violates the shape property (compare with treap). Note, however, that in the common array-based heap, simply swapping the children might also necessitate moving the children's sub-tree nodes to retain the heap property.

The binary heap is a special case of the d-ary heap in which d = 2.

Summary of running times

Here are time complexities[12] of various heap data structures. The abbreviation am. indicates that the given complexity is amortized, otherwise it is a worst-case complexity. For the meaning of "O(f)" and "Θ(f)" see Big O notation. Names of operations assume a min-heap.

Operation               find-min  delete-min    decrease-key    insert        meld          make-heap[b]
Binary[12]              Θ(1)      Θ(log n)      Θ(log n)        Θ(log n)      Θ(n)          Θ(n)
Skew[13]                Θ(1)      O(log n) am.  O(log n) am.    O(log n) am.  O(log n) am.  Θ(n) am.
Leftist[14]             Θ(1)      Θ(log n)      Θ(log n)        Θ(log n)      Θ(log n)      Θ(n)
Binomial[12][16]        Θ(1)      Θ(log n)      Θ(log n)        Θ(1) am.      Θ(log n)[c]   Θ(n)
Skew binomial[17]       Θ(1)      Θ(log n)      Θ(log n)        Θ(1)          Θ(log n)[c]   Θ(n)
2–3 heap[19]            Θ(1)      O(log n) am.  Θ(1)            Θ(1) am.      O(log n)[c]   Θ(n)
Bottom-up skew[13]      Θ(1)      O(log n) am.  O(log n) am.    Θ(1) am.      Θ(1) am.      Θ(n) am.
Pairing[20]             Θ(1)      O(log n) am.  o(log n) am.[d] Θ(1)          Θ(1)          Θ(n)
Rank-pairing[23]        Θ(1)      O(log n) am.  Θ(1) am.        Θ(1)          Θ(1)          Θ(n)
Fibonacci[12][24]       Θ(1)      O(log n) am.  Θ(1) am.        Θ(1)          Θ(1)          Θ(n)
Strict Fibonacci[25][e] Θ(1)      Θ(log n)      Θ(1)            Θ(1)          Θ(1)          Θ(n)
Brodal[26][e]           Θ(1)      Θ(log n)      Θ(1)            Θ(1)          Θ(1)          Θ(n)[27]
  1. ^ In fact, this procedure can be shown to take Θ(n log n) time in the worst case, meaning that n log n is also an asymptotic lower bound on the complexity.[1]: 167  In the average case (averaging over all permutations of n inputs), though, the method takes linear time.[5]
  2. ^ make-heap is the operation of building a heap from a sequence of n unsorted elements. It can be done in Θ(n) time whenever meld runs in O(log n) time (where both complexities can be amortized).[13][14] Another algorithm achieves Θ(n) for binary heaps.[15]
  3. ^ a b c For persistent heaps (not supporting decrease-key), a generic transformation reduces the cost of meld to that of insert, while the new cost of delete-min is the sum of the old costs of delete-min and meld.[18] Here, it makes meld run in Θ(1) time (amortized, if the cost of insert is) while delete-min still runs in O(log n). Applied to skew binomial heaps, it yields Brodal-Okasaki queues, persistent heaps with optimal worst-case complexities.[17]
  4. ^ Lower bound of Ω(log log n),[21] upper bound of O(2^(2·sqrt(log log n))).[22]
  5. ^ a b Brodal queues and strict Fibonacci heaps achieve optimal worst-case complexities for heaps. They were first described as imperative data structures. The Brodal-Okasaki queue is a persistent data structure achieving the same optimum, except that decrease-key is not supported.

References

  1. ^ a b Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2009) [1990]. Introduction to Algorithms (3rd ed.). MIT Press and McGraw-Hill. ISBN 0-262-03384-4.
  2. ^ Williams, J. W. J. (1964), "Algorithm 232 - Heapsort", Communications of the ACM, 7 (6): 347–348, doi:10.1145/512274.512284
  3. ^ "Binary Heaps", Data Structures and Algorithms {{citation}}: Unknown parameter |authors= ignored (help)
  4. ^ http://wcipeg.com/wiki/Binary_heap
  5. ^ a b Hayward, Ryan; McDiarmid, Colin (1991). "Average Case Analysis of Heap Building by Repeated Insertion" (PDF). J. Algorithms. 12: 126–153.
  6. ^ Suchenek, Marek A. (2012), "Elementary Yet Precise Worst-Case Analysis of Floyd's Heap-Construction Program", Fundamenta Informaticae, 120 (1), IOS Press: 75–92, doi:10.3233/FI-2012-751.
  7. ^ Poul-Henning Kamp. "You're Doing It Wrong". ACM Queue. June 11, 2010.
  8. ^ Chris L. Kuszmaul. "binary heap". Dictionary of Algorithms and Data Structures, Paul E. Black, ed., U.S. National Institute of Standards and Technology. 16 November 2009.
  9. ^ J.-R. Sack and T. Strothotte "An Algorithm for Merging Heaps", Acta Informatica 22, 171-186 (1985).
  10. ^ J.-R. Sack and T. Strothotte, "A characterization of heaps and its applications", Information and Computation, Volume 86, Issue 1, May 1990, pages 69–86.
  11. ^ Atkinson, M.D.; J.-R. Sack; N. Santoro; T. Strothotte (1 October 1986). "Min-max heaps and generalized priority queues" (PDF). Programming techniques and Data structures. Comm. ACM, 29(10): 996–1000.
  12. ^ a b c d Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L. (1990). Introduction to Algorithms (1st ed.). MIT Press and McGraw-Hill. ISBN 0-262-03141-8.
  13. ^ a b c Sleator, Daniel Dominic; Tarjan, Robert Endre (February 1986). "Self-Adjusting Heaps". SIAM Journal on Computing. 15 (1): 52–69. CiteSeerX 10.1.1.93.6678. doi:10.1137/0215004. ISSN 0097-5397.
  14. ^ a b Tarjan, Robert (1983). "3.3. Leftist heaps". Data Structures and Network Algorithms. pp. 38–42. doi:10.1137/1.9781611970265. ISBN 978-0-89871-187-5.
  15. ^ Hayward, Ryan; McDiarmid, Colin (1991). "Average Case Analysis of Heap Building by Repeated Insertion" (PDF). J. Algorithms. 12: 126–153. CiteSeerX 10.1.1.353.7888. doi:10.1016/0196-6774(91)90027-v. Archived from the original (PDF) on 2016-02-05. Retrieved 2016-01-28.
  16. ^ "Binomial Heap | Brilliant Math & Science Wiki". brilliant.org. Retrieved 2019-09-30.
  17. ^ a b Brodal, Gerth Stølting; Okasaki, Chris (November 1996), "Optimal purely functional priority queues", Journal of Functional Programming, 6 (6): 839–857, doi:10.1017/s095679680000201x
  18. ^ Okasaki, Chris (1998). "10.2. Structural Abstraction". Purely Functional Data Structures (1st ed.). pp. 158–162. ISBN 9780521631242.
  19. ^ Takaoka, Tadao (1999), Theory of 2–3 Heaps (PDF), p. 12
  20. ^ Iacono, John (2000), "Improved upper bounds for pairing heaps", Proc. 7th Scandinavian Workshop on Algorithm Theory (PDF), Lecture Notes in Computer Science, vol. 1851, Springer-Verlag, pp. 63–77, arXiv:1110.4428, CiteSeerX 10.1.1.748.7812, doi:10.1007/3-540-44985-X_5, ISBN 3-540-67690-2
  21. ^ Fredman, Michael Lawrence (July 1999). "On the Efficiency of Pairing Heaps and Related Data Structures" (PDF). Journal of the Association for Computing Machinery. 46 (4): 473–501. doi:10.1145/320211.320214.
  22. ^ Pettie, Seth (2005). Towards a Final Analysis of Pairing Heaps (PDF). FOCS '05 Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science. pp. 174–183. CiteSeerX 10.1.1.549.471. doi:10.1109/SFCS.2005.75. ISBN 0-7695-2468-0.
  23. ^ Haeupler, Bernhard; Sen, Siddhartha; Tarjan, Robert E. (November 2011). "Rank-pairing heaps" (PDF). SIAM J. Computing. 40 (6): 1463–1485. doi:10.1137/100785351.
  24. ^ Fredman, Michael Lawrence; Tarjan, Robert E. (July 1987). "Fibonacci heaps and their uses in improved network optimization algorithms" (PDF). Journal of the Association for Computing Machinery. 34 (3): 596–615. CiteSeerX 10.1.1.309.8927. doi:10.1145/28869.28874.
  25. ^ Brodal, Gerth Stølting; Lagogiannis, George; Tarjan, Robert E. (2012). Strict Fibonacci heaps (PDF). Proceedings of the 44th symposium on Theory of Computing - STOC '12. pp. 1177–1184. CiteSeerX 10.1.1.233.1740. doi:10.1145/2213977.2214082. ISBN 978-1-4503-1245-5.
  26. ^ Brodal, Gerth S. (1996), "Worst-Case Efficient Priority Queues" (PDF), Proc. 7th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 52–58
  27. ^ Goodrich, Michael T.; Tamassia, Roberto (2004). "7.3.6. Bottom-Up Heap Construction". Data Structures and Algorithms in Java (3rd ed.). pp. 338–341. ISBN 0-471-46983-1.