Interleave lower bound

In the theory of optimal binary search trees, the interleave lower bound is a lower bound on the number of operations required by a binary search tree (BST) to execute a given sequence of accesses.

Several variants of this lower bound have been proven.[1][2][3] This article is based on a variation of Wilber's first bound.[4] This lower bound is used in the design and analysis of the tango tree.[4] Furthermore, this lower bound can be rephrased and proven geometrically, in terms of the geometry of binary search trees.[5]

Definition


The bound is based on a fixed perfect BST P, called the lower bound tree, over the keys 1, 2, ..., n. For example, for n = 7, P can be represented by the following parenthesis structure:

[([1] 2 [3]) 4 ([5] 6 [7])]
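
The lower bound tree can be made concrete with a short sketch. The following Python code is illustrative only; the names Node, build_lower_bound_tree and parenthesize are assumptions, not taken from the cited papers. It builds P as a perfect BST over the keys 1..n and prints the parenthesis structure shown above, with the bracket style alternating by depth.

    class Node:
        def __init__(self, key, left=None, right=None):
            self.key, self.left, self.right = key, left, right

    def build_lower_bound_tree(lo, hi):
        """Perfect BST over the keys lo..hi (assumes the key count is 2^k - 1)."""
        if lo > hi:
            return None
        mid = (lo + hi) // 2
        return Node(mid, build_lower_bound_tree(lo, mid - 1),
                         build_lower_bound_tree(mid + 1, hi))

    def parenthesize(node, depth=0):
        """Render the tree with brackets alternating by depth, as in the example."""
        open_, close = ("[", "]") if depth % 2 == 0 else ("(", ")")
        if node.left is None and node.right is None:
            return open_ + str(node.key) + close
        return (open_ + parenthesize(node.left, depth + 1) + " " + str(node.key)
                + " " + parenthesize(node.right, depth + 1) + close)

    P = build_lower_bound_tree(1, 7)
    print(parenthesize(P))   # [([1] 2 [3]) 4 ([5] 6 [7])]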

For each node y in P, define:

  • Left(y) to be the set of nodes in the left sub-tree of y, including y.
  • Right(y) to be the set of nodes in the right sub-tree of y.

Consider the following access sequence: X = x_1, x_2, ..., x_m. For a fixed node y, and for each access x_i, define the label of x_i with respect to y as:

  • "L" - if x_i is in Left(y);
  • "R" - if x_i is in Right(y);
  • Null - otherwise.

The label of y is the concatenation of the labels from all the accesses. For example, if the sequence of accesses is 7, 6, 3, then the label of the root (4) is "RRL", the label of 6 is "RL", and the label of 2 is "R".

For every node y, define the amount of interleaving through y as the number of alternations between L and R in the label of y. In the above example, the interleaving through 4 and through 6 is 1, and the interleaving through all other nodes is 0.

The interleave bound, IB(X), is the sum of the interleaving through all the nodes of the tree. The interleave bound of the above sequence is 2.
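
A short computation verifies the bound for the example above. The sketch below is a minimal illustration (the helper name interleave_bound is an assumption, not from the literature): it computes IB(X) directly from the key intervals of the lower bound tree, using the fact that for the node y of P covering the key range [lo, hi], Left(y) is the interval [lo, y] (including y itself, as in the definition) and Right(y) is (y, hi].

    def interleave_bound(n, accesses):
        """Interleave bound IB(X) for the perfect lower bound tree over the keys 1..n."""
        total = 0

        def visit(lo, hi):
            nonlocal total
            if lo > hi:
                return
            y = (lo + hi) // 2           # the node of P whose sub-tree covers [lo, hi]
            # Label of y: "L" for accesses in Left(y) = [lo, y], "R" for Right(y) = (y, hi].
            label = [("L" if x <= y else "R") for x in accesses if lo <= x <= hi]
            # Amount of interleaving through y = number of L/R alternations in the label.
            total += sum(1 for a, b in zip(label, label[1:]) if a != b)
            visit(lo, y - 1)
            visit(y + 1, hi)

        visit(1, n)
        return total

    print(interleave_bound(7, [7, 6, 3]))   # 2, matching the example above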

The Lower Bound Statement and its Proof


The interleave bound is summarized by the following theorem.

Theorem — Let X be an access sequence. Denote by IB(X) the interleave bound of X; then IB(X)/2 − n is a lower bound of OPT(X), the cost of the optimal offline BST that serves X.

The following proof is based on the one given by Demaine, Harmon, Iacono, and Pătraşcu.[4]

Proof


Let X = x_1, x_2, ..., x_m be an access sequence. Denote by T_i the state of an arbitrary BST at time i, i.e. after executing the sequence x_1, x_2, ..., x_i. We also fix a lower bound BST P.

For a node y in P, define the transition point for y at time i to be the minimum-depth node z of the BST T_i such that the path from the root of T_i to z includes both a node from Left(y) and a node from Right(y). Intuitively, any BST algorithm on T_i that accesses an element from Right(y) and then an element from Left(y) (or vice versa) must touch the transition point of y at least once. In the following lemma, we will show that the transition point is well-defined.
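
To illustrate the definition, here is a minimal sketch (the tuple representation of the BST and the name transition_point are assumptions for illustration, not from the cited paper). It finds the transition point by a breadth-first search that records, for each node, whether its root path has already met Left(y) and Right(y); because the search proceeds by depth, the first node for which both hold has minimum depth.

    from collections import deque

    # A BST node is a tuple (key, left, right); None denotes an empty sub-tree.
    def transition_point(root, left_keys, right_keys):
        """Minimum-depth node of the BST whose root path meets both Left(y) and Right(y)."""
        queue = deque([(root, False, False)])      # (sub-tree, saw Left(y)?, saw Right(y)?)
        while queue:
            node, saw_left, saw_right = queue.popleft()
            if node is None:
                continue
            key, left, right = node
            saw_left = saw_left or key in left_keys
            saw_right = saw_right or key in right_keys
            if saw_left and saw_right:
                return key                          # the transition point of y at this time
            queue.append((left, saw_left, saw_right))
            queue.append((right, saw_left, saw_right))
        return None                                 # no node qualifies yet

    # Example: T_i is the perfect BST over 1..7 and y = 6, so Left(6) = {5, 6}, Right(6) = {7}.
    T = (4, (2, (1, None, None), (3, None, None)),
            (6, (5, None, None), (7, None, None)))
    print(transition_point(T, {5, 6}, {7}))         # 7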

Lemma 1 — The transition point of a node y in P at a time i exists and is unique.[4]

Proof

Define ℓ to be the lowest common ancestor of all nodes in T_i that are in Left(y). Given any two nodes a ≤ b in a BST, their lowest common ancestor, denoted by c, satisfies a ≤ c ≤ b. Consequently, since Left(y) is a contiguous range of keys, ℓ is in Left(y), and ℓ is the unique node of minimum depth in Left(y). The same reasoning can be applied to r, the lowest common ancestor of all nodes in T_i that are in Right(y). In addition, the lowest common ancestor of all the points in Left(y) and Right(y) together is also in one of these two sets. Therefore, the unique minimum-depth node must be among the nodes of Left(y) and Right(y); more precisely, it is either ℓ or r. Suppose it is ℓ. Then ℓ is an ancestor of r. Consequently, r is a transition point, since the path from the root to r contains ℓ. Moreover, any path in T_i from the root to a node in the sub-tree of r must visit r, because r is an ancestor of all such nodes, and any path to a node in Right(y) must visit r, because r is the lowest common ancestor of all the nodes in Right(y). Hence every node whose root path meets both Left(y) and Right(y) lies in the sub-tree of r, and is therefore at least as deep as r. To conclude, r is the unique transition point for y in T_i.
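
The characterisation in this proof can be checked numerically. In the sketch below (again with illustrative, assumed names; the example tree T is the same as in the previous sketch), lca_of_keys finds the lowest common ancestor in a BST of the nodes whose keys lie in a given set, and the transition point of y is recovered as the deeper of ℓ and r.

    # A BST node is again a tuple (key, left, right); None denotes an empty sub-tree.
    def lca_of_keys(root, keys):
        """LCA, in a BST, of the nodes whose keys are in `keys` (all assumed present)."""
        lo, hi = min(keys), max(keys)
        node = root
        while node is not None:
            key, left, right = node
            if key > hi:
                node = left                 # every key of interest is to the left
            elif key < lo:
                node = right                # every key of interest is to the right
            else:
                return key                  # first node inside [lo, hi] is the LCA
        return None

    def depth_of(root, key):
        """Depth of `key` in the BST (the root has depth 0)."""
        depth, node = 0, root
        while node is not None:
            k, left, right = node
            if key == k:
                return depth
            node = left if key < k else right
            depth += 1
        return None

    T = (4, (2, (1, None, None), (3, None, None)),
            (6, (5, None, None), (7, None, None)))
    l = lca_of_keys(T, {5, 6})              # ℓ = 6, the LCA of Left(6)
    r = lca_of_keys(T, {7})                 # r = 7, the LCA of Right(6)
    print(l if depth_of(T, l) > depth_of(T, r) else r)   # 7, the transition point of 6

Here ℓ = 6 is an ancestor of r = 7, so the transition point is r, in agreement with the output of the previous sketch.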

The second lemma that we need to prove states that the transition point is stable. It will not change until it is touched.

Lemma 2 — Let y be a node in P, and let z be the transition point of y at a time j. If an access algorithm for a BST does not touch z in T_i for i in [j, k], then the transition point of y remains z in T_i for i in [j, k].[4]

Proof

Consider the same definitions of ℓ and r as in Lemma 1. Without loss of generality, suppose also that ℓ is an ancestor of r in the BST at time j, denoted by T_j. As a result, r is the transition point of y. By hypothesis, the BST algorithm does not touch the transition point, in our case r, for the entirety of [j, k]. Therefore, it does not touch any node in Right(y). Consequently, r remains the lowest common ancestor of the nodes in Right(y). However, the access algorithm might touch a node in Left(y). More precisely, it might touch the lowest common ancestor of all nodes in Left(y) at a time i in [j, k], which we will denote by ℓ_i. Even so, ℓ_i will remain an ancestor of r, for the following reasons. Firstly, observe that any node of Left(y) that was outside the tree rooted at r at time j cannot enter this tree at a time i in [j, k], since r is not touched in this time frame. Secondly, there exists at least one node ℓ' of Left(y) outside the tree rooted at r at any time i in [j, k]. This is because ℓ was initially outside r's sub-tree, and no node from outside that tree can enter it in this time frame. Now, consider c, the lowest common ancestor of ℓ' and r at time i. c cannot be r, since ℓ' is not in the sub-tree of r. So c must be in Left(y), since ℓ' ≤ c ≤ r. Consequently, c is an ancestor of both ℓ' and r, and in particular an ancestor of r at time i that belongs to Left(y). Therefore, there always exists a node of Left(y) on the path from the root to r, and as such r remains the transition point.

The last lemma needed for the proof states that the transition points of distinct nodes are distinct.

Lemma 3 — Given a BST T_i at time i, any node in T_i can be the transition point of at most one node in P.[4]

Proof

Let y_1 and y_2 be two distinct nodes in P. Let ℓ_j and r_j be the lowest common ancestors of Left(y_j) and Right(y_j) respectively, for j = 1, 2. From Lemma 1, we know that the transition point of y_j is either ℓ_j or r_j. Now we have two main cases to consider.

Case 1: There is no ancestral relation between y_1 and y_2 in P. Consequently, the sets Left(y_1), Right(y_1), Left(y_2) and Right(y_2) are pairwise disjoint. Thus {ℓ_1, r_1} and {ℓ_2, r_2} are disjoint, and the transition points are different.

Case 2: Suppose without loss of generality that y_1 is an ancestor of y_2 in P.

Case 2.1: Suppose that the transition point of y_1 is not in the tree rooted at y_2 in P. Thus, it is different from ℓ_2 and r_2, and consequently different from the transition point of y_2.

Case 2.2: The transition point of y_1 is in the tree rooted at y_2 in P. More precisely, it is one of the lowest common ancestors of Left(y_2) and Right(y_2); in other words, it is either ℓ_2 or r_2.

Suppose, without loss of generality, that y_2 lies in the left sub-tree of y_1 in P, so that r_1 is the lowest common ancestor of the half of the sub-tree rooted at y_1 that does not contain y_2. We have ℓ_2 and r_2 deeper than r_1, because one of them is the transition point of y_1, i.e. the deeper of ℓ_1 and r_1; that transition point must then be ℓ_1, which is an ancestor of both ℓ_2 and r_2. Suppose that r_2 is the transition point of y_2; then ℓ_2 is less deep than r_2. In this case, ℓ_2 is the transition point of y_1 and r_2 is the transition point of y_2. Similar reasoning applies if ℓ_2 is deeper than r_2. In sum, the transition point of y_1 is the less deep of ℓ_2 and r_2, and y_2 has the deeper one as its transition point.

In conclusion, the transition points are different in all the cases.

Now we are ready to prove the theorem. First of all, observe that the number of transition points touched by the offline BST algorithm is a lower bound on its cost, since we are counting fewer nodes than are required for the total cost.

We know by Lemma 3 that, at any time i, any node in T_i can be the transition point of at most one node in P. Thus, it is enough to count, for every node y in P, the number of times the transition point of y is touched, and to sum over all y.

Therefore, fix a node y, and let ℓ and r be defined as in Lemma 1. The transition point of y is one of these two nodes; in fact, it is the deeper one. Let x_{i_1}, x_{i_2}, ..., x_{i_p} be a maximal ordered sub-sequence of accesses to nodes that alternate between Left(y) and Right(y), so that p is the amount of interleaving through the node y. Suppose that the even-indexed accesses are in Left(y) and the odd-indexed ones are in Right(y), i.e. x_{i_{2j}} is in Left(y) and x_{i_{2j+1}} is in Right(y). We know by the properties of lowest common ancestors that an access to a node in Left(y) must touch ℓ, and similarly an access to a node in Right(y) must touch r. Consider every j in [1, ⌊p/2⌋]. For the two consecutive accesses x_{i_{2j-1}} and x_{i_{2j}}, if the algorithm avoids touching the transition point of y, then ℓ and r must change in between. However, by Lemma 2, such a change requires touching the transition point. Consequently, the BST access algorithm touches the transition point of y at least once in the interval [i_{2j-1}, i_{2j}]. Since these intervals are disjoint, the algorithm touches the transition point of y at least ⌊p/2⌋ ≥ p/2 − 1 times. Summing over all y, the total number of touches is at least

      Σ_{y ∈ P} (f(y)/2 − 1) = IB(X)/2 − n

where f(y) is the amount of interleaving through y (the quantity denoted p above). By definition, the f(y)'s add up to IB(X). That concludes the proof.


References

  1. Wilber, R. (1989). "Lower Bounds for Accessing Binary Search Trees with Rotations". SIAM Journal on Computing. 18: 56–67. doi:10.1137/0218004.
  2. Hampapuram, H.; Fredman, M. L. (1998). "Optimal Biweighted Binary Trees and the Complexity of Maintaining Partial Sums". SIAM Journal on Computing. 28: 1–9. doi:10.1137/S0097539795291598.
  3. Patrascu, M.; Demaine, E. D. (2006). "Logarithmic Lower Bounds in the Cell-Probe Model". SIAM Journal on Computing. 35 (4): 932. arXiv:cs/0502041. doi:10.1137/S0097539705447256.
  4. Demaine, E. D.; Harmon, D.; Iacono, J.; Pătraşcu, M. (2007). "Dynamic Optimality—Almost". SIAM Journal on Computing. 37: 240–251. doi:10.1137/S0097539705447347.
  5. Demaine, Erik D.; Harmon, Dion; Iacono, John; Kane, Daniel; Pătraşcu, Mihai (2009). "The geometry of binary search trees". Proceedings of the 20th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2009). New York: 496–505. doi:10.1137/1.9781611973068.55. ISBN 978-0-89871-680-1.