# B+ tree

B+ tree
Type: Tree (data structure)

Time complexity in big O notation:

| Algorithm | Average | Worst case |
|---|---|---|
| Space | O(n) | O(n) |
| Search | O(log n) | O(log n + log L) |
| Insert | O(log n) | O(M·log n + log L) |
| Delete | O(log n) | O(M·log n + log L) |
A simple B+ tree example linking the keys 1–7 to data values d1-d7. The linked list (red) allows rapid in-order traversal. This particular tree's branching factor is ${\displaystyle b}$=4.

A B+ tree is an m-ary tree with a variable but often large number of children per node. A B+ tree consists of a root, internal nodes and leaves.[1] The root may be either a leaf or a node with two or more children.

A B+ tree can be viewed as a B-tree in which each node contains only keys (not key–value pairs), and to which an additional level is added at the bottom with linked leaves.

The primary value of a B+ tree is in storing data for efficient retrieval in a block-oriented storage context — in particular, filesystems. This is primarily because unlike binary search trees, B+ trees have very high fanout (number of pointers to child nodes in a node,[1] typically on the order of 100 or more), which reduces the number of I/O operations required to find an element in the tree.

The ReiserFS, NSS, XFS, JFS, ReFS, and BFS filesystems all use this type of tree for metadata indexing; BFS also uses B+ trees for storing directories. NTFS uses B+ trees for directory and security-related metadata indexing. EXT4 uses extent trees (a modified B+ tree data structure) for file extent indexing.[2] APFS uses B+ trees to store mappings from filesystem object IDs to their locations on disk, and to store filesystem records (including directories), though these trees' leaf nodes lack sibling pointers.[3] Relational database management systems such as IBM Db2,[4] Informix,[4] Microsoft SQL Server,[4] Oracle 8,[4] Sybase ASE,[4] and SQLite[5] support this type of tree for table indices. Key–value database management systems such as CouchDB[6] and Tokyo Cabinet[7] support this type of tree for data access.

## Overview

The order, or branching factor, b of a B+ tree measures the capacity of nodes (i.e., the number of child nodes) for internal nodes in the tree. The actual number of children for a node, referred to here as m, is constrained for internal nodes so that ${\displaystyle \lceil b/2\rceil \leq m\leq b}$. The root is an exception: it is allowed to have as few as two children.[1] For example, if the order of a B+ tree is 7, each internal node (other than the root) may have between 4 and 7 children, while the root may have between 2 and 7. Leaf nodes have no children, but are constrained so that the number of keys is at least ${\displaystyle \lceil b/2\rceil }$ and at most ${\displaystyle b}$. When a B+ tree is empty or contains only one node, the root is the single leaf; it is then permitted to have as few as zero keys and at most ${\displaystyle b-1}$.

| Node type | Type of children | Min number of children | Max number of children | Example ${\displaystyle b=7}$ | Example ${\displaystyle b=100}$ |
|---|---|---|---|---|---|
| Root node (when it is the only node in the tree) | Records | 0 | ${\displaystyle b-1}$ | 0–6 | 0–99 |
| Root node | Internal nodes or leaf nodes | 2 | ${\displaystyle b}$ | 2–7 | 2–100 |
| Internal node | Internal nodes or leaf nodes | ${\displaystyle \lceil b/2\rceil }$ | ${\displaystyle b}$ | 4–7 | 50–100 |
| Leaf node | Records | ${\displaystyle \lceil b/2\rceil }$ | ${\displaystyle b}$ | 4–7 | 50–100 |
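The occupancy bounds in the table above can be sketched as a small helper; the function and key names here are illustrative, not from any standard library:

```python
import math

def node_bounds(b: int) -> dict:
    """Return (min, max) occupancy for each node type of an order-b B+ tree,
    following the table above: child counts for the root and internal nodes,
    key/record counts for leaves and a root that is the only node."""
    half = math.ceil(b / 2)
    return {
        "root_only_keys": (0, b - 1),    # root when it is the only node
        "root_children": (2, b),         # root with children
        "internal_children": (half, b),  # internal (non-root) nodes
        "leaf_keys": (half, b),          # leaf nodes
    }
```

For example, `node_bounds(7)` reproduces the b = 7 column: internal nodes hold 4–7 children and leaves hold 4–7 keys.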

## Algorithms

### Search

The root of a B+ tree represents the whole range of values in the tree; every internal node represents a subinterval of its parent's range.

To find a value k in the B+ tree, we start from the root and descend to the leaf that may contain k. At each node, we determine which internal pointer to follow. An internal B+ tree node has at most ${\displaystyle b}$ children, each of which represents a different sub-interval; we select the appropriate child by searching on the key values of the node.

    function search(k) is
        return tree_search(k, root)

    function tree_search(k, node) is
        if node is a leaf then
            return node
        switch k do
            case k ≤ k_0
                return tree_search(k, p_0)
            case k_i < k ≤ k_{i+1}
                return tree_search(k, p_{i+1})
            case k_d < k
                return tree_search(k, p_d)

This pseudocode assumes that no duplicates are allowed.
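The pseudocode above can be sketched in Python; the `Node` representation is an assumption for illustration (leaves hold sorted keys, internal nodes hold separator keys plus one more child pointer than keys), and `bisect_left` implements the "≤" case analysis directly:

```python
import bisect

class Node:
    """Minimal B+ tree node: children is None for leaves; for internal
    nodes, len(children) == len(keys) + 1."""
    def __init__(self, keys, children=None):
        self.keys = keys          # sorted keys (separators if internal)
        self.children = children  # None for leaves

    @property
    def is_leaf(self):
        return self.children is None

def tree_search(k, node):
    """Descend to the leaf whose key range may contain k."""
    while not node.is_leaf:
        # index of the first separator >= k: this is exactly the child
        # chosen by the cases k <= k_0, k_i < k <= k_{i+1}, k_d < k above
        i = bisect.bisect_left(node.keys, k)
        node = node.children[i]
    return node
```

For example, with separators [3, 5] over leaves [1, 2, 3], [4, 5], [6, 7], searching for 3 follows p_0 and searching for 6 follows the rightmost pointer.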

### Prefix key compression

• Prefix key compression is important for increasing fanout, since higher fanout directs searches to the leaf level in fewer steps.
• Index entries serve only to 'direct traffic', so they can be compressed.

### Insertion

• Perform a search to determine which bucket the new record should go into.
• If the bucket is not full (at most ${\displaystyle b-1}$ entries after the insertion), add the record.
• Otherwise, before inserting the new record, split the bucket:
• the original node keeps ⎡(L+1)/2⎤ items;
• the new node receives ⎣(L+1)/2⎦ items;
• move the ⎡(L+1)/2⎤-th key to the parent, and insert a pointer to the new node into the parent;
• repeat until a parent is found that need not split.
• If the root splits, treat it as if it has an empty parent and split as outlined above.
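The split arithmetic in the steps above can be sketched as follows (the function name is illustrative; note that in a real B+ tree the separator is copied up for a leaf split but moved up for an internal-node split):

```python
import bisect
import math

def split_bucket(keys, new_key):
    """Insert new_key into a sorted, full bucket of L keys and split it:
    the left node gets ceil((L+1)/2) items, the right node gets
    floor((L+1)/2), and the ceil((L+1)/2)-th key is posted to the parent."""
    keys = list(keys)
    bisect.insort(keys, new_key)          # now L+1 keys, still sorted
    mid = math.ceil(len(keys) / 2)        # ceil((L+1)/2)
    left, right = keys[:mid], keys[mid:]  # right gets floor((L+1)/2)
    separator = left[-1]                  # the ceil((L+1)/2)-th key
    return left, right, separator
```

For example, inserting 5 into a full bucket [1, 2, 3, 4] yields nodes [1, 2, 3] and [4, 5] with separator 3.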

B-trees grow at the root and not at the leaves.[1]

Given a collection of data records, we want to create a B+ tree index on some key field. One approach is to insert each record into an empty tree. However, this is quite expensive, because each insertion requires starting from the root and descending to the appropriate leaf page. An efficient alternative is bulk-loading.

• The first step is to sort the data entries according to a search key in ascending order.
• We allocate an empty page to serve as the root, and insert a pointer to the first page of entries into it.
• When the root is full, we split the root, and create a new root page.
• Keep inserting entries into the right-most index page just above the leaf level until all entries are indexed.

Note:

• when the right-most index page above the leaf level fills up, it is split;
• this action may, in turn, cause a split of the right-most index page one step closer to the root;
• splits only occur on the right-most path from the root to the leaf level.[8]
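The first phase of bulk-loading, packing sorted entries into leaf pages, can be sketched as below; this is an illustration under assumed names, not a full implementation (the separators returned are the index entries that would be posted to the level above):

```python
def build_leaf_level(entries, cap):
    """Sort the entries and pack them left to right into leaf pages of
    capacity cap; return the pages and, for each page after the first,
    the separator key that would be posted to the index level above."""
    entries = sorted(entries)
    leaves = [entries[i:i + cap] for i in range(0, len(entries), cap)]
    # the first key of each leaf after the first becomes an index entry
    separators = [leaf[0] for leaf in leaves[1:]]
    return leaves, separators
```

For example, bulk-loading the keys 1–6 with leaf capacity 2 produces leaves [1, 2], [3, 4], [5, 6] and separators 3 and 5.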

### Deletion

The basic idea of the delete algorithm is to remove the desired entry from the tree structure. The algorithm is applied recursively to the appropriate nodes until the entry is found: at each call, we traverse down, using the index to navigate to the node, remove the entry, and then work back up to the root.

At the leaf node L from which we wish to remove an entry:

- If L is at least half-full after the removal, we are done.

- If L is left with fewer than the minimum number of entries, try to re-distribute by borrowing from a sibling (an adjacent node with the same parent as L).

After re-distribution between two sibling nodes, the parent node must be updated to reflect the change: the index key that points to the second sibling must be set to the smallest key of that node.

- If re-distribution fails, merge L with its sibling. After merging, the parent node is updated by deleting the index key that pointed to the deleted node; in other words, if a merge occurred, the entry pointing to L or its sibling must be deleted from the parent of L.

Note: a merge can propagate to the root, which decreases the height of the tree.[9]


## Characteristics

For a b-order B+ tree with h levels of index:[citation needed]

• The maximum number of records stored is ${\displaystyle n_{\max }=b^{h}-b^{h-1}}$
• The minimum number of records stored is ${\displaystyle n_{\min }=2\left\lceil {\tfrac {b}{2}}\right\rceil ^{h-1}-2\left\lceil {\tfrac {b}{2}}\right\rceil ^{h-2}}$
• The minimum number of keys is ${\displaystyle n_{\mathrm {kmin} }=2\left\lceil {\tfrac {b}{2}}\right\rceil ^{h-1}-1}$
• The maximum number of keys is ${\displaystyle n_{\mathrm {kmax} }=b^{h}-1}$
• The space required to store the tree is ${\displaystyle O(n)}$
• Inserting a record requires ${\displaystyle O(\log _{b}n)}$ operations
• Finding a record requires ${\displaystyle O(\log _{b}n)}$ operations
• Removing a (previously located) record requires ${\displaystyle O(\log _{b}n)}$ operations
• Performing a range query with k elements occurring within the range requires ${\displaystyle O(\log _{b}n+k)}$ operations
• The B+ tree structure expands and contracts as records are inserted and deleted. There is no restriction on the size of a B+ tree, which increases the usability of a database system.
• Because the tree remains balanced, changes in structure do not degrade performance.[10]
• The data is stored only in the leaf nodes, and the high branching factor of internal nodes reduces the tree's height and hence the search time. As a result, B+ trees work well on secondary storage devices.[11]
• Searching is simple because all records are stored in the leaf nodes, sorted sequentially in the linked list.
• Range and partial-match retrieval is easy and fast by traversing the tree structure, which is why the B+ tree is applied in many search methods.[10]
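The O(log_b n + k) range query can be sketched using the linked leaf level; the `Leaf` class here is an assumed minimal representation, and the starting leaf is taken to be the one found by a prior root-to-leaf search for the lower bound:

```python
import bisect

class Leaf:
    """Minimal linked leaf: sorted keys plus a pointer to the next leaf."""
    def __init__(self, keys, next_leaf=None):
        self.keys = keys
        self.next = next_leaf

def range_query(leaf, lo, hi):
    """Collect all keys in [lo, hi], starting from the leaf containing lo
    and following the linked list of leaves (the red links in the figure)."""
    out = []
    while leaf is not None:
        start = bisect.bisect_left(leaf.keys, lo)  # skip keys below lo
        for k in leaf.keys[start:]:
            if k > hi:
                return out  # past the upper bound: stop early
            out.append(k)
        leaf = leaf.next  # sibling pointer: no need to re-descend
    return out
```

Only the first leaf requires a tree descent; the remaining k results come from sequential sibling-pointer traversal, which is the point of linking the leaves.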

## Implementation

The leaves (the bottom-most index blocks) of the B+ tree are often linked to one another in a linked list; this makes range queries or an (ordered) iteration through the blocks simpler and more efficient (though the aforementioned upper bound can be achieved even without this addition). This does not substantially increase space consumption or maintenance on the tree. This illustrates one of the significant advantages of a B+ tree over a B-tree; in a B-tree, since not all keys are present in the leaves, such an ordered linked list cannot be constructed. A B+ tree is thus particularly useful as a database system index, where the data typically resides on disk, as it allows the B+ tree to actually provide an efficient structure for housing the data itself (this is described in[4]: 238  as index structure "Alternative 1").

If a storage system has a block size of B bytes, and the keys to be stored have a size of k, arguably the most efficient B+ tree is one where ${\displaystyle b={\tfrac {B}{k}}-1}$. Although the subtraction of one is theoretically unnecessary, in practice the index blocks often need a little extra space (for example, for the linked-list references in the leaf blocks). An index block slightly larger than the storage system's actual block causes a significant performance decrease, so erring on the side of caution is preferable.

If nodes of the B+ tree are organized as arrays of elements, then it may take a considerable time to insert or delete an element as half of the array will need to be shifted on average. To overcome this problem, elements inside a node can be organized in a binary tree or a B+ tree instead of an array.

B+ trees can also be used for data stored in RAM. In this case, a reasonable choice for the block size would be the size of the processor's cache line.

Space efficiency of B+ trees can be improved by compression techniques. One possibility is to use delta encoding to compress the keys stored in each block. For internal blocks, space can be saved by compressing either keys or pointers. For string keys, space can be saved with the following technique: normally the i-th entry of an internal block contains the first key of block ${\displaystyle i+1}$. Instead of storing the full key, we can store the shortest prefix of the first key of block ${\displaystyle i+1}$ that is strictly greater (in lexicographic order) than the last key of block i. There is also a simple way to compress pointers: if some consecutive blocks ${\displaystyle i,i+1,\ldots ,i+k}$ are stored contiguously, it suffices to store only a pointer to the first block and the count of consecutive blocks.
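The shortest-separator idea for string keys can be sketched directly (the function name is illustrative):

```python
def shortest_separator(last_key_left: str, first_key_right: str) -> str:
    """Shortest prefix of first_key_right that still sorts strictly after
    last_key_left; assumes last_key_left < first_key_right, as holds for
    the last key of block i and the first key of block i+1."""
    for n in range(1, len(first_key_right) + 1):
        prefix = first_key_right[:n]
        if prefix > last_key_left:  # lexicographic comparison
            return prefix
    return first_key_right
```

For example, with "carrot" as the last key of block i and "cherry" as the first key of block i + 1, the stored separator shrinks to "ch", which still routes every key correctly between the two blocks.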

All the above compression techniques have drawbacks. First, a full block must be decompressed to extract a single element. One way to overcome this is to divide each block into sub-blocks and compress them separately; then searching or inserting an element only requires decompressing or compressing a sub-block instead of a full block. Another drawback is that the number of stored elements may vary considerably from one block to another, depending on how well the elements in each block compress.

## Applications

Finding objects in a high-dimensional database that are similar to a particular query object is one of the most frequently used, and yet expensive, operations in such systems. In this situation, finding the nearest neighbor using a B+ tree is productive.[12]

### iDistance

The B+ tree is efficiently used to construct an indexed search method called iDistance. iDistance searches for k nearest neighbors (kNN) in high-dimensional metric spaces. The data in those high-dimensional spaces is divided according to a space- or data-partitioning strategy, and each point receives an index value computed with respect to its partition's reference point. These values can then be indexed efficiently with a B+ tree, so kNN queries are mapped to one-dimensional range searches. In other words, the iDistance technique can be viewed as a way of accelerating the sequential scan: instead of scanning records from the beginning to the end of the data file, iDistance starts the scan from spots where the nearest neighbors can be obtained early with a very high probability.[13]
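The iDistance key mapping can be sketched as below; this is a simplified illustration under assumptions (Euclidean distance, a constant C chosen larger than any possible distance so that partitions occupy disjoint key ranges), not the full algorithm from the paper:

```python
import math

def idistance_key(point, reference_points, C=1000.0):
    """Map a multi-dimensional point to a one-dimensional B+ tree key:
    assign the point to its nearest reference point, then offset its
    distance by partition_id * C so partitions never overlap."""
    dists = [math.dist(point, r) for r in reference_points]
    pid = min(range(len(dists)), key=lambda i: dists[i])
    return pid * C + dists[pid]
```

A kNN query around a point then probes, for each partition, a small one-dimensional key range around that partition's mapped value, enlarging the search radius until k neighbors are confirmed.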

### Nonvolatile random-access memory (NVRAM) system

Nonvolatile random-access memory (NVRAM) systems have been using the B+ tree structure as the main memory access technique for Internet of Things (IoT) systems because of NVRAM's low static power consumption and high density of memory cells. The B+ tree can regulate the flow of data to memory efficiently. Moreover, with advanced strategies based on the access frequencies of highly used leaves or reference points, the B+ tree shows significant results in increasing the endurance of database systems.[14]

## History

The B-tree was first described in the paper "Organization and Maintenance of Large Ordered Indices" (Acta Informatica 1: 173–189, 1972) by Rudolf Bayer and Edward M. McCreight. There is no single paper introducing the B+ tree concept; instead, the notion of maintaining all data in leaf nodes is repeatedly brought up as an interesting variant. An early survey of B-trees, which also covers B+ trees, is by Douglas Comer.[15] Comer notes that the B+ tree was used in IBM's VSAM data access software, and he refers to an IBM article published in 1973.

## References

1. ^ a b c d Elmasri, Ramez; Navathe, Shamkant B. (2010). Fundamentals of Database Systems (6th ed.). Upper Saddle River, N.J.: Pearson Education. pp. 652–660. ISBN 9780136086208.
2. ^ Giampaolo, Dominic (1999). Practical File System Design with the Be File System (PDF). Morgan Kaufmann. ISBN 1-55860-497-9. Archived from the original (PDF) on 2017-02-13. Retrieved 2014-07-29.
3. ^ "B-Trees". Apple File System Reference (PDF). Apple Inc. 2020-06-22. p. 122. Retrieved 2021-03-10.
4. ^ Ramakrishnan, Raghu; Gehrke, Johannes (2000). Database Management Systems (2nd ed.). McGraw-Hill Higher Education. p. 267.
5. ^ SQLite Version 3 Overview
6. ^ CouchDB Guide (see note after 3rd paragraph)
7. ^ Tokyo Cabinet reference Archived September 12, 2009, at the Wayback Machine
8. ^ "ECS 165B: Database System Implementation Lecture 6" (PDF). UC Davis CS department. April 9, 2010. pp. 21–23.
9. ^ Ramakrishnan, Raghu; Johannes Gehrke (2003). Database management systems (3rd ed.). Boston: McGraw-Hill. ISBN 0-07-246563-8. OCLC 49977005.
10. ^ a b "Database Systems for Advanced Applications". Scalable Splitting of Massive Data Streams.
11. ^ "Update Migration: An Efficient B+ Tree for Flash Storage". Database Systems for Advanced Applications. Lecture Notes in Computer Science, vol. 5982. Springer, Berlin, Heidelberg.
12. ^ Database Systems for Advanced Applications. Japan. 2010.
13. ^ Jagadish, H. V.; Ooi, Beng Chin; Tan, Kian-Lee; Yu, Cui; Zhang, Rui (June 2005). "iDistance: An adaptive B + -tree based indexing method for nearest neighbor search". ACM Transactions on Database Systems. 30 (2): 364–397. doi:10.1145/1071610.1071612. ISSN 0362-5915. S2CID 967678.
14. ^ Dharamjeet; Chen, Tseng-Yi; Chang, Yuan-Hao; Wu, Chun-Feng; Lee, Chi-Heng; Shih, Wei-Kuan (December 2021). "Beyond Write-Reduction Consideration: A Wear-Leveling-Enabled B⁺-Tree Indexing Scheme Over an NVRAM-Based Architecture". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 40 (12): 2455–2466. doi:10.1109/TCAD.2021.3049677. ISSN 0278-0070. S2CID 234157183.
15. ^ "The Ubiquitous B-Tree", ACM Computing Surveys 11(2): 121–137 (1979).