K-d tree: Difference between revisions

From Wikipedia, the free encyclopedia
Revision as of 06:17, 3 March 2012

k-d tree
Type: Multidimensional BST
Invented: 1975
Invented by: Jon Louis Bentley

Time complexity in big O notation
Operation   Average     Worst case
Search      O(log n)    O(n)
Insert      O(log n)    O(n)
Delete      O(log n)    O(n)

Space complexity
Space       O(n)        O(n)
A 3-dimensional k-d tree. The first split (red) cuts the root cell (white) into two subcells, each of which is then split (green) into two subcells. Finally, each of those four is split (blue) into two subcells. Since there is no more splitting, the final eight are called leaf cells.

In computer science, a k-d tree (short for k-dimensional tree) is a space-partitioning data structure for organizing points in a k-dimensional space. k-d trees are a useful data structure for several applications, such as searches involving a multidimensional search key (e.g. range searches and nearest neighbor searches). k-d trees are a special case of binary space partitioning trees.

Informal description

The k-d tree is a binary tree in which every node is a k-dimensional point. Every non-leaf node can be thought of as implicitly generating a splitting hyperplane that divides the space into two parts, known as half-spaces. Points to the left of this hyperplane represent the left subtree of that node and points right of the hyperplane are represented by the right subtree. The hyperplane direction is chosen in the following way: every node in the tree is associated with one of the k-dimensions, with the hyperplane perpendicular to that dimension's axis. So, for example, if for a particular split the "x" axis is chosen, all points in the subtree with a smaller "x" value than the node will appear in the left subtree and all points with larger "x" value will be in the right subtree. In such a case, the hyperplane would be set by the x-value of the point, and its normal would be the unit x-axis.[1]
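For example, the left-or-right decision at a node reduces to a single coordinate comparison. A minimal sketch (the function name `side` is illustrative, not from the text):

```python
def side(point, node_point, axis):
    """Which half-space 'point' falls in, relative to the splitting
    hyperplane through 'node_point' perpendicular to the given axis."""
    return 'left' if point[axis] < node_point[axis] else 'right'

# With an x-aligned split through (3, 1), the point (2, 5) lies left:
side((2, 5), (3, 1), 0)  # → 'left'
```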

Operations on k-d trees

Construction

Since there are many possible ways to choose axis-aligned splitting planes, there are many different ways to construct k-d trees. The canonical method of k-d tree construction has the following constraints:

  • As one moves down the tree, one cycles through the axes used to select the splitting planes. (For example, in a 3-dimensional tree, the root would have an x-aligned plane, the root's children would both have y-aligned planes, the root's grandchildren would all have z-aligned planes, the root's great-grandchildren would all have x-aligned planes, the root's great-great-grandchildren would all have y-aligned planes, and so on.)
  • Points are inserted by selecting the median of the points being put into the subtree, with respect to their coordinates in the axis being used to create the splitting plane. (Note the assumption that we feed the entire set of points into the algorithm up-front.)

This method leads to a balanced k-d tree, in which each leaf node is about the same distance from the root. However, balanced trees are not necessarily optimal for all applications.

Note also that it is not required to select the median point. In that case, there is simply no guarantee that the tree will be balanced. A simple heuristic that avoids both coding a complex linear-time median-finding algorithm and paying for an O(n log n) sort of all the points at every level is to sort a fixed number of randomly selected points and use the median of that sample as the splitting plane. In practice, this technique often results in nicely balanced trees.
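A sketch of this sampling heuristic (the sample size of five is an arbitrary illustrative choice):

```python
import random

def approximate_median(points, axis, sample_size=5):
    """Estimate the median along 'axis' by sorting a small random
    sample and taking its middle element, instead of running a
    median-finding algorithm over all of the points."""
    sample = random.sample(points, min(sample_size, len(points)))
    sample.sort(key=lambda p: p[axis])
    return sample[len(sample) // 2]
```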


Given a list of n points, the following algorithm uses a median-finding sort to construct a balanced k-d tree containing those points.

function kdtree (list of points pointList, int depth)
{
    // Select axis based on depth so that axis cycles through all valid values
    var int axis := depth mod k;
        
    // Sort point list and choose median as pivot element
    select median by axis from pointList;
        
    // Create node and construct subtrees
    var tree_node node;
    node.location := median;
    node.leftChild := kdtree(points in pointList before median, depth+1);
    node.rightChild := kdtree(points in pointList after median, depth+1);
    return node;
}

It is common that points "after" the median include only the ones that are strictly greater than the median. For points that lie on the median, it is possible to define a "superkey" function that compares the points in all dimensions. In some cases, it is acceptable to let points equal to the median lie on one side of the median, as for example by splitting the points into a "less than" subset and a "greater than or equal to" subset.
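One possible superkey, sketched here, compares the splitting coordinate first and then cycles through the remaining dimensions to break ties, so only identical points compare as equal:

```python
def superkey(point, axis):
    """Total ordering key for a point: the splitting coordinate first,
    then the remaining coordinates in cyclic order as tie-breakers."""
    k = len(point)
    return tuple(point[(axis + i) % k] for i in range(k))

# Sorting on dimension 1 with tie-breaking:
# sorted(points, key=lambda p: superkey(p, 1))
```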


The above algorithm implemented in the Python programming language is as follows:

class Node: pass

def kdtree(point_list, depth=0):
    if not point_list:
        return None

    # Select axis based on depth so that axis cycles through all valid values
    k = len(point_list[0]) # assumes all points have the same dimension
    axis = depth % k

    # Sort point list and choose median as pivot element
    point_list.sort(key=lambda point: point[axis])
    median = len(point_list) // 2 # choose median

    # Create node and construct subtrees
    node = Node()
    node.location = point_list[median]
    node.left_child = kdtree(point_list[:median], depth + 1)
    node.right_child = kdtree(point_list[median + 1:], depth + 1)
    return node
The resulting k-d tree decomposition.
The resulting k-d tree.

Example usage would be:

point_list = [(2,3), (5,4), (9,6), (4,7), (8,1), (7,2)]
tree = kdtree(point_list)

The tree generated is shown on the right.

This algorithm creates the invariant that for any node, all the nodes in the left subtree are on one side of a splitting plane, and all the nodes in the right subtree are on the other side. Points that lie on the splitting plane may appear on either side. The splitting plane of a node goes through the point associated with that node (referred to in the code as node.location).
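This invariant can be checked directly. The sketch below assumes the Node layout from the Python example (location, left_child, right_child); the helper make_node is illustrative:

```python
class Node: pass

def make_node(location, left=None, right=None):
    node = Node()
    node.location, node.left_child, node.right_child = location, left, right
    return node

def subtree_points(node):
    """All points stored in the subtree rooted at 'node'."""
    if node is None:
        return []
    return ([node.location] + subtree_points(node.left_child)
            + subtree_points(node.right_child))

def check_invariant(node, depth=0, k=2):
    """Every left-subtree point must lie on or left of the node's
    splitting plane, and every right-subtree point on or right of it."""
    if node is None:
        return True
    axis = depth % k
    return (all(p[axis] <= node.location[axis] for p in subtree_points(node.left_child))
            and all(p[axis] >= node.location[axis] for p in subtree_points(node.right_child))
            and check_invariant(node.left_child, depth + 1, k)
            and check_invariant(node.right_child, depth + 1, k))
```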


A novel algorithm builds a balanced k-d tree in O[(k-1)n log n] time, without requiring a complex linear-time median-finding algorithm. This novel algorithm sorts the points in each of the k dimensions prior to building the k-d tree.[2][3] The results of these k a priori sorts can be represented using k arrays of indices that reference the point coordinates. Subsequently, when each level of the tree is built, a splitting plane is obtained from the median of one of these index arrays, then each of the other index arrays is processed in a priori sorted order and subdivided into two sub-arrays by comparing the corresponding point coordinates against the splitting plane. This approach preserves the a priori sorted order in both sub-arrays for each of the k dimensions, and hence requires no sorting during the actual building of the k-d tree. Because O[(k-1)n] comparisons are performed at each of the log n levels of the tree, the complexity for building the k-d tree is O[(k-1)n log n]. If the a priori sorting is done using heapsort, that sorting will have O(kn log n) complexity. An implementation of this algorithm in the C programming language is as follows:

#include <stdio.h>
#include <stdlib.h>

typedef struct kdNode
{
  int n;
  struct kdNode *left, *right;
} KDNODE_T;

/* This function allocates and initializes a kdNode structure. */
KDNODE_T *newKDnode(int i)
{
  KDNODE_T *kdPtr = (KDNODE_T *)malloc(sizeof(KDNODE_T));
  kdPtr->n = i;
  kdPtr->left = kdPtr->right = NULL;
  return kdPtr;
}

/*
 * This function builds a k-d tree by recursively subdividing
 * the index arrays and adding nodes to the tree.  These arrays
 * are permuted cyclically for successive levels of the tree in
 * order that sorting occur on x, y, z...
 *
 * Calling parameters are as follows:
 *
 * index - index arrays sorted by x, y, z...
 * t - temporary index array
 * first - first element of the index arrays
 * last - last element of the index arrays
 * xyz - array containing x, y, z... coordinates
 * dim - the number of dimensions to sort
 * depth - the depth in the tree
 */
KDNODE_T *buildKDtree(int **index, int t[], int first, int last,
		      int **xyz, int dim, int depth)
{
  KDNODE_T *kdPtr;

  /* The 'axis' permutes as x, y, z... and is used to address 'xyz'. */
  int axis = depth % dim;

  /* If only one element was passed to this function, add it to the tree. */
  if (last == first) {
    kdPtr = newKDnode(index[0][last]);
  }

  /*
   * Otherwise, if two elements were passed to this function, determine
   * which element is the left child and which is the right child.
   * If the two children have equal coordinates, arbitrarily pick the
   * 'last' element as the >= child, in order to split as < and >=.
   */
  else if (last == first + 1) {
    if (xyz[index[0][first]][axis] < xyz[index[0][last]][axis]) {
      kdPtr = newKDnode(index[0][last]);
      kdPtr->left = newKDnode(index[0][first]);
    }
    else if (xyz[index[0][first]][axis] > xyz[index[0][last]][axis]) {
      kdPtr = newKDnode(index[0][first]);
      kdPtr->left = newKDnode(index[0][last]);
    }
    else {
      kdPtr = newKDnode(index[0][first]);
      kdPtr->right = newKDnode(index[0][last]);
    }
  }

  /* Otherwise, more than two elements were passed to this function. */
  else {

    /*
     * The median element of index[0] is taken as the element about
     * which the other index arrays will be subdivided.  But search
     * the lower elements of index[0] to ensure that no elements of
     * this array below the median element have coordinates equal to
     * that of the median element.  This approach splits as < and >=.
     */
    int median = (first + last) / 2;
    int split = xyz[index[0][median]][axis];
    while (median > first && xyz[index[0][median-1]][axis] == split) {
      median--;
    }

    /* Store the median element of index[0] in a new kdNode. */
    kdPtr = newKDnode(index[0][median]);

    /* Copy index[0] to the temporary index array. */
    for (int i = first; i <= last; i++) {
      t[i] = index[0][i];
    }

    /*
     * Process each of the other index arrays in ascending order
     * and subdivide it by comparing coordinates.  Copy the result
     * from index[i] to index[i-1], thus permuting the index arrays.
     * Skip the element of index[i] that equals the median.
     */
    int ltIndex = first - 1;
    int geIndex = median;
    for (int i = 1; i < dim; i++) {
      ltIndex = first - 1;
      geIndex = median;
      for (int j = first; j <= last; j++) {
	if (index[i][j] != kdPtr->n) {
	  if (xyz[index[i][j]][axis] < split) {
	    index[i-1][++ltIndex] = index[i][j];
	  } else {
	    index[i-1][++geIndex] = index[i][j];
	  }
	}
      }
    }

    /* Copy the temporary array to index[dim-1] to finish permutation. */
    for (int i = first; i <= last; i++) {
      index[dim-1][i] = t[i];
    }

    /*
     * Recursively build the left branch of the tree if the ltIndex group
     * of the index arrays is not empty.
     */
    if (ltIndex >= first) {
      kdPtr->left = buildKDtree(index, t, first, ltIndex, xyz, dim, depth+1);
    }

    /*
     * Recursively build the right branch of the tree if the geIndex group
     * of the index arrays is not empty.
     */
    if (geIndex > median) {
      kdPtr->right = buildKDtree(index, t, median+1, geIndex, xyz, dim, depth+1);
    }
  }
  return kdPtr;
}

/*
 * Heapsort functions from p. 77 of "C, a Reference Manual", 4th Edition
 * by Harbison & Steele, modified to sort the elements of the index array
 * via comparison of the corresponding elements of the coordinate array.
 */
#define SWAP(x, y) (temp = (x), (x) = (y), (y) = temp)

/*
 * If xyz[v[m+1]][axis] through xyz[v[n]][axis] are already in heap
 * form, put xyz[v[m]][axis] through xyz[v[n]][axis] into heap form.
 */
void adjust(int v[], int m, int n, int **xyz, int axis)
{
  int *b, j, k, temp;
  b = v - 1;    /* b is "1-origin", i.e., v[j] is the same as b[j-1] */
  j = m;
  k = m + m;
  while (k <= n) {
    if ( (k < n) && (xyz[b[k]][axis] < xyz[b[k+1]][axis]) ) ++k;
    if (xyz[b[j]][axis] < xyz[b[k]][axis]) SWAP(b[j], b[k]);
    j = k;
    k = k + k;
  }
}

/* Heapsort xyz[v[0]][axis]...xyz[v[n-1]][axis] into ascending order. */
void heapsort(int v[], int n, int **xyz, int axis)
{
  int *b, j, temp;
  b = v - 1;    /* b is "1-origin", i.e., v[j] is the same as b[j-1] */
  
  /* Put the array into the form of a heap. */
  for (j = n/2; j > 0; j--) adjust(v, j, n, xyz, axis);

  /*
   * Repeatedly extract the largest element and
   * put it at the end of the unsorted region.
   */
  for (j = n-1; j > 0; j--) {
    SWAP(b[1], b[j+1]);
    adjust(v, 1, j, xyz, axis);
  }
}

/* Print the k-d tree "sideways" with the root at the left. */
void printKDtree(KDNODE_T *kdPtr, int **xyz, int dim, int depth)
{
  if (kdPtr) {
    printKDtree(kdPtr->right, xyz, dim, depth+1);
    for (int i=0; i<depth; i++) printf("         ");
    printf("(%d, ", xyz[kdPtr->n][0]);
    for (int i=1; i<dim-1; i++) printf("%d, ", xyz[kdPtr->n][i]);
    printf("%d)\n", xyz[kdPtr->n][dim-1]);
    printKDtree(kdPtr->left, xyz, dim, depth+1);
  }
}

#define DIM (3)
#define PNT (15)

/* Create a simple k-d tree and print its topology for inspection. */
int main(int argc, char **argv) {

  /* Declare the 2D array 'points' that contains (x,y,z) coordinates. */
  int points[PNT][DIM] = { {2,3,3}, {5,4,2}, {9,6,7}, {4,7,9}, {8,1,5},
			   {7,2,6}, {9,4,1}, {8,4,2}, {9,7,8}, {6,3,1},
			   {3,4,5}, {1,6,8}, {9,5,3}, {2,1,3}, {8,7,6} };

  /* Copy the 2D array 'points' to the 'xyz' array of arrays. */
  int **xyz = (int **)malloc(PNT*sizeof(int *));
  for (int i=0; i<PNT; i++) {
    xyz[i] = (int *)malloc(DIM*sizeof(int));
    for (int j=0; j<DIM; j++) {
      xyz[i][j] = points[i][j];
    }
  }

  /* Initialize and heapsort the 'index' array of arrays. */
  int **index = (int **)malloc(DIM*sizeof(int *));
  for (int i=0; i<DIM; i++) {
    index[i] = (int *)malloc(PNT*sizeof(int));
    for (int j=0; j<PNT; j++) {
      index[i][j] = j;
    }
    heapsort(index[i], PNT, xyz, i);
  }

  /* Declare a temporary array and build the k-d tree. */
  int t[PNT];
  KDNODE_T *root = buildKDtree(index, t, 0, PNT-1, xyz, DIM, 0);

  /* Print the k-d tree "sideways" with the root at the left. */
  printKDtree(root, xyz, DIM, 0);
  return 0;
}

Adding elements

One adds a new point to a k-d tree in the same way as one adds an element to any other search tree. First, traverse the tree, starting from the root and moving to either the left or the right child depending on whether the point to be inserted is on the "left" or "right" side of the splitting plane. Once the traversal reaches a leaf node, add the new point as either the left or right child of that leaf, again depending on which side of the leaf's splitting plane contains the new point.

Adding points in this manner can cause the tree to become unbalanced, leading to decreased tree performance. The rate of tree performance degradation is dependent upon the spatial distribution of tree points being added, and the number of points added in relation to the tree size. If a tree becomes too unbalanced, it may need to be re-balanced to restore the performance of queries that rely on the tree balancing, such as nearest neighbour searching.
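The insertion procedure above can be sketched as follows, assuming the Node layout from the Python construction example and sending points equal to the splitting coordinate to the right ("greater than or equal") side:

```python
class Node: pass

def insert(node, point, depth=0):
    """Descend by the splitting coordinate at each level and attach
    the new point as a leaf; equal coordinates go right."""
    if node is None:
        node = Node()
        node.location = point
        node.left_child = node.right_child = None
        return node
    axis = depth % len(point)
    if point[axis] < node.location[axis]:
        node.left_child = insert(node.left_child, point, depth + 1)
    else:
        node.right_child = insert(node.right_child, point, depth + 1)
    return node
```

Inserting points in sorted order this way produces a degenerate, list-like tree, which is the unbalance described above.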

Removing elements

To remove a point from an existing k-d tree without breaking the invariant, the easiest way is to collect all of the points in the subtree rooted at the node to be removed and rebuild that part of the tree from them.

Another approach is to find a replacement for the point removed.[4] First, find the node R that contains the point to be removed. For the base case where R is a leaf node, no replacement is required. For the general case, find a replacement point, say p, from the subtree rooted at R. Replace the point stored at R with p. Then, recursively remove p.

For finding a replacement point, if R discriminates on x (say) and R has a right child, find the point with the minimum x value from the subtree rooted at the right child. Otherwise, find the point with the maximum x value from the subtree rooted at the left child.
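Finding that replacement can be sketched as follows (Node layout as in the Python example; make_node is an illustrative helper). When the node splits on the query dimension, only the left subtree can hold a smaller value; otherwise both subtrees must be searched:

```python
class Node: pass

def make_node(location, left=None, right=None):
    node = Node()
    node.location, node.left_child, node.right_child = location, left, right
    return node

def find_min(node, dim, depth=0, k=2):
    """Point with the smallest coordinate in dimension 'dim' in the
    subtree rooted at 'node' (None for an empty subtree)."""
    if node is None:
        return None
    axis = depth % k
    candidates = [node.location]
    children = ((node.left_child,) if axis == dim
                else (node.left_child, node.right_child))
    for child in children:
        sub = find_min(child, dim, depth + 1, k)
        if sub is not None:
            candidates.append(sub)
    return min(candidates, key=lambda p: p[dim])
```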

Balancing

Balancing a k-d tree requires care. Because k-d trees are sorted in multiple dimensions, the tree rotation technique cannot be used to balance them, as rotation may break the invariant.

Several variants of balanced k-d trees exist. They include divided k-d tree, pseudo k-d tree, k-d B-tree, hB-tree and Bkd-tree. Many of these variants are adaptive k-d trees.

Nearest neighbour search

Animation of NN searching with a k-d tree in two dimensions

The nearest neighbour search (NN) algorithm aims to find the point in the tree that is nearest to a given input point. This search can be done efficiently by using the tree properties to quickly eliminate large portions of the search space.

Searching for a nearest neighbour in a k-d tree proceeds as follows:

  1. Starting with the root node, the algorithm moves down the tree recursively, in the same way that it would if the search point were being inserted (i.e. it goes left or right depending on whether the point is less than or greater than the current node in the split dimension).
  2. Once the algorithm reaches a leaf node, it saves that node point as the "current best".
  3. The algorithm unwinds the recursion of the tree, performing the following steps at each node:
    1. If the current node is closer than the current best, then it becomes the current best.
    2. The algorithm checks whether there could be any points on the other side of the splitting plane that are closer to the search point than the current best. Conceptually, this is done by intersecting the splitting hyperplane with a hypersphere around the search point whose radius equals the current nearest distance. Since the hyperplanes are all axis-aligned, this is implemented as a simple comparison to see whether the distance between the splitting coordinate of the search point and the current node is less than the distance (computed over all coordinates) from the search point to the current best.
      1. If the hypersphere crosses the plane, there could be nearer points on the other side of the plane, so the algorithm must move down the other branch of the tree from the current node looking for closer points, following the same recursive process as the entire search.
      2. If the hypersphere doesn't intersect the splitting plane, then the algorithm continues walking up the tree, and the entire branch on the other side of that node is eliminated.
  4. When the algorithm finishes this process for the root node, then the search is complete.

Generally the algorithm uses squared distances for comparison to avoid computing square roots. Additionally, it can save computation by holding the squared current best distance in a variable for comparison.
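The search just described can be sketched as follows, using the Node layout from the Python construction example and squared distances throughout (make_node is an illustrative helper):

```python
class Node: pass

def make_node(location, left=None, right=None):
    node = Node()
    node.location, node.left_child, node.right_child = location, left, right
    return node

def nearest(node, target, depth=0, best=None):
    """Return (squared distance, point) of the nearest neighbour."""
    if node is None:
        return best
    axis = depth % len(target)
    d2 = sum((a - b) ** 2 for a, b in zip(node.location, target))
    if best is None or d2 < best[0]:
        best = (d2, node.location)
    diff = target[axis] - node.location[axis]
    near, far = ((node.left_child, node.right_child) if diff < 0
                 else (node.right_child, node.left_child))
    best = nearest(near, target, depth + 1, best)
    # Descend the far side only if the splitting plane is closer than
    # the current best, i.e. the hypersphere crosses the plane.
    if diff * diff < best[0]:
        best = nearest(far, target, depth + 1, best)
    return best
```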

Finding the nearest point is an O(log N) operation on average in the case of randomly distributed points. Analysis of binary search trees has found that the worst-case search time for a k-dimensional k-d tree containing N nodes is t_worst = O(k · N^(1−1/k)).[5]

In very high dimensional spaces, the curse of dimensionality causes the algorithm to need to visit many more branches than in lower dimensional spaces. In particular, when the number of points is only slightly higher than the number of dimensions, the algorithm is only slightly better than a linear search of all of the points.

The algorithm can be extended in several ways by simple modifications. It can provide the k-Nearest Neighbours to a point by maintaining k current bests instead of just one. Branches are only eliminated when they can't have points closer than any of the k current bests.
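A sketch of that extension, keeping the k current bests in a max-heap keyed on negated squared distance so the worst of them is always at the top (Node layout and make_node as in the construction example):

```python
import heapq

class Node: pass

def make_node(location, left=None, right=None):
    node = Node()
    node.location, node.left_child, node.right_child = location, left, right
    return node

def k_nearest(node, target, k_neighbours, depth=0, heap=None):
    """Collect the k_neighbours nearest points as (-d2, point) heap
    entries; prune a branch only when the splitting plane is farther
    than the worst of the current bests."""
    if heap is None:
        heap = []
    if node is None:
        return heap
    axis = depth % len(target)
    d2 = sum((a - b) ** 2 for a, b in zip(node.location, target))
    if len(heap) < k_neighbours:
        heapq.heappush(heap, (-d2, node.location))
    elif d2 < -heap[0][0]:
        heapq.heapreplace(heap, (-d2, node.location))
    diff = target[axis] - node.location[axis]
    near, far = ((node.left_child, node.right_child) if diff < 0
                 else (node.right_child, node.left_child))
    k_nearest(near, target, k_neighbours, depth + 1, heap)
    if len(heap) < k_neighbours or diff * diff < -heap[0][0]:
        k_nearest(far, target, k_neighbours, depth + 1, heap)
    return heap
```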

It can also be converted to an approximation algorithm to run faster. For example, approximate nearest neighbour searching can be achieved by simply setting an upper bound on the number of points to examine in the tree, or by interrupting the search process based upon a real-time clock (which may be more appropriate in hardware implementations). Nearest neighbour search for points that are already in the tree can be achieved by not updating the current best for nodes that give zero distance; this has the downside of discarding points that are not unique but are co-located with the original search point.

Approximate nearest neighbour is useful in real-time applications such as robotics due to the significant speed increase gained by not searching for the best point exhaustively. One of its implementations is best-bin-first search.

High-dimensional data

k-d trees are not suitable for efficiently finding the nearest neighbour in high-dimensional spaces. As a general rule, if the dimensionality is k, the number of points in the data, N, should satisfy N ≫ 2^k. Otherwise, when k-d trees are used with high-dimensional data, most of the points in the tree will be evaluated and the efficiency is no better than exhaustive search,[6] so approximate nearest-neighbour methods are used instead.

Complexity

  • Building a static k-d tree from n points takes:
    • O(n log² n) time if an O(n log n) sort is used to compute the median at each level;
    • O(n log n) time if a linear-time median-finding algorithm such as the one described in Cormen et al.[7] is used;
    • O([k-1]n log n) time if n points are sorted in each of k dimensions prior to building the k-d tree.
  • Inserting a new point into a balanced k-d tree takes O(log n) time.
  • Removing a point from a balanced k-d tree takes O(log n) time.
  • Querying an axis-parallel range in a balanced k-d tree takes O(n^(1−1/k) + m) time, where m is the number of reported points and k the dimension of the k-d tree.

Variations

Volumetric objects

Instead of points, a k-d tree can also contain rectangles or hyperrectangles.[8][9] Thus range search becomes the problem of returning all rectangles intersecting the search rectangle. The tree is constructed the usual way with all the rectangles at the leaves. In an orthogonal range search, the opposite coordinate is used when comparing against the median. For example, if the current level is split along xhigh, we check the xlow coordinate of the search rectangle. If the median is less than the xlow coordinate of the search rectangle, then no rectangle in the left branch can ever intersect with the search rectangle and so can be pruned. Otherwise both branches should be traversed. See also interval tree, which is a 1-dimensional special case.

Points only in leaves

It is also possible to define a k-d tree with points stored solely in leaves.[10] This form of k-d tree allows a variety of split mechanics other than the standard median split. The midpoint splitting rule[11] splits at the middle of the longest axis of the space being searched, regardless of the distribution of points. This guarantees that the aspect ratio will be at most 2:1, but the depth is dependent on the distribution of points. A variation, called sliding-midpoint, only splits at the middle if there are points on both sides of the split. Otherwise, it splits at the point nearest to the middle. Maneewongvatana and Mount show that this offers "good enough" performance on common data sets. Using sliding-midpoint, an approximate nearest neighbour query can be answered in O((1/ε)^k log n) time. Approximate range counting can be answered in O(log n + (1/ε)^k) time with this method.
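The split choice for these two rules can be sketched as follows (a hypothetical helper, not from the cited papers; bounds_lo and bounds_hi describe the cell being split):

```python
def sliding_midpoint_split(points, bounds_lo, bounds_hi):
    """Cut the longest side of the cell at its midpoint; if every point
    falls on one side, slide the cut to the nearest point instead."""
    axis = max(range(len(bounds_lo)),
               key=lambda d: bounds_hi[d] - bounds_lo[d])
    cut = (bounds_lo[axis] + bounds_hi[axis]) / 2
    coords = [p[axis] for p in points]
    if min(coords) > cut:      # all points right of the midpoint
        cut = min(coords)
    elif max(coords) < cut:    # all points left of the midpoint
        cut = max(coords)
    return axis, cut
```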

References

  1. ^ J. L. Bentley. Multidimensional binary search trees used for associative searching. Communications of the ACM, 18(9):509-517, 1975.
  2. ^ I. Wald and V. Havran. On building fast kd-trees for ray tracing, and on doing that in O(N log N). IEEE Symposium on Interactive Ray Tracing, pp. 61-69, 2006.
  3. ^ H. M. Kakde. Range searching using kd tree. pp. 1-12, 2005.
  4. ^ Chandran, Sharat. Introduction to kd-trees. University of Maryland Department of Computer Science.
  5. ^ Lee, D. T.; Wong, C. K. (1977). "Worst-case analysis for region and partial region searches in multidimensional binary search trees and balanced quad trees". Acta Informatica. 9 (1): 23–29. doi:10.1007/BF00263763.
  6. ^ Jacob E. Goodman, Joseph O'Rourke and Piotr Indyk (Ed.) (2004). "Chapter 39 : Nearest neighbours in high-dimensional spaces". Handbook of Discrete and Computational Geometry (2nd ed.). CRC Press.
  7. ^ Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L. Introduction to Algorithms. MIT Press and McGraw-Hill. Chapter 10.
  8. ^ Rosenberg J. Geographical Data Structures Compared: A Study of Data Structures Supporting Region Queries. IEEE Transaction on CAD Integrated Circuits Systems 4(1):53-67
  9. ^ Houthuys P. Box Sort, a multidimensional binary sorting method for rectangular boxes, used for quick range searching. The Visual Computer, 1987, 3:236-249
  10. ^ de Berg, Mark et al. Computational Geometry: Algorithms and Applications, 3rd Edition, pages 99-101. Springer, 2008.
  11. ^ S. Maneewongvatana and D. M. Mount. It's okay to be skinny, if your friends are fat. 4th Annual CGC Workshop on Computational Geometry, 1999.
