
In pattern recognition, the iDistance is an indexing and query processing technique for k-nearest neighbor (kNN) queries on point data in multi-dimensional metric spaces. The kNN query is one of the hardest problems on multi-dimensional data, especially when the dimensionality of the data is high. The iDistance is designed to process kNN queries in high-dimensional spaces efficiently, and it is especially effective for skewed data distributions, which usually occur in real-life data sets. The iDistance can be augmented with machine learning models that learn the data distribution to guide the searching and storing of the multi-dimensional data.[1]



Building the index

Building the iDistance index has two steps:

  1. A number of reference points in the data space are chosen. Reference points can be chosen in various ways; using cluster centers (for example, centers produced by a clustering algorithm such as k-means) typically gives the best query performance. In effect, the data points are partitioned into Voronoi cells induced by the chosen reference points.
  2. The distance between a data point and its closest reference point is calculated. This distance, plus an offset that separates the partitions on the one-dimensional axis, is the point's iDistance. By this means, points in a multi-dimensional space are mapped to one-dimensional values, and a B+-tree can then be used to index the points with the iDistance as the key.
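The two steps above can be sketched as follows. This is a minimal illustration, not the original implementation: the separation constant `c` and the Euclidean metric are assumptions, and a sorted Python list stands in for the B+-tree.

```python
import math

def build_idistance(points, refs, c=10_000.0):
    """Map each point p to a 1-D key: i*c + dist(p, O_i), where O_i is
    p's closest reference point and c separates the partitions on the
    1-D axis (c must exceed any within-partition distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    index = []  # (key, point) pairs; a B+-tree would store these by key
    for p in points:
        # Partition index i of the closest reference point, and the distance.
        i, d = min(enumerate(dist(p, o) for o in refs), key=lambda t: t[1])
        index.append((i * c + d, p))
    index.sort()  # sorted order stands in for the B+-tree
    return index
```

With two reference points and `c` larger than any cluster's extent, points near the first reference point receive keys near 0 and points near the second receive keys near `c`, so the partitions occupy disjoint key ranges.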

The figure on the right shows an example where three reference points (O1, O2, O3) are chosen. The data points are then mapped to a one-dimensional space and indexed in a B+-tree. Various extensions have been proposed to improve the selection of reference points for effective query performance, including the use of machine learning to identify reference points.

Query processing

To process a kNN query, the query is mapped to a number of one-dimensional range queries, which can be processed efficiently on a B+-tree. In the above figure, the query Q is mapped to a value in the B+-tree, while the kNN search "sphere" is mapped to a range in the B+-tree. The search sphere expands gradually until the k nearest neighbors are found, which corresponds to gradually expanding range searches in the B+-tree.

The iDistance technique can be viewed as a way of accelerating the sequential scan. Instead of scanning records from the beginning to the end of the data file, the iDistance starts the scan from spots where the nearest neighbors can be obtained early with a very high probability.
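The expanding-sphere search above can be sketched as follows. This is a minimal illustration over a sorted list of (key, point) pairs standing in for the B+-tree; the separation constant `c`, the starting radius `r0`, and the Euclidean metric are assumptions, and the partition-radius pruning of the original method is omitted. For a search radius r, any point within r of the query q that belongs to partition i has a key in [i*c + d_i - r, i*c + d_i + r], where d_i is the distance from q to reference point O_i (by the triangle inequality), so scanning those ranges cannot miss a true neighbor.

```python
import bisect
import math

def knn_query(index, refs, q, k, c=10_000.0, r0=1.0):
    """Expanding-radius kNN on a 1-D iDistance index: a list of
    (key, point) pairs sorted by key (see build step). The radius
    doubles until k neighbors are confirmed inside the search sphere."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    keys = [key for key, _ in index]
    d = [dist(q, o) for o in refs]  # distance from q to each reference point
    r = r0
    while True:
        cands = set()
        for i, di in enumerate(d):
            # One 1-D range query per partition; bisect emulates the B+-tree.
            lo = bisect.bisect_left(keys, i * c + max(0.0, di - r))
            hi = bisect.bisect_right(keys, i * c + di + r)
            cands.update(range(lo, hi))
        hits = sorted((dist(q, index[j][1]), index[j][1]) for j in cands)
        # Stop once the k-th candidate lies inside the current sphere:
        # every point within r was already a candidate, so it is exact.
        if len(hits) >= k and hits[k - 1][0] <= r:
            return [p for _, p in hits[:k]]
        r *= 2  # expand the search sphere and rescan the wider ranges
```

As the text notes, this behaves like a sequential scan that starts at the most promising spots: each range query begins at the key closest to the query's own iDistance within a partition, where near neighbors are likely to be found early.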


Applications

The iDistance has been used in many applications, including image retrieval,[2] video indexing,[3] peer-to-peer similarity search,[4] location-dependent query processing in mobile computing,[5] and maximum inner product search.[6]

Historical background

The iDistance was first proposed by Cui Yu, Beng Chin Ooi, Kian-Lee Tan and H. V. Jagadish in 2001.[7] Later, together with Rui Zhang, they improved the technique and performed a more comprehensive study on it in 2005.[8]


References

  1. ^ Angjela Davitkova, Evica Milchevski, Sebastian Michel, The ML-Index: A Multidimensional, Learned Index for Point, Range, and Nearest-Neighbor Queries, Proceedings of the 23rd International Conference on Extending Database Technology, Copenhagen, Denmark, 407-410, 2020.
  2. ^ Junqi Zhang, Xiangdong Zhou, Wei Wang, Baile Shi, Jian Pei, Using High Dimensional Indexes to Support Relevance Feedback Based Interactive Images Retrieval, Proceedings of the 32nd International Conference on Very Large Data Bases, Seoul, Korea, 1211-1214, 2006.
  3. ^ Heng Tao Shen, Beng Chin Ooi, Xiaofang Zhou, Towards Effective Indexing for Very Large Video Sequence Database, Proceedings of the ACM SIGMOD International Conference on Management of Data, Baltimore, Maryland, United States, 730-741, 2005.
  4. ^ Christos Doulkeridis, Akrivi Vlachou, Yannis Kotidis, Michalis Vazirgiannis, Peer-to-Peer Similarity Search in Metric Spaces, Proceedings of the 33rd International Conference on Very Large Data Bases, Vienna, Austria, 986-997, 2007.
  5. ^ Sergio Ilarri, Eduardo Mena, Arantza Illarramendi, Location-Dependent Queries in Mobile Contexts: Distributed Processing Using Mobile Agents, IEEE Transactions on Mobile Computing, Volume 5, Issue 8, Aug. 2006 Page(s): 1029 - 1043.
  6. ^ Yang Song, Yu Gu, Rui Zhang, Ge Yu, ProMIPS: Efficient High-Dimensional c-Approximate Maximum Inner Product Search with a Lightweight Index, 37th IEEE International Conference on Data Engineering, Chania, Greece, 1619-1630, 2021.
  7. ^ Cui Yu, Beng Chin Ooi, Kian-Lee Tan, H. V. Jagadish, Indexing the Distance: An Efficient Method to KNN Processing, Proceedings of the 27th International Conference on Very Large Data Bases, Rome, Italy, 421-430, 2001.
  8. ^ H. V. Jagadish, Beng Chin Ooi, Kian-Lee Tan, Cui Yu, Rui Zhang, iDistance: An Adaptive B+-tree Based Indexing Method for Nearest Neighbor Search, ACM Transactions on Database Systems (ACM TODS), 30(2), 364-397, June 2005.
