K-medoids

The k-medoids problem is a clustering problem similar to k-means. Both the k-means and k-medoids algorithms are partitional (breaking the dataset up into groups) and attempt to minimize the distance between points labeled to be in a cluster and a point designated as the center of that cluster. In contrast to the k-means algorithm, k-medoids chooses actual data points as centers (medoids or exemplars), and thereby allows for greater interpretability of the cluster centers than in k-means, where the center of a cluster is not necessarily one of the input data points (it is the average of the points in the cluster). Furthermore, k-medoids can be used with arbitrary dissimilarity measures, whereas k-means generally requires Euclidean distance for efficient solutions. Because k-medoids minimizes a sum of pairwise dissimilarities instead of a sum of squared Euclidean distances, it is more robust to noise and outliers than k-means.
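
In symbols: with clusters C_1, …, C_k, k-medoids chooses its centers m_i among the data points and minimizes a sum of dissimilarities, whereas k-means minimizes a sum of squared Euclidean distances to the cluster means μ_i:

  \min_{m_1,\dots,m_k \in X} \sum_{i=1}^{k} \sum_{x \in C_i} d(x, m_i)
  \qquad \text{versus} \qquad
  \min_{\mu_1,\dots,\mu_k} \sum_{i=1}^{k} \sum_{x \in C_i} \lVert x - \mu_i \rVert^2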

k-medoids is a classical partitioning technique of clustering that splits the data set of n objects into k clusters, where the number k of clusters is assumed to be known a priori (which implies that the programmer must specify k before the execution of a k-medoids algorithm). The "goodness" of the given value of k can be assessed with methods such as the silhouette method.

The medoid of a cluster is defined as the object in the cluster whose average dissimilarity to all the objects in the cluster is minimal, that is, it is a most centrally located point in the cluster.
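
Since the cluster size is fixed, minimizing the average dissimilarity is equivalent to minimizing the sum: for a cluster C with dissimilarity measure d,

  x_{\mathrm{medoid}} = \arg\min_{x \in C} \sum_{y \in C} d(x, y)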

The current state-of-the-art algorithm for the k-medoids problem is BanditPAM.[1] BanditPAM builds upon previous work such as Partitioning Around Medoids (PAM), which was proposed in 1987[2] for use with the l₁ norm and other distances.

Algorithms

Figure: PAM choosing initial medoids, then iterating to convergence for k = 3 clusters, visualized with ELKI.

In general, the k-medoids problem is NP-hard to solve exactly. As such, many heuristic solutions exist.

Partitioning Around Medoids (PAM)

Until recently, the partitioning around medoids (PAM) algorithm was a widely used k-medoid clustering algorithm (it has since been superseded by BanditPAM). PAM uses a greedy search which may not find the optimum solution, but it is faster than exhaustive search. It works as follows (a code sketch follows the steps):

  1. (BUILD) Initialize: greedily select k of the n data points as the medoids to minimize the cost
  2. Associate each data point with the closest medoid.
  3. (SWAP) While the cost of the configuration decreases:
    1. For each medoid m, and for each non-medoid data point o:
      1. Consider the swap of m and o, and compute the cost change
      2. If the cost change is the current best, remember this m and o combination
    2. Perform the best swap of m and o found in the previous step, if it decreases the cost function. Otherwise, the algorithm terminates.
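
The following is a minimal sketch of the SWAP phase in Python, assuming a precomputed dissimilarity matrix D and an initial medoid set (the function name pam_swap and its arguments are illustrative, not from a particular library). For brevity it recomputes the full cost for every candidate swap, i.e. the naive variant discussed below, rather than the change-in-cost bookkeeping of the original PAM:

  import numpy as np

  def pam_swap(D, medoids):
      """Greedy SWAP phase of PAM (naive full-cost variant).

      D: (n, n) symmetric array of pairwise dissimilarities.
      medoids: initial medoid indices, e.g. from the BUILD phase.
      """
      medoids = list(medoids)
      n = D.shape[0]

      def cost(meds):
          # Each point contributes its dissimilarity to its closest medoid.
          return D[:, meds].min(axis=1).sum()

      best_cost = cost(medoids)
      improved = True
      while improved:  # step 3: repeat while the configuration cost decreases
          improved = False
          best_swap = None
          for i in range(len(medoids)):       # step 3.1: each medoid m ...
              for o in range(n):              # ... and each non-medoid o
                  if o in medoids:
                      continue
                  candidate = medoids[:i] + [o] + medoids[i + 1:]
                  c = cost(candidate)         # cost after swapping m and o
                  if c < best_cost:           # remember the best combination
                      best_cost, best_swap = c, (i, o)
          if best_swap is not None:           # step 3.2: perform the best swap
              medoids[best_swap[0]] = best_swap[1]
              improved = True
      return medoids, best_cost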

The runtime complexity of the original PAM algorithm per iteration of (3) is O(k(n−k)²), by only computing the change in cost: each of the k(n−k) candidate swaps is evaluated with a single pass over the remaining points. A naive implementation recomputing the entire cost function every time will be in O(n²k²). This runtime can be further reduced to O(n²), by splitting the cost change into three parts such that computations can be shared or avoided.[3]

BanditPAM

BanditPAM is a randomized algorithm that matches PAM in clustering quality. Crucially, however, BanditPAM reduces the dependence on dataset size from O(n²) to O(n log n). BanditPAM tracks PAM's optimization trajectory in the BUILD and SWAP steps, but estimates the changes in losses at each step instead of computing these quantities exactly. In each step of the algorithm, BanditPAM formulates finding the best new medoid (in the BUILD step) or the best medoid-nonmedoid pair (in the SWAP step) as a multi-armed bandit problem. As BanditPAM matches PAM in clustering quality but is much faster, it is considered state-of-the-art.
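
The bandit formulation can be illustrated with a successive-elimination sketch of a single BUILD step in Python: each candidate point is an arm whose unknown value is its mean distance to the rest of the data, estimated from random samples with confidence intervals so that poor candidates are discarded without a full scan. This is a conceptual sketch only, not the actual BanditPAM algorithm; the batch size, stopping rule, and Hoeffding-style confidence radius are simplifying assumptions:

  import numpy as np

  def first_medoid_bandit(X, dist, batch=100, delta=1e-3, seed=None):
      """Pick the first medoid by successive elimination (conceptual sketch).

      X: (n, d) data array.
      dist: callable mapping (point, array of points) to an array of
            distances, e.g. lambda p, Q: np.linalg.norm(Q - p, axis=1).
      """
      rng = np.random.default_rng(seed)
      n = len(X)
      arms = np.arange(n)     # surviving candidate medoids
      sums = np.zeros(n)      # running sums of sampled distances
      counts = np.zeros(n)    # samples drawn per candidate

      while len(arms) > 1 and counts[arms[0]] < n:
          ref = rng.integers(0, n, size=batch)  # shared reference sample
          for c in arms:
              sums[c] += dist(X[c], X[ref]).sum()
              counts[c] += batch
          means = sums[arms] / counts[arms]
          # Simplified Hoeffding-style confidence radius (assumption).
          radius = np.sqrt(np.log(1.0 / delta) / counts[arms])
          # Drop arms whose lower bound exceeds the best upper bound.
          arms = arms[means - radius <= (means + radius).min()]

      return arms[np.argmin(sums[arms] / counts[arms])]

In the full algorithm the same estimate-and-eliminate loop is applied to each BUILD step and to the medoid-nonmedoid swap candidates of each SWAP step.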

Other Algorithms

Algorithms other than PAM and BanditPAM have also been suggested in the literature, including the following Voronoi iteration method (sketched in code after the steps):[4][5][6]

  1. Select initial medoids randomly
  2. Iterate while the cost decreases:
    1. In each cluster, make the point that minimizes the sum of distances within the cluster the medoid
    2. Reassign each point to the cluster defined by the closest medoid determined in the previous step.
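
A compact Python sketch of this Voronoi iteration, again assuming a precomputed dissimilarity matrix D (names are illustrative):

  import numpy as np

  def voronoi_kmedoids(D, k, seed=None):
      """k-means-style Voronoi iteration on a dissimilarity matrix D."""
      rng = np.random.default_rng(seed)
      n = D.shape[0]
      medoids = rng.choice(n, size=k, replace=False)  # 1. random initial medoids
      prev_cost = np.inf
      while True:                                # 2. iterate while cost decreases
          labels = D[:, medoids].argmin(axis=1)  # assign to closest medoid
          for j in range(k):                     # per-cluster medoid update
              members = np.flatnonzero(labels == j)
              if members.size:                   # skip empty clusters
                  within = D[np.ix_(members, members)].sum(axis=1)
                  medoids[j] = members[within.argmin()]
          cost = D[np.arange(n), medoids[labels]].sum()
          if cost >= prev_cost:
              break
          prev_cost = cost
      return medoids, labels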

However, k-means-style Voronoi iteration finds worse results, as it does not allow reassigning points to other clusters while updating the medoids, and thus only explores a smaller search space.[3][7]

Other approximate algorithms such as CLARA and CLARANS trade optimality for runtime. CLARA applies PAM on multiple subsamples, keeping the best result. CLARANS works on the entire data set, but only explores a subset of the possible swaps of medoids and non-medoids using sampling.
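
The CLARA idea fits in a few lines: run PAM on random subsamples and keep the medoid set that scores best on the full dataset. Below is a sketch reusing the pam_swap routine from above; the sample size, repeat count, and random initialization in place of BUILD are illustrative assumptions:

  import numpy as np

  def clara(D, k, n_samples=5, sample_size=200, seed=None):
      """CLARA-style sketch: PAM on subsamples, scored on the full data."""
      rng = np.random.default_rng(seed)
      n = D.shape[0]
      best, best_cost = None, np.inf
      for _ in range(n_samples):
          sub = rng.choice(n, size=min(sample_size, n), replace=False)
          init = list(rng.choice(len(sub), size=k, replace=False))
          meds, _ = pam_swap(D[np.ix_(sub, sub)], init)  # PAM on the subsample
          meds = sub[np.asarray(meds)]         # map back to full-data indices
          cost = D[:, meds].min(axis=1).sum()  # evaluate on the whole dataset
          if cost < best_cost:
              best, best_cost = meds, cost
      return best, best_cost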

Software

  • BanditPAM, the state-of-the-art algorithm for the k-medoids problem, has an open-source implementation. It is written in C++ for performance, and can be called from both C++ and Python.
  • ELKI includes several k-medoid variants, including a Voronoi-iteration k-medoids, the original PAM algorithm, Reynolds' improvements, the O(n²) FastPAM algorithm, CLARA, CLARANS, FastCLARA, and FastCLARANS.
  • Julia contains a k-medoid implementation of the k-means style algorithm (faster, but much worse result quality) in the JuliaStats/Clustering.jl package.
  • KNIME includes a k-medoid implementation supporting a variety of efficient matrix distance measures, as well as a number of native (and integrated third-party) k-means implementations.
  • R contains PAM in the "cluster" package, including some of the FastPAM improvements via the pamonce=5 option.
  • RapidMiner has an operator named KMedoids, but it does not implement the k-medoids algorithm correctly. Instead, it is a k-means variant that substitutes the mean with the closest data point (which is not the medoid).
  • MATLAB implements PAM, CLARA, and two other algorithms to solve the k-medoid clustering problem.

References

  1. ^ M. Tiwari, M. Zhang, J. Mayclin, S. Thrun, C. Piech, I. Shomorony (2020). "BanditPAM: Almost Linear Time k-medoids clustering via Multi-Armed Bandits". Advances in Neural Information Processing Systems 2020.
  2. ^ Kaufman, L. and Rousseeuw, P.J. (1987), Clustering by means of Medoids, in Statistical Data Analysis Based on the L₁-Norm and Related Methods, edited by Y. Dodge, North-Holland, 405–416.
  3. ^ a b Schubert, Erich; Rousseeuw, Peter J. (2019), Amato, Giuseppe; Gennaro, Claudio; Oria, Vincent; Radovanović, Miloš (eds.), "Faster k-Medoids Clustering: Improving the PAM, CLARA, and CLARANS Algorithms", Similarity Search and Applications, vol. 11807, Springer International Publishing, pp. 171–187, arXiv:1810.05691, doi:10.1007/978-3-030-32047-8_16, ISBN 9783030320461
  4. ^ Maranzana, F. E. (1963). "On the location of supply points to minimize transportation costs". IBM Systems Journal. 2 (2): 129–135. doi:10.1147/sj.22.0129.
  5. ^ T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning, Springer (2001), 468–469.
  6. ^ Park, Hae-Sang; Jun, Chi-Hyuck (2009). "A simple and fast algorithm for K-medoids clustering". Expert Systems with Applications. 36 (2): 3336–3341. doi:10.1016/j.eswa.2008.01.039.
  7. ^ Teitz, Michael B.; Bart, Polly (1968-10-01). "Heuristic Methods for Estimating the Generalized Vertex Median of a Weighted Graph". Operations Research. 16 (5): 955–961. doi:10.1287/opre.16.5.955. ISSN 0030-364X.