# SUBCLU

SUBCLU is an algorithm for clustering high-dimensional data by Karin Kailing, Hans-Peter Kriegel and Peer Kröger.[1] It is a subspace clustering algorithm that builds on the density-based clustering algorithm DBSCAN. SUBCLU can find clusters in axis-parallel subspaces, and uses a bottom-up, greedy strategy to remain efficient.

## Approach

SUBCLU uses a monotonicity criterion: if a cluster is found in a subspace ${\displaystyle S}$, then each subspace ${\displaystyle T\subseteq S}$ also contains a cluster. However, a cluster ${\displaystyle C\subseteq DB}$ in subspace ${\displaystyle S}$ is not necessarily a cluster in ${\displaystyle T\subseteq S}$: clusters are required to be maximal, so the cluster in ${\displaystyle T}$ that contains ${\displaystyle C}$ may contain additional objects. A density-connected set in a subspace ${\displaystyle S}$, however, is also a density-connected set in every ${\displaystyle T\subseteq S}$.
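The reason density-connectedness survives the projection can be checked directly: dropping attributes never increases the Euclidean distance between two points, so every ${\displaystyle \epsilon }$-neighbourhood in ${\displaystyle S}$ is contained in the corresponding neighbourhood in ${\displaystyle T\subseteq S}$. A minimal sketch (the point values are made up for illustration):

```python
import math

def dist(a, b, dims):
    """Euclidean distance between a and b restricted to the attribute
    subset `dims`, i.e. computed in that axis-parallel subspace."""
    return math.sqrt(sum((a[d] - b[d]) ** 2 for d in dims))

# Two hypothetical 3-dimensional points.
p = (1.0, 2.0, 5.0)
q = (1.5, 2.5, 9.0)

d_S = dist(p, q, [0, 1, 2])  # distance in subspace S = {0, 1, 2}
d_T = dist(p, q, [0, 1])     # distance in subspace T = {0, 1}, T ⊆ S

# Dropping attributes only removes non-negative squared terms,
# so the distance in T is never larger than in S.
assert d_T <= d_S
```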

This downward-closure property is exploited by SUBCLU in a way similar to the Apriori algorithm: first, all 1-dimensional subspaces are clustered. All clusters in a higher-dimensional subspace will be subsets of the clusters detected in this first clustering. SUBCLU then recursively generates ${\displaystyle k+1}$-dimensional candidate subspaces by combining ${\displaystyle k}$-dimensional subspaces with clusters that share ${\displaystyle k-1}$ attributes. After pruning irrelevant candidates, DBSCAN is applied to each candidate subspace to test whether it still contains clusters. If it does, the candidate subspace is used in the next round of combinations. To improve the runtime of DBSCAN, only the points known to belong to clusters in one ${\displaystyle k}$-dimensional subspace (which is chosen to contain as few objects in clusters as possible) are considered; due to the downward-closure property, other points cannot be part of a ${\displaystyle k+1}$-dimensional cluster anyway.

## Pseudocode

SUBCLU takes two parameters, ${\displaystyle \epsilon \!\,}$ and ${\displaystyle MinPts}$, which serve the same role as in DBSCAN. In a first step, DBSCAN is used to find 1D-clusters in each subspace spanned by a single attribute:

${\displaystyle {\mathtt {SUBCLU}}(DB,eps,MinPts)}$

${\displaystyle S_{1}:=\emptyset }$
${\displaystyle C_{1}:=\emptyset }$
${\displaystyle {\mathtt {for\,each}}\,a\in Attributes}$
${\displaystyle C^{\{a\}}={\mathtt {DBSCAN}}(DB,\{a\},eps,MinPts)\!\,}$
${\displaystyle {\mathtt {if}}(C^{\{a\}}\neq \emptyset )}$
${\displaystyle S_{1}:=S_{1}\cup \{a\}}$
${\displaystyle C_{1}:=C_{1}\cup C^{\{a\}}}$
${\displaystyle {\mathtt {end\,if}}}$
${\displaystyle {\mathtt {end\,for}}}$
// In a second step, ${\displaystyle k+1}$-dimensional clusters are built from ${\displaystyle k}$-dimensional ones:
${\displaystyle k:=1\!\,}$
${\displaystyle {\mathtt {while}}(C_{k}\neq \emptyset )}$
${\displaystyle {\mathtt {CandS}}_{k+1}:={\mathtt {GenerateCandidateSubspaces}}(S_{k})\!\,}$
${\displaystyle S_{k+1}:=\emptyset }$
${\displaystyle C_{k+1}:=\emptyset }$
${\displaystyle {\mathtt {for\,each}}\,cand\in {\mathtt {CandS}}_{k+1}}$
${\displaystyle {\mathtt {bestSubspace}}:=\mathop {\arg \min } _{s\in S_{k}\wedge s\subset cand}\sum _{C_{i}\in C^{s}}|C_{i}|}$
${\displaystyle C^{cand}:=\emptyset }$
${\displaystyle {\mathtt {for\,each\,cluster}}\,cl\in C^{\mathtt {bestSubspace}}}$
${\displaystyle C^{cand}:=C^{cand}\cup {\mathtt {DBSCAN}}(cl,cand,eps,MinPts)}$
${\displaystyle {\mathtt {end\,for}}}$
${\displaystyle {\mathtt {if}}\,(C^{cand}\neq \emptyset )}$
${\displaystyle S_{k+1}:=S_{k+1}\cup \{cand\}}$
${\displaystyle C_{k+1}:=C_{k+1}\cup C^{cand}}$
${\displaystyle {\mathtt {end\,if}}}$
${\displaystyle {\mathtt {end\,for}}}$
${\displaystyle k:=k+1\!\,}$
${\displaystyle {\mathtt {end\,while}}}$

${\displaystyle {\mathtt {end}}\!\,}$

The set ${\displaystyle S_{k}}$ contains all the ${\displaystyle k}$-dimensional subspaces that are known to contain clusters, and the set ${\displaystyle C_{k}}$ contains the sets of clusters found in those subspaces. The ${\displaystyle bestSubspace}$ is chosen to minimize the number of DBSCAN runs and the number of points that need to be considered in each run when searching for clusters in the candidate subspace.
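The selection of `bestSubspace` can be illustrated with a short sketch. The bookkeeping structure and point ids below are made up for illustration; `clusters` maps each subspace (a frozenset of attribute indices) to the clusters found in it:

```python
# Hypothetical result of the 1-dimensional clustering step: for each
# subspace, the list of clusters found in it, as sets of point ids.
clusters = {
    frozenset({0}): [{1, 2, 3, 4, 7}],   # one cluster, 5 points in total
    frozenset({1}): [{1, 2}, {5, 6}],    # two clusters, 4 points in total
}
cand = frozenset({0, 1})  # a 2-dimensional candidate subspace

# bestSubspace: the sub-subspace of `cand` whose clusters cover the fewest
# points; only those points can belong to a cluster in `cand` anyway.
best = min(
    (s for s in clusters if s < cand),
    key=lambda s: sum(len(c) for c in clusters[s]),
)
assert best == frozenset({1})
```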

Candidate subspaces are generated much as the Apriori algorithm generates frequent-itemset candidates: pairs of the ${\displaystyle k}$-dimensional subspaces are compared, and if they differ in exactly one attribute, they form a ${\displaystyle k+1}$-dimensional candidate. However, some irrelevant candidates are generated as well: those containing a ${\displaystyle k}$-dimensional subspace that does not contain a cluster. Hence, these candidates are removed in a second step:

${\displaystyle {\mathtt {GenerateCandidateSubspaces}}(S_{k})}$

${\displaystyle {\mathtt {CandS}}_{k+1}:=\emptyset }$
${\displaystyle {\mathtt {for\,each}}\,s_{1}\in S_{k}}$
${\displaystyle {\mathtt {for\,each}}\,s_{2}\in S_{k}}$
${\displaystyle {\mathtt {if}}\,(s_{1}\,{\mathtt {and}}\,s_{2}\,\,{\mathtt {differ\,\,in\,\,exactly\,\,one\,\,attribute}})}$
${\displaystyle {\mathtt {CandS}}_{k+1}:={\mathtt {CandS}}_{k+1}\cup \{s_{1}\cup s_{2}\}}$
${\displaystyle {\mathtt {end\,if}}}$
${\displaystyle {\mathtt {end\,for}}}$
${\displaystyle {\mathtt {end\,for}}}$
// Pruning of irrelevant candidate subspaces
${\displaystyle {\mathtt {for\,each}}\,cand\in {\mathtt {CandS}}_{k+1}}$
${\displaystyle {\mathtt {for\,each}}\,k{\texttt {-element}}\,s\subset cand}$
${\displaystyle {\mathtt {if}}\,(s\not \in S_{k})}$
${\displaystyle {\mathtt {CandS}}_{k+1}:={\mathtt {CandS}}_{k+1}\setminus \{cand\}}$
${\displaystyle {\mathtt {end\,if}}}$
${\displaystyle {\mathtt {end\,for}}}$
${\displaystyle {\mathtt {end\,for}}}$

${\displaystyle {\mathtt {end}}\,\!}$
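The join-and-prune step above translates directly into a short sketch. Subspaces are represented as frozensets of attribute indices; the function name mirrors the pseudocode but is otherwise my own:

```python
from itertools import combinations

def generate_candidate_subspaces(subspaces):
    """Apriori-style join and prune.

    `subspaces` is the set S_k: frozensets of size k known to contain
    clusters. Returns the set of (k+1)-dimensional candidate subspaces.
    """
    cands = set()
    for s1, s2 in combinations(subspaces, 2):
        if len(s1 & s2) == len(s1) - 1:  # differ in exactly one attribute
            cands.add(s1 | s2)
    # Prune candidates that have a k-dimensional subspace without clusters:
    # by downward closure, such a candidate cannot contain a cluster either.
    return {
        c for c in cands
        if all(frozenset(s) in subspaces for s in combinations(c, len(c) - 1))
    }

S2 = {frozenset({0, 1}), frozenset({0, 2}), frozenset({1, 2}), frozenset({0, 3})}
# The join yields {0,1,2}, {0,1,3} and {0,2,3}; the latter two are pruned
# because {1,3} and {2,3} hold no clusters, so only {0,1,2} survives.
print(generate_candidate_subspaces(S2))
```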

## Availability

An example implementation of SUBCLU is available in the ELKI framework.

## References

1. ^ Karin Kailing, Hans-Peter Kriegel and Peer Kröger. Density-Connected Subspace Clustering for High-Dimensional Data. In: Proc. SIAM Int. Conf. on Data Mining (SDM'04), pp. 246-257, 2004.