Biclustering

Biclustering, co-clustering, or two-mode clustering[1] is a data mining technique which allows simultaneous clustering of the rows and columns of a matrix. The term was first introduced by Mirkin[2] (and recently used by Cheng and Church[3] in gene expression analysis), although the technique itself was originally introduced much earlier[2] (i.e., by J. A. Hartigan[4]).

Given a set of m rows in n columns (i.e., an m × n matrix), the biclustering algorithm generates biclusters - a subset of rows which exhibit similar behavior across a subset of columns, or vice versa.
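For illustration, a bicluster can be represented simply as a pair of index sets that select a submatrix. The following minimal sketch (in Python with NumPy; the variable names are illustrative and not taken from any particular biclustering library) shows this representation:

    import numpy as np

    # A toy 4 x 5 data matrix.
    X = np.array([
        [1.0, 9.0, 1.0, 1.0, 7.0],
        [5.0, 2.0, 6.0, 8.0, 3.0],
        [1.0, 4.0, 1.0, 1.0, 6.0],
        [1.0, 8.0, 1.0, 1.0, 2.0],
    ])

    # A bicluster is a subset of row indices together with a subset of column indices.
    rows = [0, 2, 3]
    cols = [0, 2, 3]

    # np.ix_ builds the cross-product index, yielding the selected submatrix.
    submatrix = X[np.ix_(rows, cols)]
    print(submatrix)  # a constant 3 x 3 block of 1.0s in this toy example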

Complexity

The complexity of the biclustering problem depends on the exact problem formulation, and particularly on the merit function used to evaluate the quality of a given bicluster. However, most interesting variants of this problem are NP-complete, requiring either large computational effort or the use of lossy heuristics to short-circuit the calculation.

Types of biclusters

Different biclustering algorithms have different definitions of a bicluster.

They are:

  1. Bicluster with constant values (a),
  2. Bicluster with constant values on rows (b) or columns (c),
  3. Bicluster with coherent values (d, e).
a) Bicluster with constant values
2.0 2.0 2.0 2.0 2.0
2.0 2.0 2.0 2.0 2.0
2.0 2.0 2.0 2.0 2.0
2.0 2.0 2.0 2.0 2.0
2.0 2.0 2.0 2.0 2.0
b) Bicluster with constant values on rows
1.0 1.0 1.0 1.0 1.0
2.0 2.0 2.0 2.0 2.0
3.0 3.0 3.0 3.0 3.0
4.0 4.0 4.0 4.0 4.0
4.0 4.0 4.0 4.0 4.0
c) Bicluster with constant values on columns
1.0 2.0 3.0 4.0 5.0
1.0 2.0 3.0 4.0 5.0
1.0 2.0 3.0 4.0 5.0
1.0 2.0 3.0 4.0 5.0
1.0 2.0 3.0 4.0 5.0
d) Bicluster with coherent values (additive)
1.0 4.0 5.0 0.0 1.5
4.0 7.0 8.0 3.0 4.5
3.0 6.0 7.0 2.0 3.5
5.0 8.0 9.0 4.0 5.5
2.0 5.0 6.0 1.0 2.5
e) Bicluster with coherent values (multiplicative)
1.0 0.5 2.0 0.2 0.8
2.0 1.0 4.0 0.4 1.6
3.0 1.5 6.0 0.6 2.4
4.0 2.0 8.0 0.8 3.2
5.0 2.5 10.0 1.0 4.0
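
These model types can be checked numerically. The sketch below uses hypothetical helper functions (not taken from any published algorithm) to test whether a candidate submatrix fits the constant, constant-row, additive-coherent, or multiplicative-coherent model within a tolerance:

    import numpy as np

    def is_constant(B, tol=1e-9):
        # (a) every entry equals the same value
        return np.ptp(B) <= tol

    def is_constant_rows(B, tol=1e-9):
        # (b) each row is constant (apply to B.T to test constant columns, case (c))
        return all(np.ptp(row) <= tol for row in B)

    def is_additive_coherent(B, tol=1e-9):
        # (d) B[i, j] = mu + a_i + b_j  <=>  double-centering leaves a zero residual
        R = B - B.mean(axis=1, keepdims=True) - B.mean(axis=0, keepdims=True) + B.mean()
        return np.abs(R).max() <= tol

    def is_multiplicative_coherent(B, tol=1e-9):
        # (e) B[i, j] = a_i * b_j, i.e. additive coherence on the logarithms
        # (requires strictly positive entries)
        return is_additive_coherent(np.log(B), tol)

    additive = np.array([[1.0, 4.0, 5.0, 0.0, 1.5],
                         [4.0, 7.0, 8.0, 3.0, 4.5],
                         [3.0, 6.0, 7.0, 2.0, 3.5],
                         [5.0, 8.0, 9.0, 4.0, 5.5],
                         [2.0, 5.0, 6.0, 1.0, 2.5]])
    print(is_additive_coherent(additive))  # True: matches example (d)

The additive test relies on the fact that B[i, j] = mu + a_i + b_j exactly when double-centering the matrix leaves a zero residual; the multiplicative case reduces to the additive one after taking logarithms of positive entries.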


The relationship between these cluster models and other types of clustering such as correlation clustering is discussed in [5].

Algorithms

There are many biclustering algorithms developed for bioinformatics, including: Block clustering, CTWC (Coupled Two-Way Clustering), ITWC (Interrelated Two-Way Clustering), δ-bicluster, δ-pCluster, δ-pattern, FLOC, OPC, Plaid Model, OPSMs (Order-preserving submatrices), Gibbs, SAMBA (Statistical-Algorithmic Method for Bicluster Analysis)[6], Robust Biclustering Algorithm (RoBA), Crossing Minimization[7], cMonkey[8], PRMs, DCC, LEB (Localize and Extract Biclusters), QUBIC (QUalitative BIClustering), BCCA (Bi-Correlation Clustering Algorithm) and FABIA (Factor Analysis for Bicluster Acquisition)[9]. Biclustering algorithms have also been proposed and used in other application fields under the names co-clustering, bidimensional clustering, and subspace clustering[10].

Some recent algorithms, such as cMonkey, have attempted to include additional support for biclustering rectangular matrices together with other datatypes.

There is an ongoing debate about how to judge the results of these methods, as biclustering allows overlap between clusters and some algorithms allow the exclusion of hard-to-reconcile columns/conditions. Not all of the available algorithms are deterministic, and attention must be paid to the degree to which results represent stable minima. Because this is an unsupervised classification problem, the lack of a gold standard makes it difficult to spot errors in the results. One approach is to utilize multiple biclustering algorithms, with majority or super-majority voting amongst them deciding the best result. Another way is to analyse the quality of shifting and scaling patterns in biclusters[11].

Biclustering has been used in the domain of text mining (or classification), where it is popularly known as co-clustering[12]. Text corpora are represented in a vectorial form as a matrix D whose rows denote the documents and whose columns denote the words in the dictionary. Matrix elements Dij denote the occurrence of word j in document i. Co-clustering algorithms are then applied to discover blocks in D that correspond to a group of documents (rows) characterized by a group of words (columns).
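For illustration, such a document-word matrix can be co-clustered with scikit-learn's SpectralCoclustering, a freely available implementation of spectral co-clustering (used here only as an example; it is not one of the specific algorithms discussed above):

    from sklearn.cluster import SpectralCoclustering
    from sklearn.feature_extraction.text import CountVectorizer

    docs = [
        "gene expression microarray analysis",
        "microarray gene clustering expression",
        "stock market price index",
        "market price trading index",
    ]

    # Build the document-word matrix D: rows are documents, columns are words,
    # and D[i, j] counts the occurrences of word j in document i.
    vectorizer = CountVectorizer()
    D = vectorizer.fit_transform(docs)

    # Co-cluster into two blocks of (documents, words).
    model = SpectralCoclustering(n_clusters=2, random_state=0)
    model.fit(D)

    words = vectorizer.get_feature_names_out()
    for k in range(2):
        doc_idx, word_idx = model.get_indices(k)
        print("bicluster", k, "documents:", doc_idx, "words:", [words[j] for j in word_idx])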

Several approaches have been proposed based on the information contents of the resulting blocks, matrix-based approaches such as SVD and BVD, and graph-based approaches. Information-theoretic algorithms iteratively assign each row to a cluster of documents and each column to a cluster of words such that the mutual information is maximized. Matrix-based methods focus on the decomposition of matrices into blocks such that the error between the original matrix and the matrices regenerated from the decomposition is minimized. Graph-based methods tend to minimize the cuts between the clusters. Given two groups of documents d1 and d2, the number of cuts can be measured as the number of words that occur in documents of both groups d1 and d2.
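Under this graph view, a simplified (illustrative, not taken from a specific paper) way to compute the cut between two groups of documents is to count the words they share:

    def cut_between_groups(doc_words, d1, d2):
        # doc_words maps each document id to the set of words it contains;
        # d1 and d2 are collections of document ids.
        words_d1 = set().union(*(doc_words[d] for d in d1))
        words_d2 = set().union(*(doc_words[d] for d in d2))
        # The cut is the number of words occurring in documents of both groups.
        return len(words_d1 & words_d2)

    doc_words = {
        "doc1": {"gene", "expression", "microarray"},
        "doc2": {"gene", "clustering"},
        "doc3": {"market", "price", "gene"},
        "doc4": {"market", "index"},
    }
    print(cut_between_groups(doc_words, ["doc1", "doc2"], ["doc3", "doc4"]))  # 1 ("gene")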

More recently, Bisson and Hussain[12] have proposed a new approach that uses the similarity between words and the similarity between documents to co-cluster the matrix. Their method (known as χ-Sim, for cross similarity) is based on finding document-document similarity and word-word similarity, and then using classical clustering methods such as hierarchical clustering. Instead of explicitly clustering rows and columns alternately, they consider higher-order occurrences of words, which inherently takes into account the documents in which they occur. Thus, the similarity between two words is calculated based on the documents in which they occur and also the documents in which "similar" words occur. The idea here is that two documents about the same topic do not necessarily use the same set of words to describe it, but rather a subset of those words along with other similar words that are characteristic of that topic. This approach of taking higher-order similarities takes the latent semantic structure of the whole corpus into consideration, with the result of a better clustering of the documents and words.
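The coupled nature of these similarities can be sketched as an alternating update in which the document-document similarity is propagated through the word-word similarity and vice versa. The following schematic conveys the idea only; the exact χ-Sim updates and normalisation are given in [12]:

    import numpy as np

    def higher_order_similarities(D, iterations=3):
        # D is a document-word count matrix.
        # SR: document-document similarity, SC: word-word similarity.
        # Each is propagated through the other, so two documents become similar
        # if they share words or if they contain words that are themselves similar.
        n_docs, n_words = D.shape
        SR = np.eye(n_docs)
        SC = np.eye(n_words)
        for _ in range(iterations):
            SR = D @ SC @ D.T
            SC = D.T @ SR @ D
            SR /= SR.max()  # crude normalisation to keep values bounded
            SC /= SC.max()
        return SR, SC

    D = np.array([[1, 1, 0, 0],
                  [0, 1, 1, 0],
                  [0, 0, 1, 1]], dtype=float)
    SR, SC = higher_order_similarities(D)
    print(np.round(SR, 2))  # documents 0 and 2 share no word, yet gain similarity via document 1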

In contrast to other approaches, FABIA is a multiplicative model that assumes realistic non-Gaussian signal distributions with heavy tails. FABIA utilizes well-understood model selection techniques like variational approaches and applies the Bayesian framework. The generative framework allows FABIA to determine the information content of each bicluster, separating spurious biclusters from true biclusters.
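The multiplicative model underlying this kind of factor-analysis biclustering can be written as the sum, over biclusters, of outer products of a sparse loading vector and a sparse, heavy-tailed factor, plus noise; the nonzero entries of the two vectors mark the rows and columns of each bicluster. The sketch below only simulates data from such a model; it does not implement FABIA's variational fitting procedure:

    import numpy as np

    rng = np.random.default_rng(0)
    n_rows, n_cols, n_biclusters = 100, 40, 3

    X = np.zeros((n_rows, n_cols))
    for _ in range(n_biclusters):
        # Sparse loading vector: only a subset of rows participates in this bicluster.
        lam = np.zeros(n_rows)
        lam[rng.choice(n_rows, size=10, replace=False)] = rng.normal(0.0, 1.0, size=10)
        # Sparse, heavy-tailed (Laplacian) factor: only a subset of columns participates.
        z = np.zeros(n_cols)
        z[rng.choice(n_cols, size=8, replace=False)] = rng.laplace(0.0, 1.0, size=8)
        # Each bicluster contributes a rank-one (multiplicative) pattern.
        X += np.outer(lam, z)

    X += rng.normal(0.0, 0.1, size=X.shape)  # additive observation noise
    print(X.shape)  # (100, 40): a data matrix containing three planted biclusters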

References

  1. ^ Van Mechelen I, Bock HH, De Boeck P (2004). "Two-mode clustering methods: a structured overview". Statistical Methods in Medical Research. 13 (5): 363–94. doi:10.1191/0962280204sm373ra. PMID 15516031.
  2. ^ a b Mirkin, Boris (1996). Mathematical Classification and Clustering. Kluwer Academic Publishers. ISBN 0792341597.
  3. ^ Cheng Y, Church GM (2000). "Biclustering of expression data". Proceedings of the 8th International Conference on Intelligent Systems for Molecular Biology: 93–103.
  4. ^ Hartigan JA (1972). "Direct clustering of a data matrix". Journal of the American Statistical Association. 67 (337): 123–9. doi:10.2307/2284710.
  5. ^ Kriegel, H.-P. (March 2009). "Clustering High Dimensional Data: A Survey on Subspace Clustering, Pattern-based Clustering, and Correlation Clustering". ACM Transactions on Knowledge Discovery from Data (TKDD). 3 (1): 1–58. doi:10.1145/1497577.1497578.
  6. ^ Tanay A, Sharan R, Kupiec M and Shamir R (2004). "Revealing modularity and organization in the yeast molecular network by integrated analysis of highly heterogeneous genomewide data". Proc Natl Acad Sci USA. 101 (9): 2981–2986. doi:10.1073/pnas.0308661100. PMID 14973197.
  7. ^ Abdullah, Ahsan; Hussain, Amir (2006). "A new biclustering technique based on crossing minimization". Neurocomputing. 69 (16–18): 1882–1896.
  8. ^ Reiss DJ, Baliga NS, Bonneau R (2006). "Integrated biclustering of heterogeneous genome-wide datasets for the inference of global regulatory networks". BMC Bioinformatics. 7: 280. doi:10.1186/1471-2105-7-280. PMC 1502140. PMID 16749936.
  9. ^ Hochreiter S, Bodenhofer U, Heusel M, Mayr A, Mitterecker A, Kasim A, Khamiakova T, Van Sanden S, Lin D, Talloen W, Bijnens L, Gohlmann HWH, Shkedy Z, Clevert DA (2010). "FABIA: factor analysis for bicluster acquisition". Bioinformatics. 26 (12): 1520–1527. doi:10.1093/bioinformatics/btq227. PMC 2881408. PMID 20418340.
  10. ^ Madeira SC, Oliveira AL (2004). "Biclustering Algorithms for Biological Data Analysis: A Survey". IEEE Transactions on Computational Biology and Bioinformatics. 1 (1): 24–45. doi:10.1109/TCBB.2004.2. PMID 17048406.
  11. ^ Aguilar-Ruiz JS (2005). "Shifting and scaling patterns from gene expression data". Bioinformatics. 21 (10): 3840–3845. doi:10.1093/bioinformatics/bti641. PMID 16144809.
  12. ^ a b Bisson G. and Hussain F. (2008). "Chi-Sim: A new similarity measure for the co-clustering task". ICMLA: 211–217. doi:10.1109/ICMLA.2008.103.
