Information bottleneck method
The information bottleneck method is a technique in information theory introduced by Naftali Tishby, Fernando C. Pereira, and William Bialek. It is designed for finding the best tradeoff between accuracy and complexity (compression) when summarizing (e.g. clustering) a random variable X, given a joint probability distribution p(X,Y) between X and an observed relevant variable Y. Other applications include distributional clustering and dimension reduction, and more recently it has been suggested as a theoretical foundation for deep learning. It generalizes the classical notion of minimal sufficient statistics from parametric statistics to arbitrary distributions, not necessarily of exponential form. It does so by relaxing the sufficiency condition to capture some fraction of the mutual information with the relevant variable Y.
The information bottleneck can also be viewed as a rate distortion problem, with a distortion function that measures how well Y is predicted from a compressed representation T compared to its direct prediction from X. This interpretation provides a general iterative algorithm for solving the information bottleneck tradeoff and calculating the information curve from the distribution p(X,Y).
The compressed variable is $T$, and the algorithm minimizes

$$\min_{p(t \mid x)} \; I(X;T) - \beta\, I(T;Y),$$

where $I(X;T)$ and $I(T;Y)$ are the mutual information between $X$ and $T$ and between $T$ and $Y$, respectively, and $\beta$ is a Lagrange multiplier.
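This objective can be evaluated numerically for discrete distributions. The following sketch (function names and the use of natural logarithms are our own choices, not from the original paper) computes $I(X;T) - \beta\, I(T;Y)$ for an encoder $p(t \mid x)$:

```python
import numpy as np

def mutual_information(p_ab):
    """I(A;B) in nats from a joint distribution given as a 2-D array."""
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    mask = p_ab > 0                      # skip zero-probability cells
    return float(np.sum(p_ab[mask] * np.log(p_ab[mask] / (p_a @ p_b)[mask])))

def ib_objective(p_xy, p_t_given_x, beta):
    """IB Lagrangian I(X;T) - beta * I(T;Y) for an encoder p(t|x).

    p_xy: joint distribution, shape (n_x, n_y), sums to 1.
    p_t_given_x: encoder, shape (n_x, n_t), rows sum to 1.
    """
    p_x = p_xy.sum(axis=1)
    p_xt = p_t_given_x * p_x[:, None]    # joint p(x, t)
    p_ty = p_t_given_x.T @ p_xy          # joint p(t, y) = sum_x p(t|x) p(x,y)
    return mutual_information(p_xt) - beta * mutual_information(p_ty)
```

For instance, with $X$ and $Y$ independent, an identity encoder pays the full coding cost $I(X;T) = H(X)$ while gaining no relevant information, whereas a constant encoder scores zero on both terms.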
Gaussian bottleneck
The Gaussian bottleneck, namely applying the information bottleneck approach to Gaussian variables, leads to solutions related to canonical correlation analysis. Assume $X, Y$ are jointly multivariate zero-mean normal vectors with covariances $\Sigma_{XX}, \Sigma_{YY}, \Sigma_{XY}$, and $T$ is a compressed version of $X$ that must maintain a given value of mutual information with $Y$. It can be shown that the optimum $T$ is a normal vector consisting of linear combinations of the elements of $X$, $T = AX$, where the matrix $A$ has orthogonal rows.
The projection matrix $A$ in fact contains rows selected from the weighted left eigenvectors of the singular value decomposition of the (generally asymmetric) matrix

$$\Omega = \Sigma_{X|Y}\,\Sigma_{XX}^{-1} = I - \Sigma_{XY}\,\Sigma_{YY}^{-1}\,\Sigma_{XY}^{T}\,\Sigma_{XX}^{-1}.$$

Define the singular value decomposition

$$\Omega = U \Lambda V^{T} \quad \text{with} \quad \Lambda = \operatorname{diag}(\lambda_1 \le \lambda_2 \le \cdots \le \lambda_N)$$

and the critical values

$$\beta_i^{C} = (1 - \lambda_i)^{-1}.$$

Then the number $n(\beta)$ of active eigenvectors in the projection, or order of approximation, is given by

$$n(\beta) = \max\{\, i : \beta_i^{C} \le \beta \,\},$$

and we finally get

$$A = \big[\, w_1 v_1, \dots, w_n v_n, 0, \dots, 0 \,\big]^{T},$$

in which the weights are given by

$$w_i = \sqrt{\frac{\beta (1 - \lambda_i) - 1}{\lambda_i r_i}}, \qquad r_i = v_i^{T}\, \Sigma_{XX}\, v_i.$$
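A numerical sketch of this construction, following Chechik et al. (2005) but with function name, argument names, and tolerance choices of our own, builds $A$ from the left eigenvectors of $\Sigma_{X|Y}\Sigma_{XX}^{-1}$ whose critical $\beta$ lies below the chosen tradeoff value:

```python
import numpy as np

def gaussian_ib_projection(Sxx, Syy, Sxy, beta):
    """Sketch of the Gaussian IB projection A for T = A X.

    Rows of A are the weighted left eigenvectors of
    Omega = Sigma_{x|y} Sigma_xx^{-1} that are "active" at this beta,
    i.e. whose critical value (1 - lambda_i)^{-1} is below beta.
    """
    S_cond = Sxx - Sxy @ np.linalg.inv(Syy) @ Sxy.T   # Sigma_{x|y}
    Omega = S_cond @ np.linalg.inv(Sxx)
    lam, V = np.linalg.eig(Omega.T)                   # left eigenvectors of Omega
    lam, V = lam.real, V.real
    rows = []
    for i in np.argsort(lam):                         # smallest lambda first
        l = lam[i]
        # active when lambda_i < 1 and beta exceeds the critical value
        if 1e-12 < l < 1.0 and beta > 1.0 / (1.0 - l):
            v = V[:, i]
            r = v @ Sxx @ v                           # r_i = v^T Sigma_xx v
            w = np.sqrt((beta * (1.0 - l) - 1.0) / (l * r))
            rows.append(w * v)
    return np.array(rows) if rows else np.zeros((0, Sxx.shape[0]))
```

With $\Sigma_{XX} = I$ and $Y$ correlated only with the first component of $X$, the projection keeps only that component once $\beta$ passes its critical value, and is empty below it.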
Applying the Gaussian information bottleneck to time series (processes) yields solutions related to optimal predictive coding. This procedure is formally equivalent to linear slow feature analysis.
Optimal temporal structures in linear dynamic systems can be revealed in the so-called past-future information bottleneck, an application of the bottleneck method to non-Gaussian sampled data. The concept, as treated by Creutzig, Tishby et al., involves two independent phases: first, estimation of the unknown parent probability densities from which the data samples are drawn, and second, the use of these densities within the information-theoretic framework of the bottleneck.
Since the bottleneck method is framed in probabilistic rather than statistical terms, the underlying probability density at the sample points must be estimated. This is a well-known problem with multiple solutions described by Silverman. In the present method, joint sample probabilities are found by use of a Markov transition-matrix method, and this has some mathematical synergy with the bottleneck method itself.
Define an arbitrarily increasing distance metric $f$ between all sample pairs and the distance matrix $d_{i,j} = f(\lVert x_i - x_j \rVert)$. Then compute transition probabilities between sample pairs, $P_{i,j} = \exp(-\lambda d_{i,j})$ for some $\lambda > 0$. Treating samples as states, and a normalised version of $P$ as a Markov state transition probability matrix, the vector of probabilities of the 'states' after $t$ steps, conditioned on the initial state $p(0)$, is $p(t) = P^{t} p(0)$. The equilibrium probability vector $p(\infty)$ is given, in the usual way, by the dominant eigenvector of the matrix $P$, which is independent of the initialising vector $p(0)$. This Markov transition method establishes a probability at the sample points which is claimed to be proportional to the probability density there.
Other interpretations of the use of the eigenvalues of distance matrix are discussed in Silverman's Density Estimation for Statistics and Data Analysis.
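The density-estimation step can be sketched as follows (function name, squared-Euclidean distance, and the default parameters are our own choices); the equilibrium vector is obtained here by simple power iteration rather than an explicit eigendecomposition:

```python
import numpy as np

def markov_density_weights(samples, lam=1.0, n_steps=500):
    """Equilibrium distribution of the pairwise-distance Markov chain.

    d_ij = ||x_i - x_j||^2 serves as the increasing distance,
    P_ij ∝ exp(-lam * d_ij) is row-normalised into a transition matrix,
    and iterating p <- p P converges to the dominant (equilibrium)
    eigenvector, claimed proportional to the density at each sample.
    """
    X = np.asarray(samples, dtype=float)
    d = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # distance matrix
    P = np.exp(-lam * d)
    P /= P.sum(axis=1, keepdims=True)        # Markov transition matrix
    p = np.full(len(X), 1.0 / len(X))        # arbitrary initial state vector
    for _ in range(n_steps):
        p = p @ P                            # power iteration toward equilibrium
    return p / p.sum()
```

On a toy set with three tightly spaced points and one outlier, the outlier receives the smallest weight, consistent with the lower density around it.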
In the following soft clustering example, the reference vector $Y$ contains sample categories and the joint probability $p(X,Y)$ is assumed known. A soft cluster $c_k$ is defined by its probability distribution over the data samples $x_i$: $p(c_k \mid x_i)$. Tishby et al. presented the following iterative set of equations to determine the clusters, which are ultimately a generalization of the Blahut–Arimoto algorithm, developed in rate distortion theory. The application of this type of algorithm in neural networks appears to originate in entropy arguments arising in the application of Gibbs distributions in deterministic annealing.

$$p(c \mid x) = K\, p(c)\, \exp\!\big(-\beta\, D^{KL}\big[\, p(y \mid x) \,\|\, p(y \mid c) \,\big]\big)$$
$$p(y \mid c) = \textstyle\sum_x p(y \mid x)\, p(c \mid x)\, p(x) \,/\, p(c)$$
$$p(c) = \textstyle\sum_x p(c \mid x)\, p(x)$$
The function of each line of the iteration expands as
Line 1: This is a matrix-valued set of conditional probabilities

$$p(c_i \mid x_j) = K\, p(c_i)\, e^{-\beta\, D^{KL}[\, p(y \mid x_j) \,\|\, p(y \mid c_i) \,]}.$$

The Kullback–Leibler divergence between the distributions generated by the sample data and those generated by its reduced information proxy is applied to assess the fidelity of the compressed vector with respect to the reference (or categorical) data, in accordance with the fundamental bottleneck equation. $D^{KL}[\, a \,\|\, b \,]$ is the Kullback–Leibler divergence between distributions $a$ and $b$,

$$D^{KL}[\, a \,\|\, b \,] = \textstyle\sum_i a(i)\, \log \frac{a(i)}{b(i)},$$

and $K$ is a scalar normalization. The weighting by the negative exponent of the divergence means that prior cluster probabilities are downweighted in line 1 when the Kullback–Leibler divergence is large; thus successful clusters grow in probability while unsuccessful ones decay.
Line 2: This is a second matrix-valued set of conditional probabilities. By definition,

$$p(y_i \mid c_k) = \textstyle\sum_j p(y_i \mid x_j)\, p(x_j \mid c_k) = \textstyle\sum_j p(y_i \mid x_j)\, p(c_k \mid x_j)\, p(x_j) \,/\, p(c_k),$$

where the Bayes identities $p(a,b) = p(a \mid b)\, p(b) = p(b \mid a)\, p(a)$ are used.
Line 3: This line finds the marginal distribution of the clusters,

$$p(c_i) = \textstyle\sum_j p(c_i \mid x_j)\, p(x_j).$$

This is a standard result.
Further inputs to the algorithm are the marginal sample distribution $p(x)$, which has already been determined by the dominant eigenvector of $P$, and the matrix-valued Kullback–Leibler divergence function

$$D^{KL}_{i,j} = D^{KL}\big[\, p(y \mid x_j) \,\|\, p(y \mid c_i) \,\big],$$

derived from the sample spacings and transition probabilities.
The matrix $p(c_i \mid x_j)$ can be initialized randomly or with a reasonable guess, while the matrix $p(y_i \mid c_k)$ needs no prior values. Although the algorithm converges, multiple minima may exist that would need to be resolved.
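A minimal sketch of the three-line iteration for discrete data follows; the function name, random initialisation, iteration count, and the small clipping constant used to avoid log(0) are all our own choices:

```python
import numpy as np

def ib_clusters(p_xy, n_clusters, beta, n_iter=300, seed=0):
    """Three-line iterative algorithm for p(c|x), p(y|c), p(c).

    p_xy: known joint distribution over (x, y), shape (n_x, n_y), sums to 1.
    """
    rng = np.random.default_rng(seed)
    n_x, _ = p_xy.shape
    p_x = p_xy.sum(axis=1)
    p_y_given_x = p_xy / p_x[:, None]
    p_c_given_x = rng.random((n_x, n_clusters))          # random start for p(c|x)
    p_c_given_x /= p_c_given_x.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # line 3: p(c) = sum_x p(c|x) p(x)
        p_c = p_x @ p_c_given_x
        # line 2: p(y|c) = sum_x p(y|x) p(c|x) p(x) / p(c)
        p_y_given_c = (p_c_given_x * p_x[:, None]).T @ p_y_given_x / p_c[:, None]
        p_y_given_c = np.maximum(p_y_given_c, 1e-12)     # avoid log(0) below
        # line 1: p(c|x) = K p(c) exp(-beta * KL[p(y|x) || p(y|c)])
        log_ratio = np.log(np.maximum(p_y_given_x, 1e-12)[:, None, :]
                           / p_y_given_c[None, :, :])
        kl = np.sum(p_y_given_x[:, None, :] * log_ratio, axis=2)  # (n_x, n_c)
        p_c_given_x = p_c[None, :] * np.exp(-beta * kl)
        p_c_given_x /= p_c_given_x.sum(axis=1, keepdims=True)     # K normalises
    return p_c_given_x, p_y_given_c, p_c
```

On a joint distribution where the first two $x$ values deterministically produce one category and the last two the other, a sufficiently large $\beta$ drives the soft assignments to the expected hard two-cluster split.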
Defining decision contours
To categorize a new sample $x'$ external to the training set $X$, the previous distance metric gives the transition probabilities between $x'$ and all samples in $X$,

$$\tilde p(x_i) = p(x_i \mid x') = K\, e^{-\lambda\, d(x', x_i)},$$

with $K$ a normalization. Secondly, apply the last two lines of the three-line algorithm to get cluster and conditional category probabilities.
Parameter $\beta$ must be kept under close supervision since, as it is increased from zero, increasing numbers of features in the category probability space snap into focus at certain critical thresholds.
The following case examines clustering in a four-quadrant multiplier with random inputs $u, v$ and two categories of output, $\pm 1$, generated by $y = \operatorname{sign}(uv)$. This function has two spatially separated clusters for each category and so demonstrates that the method can handle such distributions.
Twenty samples are taken, uniformly distributed on the square $[-1,1]^2$. The number of clusters used beyond the number of categories, two in this case, has little effect on performance, and the results are shown for two clusters using parameters $\lambda = 3,\ \beta = 2.5$.
The distance function is $d_{i,j} = \lVert x_i - x_j \rVert^2$, where $x_i = (u_i, v_i)$, while the conditional distribution $p(y \mid x)$ is a $2 \times 20$ indicator matrix with

$$\Pr(y_i = 1) = 1 \ \text{if}\ \operatorname{sign}(u_i v_i) = 1, \qquad \Pr(y_i = -1) = 1 \ \text{if}\ \operatorname{sign}(u_i v_i) = -1,$$

and zero elsewhere.
The summation in line 2 incorporates only two values representing the training values of +1 or −1, but nevertheless works well. The figure shows the locations of the twenty samples, with '0' representing $y = 1$ and 'x' representing $y = -1$. The contour at the unity likelihood ratio level,

$$L = \frac{\Pr(y = 1)}{\Pr(y = -1)} = 1,$$

is shown as a new sample $x'$ is scanned over the square. Theoretically the contour should align with the $u = 0$ and $v = 0$ coordinates but, for such small sample numbers, it has instead followed the spurious clusterings of the sample points.
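The sample generation for this example can be sketched as follows; the seed, variable names, and the use of a sign-of-product label are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)              # arbitrary seed for this sketch
uv = rng.uniform(-1.0, 1.0, size=(20, 2))   # 20 samples on the square [-1, 1]^2
y = np.sign(uv[:, 0] * uv[:, 1])            # four-quadrant multiplier output: ±1
# conditional distribution p(y|x): a 2 x 20 indicator matrix (rows y=+1, y=-1)
p_y_given_x = np.vstack([(y == 1.0).astype(float),
                         (y == -1.0).astype(float)])
```

Each column of `p_y_given_x` is a one-hot category indicator for the corresponding sample, matching the $2 \times 20$ conditional distribution described above.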
Neural network/fuzzy logic analogies
This algorithm is somewhat analogous to a neural network with a single hidden layer. The internal nodes are represented by the clusters $c_j$, and the first and second layers of network weights are the conditional probabilities $p(c_j \mid x_i)$ and $p(y_k \mid c_j)$, respectively. However, unlike a standard neural network, the algorithm relies entirely on probabilities as inputs rather than the sample values themselves, while internal and output values are all conditional probability distributions. Nonlinear functions are encapsulated in the distance metric (or influence functions/radial basis functions) and transition probabilities instead of sigmoid functions.
The Blahut–Arimoto three-line algorithm converges rapidly, often in tens of iterations, and by varying $\lambda$ and $\beta$ and the cardinality of the clusters, various levels of focus on features can be achieved.
The statistical soft clustering definition has some overlap with the verbal fuzzy membership concept of fuzzy logic.
Extensions
An interesting extension is the case of information bottleneck with side information. Here information is maximized about one target variable and minimized about another, learning a representation that is informative about selected aspects of data. Formally,

$$\min_{p(t \mid x)} \; I(X;T) - \beta^{+} I(T;Y^{+}) + \beta^{-} I(T;Y^{-}),$$

so that $T$ retains what $X$ conveys about the relevant variable $Y^{+}$ while discarding what it shares with the irrelevant variable $Y^{-}$.
References
- Weiss, Y. (1999), "Segmentation using eigenvectors: a unifying view", Proceedings IEEE International Conference on Computer Vision (PDF), pp. 975–982
- Harremoës, P.; Tishby, N. (2007). "The Information Bottleneck Revisited or How to Choose a Good Distortion Measure". Proceedings of the IEEE International Symposium on Information Theory (ISIT) 2007.
- Tishby, Naftali; Pereira, Fernando C.; Bialek, William (September 1999). The Information Bottleneck Method (PDF). The 37th annual Allerton Conference on Communication, Control, and Computing. pp. 368–377.
- Chechik, Gal; Globerson, Amir; Tishby, Naftali; Weiss, Yair (1 January 2005). Dayan, Peter, ed. "Information Bottleneck for Gaussian Variables" (PDF). Journal of Machine Learning Research (published 1 May 2005) (6): 165–188.
- Creutzig, Felix; Sprekeler, Henning (2007-12-17). "Predictive Coding and the Slowness Principle: An Information-Theoretic Approach". Neural Computation. 20 (4): 1026–1041. doi:10.1162/neco.2008.01-07-455. ISSN 0899-7667.
- Creutzig, Felix; Globerson, Amir; Tishby, Naftali (2009-04-27). "Past-future information bottleneck in dynamical systems". Physical Review E. 79 (4): 041925. doi:10.1103/PhysRevE.79.041925.
- Silverman, Bernard W. (1986). Density Estimation for Statistics and Data Analysis. Chapman & Hall. ISBN 978-0412246203.
- Slonim, Noam; Tishby, Naftali (2000-01-01). "Document Clustering Using Word Clusters via the Information Bottleneck Method". Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. SIGIR '00. New York, NY, USA: ACM: 208–215. doi:10.1145/345508.345578. ISBN 1-58113-226-3.
- D. J. Miller, A. V. Rao, K. Rose, A. Gersho: "An Information-theoretic Learning Algorithm for Neural Network Classification". NIPS 1995: pp. 591–597
- Tishby, Naftali; Slonim, N. Data clustering by Markovian Relaxation and the Information Bottleneck Method (PDF). Neural Information Processing Systems (NIPS) 2000. pp. 640–646.
- Chechik, Gal; Tishby, Naftali (2002). "Extracting Relevant Structures with Side Information" (PDF). Advances in Neural Information Processing Systems: 857–864.