Expectation–maximization algorithm

An expectation-maximization (EM) algorithm is used in statistics for finding maximum likelihood estimates of parameters in probabilistic models, where the model depends on unobserved latent variables. EM alternates between performing an expectation (E) step, which computes an expectation of the likelihood by including the latent variables as if they were observed, and a maximization (M) step, which computes the maximum likelihood estimates of the parameters by maximizing the expected likelihood found on the E step. The parameters found on the M step are then used to begin another E step, and the process is repeated.

Applications

Example of applying the EM algorithm to estimate the parameters of a 12-component Bernoulli Mixture Model for the MNIST database of handwritten digits [1].

EM is frequently used for data clustering in machine learning and computer vision.

In psychometrics, EM is almost indispensable for estimating item parameters and latent abilities of item response theory models.

Specification of the EM procedure

Let $\mathbf{y}$ denote the incomplete data consisting of values of observable variables and $\mathbf{x}$ the missing data. Together, $\mathbf{x}$ and $\mathbf{y}$ form the complete data. $\mathbf{x}$ can either be actual missing measurements or a hidden variable that would make the problem easier if its value were known. For instance, in mixture models, the likelihood formula would be much more convenient if the mixture components that "generated" the samples were known (see example below).

Estimate unobservable data

Let $p(\mathbf{y}, \mathbf{x} \mid \theta)$ be the joint probability density function (continuous case) or probability mass function (discrete case) of the complete data, with parameters given by the vector $\theta$. This function also gives the complete data likelihood. Further, note that the conditional distribution of the missing data given the observed data can be expressed as

    p(\mathbf{x} \mid \mathbf{y}, \theta) = \frac{p(\mathbf{y}, \mathbf{x} \mid \theta)}{p(\mathbf{y} \mid \theta)} = \frac{p(\mathbf{y} \mid \mathbf{x}, \theta)\, p(\mathbf{x} \mid \theta)}{\int p(\mathbf{y} \mid \hat{\mathbf{x}}, \theta)\, p(\hat{\mathbf{x}} \mid \theta)\, d\hat{\mathbf{x}}}

using Bayes' rule and the law of total probability. This formulation only requires knowledge of the observation likelihood given the unobservable data, $p(\mathbf{y} \mid \mathbf{x}, \theta)$, and the probability of the unobservable data, $p(\mathbf{x} \mid \theta)$.
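
As a concrete illustration of this step for a discrete latent variable, the following Python sketch (illustrative names, not taken from the article) computes $p(\mathbf{x} \mid \mathbf{y}, \theta)$ by normalizing likelihood times prior over the possible values of the latent variable:

    import numpy as np

    # Sketch: posterior over a discrete latent variable x for one observation y,
    # given per-value likelihoods p(y | x = i, theta) and priors p(x = i | theta).
    def latent_posterior(likelihoods, priors):
        joint = likelihoods * priors   # p(y, x = i | theta) for each i
        return joint / joint.sum()     # divide by p(y | theta) = sum_i p(y, x = i | theta)

    # Example with three latent values (numbers are arbitrary)
    likelihoods = np.array([0.05, 0.20, 0.01])   # p(y | x = i, theta)
    priors      = np.array([0.50, 0.30, 0.20])   # p(x = i | theta)
    print(latent_posterior(likelihoods, priors)) # posterior probabilities, sums to 1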

Maximize expected log-likelihood for the complete dataset

An EM algorithm will then iteratively improve an initial estimate $\theta_0$ and construct new estimates $\theta_1, \theta_2, \ldots, \theta_n, \ldots$. An individual re-estimation step that derives $\theta_{n+1}$ from $\theta_n$ takes the following form:

    \theta_{n+1} = \arg\max_{\theta} \, E_{\mathbf{x}} \!\left[ \log p(\mathbf{y}, \mathbf{x} \mid \theta) \mid \mathbf{y} \right]

where $E_{\mathbf{x}}[\cdot]$ denotes the conditional expectation of $\log p(\mathbf{y}, \mathbf{x} \mid \theta)$, taken with the $\theta$ in the conditional distribution of $\mathbf{x}$ fixed at $\theta_n$. The log-likelihood $\log p(\mathbf{y}, \mathbf{x} \mid \theta)$ is often used instead of the true likelihood because it leads to easier formulas but still attains its maximum at the same point as the likelihood.

In other words, $\theta_{n+1}$ is the value that maximizes (M) the conditional expectation (E) of the complete data log-likelihood given the observed variables under the previous parameter value. This expectation is usually denoted $Q(\theta)$. In the continuous case, it would be given by

    Q(\theta) = E_{\mathbf{x}} \!\left[ \log p(\mathbf{y}, \mathbf{x} \mid \theta) \mid \mathbf{y} \right] = \int_{-\infty}^{\infty} p(\mathbf{x} \mid \mathbf{y}, \theta_n) \, \log p(\mathbf{y}, \mathbf{x} \mid \theta) \, d\mathbf{x}

Speaking of an expectation (E) step is a bit of a misnomer. What is calculated in the first step are the fixed, data-dependent parameters of the function $Q$. Once the parameters of $Q$ are known, it is fully determined and is maximized in the second (M) step of an EM algorithm.
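
A minimal sketch of this alternation, assuming hypothetical e_step and m_step helpers and a scalar parameter, might look as follows:

    # Minimal sketch of the generic EM alternation (hypothetical helpers, scalar parameter).
    def expectation_maximization(y, theta0, e_step, m_step, n_iter=100, tol=1e-6):
        theta = theta0
        for _ in range(n_iter):
            q_params = e_step(y, theta)       # E: fix the data-dependent parameters of Q, i.e. p(x | y, theta_n)
            new_theta = m_step(y, q_params)   # M: maximize the expected complete data log-likelihood Q
            if abs(new_theta - theta) < tol:  # stop once the estimate stabilizes (scalar theta assumed)
                return new_theta
            theta = new_theta
        return theta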

Properties

It can be shown that an EM iteration does not decrease the observed data likelihood function. However, there is no guarantee that the sequence converges to a maximum likelihood estimator. For multimodal distributions, this means that an EM algorithm will converge to a local maximum (or saddle point) of the observed data likelihood function, depending on starting values. There are a variety of heuristic approaches for escaping a local maximum, such as using several different random initial estimates $\theta_0$, or applying simulated annealing.
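
As an illustration of the random-restart heuristic, a sketch (with hypothetical run_em and log_likelihood helpers) could run EM from several random initial estimates and keep the fit with the highest observed data log-likelihood:

    import numpy as np

    # Sketch of the random-restart heuristic: run EM from several random initial
    # estimates theta_0 and keep the result with the highest observed data log-likelihood.
    def em_with_restarts(y, run_em, log_likelihood, n_restarts=10, seed=0):
        rng = np.random.default_rng(seed)
        best_theta, best_ll = None, -np.inf
        for _ in range(n_restarts):
            theta0 = rng.normal()           # random initial estimate; problem-specific in practice
            theta = run_em(y, theta0)       # run EM to convergence from this start
            ll = log_likelihood(y, theta)   # observed data log-likelihood log p(y | theta)
            if ll > best_ll:
                best_theta, best_ll = theta, ll
        return best_theta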

EM is particularly useful when maximum likelihood estimation of a complete data model is easy. If closed-form estimators exist, the M step is often trivial. A classic example is maximum likelihood estimation of a finite mixture of Gaussians, where each component of the mixture can be estimated trivially if the mixing distribution is known.

"Expectation-maximization" is a description of a class of related algorithms, not a specific algorithm; EM is a recipe or meta-algorithm which is used to devise particular algorithms. The Baum-Welch algorithm is an example of an EM algorithm applied to hidden Markov models. Another example is the EM algorithm for fitting a mixture density model.

An EM algorithm can also find maximum a posteriori (MAP) estimates, by performing MAP estimation in the M step, rather than maximum likelihood.
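
Concretely, the M step of the MAP variant maximizes the expected complete data log-likelihood plus the log-prior (written here in the notation above, for illustration):

    \theta_{n+1} = \arg\max_{\theta} \left[ Q(\theta) + \log p(\theta) \right]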

There are other methods for finding maximum likelihood estimates, such as gradient descent, conjugate gradient or variations of the Gauss-Newton method. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function.
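
For contrast, a minimal sketch of such a derivative-based approach (gradient ascent on the observed data log-likelihood, with a hypothetical grad_loglik function) is:

    # Sketch of a derivative-based alternative: gradient ascent on the observed data
    # log-likelihood log p(y | theta). grad_loglik is a hypothetical function returning
    # d/dtheta log p(y | theta); unlike EM, this derivative must be supplied explicitly.
    def gradient_ascent_mle(y, theta0, grad_loglik, step_size=0.01, n_iter=1000):
        theta = theta0
        for _ in range(n_iter):
            theta = theta + step_size * grad_loglik(y, theta)
        return theta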

Incremental versions

The classic EM procedure is to replace both $Q$ and $\theta$ with their optimal possible (argmax) values at each iteration. However, it can be shown (see Neal & Hinton, 1999) that simply finding $Q$ and $\theta$ that give some improvement over their current values will also ensure successful convergence.

For example, to improve $Q$, we could restrict the space of possible functions to a computationally simple distribution such as a factorial distribution,

    Q = \prod_i Q_i

Thus at each E step we compute the variational approximation of Q.

To improve θ, we could use any hill-climbing method, and not worry about finding the optimal θ, just some improvement. This method is also known as Generalized EM (GEM).
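
A sketch of such a generalized M step, replacing the full maximization with a single hill-climbing step on $Q$ (assuming a hypothetical grad_q function):

    # Sketch of a Generalized EM (GEM) update: instead of fully maximizing Q in the M step,
    # take a single gradient-ascent step, which already improves Q over its current value.
    # grad_q is a hypothetical function returning dQ/dtheta at theta.
    def gem_m_step(theta, grad_q, step_size=0.01):
        return theta + step_size * grad_q(theta)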

Relation to variational Bayes methods

EM is a partially non-Bayesian, maximum likelihood method. Its final result gives a probability distribution over the latent variables (in the Bayesian style) together with a point estimate for θ (either a maximum likelihood estimate or a posterior mode). We may want a fully Bayesian version of this, giving a probability distribution over θ as well as the latent variables. In fact the Bayesian approach to inference is simply to treat θ as another latent variable. In this paradigm, the distinction between the E and M steps disappears. If we use the factorized Q approximation as described above (variational Bayes), we may iterate over each latent variable (now including θ) and optimize them one at a time. There are now k steps per iteration, where k is the number of latent variables. For graphical models this is easy to do as each variable's new Q depends only on its Markov blanket, so local message passing can be used for efficient inference.

Example: Gaussian mixture

Assume that the samples $x_1, \ldots, x_m$, where $x_j \in \mathbb{R}^d$, are drawn from the Gaussians $N_1, \ldots, N_n$, such that

    P(x \mid N_i, \theta) = \frac{1}{(2\pi)^{d/2} |\Sigma_i|^{1/2}} \exp\!\left( -\frac{1}{2} (x - \mu_i)^T \Sigma_i^{-1} (x - \mu_i) \right)

The model we are trying to estimate is

    \theta = \{ \mu_1, \ldots, \mu_n, \Sigma_1, \ldots, \Sigma_n, P(N_1), \ldots, P(N_n) \}

E-step:

Estimate the unobserved event (which Gaussian was used), conditioned on the observation, using the values from the last maximization step:

    P(N_i \mid x_j, \theta_t) = \frac{P(x_j \mid N_i, \theta_t)\, P(N_i \mid \theta_t)}{\sum_{k=1}^{n} P(x_j \mid N_k, \theta_t)\, P(N_k \mid \theta_t)}

M-step:

We want to maximize the expected log-likelihood of the joint event:

    Q(\theta) = E_{N} \!\left[ \ln \prod_{j=1}^{m} P(x_j, N \mid \theta) \right] = \sum_{j=1}^{m} \sum_{i=1}^{n} P(N_i \mid x_j, \theta_t) \, \ln P(x_j, N_i \mid \theta)

If we expand the probability of the joint event, we get

    P(x_j, N_i \mid \theta) = P(x_j \mid N_i, \theta) \, P(N_i \mid \theta)

We have the constraint

    \sum_{i=1}^{n} P(N_i \mid \theta) = 1

If we add a Lagrange multiplier and expand the Gaussian pdf, we get

    \sum_{j=1}^{m} \sum_{i=1}^{n} P(N_i \mid x_j, \theta_t) \left[ -\tfrac{d}{2} \ln(2\pi) - \tfrac{1}{2} \ln |\Sigma_i| - \tfrac{1}{2} (x_j - \mu_i)^T \Sigma_i^{-1} (x_j - \mu_i) + \ln P(N_i \mid \theta) \right] - \lambda \left( \sum_{i=1}^{n} P(N_i \mid \theta) - 1 \right)

To find the new estimate $\theta_{t+1}$, we find a maximum where $\partial Q / \partial \theta = 0$.

New estimate for the mean (using some differentiation rules from matrix calculus):

    \frac{\partial Q}{\partial \mu_i} = \sum_{j=1}^{m} P(N_i \mid x_j, \theta_t) \, \Sigma_i^{-1} (x_j - \mu_i) = 0

    \mu_i^{(t+1)} = \frac{\sum_{j=1}^{m} P(N_i \mid x_j, \theta_t) \, x_j}{\sum_{j=1}^{m} P(N_i \mid x_j, \theta_t)}

New estimate for the covariance:

    \Sigma_i^{(t+1)} = \frac{\sum_{j=1}^{m} P(N_i \mid x_j, \theta_t) \, (x_j - \mu_i^{(t+1)}) (x_j - \mu_i^{(t+1)})^T}{\sum_{j=1}^{m} P(N_i \mid x_j, \theta_t)}

New estimate for the class probability:

    \frac{\partial Q}{\partial P(N_i \mid \theta)} = \sum_{j=1}^{m} P(N_i \mid x_j, \theta_t) \, \frac{1}{P(N_i \mid \theta)} - \lambda = 0
    \quad\Rightarrow\quad
    P(N_i \mid \theta_{t+1}) = \frac{1}{\lambda} \sum_{j=1}^{m} P(N_i \mid x_j, \theta_t)

Inserting into the constraint:

    \sum_{i=1}^{n} \frac{1}{\lambda} \sum_{j=1}^{m} P(N_i \mid x_j, \theta_t) = 1
    \quad\Rightarrow\quad
    \lambda = \sum_{j=1}^{m} \sum_{i=1}^{n} P(N_i \mid x_j, \theta_t) = m

Inserting into our estimate:

    P(N_i \mid \theta_{t+1}) = \frac{1}{m} \sum_{j=1}^{m} P(N_i \mid x_j, \theta_t)

These estimates now become our $\theta_{t+1}$, to be used in the next estimation step.
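
The following NumPy sketch (not part of the article; the function name em_gaussian_mixture and the initialization scheme are illustrative choices) implements the E-step responsibilities and the M-step updates for the means, covariances and class probabilities derived above:

    import numpy as np

    def em_gaussian_mixture(X, n_components, n_iter=100, seed=0):
        """Sketch of EM for a Gaussian mixture, following the updates derived above.

        X: (m, d) array of samples x_1, ..., x_m.
        Returns means (n, d), covariances (n, d, d) and class probabilities (n,).
        """
        m, d = X.shape
        rng = np.random.default_rng(seed)

        # Initial estimate theta_0: random means, identity covariances, uniform class probabilities.
        mu = X[rng.choice(m, size=n_components, replace=False)]
        sigma = np.stack([np.eye(d)] * n_components)
        pi = np.full(n_components, 1.0 / n_components)

        for _ in range(n_iter):
            # E-step: responsibilities P(N_i | x_j, theta_t) via Bayes' rule.
            resp = np.empty((m, n_components))
            for i in range(n_components):
                diff = X - mu[i]                                   # (m, d)
                inv = np.linalg.inv(sigma[i])
                mahal = np.einsum('jk,kl,jl->j', diff, inv, diff)  # (x_j - mu_i)^T Sigma_i^{-1} (x_j - mu_i)
                norm = (2 * np.pi) ** (d / 2) * np.sqrt(np.linalg.det(sigma[i]))
                resp[:, i] = pi[i] * np.exp(-0.5 * mahal) / norm   # P(x_j | N_i) P(N_i)
            resp /= resp.sum(axis=1, keepdims=True)                # normalize over components

            # M-step: closed-form updates derived above.
            nk = resp.sum(axis=0)                                  # sum_j P(N_i | x_j, theta_t)
            mu = (resp.T @ X) / nk[:, None]                        # new means
            for i in range(n_components):
                diff = X - mu[i]
                sigma[i] = (resp[:, i, None] * diff).T @ diff / nk[i]  # new covariances
            pi = nk / m                                            # new class probabilities

        return mu, sigma, pi

    # Usage sketch: two well-separated clusters.
    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0, 1, size=(200, 2)), rng.normal(5, 1, size=(200, 2))])
        mu, sigma, pi = em_gaussian_mixture(X, n_components=2)
        print(mu, pi)

In practice one would also monitor the observed data log-likelihood for convergence and guard against singular covariances, which this sketch omits for brevity.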

References

See also