# Expectation propagation

Expectation propagation (EP) is a technique in Bayesian machine learning.[1]

EP finds approximations to a probability distribution.[1] It uses an iterative approach that exploits the factorization structure of the target distribution.[1] In this respect it differs from other Bayesian approximation approaches, such as variational Bayesian methods.[1]

More specifically, suppose we wish to approximate an intractable probability distribution ${\displaystyle p(\mathbf {x} )}$ with a tractable distribution ${\displaystyle q(\mathbf {x} )}$. Expectation propagation achieves this approximation by minimizing the Kullback–Leibler divergence ${\displaystyle \mathrm {KL} (p||q)}$,[1] whereas variational Bayesian methods minimize ${\displaystyle \mathrm {KL} (q||p)}$ instead.[1]

If ${\displaystyle q(\mathbf {x} )}$ is a Gaussian ${\displaystyle {\mathcal {N}}(\mathbf {x} |\mu ,\Sigma )}$, then ${\displaystyle \mathrm {KL} (p||q)}$ is minimized with ${\displaystyle \mu }$ and ${\displaystyle \Sigma }$ being equal to the mean of ${\displaystyle p(\mathbf {x} )}$ and the covariance of ${\displaystyle p(\mathbf {x} )}$, respectively; this is called moment matching.[1]
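The moment-matching property above can be demonstrated numerically: a minimal sketch, assuming a one-dimensional target density evaluated on a uniform grid (the function and mixture below are illustrative, not from the source).

```python
import numpy as np

def moment_match(log_p_unnorm, grid):
    """Fit q(x) = N(mu, sigma^2) to an unnormalized target density p(x).
    The Gaussian minimizing KL(p||q) has exactly the mean and variance
    of p, so moment matching and KL minimization coincide here."""
    dx = grid[1] - grid[0]                        # uniform grid spacing
    w = np.exp(log_p_unnorm(grid))                # unnormalized density values
    Z = w.sum() * dx                              # normalizing constant
    mu = (grid * w).sum() * dx / Z                # E_p[x]
    var = ((grid - mu) ** 2 * w).sum() * dx / Z   # Var_p[x]
    return mu, var

# Hypothetical intractable target: a two-component Gaussian mixture.
log_p = lambda x: np.log(0.3 * np.exp(-0.5 * (x + 2) ** 2)
                         + 0.7 * np.exp(-0.5 * (x - 1) ** 2))
grid = np.linspace(-10.0, 10.0, 10001)
mu, var = moment_match(log_p, grid)
# Analytically: mu = 0.3*(-2) + 0.7*1 = 0.1 and var = 2.9 - 0.1**2 = 2.89.
```

The resulting ${\displaystyle q(\mathbf {x} )}$ is a single Gaussian spanning both mixture modes, which is characteristic of minimizing ${\displaystyle \mathrm {KL} (p||q)}$ (mass-covering), in contrast to the mode-seeking behaviour of ${\displaystyle \mathrm {KL} (q||p)}$.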

## Applications

Expectation propagation via moment matching plays a vital role in approximating the indicator functions that appear when deriving the message passing equations for TrueSkill.

## References

1. Bishop, Christopher (2007). Pattern Recognition and Machine Learning. New York: Springer-Verlag New York Inc. ISBN 978-0387310732.