# Particle filter

Figure: result of particle filtering (red line) based on observed data generated from the blue line.

In statistics, a particle filter, also known as a sequential Monte Carlo method (SMC), is a sophisticated model estimation technique based on simulation.[1] Particle filters are usually used to estimate Bayesian models in which the latent variables are connected in a Markov chain — similar to a hidden Markov model (HMM), but typically where the state space of the latent variables is continuous rather than discrete, and not sufficiently restricted to make exact inference tractable (as, for example, in a linear dynamical system, where the state space of the latent variables is restricted to Gaussian distributions and hence exact inference can be done efficiently using a Kalman filter). In the context of HMMs and related models, "filtering" refers to determining the distribution of a latent variable at a specific time, given all observations up to that time; particle filters are so named because they allow for approximate "filtering" (in the sense just given) using a set of "particles" (differently weighted samples of the distribution).

Particle filters are the sequential (online) analogue of Markov chain Monte Carlo (MCMC) batch methods and are often similar to importance sampling methods. Well-designed particle filters can often be much faster than MCMC. They are often an alternative to the Extended Kalman filter (EKF) or unscented Kalman filter (UKF) with the advantage that, with sufficient samples, they approach the Bayesian optimal estimate, so they can be made more accurate than either the EKF or UKF. However, when the simulated sample is not sufficiently large, they might suffer from sample impoverishment. The approaches can also be combined by using a version of the Kalman filter as a proposal distribution for the particle filter.[citation needed]

Compared with MCMC methods, particle filters estimate the distribution of only one of the latent variables at a time, rather than attempting to estimate them all at once[citation needed], and produce a set of weighted samples, rather than a (usually much larger) set of unweighted samples.

Particle filters have important applications in econometrics,[2] and in other fields.

## Goal

The particle filter aims to estimate the sequence of hidden parameters, xk for k = 0,1,2,3,…, based only on the observed data yk for k = 0,1,2,3,…. All Bayesian estimates of xk follow from the posterior distribution p(xk | y0,y1,…,yk). In contrast, the MCMC or importance sampling approach would model the full posterior p(x0,x1,…,xk | y0,y1,…,yk).

## Model

Particle methods assume $x_k$ and the observations $y_k$ can be modeled in this form:

• $x_0, x_1, \ldots$ is a first order Markov process such that
$x_k|x_{k-1} \sim p_{x_k|x_{k-1}}(x|x_{k-1})$

and with an initial distribution $p(x_0)$.

• The observations $y_0, y_1, \ldots$ are conditionally independent given $x_0, x_1, \ldots$; in other words, each $y_k$ depends only on $x_k$:
$y_k|x_k \sim p_{y|x}(y|x_k)$

One example form of this scenario is

$x_k = g(x_{k-1}) + w_k \,$
$y_k = h(x_k) + v_k \,$

where both $w_k$ and $v_k$ are mutually independent and identically distributed sequences with known probability density functions, and $g(\cdot)$ and $h(\cdot)$ are known functions. These two equations can be viewed as state space equations and look similar to the state space equations for the Kalman filter. If the functions $g(\cdot)$ and $h(\cdot)$ are linear, and if both $w_k$ and $v_k$ are Gaussian, the Kalman filter finds the exact Bayesian filtering distribution. If not, Kalman-filter-based methods are a first-order approximation (EKF) or a second-order approximation (UKF in general, but if the probability distribution is Gaussian a third-order approximation is possible). Particle filters are also an approximation, but with enough particles they can be much more accurate.
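As a concrete illustration, the state-space model above can be simulated directly. The particular choices of $g$, $h$, and the noise variances below are hypothetical, picked only to make the sketch self-contained; any nonlinear functions and noise densities fitting the model would do.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example functions (not from the article): a nonlinear
# state transition g and observation function h with Gaussian noise.
def g(x):
    return 0.5 * x + 25 * x / (1 + x**2)

def h(x):
    return x**2 / 20

def simulate(T, x0=0.1, q=1.0, r=1.0):
    """Simulate x_k = g(x_{k-1}) + w_k and y_k = h(x_k) + v_k."""
    xs, ys = [], []
    x = x0
    for _ in range(T):
        x = g(x) + rng.normal(0.0, np.sqrt(q))   # w_k ~ N(0, q)
        y = h(x) + rng.normal(0.0, np.sqrt(r))   # v_k ~ N(0, r)
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

xs, ys = simulate(50)
```

Simulated pairs like `(xs, ys)` are what a particle filter would then be run on, with only `ys` observed.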

## Monte Carlo approximation

Particle methods, like all sampling-based approaches (e.g., MCMC), generate a set of samples that approximate the filtering distribution $p(x_k|y_0,\dots,y_k)$. So, with $P$ samples, expectations with respect to the filtering distribution are approximated by

$\int f(x_k)p(x_k|y_0,\dots,y_k) \, dx_k\approx\frac1P\sum_{L=1}^Pf(x_k^{(L)})$

where, in the usual Monte Carlo fashion, choosing different functions $f(\cdot)$ yields the moments and other properties of the distribution, up to some degree of approximation.
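The approximation above can be sketched in a few lines. Here the filtering distribution is stood in for by a known Gaussian (an assumption made purely so the estimates can be checked against analytic values); in an actual filter the samples would come from the algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in filtering distribution for illustration: N(2, 1).
P = 100_000
samples = rng.normal(2.0, 1.0, size=P)

# E[f(x)] ≈ (1/P) * sum f(x^(L)); here f(x) = x and f(x) = x².
mean_est = np.mean(samples)              # first moment, analytically 2
second_moment_est = np.mean(samples**2)  # second moment, analytically 5
```

With $P = 10^5$ samples the Monte Carlo error of the mean is on the order of $1/\sqrt{P} \approx 0.003$, which is the sense in which the approximation improves with more particles.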

## Sequential importance resampling (SIR)

Sequential importance resampling (SIR), the original particle filtering algorithm (Gordon et al. 1993), is very commonly used; it approximates the filtering distribution $p(x_k|y_0,\ldots,y_k)$ by a weighted set of P particles

$\{(w^{(L)}_k,x^{(L)}_k)~:~L\in\{1,\ldots,P\}\}.$

The importance weights $w^{(L)}_k$ are approximations to the relative posterior probabilities (or densities) of the particles such that $\sum_{L=1}^P w^{(L)}_k = 1$.

SIR is a sequential (i.e., recursive) version of importance sampling. As in importance sampling, the expectation of a function $f(\cdot)$ can be approximated as a weighted average

$\int f(x_k) p(x_k|y_0,\dots,y_k) dx_k \approx \sum_{L=1}^P w^{(L)}_k f(x_k^{(L)}).$

For a finite set of particles, the algorithm performance is dependent on the choice of the proposal distribution

$\pi(x_k|x_{0:k-1},y_{0:k})\,$.

The optimal proposal distribution is given as the target distribution

$\pi(x_k|x_{0:k-1},y_{0:k}) = p(x_k|x_{k-1},y_{k}). \,$

However, the transition prior is often used as importance function, since it is easier to draw particles (or samples) and perform subsequent importance weight calculations:

$\pi(x_k|x_{0:k-1},y_{0:k}) = p(x_k|x_{k-1}). \,$

Sequential importance resampling (SIR) filters with the transition prior as importance function are commonly known as the bootstrap filter and the condensation algorithm.

Resampling is used to avoid the problem of degeneracy of the algorithm, that is, avoiding the situation that all but one of the importance weights are close to zero. The performance of the algorithm can be also affected by proper choice of resampling method. The stratified sampling proposed by Kitagawa (1996) is optimal in terms of variance.
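A minimal sketch of stratified resampling in the style of Kitagawa (1996): the unit interval is split into $P$ equal strata, one uniform draw is made in each, and each draw is mapped through the cumulative weights to a particle index. The function name and the guard against round-off are my own choices, not from the source.

```python
import numpy as np

def stratified_resample(weights, rng):
    """Stratified resampling: one uniform draw per stratum [L/P, (L+1)/P)."""
    P = len(weights)
    # One uniform position inside each of the P equal strata of [0, 1).
    positions = (np.arange(P) + rng.random(P)) / P
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0  # guard against floating-point round-off
    # Map each position to the particle whose cumulative weight covers it.
    return np.searchsorted(cumulative, positions)

rng = np.random.default_rng(2)
w = np.array([0.1, 0.2, 0.3, 0.4])
idx = stratified_resample(w, rng)  # indices into the particle set
```

Because both the positions and the cumulative weights are increasing, the returned indices are nondecreasing, and each particle is drawn a number of times close to $P w^{(L)}_k$, which is what lowers the variance relative to plain multinomial resampling.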

A single step of sequential importance resampling is as follows:

1) For $L=1,\ldots,P$ draw samples from the proposal distribution
$x^{(L)}_k \sim \pi(x_k|x^{(L)}_{0:k-1},y_{0:k})$
2) For $L=1,\ldots,P$ update the importance weights up to a normalizing constant:
$\hat{w}^{(L)}_k = w^{(L)}_{k-1} \frac{p(y_k|x^{(L)}_k) p(x^{(L)}_k|x^{(L)}_{k-1})} {\pi(x_k^{(L)}|x^{(L)}_{0:k-1},y_{0:k})}.$
Note that when we use the transition prior as the importance function, $\pi(x_k^{(L)}|x^{(L)}_{0:k-1},y_{0:k}) = p(x^{(L)}_k|x^{(L)}_{k-1})$, this simplifies to the following :
$\hat{w}^{(L)}_k = w^{(L)}_{k-1} p(y_k|x^{(L)}_k),$
3) For $L=1,\ldots,P$ compute the normalized importance weights:
$w^{(L)}_k = \frac{\hat{w}^{(L)}_k}{\sum_{J=1}^P \hat{w}^{(J)}_k}$
4) Compute an estimate of the effective number of particles as
$\hat{N}_\mathit{eff} = \frac{1}{\sum_{L=1}^P\left(w^{(L)}_k\right)^2}$
5) If the effective number of particles is less than a given threshold $\hat{N}_\mathit{eff} < N_{thr}$, then perform resampling:
a) Draw $P$ particles from the current particle set with probabilities proportional to their weights. Replace the current particle set with this new one.
b) For $L=1,\ldots,P$ set $w^{(L)}_k = 1/P.$
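The five steps above can be sketched as a single bootstrap-filter update, i.e., with the transition prior as the proposal so that the weight update reduces to multiplication by the likelihood (step 2 above in its simplified form). The scalar linear-Gaussian model, the function names, and the use of plain multinomial resampling are illustrative assumptions, not prescriptions from the source.

```python
import numpy as np

rng = np.random.default_rng(3)

def sir_step(particles, weights, y, g, lik, q_std, n_thr):
    """One SIR update using the transition prior as the proposal."""
    P = len(particles)
    # 1) Draw from the proposal: propagate through the transition prior.
    particles = g(particles) + rng.normal(0.0, q_std, size=P)
    # 2) Weight update (simplified form): w_k ∝ w_{k-1} * p(y_k | x_k).
    weights = weights * lik(y, particles)
    # 3) Normalize the importance weights.
    weights = weights / np.sum(weights)
    # 4) Effective number of particles.
    n_eff = 1.0 / np.sum(weights**2)
    # 5) Resample (multinomial, for simplicity) if N_eff is below threshold.
    if n_eff < n_thr:
        idx = rng.choice(P, size=P, p=weights)
        particles = particles[idx]
        weights = np.full(P, 1.0 / P)
    return particles, weights

# Hypothetical scalar model: g(x) = 0.9 x, y = x + N(0, 1) noise.
def lik(y, x):
    return np.exp(-0.5 * (y - x)**2) / np.sqrt(2 * np.pi)

P = 1000
particles = rng.normal(0.0, 1.0, size=P)
weights = np.full(P, 1.0 / P)
particles, weights = sir_step(particles, weights, y=0.5,
                              g=lambda x: 0.9 * x, lik=lik,
                              q_std=1.0, n_thr=P / 2)
estimate = np.sum(weights * particles)  # weighted posterior-mean estimate
```

Running `sir_step` once per observation, carrying `particles` and `weights` forward, gives the full recursive filter.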

The term Sampling Importance Resampling is also sometimes used when referring to SIR filters.

## Sequential importance sampling (SIS)

Sequential importance sampling (SIS) is the same as sequential importance resampling, but without the resampling stage.

## "Direct version" algorithm

The "direct version" algorithm[citation needed] is rather simple (compared to other particle filtering algorithms) and it uses composition and rejection. To generate a single sample $x$ at $k$ from $p_{x_k|y_{1:k}}(x|y_{1:k})$:

1) Set n=0 (This will count the number of particles generated so far)
2) Uniformly choose an index L from the range $\{1,..., P\}$
3) Generate a test $\hat{x}$ from the distribution $p_{x_k|x_{k-1}}(x|x_{k-1|k-1}^{(L)})$
4) Compute the likelihood of the measured value $y_k$ under the proposal, $p_{y|x}(y_k|\hat{x})$
5) Generate another uniform u from $[0, m_k]$ where $m_k = \sup_x p_{y|x}(y_k|x)$
6) Compare u and $p_{y|x}(y_k|\hat{x})$
6a) If u is larger, repeat from step 2
6b) If u is smaller, save $\hat{x}$ as $x_{k|k}^{(n)}$ and increment n
7) If n == P then quit

The goal is to generate P "particles" at $k$ using only the particles from $k-1$. This requires that a Markov equation can be written (and computed) to generate a $x_k$ based only upon $x_{k-1}$. This algorithm uses composition of the P particles from $k-1$ to generate a particle at $k$ and repeats (steps 2–6) until P particles are generated at $k$.

This can be more easily visualized if $x$ is viewed as a two-dimensional array. One dimension is $k$ and the other dimension is the particle number. For example, $x(k,L)$ would be the Lth particle at $k$ and can also be written $x_k^{(L)}$ (as done above in the algorithm). Step 3 generates a potential $x_k$ based on a randomly chosen particle ($x_{k-1}^{(L)}$) at time $k-1$ and rejects or accepts it in step 6. In other words, the $x_k$ values are generated using the previously generated $x_{k-1}$.
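The composition-and-rejection loop above can be sketched as follows. The Gaussian model, the bound `m` (the peak of the unit-variance Gaussian density, which is $\sup_x p_{y|x}(y_k|x)$ for this model), and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def direct_step(prev_particles, y, g, q_std, lik, m):
    """Generate P particles at time k by composition and rejection.

    `m` must upper-bound sup_x p(y|x); acceptance by comparing a
    uniform draw on [0, m] with the likelihood of the proposal.
    """
    P = len(prev_particles)
    new = []
    while len(new) < P:                                   # 7) until n == P
        L = rng.integers(P)                               # 2) random index
        x_hat = g(prev_particles[L]) + rng.normal(0.0, q_std)  # 3) propose
        u = rng.uniform(0.0, m)                           # 5) uniform draw
        if u <= lik(y, x_hat):                            # 6) accept/reject
            new.append(x_hat)                             # 6b) keep sample
    return np.array(new)

def lik(y, x):  # hypothetical Gaussian observation likelihood, r = 1
    return np.exp(-0.5 * (y - x)**2) / np.sqrt(2 * np.pi)

P = 500
prev = rng.normal(0.0, 1.0, size=P)
particles = direct_step(prev, y=0.3, g=lambda x: 0.9 * x,
                        q_std=1.0, lik=lik, m=1.0 / np.sqrt(2 * np.pi))
```

Because accepted samples are distributed as $p(x_k|y_{1:k})$, the output particles carry uniform weight, with no separate weighting step.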

## References

1. ^ Doucet, A.; De Freitas, N.; Gordon, N.J. (2001). Sequential Monte Carlo Methods in Practice. Springer.
2. ^ Thomas Flury & Neil Shephard, 2008. "Bayesian inference based only on simulated likelihood: particle filter analysis of dynamic economic models," OFRC Working Papers Series 2008fe32, Oxford Financial Research Centre.
3. ^ Pitt, M.K.; Shephard, N. (1999). "Filtering Via Simulation: Auxiliary Particle Filters". Journal of the American Statistical Association (American Statistical Association) 94 (446): 590–591. doi:10.2307/2670179. JSTOR 2670179. Retrieved 2008-05-06.
4. ^ Liu, J.; Wang, W.; Ma, F. (2011). "A Regularized Auxiliary Particle Filtering Approach for System State Estimation and Battery Life Prediction". Smart Materials and Structures 20 (7): 1–9. doi:10.1088/0964-1726/20/7/075021.
5. ^ Canton-Ferrer, C.; Casas, J.R.; Pardàs, M. (2011). "Human Motion Capture Using Scalable Body Models". Computer Vision and Image Understanding (Elsevier) 115 (10): 1363–1374. doi:10.1016/j.cviu.2011.06.001.
6. ^ Doucet, A.; De Freitas, N.; Murphy, K.; Russell, S. (2000). "Rao–Blackwellised particle filtering for dynamic Bayesian networks". Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence. pp. 176–183. CiteSeerX: 10.1.1.137.5199.
7. ^ Blanco, J.L.; Gonzalez, J.; Fernandez-Madrigal, J.A. (2008). "An Optimal Filtering Algorithm for Non-Parametric Observation Models in Robot Localization". IEEE International Conference on Robotics and Automation (ICRA'08). pp. 461–466. CiteSeerX: 10.1.1.190.7092.
8. ^ Blanco, J.L.; Gonzalez, J.; Fernandez-Madrigal, J.A. (2010). "Optimal Filtering for Non-Parametric Observation Models: Applications to Localization and SLAM". The International Journal of Robotics Research (IJRR) 29 (14): 1726–1742. doi:10.1177/0278364910364165.

## Bibliography

• Cappe, O.; Moulines, E.; Ryden, T. (2005). Inference in Hidden Markov Models. Springer.
• Liu, J. (2001). Monte Carlo strategies in Scientific Computing. Springer.
• Ristic, B.; Arulampalam, S.; Gordon, N. (2004). Beyond the Kalman Filter: Particle Filters for Tracking Applications. Artech House.
• Cappe, O.; Godsill, S.; Moulines, E. (2007). "An overview of existing methods and recent advances in sequential Monte Carlo". Proceedings of the IEEE 95 (5): 899. doi:10.1109/JPROC.2007.893250.
• Kitagawa, G. (1996). "Monte Carlo filter and smoother for non-Gaussian nonlinear state space models". Journal of Computational and Graphical Statistics 5 (1): 1–25. doi:10.2307/1390750. JSTOR 1390750.
• Kotecha, J.H.; Djuric, P. (2003). "Gaussian Particle Filtering". IEEE Transactions on Signal Processing 51 (10).
• Vaswani, N.; Rathi, Y.; Yezzi, A.; Tannenbaum, A. (2007). "Tracking deforming objects using particle filtering for geometric active contours". IEEE Trans. on Pattern Analysis and Machine Intelligence 29 (8): 1470–1475.