Talk:Particle filter


Corrected meaning of SIR

SIR stands for "sequential importance resampling", not "sampling importance resampling". See any of Doucet's work, e.g. http://people.cs.ubc.ca/~arnaud/doucet_johansen_tutorialPF.pdf or http://people.cs.ubc.ca/~arnaud/samsi_course.html

129.31.206.59 (talk) 04:26, 11 August 2010 (UTC)

SIR is "sampling importance resampling" Here is the original paper: ftp://swfscftp.noaa.gov/users/jbarlow/SIO-279/Reading%20Assigments/Rubin%201988%20SIRalgorithm.pdf — Preceding unsigned comment added by 131.96.49.166 (talk) 20:52, 20 February 2013 (UTC)[reply]

Nice work but...

Quoting: "They are something like an Extended Kalman filter (EKF)"

They are NOTHING like an EKF!

  • The EKF:
  1. uses a 1st order linearisation around the current estimate.
  2. assumes that the process and measurement noise of the system are Gaussian.
  • The particle filter:
  1. uses the actual nonlinear dynamics to propagate the system.
  2. can deal with extreme non-Gaussian and multimodal noise distributions.
  3. being a Monte Carlo based technique, can easily and accurately incorporate into its structure any non-standard information (like hard/soft constraints or a priori knowledge), thus improving its performance.

and many more...

(if I have time I might add some new things to the article)
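
To make the contrast above concrete, here is a minimal bootstrap particle filter step in Python. This is my own sketch, not code from the article: the scalar model xk = sin(xk−1) + vk, yk = xk + wk, the function names, and the noise choices are all hypothetical. Note that the propagation uses the actual nonlinear dynamics with heavy-tailed (non-Gaussian) process noise and no linearisation anywhere; an EKF would instead linearise sin(·) around the current estimate and carry a single Gaussian.

    import numpy as np

    rng = np.random.default_rng(0)

    def f(x):
        # the actual nonlinear dynamics -- no first-order linearisation
        return np.sin(x)

    def pf_step(x, w, y, sigma_v=0.3, sigma_w=0.5):
        """One bootstrap particle filter step for observation y."""
        # propagate every particle through the true dynamics, with
        # heavy-tailed (Student-t) process noise: non-Gaussian is fine
        x = f(x) + sigma_v * rng.standard_t(df=3, size=x.size)
        # reweight each particle by the likelihood of the observation
        w = w * np.exp(-0.5 * ((y - x) / sigma_w) ** 2)
        w = w / w.sum()
        # resample if the effective sample size has collapsed
        if 1.0 / np.sum(w ** 2) < x.size / 2:
            idx = rng.choice(x.size, size=x.size, p=w)
            x, w = x[idx], np.full(x.size, 1.0 / x.size)
        return x, w

    N = 1000
    x = rng.normal(0.0, 1.0, N)      # particle cloud approximating p(x0)
    w = np.full(N, 1.0 / N)          # uniform importance weights
    x, w = pf_step(x, w, y=0.8)
    print("posterior mean estimate:", np.sum(w * x))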

Have changed the offending phrase, hope it's an improvement.
As to why it's a valid comparison in the first place,
* It's in an encyclopedia article. That means it must be useful to non-specialists.
* The EKF solves a related problem and is probably the best-known filtering algorithm after the Kalman filter itself, and the best-known one for nonlinear state-space models (if there's a better-known one, put that in instead).
* It's in an introductory paragraph, which is an appropriate place for an informal comparison.
Of course there are major differences compared with the EKF, but it's still a worthwhile comparison for anyone new to particle filters.
I'm with the original author on this one. It's pedantic at best to say that particle filtering is "nothing" like EKF. Both exist to solve nonlinear estimation problems. Particle filtering, of course, goes about this in a very different way, trying to approximate important samples of the density rather than forcing a Gaussian estimate via linearization. But the general purpose and scope of the two approaches are quite similar. Mateoee 20:42, 4 November 2006 (UTC)

- I would have to agree that particle filters and any version of the Kalman filter are not similar. SMC methods are based on applying the Bayesian recursion equation directly and then using a Monte Carlo based approach to solve the integrals involved in Bayesian inference. A comparison with an EKF should not be made.
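
For concreteness, the recursion referred to here, written in the same style as the formulas quoted elsewhere on this page (my gloss, not article text), is:

    p(xk | y0,…,yk) ∝ p(yk | xk) ∫ p(xk | xk−1) p(xk−1 | y0,…,yk−1) dxk−1

The Monte Carlo part consists of approximating this integral with a weighted set of samples rather than solving it in closed form.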

- Another important modification should be to include the SIS section before the SIR section, since the resampling step was proposed by Gordon et al. in 1993 to address the issue of weight collapse faced by SIS. — Preceding unsigned comment added by Sharkir hussain (talk • contribs) 22:27, 22 August 2013 (UTC)

missing probability symbol

The definition could be clearer. For example, what is p(xk | y0,…,yk) in the definition? It means the conditional probability of xk, so why not say P(xk | y0,…,yk)?

Not necessarily. If you're talking about p(xk | y0,…,yk), then that means p is a distribution (a probability density), not a probability. Cburnett July 8, 2005 16:05 (UTC)


So this notation convention should be explained, since it is non-standard enough that some of us cannot understand it, no? --Powo 10:51, 15 January 2007 (UTC)
Indeed the notation could use some improvement and a few words of explanation should be added here and there. p_{xk|xk−1}(x | x′) wants to say "if x′ is the state at time k−1 then the probability density of x, the state at time k, is p_{xk|xk−1}(x | x′)." Here p_{xk|xk−1} is a function and x is its argument. Putting the conditional symbol | also in the argument makes little sense. Later on the article drops the subscript of the density function completely. It should stick to one or the other and define the notation properly or link to an article that does. Unfortunately such terse expression and notation (perhaps misuse of notation?) are common in the PF literature and make understanding hard for the novice. Jmath666 02:36, 11 March 2007 (UTC)
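
To spell out the convention being discussed (my own gloss of standard probability notation):

    p_{xk|xk−1}(x | x′)  : rigorous form; the subscript names the conditional density, x is its argument
    p(xk | xk−1)         : common shorthand; the symbols inside the parentheses identify which density is meant

The article mixes the two, which is what causes the confusion above.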

Choice of P

How is the number of particles (P) normally chosen? Does it need to be a large number, and does each state have the same number of particles?

According to Crisan et al., the root-mean-square error of the predictions is inversely proportional to the square root of the number of particles. Theoretically, an infinitely large particle population would provide exact estimates; however, that is computationally infeasible.

This number is picked based on the problem being solved, most importantly on the number of dimensions the state X models. The bigger the possible range of X, the more samples you need.
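
A quick empirical check of that scaling (my own toy script, not from the article): estimating a known mean with N Monte Carlo samples, the RMSE shrinks like 1/sqrt(N), so RMSE·sqrt(N) stays roughly constant.

    import numpy as np

    rng = np.random.default_rng(1)
    for N in (100, 1_000, 10_000, 100_000):
        # 200 repetitions of estimating the (known, zero) mean with N samples
        errs = [rng.normal(0.0, 1.0, N).mean() for _ in range(200)]
        rmse = np.sqrt(np.mean(np.square(errs)))
        print(f"N={N:>7}  RMSE={rmse:.4f}  RMSE*sqrt(N)={rmse*np.sqrt(N):.2f}")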

Eh?

I find this article too hard to understand right now. Examples could help. Thanks, --Abdull 14:52, 28 February 2006 (UTC)

Indeed. Perhaps something like my explanation above would be useful, too. Jmath666 03:09, 11 March 2007 (UTC)

I agree. This looks like a cheat sheet for people who already largely understand particle filters but can't remember the technicalities. Since this is supposed to be an encyclopedia article, I would expect to see the following sorts of things, written in plain English:

  • Who invented the particle filter?
  • When?
  • In what fields is it used?
  • What is an intuitive explanation of the main idea, for non-experts? If you need to use technical terms like "model estimation" that make no sense to someone outside the field, then you need to say what you mean or provide an explanatory link. (The "estimation" link is useless, just like a "model" link would be.)
  • What sort of "models" (described in plain English) is it applicable to?
  • What are its advantages and disadvantages compared to other methods?
  • Can you show a very simple example?

-Matt 130.60.5.218 09:00, 29 September 2007 (UTC)

I too agree with Matt, and it is somewhat disappointing that in almost 7 years no-one has been able to amend the article to deal with his points. His list of questions is a very good place to start improving this article, but the 4th point is probably the most important for Wikipedia: what's the main idea, for non-experts? Why 'particle'? What do the particles represent? What are the inputs and outputs of the filter, typically? Sangwine (talk) 15:43, 19 February 2014 (UTC)

Simple example please (Eh? no. 2)

I agree with Matt - see the section "Eh?": it would be great to have a simple example. The first two paragraphs read well, but then it gets technical without a leading example or illustration. I read about particle filtering in computer vision books and wanted to broaden my horizon on this topic, but the article helped only a little. I'm just not the right person to help out here - I understand too little. Hope someone else will find the time. Regards from Bucuresti, Romania. Rasche (talk) 17:01, 22 August 2013 (UTC) — Preceding unsigned comment added by Rasche (talk • contribs) 16:31, 22 August 2013 (UTC)

Computer vision category

I removed this article from the computer vision category. The P-filter is probably useful in some parts of CV, but

  1. It is not a concept developed within CV or specific to CV.
  2. There is no material in this article which relates it to CV.

--KYN 15:09, 28 July 2007 (UTC)

Direct version: missing notation

In the following line:

5) Generate another uniform u from [0, mk]

Maybe I missed something, but mk has not been specified.

Uliba 11:20, 31 October 2007 (UTC)

Kitagawa (1996) Cite Needed

Although the article mentions an article by Kitagawa, it gives no actual citation. Either supply the citation or remove the comment. Preferably the former. Bill Jefferys 02:26, 15 November 2007 (UTC)

Would the correct citation here be this one? I obtained it by Google search on "kitigawa statistics stratified resampling".

"Monte Carlo Filter and Smoother for Non-Gaussian Nonlinear State Space Models", Genshiro Kitagawa Journal of Computational and Graphical Statistics, Vol. 5, No. 1 (Mar., 1996), pp. 1-25

Would the person who added the comment about Kitagawa in the main article please state if this is the right citation? Bill Jefferys 22:10, 15 November 2007 (UTC)

Notation?

I've never seen the superscript-in-parentheses before. What does it mean? I'm guessing it's not exponentiation... Leptogenesis (talk) 06:22, 16 February 2009 (UTC)

It's to show a set of particles I believe -- <w^{(K)}, x^{(K)}> for K = 1, 2, ..., n PirateAngel (talk) 13:27, 23 April 2009 (UTC)

The parentheses in the superscript are used to distinguish a power from an index. That way, if I have a collection of particles { x^{(i)} : i = 1, 2, ..., N }, there is no confusion as to what x^{(2)} means. Without the parentheses, it might refer to the square of some quantity x. Bradweir (talk) 20:06, 12 July 2011 (UTC)

Uninformative and Misleading Figure

There is no explanation of how the plot was generated or even what variable is being estimated. Maybe it's the beta coefficient of a stock? Whatever it is needs to be clearly stated, as do the observation and propagation models (pdfs). However, even with this additional information the plot is misleading, since it shows the mean of the estimated variable, which obfuscates one of the main advantages of the particle filter: that it is non-parametric. It would be an improvement to plot the ML estimate of the variable instead of the mean, but it would probably be even more informative to plot the ancestry of the particles alive at the final time step. This would help illustrate the multi-hypothesis nature of the particle filter. Mark 20:53, 15 June 2009 (UTC) —Preceding unsigned comment added by 209.211.131.111 (talk)

Conditional/Filter vs. Posterior

In the paragraph ...

All Bayesian estimates of xk follow from the posterior distribution p(xk | y0,y1,…,yk). In contrast, the MCMC or importance sampling approach would model the full posterior p(x0,x1,…,xk | y0,y1,…,yk).

I think the correct term for p(xk | y0,y1,…,yk) is just the conditional distribution, as what's called the "full posterior" is usually just called the posterior.

Also, since this is the "nowcast" distribution, it's also called the filter distribution, which I believe is used in other parts of the article.
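
To make the terminology concrete (my summary, not article text): the filter distribution is the time-k marginal of the full posterior,

    p(xk | y0,…,yk) = ∫ p(x0,…,xk | y0,…,yk) dx0 ⋯ dxk−1

so both objects are posteriors in the Bayesian sense; the dispute is only over which one gets the bare name "posterior".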

Anyone reading this? I'm going to change the article to reflect reality if not.

Bradweir (talk) 04:28, 12 July 2011 (UTC)

Proposal Distribution

...is never explained. —Preceding unsigned comment added by IskaralPust (talk • contribs) 08:20, 12 May 2011 (UTC)
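
For whoever picks this up: in generic sequential importance sampling (standard textbook material, not a quote from the article) the proposal distribution q is the density new particle positions are drawn from, and it enters the weight update as

    wk ∝ wk−1 · p(yk | xk) p(xk | xk−1) / q(xk | xk−1, yk)

The bootstrap filter takes q(xk | xk−1, yk) = p(xk | xk−1), in which case the update reduces to multiplying by the likelihood p(yk | xk).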

Is the "Model" Section Incorrect?

The current version of the "Model" section claims that particle filters assume the system state is first-order Markov and that the observations depend only on the current state. I realize that most introductory descriptions of particle filters assume this for simplicity, but I didn't think that particle filters necessarily make that assumption. — Preceding unsigned comment added by 99.113.169.222 (talk) 07:07, 4 June 2012 (UTC)
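
For reference, the assumptions being discussed are the standard hidden Markov state-space form (my paraphrase of the usual setup, not article text):

    xk = f(xk−1, vk)    (state transition: first-order Markov)
    yk = h(xk, wk)      (observation depends only on the current state)

The commenter has a point: these assumptions are conventional rather than essential, e.g. a higher-order dependence can be reduced to first order by augmenting the state vector.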

SMC methods use a grid-based approach?

As I understand this statement, it is not true at all. The samples drawn by an SMC algorithm are not restricted to be discrete (i.e. defined in a grid-division of a continuous space). In other words, in principle the samples (or particles in a particle filtering context) can be anywhere within the support of the target distribution. Comment added by Iglesiasg 9:58, 1 April 2014 (UTC)