Talk:Linear discriminant analysis

This article is of interest to the following WikiProjects:
  • WikiProject Psychology (rated Start-class, Low-importance)
  • WikiProject Robotics (rated Start-class, Mid-importance); the article has been marked as needing immediate attention
  • WikiProject Statistics (rated Start-class, Low-importance)

References

There is a problem with the references. The link to Martinez in the cited version gives a different year from the one found in the general reference list. This should be corrected.

Machine Learning vs Stats

Changed 'machine learning' to 'statistics' -- FLD was invented and used by statisticians a long time before all that ML nonsense!

---

That's true, but the wording said that it's currently used in (rather than was developed in) the area called machine learning, so it was not an incorrect statement. (Not that I'm particularly bothered by the change, but a reader looking for related techniques would be better served by being referred to machine learning than to statistics in general.)

BTW: I notice two references by H. Abdi have been added by user 129.110.8.39. Looking at this user's other edits, it seems as though a lot of other statistics-based articles have been edited to refer to these references, leading me to believe that this is the author trying to publicise his/her books. Is there a Wikipedia policy on this situation? My gut reaction would be to remove all of the references he added.

--Tcooke 02:30, 13 October 2006 (UTC)

Linear?

A few questions I had while learning about this technique that could be addressed here:

  • What is the significance of the word discriminant in this technique?
  • What about this technique is linear?

The problem to be solved is the discrimination between two classes of objects/events based on a number of measurements. The discriminant is a single variable which tries to capture all of the discriminating ability of these measurements. In this case, the discriminant function is a linear combination of the measurements.
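
To make both points concrete (a minimal sketch in the usual two-class Fisher notation; the symbols g, \vec w and b are mine, not the article's): the discriminant function is

g(\vec x) = \vec w \cdot \vec x + b

a single number formed as a linear combination of the measurement vector \vec x, and a sample is assigned to one class or the other according to whether g(\vec x) is above or below a threshold. "Discriminant" refers to that single number; "linear" refers to g being linear in \vec x.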

--Tcooke 12:49, 22 July 2005 (UTC)


Implementation Details

I recently implemented Fisher's linear discriminant and found that internet resources (including Wikipedia) were lacking in two respects:

  • finding the threshold value c
  • finding the sign of \vec{w}

Most of the examples that I saw assumed that the data was centered about the origin, so that a zero threshold could be used.

My solution for finding c was to naively search for the best value for my training set. I'm sure that this approach does not give the best generalization - I would guess calculating the maximal margin would be better.

With regard to the sign:

S=\frac{\sigma_{between}^2}{\sigma_{within}^2}= \frac{(\vec w \cdot \vec \mu_{y=1} - \vec w \cdot \vec \mu_{y=0})^2}{\vec w^T \Sigma_{y=1} \vec w + \vec w^T \Sigma_{y=0} \vec w} = \frac{(\vec w \cdot (\vec \mu_{y=1} - \vec \mu_{y=0}))^2}{\vec w^T (\Sigma_{y=0}+\Sigma_{y=1}) \vec w}

does not contain any information about the direction of the separator. What is the best way to find the direction when using this formulation?
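
For what it's worth, here is a minimal Python sketch of one way to settle both points, assuming the usual closed-form direction \vec w \propto (\Sigma_{y=0} + \Sigma_{y=1})^{-1} (\vec \mu_{y=1} - \vec \mu_{y=0}) and a midpoint threshold (the function and variable names are mine, and this is an illustration rather than the article's prescription):

import numpy as np

def fisher_direction_and_threshold(X0, X1):
    # X0, X1: arrays of shape (n_samples, n_features) for class 0 and class 1.
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter: sum of the two class covariance matrices.
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw, mu1 - mu0)  # maximises the Rayleigh quotient S above
    # S is unchanged when w is negated, so fix the sign: class 1 projects higher.
    if w @ mu1 < w @ mu0:
        w = -w
    # Naive but serviceable threshold: midpoint of the projected class means.
    c = 0.5 * (w @ (mu0 + mu1))
    return w, c

# Classify a new sample x as class 1 if w @ x > c, otherwise as class 0.

Because S is invariant to scaling and negating \vec w, the direction has to be pinned down by a convention like the sign check above; a cross-validated or maximal-margin choice of c, as you suggest, may well generalise better than a naive search over the training set.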

Are implementation details for algorithms relevant to Wikipedia articles? If so, I'm sure a short note on the page would add to its usefulness.

128.139.226.34 06:58, 7 June 2007 (UTC)

LDA for two classes

This is very well written. However, a little more definition of \Sigma and \Sigma^{-1} might be nice. I realize they are mentioned as the "class covariances" but a formula or a ref would be great.

Also, the problem is stated as "find a good predictor for the class y .. given only an observation x." However, the result is then an enormous formula (QDA) or the simpler LDA one. It would be nice to state the decision criterion in the original problem's terms.

That is, the next-to-last sentence would be (I think!) something like: a sample x is from class 1 if p(x|y=1) or w * x < c. Maybe I'm wrong, but using the language of the original problem would be good.
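
In case it helps, here is how the criterion might read in the original problem's terms under the usual equal-covariance LDA assumptions (my notation, not necessarily the article's): assign a sample \vec x to class 1 when

p(\vec x|y=1)\,P(y=1) > p(\vec x|y=0)\,P(y=0)

which, for Gaussian classes sharing a covariance \Sigma, reduces to the linear rule

\vec w \cdot \vec x > c, \qquad \vec w = \Sigma^{-1}(\vec \mu_1 - \vec \mu_0)

with the constant c absorbing the class means, \Sigma and the priors.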

dfrankow (talk) 21:49, 27 February 2008 (UTC)

Fisher Linear Discriminant VS LDA

My understanding is that Fisher's Linear Discriminant is the ONE-dimensional space onto which it is best to project the data if you are trying to separate it into classes. LDA is a rather different idea in that you are actually trying to find a hyperplane which divides the data. Can someone confirm or deny this? If it is correct, then I think Fisher's LD should just be mentioned here, but should have a separate article. daviddoria (talk) 13:42, 2 October 2008 (UTC)

it says "which does not make some of the assumptions of LDA such as normally distributed classes or equal class covariances". I think the first part might have to be removed, or what else do the covariance matrixes then refer to? —Preceding unsigned comment added by 89.150.104.58 (talk) 15:32, 21 April 2011 (UTC)

Multiclass LDA

The Section "Multiclass LDA" contains the following text

This means that when \vec w is an eigenvector of \Sigma^{-1} \Sigma_b the separation will be equal to the corresponding eigenvalue. Since \Sigma_b is of rank at most C-1, these non-zero eigenvectors identify a vector subspace containing the variability between features. These vectors are primarily used in feature reduction, as in PCA.

The text only says what separation the eigenvectors will produce (their corresponding eigenvalue), but it does not say if those separation values are in some way optimal.

It also does not say why the plane spanned by the eigenvectors is a good choice for the projection plane. —Preceding unsigned comment added by 217.229.206.43 (talk) 23:57, 10 July 2009 (UTC)
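
To make the quoted construction concrete, here is a small Python sketch of standard multiclass LDA as the passage describes it (the names and scatter-matrix conventions are mine; it illustrates the idea rather than reproducing the article's derivation):

import numpy as np

def multiclass_lda_directions(X, y, n_components=None):
    # X: (n_samples, n_features) data matrix; y: integer class labels.
    classes = np.unique(y)
    n_features = X.shape[1]
    mu = X.mean(axis=0)
    Sw = np.zeros((n_features, n_features))  # within-class scatter
    Sb = np.zeros((n_features, n_features))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        Sw += (Xc - mu_c).T @ (Xc - mu_c)
        Sb += Xc.shape[0] * np.outer(mu_c - mu, mu_c - mu)
    # Eigenvectors of Sw^{-1} Sb; Sb has rank at most C-1, so at most C-1
    # eigenvalues are non-zero, and their eigenvectors span the useful subspace.
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(eigvals.real)[::-1]
    k = n_components if n_components is not None else len(classes) - 1
    return eigvecs.real[:, order[:k]]  # columns are the projection directions

# Dimensionality reduction: X_reduced = X @ multiclass_lda_directions(X, y)

On the question itself: the leading eigenvector maximises the separation ratio, and each subsequent eigenvector maximises it among directions suitably orthogonal to the earlier ones, which is the usual argument for projecting onto the span of the leading eigenvectors; agreed that the article should state this explicitly.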

Robotics attention needed

  • Inline refs need adding
  • Check content and structure
  • Reassess

Chaosdruid (talk) 11:10, 24 March 2012 (UTC)

Equation seems weird



The equation for two classes seems a bit strange to me:

 (\vec x- \vec \mu_0)^T \Sigma_{y=0}^{-1} ( \vec x- \vec \mu_0) + \ln|\Sigma_{y=0}| - (\vec x- \vec \mu_1)^T \Sigma_{y=1}^{-1} ( \vec x- \vec \mu_1) - \ln|\Sigma_{y=1}| \ < \ T
  • Why are we summing from y=0 to y=-1? What does y=-1 even mean in this context? I thought the classes were labelled 0 and 1?
  • What do \ln|\Sigma_{y=0}| and \ln|\Sigma_{y=1}| even mean? What is being summed?

--Slashme (talk) 08:16, 27 August 2013 (UTC)

Edit: I now realise that the Σ here is the covariance, but that article doesn't explain what the -1 means. I'd be interested to know the answer! - Of course, the -1 denotes the inverse of the covariance matrix. --Slashme (talk) 08:26, 27 August 2013 (UTC)
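
For other readers hitting the same question: \ln|\Sigma| is the natural logarithm of the determinant of the class covariance matrix, and the superscript -1 denotes the matrix inverse. A small numpy sketch of evaluating the left-hand side of the criterion above (illustrative only; the names are mine):

import numpy as np

def qda_statistic(x, mu0, Sigma0, mu1, Sigma1):
    # Left-hand side of the two-class criterion; the section above compares it to T.
    d0, d1 = x - mu0, x - mu1
    term0 = d0 @ np.linalg.inv(Sigma0) @ d0 + np.log(np.linalg.det(Sigma0))
    term1 = d1 @ np.linalg.inv(Sigma1) @ d1 + np.log(np.linalg.det(Sigma1))
    return term0 - term1  # the criterion is satisfied when this is below T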