Talk:Linear discriminant analysis



The arrow notation for vectors in this article looks really ugly. No statistics text would use that. In this article, it would be easy enough just to have a box containing definitions for each symbol. E.g. x,w,\mu = vectors ...

YPawitan 14:48, 10 December 2015 (UTC)


There is a problem with the references. The link to Martinez in the cited version mentions a different year from the one found in the general reference list. This should be corrected.

Machine Learning vs Stats[edit]

changed 'machine learning' to 'statistics' -- FLD was invented and used by statisticians a long time before all that ML nonsense!


That's true, but the wording said that it's currently used (rather than was developed in) the area called machine learning, so it was not an incorrect statement (not that I'm particularly bothered by the change, but a reader looking for related techniques would be better served by being referred to machine learning than to statistics in general).

BTW: I notice two references by H. Abdi have been added by a user. Looking at this user's other edits, it seems as though a lot of other statistics-based articles have been edited to refer to these references, leading me to believe that this is the author trying to publicise his/her books. Is there a Wikipedia policy on this situation? My gut reaction would be to remove all of the references he added.

--Tcooke 02:30, 13 October 2006 (UTC)


A few questions I had while learning about this technique that could be addressed here:

  • What is the significance of the word discriminant in this technique?
  • What about this technique is linear?

The problem to be solved is the discrimination between two classes of objects/events based on a number of measurements. The discriminant is a single variable which tries to capture all of the discriminating ability of these measurements. In this case, the discriminant function is a linear combination of the measurements.
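The point above ("the discriminant is a linear combination of the measurements") can be sketched in a few lines of numpy; the data and the direction w below are made up purely for illustration:

```python
import numpy as np

# Hypothetical 2-D measurements for two classes (illustrative data only).
class0 = np.array([[1.0, 2.0], [1.5, 2.5], [0.5, 1.5]])
class1 = np.array([[4.0, 5.0], [4.5, 5.5], [3.5, 4.5]])

# A linear discriminant reduces each measurement vector x to a single
# scalar w . x; classification then compares that scalar to a threshold.
w = np.array([1.0, 1.0])              # an arbitrary direction for illustration
scores0 = class0 @ w                  # one scalar per observation
scores1 = class1 @ w
print(scores0.max() < scores1.min())  # here the projected classes separate: True
```

The "linear" in the name refers to exactly this: the discriminant is a linear function w . x of the measurements.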

--Tcooke 12:49, 22 July 2005 (UTC)

Implementation Details[edit]

I recently implemented Fisher's linear discriminant and found that internet resources (including Wikipedia) were lacking in two respects:

  • finding the threshold value c
  • finding the sign of the projection vector w

Most of the examples that I saw assumed that the data was centered about the origin, which requires a zero threshold.

My solution for finding c was to naively search for the best value for my training set. I'm sure that this approach does not give the best generalization - I would guess calculating the maximal margin would be better.
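A minimal sketch of the naive search described above, assuming the data have already been projected onto the discriminant direction (function name and toy scores are hypothetical):

```python
import numpy as np

def best_threshold(scores0, scores1):
    """Naively pick the threshold c that best separates two sets of
    projected training scores (assumes class 1 projects to higher values)."""
    candidates = np.sort(np.concatenate([scores0, scores1]))
    best_c, best_acc = candidates[0], 0.0
    for c in candidates:
        # balanced training accuracy of the rule: class 1 iff score > c
        acc = (np.mean(scores0 <= c) + np.mean(scores1 > c)) / 2.0
        if acc > best_acc:
            best_c, best_acc = c, acc
    return best_c

s0 = np.array([0.2, 0.9, 1.1])   # projected class-0 scores (toy values)
s1 = np.array([2.0, 2.4, 3.1])   # projected class-1 scores (toy values)
c = best_threshold(s0, s1)
print(1.1 <= c < 2.0)            # the chosen threshold sits between the classes: True
```

A common simpler default, when the class covariances are comparable, is the midpoint of the two projected class means; the exhaustive search above only optimizes training accuracy, which is why it may generalize poorly.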

With regards to the sign: the criterion

S(w) = (w \cdot (\mu_1 - \mu_0))^2 / (w^T \Sigma_w w)

does not contain any information about the direction of the separator, since it is unchanged when w is replaced by -w. What is the best way to find the direction when using this formulation?
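The sign ambiguity can be resolved by orienting w so that class 1 projects to the higher side. A short sketch, with synthetic Gaussian data (all parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.normal([0, 0], 1.0, size=(100, 2))   # class 0 around the origin
x1 = rng.normal([3, 1], 1.0, size=(100, 2))   # class 1 shifted away

mu0, mu1 = x0.mean(axis=0), x1.mean(axis=0)
Sw = np.cov(x0.T) + np.cov(x1.T)              # within-class scatter (up to scale)
w = np.linalg.solve(Sw, mu1 - mu0)            # Fisher direction: Sw^{-1}(mu1 - mu0)

# The criterion is unchanged by w -> -w, so fix the sign explicitly:
if w @ (mu1 - mu0) < 0:
    w = -w
print(w @ mu1 > w @ mu0)  # class 1 now projects to the higher side: True
```

Note that computing w directly as Sw^{-1}(mu1 - mu0), rather than as an unsigned eigenvector, already yields a consistent orientation; the explicit sign check makes the convention visible.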

Are implementation details for algorithms relevant to wikipedia articles? If so, I'm sure a short note on the page would add to its usefulness. 06:58, 7 June 2007 (UTC)

LDA for two classes[edit]

This is very well written. However, a little more definition of \Sigma_{y=0} and \Sigma_{y=1} might be nice. I realize they are mentioned as the "class covariances" but a formula or a ref would be great.

Also, the problem is stated as "find a good predictor for the class y .. given only an observation x." However, then the result is an enormous formula (QDA) or the simpler LDA. It would be nice to state the decision criterion in the original problem terms.

That is, the next-to-last sentence would be (I think!) something like: a sample x is from class 1 if p(x|y=1) > p(x|y=0), or w \cdot x < c. Maybe I'm wrong, but using the language of the original problem would be good.
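The suggestion above can be sketched concretely. Assuming equal priors and a shared class covariance (so LDA rather than QDA applies), the decision rule in the problem's own terms is: predict class 1 when w . x > c, with w = \Sigma^{-1}(\mu_1 - \mu_0) and c the midpoint w . (\mu_0 + \mu_1)/2. All numbers below are illustrative:

```python
import numpy as np

mu0 = np.array([0.0, 0.0])   # class-0 mean (assumed)
mu1 = np.array([2.0, 2.0])   # class-1 mean (assumed)
Sigma = np.eye(2)            # shared class covariance (LDA assumption)

w = np.linalg.solve(Sigma, mu1 - mu0)   # w = Sigma^{-1}(mu1 - mu0)
c = w @ (mu0 + mu1) / 2.0               # midpoint threshold (equal priors)

def predict(x):
    """Return 1 when x falls on class 1's side of the discriminant."""
    return int(w @ x > c)

print(predict(np.array([1.9, 1.8])))  # near mu1 -> 1
print(predict(np.array([0.1, 0.2])))  # near mu0 -> 0
```

With equal covariances and equal priors this threshold rule is equivalent to comparing p(x|y=1) against p(x|y=0), which is exactly the restatement requested above.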

dfrankow (talk) 21:49, 27 February 2008 (UTC)

Fisher Linear Discriminant VS LDA[edit]

My understanding is that Fisher's Linear Discriminant is the ONE dimensional space which it is best to project the data onto if you are trying to separate it into classes. LDA is a much different idea in that you are actually trying to find a hyperplane which divides the data. Can someone confirm or deny this? If it is correct, then I think Fisher's LD should just be mentioned, but should have a separate article. daviddoria (talk) 13:42, 2 October 2008 (UTC)

The hyperplane you mentioned is orthogonal to the one dimensional space on which to project the data. In other words, I perceive your two "different" statements of the problem to be essentially equivalent -- or at best "dual" statements of the problem. DavidMCEddy (talk) 21:41, 29 May 2016 (UTC)
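The equivalence described above is easy to check numerically: projecting onto the line spanned by w and thresholding at c is the same test as asking which side of the hyperplane {x : w . x = c} a point lies on, since w is that plane's normal. The direction, threshold, and test points below are arbitrary:

```python
import numpy as np

w = np.array([2.0, -1.0])   # discriminant direction / hyperplane normal
c = 0.5                     # threshold

def by_projection(x):
    return (w @ x) > c      # 1-D projection, then threshold

def by_hyperplane(x):
    return (w @ x - c) > 0  # signed offset (up to ||w||) from the plane

pts = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([3.0, 2.0])]
print(all(by_projection(p) == by_hyperplane(p) for p in pts))  # True
```

The two formulations are thus the same rule written two ways, which supports treating them in one article.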

it says "which does not make some of the assumptions of LDA such as normally distributed classes or equal class covariances". I think the first part might have to be removed, or what else do the covariance matrices then refer to? —Preceding unsigned comment added by (talk) 15:32, 21 April 2011 (UTC)

Multiclass LDA[edit]

The Section "Multiclass LDA" contains the following text

This means that when w is an eigenvector of \Sigma_w^{-1} \Sigma_b, the separation will be equal to the corresponding eigenvalue. Since \Sigma_b is of rank at most C − 1, these non-zero eigenvectors identify a vector subspace containing the variability between features. These vectors are primarily used in feature reduction, as in PCA.

The text only says what separation the eigenvectors will produce (their corresponding eigenvalue), but it does not say if those separation values are in some way optimal.

It also does not say why the plane spanned by the eigenvectors is a good choice for the projection plane. —Preceding unsigned comment added by (talk) 23:57, 10 July 2009 (UTC)
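For reference, the rank claim in the quoted text can be verified numerically: with C classes, the between-class scatter has rank at most C − 1, so the matrix \Sigma_w^{-1}\Sigma_b has at most C − 1 non-zero eigenvalues. The synthetic data below (3 classes in 4 dimensions) is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
C, n, d = 3, 50, 4                                 # classes, per-class size, dims
means = rng.normal(0, 3, size=(C, d))              # hypothetical class means
X = np.concatenate([rng.normal(m, 1.0, size=(n, d)) for m in means])
y = np.repeat(np.arange(C), n)

overall = X.mean(axis=0)
# Within-class scatter: sum of per-class scatter matrices.
Sw = sum(np.cov(X[y == k].T) * (n - 1) for k in range(C))
# Between-class scatter: weighted outer products of centered class means.
Sb = sum(n * np.outer(X[y == k].mean(axis=0) - overall,
                      X[y == k].mean(axis=0) - overall) for k in range(C))

eigvals = np.sort(np.linalg.eigvals(np.linalg.solve(Sw, Sb)).real)[::-1]
print(np.sum(eigvals > 1e-8))  # count of non-zero eigenvalues: at most C-1 = 2
```

The eigenvectors paired with those non-zero eigenvalues span the (C − 1)-dimensional projection subspace the section refers to; why projecting onto it is optimal (each eigenvector maximizes the separation ratio subject to the previous ones) is indeed worth spelling out in the article.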

Robotics attention needed[edit]

  • Refs - inline need adding
  • Check content and structure
  • Reassess

Chaosdruid (talk) 11:10, 24 March 2012 (UTC)

Equation seems weird[edit]

The equation for two classes seems a bit strange to me:

  • Why are we summing from y=0 to y=-1? what does y=-1 even mean in this context? I thought the classes were labelled 0 and 1?
  • What do \Sigma_{y=0} and \Sigma_{y=1} even mean? What is being summed?

--Slashme (talk) 08:16, 27 August 2013 (UTC)

Edit: I now realise that the Σ here is the covariance, but that article doesn't explain what the -1 means. I'd be interested to know the answer! - Of course, the -1 is the transpose of the covariance matrix. --Slashme (talk) 08:26, 27 August 2013 (UTC)

NO: The -1 is the inverse not the transpose of the covariance matrix. DavidMCEddy (talk) 21:42, 29 May 2016 (UTC)
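A two-line numpy check makes the distinction concrete: a covariance matrix is symmetric, so its transpose is itself, whereas its inverse is a genuinely different matrix (the toy matrix below is arbitrary):

```python
import numpy as np

# For a covariance matrix, Sigma^{-1} means the matrix inverse, not the
# transpose: a covariance matrix is symmetric, so transposing changes nothing.
Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])
print(np.allclose(Sigma.T, Sigma))                           # True: symmetric
print(np.allclose(Sigma @ np.linalg.inv(Sigma), np.eye(2)))  # True: inverse works
print(np.allclose(np.linalg.inv(Sigma), Sigma.T))            # False: inverse != transpose
```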

“Linear discriminant analysis” and “Discriminant function analysis”[edit]

What's the difference between “Linear discriminant analysis” and “Discriminant function analysis”? They look the same to me. DavidMCEddy (talk) 21:44, 29 May 2016 (UTC)

go ahead and merge. speaking as an expert who has proficiency with the aforementioned technique, try to keep as much of the content from the "Linear Discriminant Analysis" page, and just add the missing bits from "discriminant function analysis". there shouldn't be much to add. fyi: the "discriminant function analysis" page is much poorer in quality than the "discriminant analysis" page (imo). i didn't even know there was a discriminant function analysis page until you posted the merge request (thanks for that DavidMCEddy). (talk) 21:03, 31 May 2016 (UTC)
If someone else can take the lead in merging these articles, I can help. However, I doubt if I'll ever get the time to do it myself. DavidMCEddy (talk) 15:26, 1 June 2016 (UTC)
same. i wouldn't be opposed to deleting the discriminant function page entirely. it makes a lot of unusual claims, and has a lot of unreferenced ones too. maybe if you could put a tag up for deletion and see how it fares? (talk) 17:17, 1 June 2016 (UTC)