Talk:Kernel principal component analysis


WikiProject Statistics: This article is within the scope of WikiProject Statistics, a collaborative effort to improve the coverage of statistics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks. This article has not yet received a rating on Wikipedia's content assessment scale or on the project's importance scale.

WikiProject Robotics: This article is within the scope of WikiProject Robotics, a collaborative effort to improve the coverage of Robotics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks. This article has been rated as Start-class on Wikipedia's content assessment scale and as Mid-importance on the project's importance scale.

On redirection to SVM

What is the relationship between kernel PCA and SVMs? I don't see any direct connection. //Memming 15:50, 17 May 2007 (UTC)

There is no direct relation; this is a common mistake. Not every kernel method is an SVM; kernels are a more general concept in mathematics.
Then I'll remove the redirect to SVM. //Memming 12:00, 21 May 2007 (UTC)

Data reduction in the feature space

In the literature, I found a way to center the input data in the feature space. However, I have never found a way to reduce the data in the feature space, so if anyone has knowledge about it, I would be glad if they could explain that topic here or give a few links.
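
For what it's worth, the centering mentioned above is usually carried out directly on the N×N kernel (Gram) matrix K rather than on the feature vectors themselves; the standard expression from the kernel PCA literature (with 1_N denoting the N×N matrix whose entries are all 1/N) is

  K̃ = K − 1_N K − K 1_N + 1_N K 1_N,  where (1_N)_{ij} = 1/N.
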

Expand?

This looks like an interesting subject, but I don't entirely follow what's here. I gather that the "kernel trick" essentially allows you to perform a nonlinear transform on your data. First, I think the example needs to be explained further: there are two output images that don't flow with the text, and the text only goes partway toward describing how the method works. Here are some questions:

  1. How do you choose a kernel?
  2. How is the PCA performed? (Is it really just linear regression on transformed data by eigendecomposition of the covariance matrix of the transformed points?)
  3. If the nonlinear transformation is done implicitly by replacing the inner product in the PCA algorithm, then doesn't that mean you need to do something other than a simple eigendecomposition?

Thanks. —Ben FrantzDale (talk) 01:19, 28 April 2009 (UTC)
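
Regarding questions 2 and 3 above: the usual formulation never builds the covariance matrix of the transformed points, because the feature space may be very high- or even infinite-dimensional. Instead, one forms the N×N kernel (Gram) matrix of pairwise kernel evaluations, centers it, and takes its eigendecomposition; the eigenvectors, suitably scaled, give the projections of the training points onto the principal components. As for question 1, the kernel is a modelling choice, much as in any other kernel method. Below is a minimal sketch in Python/NumPy, assuming an RBF (Gaussian) kernel for illustration; it is not taken from the article, just one way the computation can be written.

import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    # Pairwise squared Euclidean distances, then the RBF kernel matrix.
    # Any other positive semi-definite kernel could be substituted here.
    sq_norms = np.sum(X**2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * (X @ X.T)
    K = np.exp(-gamma * sq_dists)

    # Center the kernel matrix; this is equivalent to centering the
    # (implicit) feature vectors in feature space.
    N = K.shape[0]
    one_n = np.full((N, N), 1.0 / N)
    K = K - one_n @ K - K @ one_n + one_n @ K @ one_n

    # Eigendecomposition of the centered kernel matrix (not of a covariance
    # matrix). eigh returns eigenvalues in ascending order, so reverse them.
    eigvals, eigvecs = np.linalg.eigh(K)
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

    # Projections of the training points onto the leading components: for a
    # unit-norm eigenvector a_k with eigenvalue mu_k, the k-th score of
    # point i is sqrt(mu_k) * a_k[i].
    lambdas = np.maximum(eigvals[:n_components], 0.0)
    return eigvecs[:, :n_components] * np.sqrt(lambdas)

Calling kernel_pca(X, 2) on an (N, d) array X returns an (N, 2) array of component scores; library implementations additionally handle out-of-sample projection and numerical details.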


One easy, but significant, improvement would be to include in the example a kernel equivalent to 'ordinary' PCA, because at the moment it is not clear what the advantage of the method is. For instance, the caption for the first kernel in the current example says "groups are distinguishable using the first component only", but to a layperson this also seems to be true for the second kernel in the current example. This should also be clarified.
It would also be interesting to know (at least broadly) how the technique is implemented conceptually, and whether it is supported in standard software packages.
—DIV (128.250.247.158 (talk) 07:19, 29 July 2009 (UTC))
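
On the first point: kernel PCA with a plain linear kernel, k(x, y) = x·y, reduces to ordinary PCA, so that would be a natural baseline for the example. A hedged sketch of that check in Python/NumPy (random data; the variable names are only illustrative):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
Xc = X - X.mean(axis=0)          # center in input space

# Ordinary PCA: eigendecomposition of the covariance matrix, then projection.
w, V = np.linalg.eigh(Xc.T @ Xc / len(Xc))
pca_scores = Xc @ V[:, ::-1][:, :2]

# Kernel PCA with a linear kernel: for a linear kernel, centering in feature
# space coincides with centering in input space, so Xc @ Xc.T is already the
# centered Gram matrix.
mu, A = np.linalg.eigh(Xc @ Xc.T)
kpca_scores = A[:, ::-1][:, :2] * np.sqrt(np.maximum(mu[::-1][:2], 0.0))

# The two sets of scores agree up to the arbitrary sign of each component.
print(np.allclose(np.abs(pca_scores), np.abs(kpca_scores)))

On software support: implementations do exist in standard packages, for example scikit-learn's sklearn.decomposition.KernelPCA and the kpca function in the R package kernlab, both of which handle the centering and scaling details internally.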