Talk:Kernel principal component analysis
On redirection to SVM
What is the relationship between kernel PCA and SVMs? I don't see any direct connection. //Memming 15:50, 17 May 2007 (UTC)
- There is no direct relation; this is a common mistake. Not every kernel method involves an SVM; kernels are a more general concept in mathematics.
- Then I'll break the redirection to SVM. //Memming 12:00, 21 May 2007 (UTC)
Data reduction in the feature space
In the literature, I found how to center the input data in the feature space. However, I have never found a way to reduce the data in the feature space, so if anyone has knowledge about it, I would be glad if they could explain that topic here or give a few links.
Example of kPCA projection
There is something wrong with the example given. It looks like the kernel matrix was not centered before the eigendecomposition. Is this an acceptable modification of the algorithm? If it is, in which cases does it make sense not to center the kernel matrix before the other calculations?
See more on: http://agbs.kyb.tuebingen.mpg.de/km/bb/showthread.php?tid=1062 —Preceding unsigned comment added by 188.8.131.52 (talk) 01:20, 13 January 2010 (UTC)
Also, is it possible to show working kernel PCA code that reproduces the example plots? I tried using a Gaussian kernel, and I only obtain similar results when I use \sigma = 2, not \sigma = 1. —Preceding unsigned comment added by 184.108.40.206 (talk) 10:36, 5 February 2011 (UTC)
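- I don't know which code or kernel width was used for the article's plots, but here is a minimal NumPy sketch of kernel PCA with a Gaussian kernel, including the centering step discussed above, that one could use to experiment with different values of \sigma:

```python
import numpy as np

def kernel_pca(X, sigma=1.0, n_components=2):
    """Kernel PCA with a Gaussian (RBF) kernel; returns training projections."""
    # Pairwise squared distances, then the Gaussian kernel matrix
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-d2 / (2.0 * sigma**2))

    # Center the kernel matrix: K' = K - 1_n K - K 1_n + 1_n K 1_n
    n = K.shape[0]
    one_n = np.ones((n, n)) / n
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n

    # Eigendecomposition (eigh returns eigenvalues in ascending order)
    eigvals, eigvecs = np.linalg.eigh(Kc)
    idx = np.argsort(eigvals)[::-1][:n_components]
    lams, alphas = eigvals[idx], eigvecs[:, idx]

    # Normalize so that lambda_k * (alpha^k)^T alpha^k = 1
    alphas = alphas / np.sqrt(lams)

    # Projections of the training points onto the principal components
    return Kc @ alphas
```

If the article's plots were made without the centering step, that would explain the discrepancy; removing the three centering terms above reproduces the uncentered variant.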
This looks like an interesting subject, but I don't entirely follow what's here. I gather that the "kernel trick" essentially allows you to perform a nonlinear transform on your data. First, I think the example needs to be explained further. There are two output images that don't flow with the text. The text also only goes part way in describing how this is done. Here are some questions:
- How do you choose a kernel?
- How is the PCA performed? (Is it really just linear regression on transformed data by eigendecomposition of the covariance matrix of the transformed points?)
- If the nonlinear transformation is done implicitly by replacing the inner product in the PCA algorithm, then doesn't that mean you need to do something other than a simple eigendecomposition?
- One easy, but significant, improvement that could be made is to include in the example a kernel equivalent to 'ordinary' PCA. At the moment it is not clear what the advantage is. For instance, the first kernel in the current example says "groups are distinguishable using the first component only", but this also seems to be true (to a layperson) of the second kernel in the current example. This should also be clarified.
- It would also be interesting to know (at least broadly) how the technique is implemented conceptually, and whether it is supported in standard software packages.
- —DIV (220.127.116.11 (talk) 07:19, 29 July 2009 (UTC))
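- On the question of a kernel equivalent to ordinary PCA: with a linear kernel K = X X^T, kernel PCA should reproduce the ordinary PCA projections (up to a sign flip per component). A NumPy sketch of that check, under the usual centering conventions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
Xc = X - X.mean(axis=0)                      # center the data

# Ordinary PCA: eigendecompose the covariance matrix of the centered data
cov = Xc.T @ Xc / len(Xc)
w, V = np.linalg.eigh(cov)
order = np.argsort(w)[::-1][:2]
pca_proj = Xc @ V[:, order]                  # scores on the top two components

# Kernel PCA with a linear kernel, including the centering of K
K = Xc @ Xc.T
n = len(K)
one_n = np.ones((n, n)) / n
Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
mu, U = np.linalg.eigh(Kc)
order_k = np.argsort(mu)[::-1][:2]
alphas = U[:, order_k] / np.sqrt(mu[order_k])  # normalize: mu * alpha^T alpha = 1
kpca_proj = Kc @ alphas

# The two sets of projections agree up to a sign flip per component
for j in range(2):
    a, b = pca_proj[:, j], kpca_proj[:, j]
    assert min(np.abs(a - b).max(), np.abs(a + b).max()) < 1e-6
```

As for standard software: implementations exist in common machine-learning packages (for example scikit-learn's `sklearn.decomposition.KernelPCA`), though I can't speak to which conventions the article's figures follow.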
The mystery of Y
Normalization of eigenvectors
Should the normalization condition on the eigenvectors include a transpose on the first eigenvector?
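- For what it's worth, the condition is usually written with an inner product, which is the same thing as a transpose on the first factor. A sketch of where it comes from (conventions differ by a factor of N depending on whether it is absorbed into \lambda_k; symbols here follow the eigenvalue problem K \mathbf{a}^k = \lambda_k \mathbf{a}^k):

```latex
% The feature-space eigenvector is expanded in the mapped points:
%   v^k = \sum_i a_i^k \, \Phi(x_i)
% Requiring \|v^k\| = 1 gives the normalization condition:
1 = (v^k)^{\top} v^k
  = \sum_{i,j} a_i^k a_j^k \, \Phi(x_i)^{\top} \Phi(x_j)
  = (\mathbf{a}^k)^{\top} K \, \mathbf{a}^k
  = \lambda_k \, (\mathbf{a}^k)^{\top} \mathbf{a}^k .
```

So yes, written out in matrix notation the first eigenvector carries a transpose; written with angle brackets, \lambda_k \langle \mathbf{a}^k, \mathbf{a}^k \rangle = 1 says the same thing.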