Code covered by the BSD License  



4.6 | 5 ratings · 177 Downloads (last 30 days) · File Size: 2.17 KB · File ID: #27319

Kernel PCA



Non-linear dimension reduction using kernel PCA.


File Information

This technique takes advantage of the kernel trick as applied to PCA. It is intended as a tutorial and is slow for large data sets.
The kernel can be changed on line 30; any valid kernel should work.
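For readers who want to see the whole pipeline in one place, here is a minimal sketch of kernel PCA in Python/NumPy. The original submission is MATLAB; the function and parameter names below are illustrative assumptions, and the Gaussian kernel can be swapped for any other kernel, just as on line 30 of the MATLAB code.

```python
import numpy as np

def kernel_pca(X, n_components, gamma=1.0):
    """Kernel PCA sketch with a Gaussian (RBF) kernel.

    X: (n_features, n_samples) data matrix, matching the MATLAB
    column-per-sample convention. Returns the projections of the
    training samples onto the top n_components kernel principal
    components.
    """
    n = X.shape[1]
    # Pairwise squared distances -> Gaussian kernel matrix
    # (swap in any other kernel here)
    sq = np.sum(X**2, axis=0)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X.T @ X))
    # Center the kernel matrix in feature space
    one_n = np.ones((n, n)) / n
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Eigendecomposition; eigh returns eigenvalues in ascending order
    eigval, eigvec = np.linalg.eigh(Kc)
    idx = np.argsort(eigval)[::-1][:n_components]
    lam, alpha = eigval[idx], eigvec[:, idx]
    # Normalize eigenvectors so the feature-space components have unit norm
    alpha = alpha / np.sqrt(lam)
    # Projections of the training data onto the components
    return Kc @ alpha
```

Each returned column is a kernel principal component score for all samples; the eigendecomposition is O(n^3) in the number of samples, which is why the tutorial is slow on large data sets.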
Ref :

MATLAB release: MATLAB 7.9 (R2009b)
Comments and Ratings (13)
13 Oct 2013 chen

Thank you for your file, it's great!

22 Sep 2013 Pei-feng


28 Sep 2011 BigZero

I think there is a mistake in this implementation: the last step, the feature-vector dimension-reduction procedure, is incorrect, since you cannot do it that way. If you do it that way, how can you tell the difference between PCA and KPCA? We should do it using the inner-product form.
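BigZero's point, projecting via inner products (kernel evaluations) rather than explicit feature vectors, can be sketched as follows in Python/NumPy. The names and the RBF kernel here are illustrative assumptions, not the submission's actual code; note that the test kernel vector must be centered consistently with the training kernel matrix.

```python
import numpy as np

def project_new(X_train, alpha, K_train, x_new, gamma=1.0):
    """Project a new point onto kernel principal components using only
    inner products (kernel evaluations), never an explicit feature vector.

    X_train: (n_features, n_samples) training data;
    alpha: (n_samples, n_components) normalized eigenvectors of the
    centered training kernel matrix;
    K_train: uncentered (n_samples, n_samples) training kernel matrix.
    """
    n = X_train.shape[1]
    # Kernel values between the new point and every training point
    d = X_train - x_new[:, None]
    k = np.exp(-gamma * np.sum(d**2, axis=0))
    # Center the test kernel vector consistently with the training centering
    one = np.ones(n) / n
    k_c = k - one @ K_train - k.mean() + one @ K_train @ one
    return k_c @ alpha
```

For a training point, this reproduces the corresponding row of the projected training data, which is a useful sanity check on the centering.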

02 Sep 2011 Md. Ali Hossain

I am unable to run the algorithm for a large data set. I am working on a hyperspectral image of size 191x370856. What can I do, and what is the best solution?

14 Aug 2011 Natalie

Sorry Enrique, I don't understand your second point. Surely it doesn't matter whether you normalize just the non-zero eigenvalues or all of the eigenvalues, since the zero eigenvalues won't change the projections? Have I missed something? Thanks.

11 Jun 2011 rem

I used this code for face recognition. I chose a Gaussian kernel and a polynomial kernel, but I have not seen any improvement over Eigenfaces.

27 Apr 2011 Weizhi Li  
08 Apr 2011 Sanjay

I used your algorithm and it works fine. I took the entire data set, which is 5x3000, and after simulation I got new data. Can I use this new data for support vector regression?

Please advise.

10 Feb 2011 Kasper Marstal

Thank you for a great submission Ambarish, it helped a lot in figuring out Schölkopf's "Nonlinear Component Analysis as a Kernel Eigenvalue Problem".

And thanks for the dear comments Enrique, I have thought a lot about 1) and 2) as well, but was afraid I was the one mistaken, as you often are when learning new stuff.

Thanks a bunch you two!

18 Jan 2011 Enrique Corona

Sorry about my last two blank comments. Mouse double-click errors.

Very nice code! However I have a couple of subtle comments:

1) When you center the Kernel matrix be sure to divide the "ones" matrix by the number of samples i.e.
Line 43: one_mat = ones(size(K));
should read
Line 43: one_mat = ones(size(K))./size(data_in,2);

2) Eigenvector normalization implies dividing each of the columns of the eigenvector matrix by the sqrt of its corresponding eigenvalue. Do so by substituting:
Line 61: eigvec(:,col) = eigvec(:,col)./(sqrt(eig_val(col,col)));
to read
Line 61: eigvec(:,col) = eigvec(:,col)./(sqrt(eigval(col,col)));

3) Also, if you must do eigenvalue sorting, be careful to use only the diagonal of the "Lambda" matrix and not the whole matrix. Use:

Line 63: [dummy, index] = sort(diag(eigval),'descend');
instead of
Line 63: [dummy, index] = sort(eig_val,'descend');

You can also see "Learning with Kernels" by B. Scholkopf and A. Smola, Section 14.2 (particularly eqs 14.14 and 14.17). Hope this is helpful.
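Enrique's first point is the standard kernel-centering step. The quick NumPy check below (illustrative, not part of the submission) confirms that centering with a ones(n,n)/n matrix reproduces explicit data centering in the linear-kernel case, where the feature map is the identity and the result can be verified directly:

```python
import numpy as np

# Verify that centering the kernel matrix with one_mat = ones/n is
# equivalent to explicitly centering the data, using a linear kernel
# so both sides can be computed exactly.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))            # 4 features, 8 samples
n = X.shape[1]
K = X.T @ X                                # linear kernel
one_mat = np.ones((n, n)) / n              # the corrected "ones" matrix
Kc = K - one_mat @ K - K @ one_mat + one_mat @ K @ one_mat
Xc = X - X.mean(axis=1, keepdims=True)     # explicit centering in input space
assert np.allclose(Kc, Xc.T @ Xc)          # both routes agree
```

Without the division by n, the subtraction removes n times the mean and the resulting matrix is not a valid centered Gram matrix.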

18 May 2010 Ambarish Jash

Alexander, try [aa,index] = sort(eig_val,'descend');
clear aa
% Maybe you cannot ignore the output from sort.

13 May 2010 Alexander Patrushev

I use Matlab 2008 (7.6) and there is an error at this line:
[~, index] = sort(eig_val,'descend');
Expression or statement is incorrect--possibly unbalanced (, {, or [.

How could I fix that?

13 May 2010 Alexander Patrushev

Good example!
