Thanks, this was a very good starting point for KKR.

I have used it and it has worked well so far, but it is not very efficient for large datasets.

1) Why not define something like

x_in=0.5*diag(ones(1,size(tot_data,2)));

instead of

x_in=zeros(size(tot_data,2),size(tot_data,2));

so that the loop in lines 40-42 can simply be removed, provided the kernel-matrix calculation in lines 33-38 is modified slightly so as not to touch the diagonal, since we know it is going to be 1 anyway.
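A minimal sketch of what I mean (the variable name tot_data comes from the script; the Gaussian/RBF kernel and the parameter sigma are my assumptions about what lines 33-38 compute):

```matlab
n = size(tot_data, 2);
x_in = 0.5 * eye(n);            % pre-fill half the diagonal, as suggested above
for i = 1:n
    for j = i+1:n               % fill the upper triangle only; skip the diagonal
        d = tot_data(:,i) - tot_data(:,j);
        x_in(i,j) = exp(-(d'*d) / (2*sigma^2));   % RBF kernel (assumed)
    end
end
K = x_in + x_in';               % symmetrize; the diagonal becomes exactly 1
```

This also halves the number of kernel evaluations, since the matrix is symmetric anyway.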

2) Line 50 obtains the alphas using the inv() function, which is really slow and which, according to MathWorks, should basically be avoided whenever possible: http://blogs.mathworks.com/loren/2007/05/16/purpose-of-inv/
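Concretely, if line 50 looks anything like the first line below (I am guessing at the variable names K and y), the backslash operator does the same job faster and more accurately:

```matlab
% alphas = inv(K) * y;   % slow, numerically worse (assumed shape of line 50)
alphas = K \ y;          % mldivide solves the linear system directly
```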

3) It is clearly possible to merge the multiple loops that compute the kernel matrix with those that compute final_ans, so that only one loop is needed. I came up with one possibility; I am really not sure it is the best (which is why I do not post it here), but it works for what I do. It lets me handle, for example, in_data of size 2x10000 on my laptop, which takes a really, really long time with the original implementation (both because of the multiple loops and because of inv()).
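For what it is worth, the kernel matrix itself can even be computed with no explicit loop at all. A sketch, again assuming columns of tot_data are samples and an RBF kernel with width sigma:

```matlab
% ||xi - xj||^2 = ||xi||^2 + ||xj||^2 - 2*xi'*xj, computed for all pairs at once
D2 = sum(tot_data.^2, 1);                           % 1 x n squared norms
K  = exp(-(D2.' + D2 - 2*(tot_data.' * tot_data)) / (2*sigma^2));
```

This is far faster than nested loops for thousands of points, at the cost of forming an n x n matrix in memory.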

I think there is a mistake in this implementation: the last step, the feature-vector dimension-reduction procedure, is incorrect, since you cannot do it this way. If you could, how would you tell the difference between PCA and KPCA? We should do it using the inner-product form.
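To spell out the inner-product form being referred to: in KPCA the feature map is never formed explicitly, so the projection of a point x onto the k-th principal component must be written as sum_i alphas(i,k) * kernel(x_i, x). A hedged sketch (train_data, alphas, sigma, and the RBF kernel are all assumed names, not from the original code):

```matlab
% Project a new column vector x onto the retained kernel principal components
% using only kernel evaluations against the training samples.
kx   = exp(-sum((train_data - x).^2, 1).' / (2*sigma^2));  % n x 1 kernel column (RBF assumed)
proj = alphas.' * kx;   % k x 1 scores; alphas is n x k, one eigenvector per column
```

(For a properly centered KPCA, kx should also be centered with the same centering applied to the training kernel matrix.)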