Rank: 441 based on 264 downloads (last 30 days) and 7 files submitted

Ambarish Jash

Company/University: University of Colorado, Boulder

Personal Profile:

Currently I am the chief research analyst at WAVi Co.


 


 

Files Posted by Ambarish Jash
03 Nov 2010 - Eigen Function of the Laplacian: The main file Diffusion_Family.m gives a low-dimensional embedding in 3 different ways. Tags: signal processing, data exploration, modeling, nonlinear, manifold learning, dimension reduction. Downloads (last 30 days): 11. Comments: 0.

22 Apr 2010 - Mandelbrot Set: Generates a Mandelbrot set. Tags: fractal. Downloads (last 30 days): 9. Comments: 0.

20 Apr 2010 - Kernel PCA: Non-linear dimension reduction using kernel PCA. Tags: signal processing, nonlinear, kernel technique, dimension reduction, pca, kernel pca. Downloads (last 30 days): 185. Comments: 12. Rating: 4.6 (5 ratings).

14 Apr 2010 - Kernel Ridge Regression: Ridge regression implemented using the Gaussian kernel. Tags: signal processing, statistics, nonlinear, regression. Downloads (last 30 days): 31. Comments: 2. Rating: 5.0 (2 ratings).

31 May 2009 - Variance while using xcov/xcorr: The code calculates the variance in a calculated correlation function. Tags: signal processing, statistics. Downloads (last 30 days): 3. Comments: 0.
Comments and Ratings by Ambarish Jash
18 May 2010 - Kernel PCA (Non-linear dimension reduction using kernel PCA. Author: Ambarish Jash)

Lexander, try:

[aa, index] = sort(eig_val, 'descend');
clear aa
% Maybe you cannot ignore the first output from sort.
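The descending eigen-sort suggested above can be sketched in NumPy as well. Here `K`, `eig_val`, and `eig_vec` are illustrative names and data, not taken from the original file:

```python
import numpy as np

# A small symmetric matrix standing in for the kernel matrix
# in the original file (illustrative data only).
K = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eigh returns eigenvalues in ascending order,
# so reorder both eigenvalues and eigenvectors to descending.
eig_val, eig_vec = np.linalg.eigh(K)
index = np.argsort(eig_val)[::-1]
eig_val = eig_val[index]
eig_vec = eig_vec[:, index]   # keep columns paired with their eigenvalues
```

Unlike MATLAB's `sort`, `argsort` returns only the indices, so there is no dummy output to `clear`.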

22 Jun 2009 - Google Earth Toolbox (Various plotting/drawing functions that can be saved as KML output and loaded in Google Earth. Author: scott lee davis)

For ge_quiver there is no option to scale the length of the arrows. Could you please add that? Thank you.

02 Jun 2009 - Speed using nvidia graphics card and accelerEyes jacket (The code tests the speed-up of matrix multiplication. Author: Ambarish Jash)

AccelerEyes Jacket has to be installed. Also, the CUDA package has to be on the MATLAB path.


Comments and Ratings on Ambarish Jash's Files
21 Oct 2014 - Kernel Ridge Regression (This is ridge regression implemented using the Gaussian kernel. Author: Ambarish Jash). Comment by Jiyeoup Jeong:

@Louis-Francois: If you normalize ahead of this, x_in is slightly different from what you've seen before.

28 Oct 2013 - Kernel Ridge Regression (This is ridge regression implemented using the Gaussian kernel. Author: Ambarish Jash). Comment by Louis-Francois:

Thanks, it helped a lot as a good starting point for KRR.

I have used it and it has worked well so far, but it is not very efficient for large datasets.

1) Why not define something like

x_in=0.5*diag(ones(1,size(tot_data,2)));

instead of

x_in=zeros(size(tot_data,2),size(tot_data,2));

so that the loop of lines 40-42 can simply be removed, provided the calculation of the Kernel matrix in lines 33-38 is modified slightly so as not to touch the diagonal, since we know what it is going to be anyway.

2) Line 50 obtains the alphas using the inv() function, which is really slow and which, according to MathWorks, should basically be avoided as much as possible: http://blogs.mathworks.com/loren/2007/05/16/purpose-of-inv/

Using something like

Klam = x_in + lamda*eye(size(x_in));
alpha = Klam\out_data;

makes it much, much faster.

3) It is clearly possible to merge the multiple loops that calculate the Kernel matrix with those that calculate final_ans, so that only one loop is used. I came up with one possibility; I am really not sure it is the best (hence why I do not post it here), but it works for what I do. It also enables me to handle, for example, in_data of size 2x10000 on my laptop, which takes really, really long in the original implementation (because of both the multiple loops and the use of inv()).

Thanks again for the function.
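The speed advice in point 2 is easy to verify. A minimal NumPy sketch, with a made-up Gram matrix and made-up names (np.linalg.solve plays the role of MATLAB's backslash operator):

```python
import numpy as np

# Build a kernel-like symmetric positive semi-definite Gram matrix
# from random data (illustrative, not the author's dataset).
rng = np.random.default_rng(0)
n = 200
X = rng.standard_normal((n, 5))
K = X @ X.T + 1e-3 * np.eye(n)
out_data = rng.standard_normal(n)
lamda = 0.1                                # ridge parameter, as in the comment

Klam = K + lamda * np.eye(n)
alpha_solve = np.linalg.solve(Klam, out_data)   # equivalent of Klam\out_data
alpha_inv = np.linalg.inv(Klam) @ out_data      # the slower inv() route
```

solve factorizes Klam once and back-substitutes, which is both faster and numerically more stable than forming the explicit inverse and multiplying.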

13 Oct 2013 - Kernel PCA (Non-linear dimension reduction using kernel PCA. Author: Ambarish Jash). Comment by chen:

Thank you for your file, it's great!

22 Sep 2013 - Kernel PCA (Non-linear dimension reduction using kernel PCA. Author: Ambarish Jash). Comment by Pei-feng:

good

28 Sep 2011 - Kernel PCA (Non-linear dimension reduction using kernel PCA. Author: Ambarish Jash). Comment by BigZero:

I think there is a mistake in this implementation: the last step, the dimension reduction of the feature vectors, is incorrect, since you cannot do it this way. If you do it this way, how can you tell the difference between PCA and KPCA? It should be done using the inner-product form.
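The inner-product form this comment refers to can be sketched as follows: a kernel PCA embedding computed entirely from kernel evaluations, never from explicit feature vectors. All names and data here are illustrative, not taken from the submitted file:

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian kernel matrix between the rows of A and the rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))          # made-up training data
K = rbf_kernel(X, X)

# Double-center the kernel matrix (centering in feature space).
n = K.shape[0]
one_n = np.ones((n, n)) / n
Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n

# Top 2 principal components in feature space, via the eigenvectors of Kc.
eig_val, eig_vec = np.linalg.eigh(Kc)
idx = np.argsort(eig_val)[::-1][:2]
alphas = eig_vec[:, idx] / np.sqrt(np.maximum(eig_val[idx], 1e-12))

# The projection uses only inner products (kernel values),
# which is the point being made about KPCA vs plain PCA.
Z = Kc @ alphas                           # 2-D embedding of the training data
```

Because only kernel values appear, a new point x can be embedded from its vector of kernel evaluations against the training set, which is what distinguishes KPCA from running plain PCA on the raw data.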
