Which algorithm does the SVD function use?

Hi MATLAB support,
Could you tell me which algorithm (algorithm name and basic description) the MATLAB svd function uses for singular value decomposition (in MATLAB R2020b)? When I compared MATLAB's decomposition results with Scilab's, I found they were different. Both satisfy the SVD definition ([U,S,V] = svd(A), U*S*V' = A), but they produce different U and V.
I hope you can help.
Thank you,
Regards,
Bin

 Accepted Answer

Matt J
Matt J on 29 Jan 2024
Edited: Matt J on 29 Jan 2024
MathWorks does not disclose the SVD algorithm, but even if you could be sure that MATLAB and Scilab used the same algorithm, that would not help you avoid or understand differences in the results. Because your matrix has a non-unique SVD, the computation is unstable and can produce large differences from one implementation to another, simply because the code implementations differ.

13 Comments

That is the algorithm for an early version of the MATLAB svd function. Do you have any idea which algorithm the new versions of MATLAB use?
Matt J
Matt J on 29 Jan 2024
Edited: Matt J on 29 Jan 2024
No, but again, I don't think it matters. Differences in the algorithm are not the only thing that can contribute to different results.
I once tested this by creating 1000x1000 and 4000x4000 random matrices A in MATLAB, computing the decomposition matrices U, S, V in MATLAB, saving the results to a *.mat file, and then loading U, S, V, and A into Scilab from that file. Scilab's SVD of A then gives U1, S1, V1. Comparing the results showed that the max difference between S and S1 reached 1e-14, but for U vs. U1 and V vs. V1 the max difference reached 0.xxx. The max values of U*S*V' - A and U1*S1*V1' - A are both close to 1e-14 (i.e., both decompositions satisfy the SVD definition), yet the decomposition matrices themselves are very different. So the algorithm matters.
The SVD is only one step in our larger algorithm. A different SVD makes our MATLAB results totally different from our Scilab results. That's why I am interested in the SVD algorithm used by MATLAB.
Matt J
Matt J on 29 Jan 2024
Edited: Matt J on 29 Jan 2024
Comparing results showed that max difference in S and S1 reached 1e-14, but for U and U1 or for V and V1, max difference reached 0.xxx.
Yes, the SVD is non-unique. Consider the matrix A = eye(N). It has a continuous infinity of SVDs: we can choose U = V = R for any orthogonal matrix R, since then U*S*V' = R*R' = eye(N).
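A quick sketch of this point (the variable names are just for illustration):

```matlab
% Sketch: any orthogonal R yields a valid SVD of the identity matrix.
N = 5;
[R,~] = qr(randn(N));        % random orthogonal matrix
U = R; S = eye(N); V = R;    % then U*S*V' = R*R' = eye(N)
norm(U*S*V' - eye(N))        % tiny (machine precision): a perfectly valid SVD
```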
but their decomposition matrix form are very different. So the algorithm matters.
By "algorithm", I'm talking about the mathematical recipe, not the way that it is coded. I'm saying that even if the mathematical recipe were the same in Matlab and SciLab, they are implemented with different code, and that would be enough to see large differences.
@Matt J is completely correct here. The vectors can have arbitrary factors of -1. That changes nothing in the result, and knowing the "algorithm" would not help you predict it.
And of course, the really small singular values may be close enough that they are essentially equal. In that case, the corresponding singular vectors are also non-unique and can be rotated arbitrarily within their subspace.
And all of this can come down to tiny noise in the least significant bits of your numbers, which is uncontrollable. Again, even perfect knowledge of the algorithm will not help.
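For instance, here is a sketch of the sign ambiguity (arbitrary matrix size, for illustration only):

```matlab
% Sketch: flipping the signs of a matching column pair of U and V
% leaves U*S*V' unchanged, so both versions are valid SVDs of A.
A = randn(6);
[U,S,V] = svd(A);
k = 3;                      % any column index
U(:,k) = -U(:,k);           % flip the k-th left singular vector...
V(:,k) = -V(:,k);           % ...and the matching right singular vector
norm(U*S*V' - A)            % still tiny: same decomposition, different signs
```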
@Bin "I ever tested creating 1000x1000 and 4000x4000 random matrix A in Matlab, then get the decomposition matrix U, S, V in Matlab, then save the results to one *.mat file and load U, S, V and A in Scilab by *.mat file, after this step we can get U1,S1,V1 in Scilab SVD for matrix A. Comparing results showed that max difference in S and S1 reached 1e-14, but for U and U1 or for V and V1, max difference reached 0.xxx. Test max value of U*S*V' - A and U1*S1*V1' - A is close to 1e-14 (It means two decomposition meet SVD definition), but their decomposition matrix form are very different. So the algorithm matters."
Have you swapped the signs of the singular vectors before comparing? (The sign can be determined numerically by considering the correlation between the two singular vectors after matching the singular values.)
A random matrix of order 1000 seems to have a condition number around 1e5 (reasonable for double-precision calculation), and it is unlikely that any singular values repeat, so I would expect MATLAB and Scilab to match better than 0.xxx.
To prove the point, I computed the singular vectors by two methods, svd and eig, for a random 1000x1000 matrix. They match to 11 digits, not 0.xxx. IMO, if you find a difference in the first digit between Scilab and MATLAB, the comparison was not carried out correctly. You need to sort the singular vectors and flip their signs before comparing.
One can observe that the error is larger for the smaller singular values; this is expected, but they are still accurate to many digits (no fewer than 10).
n = 1000;
A=randn(n);
[U,S,V]=svd(A);
s = diag(S); % by default, s from svd is sorted in descending order
B = A'*A;
B = 1/2*(B + B');
[W,D]=eig(B);
s1 = sqrt(diag(D));
[s1,is] = sort(s1,'descend');
W = W(:,is);
sf = sign(sum(W.*V,1));
W = W.*sf;
errV = vecnorm(V-W, Inf, 1);
norm(errV, inf)
ans = 1.0861e-12
B = A*A';
B = 1/2*(B + B');
[X,D]=eig(B);
s2 = sqrt(diag(D));
[s2,is] = sort(s2,'descend');
X = X(:,is);
sf = sign(sum(X.*U,1));
X = X.*sf;
errU = vecnorm(U-X, Inf, 1);
norm(errU, inf)
ans = 5.6631e-13
% Three different SVDs
norm(A-U*S*V')
ans = 4.8837e-13
norm(A-X*diag(s1)*W')
ans = 5.2702e-11
norm(A-X*diag(s2)*W')
ans = 5.2709e-11
close all
figure
subplot(2,1,1)
semilogy(errV)
xlabel('singular vector j');
ylabel('inf norm (V(:,j)-W(:,j))')
subplot(2,1,2)
semilogy(errU)
xlabel('singular vector j');
ylabel('inf norm (U(:,j)-X(:,j))')
@Bruno Luong Attachments larger than 5 MB are not allowed, so I created a 100x100 random matrix aa and saved all the matrices from MATLAB and Scilab to the file temp1.mat. u1, s1, v1 are the Scilab SVD results; u, s, v are the MATLAB SVD results. You can download the file and run the following code in MATLAB:
clear
load temp1.mat u s v u1 v1 s1 aa
disp('Scilab decomposition test');
[max(max(u1*s1*v1'-aa)) min(min(u1*s1*v1'-aa))]
disp('Matlab decomposition test');
[max(max(u*s*v'-aa)) min(min(u*s*v'-aa))]
disp('Max difference in u1 and u, Min difference in u1 and u');
[max(max(u1-u)) min(min(u1-u))]
disp('Max difference in v1 and v, Min difference in v1 and v');
[max(max(v1-v)) min(min(v1-v))]
disp('Test whether the first elements of each row of u1 and u have the same sign');
[max(sign(u1(:,1))-sign(u(:,1))) min(sign(u1(:,1))-sign(u(:,1)))]
disp('Test whether the first elements of each row of v1 and v have the same sign');
[max(sign(v1(:,1))-sign(v(:,1))) min(sign(v1(:,1))-sign(v(:,1)))]
The result is:
Scilab decomposition test
ans =
1.0e-14 *
0.7105 -0.4607
Matlab decomposition test
ans =
1.0e-14 *
0.5662 -0.5995
Max difference in u1 and u, Min difference in u1 and u
ans =
0.6751 -0.7299
Max difference in v1 and v, Min difference in v1 and v
ans =
0.6810 -0.7368
Test whether the first elements of each row of u1 and u have the same sign
ans =
0 0
Test whether the first elements of each row of v1 and v have the same sign
ans =
0 0
The flaw in your comparison method is that you have assumed the singular vectors are the rows of u and v, when in fact they are the columns.
Below is the proper way to align the directionality of the singular vectors, and I'm seeing excellent agreement between u,u1 and v,v1.
load('temp1.mat')
s=sign(dot(u,u1));
maxdifference = norm(reshape(u-s.*u1,[],1),inf)
maxdifference = 5.5812e-13
s=sign(dot(v,v1));
maxdifference = norm(reshape(v-s.*v1,[],1),inf)
maxdifference = 6.5797e-13
You are correct. Thank you very much for pointing that out. The SVD is only part of the program in our Scilab and MATLAB code, and sign differences in the columns of u and v make our final calculation results very different. I will try to compute W and X in Scilab as in your code. I would like to get the 'same' U and V in Scilab as in MATLAB so our calculation results are identical.
As stated, your comparison is off. Here is the correct comparison; the SVDs from MATLAB and Scilab match to 12 digits for this small 100x100 matrix:
load('temp1.mat')
[s,is] = sort( diag(s),'descend');
U = u(:,is); V = v(:,is);
[s1,is] = sort(diag(s1),'descend');
X = u1(:,is); W = v1(:,is);
sf = sign(sum(W.*V,1));
W = W.*sf;
errV = vecnorm(V-W, Inf, 1);
norm(errV, inf)
ans = 6.5797e-13
sf = sign(sum(X.*U,1));
X = X.*sf;
errU = vecnorm(U-X, Inf, 1);
norm(errU, inf)
ans = 5.5812e-13
close all
figure
subplot(2,1,1)
semilogy(errV)
xlabel('singular vector j');
ylabel('inf norm (V(:,j)-W(:,j))')
subplot(2,1,2)
semilogy(errU)
xlabel('singular vector j');
ylabel('inf norm (U(:,j)-X(:,j))')
Bruno Luong
Bruno Luong on 30 Jan 2024
Edited: Bruno Luong on 30 Jan 2024
Matt's answer did not reveal the problem you had; Catalic's answer is right on.
Bruno Luong
Bruno Luong on 30 Jan 2024
Edited: Bruno Luong on 30 Jan 2024
@Bin " I would like to get 'same' U, V in Scilab like that in Matlab to make our calculation results identical."
You are wasting your time. The signs of the singular vectors are unpredictable. You could, however, force the signs to your own specification (*), but as they come out of the algorithm they are essentially arbitrary.
(*) For example, pick the element of largest absolute value in each singular vector and set the sign so that this element is positive (assuming there is no tie with opposite signs). Also, you cannot flip the sign of a column of U on its own: you must flip the matching column of V as well, since U*S*V' must still equal A.
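A hedged sketch of that convention (A here is just a placeholder for whatever matrix you decompose; it assumes the largest-magnitude element is not exactly zero):

```matlab
% Sketch: normalize signs so the largest-magnitude element of each
% column of U is positive, flipping the matching column of V as well
% so that U*S*V' is preserved.
A = randn(5);                           % placeholder matrix for illustration
[U,S,V] = svd(A);
[m,n] = size(U);
[~,imax] = max(abs(U),[],1);            % row of the largest |element| per column
sf = sign(U(imax + (0:n-1)*m));         % its sign, via linear indexing
U = U.*sf;                              % implicit expansion (R2016b+)
V = V.*sf;                              % flip V columns identically
```

After this normalization, two SVDs of the same matrix with distinct singular values should agree column-for-column, not just up to sign.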


More Answers (2)

Comparing results showed that max difference in S and S1 reached 1e-14, but for U and U1 or for V and V1, max difference reached 0.xxx.
We would have to see S and S1, so you might wish to attach them for us in a .mat file. If all the S(i) are distinct, as I would expect for a random matrix A, the singular vectors are unique up to a difference in sign. So, when you calculate the max difference, you should make sure you accounted for a sign flip.
If some singular values S(i) are repeated, then the singular vectors are unstable and, as @Matt J mentions, you can expect to see all kinds of numerical differences for all kinds of reasons.

3 Comments

@Bruno Luong I'm very pleased you consider my answer to merit +1 (but you didn't actually upvote it) :-)
I didn't, because it doesn't actually answer the question "which algorithm", but I think you are right on the diagnostic.


Release: R2020b
Asked: Bin on 29 Jan 2024
Edited: on 30 Jan 2024
