SVD & EVD of a Symmetric Matrix

Nassi on 26 Nov 2014
Edited: John D'Errico on 27 Nov 2014
I am using the svd() command to compute the decomposition of my symmetric matrix.
The following page
mentions that for a symmetric matrix, the eigenvalue decomposition (EVD) and the singular value decomposition (SVD) are essentially the same. This implies that all vectors in either decomposition should be real.
I have two problems when I use eig(A) and svd(A):
1. At times, a few of the eigenvectors are complex. Why do I get these complex eigenvectors, when theoretically this shouldn't be the case?
2. When using svd(), U'*V is not really an identity matrix: the last few diagonal entries are not 1, and the sign of a few others comes out negative.
Since I am repeating this over randomly generated data, the locations of the negative diagonal entries of U'*V vary. The same is true for (1).
Is this something related to numerical instability?
Thanks

Answers (2)

John D'Errico on 26 Nov 2014
Edited: John D'Errico on 27 Nov 2014
Welcome to the wacky world of floating point arithmetic, which is only occasionally related to mathematics, even though the two often look so similar. The fact is, what is true in mathematics is often not true on your computer when done in floating point arithmetic.
It is difficult to know exactly what problems you have encountered since we are not shown what you tried.
Is your matrix truly symmetric? Or does it just look that way to the few decimal places you have displayed?
Is the matrix numerically singular?
Are there replicate eigenvalues?
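You can check each of those directly. A minimal sketch (assuming your matrix is named A):
norm(A - A', 'fro')    % exactly 0 only if A is exactly symmetric
rcond(A)               % a value near eps suggests A is numerically singular
sort(eig((A + A')/2))  % inspect for (nearly) repeated eigenvalues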
Eigenvectors are unique only up to a sign change (at best). The same is true for the singular vectors, so when you form U'*V there is no reason to expect all ones on the diagonal; -1 is entirely possible. In fact, I would expect it with some frequency: singular values must be nonnegative, so for every negative eigenvalue of a symmetric matrix, svd stores its absolute value and flips the sign of the corresponding column of U (or V), leaving a -1 on the diagonal of U'*V. For example...
A = rand(100);
A = A + A';        % symmetrize: A + A' is exactly symmetric in floating point
[U,S,V] = svd(A);
hist(diag(U'*V))   % the diagonal entries cluster at +1 and -1
When I try that, I see half the diagonal elements are -1. I am not even remotely surprised.
That U'*V is not EXACTLY diagonal is also no surprise. Floating point arithmetic happens. Note that the off-diagonal entries are zero to within any meaningful tolerance.
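To see the connection between eig and svd concretely, here is a small sketch (the variable names are mine): for a symmetric A = Q*L*Q', svd stores the absolute values of the eigenvalues and absorbs the sign of each negative one into the corresponding column of U (or V).
A = randn(5);
A = A + A';                          % symmetric, generally indefinite
[Q,L] = eig(A);
[U,S,V] = svd(A);
sign(diag(U'*V))                     % a mix of +1 and -1
sort(abs(diag(L))) - sort(diag(S))   % roundoff level: singular values = |eigenvalues|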
  9 Comments
Nassi on 27 Nov 2014
I am doing truncation. Thank you for your input.
John D'Errico on 27 Nov 2014
Edited: John D'Errico on 27 Nov 2014
If you generate a matrix using floating point operations and linear algebra, there is a very good chance that it will NOT be truly symmetric. And in that case, eig need not yield entirely real eigenvalues and eigenvectors. Sorry, but you need to learn this fact. You CAN symmetrize a matrix, as I showed you.
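A minimal sketch of that symmetrization (assuming a real, nominally symmetric A):
B = (A + A')/2;    % exactly symmetric in floating point
isreal(eig(B))     % true: eig on an exactly symmetric real matrix returns real results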
Regardless, given a rank deficient matrix A, you are looking for the vector x such that x'*A*x is minimized.
Again, A is rank deficient!
ANY linear combination of the vectors in the nullspace of A will produce a minimum, ZERO! At least, it will be zero to within a tolerance of floating point trash.
Ok, so maybe you really want to find the unique x such that x'*A*x is greater than zero, yet still minimized. But A is singular! So you can add any linear combination of the nullspace vectors to x. I.e., there is no unique minimum.
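A sketch of that non-uniqueness, on a hypothetical rank deficient matrix:
A = randn(5,3);
A = A*A';                    % 5x5 symmetric, rank 3
N = null(A);                 % orthonormal basis for the nullspace
x = N(:,1);
y = N*randn(size(N,2),1);
y = y/norm(y);               % a different unit vector in the nullspace
[x'*A*x, y'*A*y]             % both at roundoff level: no unique minimizer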
You are worrying about the difference between what svd and eig return for a singular matrix here. Of course, since there are repeated singular values/eigenvalues at zero, the corresponding vectors are not unique.
Perhaps it is time to revisit a basic course in linear algebra. Maybe take a basic course in numerical linear algebra.



Youssef Khmou on 27 Nov 2014
Edited: Youssef Khmou on 27 Nov 2014
Floating point problems occur, as @John mentioned. You can verify whether your matrix M is symmetric:
norm(M - M')   % must be 0 for an exactly symmetric matrix
Another way to see floating point effects is via a unitary matrix U. In theory U' = inv(U), but if you try
norm(U' - inv(U))
you get a value around 1e-14 or 1e-15.
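For instance, a quick sketch using an orthogonal matrix from a QR factorization:
[U,~] = qr(randn(100));   % U is orthogonal (real unitary)
norm(U' - inv(U))         % roughly 1e-14, not exactly zero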
  3 Comments
John D'Errico on 27 Nov 2014
If your matrix is numerically singular, and you use the singular vector with minimum singular value, then you should expect to get numerical trash, which will randomly be negative or positive, near zero.
Theory is not relevant when you are using floating point arithmetic in this case.
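A sketch of what that trash looks like, on a hypothetical singular matrix:
A = randn(6,4);
A = A*A';       % 6x6 symmetric, rank 4: two ~zero singular values
[U,S,V] = svd(A);
v = V(:,end);   % singular vector for the smallest singular value
v'*A*v          % around +/-1e-15; sign and magnitude are just roundoff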
Nassi on 27 Nov 2014
Edited: Nassi on 27 Nov 2014
Yes, I agree. That is why I am using U-V to find the location of the smallest non-zero singular value, something similar to SVD truncation. But what puzzles me more are the complex eigenvectors I am getting for my symmetric matrix A. I can't figure out the reason for that.
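For reference, a common tolerance-based SVD truncation looks like this (a sketch, assuming a matrix A; the rank rule mirrors what rank() uses):
[U,S,V] = svd(A);
s = diag(S);
r = sum(s > max(size(A))*eps(s(1)));   % numerical rank
Ar = U(:,1:r)*S(1:r,1:r)*V(:,1:r)';    % best rank-r approximation of A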
