Compute a set of normalized, A-conjugate vectors from the basis W. Vectors can easily be A-normalized outside of the function; see below.
W: A linearly independent set of column vectors.
A: A full-rank matrix. An output is still produced if A is not symmetric, but A must be symmetric for the off-diagonal entries of U'*A*U to vanish.
U: A set of mutually A-conjugate vectors that span the same space as W. If A is the identity matrix, so that A = eye(size(W,1)), then U is a set of orthogonal vectors and the method reduces to Gram-Schmidt orthogonalization. The columns of U are normalized.
To A-normalize U(:,k), compute: U(:,k) = U(:,k) / sqrt(U(:,k)'*A*U(:,k)).
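That normalization is just u <- u / sqrt(u'*A*u). A minimal numpy sketch of the same step (the function name and test data here are illustrative, not from the submission):

```python
import numpy as np

def a_normalize(u, A):
    """Scale u so that u' * A * u == 1 (assumes u' * A * u > 0)."""
    return u / np.sqrt(u @ A @ u)

# Example with a small symmetric positive definite A.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
u = np.array([1.0, 3.0])
un = a_normalize(u, A)
print(un @ A @ un)  # 1.0 (up to rounding)
```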
DESCRIPTION OF A-CONJUGACY
Two vectors u and v are A-conjugate if u'*(A*v) = 0. More generally, two elements of a vector space are A-conjugate if <u, A*v> = 0. A-conjugacy arises in generalized eigenvalue problems of the form:
Lu = lambda*A*u
If A is self-adjoint and positive definite, and L is self-adjoint, then the eigenvectors of the operator L are A-conjugate to each other. Such problems arise all the time in vibrational mechanics; the Ritz method is a common application of the A-conjugacy of eigenvectors. In seismology, non-spherical bodies will, in general, have wave-equation eigenvectors that are A-conjugate to each other rather than orthogonal, though the elements of A are likely to be quite small for the Earth.
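To illustrate that claim numerically (a scipy sketch, not part of the submission): scipy.linalg.eigh solves the generalized problem L*u = lambda*A*u, and when A is symmetric positive definite its eigenvectors come back A-orthonormal, hence mutually A-conjugate.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 5
# A symmetric L and a symmetric positive definite A.
M = rng.standard_normal((n, n))
L = M + M.T
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)

w, V = eigh(L, A)      # solves L v = lambda A v
G = V.T @ A @ V        # Gram matrix in the A-inner product
print(np.allclose(G, np.eye(n)))  # True: eigenvectors are A-orthonormal
```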
Rayleigh-Ritz algorithms, polarization algorithms, Gram-Schmidt orthogonalization, eigenvalue problems.
Joshua Carmichael (2021). conjorth (https://www.mathworks.com/matlabcentral/fileexchange/17623-conjorth), MATLAB Central File Exchange. Retrieved .
My http://www.mathworks.com/matlabcentral/fileexchange/51-ortha-m does the same. However, mine also supports A given only as a function performing the product A*X, and should be much faster for large matrices A.
You are doing something like C * ((C'*C) \ C').
The latter part, (C'*C)\C', is the normal-equations form of just C\I. Isn't that preferred for accuracy? It is a bit slower than the (C'*C)\C' form.
You did warn us that it's slow for large matrices ... (the time complexity is probably O(n^4)).
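For concreteness, the expression under discussion builds the orthogonal projector onto the column space of C. A numpy sketch with made-up data (using solve rather than an explicit inverse, in line with the accuracy point above):

```python
import numpy as np

rng = np.random.default_rng(1)
C = rng.standard_normal((6, 3))

# Normal-equations form of the projector: P = C (C'C)^{-1} C',
# computed with a linear solve instead of an explicit inverse.
P = C @ np.linalg.solve(C.T @ C, C.T)

print(np.allclose(P @ P, P))   # idempotent: P is a projector
print(np.allclose(P @ C, C))   # fixes the columns of C
```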
This cuts the time in half (18 seconds to 9 seconds for n=300):
for k = 3:Nw
    C = A*U(:,1:k-1) ;
    projv = (I - C*((C'*C)\C')) * W(:,k) ;
    U(:,k) = projv / norm (projv) ;
end
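A numpy translation of that incremental projection idea, to show it really does yield A-conjugate columns (a sketch only; variable names follow the MATLAB snippet, and W and A here are made-up test data):

```python
import numpy as np

rng = np.random.default_rng(2)
n, Nw = 8, 4
W = rng.standard_normal((n, Nw))
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)    # symmetric positive definite
I = np.eye(n)

U = np.zeros((n, Nw))
U[:, 0] = W[:, 0] / np.linalg.norm(W[:, 0])
for k in range(1, Nw):
    C = A @ U[:, :k]
    # Project W[:,k] away from span(A*U) so the new column is
    # A-conjugate to all previous columns.
    projv = (I - C @ np.linalg.solve(C.T @ C, C.T)) @ W[:, k]
    U[:, k] = projv / np.linalg.norm(projv)

G = U.T @ A @ U
print(np.allclose(G, np.diag(np.diag(G))))  # True: off-diagonals ~ 0
```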
That's odd. Yes, I did download and review the version with "inv" in it. The newer version looks much better. For this updated version, I would suggest writing
projv = (I-P)*W(:,k) ;
and remove the Pc and v variables, in the interest of speed and simplicity, but that's a minor point. (For that matter, the variable P can be removed too).
Tim appears to have looked at the first version of the code. The author has since fixed the problems I noted, including the use of inv.
There is also now error checking, plus an H1 line, plus preallocation for U, etc.
As for the issue of sparse A, my guess is that this code will be slow anyway. So if A is large enough that sparse storage is necessary, then get a cup of coffee.
John is right ... no one should ever multiply by the inverse. You should replace the inv-based computation of P with the backslash form:
P = (A*U) * (((A*U)'*(A*U)) \ (A*U)') ;
Multiplying by the inverse is shockingly common in files posted at the File Exchange. It should never be done. Even if A is diagonal, A\b is better than inv(A)*b.
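The same advice holds outside MATLAB; in numpy terms (an illustrative sketch), prefer np.linalg.solve(A, b) over np.linalg.inv(A) @ b, since the solve factorizes once and avoids the extra error and cost of forming the inverse:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)  # well conditioned here
b = rng.standard_normal(4)

x_solve = np.linalg.solve(A, b)   # preferred: one factorization, no inverse
x_inv = np.linalg.inv(A) @ b      # discouraged: forms the explicit inverse

print(np.allclose(x_solve, x_inv))  # agree here, but solve is the safer habit
```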
Also, if you'd like to work with sparse W and A, you should replace the line I = eye (size (A)) ; with:

if (issparse (A))
    I = speye (size (A)) ;
else
    I = eye (size (A)) ;
end
My guess is that if W and A are sparse then U will probably be all nonzero anyway, though, so the above probably isn't helpful. If you pass sparse W and A, then U will be returned full, with the current version of the code.
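For reference, a scipy sketch of the same issparse/speye pattern (test data here is made up):

```python
import numpy as np
import scipy.sparse as sp

n = 5
A = sp.random(n, n, density=0.2, random_state=0, format='csr')

# Analogue of MATLAB's speye(size(A)): a sparse identity when A is sparse.
I = sp.identity(n, format='csr') if sp.issparse(A) else np.eye(n)
print(I.nnz)  # 5 stored entries, rather than n*n dense values
```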
What is the statement v=W(:,k) doing inside the for loop? You don't use v later on at all, but you do use W(:,k) itself.
I would also recommend replacing the "disp" with "warning".
This is not bad code, but it has some problems.
What do I like? It has reasonably good help. I was pleased to see an explanation of A-conjugacy in the help, as well as a list of some places where you might use this code.
What were the problems? The code lacks any serious error checking, so if A and W are incompatible in shape, the code will not help you. One flaw in the help is that nowhere does it tell you that A and W must be compatible for matrix multiplication, or what shapes they must have.
This code lacks an H1 line, so that a year from now when you want to find that code you downloaded to solve exactly this problem, lookfor will be of no use at all to you.
The algorithm itself is fairly standard, but the author slipped there too, using inv to solve a problem that \ would do better. Just because the formula in your book shows a matrix inverse does not mean you should use one.
Finally, the author fails to preallocate the matrix U, then grows the array internally in a loop. This is very inefficient when large matrices are grown, yet it would have been trivially simple to preallocate the array.
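The preallocation point carries over to numpy as well. A toy sketch: growing an array column-by-column copies everything on each step, while a preallocated array is written in place (same result, very different cost for large matrices):

```python
import numpy as np

n, m = 4, 3
cols = [np.arange(n) * (k + 1.0) for k in range(m)]

# Growing in a loop: each hstack reallocates and copies all prior columns.
U_grow = np.empty((n, 0))
for c in cols:
    U_grow = np.hstack([U_grow, c[:, None]])

# Preallocated: one allocation, then in-place column writes.
U_pre = np.empty((n, m))
for k, c in enumerate(cols):
    U_pre[:, k] = c

print(np.array_equal(U_grow, U_pre))  # True
```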
Minor points: while the author did provide internal comments, it appears he got lazy, and the comments disappear towards the end of the code. I'd also recommend more white space in the code. Blank lines are free, and they improve readability; readable code is better code.
As usual, if these problems are repaired, I will happily re-evaluate this code.