Generalized Eigenvalue Problem - Hessenberg Matrix

Greetings,
I am dealing with a generalized eigenvalue problem that I need to solve at every time-step.
Although I use the MATLAB function eigs, the computation is still expensive, since I need to generate a large sparse matrix A and apply boundary conditions.
Now I am trying to implement the Arnoldi iteration; this way I do not have to create the matrix A, since we are only interested in the action of A on a vector v.
My questions are:
1) Will I be treating the Hessenberg matrix H as my new A, i.e. applying boundary conditions to it and using it to solve for the desired eigenvalues and eigenvectors? As far as I know, H is a projection of A and its eigenvalues are related to the eigenvalues of A.
2) If I do have to solve for H, can I use gsvd to target a specific eigenvalue and its corresponding eigenvector, as I do with eigs? (H is not a square matrix, so I cannot use eig or eigs on it.)
3) Lastly, how do I deal with the fact that the eigenvectors of H are smaller than my initial matrix A and the eigenvectors obtained from it?
Any insight regarding any of my questions is highly appreciated.
Jack

Accepted Answer

Christine Tobler on 25 Nov 2020
I agree with Steve Lord; if you're able to replicate the action of matrix A on a vector v, you can pass this to EIGS as a function handle and it will give the correct result. Note that this assumes you aren't computing eigenvalues that have smallest absolute value or that are closest to some shift (see the doc for details on that).
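For example, the call would look roughly like this (just a sketch; applyA is a placeholder for whatever routine applies A to a vector without assembling A, and B would still be passed as a sparse matrix):
n = size(B, 1);                               % problem size
Afun = @(v) applyA(v);                        % placeholder: returns A*v without forming A
[U, D] = eigs(Afun, n, B, 1, 'largestabs');   % largest-magnitude eigenpair of A*x = lambda*B*x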
Is your matrix A changing with each timestep? Also, what sigma are you passing to EIGS?
Some details based on your comments:
  • The Arnoldi iteration you are proposing is very similar to what is going on internally inside of EIGS (use edit eigs to take a look at the details if you're interested - the method used here is a newer version based on similar principles, the Krylov-Schur method).
  • There's no room for using the GSVD on the Hessenberg matrix here. While GSVD is a generalization of SVD, and generalized eigenvalue problems are a generalization of standard ones, those two generalizations don't really map onto each other. The Arnoldi iteration can be written so that H is (k+1)-by-k, but the inner eigenproblem to be solved is then just the square matrix H(1:k, :) (a minimal sketch follows below).
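For reference, here's a minimal matrix-free Arnoldi sketch for the standard problem (B = I); applyA stands for whatever routine applies A to a vector and v0 is some starting vector of the full problem size, both of them placeholders rather than anything from your code:
k = 20;                          % dimension of the Krylov subspace
n = length(v0);                  % full problem size
Q = zeros(n, k+1);               % orthonormal basis of the Krylov subspace
H = zeros(k+1, k);               % (k+1)-by-k Hessenberg projection of A
Q(:,1) = v0 / norm(v0);
for j = 1:k
    w = applyA(Q(:,j));          % only the action of A on a vector is needed
    for i = 1:j                  % modified Gram-Schmidt orthogonalization
        H(i,j) = Q(:,i)' * w;
        w = w - H(i,j) * Q(:,i);
    end
    H(j+1,j) = norm(w);          % (no breakdown handling in this sketch)
    Q(:,j+1) = w / H(j+1,j);
end
[Y, Theta] = eig(H(1:k, 1:k));   % Ritz values: approximate eigenvalues of A
ritzVectors = Q(:, 1:k) * Y;     % approximate eigenvectors of A, of full length n
The last line also answers the question about the eigenvector sizes: the eigenvectors of H(1:k, 1:k) live in the small k-dimensional space, and multiplying by Q(:, 1:k) maps them back to vectors of the original length n.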
  2 Comments
Jack A.M. on 25 Nov 2020
Hi Christine,
So what you and Steve are saying is that there is no need for me to implement the Arnoldi iteration myself; instead, I should pass the action of A on a vector to eigs as a function handle?
Matrix A is changing with each timestep, but the changes are usually small, so I have a good guess of what the eigenvalue and eigenvector are. I am targeting the smallest eigenvalue (nearest to zero) and its corresponding eigenvector.
Thanks.
Christine Tobler on 25 Nov 2020
Hi Jack,
So if you're looking for the smallest eigenvalue, EIGS uses an Arnoldi-like algorithm underneath, but that kind of algorithm converges towards the eigenvalues of largest magnitude.
The way this is handled is that EIGS applies the Arnoldi-like algorithm to the inverse of A, since the smallest eigenvalue of A is matched with the largest eigenvalue of A's inverse. I would expect this is where most of the time in your algorithm is currently spent: After you've built a new matrix A, EIGS will first compute an LU factorization of that matrix, so that it can solve a linear system with A and a vector, instead of multiplying A with that vector. You could double-check that this is true by using the profiler.
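For example, something like the following works; just wrap a few representative timesteps of your existing code:
profile on
% ... run a few representative timesteps of the existing code here ...
profile viewer   % the report shows how much time goes into the factorization inside eigs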
I see two options that might improve performance - how well either works will depend a lot on your specific matrices, for example how much they change between iterations and how well conditioned they are. I'd suggest trying (1) first, since it's much easier and has fewer parameters to tune.
1) If you expect all eigenvalues of A to have positive real part, you could call EIGS as follows:
[U, D] = eigs(A, B, 1, "smallestreal", "StartVector", previousEigenVector);
While 'smallestreal' usually has slower convergence, it doesn't require A to be factorized (which I'm assuming is the current bottleneck). By using the previously found eigenvector of a very similar matrix A, a good start should already be given, so convergence speed is less of an issue.
2) Pass a function handle to eigs that solves a linear system with A, but try to avoid factorizing A in every timestep.
Okay, first here's the code you would use if you factorize A in every timestep (assembleA just stands for whatever routine builds your new matrix):
for step = 1:numTimesteps            % loop over the timesteps
    A = assembleA(step);             % compute the new A for this timestep
    decA = decomposition(A);         % factorize A once per timestep
    [U, D] = eigs(@(b) decA\b, size(A, 1), B, 1, 'smallestabs', 'StartVector', previousEigenVector);
    previousEigenVector = U;         % warm-start the next timestep
end
The decomposition command returns an object that's able to efficiently solve a linear system with the given matrix (it's doing LU internally and then using those factors when backslash is called on it).
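As a small illustration (with a sparse matrix A and a couple of right-hand sides b1 and b2 as placeholders), the factorization happens once and every backslash reuses it:
decA = decomposition(A);   % factorizes A once (LU for a general square matrix)
x1 = decA \ b1;            % reuses the stored factors
x2 = decA \ b2;            % no refactorization here either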
So great — at this point, though, we've just pulled outside what EIGS was doing internally. The decomposition call will now probably take most of the time.
Here's where we try to make a function handle that solves a linear system when it only has a factorization of a similar matrix available:
A0 = assembleA(1);                 % first matrix in the iteration, or possibly a middle one -
                                   % a matrix that's pretty close to the various A matrices that will occur
decA0 = decomposition(A0);         % factorize this reference matrix only once
for step = 1:numTimesteps          % loop over the timesteps
    A = assembleA(step);           % compute the new A for this timestep (no factorization)
    fhandle = @(b) gmres(A, b, 10, 1e-5, 100, @(x) decA0\x);   % solve A*x = b, preconditioned by decA0
    [U, D] = eigs(fhandle, size(A, 1), B, 1, 'smallestabs', 'StartVector', previousEigenVector);
    previousEigenVector = U;       % warm-start the next timestep
end
I'm kind of shooting blindly with the code here, because iterative methods such as gmres have a lot of parameters that usually have to be tuned (see the doc for details, although in practice this tends to be trial and error).
The basic idea here is: EIGS needs a function handle that can solve a linear system with A, so let's focus on just that. To solve a linear system with a matrix A, if we have a helper that can solve a linear system with A0 which is close to A, we can use A0 as a preconditioner in an iterative solver.
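Since EIGS relies on those inner linear solves being accurate, it may also be worth checking the GMRES convergence flag inside the function handle; here's a rough sketch along the same lines (solveWithCheck is just a hypothetical local function name):
fhandle = @(b) solveWithCheck(A, b, decA0);

function x = solveWithCheck(A, b, decA0)
% Solve A*x = b with GMRES, preconditioned by the factorization of the nearby matrix A0.
[x, flag, relres] = gmres(A, b, 10, 1e-5, 100, @(y) decA0\y);
if flag ~= 0
    warning('gmres did not converge (flag = %d, relres = %g) - consider refactorizing.', flag, relres);
end
end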


More Answers (1)

Steven Lord on 25 Nov 2020
It sounds like you might want to use the syntax of eigs whose first input is Afun, a function handle that returns the result of certain operations based on the other parameters. See the section of the documentation page for eigs that discusses the Afun input for more information.
