I've been tasked with finding the eigenvalues and eigenvectors of any n x n matrix without using any hard functions like eig(), det(), etc. Simple functions like length() and randi() are permitted.

All I have right now is the code to create a random n x n matrix and make it symmetric:

n = randi([2,10]);       % random size between 2 and 10
A = randi([-10,10],n);   % random integer entries
A = (A + A.')/2;         % average with the transpose to symmetrize

I genuinely don't know how to go about doing this. Previous similar assignments used a for loop, so I figured that would be the way to go. I also tried something along the lines of

syms x
lambda = A - x*eye(n)

in order to get something resembling the characteristic equation, but it's essentially unsolvable without the usage of det().

What I do have available to me is previous code I wrote that solves any n x n linear system using Gauss-Jordan elimination, but that's about it.

Any hints?

James Tursa
on 9 Apr 2021

Note that roots( ) works by forming the companion matrix of the polynomial and calling eig( ) on it in the background, so it won't help you avoid eig( ).
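A quick way to see this (a small illustration of mine, not part of the thread; compan builds the companion matrix of a coefficient vector):

```matlab
p = [1 -6 11 -6];   % (x-1)(x-2)(x-3), so the roots are 1, 2, 3
C = compan(p);      % 3x3 companion matrix of p
sort(eig(C))        % returns [1; 2; 3], exactly what roots(p) computes
```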

Matt J
on 8 Apr 2021

Edited: Matt J
on 8 Apr 2021

"What I do have available to me is a previous code I wrote that would solve any n x n matrix using Gauss-Jordan elimination but that's about it. Any hints?"

I think the intention of the assignment may be for you to re-purpose the Gauss-Jordan elimination code that you wrote previously to create your own implementation of det(). Using the elimination steps, you can convert the original matrix to a diagonal matrix whose determinant is easy to compute. You would keep track of the elementary row operations done in your Gaussian elimination code to relate that determinant back to the determinant of your original matrix.
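A sketch of that idea (my own illustration, not the poster's existing code): reduce A to upper triangular form while tracking row swaps, then take the product of the pivots. Row-replacement operations leave the determinant unchanged, and each row swap flips its sign.

```matlab
function d = mydet(A)
% Determinant via Gaussian elimination with partial pivoting.
n = size(A,1);
d = 1;
for k = 1:n-1
    [~,p] = max(abs(A(k:n,k)));       % best pivot in column k
    p = p + k - 1;
    if A(p,k) == 0
        d = 0;                        % no pivot: matrix is singular
        return
    end
    if p ~= k
        A([k p],:) = A([p k],:);      % row swap flips the sign of det
        d = -d;
    end
    for i = k+1:n
        A(i,:) = A(i,:) - (A(i,k)/A(k,k))*A(k,:);  % leaves det unchanged
    end
end
d = d*prod(diag(A));                  % det of a triangular matrix
end
```

For example, mydet([2 1; 1 3]) returns 5, matching det([2 1; 1 3]).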

Once you can compute determinants, you can sample the characteristic polynomial at n+1 points and fit it with polyfit, if you're allowed to use that. If you're not allowed to use polyfit, it's simple enough to write your own...

This approach would not be stable for large matrix orders, n, but I'm assuming a fully professional implementation is not the goal here...
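To make the sampling idea concrete, here is a minimal sketch (using MATLAB's built-in det and polyfit as stand-ins for the hand-rolled versions the assignment would require):

```matlab
A = [2 1; 1 2];                              % small symmetric example
n = size(A,1);
xs = 0:n;                                    % any n+1 distinct sample points
ps = arrayfun(@(x) det(A - x*eye(n)), xs);   % sample det(A - x*I)
c  = polyfit(xs, ps, n);                     % exact fit: degree n, n+1 points
% c is [1 -4 3], i.e. x^2 - 4x + 3; its roots 1 and 3 are the eigenvalues
```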

John D'Errico
on 8 Apr 2021

Edited: John D'Errico
on 9 Apr 2021

Simpler than what Matt has suggested is to just use matrix multiplication, coupled with deflation.

That is, can you find the LARGEST magnitude eigenvalue? Yes. That is easy, as long as no other eigenvalue has the same magnitude.

1. Choose some random vector. Call it V0. Scale V0 to have unit norm, so V0 = V0/norm(V0).
2. In a loop, compute V1 = A*V0 and V1norm = norm(V1). If V0 is an eigenvector of A, then V1norm is the magnitude of the associated eigenvalue. If V0 has components along several eigenvectors, repeating this process tends to bring the largest-magnitude eigenvalue/eigenvector pair to the forefront.
3. Replace V0 = V1/V1norm.
4. Return to step 2 until the value of V1norm converges. The result is the magnitude of the largest eigenvalue of A.

Next, perform deflation, to kill off that eigenvalue/eigenvector pair. This is done as:

5. A = A - V1norm * V0 * V0';

6. Choose some new random vector V0. Scale V0 to have unit norm as you did in step 1.

7. and then return to the looping process in steps 2,3,4.

The code below does the above, with one change: as written, the pseudo-code has a subtle problem with negative eigenvalues (norm(V1) is always positive), so the code estimates the eigenvalue from a signed component ratio instead.

A = randn(3);
A = A + A';                        % symmetrize so the eigenvalues are real
eig(A)                             % for comparison only

V0 = randn(size(A,2),1);
V0 = V0/norm(V0);                  % random unit starting vector
lastval = inf;
val = 0;
valtol = 1.e-8;
iter = 0;
while abs(abs(val) - abs(lastval)) > valtol
    lastval = val;
    iter = iter + 1;
    V1 = A*V0;
    [~,maxel] = max(abs(V0));
    val = V1(maxel)/V0(maxel);     % signed ratio, so negative eigenvalues work
    V0 = V1/norm(V1);
end
iter
val
V0

So 101 iterations were required to find the largest eigenvalue of A. It may happen to be a negative number, but that is ok.

A2 = A - val*V0*V0'
eig(A2)

See that A2 has the same eigenvalues as A, except the largest magnitude one has been killed off (replaced by zero). Now you can repeat the above process until you have found all three eigenvalues and eigenvectors.

This scheme runs into trouble only when two or more eigenvalues share the same magnitude. The scheme I have described is often called the power method, combined with deflation.
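Wrapping John's steps in an outer loop gives the full power-method-with-deflation procedure. A sketch (my own code, not John's; it assumes the eigenvalue magnitudes are distinct, and uses a fixed symmetric test matrix for illustration):

```matlab
A = [5 1 0; 1 3 1; 0 1 1];        % symmetric; eigenvalues 3+sqrt(6), 3, 3-sqrt(6)
n = size(A,1);
B = A;                            % working copy, deflated as we go
vals = zeros(n,1);
vecs = zeros(n,n);
for k = 1:n
    V0 = randn(n,1); V0 = V0/norm(V0);
    val = 0; lastval = inf;
    while abs(abs(val) - abs(lastval)) > 1e-10
        lastval = val;
        V1 = B*V0;
        [~,m] = max(abs(V0));
        val = V1(m)/V0(m);        % signed eigenvalue estimate
        V0 = V1/norm(V1);
    end
    vals(k) = val;
    vecs(:,k) = V0;
    B = B - val*(V0*V0');         % deflate: kill off the pair just found
end
sort(vals)                        % agrees with sort(eig(A))
```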

James Tursa
on 9 Apr 2021

Here is an interesting eigenvalue/eigenvector article I just ran across this evening:
