How to efficiently solve a matrix equation when the matrix is updated?

Dear All,
I need to solve a matrix linear equation: Ax = b. I know the fastest way to solve for x is x = A\b. My question is: for an updated matrix A1 = A + deltaA, how can I make use of x = A\b to solve the updated equation A1x = b? The size of A is 5000 x 5000, and there are only a few non-zero elements in deltaA.
Thanks a lot.
Benson

7 Comments

Hi, Bruno,
Thanks for your reply.
Yes, I tried the Sherman–Morrison formula, but I found it much slower than the \ operator. That is, directly using A1\b is much faster than using the Sherman–Morrison formula. Because I need to run A1\b, A2\b, ..., A1000\b, I want to reduce the time of each Ai\b. If I can make use of x = A\b, it should be faster, but how?
Best regards,
Benson
Try linsolve - it may be faster than A\b; just look at the example I linked.
"Yes, I tried Sherman–Morrison formula. But I found it much slower than \ operator"
It depends on what you assume to be known (by precomputing) and HOW you perform the update. Please post your code when you make such a claim, so we know exactly what you are doing.
In my test, the Sherman–Morrison formula is much faster (5000-fold) when a single element is modified. If you have many modified elements, the time scales up as the square of the number of elements.
n = 5000;
A0 = rand(n)+eye(n);
b = rand(n,1);
% Assume the inverse is precomputed
iA = inv(A0);
x0 = A0\b; % iA*b
% A1 := Update matrix A0 with a single element
A1 = A0;
i0=2; j0=4; dAij0=10;
A1(i0,j0) = A0(i0,j0)+dAij0;
% Direct compute
tic
x1=A1\b;
toc % Elapsed time is 1.191863 seconds.
% Use Sherman–Morrison_formula
tic
iAi = iA(:,i0); % inv(A0)*e_i0, the i0-th column of the precomputed inverse
x1 = x0 - (dAij0*x0(j0)/(1+dAij0*iAi(j0)))*iAi;
% You need to update iA(:,i1) ... for the next modified elements if any.
toc % Elapsed time is 0.000168 seconds.
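For readers who want to check the algebra behind Bruno's MATLAB snippet, here is a rough translation into Python/NumPy (a sketch, not part of the original thread; the size is reduced to 500 so the check runs quickly). A single-element change A1(i0,j0) = A0(i0,j0) + dA is the rank-1 update A0 + dA·e_i0·e_j0', so Sherman–Morrison gives the updated solution from x0 and one column of the precomputed inverse:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # smaller than the 5000x5000 case in the thread, for a quick check
A0 = rng.random((n, n)) + np.eye(n)
b = rng.random(n)

# Precompute the inverse and the base solution once (the expensive step)
iA = np.linalg.inv(A0)
x0 = np.linalg.solve(A0, b)

# Update a single element: A1 = A0 + dA * e_i0 @ e_j0.T
i0, j0, dA = 2, 4, 10.0
A1 = A0.copy()
A1[i0, j0] += dA

# Sherman-Morrison applied to b:
#   x1 = x0 - (dA * x0[j0] / (1 + dA * iA[j0, i0])) * iA[:, i0]
iAi = iA[:, i0]  # inv(A0) @ e_i0
x1 = x0 - (dA * x0[j0] / (1.0 + dA * iAi[j0])) * iAi

# Verify against a direct solve of the updated system
x1_direct = np.linalg.solve(A1, b)
print(np.allclose(x1, x1_direct))
```

The update costs O(n) once iA is available, versus O(n^3) for a fresh solve, which is where the large speedup comes from.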
Hi, Bruno,
Thanks a lot for your demo. It sounds very promising. I will try to use Sherman–Morrison formula to solve my problem.
Best regards,
Benson
Hi, Stephen,
Thanks for your suggestions. I tried linsolve. For Bruno's example, it took 0.92 s while \ took 0.98 s. It really is faster. Thanks a lot again.
Benson
Hi, Stephen,
I found linsolve cannot handle sparse matrices; I have to convert my sparse matrix into a full matrix first. With that conversion, linsolve is much slower than \ for sparse matrices.
Do you know how to handle this issue?
Thanks a lot.
Benson


Answers (1)

The problem is, it very much depends. If you can write the change in A as a rank-1 update, AND you write the code well, then you can probably gain by using the infamous Sherman-Morrison formula. It will depend on the sparsity of the problem. Updates to a sparse matrix itself can be slow, so that alone can be a significant cost in time if new elements are created.
However, you state that the changes are few, but you do NOT state that they can be written in the form of a rank-1 update. If that is not the case, then you won't gain much from a rank-1 update formula; instead you would need Sherman-Morrison-Woodbury, a generalization. Where the break-even point in efficiency lies is difficult to know. Again, it will be problem dependent: it will depend on the sparsity of your matrices, and on their structure.
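To make the generalization concrete, here is a hedged Python/NumPy sketch (not from the original thread) of the Sherman-Morrison-Woodbury identity for k changed elements. Each change A(i,j) += d is one rank-1 term, so the whole update is A + U V' with skinny n-by-k factors, and only a small k-by-k system has to be solved:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 400, 3  # k changed elements, written together as a rank-k update
A = rng.random((n, n)) + n * np.eye(n)  # diagonally loaded so the demo is well conditioned
b = rng.random(n)

# Precomputed once, as in the rank-1 case
iA = np.linalg.inv(A)
x0 = iA @ b

# Changed elements (i, j, delta): A1 = A + U @ V.T where
# column m of U is d_m * e_{i_m} and column m of V is e_{j_m}
changes = [(2, 4, 10.0), (7, 7, -3.0), (5, 1, 2.5)]
U = np.zeros((n, k))
V = np.zeros((n, k))
for m, (i, j, d) in enumerate(changes):
    U[i, m] = d
    V[j, m] = 1.0

# Woodbury: (A + U V')^{-1} b = x0 - iA U (I_k + V' iA U)^{-1} V' x0
iAU = iA @ U                      # n x k
S = np.eye(k) + V.T @ iAU         # small k x k capacitance matrix
x1 = x0 - iAU @ np.linalg.solve(S, V.T @ x0)

# Verify against a direct solve of the updated system
A1 = A + U @ V.T
x1_direct = np.linalg.solve(A1, b)
print(np.allclose(x1, x1_direct))
```

The dominant cost is the k matrix-vector products in iA @ U, which is why the time scales with the number of modified elements, as noted in the comments above.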
You may well be better off NOT computing the matrix inverse at all, instead computing a factorization of the matrix.
For example, consider this matrix:
n = 1000;
A = sparse(eye(n) + diag(rand(n-1,1),1) + diag(rand(n-1,1),-1));
A is tridiagonal. But the inverse?
spy(inv(A))
The inverse of A is full. And that will make it very slow to compute for large matrices.
However, if we look at an LU factorization of A, both factors are very well behaved, and still very sparse.
[L,U] = lu(A);
spy(L)
spy(U)
It is almost always the case that you should at least consider NOT computing a matrix inverse, as there are almost always better alternatives.
timeit(@() inv(A))
ans = 0.0136
timeit(@() lu(A))
ans = 1.8138e-04
And even for a matrix as small as 1Kx1K, the LU is way faster. The greater speed of the sparse LU may make it fast enough that you won't even bother with an update at all. And for much of what you may do that needs the matrix inverse, you can often use an LU factorization just as easily. (We have not been told why you THINK you need the inverse, only that you do. Many times, the real solution to a problem is to understand why you think you need a particular tool, so that someone can point you at a better one.)
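The same factor-once, solve-many pattern can be sketched outside MATLAB too; here is an illustrative Python/SciPy version (an assumption-laden sketch, not part of the original answer) using a tridiagonal sparse matrix like the one above and scipy.sparse.linalg.splu:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 1000
rng = np.random.default_rng(2)
# Tridiagonal sparse matrix, analogous to the MATLAB example above
A = sp.diags([rng.random(n - 1), np.ones(n), rng.random(n - 1)],
             offsets=[-1, 0, 1], format="csc")  # splu wants CSC format
b = rng.random(n)

# Factor once; the L and U factors stay sparse for this structure.
lu = splu(A)

# Reuse the factorization for as many right-hand sides as needed,
# never forming inv(A) (which would be dense).
x = lu.solve(b)
print(np.allclose(A @ x, b))
```

Each additional lu.solve(b_new) reuses the triangular factors, which is the practical replacement for "multiplying by the inverse" in repeated-solve workloads like the A1\b, ..., A1000\b loop described in the question.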
My point is, while you may think you need to compute the inverse, understanding linear algebra will help you in many cases to not need to do so at all.

Asked: 31 Jan 2021
Edited: 24 Mar 2021
