Code covered by the BSD License  

Highlights from
LSMR: An iterative algorithm for least-squares problems

5.0 | 3 ratings | 30 Downloads (last 30 days) | File Size: 7.34 KB | File ID: #27183

LSMR: An iterative algorithm for least-squares problems


07 Apr 2010 (Updated )

Official successor to the LSQR algorithm, developed by David Fong and Michael Saunders.


File Information
Description

An iterative method is presented for solving linear systems and linear least-squares problems. The method is based on the Golub-Kahan bidiagonalization process and is analytically equivalent to the standard method of MINRES applied to the normal equations. Compared with LSQR, it is safer to terminate LSMR early.

Details about LSMR can be found at:
http://www.stanford.edu/group/SOL/software/lsmr.html
http://www.stanford.edu/~clfong/lsmr.html
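For readers who want to experiment with the algorithm outside MATLAB, SciPy ships an LSMR implementation with the same atol/btol/damp interface (`scipy.sparse.linalg.lsmr`); a minimal sketch on a small sparse least-squares problem:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsmr

rng = np.random.default_rng(0)
A = sparse_random(100, 20, density=0.2, random_state=rng)  # tall sparse matrix
x_true = rng.standard_normal(20)
b = A @ x_true  # consistent right-hand side

# lsmr returns the solution plus diagnostics: stopping reason, iteration
# count, residual norms, and estimates of ||A|| and cond(A)
x, istop, itn, normr, normar, norma, conda, normx = lsmr(A, b, atol=1e-10, btol=1e-10)
print(f"stopped with istop={istop} after {itn} iterations, ||r|| = {normr:.2e}")
```

Since b lies in the range of A here, the final residual norm should come out at roughly the level of the tolerances.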

MATLAB release MATLAB 7.8 (R2009a)
Comments and Ratings (10)
05 Jan 2012 Zahra Ghanian

I'm wondering how you tested the LSMR algorithm on sparse matrices. Please let me know.

I tested one matrix in MATLAB (produced by mesh2d2.m) from this site: http://www.cise.ufl.edu/research/sparse
and the result was not good at all.

28 Sep 2011 Michael Völker

Fantastic! I would only like to pass an initial guess to the function, so I changed the interface to lsmr(..., x0) and in the code I put:

if ~isempty(x0)
    x = x0; clear x0
else
    x = zeros(n,1);
end

Very useful code, thank you!
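For what it's worth, SciPy's LSMR exposes exactly this kind of warm start as an `x0` keyword (internally it solves the shifted problem for dx = x - x0); a sketch of the effect:

```python
import numpy as np
from scipy.sparse.linalg import lsmr

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 30))
b = rng.standard_normal(200)

# Cold start from zero
x_cold, _, itn_cold, *_ = lsmr(A, b, atol=1e-10, btol=1e-10)

# Warm start from a previous (here: slightly perturbed) solution
x0 = x_cold + 1e-8 * rng.standard_normal(30)
x_warm, _, itn_warm, *_ = lsmr(A, b, atol=1e-10, btol=1e-10, x0=x0)

print(f"cold start: {itn_cold} iterations, warm start: {itn_warm} iterations")
```

Starting near the solution shrinks the initial residual of the shifted problem, so the warm-started run should stop in no more iterations than the cold start.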

13 Aug 2010 Rational Adjustment

Hi David, I look forward to reading your upcoming paper on this new method. I am glad you have both *.m and *.f90 versions, though I haven't tried the f90 yet (I prefer the simplicity of f77). I was very glad to see that you included the matrix condition number, among other things, as output (the MATLAB version of lsqr doesn't do this).

Any chance you can get LSMR to estimate the rank of A? What about calculating the resolution and a posteriori covariance matrices, which are super important to people who use these large matrix solvers for practical work? Years ago some teams came up with ways to use LSQR to estimate the resolution and covariance matrices (such as Nolet et al., Geophysical Journal International, 138, 36-44, 1999), but I am not sure whether anything better has come out since. If you had a version that calculated/estimated the rank and the resolution and covariance matrices, you would be a hero.

I ran this on a ~400,000 x 90,000 matrix with 21,000,000 non-zero elements:

LSQR = 100 seconds with 596 iterations
LSMR = 68 seconds with 412 iterations
same tolerance criteria and damp = 0.
no reorthogonalization applied
cond(A) = 10

The LSMR solution seems a bit under-developed around the edges (smallest singular values) as compared to LSQR. Perhaps a few more iterations of LSMR could be carried out to get an LSQR-equivalent result (the LSMR result looks like an LSQR result with fewer iterations).

12 Apr 2010 John D'Errico

My thanks for fixing this. I like the explanation you have added for atol and btol.

My rating has now risen to where it belongs for this code - 5 stars.

10 Apr 2010 David

Thank you for the questions. I agree that was a bit unclear; more docs have been added. Note that atol and btol are not new in LSMR; they are part of the LSQR algorithm (see LSQR: An algorithm for sparse linear equations and sparse least squares, TOMS 8(1), 43-71, 1982). It's just that the MathWorks people skipped them when implementing LSQR. They are available in the Systems Optimization Laboratory version of LSQR if you are interested.
http://www.stanford.edu/group/SOL/software/lsqr.html

08 Apr 2010 John D'Errico

I'm pretty happy with this code. It produces results efficiently for large sparse systems, generally a bit faster than did lsqr on the same random systems. (The reduction in time was typically in the 10-50% range over lsqr in my tests on some random matrices.)

One more question. In the help, I find this explanation of atol and btol:

atol, btol are stopping tolerances. If both are 1.0e-6 (say),
the final residual norm should be accurate to about 6 digits.
(The final x will usually have fewer correct digits,
depending on cond(A) and the size of damp.) If atol or btol
is [], a default value of 1.0e-6 is used.

The problem is that nowhere does it tell me how to use them. What do they control? Yes, smaller values will probably make it run longer, with tighter accuracy on the final result. But is one of them a relative tolerance and one an absolute tolerance? What are they?
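For reference, in Paige and Saunders' LSQR (and hence in LSMR) these are combined relative tolerances in two stopping tests: for compatible systems, stop when ||r|| <= btol*||b|| + atol*||A||*||x||; for least-squares problems, stop when ||A'r|| <= atol*||A||*||r||. The effect is easy to see in SciPy's lsmr, which takes the same parameters (a sketch; iteration counts will vary with the random data):

```python
import numpy as np
from scipy.sparse.linalg import lsmr

rng = np.random.default_rng(2)
A = rng.standard_normal((300, 40))
b = rng.standard_normal(300)  # inconsistent: a genuine least-squares problem

# Tightening atol/btol makes the solver run longer for a more accurate x
iters = []
for tol in (1e-2, 1e-6, 1e-10):
    x, istop, itn, normr, normar, *_ = lsmr(A, b, atol=tol, btol=tol)
    iters.append(itn)
    print(f"tol={tol:g}: {itn} iterations, ||A'r|| = {normar:.2e}")
```

Because the iterates are identical for a given problem and only the stopping test changes, a looser tolerance can never take more iterations than a tighter one.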

08 Apr 2010 John D'Errico

Ok. Much better, as I can now appreciate the help. An H1 line. Starting to test it now, I find the following line that causes a crash.

itnlim = min(m,n,20);

I assume that you wanted to find the smallest of the numbers [m,n,20]. However, min does not work that way in MATLAB. If you wanted that result, try this instead:

itnlim = min([m,n,20]);

If I make that change, lsmr now runs with all default parameters. I'll keep playing for a little while before I upgrade my rating.

08 Apr 2010 David

Thank you so much for your valuable comments. I've updated the docs and default parameters. The new file should appear after review by MathWorks. Please let me know if you come up with more suggestions.

07 Apr 2010 John D'Errico

No defaults are supplied for the parameters (i.e., the parameters we aren't told how to use anyway). So while the authors claim that this is an effective replacement for lsqr, it is not. A quick test points this out...

x = lsmr(A,b);
??? Input argument "show" is undefined.

Error in ==> lsmr at 85
if show

What documentation is provided is confusing, not terribly well written. If anything, my initial rating seems perhaps too high. I'll hope the authors are willing to improve this, as it might be useful.

07 Apr 2010 John D'Errico

Possibly good code, but poor help. Why do I say poor? There is no H1 line. This is what lookfor uses to help you remember the name of this code next month or next year, when you don't recall the letters lsmr.

An H1 line should be a good one line description of the code. It should include good keywords that would logically be chosen to find this code in a search. The H1 line should be the VERY first line of help in the help block.

Next, in order to learn how to use this code, you need to EDIT the code. The author put a blank line in the middle of the help. This causes help to not display the entire help block. So if you want to know what the parameters mean, you need to edit the blasted code.

I'll happily review the code more deeply, hopefully improving my review, if these flaws are repaired.

Updates
08 Apr 2010

Updated H1 line, some documentation, and default parameters.

08 Apr 2010

Bug fix for the default value of itnlim.

10 Apr 2010

Better formatting of printout.
Updates to documentation.

14 Apr 2010

Updated documentation to MATLAB style.
Added testing code.

03 Jun 2010

Added the option to use local or full reorthogonalization on the v_k vectors. This reduces the number of iterations to convergence by using extra memory to store some of the v_k's.
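In exact arithmetic the Golub-Kahan vectors v_k are already orthonormal, but in floating point they drift; reorthogonalization re-projects each new v_k against the stored ones, trading memory for fewer iterations. A sketch of the idea (modified Gram-Schmidt against a window of stored vectors; an illustration of the technique, not the file's actual implementation):

```python
import numpy as np

def reorthogonalize(v, stored):
    """Remove components of v along previously stored unit vectors
    (modified Gram-Schmidt), then renormalize. `stored` holds the last
    few v_k's for local reorthogonalization, or all of them for full."""
    for u in stored:
        v = v - (u @ v) * u
    return v / np.linalg.norm(v)

# Tiny demonstration: build three mutually orthonormal vectors
rng = np.random.default_rng(3)
stored = []
for _ in range(3):
    stored.append(reorthogonalize(rng.standard_normal(10), stored))
V = np.column_stack(stored)
print(np.allclose(V.T @ V, np.eye(3)))  # prints True: columns are orthonormal
```

Local reorthogonalization keeps only a short window in `stored`, which is why it uses "extra memory to store some of the v_k's" rather than all of them.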

10 Mar 2011

Fixed a bug in local reorthogonalization where the first V vector was stored twice. (Suggested by David Gleich.)
