LMFnlsq - Solution of nonlinear least squares

by Miroslav Balda

15 Nov 2007 (Updated)

Efficient and stable Levenberg-Marquardt-Fletcher method for solving nonlinear equations


Description

The function LMFnlsq.m finds the optimal solution of an overdetermined system of nonlinear equations in the least-squares sense. The standard Levenberg-Marquardt algorithm was modified by Fletcher and coded in FORTRAN many years ago (see the Reference). This version of LMFnlsq is a complete MATLAB implementation, complemented by iteration parameters that can be set as options. This part of the code has been strongly influenced by Duane Hanselman's function mmfsolve.m.

Calling the function is rather simple; use one of the following forms:
    LMFnlsq % for help output
    Options = LMFnlsq('default');
    Options = LMFnlsq(Name1,Value1,Name2,Value2,...);
    x = LMFnlsq(Eqns,X0);
    x = LMFnlsq(Eqns,X0,'Name',Value,...);
    x = LMFnlsq(Eqns,X0,Options);
    [x,ssq] = LMFnlsq(Eqns,...);
    [x,ssq,cnt] = LMFnlsq(Eqns,...);
    [x,ssq,cnt,nfJ] = LMFnlsq(Eqns,...);
    [x,ssq,cnt,nfJ,XY] = LMFnlsq(Eqns,...);

In all cases, the applied variables have the following meaning:
% Eqns is a function name or a handle defining a set of equations,
% X0 is a vector of initial estimates of the solution,
% x is the least-squares solution,
% ssq is the sum of squares of equation residuals,
% cnt is the number of iterations,
% nfJ is the total number of calls of Eqns and of the Jacobian function,
% XY is a matrix of iteration results for a 2D problem [x(1), x(2)],
% Options is a list of Name-Value pairs, which may be set by the calls
Options = LMFnlsq; % for default values,
Options = LMFnlsq('Name',Value,...); % for user-chosen parameters,
Options = LMFnlsq(Options,'Name',Value,...); % for updating Options.
If no Options is defined, the default values of the options are used.

Field names 'Name' of the structure Options are:
% 'Display' for control of iteration results,
% 'MaxIter' for setting the maximum number of iterations,
% 'ScaleD' for defining a diagonal matrix of scales,
% 'FunTol' for the tolerance of final function values,
% 'XTol' scalar or vector for the tolerance of final solution increments,
% 'Trace' for control of iteration saving,
% 'Lambda' for setting the initial value of the parameter lambda,
% 'Jacobian' for a handle of a function that evaluates the Jacobian matrix.
If no handle is declared, an internal function for the finite-difference approximation of the matrix is used.
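As a hedged illustration of setting options (the option names are from the list above; jac is the analytical Jacobian of the Rosenbrock residuals of Example 1 below, assuming the signature J = jac(x)):
    jac = @(x) [-20*x(1), 10;   % dr1/dx1, dr1/dx2 of r1 = 10*(x(2)-x(1)^2)
                 -1,      0];   % dr2/dx1, dr2/dx2 of r2 = 1-x(1)
    Options = LMFnlsq('Display',1, 'MaxIter',100, 'Jacobian',jac);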

Example 1:
The well-known Rosenbrock function has the form
    f(x) = 100(x(2)-x(1)^2)^2 + (1-x(1))^2
The optimum solution gives f(x)=0 at x(1)=x(2)=1. The function f(x) can be expressed in the form
    f(x) = f1(x)^2 + f2(x)^2, where f1(x) = 10(x(2)-x(1)^2), and f2(x) = 1-x(1).
Values of the functions f1(x) and f2(x) can be used as residuals. The parameter Eqns takes the form of either a named function:
    function r = rosen(x)
    %ROSEN Residuals of the Rosenbrock valley:
    r = [ 10*(x(2)-x(1)^2)   % first part, f1(x)
          1-x(1) ];          % second part, f2(x)
or a handle of an anonymous function:
    rosen = @(x) [10*(x(2)-x(1)^2); 1-x(1)];
The calls differ accordingly:
    [x,...] = LMFnlsq('rosen',...); % in case of a named function, or
    [x,...] = LMFnlsq(rosen,...);   % in case of a function handle.
LMFnlsq finds the exact solution of this problem in 17 iterations.
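For completeness, a minimal sketch of the whole run; the starting point [-1.2; 1] (the classical Rosenbrock start) is an assumption, since no X0 is stated above:
    rosen = @(x) [10*(x(2)-x(1)^2); 1-x(1)];  % residuals f1(x), f2(x)
    [x,ssq,cnt] = LMFnlsq(rosen, [-1.2; 1])   % expect x -> [1; 1], ssq -> 0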

Example 2:
Regression of experimental data.
Let us have experimental data simulated by the decaying function
  y = c(1) + c(2)*exp(c(3)*x) + w*randn(size(x));
for a column vector x. An initial guess of the unknown parameters is obtained from approximations of y(x) at x=0 and for x->Inf as
c1 = y(end);
c2 = y(1)-c1; and
c3 = x(2:end-1)\log((y(2:end-1)-c1)/c2)
The anonymous function is defined for the predefined column vector x as
    res = @(c) c(1)+c(2)*exp(c(3)*x) - y;
and the call is, say,
    [x,ssq,cnt] = LMFnlsq(res,[c1;c2;c3],'Display',1);
Provided w=0 (no errors) and x=(0:.1:2)' with c=[1,2,-1], the initial estimates of the unknown coefficients are
c0 = [1.2707, 1.7293, -1.6226].
The call
    [c,ssq,cnt] = LMFnlsq(res,[c1;c2;c3])
gives the exact solution c=[1, 2, -1] in 9 iterations.
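Putting Example 2 together, a self-contained sketch for the noise-free case w=0 (using only what is stated above):
    x  = (0:.1:2)';                            % predefined column vector
    c  = [1; 2; -1];                           % true coefficients
    y  = c(1) + c(2)*exp(c(3)*x);              % simulated data, w = 0
    c1 = y(end);                               % estimate of c(1) for x -> Inf
    c2 = y(1) - c1;                            % estimate of c(2) from y at x=0
    c3 = x(2:end-1)\log((y(2:end-1)-c1)/c2);   % log-linear estimate of c(3)
    res = @(c) c(1) + c(2)*exp(c(3)*x) - y;    % residuals
    [c,ssq,cnt] = LMFnlsq(res, [c1;c2;c3])     % expect c = [1; 2; -1]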

Notes:
* Users of old MATLAB versions earlier than 7, which have no anonymous functions implemented, have to call LMFnlsq with named functions for the evaluation of residuals.
* See LMFnlsqtest for a short explanation and solved examples.
* If the values of the unknowns differ by orders of magnitude, it is recommended to use variable scaling (see help).
* The script BoxBOD.m from the NIST testing problems is complemented with the solution.

Reference:
Fletcher, R. (1971): A Modified Marquardt Subroutine for Nonlinear Least Squares. Rpt. AERE-R 6799, Harwell.

Acknowledgements

LMFsolve.m (Levenberg-Marquardt-Fletcher algorithm for nonlinear least squares problems) inspired this file.

This file inspired LMFsolve.m (Levenberg-Marquardt-Fletcher algorithm for nonlinear least squares problems) and ImGRAFT.

MATLAB release MATLAB 7.3 (R2006b)
Other requirements The testing script LMFnlsqtest exploits the functions inp, fig and separator from the FEX. Inform me of bugs in LMFnlsq or of possible improvements by e-mail.
Comments and Ratings (53)
09 Jul 2014 Aslak Grinsted

Thank you for this. I use it in imgraft.glaciology.net for optimising camera parameters and it works great. I have also used it in another project where it also delivered a massive performance boost.

03 Jan 2014 Miroslav Balda

Sorry, I am correcting a wrong copy-and-paste.
The right solutions are
x =
-0.0000
1.0000
ssq =
2.6880e-009
cnt =
149
===================
x =
0.0000
1.0000
ssq =
1.2586e-021
cnt =
4
===================
x =
0.0000
1.0000
ssq =
1.2586e-021
cnt =
4

Miroslav

02 Jan 2014 Miroslav Balda

Dear Amin
I thank you for your interest in my function LMFnlsq. The difference in solutions between LMFnlsq and fminsearch is caused by the default preset tolerances 'XTol' and 'FunTol'. By changing them, you would obtain just the same result. This is obvious from the first group of results of the script Amin.m with tightened tolerances:
% Amin.m
%%%%%%%%% 2014-01-03
clc
clear all
close all
format compact

y =@(x) x^2+1;
c =@(x) complex(x(1),x(2));
fx=@(x) abs(y(c(x)));

[x,ssq,cnt] = LMFnlsq(fx,[0.1;1.5],'XTol',1e-10,'FunTol',1e-10)

fprintf('===================\n');

% solution of 2 equations: Re c^2 + 1; Im c^2 = 0;
fx=@(x) [real(y(c(x))); imag(y(c(x)))];
[x,ssq,cnt] = LMFnlsq(fx,[0.1;1.5])

fprintf('===================\n');

% solution of 2 equations: Re c^2 + 1; Im c^2 = 0;
c = @(x) complex(x(1),x(2))^2;
fx = @(x) [real(c(x))+1; imag(c(x))];
[x,ssq,cnt] = LMFnlsq(fx,[0.1;1.5])


The first solution corresponds to the example solved by fminsearch, however with smaller tolerances; the solution is the same.

If you wanted to solve the >>complex<< problem f(x) = x^2 + 1, then the anonymous functions are different; they are given in the second and third cases of the script. It is clear that an almost exact solution is reached in only 4 steps (cnt).

All the best in the New Year!
Miroslav

02 Jan 2014 Amin

Dear Prof. Balda
Happy New Year
I have a question about LMFnlsq:
Currently I am working on a problem where I have to numerically solve a nonlinear equation whose results will be complex numbers. I googled it and found two solutions;
one of them is with your code and the other one is with fminsearch.
I checked the solutions on the simple problem:


clc
clear all
close all

y =@(x) x^2+1;
c =@(x) complex(x(1),x(2));
fx=@(x) abs(y(c(x)));

[x,ssq,cnt] = LMFnlsq(fx,[0.1;1.5])
[sol,fval]=fminsearch(fx,[0.1;1.5])

The results of the above equation are:


x =

-0.0004
1.0009

ssq =

4.1579e-06

cnt =

93

sol =

0.0000
1.0000

fval =

6.7954e-05

I wonder whether it is possible to use your code in this case, and whether any special option should be chosen for that?
Thank you very much
Best Regards

11 Sep 2013 Jose

Dear Prof. Balda,
I apologise for jumping to conclusions. You are correct and I have a test here that shows how LMFnlsq gives the same results as nlinfit and lsqnonlin.

hu1 = @(b,x) (b(1)*x(:,2) - x(:,3)/b(5))./(1 + b(2)*x(:,1) + b(3)*x(:,2) + b(4)*x(:,3));
x1=[470 285 470 470 470 100 100 470 100 100 100 285 285]';
x2=[300 80 300 80 80 190 80 190 300 300 80 300 190]';
x3=[10 10 120 120 10 10 65 65 54 120 120 10 120]';
y=[8.55 3.79 4.82 0.02 2.75 14.39 2.54 4.35 13.00 8.50 0.05 11.32 3.13]';
x=[x1 x2 x3];
beta=[1 0.05 0.02 0.1 2];
[betahat,f,J]=nlinfit(x,y,hu1,beta);
disp(betahat);
% 1.2526 0.0628 0.0400 0.1124 1.1914

fun = @(b) (b(1)*x2 - x3/b(5))./(1 + b(2)*x1 + b(3)*x2 + b(4)*x3) - y;
[C,ssq,cnt] = LMFnlsq(fun, beta, 'Display',[-10,0], 'FunTol',1e-10);
disp(C);
% 1.2526 0.0628 0.0400 0.1124 1.1914

lsqnonlin(fun, beta)
% 1.2526 0.0628 0.0400 0.1124 1.1914

09 Sep 2013 Miroslav Balda

@Jose
There are no such bugs in LMFnlsq and LMFnlsq2 as you have announced. There is an error in the data, because all vectors x1, x2, x3, and y should be COLUMN vectors. Moreover, the division operator in fun should be the array operator './'. The repaired code follows:

% Repaired data: Column vectors!
x1 = [470, 285, 470, 470, 470, 100, 100, 470, 100, 100, 100, 285, 285]';
x2 = [300, 80, 300, 80, 80, 190, 80, 190, 300, 300, 80, 300, 190]';
x3 = [ 10, 10, 120, 120, 10, 10, 65, 65, 54, 120, 120, 10, 120]';
y = [8.55, 3.79, 4.82, 0.02, 2.75, 14.39, 2.54, 4.35, 13.00, 8.50, 0.05, 11.32, 3.13]';
b0 = rand(5,1);
fun = @(b) (b(1)*x2 - x3/b(5))./(1 + b(2)*x1 + b(3)*x2 + b(4)*x3) - y; % ./
[C,ssq,cnt] = LMFnlsq2(fun, b0, 'Display',[-10,0], 'FunTol',1e-10);

08 Sep 2013 Jose

Dear Prof. Balda,
There is a bug on both LMFnlsq and LMFnlsq2, as shown in the code:

x1 = [470, 285, 470, 470, 470, 100, 100, 470, 100, 100, 100, 285, 285];
x2 = [300, 80, 300, 80, 80, 190, 80, 190, 300, 300, 80, 300, 190];
x3 = [ 10, 10, 120, 120, 10, 10, 65, 65, 54, 120, 120, 10, 120];
y = [8.55, 3.79, 4.82, 0.02, 2.75, 14.39, 2.54, 4.35, 13.00, 8.50, 0.05, 11.32, 3.13];
fun = @(b) ( b(1)*x2 - x3/b(5) )/(1 + b(2)*x1 + b(3)*x2 + b(4)*x3) - y;
[C,ssq,cnt] = LMFnlsq(fun, ones(5,1))

Error using *
Inner matrix dimensions must agree.

Error in LMFnlsq>getAv (line 434)
v = J'*r;

Error in LMFnlsq (line 279)
[A,v] = getAv(FUN,JAC,x,r,epsx);

28 Aug 2013 Emmanuel Farhi

That's a good contribution for LM fitting. I use it without constraints/restraints.

09 Apr 2013 Miroslav Balda

@Jose
Yes, your residuals are in order, except for the brackets. However, we apply penalty functions in cases where we want to constrain a solution. Your proposal does not influence the solution at all, because all iterations remain within the square of side = 4. You may try what happens if you introduce the bounds +-0.5 or +-0.25. Try it. Or look at the example in the LMFnlsqtest file, where there is a more complex (circular) area in which the solution should lie.

Weights are active only in iterations that are currently outside the bounds. This means that it is not necessary to search for "optimal" weights, because no optimal weight exists; the solution should be insensitive to the weight value. The residuals corresponding to the penalty functions only have to be bigger than those of the other equations, to keep the iterations within the bounds, where the penalties are zero. One can quickly guess that w = 10 up to 1000 could be appropriate. The final solution should not depend on the exact value of the weight.
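A minimal sketch of this penalty scheme for the Rosenbrock residuals, using the tighter box +-0.5 suggested above; the weight w = 100 and the starting point are illustrative guesses, not values from the discussion:
    w = 100;                                  % illustrative weight (10 up to 1000)
    b = 0.5;                                  % box half-width, -b <= x(i) <= b
    rosenb = @(x) [10*(x(2)-x(1)^2);          % f1(x)
                   1-x(1);                    % f2(x)
                   (x(1)> b)*(x(1)-b)*w;      % penalty residuals: zero inside
                   (x(1)<-b)*(x(1)+b)*w;      %   the box, linear in the
                   (x(2)> b)*(x(2)-b)*w;      %   violation outside it
                   (x(2)<-b)*(x(2)+b)*w];
    [x,ssq,cnt] = LMFnlsq(rosenb, [0; 0]);    % start inside the box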

07 Apr 2013 Jose

Dear Prof. Balda,
Thank you so much, I understand it better now!
For the bounded Rosenbrock problem:
-2 <= x[1] <= 2
-2 <= x[2] <= 2
We need 4 penalty functions:
(x[1]>2)*(x[1]-2)*w2,
(x[1]<-2)*(x[1]+2)*w2,
(x[2]>2)*(x[2]-2)*w2,
(x[2]<-2)*(x[2]+2)*w2
And we try several values for w2, for example 0.1, 0.2, 0.4, ..., 1.9 to get the "best" value for w2.
Is this correct?

04 Apr 2013 Miroslav Balda

@Jose 3.4.2013
No, it cannot work, for three reasons:
1. Formally, the chosen interval does not reduce the domain of feasible solutions, which lies within the interval. However, the more important reasons are:
2. Your condition
(x(2)>2)*(x(2)<-2)*x(2)*w2 is a combination of two conditions. Thus you have to set up two residuals:
(x(2)>2)*(x(2)-2)*w2;
(x(2)<-2)*(x(2)+2)*w2;
Both residuals are zero when their condition is false; otherwise they depend linearly on the distance from the limit value, due to the second term (before the weight w2).
3. Your proposed condition is identically equal to zero, because the product of the two logical terms equals zero for any x.

The weight w2 should be chosen carefully, in such a way that it gives a solution just on the limit in case the minimum lies outside the interval. It should be neither too small nor too large.

03 Apr 2013 Jose

Dear Prof. Balda,
Thank you for your reply.
So, for the Rosenbrock problem with bounds:
-2 <= x <= 2
The penalty function is:
(x(2) > 2)*(x(2) < -2)*x(2)*w2
What is the best way to find w2 experimentally?

02 Apr 2013 Miroslav Balda

@Kat
I am preparing a function npeaks.m that decomposes a function into a sum of Gaussian functions with the use of the LMFnlsq function. It will appear shortly.

@Jose
Bounds can be introduced by additional residuals in the form of penalty functions.
The residual (x(4)<0)*x(4)*w4
will keep x(4) nonnegative if the user-defined weight w4 is a suitably large positive real value. It should be found experimentally.
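A sketch of appending this residual to a user's residual vector; eqns, x0 and the weight value are illustrative placeholders, not part of the original advice:
    w4  = 100;                         % illustrative weight, to be tuned by trial
    res = @(x) [eqns(x)                % eqns: handle returning the user's residuals
                (x(4)<0)*x(4)*w4];     % extra penalty row, zero for x(4) >= 0
    [x,ssq,cnt] = LMFnlsq(res, x0);    % x0: the user's initial estimate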

02 Apr 2013 Jose

Dear Prof. Balda,
Excellent results, thank you for sharing!

How can we add parameter bounds?

28 Mar 2013 Kat

My data look a bit like a Gaussian (normal) curve. I fitted them using 1, 2, 3 and 4 Gaussians added together, where the sigmas, centre positions and heights were variable. It worked well for the 2 and 4 Gaussians. I noticed that for a 'res' function with an odd number of Gaussians added together, the height of the final fit (based on the x values output after cnt iterations) does not match the data. I.e., if you make a simple Gaussian function as the data you want to fit and take as your guess function a Gaussian with the same parameters you created the data with, the final residuals/variables x will not be the same. The height will be off by a noticeable amount. This only happened with odd numbers, though. For an even number of Gaussians added together in the 'guess' fit, it worked perfectly.

11 Mar 2013 Surpan

@Prof. Miroslav Balda
Here is my question on MATLAB Answers:
http://www.mathworks.com/matlabcentral/answers/65291-imaginary-value-from-solution-nonlinier-least-square
Please check it there.

11 Mar 2013 Surpan

@Prof. Miroslav Balda
Actually, my data consist of many wavelengths, but here I give you data for two wavelengths.
For 690nm:
F = [0.779543143 0.624771688 0.387776261 0.266986051 0.174732672 0.130313689 0.110540421 0.051412959]

a is in the range 0.09 - 0.8
b is in the range 7 - 15

for c = [0.8 0.9 1 1.1 1.2 1.3 1.4 1.6]

I always get an imaginary solution with your code; please help me.

01 Mar 2013 Kat

I figured it out! It took a while; I just split the function in half and kept the rest of the form the same: res=@(y)eqn(1)(x<b)+eqn(2)(b<=x). It took a few tries to get the syntax right. Thanks for the amazing algorithm!!!
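A hedged reconstruction of this split-function idea for the two-sigma Gaussian described in the question below; the data, the parameter layout p = [height; centre; s1; s2] and the initial guess are all illustrative assumptions:
    t = (-5:.05:5)';                              % sample abscissae
    y = 2*exp(-((t+0.5)/0.8).^2).*(t <  -0.5) + ...
        2*exp(-((t+0.5)/1.6).^2).*(t >= -0.5);    % simulated two-sigma peak
    model = @(p,u) p(1)*( exp(-((u-p(2))/p(3)).^2).*(u <  p(2)) + ...
                          exp(-((u-p(2))/p(4)).^2).*(u >= p(2)) );
    res = @(p) model(p,t) - y;                    % residuals against the data
    [p,ssq,cnt] = LMFnlsq(res, [1; 0; 1; 1])      % p = [height; centre; s1; s2]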

01 Mar 2013 Miroslav Balda

@Surpan
Your problem is solvable after introducing a new variable d=a+b and constraining the variables to be real. If you send me your data, I'll try to solve it.

28 Feb 2013 Surpan

Hi,
I am trying to contact you by e-mail because I need to solve my case, so I ended up here.

I have a function with 2 parameters, like this:
F = (0.6/(a + b))*(sqrt(3*a*(a + b))+(1/c))*(exp(-sqrt(3*a*(a + b ))*c))/(c^2)

F is known reflectance data,
a and b are parameters,
c is a known vector of distances.

Can I use LMFnlsq to solve that with the Levenberg-Marquardt algorithm? I am trying to use the code, but the solution gives me imaginary a and b.

28 Feb 2013 Kat

I'm fitting with a discontinuous function. My data resemble a normal curve but are wider on one side, so I hope to fit them with two halves of a normal curve with different sigmas (spreads) on either side of the centre point. How do I input this into the res function? The fitted parameters are the height, the two sigmas and the centre point. Furthermore, will the fact that the centre is what I'm trying to make more accurate be a big problem if I'm using it to identify which side of the function (which sigma) I'm on?

31 Jan 2013 Miroslav Balda

The algorithm LMFnlsq is able to find the best approximation of a set of points by a curve supplied by the user. The curve is described by a formula with a number of (at the beginning) unknown parameters. The function LMFnlsq modifies the values of the parameters in such a way that the sum of squares of the differences between the measured points and the values of the function at positions corresponding to the measured points becomes minimal. In your case, you may choose all pixels describing a rib as the measured set of points, and should the form of a suitable function be known, the function LMFnlsq might be used for finding the optimum parameters of that function.

31 Jan 2013 Munie Ahmad

Dear Prof Balda,

I have a case where I need to fit multiple curves in one image. The image is a rough trace of ribs in BW from a chest x-ray. The ribs are processed one side at a time (left and right). I want to use multiple curve fitting to get the exact curve of each rib on the left/right side. I would like to know if this algorithm is able to solve my problem. Please advise; many thanks in advance.

(miz.jaggy18@gmail.com)

11 Jul 2012 Richard Crozier

very useful, thanks!

02 Jan 2012 Sadik

I thank you so much for providing this code. Implementation has its own challenges but your code worked like a charm to show how beautifully theory and practice meet. With good initial estimates, the algorithm converged in only a few iterations.

FYI: I used it for T1 estimation in MR where the equations in our case were much more complicated than single exponential recovery equations.

Thanks.

17 Dec 2011 Shayan

Man thank you so much, it was amazing!

14 Dec 2011 David Zhang

@Miroslav:

I was using an analytical Jacobian. I also will not accept your assessment that a failed lsqcurvefit does not deserve 1 star. The NIST solutions are tested using different methods (hence the standard deviation is also available), so failing even one of them means there are codes out there that perform better, even from the poor starting condition.

Furthermore, 3 of the 5 failure cases are observed data. They are not some cooked-up examples designed to make the optimization fail. Therefore, if used in real-world examples, there is a possibility that it will not work. I am simply letting everybody know, as a caveat emptor, that the program's solution should be taken with a grain of salt, even if the sum of squares dropped by 8 orders. Although, as the website says, passing all tests is not a prerequisite for good optimization software, don't you think that not passing the tests of an agency that establishes industry standards means it should not be considered an "industry standard"?

I admit giving it 1 star was too aggressive. I did this to get your attention and a response. Indeed it worked. Unfortunately I can't go back and change it. Because this program works for all easy to intermediate difficulty problems, it deserves 4 stars.

14 Dec 2011 Miroslav Balda

@ David:
I thank you for the thorough testing of my function LMFnlsq. You have found that 5 functions of the highest difficulty, out of 27 in total, failed. It is true only partially. They failed only for the worst initial estimates of the solution. A good solution has been obtained with the closer estimate for all the above examples but Bennett5. In that case only one digit of the sum of squares was valid; however, the sum of squares dropped by 8 orders! All those results have been obtained with the embedded function for finite-difference Jacobian matrix estimates, which strongly diminishes the precision of the results. Only an analytical Jacobian matrix ensures top results. I am afraid that you also used finite-difference approximations.

The NIST collection of testing problems is only a recommendation for software authors. Contrary to your opinion, I am satisfied with the results I have obtained, especially after testing the less difficult problem Hahn1, which converged for both estimates of the solution, while the function lsqcurvefit from the Optimization Toolbox (OT) did not find any solution. Would you evaluate it also with one star? Not me, because OT is much better. However, my function is smaller and takes fewer iterations to solve the problem. This is the reason why I regard your evaluation as a bad joke.
One star means: "withdraw your function from the FEX!". I will not obey your evaluation, because there are hundreds and maybe thousands of people who have downloaded and successfully applied my function for solving their problems. Some of them gave the function the top evaluation, even though I never declared it an "industry standard".

02 Dec 2011 David Zhang

By the way, the failures occur in

MGH17
BoxBOD
MGH10
Eckerle4
Bennett5

02 Dec 2011 David Zhang

This doesn't work for all of the least-squares tests established by the National Institute of Standards and Technology (NIST).

http://www.itl.nist.gov/div898/strd/nls/nls_main.shtml

So this can't be called the industry standard.

23 Jul 2011 Ryan Muir

Works fast and well! However, it has a HUGE learning curve if you don't write code at a very high level... I found the included examples to be less than completely understandable, and they don't work unless you know how to use them. Instead, here is a VERY SIMPLE program that uses this algorithm to fit a function to a simulated dataset. I took it and modified it from a forum post elsewhere on the internet. Recall that chi^2 = sum(((x - mu)/sigma).^2); the point of this function is to determine what parameters in mu minimize chi^2:

% Ender 2008-07-08
a = 10, b = -8, c = -.2 %Model parameters
t = (0:.01:1)'; %should be a column vector

data = a + b*(1-exp(t/c)) + .1*randn(size(t)); %Function plus random error; simulates data

res = @(x) data - (x(1) + x(2)*(1-exp(t/x(3))) ); %Numerator of chi^2, with quantity squared implied for fitting in LMFnlsq; also known as residuals

x0 = [5,3,-.1]; %rather poor initial guess of parameters

[x,ssq,cnt] = LMFnlsq(res,x0) %Returns the parameter values which minimized the value of the numerator

%Plot the data
hold on
plot(t,data)
plot(t,(x(1) + x(2)*(1-exp(t/x(3)))),'r'), grid

09 Mar 2011 Ryan Muir

This program is excellent. I found that it was able to do my nonlinear regression very quickly, with multiple quick and easy options for both analysis and displaying the data correctly. Further, once you figure out how to use the function, performing a nonlinear regression was quite easy.

However, trying to initially figure out how to use the software was quite hard =P. I initially attempted to follow the instructions posted here, which are not complete or correct in their explanations or equations. Once I discovered the included test file, which is correct and will run, I was able to copy and paste the relevant section of code (example 3) into MATLAB and get instant results. I was then able to pick apart and adapt the example for my own purposes. Figuring all that out took quite a while. I took off one star for that upset, but it's really an excellent software package!

16 Feb 2011 Xylar Asay-Davis

This function was extremely helpful for the problem I'm working on: least-squares fit to multidimensional rational polynomials.

Thanks very much!

09 Nov 2010 Neil Dalchau

This is an excellent improvement to LMFsolve, which I only became aware of today. This upgrade has considerably improved some of my existing analyses. Convergence is now 100% for my problem.

Many thanks Miroslav!

12 Oct 2009 Miroslav Balda

Read my answer dated 04 Jun 2009. I'll answer you later by e-mail.

01 Oct 2009 Shin-ichi Uehara

Hi,

I want to do curve fitting of some laboratory results with a model.
The model which I want to use cannot be described by a single function,
but is described by a series of differential equations.
In this case, I do not know how to apply LMFnlsq.
It would be very helpful if you could give any advice.

Thanks.

(I have been applying "fminsearch" to this problem so far, but
the description of "fminsearch" mentions that this code is
not suitable for this kind of problem.)

15 Aug 2009 Jon Doan

Very nice

04 Jun 2009 Miroslav Balda

This part of the submission is intended for comments and ratings. Moreover, I have asked everybody to write to me by e-mail about errors and improvements; see the section Other requirements. This is the reason why I will not communicate here anymore.

03 Jun 2009 Hossein SOLEIMANI

Hi

How can I give an interval for the results? I don't need negative values in the answers, or I need some answers higher
than certain values.

You also helped me before, Mira.

thanks.

03 Jun 2009 jeyasenthil

I tried to run the example Rosenbrock function (SQP) with "XY" in the output arguments, but it does not work; it shows XY = [] (an empty matrix). Can anybody clarify this for me?

function ros = rosen(x)
d = sqrt(x(1)^2+x(2)^2)-1.5;
r=1.5;
ros = [ 10*(x(2)-x(1)^2) % first part, f1(x)
1-x(1) % second part, f2(x)
(d>0)*d*1000 ];


x0 = [-1.9, 2];
options = LMFnlsq;
[x,ssq,cnt,nfJ,XY] = LMFnlsq('rosen',x0,options)
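Judging from the option list in the description ('Trace' for control of iteration saving), XY presumably stays empty unless tracing is switched on; a hedged guess, where the value 1 is an assumption:
    options = LMFnlsq('Trace',1);                     % assumed: switch iteration saving on
    [x,ssq,cnt,nfJ,XY] = LMFnlsq('rosen',x0,options)  % XY would then hold the iteration history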

24 May 2009 Kevin

Professor Balda:
I truly appreciate your helping me out to solve a nonlinear equation.
Your code worked beautifully for me, and thanks again for your prompt feedback over the weekend.
Thanks again, Dr. Balda.

28 Oct 2008 Steven White

This is a great bit of code that does exactly what it says. I used it to replace lsqcurvefit as I no longer have the optimisation toolbox.

I would suggest that the author also consider adding further examples, as the ones given are very complicated to understand if you come from a very different field. However, after a little searching I found this post by the author, which I found much more useful:
http://mathforum.org/kb/message.jspa?messageID=6283753&tstart=0

It tells you how to use the function to fit a curve to some data. This should be easily modified for fitting other functions/data etc.

15 Jul 2008 Jon Jackson

Works well, thanks.

For others who may be looking for the LMFtest dependencies, they are file ID numbers 9033, 9035 and 11725 (as documented in the pdf file appendix).

08 Jul 2008 Miroslav Balda

I tested the function LMFnlsq once again after reading your comment and evaluation. I am sorry that you are not able to copy and paste ASCII lines without errors. You did not try the script LMFtest, which is included in the zip file together with LMFnlsq, where you could get results without any copying. Was it so difficult to look at it and find out how to use the function LMFnlsq? I am afraid of the reaction of other people, who could use an even better function than the one they may find in the Optimization Toolbox, but seeing your evaluation they will pass it by. It is not fair. I think that you evaluated yourself.

07 Jul 2008 Ender Wiggin

The solution does not work. I get an error whenever I copy and paste the code and then run it in MATLAB.

Updates
15 Nov 2007

Formal changes of the description for a better appearance.

09 Jan 2009

Improved functionality, removed bug in assembling analytical Jacobian matrix, complemented example for curve fitting.

27 Jan 2009

Implemented a new subfunction for printing intermediate results; modified both the function and the testing script code. The description has been complemented by a description of the LMF method.

20 Dec 2011

Improved description (help), improved printing of results, complemented results of NIST test by BoxBOD problem.

20 Dec 2011

Forgotten zip-file of functions appended

25 Feb 2012

A small bug in setting of the vector XTol removed
