Solve nonlinear curve-fitting (data-fitting) problems in the least-squares sense
Find coefficients x that solve the problem
$$\underset{x}{\mathrm{min}}{\Vert F(x,xdata)-ydata\Vert}_{2}^{2}=\underset{x}{\mathrm{min}}{\displaystyle \sum _{i}{\left(F\left(x,xdat{a}_{i}\right)-ydat{a}_{i}\right)}^{2}},$$
given input data xdata and the observed output ydata, where xdata and ydata are matrices or vectors, and F(x,xdata) is a matrix-valued or vector-valued function of the same size as ydata.
Optionally, the components of x can have lower and upper bounds lb and ub. The arguments x, lb, and ub can be vectors or matrices; see Matrix Arguments.
The lsqcurvefit function uses the same algorithm as lsqnonlin. lsqcurvefit simply provides a convenient interface for data-fitting problems.
x = lsqcurvefit(fun,x0,xdata,ydata)
x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub)
x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub,options)
x = lsqcurvefit(problem)
[x,resnorm] = lsqcurvefit(...)
[x,resnorm,residual] = lsqcurvefit(...)
[x,resnorm,residual,exitflag] = lsqcurvefit(...)
[x,resnorm,residual,exitflag,output] = lsqcurvefit(...)
[x,resnorm,residual,exitflag,output,lambda] = lsqcurvefit(...)
[x,resnorm,residual,exitflag,output,lambda,jacobian] = lsqcurvefit(...)
x = lsqcurvefit(fun,x0,xdata,ydata) starts at x0 and finds coefficients x to best fit the nonlinear function fun(x,xdata) to the data ydata (in the least-squares sense). ydata must be the same size as the vector (or matrix) F returned by fun.
Note: Passing Extra Parameters explains how to pass extra parameters to fun, if necessary.
x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub) defines a set of lower and upper bounds on the design variables in x, so that the solution is always in the range lb ≤ x ≤ ub. You can fix the solution component x(i) by specifying lb(i) = ub(i).
x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub,options) minimizes with the optimization options specified in options. Use optimoptions to set these options. Pass empty matrices for lb and ub if no bounds exist.
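As a sketch of this calling form, the following sets an options object and passes empty bounds; fun, x0, xdata, and ydata are assumed to already exist in the workspace, and the option shown (Display) is described under Options below:

```matlab
% Sketch: pass options while leaving the variables unbounded
options = optimoptions('lsqcurvefit','Display','iter');
x = lsqcurvefit(fun,x0,xdata,ydata,[],[],options);
```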
x = lsqcurvefit(problem) finds the minimum for problem, where problem is a structure described in Input Arguments. Create the problem structure by exporting a problem from the Optimization app, as described in Exporting Your Work.
[x,resnorm] = lsqcurvefit(...) returns the value of the squared 2-norm of the residual at x: sum((fun(x,xdata)-ydata).^2).
[x,resnorm,residual] = lsqcurvefit(...) returns the value of the residual fun(x,xdata)-ydata at the solution x.
[x,resnorm,residual,exitflag] = lsqcurvefit(...) returns a value exitflag that describes the exit condition.
[x,resnorm,residual,exitflag,output] = lsqcurvefit(...) returns a structure output that contains information about the optimization.
[x,resnorm,residual,exitflag,output,lambda] = lsqcurvefit(...) returns a structure lambda whose fields contain the Lagrange multipliers at the solution x.
[x,resnorm,residual,exitflag,output,lambda,jacobian] = lsqcurvefit(...) returns the Jacobian of fun at the solution x.
Note: If the specified input bounds for a problem are inconsistent, the output x is x0 and the outputs resnorm and residual are [].
Function Arguments contains general descriptions of arguments passed into lsqcurvefit.
This section provides function-specific details for fun, options, and problem:
fun
The function you want to fit. Pass fun as a function handle:
x = lsqcurvefit(@myfun,x0,xdata,ydata)
where myfun is a MATLAB function such as
function F = myfun(x,xdata)
F = ...     % Compute function values at x, xdata
fun can also be a function handle for an anonymous function:
f = @(x,xdata)x(1)*xdata.^2+x(2)*sin(xdata);
x = lsqcurvefit(f,x0,xdata,ydata);
If the Jacobian can also be computed and the Jacobian option is 'on', set by
options = optimoptions('lsqcurvefit','Jacobian','on')
then the function fun must return, in a second output argument, the Jacobian value J (a matrix) at x:
function [F,J] = myfun(x,xdata)
F = ...          % objective function values at x
if nargout > 1   % two output arguments
    J = ...      % Jacobian of the function evaluated at x
end
If fun returns a vector (matrix) of m components and x has length n, the Jacobian J is an m-by-n matrix where J(i,j) is the partial derivative of F(i) with respect to x(j). (The Jacobian J is the transpose of the gradient of F.)
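As an illustrative sketch (the function name expfun is hypothetical), here is a complete fitting function that returns an analytic Jacobian for a two-parameter exponential model F(x,xdata) = x(1)*exp(x(2)*xdata):

```matlab
function [F,J] = expfun(x,xdata)
% Exponential model and its analytic Jacobian
xd = xdata(:);             % work with a column vector
e  = exp(x(2)*xd);
F  = x(1)*e;               % model values
if nargout > 1             % Jacobian requested
    J = [e, x(1)*xd.*e];   % columns: dF/dx(1), dF/dx(2)
end
```

With options = optimoptions('lsqcurvefit','Jacobian','on'), lsqcurvefit then uses this Jacobian instead of a finite-difference approximation.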
options
Options provides the function-specific details for the options values.
problem
objective — Objective function of x and xdata
x0 — Initial point for x, active set algorithm only
xdata — Input data for objective function
ydata — Output data to be matched by objective function
lb — Vector of lower bounds
ub — Vector of upper bounds
solver — 'lsqcurvefit'
options — Options created with optimoptions
Function Arguments contains general descriptions of arguments returned by lsqcurvefit. This section provides function-specific details for exitflag, lambda, and output:
exitflag
Integer identifying the reason the algorithm terminated. The following lists the values of exitflag and the corresponding reasons the algorithm terminated:
1 — Function converged to a solution x.
2 — Change in x was less than the specified tolerance.
3 — Change in the residual was less than the specified tolerance.
4 — Magnitude of search direction was smaller than the specified tolerance.
0 — Number of iterations exceeded options.MaxIter or number of function evaluations exceeded options.MaxFunEvals.
-1 — Output function terminated the algorithm.
-2 — Problem is infeasible: the bounds lb and ub are inconsistent.
-4 — Optimization could not make further progress.
lambda
Structure containing the Lagrange multipliers at the solution x (separated by constraint type). The fields are:
lower — Lower bounds lb
upper — Upper bounds ub
output
Structure containing information about the optimization. The fields of the structure are:
firstorderopt — Measure of first-order optimality
iterations — Number of iterations taken
funcCount — Number of function evaluations
cgiterations — Total number of PCG iterations (trust-region-reflective algorithm only)
algorithm — Optimization algorithm used
stepsize — Final displacement in x
message — Exit message
Note: The sum of squares should not be formed explicitly. Instead, your function should return a vector of function values. See the examples below.
Optimization options used by lsqcurvefit. Some options apply to all algorithms, some are only relevant when using the trust-region-reflective algorithm, and others are only relevant when you are using the Levenberg-Marquardt algorithm. Use optimoptions to set or change options. See Algorithm Options for detailed information.
The Algorithm option specifies a preference for which algorithm to use. It is only a preference, because certain conditions must be met to use the trust-region-reflective or Levenberg-Marquardt algorithm. For the trust-region-reflective algorithm, the nonlinear system of equations cannot be underdetermined; that is, the number of equations (the number of elements of F returned by fun) must be at least as many as the length of x. Furthermore, only the trust-region-reflective algorithm handles bound constraints.
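For example, assuming the conditions above are met, you can state the algorithm preference through optimoptions (a sketch):

```matlab
% Sketch: prefer the Levenberg-Marquardt algorithm
options = optimoptions('lsqcurvefit','Algorithm','levenberg-marquardt');
```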
Both algorithms use the following option:
Algorithm — Choose between 'trust-region-reflective' (default) and 'levenberg-marquardt'.
DerivativeCheck — Compare user-supplied derivatives (gradients of objective or constraints) to finite-differencing derivatives. The choices are 'on' or the default 'off'.
Diagnostics — Display diagnostic information about the function to be minimized or solved. The choices are 'on' or the default 'off'.
DiffMaxChange — Maximum change in variables for finite-difference gradients (a positive scalar). The default is Inf.
DiffMinChange — Minimum change in variables for finite-difference gradients (a positive scalar). The default is 0.
Display — Level of display. 'off' displays no output; 'iter' displays output at each iteration; 'final' (default) displays just the final output.
FinDiffRelStep — Scalar or vector step size factor. When you set FinDiffRelStep to a vector v, forward finite differences delta are delta = v.*sign(x).*max(abs(x),TypicalX); and central finite differences are delta = v.*max(abs(x),TypicalX); A scalar FinDiffRelStep expands to a vector. The default is sqrt(eps) for forward finite differences, and eps^(1/3) for central finite differences.
FinDiffType — Finite differences, used to estimate gradients, are either 'forward' (the default) or 'central' (centered). 'central' takes twice as many function evaluations but should be more accurate. The algorithm is careful to obey bounds when estimating both types of finite differences. So, for example, it could take a backward, rather than a forward, difference to avoid evaluating at a point outside bounds.
FunValCheck — Check whether function values are valid. 'on' displays an error when the function returns a value that is complex, Inf, or NaN. The default, 'off', displays no error.
Jacobian — If 'on', lsqcurvefit uses a user-defined Jacobian (defined in fun). If 'off' (default), lsqcurvefit approximates the Jacobian using finite differences.
MaxFunEvals — Maximum number of function evaluations allowed, a positive integer. The default is 100*numberOfVariables.
MaxIter — Maximum number of iterations allowed, a positive integer. The default is 400.
OutputFcn — Specify one or more user-defined functions that an optimization function calls at each iteration, either as a function handle or as a cell array of function handles. The default is none ([]).
PlotFcns — Plots various measures of progress while the algorithm executes; select from predefined plots or write your own. Pass a function handle or a cell array of function handles. The default is none ([]). For information on writing a custom plot function, see Plot Functions.
TolFun — Termination tolerance on the function value, a positive scalar. The default is 1e-6.
TolX — Termination tolerance on x, a positive scalar. The default is 1e-6.
TypicalX — Typical x values. The number of elements in TypicalX is equal to the number of elements in x0, the starting point. The default value is ones(numberofvariables,1).
The trust-region-reflective algorithm uses the following options:
JacobMult — Function handle for Jacobian multiply function. For large-scale structured problems, this function computes the Jacobian matrix products J*Y, J'*Y, or J'*(J*Y) without actually forming J. The function is of the form
W = jmfun(Jinfo,Y,flag)
where Jinfo contains the matrix used to compute the products. The first argument Jinfo must be the same as the second argument returned by the objective function fun, for example, by
[F,Jinfo] = fun(x)
Y is a matrix that has as many rows as the problem has dimensions. flag determines which product to compute: if flag == 0, W = J'*(J*Y); if flag > 0, W = J*Y; if flag < 0, W = J'*Y. In each case, J is not formed explicitly. See Minimization with Dense Structured Hessian, Linear Equalities and Jacobian Multiply Function with Linear Least Squares for similar examples.
JacobPattern — Sparsity pattern of the Jacobian for finite differencing. Set JacobPattern(i,j) = 1 when fun(i) depends on x(j); otherwise, set JacobPattern(i,j) = 0. Use JacobPattern when it is inconvenient to compute the Jacobian matrix J in fun, though you can determine (say, by inspection) when fun(i) depends on x(j). In the worst case, if the structure is unknown, do not set JacobPattern; the default behavior is as if JacobPattern is a dense matrix of ones, and a full finite-difference approximation is computed at each iteration, which can be very expensive for large problems.
MaxPCGIter — Maximum number of PCG (preconditioned conjugate gradient) iterations, a positive scalar. The default is max(1,floor(numberOfVariables/2)).
PrecondBandWidth — Upper bandwidth of preconditioner for PCG, a nonnegative integer. The default is Inf, which means a direct factorization (Cholesky) is used rather than conjugate gradients.
TolPCG — Termination tolerance on the PCG iteration, a positive scalar. The default is 0.1.
The Levenberg-Marquardt algorithm uses the following options:
InitDamping — Initial value of the Levenberg-Marquardt parameter, a positive scalar. Default is 1e-2.


Given vectors of data xdata and ydata, suppose you want to find coefficients x that give the best fit to the exponential decay equation
$$ydata(i)\text{=}x(1){e}^{x(2)xdata(i)}$$
That is, you want to minimize
$$\underset{x}{\mathrm{min}}{\displaystyle \sum _{i=1}^{m}{\left(F\left(x,xdat{a}_{i}\right)-ydat{a}_{i}\right)}^{2}},$$
where m is the length of xdata and ydata, the function F is defined by
F(x,xdata) = x(1)*exp(x(2)*xdata);
and the starting point is x0 = [100; -1];.
First, write a file to return the value of F (F has n components).
function F = myfun(x,xdata)
F = x(1)*exp(x(2)*xdata);
Next, invoke an optimization routine:
% Assume you determined xdata and ydata experimentally
xdata = ...
 [0.9 1.5 13.8 19.8 24.1 28.2 35.2 60.3 74.6 81.3];
ydata = ...
 [455.2 428.6 124.1 67.3 43.2 28.1 13.1 -0.4 -1.3 -1.5];
x0 = [100; -1] % Starting guess
[x,resnorm] = lsqcurvefit(@myfun,x0,xdata,ydata);
At the time that lsqcurvefit is called, xdata and ydata are assumed to exist and are vectors of the same size. They must be the same size because the value F returned by fun must be the same size as ydata.
After 27 function evaluations, this example gives the solution
x,resnorm

x =
  498.8309   -0.1013

resnorm =
    9.5049
There may be a slight variation in the number of iterations and the value of the returned x, depending on the platform and release.
You can use the trust-region-reflective algorithm in lsqnonlin, lsqcurvefit, and fsolve with small to medium-scale problems without computing the Jacobian in fun or providing the Jacobian sparsity pattern. (This also applies to using fmincon or fminunc without computing the Hessian or supplying the Hessian sparsity pattern.)
How small is small to medium-scale? No absolute answer is available, as it depends on the amount of virtual memory in your computer system configuration.
Suppose your problem has m equations and n unknowns. If the command J = sparse(ones(m,n)) causes an Out of memory error on your machine, then this is certainly too large a problem. If it does not result in an error, the problem might still be too large. You can only find out by running it and seeing if MATLAB runs within the amount of virtual memory available on your system.
The trust-region-reflective method does not allow equal upper and lower bounds. For example, if lb(2)==ub(2), lsqcurvefit gives the error
Equal upper and lower bounds not permitted.
lsqcurvefit does not handle equality constraints, which is another way to formulate equal bounds. If equality constraints are present, use fmincon, fminimax, or fgoalattain for alternative formulations where equality constraints can be included.
The function to be minimized must be continuous. lsqcurvefit might only give local solutions.
lsqcurvefit can solve complex-valued problems directly with the levenberg-marquardt algorithm. However, this algorithm does not accept bound constraints. For a complex problem with bound constraints, split the variables into real and imaginary parts, and use the trust-region-reflective algorithm. See Fit a Model to Complex-Valued Data.
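One hedged sketch of this splitting, assuming a complex-valued model function cfun of a complex parameter vector with initial guess z0 (all names here are hypothetical), with lb and ub bounding the stacked real and imaginary parts:

```matlab
% Sketch: fit complex parameters with bounds via real/imaginary splitting
n     = numel(z0);
join  = @(v) complex(v(1:n), v(n+1:2*n));   % rebuild complex z from real v
rfun  = @(v,xd) [real(cfun(join(v),xd)); ...
                 imag(cfun(join(v),xd))];   % stacked real-valued residual model
v0    = [real(z0); imag(z0)];
rdata = [real(ydata); imag(ydata)];
v = lsqcurvefit(rfun,v0,xdata,rdata,lb,ub); % bounds are now allowed
z = join(v);                                % recovered complex solution
```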
Note: The Statistics and Machine Learning Toolbox™ function nlinfit also fits nonlinear models to data.
The trust-region-reflective algorithm for lsqcurvefit does not solve underdetermined systems; it requires that the number of equations, i.e., the row dimension of F, be at least as great as the number of variables. In the underdetermined case, the Levenberg-Marquardt algorithm is used instead.
The preconditioner computation used in the preconditioned conjugate gradient part of the trust-region-reflective method forms J^T J (where J is the Jacobian matrix) before computing the preconditioner; therefore, a row of J with many nonzeros, which results in a nearly dense product J^T J, can lead to a costly solution process for large problems.
If components of x have no upper (or lower) bounds, then lsqcurvefit prefers that the corresponding components of ub (or lb) be set to inf (or -inf for lower bounds), as opposed to an arbitrary but very large positive (or negative for lower bounds) number.
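For example, a sketch for a two-component x where x(1) is bounded in [0, 10] and x(2) is unbounded:

```matlab
% Sketch: use inf/-inf for missing bounds rather than large finite numbers
lb = [0; -inf];
ub = [10; inf];
```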
Trust-Region-Reflective Problem Coverage and Requirements
For Large Problems:
Provide sparsity structure of the Jacobian or compute the Jacobian in fun.
The Jacobian should be sparse.
The Levenberg-Marquardt algorithm does not handle bound constraints. Since the trust-region-reflective algorithm does not handle underdetermined systems and the Levenberg-Marquardt does not handle bound constraints, problems with both these characteristics cannot be solved by lsqcurvefit.
[1] Coleman, T.F. and Y. Li, "An Interior, Trust Region Approach for Nonlinear Minimization Subject to Bounds," SIAM Journal on Optimization, Vol. 6, pp. 418-445, 1996.
[2] Coleman, T.F. and Y. Li, "On the Convergence of Reflective Newton Methods for Large-Scale Nonlinear Minimization Subject to Bounds," Mathematical Programming, Vol. 67, Number 2, pp. 189-224, 1994.
[3] Dennis, J. E. Jr., "Nonlinear Least-Squares," State of the Art in Numerical Analysis, ed. D. Jacobs, Academic Press, pp. 269-312, 1977.
[4] Levenberg, K., "A Method for the Solution of Certain Problems in Least-Squares," Quarterly Applied Math. 2, pp. 164-168, 1944.
[5] Marquardt, D., "An Algorithm for Least-Squares Estimation of Nonlinear Parameters," SIAM Journal Applied Math., Vol. 11, pp. 431-441, 1963.
[6] Moré, J. J., "The Levenberg-Marquardt Algorithm: Implementation and Theory," Numerical Analysis, ed. G. A. Watson, Lecture Notes in Mathematics 630, Springer Verlag, pp. 105-116, 1977.