## Jacobians and Memory in Nonlinear Least Squares



You can use the trust-region reflective algorithm in `lsqnonlin`, `lsqcurvefit`,
and `fsolve` with small- to medium-scale
problems without computing the Jacobian in `fun` or
providing the Jacobian sparsity pattern. (This example also applies
when using `fmincon` or `fminunc` without computing the Hessian or
supplying the Hessian sparsity pattern.) How small is small- to medium-scale?
No absolute answer is available, as it depends on the amount of virtual
memory in your computer system configuration.

Suppose your problem has `m` equations and `n` unknowns.
If the command `J = sparse(ones(m,n))` causes
an `Out of memory` error on your machine,
then this is certainly too large a problem. If it does not result
in an error, the problem might still be too large. You can only find
out by running it and seeing if MATLAB® runs within the amount
of virtual memory available on your system.
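For example, here is a minimal sketch of that feasibility check, using hypothetical dimensions `m` and `n`; the `try`/`catch` simply reports whether the pattern can be allocated:

```matlab
m = 100000;   % hypothetical number of equations
n = 50000;    % hypothetical number of unknowns
try
    J = sparse(ones(m,n));   % the test described above
    clear J
    disp('Pattern allocated; the problem may still be feasible.')
catch err
    disp(['Problem is too large for this machine: ' err.message])
end
```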

Suppose you have a small problem with 10 equations and 2 unknowns,
such as finding *x* that minimizes

$$\sum_{k=1}^{10}\left(2 + 2k - e^{kx_1} - e^{kx_2}\right)^2,$$

starting at the point `x = [0.3,0.4]`.

Because `lsqnonlin` assumes
that the sum of squares is not explicitly formed in the user function,
the function passed to `lsqnonlin` should instead
compute the vector-valued function

$$F_k(x) = 2 + 2k - e^{kx_1} - e^{kx_2},$$

for *k* = 1 to 10 (that is, `F` should
have 10 components).

Write a file `myfun.m` that computes the objective function values.

```matlab
function F = myfun(x)
k = 1:10;
F = 2 + 2*k - exp(k*x(1)) - exp(k*x(2));
```

Call the nonlinear least-squares routine.

```matlab
x0 = [0.3 0.4];   % Starting guess
[x,resnorm,res,eflag,output1] = lsqnonlin(@myfun,x0);   % Invoke optimizer
```

Because the Jacobian is not computed in `myfun.m`,
and no Jacobian sparsity pattern is provided by the `JacobPattern` option
in `options`, `lsqnonlin` calls
the trust-region reflective algorithm with `JacobPattern` set
to `Jstr = sparse(ones(10,2))`.
This is the default for `lsqnonlin`. Note that the `Jacobian` option
in `options` is set to `'off'` by
default.
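For reference, here is a sketch of supplying this pattern explicitly; it merely reproduces the default behavior described above, using the option names from this page's release, where `JacobPattern` is a trust-region-reflective option:

```matlab
% Explicitly supply the (fully dense) default sparsity pattern
Jstr = sparse(ones(10,2));
opts = optimoptions(@lsqnonlin, ...
    'Algorithm','trust-region-reflective', ...
    'JacobPattern',Jstr);
x0 = [0.3 0.4];
x = lsqnonlin(@myfun,x0,[],[],opts);
```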

When the finite-differencing routine is called initially, it
detects that `Jstr` is actually a dense matrix, i.e.,
no speed benefit is derived from storing it as a sparse matrix. From
then on, the finite-differencing routine uses `Jstr = ones(10,2)` (a
full matrix) for the optimization computations.

After 72 function evaluations, this example gives the solution

```
x,resnorm

x =
    0.2578    0.2578

resnorm =
  124.3622
```

Most computer systems can handle much larger full problems,
say into the hundreds of equations and variables. But *if* there
is some sparsity structure in the Jacobian (or Hessian) that can be
taken advantage of, the large-scale methods always run faster if
this information is provided.
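For instance, if each equation depended only on neighboring unknowns, the Jacobian would be tridiagonal, and you could tell the solver so. A sketch with a hypothetical problem size `n`:

```matlab
n = 1000;                            % hypothetical problem size
Jstr = spdiags(ones(n,3),-1:1,n,n);  % tridiagonal nonzero pattern
opts = optimoptions(@lsqnonlin, ...
    'Algorithm','trust-region-reflective', ...
    'JacobPattern',Jstr);
% The finite-differencing routine can now estimate the whole Jacobian
% in a handful of function evaluations instead of n of them.
```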

The objective function is simple enough to calculate its Jacobian analytically. Following the definition in Jacobians of Vector Functions, the Jacobian is the matrix

$$J_{kj}(x) = \frac{\partial F_k(x)}{\partial x_j}.$$

Here, $F_k(x) = 2 + 2k - e^{kx_1} - e^{kx_2}$, so

$$J_{k1}(x) = -k\,e^{kx_1}, \qquad J_{k2}(x) = -k\,e^{kx_2}.$$
Modify the objective function file.

```matlab
function [F,J] = myfun(x)
k = 1:10;
F = 2 + 2*k - exp(k*x(1)) - exp(k*x(2));
if nargout > 1       % solver requested the Jacobian
    J = zeros(10,2);
    J(k,1) = -k.*exp(k*x(1));
    J(k,2) = -k.*exp(k*x(2));
end
```
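Before relying on an analytic Jacobian, it is worth checking it against a finite-difference approximation. A minimal sketch, with an arbitrarily chosen step size `h`:

```matlab
x = [0.3 0.4];
[~,J] = myfun(x);                 % analytic Jacobian
h = 1e-6;                         % assumed step size
Jfd = zeros(10,2);
for j = 1:2
    e = zeros(size(x)); e(j) = h;
    Jfd(:,j) = (myfun(x+e) - myfun(x-e)).'/(2*h);   % central difference
end
max(abs(J(:) - Jfd(:)))           % should be near zero
```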

Set options so the solver uses the Jacobian.

```matlab
opts = optimoptions(@lsqnonlin,'Jacobian','on');
```
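If you are on a newer release (R2016a or later), the same setting goes by a different name; a sketch, assuming such a release:

```matlab
% Newer option name; the legacy 'Jacobian' name is still accepted
% for compatibility in many releases.
opts = optimoptions(@lsqnonlin,'SpecifyObjectiveGradient',true);
```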

Run the solver.

```matlab
x0 = [0.3 0.4];   % Starting guess
[x,resnorm,res,eflag,output2] = lsqnonlin(@myfun,x0,[],[],opts);
```

The solution is the same as before.

```
x,resnorm

x =
    0.2578    0.2578

resnorm =
  124.3622
```

The advantage of supplying the Jacobian is that the solver takes fewer function evaluations, 24 instead of 72.

```
[output1.funcCount,output2.funcCount]

ans =
    72    24
```
