
This example shows how to supply a Jacobian sparsity pattern to solve a large sparse system of equations efficiently. The example uses the objective function, defined for a system of *n* equations,

$$\begin{array}{l}F(1)=3{x}_{1}-2{x}_{1}^{2}-2{x}_{2}+1,\\ F(i)=3{x}_{i}-2{x}_{i}^{2}-{x}_{i-1}-2{x}_{i+1}+1,\\ F(n)=3{x}_{n}-2{x}_{n}^{2}-{x}_{n-1}+1.\end{array}$$

The equations to solve are $${F}_{i}(x)=0$$, $$1\le i\le n$$. The example uses $$n=1000$$. The `nlsf1a` helper function at the end of this example implements the objective function $$F(x)$$.

In the example Large Sparse System of Nonlinear Equations with Jacobian, which solves the same system, the objective function has an explicit Jacobian. However, sometimes you cannot compute the Jacobian explicitly, and yet you can determine which elements of the Jacobian are nonzero. In this example, the Jacobian is tridiagonal, because the only variables that appear in the definition of $$F(i)$$ are $${x}_{i-1}$$, $${x}_{i}$$, and $${x}_{i+1}$$. So the only nonzero parts of the Jacobian are on the main diagonal and its two neighboring diagonals. The Jacobian sparsity pattern is a matrix whose nonzero elements correspond to (potentially) nonzero elements in the Jacobian.
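As an informal check (not part of the original example), you can estimate the Jacobian by finite differences for a small $$n$$ and confirm that the nonzeros fall only on the three central diagonals. The `nlsf1a` helper is repeated here so the snippet runs on its own.

```matlab
% Estimate the Jacobian of the objective by forward differences for a
% small n, then confirm the nonzeros lie on the tridiagonal band.
n = 5;
x = linspace(-1,1,n)';              % any test point works
h = 1e-6;
F0 = nlsf1a(x);
J = zeros(n);
for k = 1:n
    xk = x;
    xk(k) = xk(k) + h;
    J(:,k) = (nlsf1a(xk) - F0)/h;   % forward-difference column k
end
pattern = abs(J) > 1e-6;            % estimated nonzero pattern
band = full(spdiags(ones(n,3),-1:1,n,n)) > 0;
disp(~any(pattern & ~band,'all'))   % true: no nonzeros outside the band

function F = nlsf1a(x)              % same helper as at the end of the example
n = length(x);
F = zeros(n,1);
i = 2:(n-1);
F(i) = (3-2*x(i)).*x(i)-x(i-1)-2*x(i+1) + 1;
F(n) = (3-2*x(n)).*x(n)-x(n-1) + 1;
F(1) = (3-2*x(1)).*x(1)-2*x(2) + 1;
end
```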

Create a sparse $$n$$-by-$$n$$ tridiagonal matrix of ones representing the Jacobian sparsity pattern. (For an explanation of this code, see `spdiags`.)

n = 1000; e = ones(n,1); Jstr = spdiags([e e e],-1:1,n,n);
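To see what this call builds, here is the same construction for a small size ($$n=5$$, for illustration only); `full` converts the sparse result to a dense matrix for display.

```matlab
n = 5;
e = ones(n,1);
full(spdiags([e e e],-1:1,n,n))
% ans =
%      1     1     0     0     0
%      1     1     1     0     0
%      0     1     1     1     0
%      0     0     1     1     1
%      0     0     0     1     1
```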

View the structure of `Jstr`.

spy(Jstr)

Set `fsolve` options to use the `'trust-region'` algorithm, which is the only `fsolve` algorithm that can make use of a Jacobian sparsity pattern.

options = optimoptions('fsolve','Algorithm','trust-region','JacobPattern',Jstr);

Set the initial point `x0` to a vector of –1 entries.

x0 = -1*ones(n,1);

Solve the problem.

[x,fval,exitflag,output] = fsolve(@nlsf1a,x0,options);

Equation solved. fsolve completed because the vector of function values is near zero as measured by the value of the function tolerance, and the problem appears regular as measured by the gradient.

Examine the resulting residual norm and the number of function evaluations that `fsolve` takes.

disp(norm(fval))

1.0522e-09

disp(output.funcCount)

25

The residual norm is near zero, indicating that `fsolve` solves the problem correctly. The number of function evaluations is fairly small, just a few dozen. Compare this number of function evaluations to the number without a supplied Jacobian sparsity pattern.

options = optimoptions('fsolve','Algorithm','trust-region');
[x,fval,exitflag,output] = fsolve(@nlsf1a,x0,options);

Equation solved. fsolve completed because the vector of function values is near zero as measured by the value of the function tolerance, and the problem appears regular as measured by the gradient.

disp(norm(fval))

1.0659e-09

disp(output.funcCount)

5005

The solver reaches essentially the same solution as before, but takes thousands of function evaluations to do so. Without a sparsity pattern, the trust-region algorithm approximates the full Jacobian by finite differences, which requires about $$n=1000$$ extra function evaluations per iteration; the total of 5005 is consistent with five iterations costing roughly $$n+1$$ evaluations each. With the tridiagonal pattern, the solver can group columns of the Jacobian and needs only a few extra evaluations per iteration.

This code creates the `nlsf1a` function.

function F = nlsf1a(x)
% Evaluate the vector function
n = length(x);
F = zeros(n,1);
i = 2:(n-1);
F(i) = (3-2*x(i)).*x(i)-x(i-1)-2*x(i+1) + 1;
F(n) = (3-2*x(n)).*x(n)-x(n-1) + 1;
F(1) = (3-2*x(1)).*x(1)-2*x(2) + 1;
end