Nonlinear Equations with Finite-Difference Jacobian

In the example Nonlinear Equations with Analytic Jacobian, the function bananaobj evaluates F and computes the Jacobian J. What if the code to compute the Jacobian is not available? Unless you indicate that the objective function also computes the Jacobian (by setting the Jacobian option to 'on' with optimoptions), fsolve, lsqnonlin, and lsqcurvefit approximate the Jacobian using finite differences. Finite differencing is the default behavior; you can also select it explicitly by setting Jacobian to 'off' using optimoptions.
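The idea behind the finite-difference approximation can be sketched as follows. This is an illustration only, not the solvers' internal implementation: the function name approxJac and the fixed step size delta are hypothetical, and the toolbox solvers choose their perturbation sizes adaptively.

function J = approxJac(fun, x)
% Forward-difference sketch of a Jacobian approximation (illustrative only).
n  = numel(x);
F0 = fun(x);                         % baseline function values F(x)
J  = zeros(numel(F0), n);
delta = sqrt(eps);                   % a typical forward-difference step
for j = 1:n
    xj = x;
    xj(j) = xj(j) + delta;           % perturb the j-th coordinate
    J(:,j) = (fun(xj) - F0)/delta;   % one extra evaluation of F per column
end
end

Each column of the approximation costs one additional evaluation of F, so approximating the full Jacobian costs n extra evaluations per iteration.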

This example uses bananaobj from the example Nonlinear Equations with Analytic Jacobian as the objective function, but sets Jacobian to 'off' so that fsolve approximates the Jacobian and ignores the second bananaobj output.

n = 64;
x0(1:n,1) = -1.9;      % set every component of the start point to -1.9 ...
x0(2:2:n,1) = 2;       % ... then overwrite the even components with 2
options = optimoptions(@fsolve,'Display','iter','Jacobian','off');
[x,F,exitflag,output,JAC] = fsolve(@bananaobj,x0,options);

The example produces the following output:

                                    Norm of    First-order   Trust-region
Iteration  Func-count   f(x)        step       optimality    radius
    0         65       8563.84                       615               1
    1        130       3093.71            1          329               1
    2        195       225.104          2.5         34.8             2.5
    3        260        212.48         6.25         34.1            6.25
    4        261        212.48         6.25         34.1            6.25
    5        326       102.771       1.5625         6.39            1.56
    6        327       102.771      3.90625         6.39            3.91
    7        392       87.7443     0.976562         2.19           0.977
    8        457       74.1426      2.44141         6.27            2.44
    9        458       74.1426      2.44141         6.27            2.44
   10        523        52.497     0.610352         1.52            0.61
   11        588       41.3297      1.52588         4.63            1.53
   12        653       34.5115      1.52588         6.97            1.53
   13        718       16.9716      1.52588         4.69            1.53
   14        783       8.16797      1.52588         3.77            1.53
   15        848       3.55178      1.52588         3.56            1.53
   16        913       1.38476      1.52588         3.31            1.53
   17        978      0.219553      1.16206         1.66            1.53
   18       1043             0    0.0468565            0            1.53

Equation solved.

fsolve completed because the vector of function values is near zero
as measured by the default value of the function tolerance, and
the problem appears regular as measured by the gradient.

The finite-difference version of this example requires the same number of iterations to converge as the analytic Jacobian version in the preceding example. It is generally the case that both versions converge at about the same rate in terms of iterations. However, the finite-difference version requires many additional function evaluations: each Jacobian approximation costs n extra evaluations of F, one per column, so with n = 64 the Func-count column grows by 65 for each iteration that computes a new Jacobian, rather than by 1. The cost of these extra evaluations might or might not be significant, depending on the particular problem.
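One way to see this cost directly is to compare the funcCount field of the fsolve output structure for the two settings. This sketch assumes bananaobj from the analytic-Jacobian example is on the MATLAB path:

n = 64;
x0(1:n,1) = -1.9;
x0(2:2:n,1) = 2;

optsFD  = optimoptions(@fsolve,'Display','none','Jacobian','off');
optsJac = optimoptions(@fsolve,'Display','none','Jacobian','on');

[~,~,~,outFD]  = fsolve(@bananaobj,x0,optsFD);   % finite-difference Jacobian
[~,~,~,outJac] = fsolve(@bananaobj,x0,optsJac);  % analytic Jacobian

fprintf('Function evaluations: %d (finite differences) vs %d (analytic)\n', ...
        outFD.funcCount, outJac.funcCount);

For this example, the finite-difference run reports a funcCount many times larger than the analytic run, while the iteration counts match.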
