The initial point seems to be a local minimum or solution because the first-order optimality measure is close to 0. You might be unhappy with this result, since the solver did not improve your initial point.
If you are unsure that the initial point is truly a local minimum, try:
Starting from different points — see Change the Initial Point.
Checking that your objective and constraints are defined correctly (for example, do they return the expected values at a few test points?) — see Check your Objective and Constraint Functions. Also check that an infeasible point does not cause an error in your functions; see Iterations Can Violate Constraints.
Changing tolerances, such as TolFun, TolCon, and TolX — see Use Appropriate Tolerances.
Scaling your problem so each coordinate has about the same effect — see Rescale the Problem.
Providing gradient and Hessian information — see Provide Analytic Gradients or Jacobian and Provide a Hessian.
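The first suggestion, restarting from different points, can be sketched as a short loop. This is a minimal sketch, not a prescribed workflow: the objective funh is the one used in the example below, and the restart count and sampling box are arbitrary choices.

```matlab
% Sketch: restart fminunc from several random initial points and keep
% the best result found. The restart count and box are arbitrary.
funh = @(x)log(1 + (x(1) - 4/3)^2 + 3*(x(2) - (x(1)^3 - x(1)))^2);
bestfval = Inf;
for k = 1:10
    x0 = 10*rand(2,1) - 5;                 % random start in [-5,5]^2
    [x,fval,exitflag] = fminunc(funh,x0);
    if fval < bestfval
        bestx = x; bestfval = fval; bestflag = exitflag;
    end
end
```

If the best objective value found across restarts matches the value at your original point, that is further evidence the point is a genuine local minimum.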
The solver might have reached a local minimum, but cannot be certain because the first-order optimality measure is not less than the TolFun tolerance. (To learn more about the first-order optimality measure, see First-Order Optimality Measure.) To see if the reported solution is reliable, consider the following suggestions.
If you try to minimize a nonsmooth function, or have nonsmooth constraints, "Local Minimum Possible" can be the best exit message. This is because the first-order optimality conditions do not apply at a nonsmooth point.
To satisfy yourself that the solution is adequate, try to Check Nearby Points.
Restarting an optimization at the final point can lead to a solution with a better first-order optimality measure. A better (lower) first-order optimality measure gives you more reason to believe that the answer is reliable.
For example, consider the following minimization problem, taken from an example (echodemo symbolic_optim_demo):
options = optimoptions('fminunc','Algorithm','quasi-newton');
funh = @(x)log(1 + (x(1) - 4/3)^2 + 3*(x(2) - (x(1)^3 - x(1)))^2);
[xfinal fval exitflag] = fminunc(funh,[-1;2],options)

Local minimum possible.

fminunc stopped because it cannot decrease the 
objective function along the current search direction.

xfinal =
    1.3333
    1.0370

fval =
  8.5265e-014

exitflag =
     5
The exit flag value of 5 indicates that the first-order optimality measure was above the function tolerance. Run the minimization again starting from xfinal:
[xfinal2 fval2 exitflag2] = fminunc(funh,xfinal,options)

Local minimum found.

Optimization completed because the size of the gradient is 
less than the default value of the function tolerance.

xfinal2 =
    1.3333
    1.0370

fval2 =
  6.5281e-014

exitflag2 =
     1
The local minimum is "found," not "possible," and the exitflag is 1, not 5. The two solutions are virtually identical. Yet the second run has a more satisfactory exit message, since the first-order optimality measure was low enough: 7.5996e-007, instead of 3.9674e-006.
Many solvers give you a choice of algorithm, and different algorithms use different stopping criteria.
For example, the first run in Rerun Starting At Final Point returns exit flag 5. That run uses the quasi-newton algorithm.
The trust-region algorithm requires a user-supplied gradient. betopt.m contains a calculation of the objective function and gradient:
function [f gradf] = betopt(x)

g = 1 + (x(1)-4/3)^2 + 3*(x(2) - (x(1)^3-x(1)))^2;
f = log(g);
gradf(1) = 2*(x(1)-4/3) + 6*(x(2) - (x(1)^3-x(1)))*(1-3*x(1)^2);
gradf(1) = gradf(1)/g;
gradf(2) = 6*(x(2) - (x(1)^3 -x(1)))/g;
Running the optimization using the trust-region algorithm results in a different exitflag:
options = optimoptions('fminunc','Algorithm','trust-region','GradObj','on');
[xfinal3 fval3 exitflag3] = fminunc(@betopt,[-1;2],options)

Local minimum possible.

fminunc stopped because the final change in function value 
relative to its initial value is less than the default value 
of the function tolerance.

xfinal3 =
    1.3333
    1.0370

fval3 =
  7.6659e-012

exitflag3 =
     3
The exit condition is better than the quasi-newton condition, though it is still not the best. Rerunning the algorithm from the final point produces a better point, with extremely small first-order optimality measure:
[xfinal4 fval4 eflag4 output4] = fminunc(@betopt,xfinal3,options)

Local minimum found.

Optimization completed because the size of the gradient is 
less than the default value of the function tolerance.

xfinal4 =
    1.3333
    1.0370

fval4 =
     0

eflag4 =
     1

output4 = 
         iterations: 1
          funcCount: 2
       cgiterations: 1
      firstorderopt: 7.5497e-11
          algorithm: 'trust-region'
            message: 'Local minimum found. Optimization completed because the size o...'
    constrviolation: []
Sometimes tightening or loosening tolerances leads to a more satisfactory result. For example, rerun the example from the Try a Different Algorithm section with a smaller value of TolFun:
options = optimoptions('fminunc','Algorithm','trust-region',...
    'TolFun',1e-8,'GradObj','on'); % default TolFun = 1e-6
[xfinal3 fval3 eflag3 output3] = fminunc(@betopt,[-1;2],options)

Local minimum found.

Optimization completed because the size of the gradient is 
less than the selected value of the function tolerance.

xfinal3 =
    1.3333
    1.0370

fval3 =
     0

eflag3 =
     1

output3 = 
         iterations: 15
          funcCount: 16
       cgiterations: 12
      firstorderopt: 7.5497e-11
          algorithm: 'trust-region'
            message: 'Local minimum found. Optimization completed because the size...'
    constrviolation: []
fminunc took one more iteration than before, arriving at a better solution.
Try to have each coordinate give about the same effect on the objective and constraint functions by scaling and centering. For examples, see Center and Scale Your Problem.
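As a minimal sketch of that idea, suppose one coordinate is roughly a factor of 1e6 larger than the other. The objective and scale factors below are hypothetical, chosen only to illustrate the transformation:

```matlab
% Sketch: optimize over scaled variables y = D\x so that each coordinate
% has a similar effect. D holds typical magnitudes of the variables.
fun = @(x)(x(1)/1e6 - 1)^2 + (x(2) - 2)^2;   % hypothetical badly scaled objective
D = diag([1e6 1]);                            % illustrative typical magnitudes
scaledfun = @(y)fun(D*y);
y0 = D\[0; 0];                                % scale the initial point too
[yfinal,fval] = fminunc(scaledfun,y0);
xfinal = D*yfinal;                            % map back to original units
```

After solving in the scaled variables, remember to map the solution back to the original units, as in the last line.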
Evaluate your objective function and constraints, if they exist, at points near the final point. If the final point is a local minimum, nearby feasible points have larger objective function values. See Check Nearby Points for an example.
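A minimal sketch of such a check for the unconstrained example above follows; the perturbation size and sample count are arbitrary choices:

```matlab
% Sketch: sample random points near xfinal; if xfinal is a local minimum,
% nearby points should not have smaller objective values.
fun = @(x)log(1 + (x(1) - 4/3)^2 + 3*(x(2) - (x(1)^3 - x(1)))^2);
xfinal = [1.3333; 1.0370];                    % final point (rounded)
better = 0;
for k = 1:100
    xnear = xfinal + 1e-3*randn(size(xfinal));
    better = better + (fun(xnear) < fun(xfinal));
end
better   % a count of 0, or near 0, supports xfinal being a local minimum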
If you have a Global Optimization Toolbox license, try running the patternsearch solver from the final point. patternsearch examines nearby points, and accepts all types of constraints.
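A minimal sketch of that restart, using the objective and final point from the earlier example (this requires a Global Optimization Toolbox license):

```matlab
% Sketch: polish the final point with patternsearch, which does not rely
% on gradients and so also applies to nonsmooth problems.
fun = @(x)log(1 + (x(1) - 4/3)^2 + 3*(x(2) - (x(1)^3 - x(1)))^2);
xfinal = [1.3333; 1.0370];
[xps,fvalps,psflag] = patternsearch(fun,xfinal);
```

If patternsearch cannot find a nearby point with a smaller objective value, that is additional evidence the original solution is a local minimum.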
Central finite differences are more time-consuming to evaluate, but can be much more accurate. Use central differences when your problem can have high curvature.
To choose central differences at the command line, use optimoptions to set 'FinDiffType' to 'central', instead of the default 'forward'.
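For example (the solver name here is illustrative; the option applies to several solvers):

```matlab
% Select central finite differences instead of the default forward differences.
options = optimoptions('fmincon','FinDiffType','central');
```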
To choose central differences in the Optimization app, set Options > Approximated derivatives > Type to be central differences.
If you do not provide gradients or Jacobians, solvers estimate gradients and Jacobians by finite differences. Therefore, providing these derivatives can save computational time, and can lead to increased accuracy.
For constrained problems, providing a gradient has another advantage. A solver can reach a point x such that x is feasible, but finite differences around x always lead to an infeasible point. In this case, a solver can fail or halt prematurely. Providing a gradient allows a solver to proceed.
Provide gradients or Jacobians in the files for your objective function and nonlinear constraint functions. For details of the syntax, see Writing Scalar Objective Functions, Writing Vector and Matrix Objective Functions, and Nonlinear Constraints.
To check that your gradient or Jacobian function is correct, use the DerivativeCheck option, as described in Checking Validity of Gradients or Jacobians.
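As a sketch, using the betopt function from the earlier example, DerivativeCheck makes the solver compare your supplied gradient against a finite-difference estimate before the optimization starts:

```matlab
% Sketch: validate the user-supplied gradient in betopt before optimizing.
% The solver reports an error if the two gradient estimates disagree.
options = optimoptions('fminunc','Algorithm','trust-region',...
    'GradObj','on','DerivativeCheck','on');
[x,fval] = fminunc(@betopt,[-1;2],options);
```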
If you have a Symbolic Math Toolbox™ license, you can calculate gradients and Hessians programmatically. For an example, see Symbolic Math Toolbox Calculates Gradients and Hessians.
For examples using gradients and Jacobians, see Minimization with Gradient and Hessian, Nonlinear Constraints with Gradients, Symbolic Math Toolbox Calculates Gradients and Hessians, Nonlinear Equations with Analytic Jacobian, and Nonlinear Equations with Jacobian.
Solvers often run more reliably and with fewer iterations when you supply a Hessian.
The following solvers and algorithms accept Hessians:
fmincon interior-point. Write the Hessian as a separate function. For an example, see fmincon Interior-Point Algorithm with Analytic Hessian.
fmincon trust-region-reflective. Give the Hessian as the third output of the objective function. For an example, see Minimization with Dense Structured Hessian, Linear Equalities.
fminunc trust-region. Give the Hessian as the third output of the objective function. For an example, see Minimization with Gradient and Hessian.
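As a sketch of the second and third patterns, here is an objective function that returns its gradient and Hessian as additional outputs. This simple quadratic is illustrative only; it is not part of the earlier example:

```matlab
function [f,g,H] = quadobj(x)
% Sketch: objective returning value, gradient, and Hessian, suitable for
% fminunc trust-region with 'GradObj' and 'Hessian' both set to 'on'.
f = x(1)^2 + 3*x(2)^2 + x(1)*x(2);
g = [2*x(1) + x(2); 6*x(2) + x(1)];    % gradient of f
H = [2 1; 1 6];                        % constant Hessian of f
```

You would then create options with 'Algorithm' set to 'trust-region' and both 'GradObj' and 'Hessian' set to 'on', and pass @quadobj to fminunc.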
If you have a Symbolic Math Toolbox license, you can calculate gradients and Hessians programmatically. For an example, see Symbolic Math Toolbox Calculates Gradients and Hessians.
The example in Symbolic Math Toolbox Calculates Gradients and Hessians shows fmincon taking 77 iterations without a Hessian, but only 19 iterations with a Hessian.