Find minimum of unconstrained multivariable function
Nonlinear programming solver.
Finds the minimum of a problem specified by
$$\underset{x}{\mathrm{min}}f(x)$$
where f(x) is a function that returns a scalar.
x is a vector or a matrix; see Matrix Arguments.
x = fminunc(fun,x0)
x = fminunc(fun,x0,options)
x = fminunc(problem)
[x,fval] = fminunc(___)
[x,fval,exitflag,output] = fminunc(___)
[x,fval,exitflag,output,grad,hessian] = fminunc(___)
x = fminunc(fun,x0) starts at the point x0 and attempts to find a local minimum x of the function described in fun. The point x0 can be a scalar, vector, or matrix.
Passing Extra Parameters explains how to pass extra parameters to the objective function and nonlinear constraint functions, if necessary.
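A common way to do this is to capture the extra parameter in an anonymous function. The coefficient a below is a hypothetical example value, not part of the fminunc syntax:

```matlab
a = 3;                          % extra parameter (hypothetical example value)
fun = @(x)a*x(1)^2 + x(2)^2;    % the anonymous function captures a from the workspace
x = fminunc(fun,[1,1]);
```

The anonymous function stores the value that a has when the function is created, so changing a afterward does not affect fun.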
fminunc
is for nonlinear problems without
constraints. If your problem has constraints, generally use fmincon
. See Optimization Decision Table.
x = fminunc(fun,x0,options) minimizes fun with the optimization options specified in options. Use optimoptions to set these options.
x = fminunc(problem) finds the minimum for problem, where problem is a structure described in Input Arguments. Create the problem structure by exporting a problem from the Optimization app, as described in Exporting Your Work.
Minimize the function $$f(x)=3{x}_{1}^{2}+2{x}_{1}{x}_{2}+{x}_{2}^{2}-4{x}_{1}+5{x}_{2}$$.

Write an anonymous function that calculates the objective.

fun = @(x)3*x(1)^2 + 2*x(1)*x(2) + x(2)^2 - 4*x(1) + 5*x(2);
Call fminunc to find a minimum of fun near [1,1].
x0 = [1,1]; [x,fval] = fminunc(fun,x0);
After a few iterations, fminunc returns the solution, x, and the value of the function at x, fval.
x,fval

x =

    2.2500   -4.7500

fval =

  -16.3750
fminunc can be faster and more reliable when you provide derivatives.
Write an objective function that returns the gradient as well as the function value. Use the conditionalized form described in Including Gradients and Hessians. The objective function is Rosenbrock's function,
$$f(x)=100{\left({x}_{2}-{x}_{1}^{2}\right)}^{2}+{\left(1-{x}_{1}\right)}^{2},$$

which has gradient

$$\nabla f(x)=\left[\begin{array}{c}-400\left({x}_{2}-{x}_{1}^{2}\right){x}_{1}-2\left(1-{x}_{1}\right)\\ 200\left({x}_{2}-{x}_{1}^{2}\right)\end{array}\right].$$

function [f,g] = rosenbrockwithgrad(x)
% Calculate objective f
f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;

if nargout > 1 % gradient required
    g = [-400*(x(2) - x(1)^2)*x(1) - 2*(1 - x(1));
         200*(x(2) - x(1)^2)];
end
Save this code as a file on your MATLAB® path, named rosenbrockwithgrad.m.
Create options to use the objective function's gradient. Also, set the algorithm to 'trust-region'.

options = optimoptions('fminunc','Algorithm','trust-region','SpecifyObjectiveGradient',true);
Set the initial point to [-1,2]. Then call fminunc.

x0 = [-1,2];
fun = @rosenbrockwithgrad;
x = fminunc(fun,x0,options)
Local minimum found.

Optimization completed because the size of the gradient is less than
the default value of the function tolerance.

<stopping criteria details>

x =

    1.0000    1.0000
Solve the same problem as in Supply the Gradient using a problem structure instead of separate arguments.
Write an objective function that returns the gradient as well as the function value. Use the conditionalized form described in Including Gradients and Hessians. The objective function is Rosenbrock's function,
$$f(x)=100{\left({x}_{2}-{x}_{1}^{2}\right)}^{2}+{\left(1-{x}_{1}\right)}^{2},$$

which has gradient

$$\nabla f(x)=\left[\begin{array}{c}-400\left({x}_{2}-{x}_{1}^{2}\right){x}_{1}-2\left(1-{x}_{1}\right)\\ 200\left({x}_{2}-{x}_{1}^{2}\right)\end{array}\right].$$

function [f,g] = rosenbrockwithgrad(x)
% Calculate objective f
f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;

if nargout > 1 % gradient required
    g = [-400*(x(2) - x(1)^2)*x(1) - 2*(1 - x(1));
         200*(x(2) - x(1)^2)];
end
Save this code as a file on your MATLAB path, named rosenbrockwithgrad.m.
Create options to use the objective function's gradient. Also, set the algorithm to 'trust-region'.

options = optimoptions('fminunc','Algorithm','trust-region','SpecifyObjectiveGradient',true);
Create a problem structure including the initial point x0 = [-1,2].

problem.options = options;
problem.x0 = [-1,2];
problem.objective = @rosenbrockwithgrad;
problem.solver = 'fminunc';
Solve the problem.
x = fminunc(problem)
Local minimum found.

Optimization completed because the size of the gradient is less than
the default value of the function tolerance.

<stopping criteria details>

x =

    1.0000    1.0000
Find both the location of the minimum of a nonlinear function and the value of the function at that minimum.
The objective function is
$$f(x)=x(1){e}^{-{\Vert x\Vert}_{2}^{2}}+{\Vert x\Vert}_{2}^{2}/20.$$

fun = @(x)x(1)*exp(-(x(1)^2 + x(2)^2)) + (x(1)^2 + x(2)^2)/20;
Find the location and objective function value of the minimizer starting at x0 = [1,2].
x0 = [1,2]; [x,fval] = fminunc(fun,x0)
Local minimum found.

Optimization completed because the size of the gradient is less than
the default value of the function tolerance.

<stopping criteria details>

x =

   -0.6691    0.0000

fval =

   -0.4052
Choose fminunc options and outputs to examine the solution process.

Set options to obtain iterative display and use the 'quasi-newton' algorithm.

options = optimoptions(@fminunc,'Display','iter','Algorithm','quasi-newton');
The objective function is
fun = @(x)x(1)*exp(-(x(1)^2 + x(2)^2)) + (x(1)^2 + x(2)^2)/20;
Start the minimization at x0 = [1,2], and obtain outputs that enable you to examine the solution quality and process.
x0 = [1,2]; [x,fval,exitflag,output] = fminunc(fun,x0,options)
                                                        First-order
 Iteration  Func-count       f(x)        Step-size       optimality
     0           3          0.256738                         0.173
     1           6          0.222149            1            0.131
     2           9           0.15717            1            0.158
     3          18         -0.227902     0.438133            0.386
     4          21         -0.299271            1             0.46
     5          30         -0.404028     0.102071           0.0458
     6          33         -0.404868            1           0.0296
     7          36         -0.405236            1          0.00119
     8          39         -0.405237            1         0.000252
     9          42         -0.405237            1         7.97e-07

Local minimum found.

Optimization completed because the size of the gradient is less than
the default value of the optimality tolerance.
x = 1×2

   -0.6691    0.0000

fval = -0.4052
exitflag = 1
output = struct with fields:
iterations: 9
funcCount: 42
stepsize: 2.9343e-04
lssteplength: 1
firstorderopt: 7.9721e-07
algorithm: 'quasi-newton'
message: 'Local minimum found....'
The exit flag 1
shows that the solution is a local optimum.
The output
structure shows the number of iterations, number of function evaluations, and other information.
The iterative display also shows the number of iterations and function evaluations.
fun — Function to minimize
Function to minimize, specified as a function handle or function name. fun is a function that accepts a vector or array x and returns a real scalar f, the objective function evaluated at x.
Specify fun
as a function handle for a file:
x = fminunc(@myfun,x0)
where myfun
is a MATLAB function such
as
function f = myfun(x) f = ... % Compute function value at x
You can also specify fun
as a function handle
for an anonymous function:
x = fminunc(@(x)norm(x)^2,x0);
If you can compute the gradient of fun and the SpecifyObjectiveGradient option is set to true, as set by

options = optimoptions('fminunc','SpecifyObjectiveGradient',true)

then fun must return the gradient vector g(x) in the second output argument.
If you can also compute the Hessian matrix and the HessianFcn option is set to 'objective' via options = optimoptions('fminunc','HessianFcn','objective') and the Algorithm option is set to 'trust-region', fun must return the Hessian value H(x), a symmetric matrix, in a third output argument. fun can give a sparse Hessian. See Hessian for fminunc trust-region or fmincon trust-region-reflective algorithms for details.
The trust-region algorithm allows you to supply a Hessian multiply function. This function gives the result of a Hessian-times-vector product without computing the Hessian directly. This can save memory. See Hessian Multiply Function.
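A minimal sketch of such a function, assuming a hypothetical structured Hessian of the form H = B + A*A' with the factors passed in Hinfo (the field names B and A here are illustrative, not prescribed by fminunc):

```matlab
function W = hmfun(Hinfo,Y)
% Compute W = H*Y without ever forming the full Hessian H.
% Hinfo is the third output of the objective function; in this sketch
% it is assumed to carry the factors of H = B + A*A'.
W = Hinfo.B*Y + Hinfo.A*(Hinfo.A'*Y);  % exploit structure instead of building H
```

You would then pass the handle via optimoptions('fminunc','HessianMultiplyFcn',@hmfun), leaving HessianFcn empty.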
Example: fun = @(x)sin(x(1))*cos(x(2))
Data Types: char | function_handle | string
x0 — Initial point
Initial point, specified as a real vector or real array. Solvers use the number of elements in, and size of, x0 to determine the number and size of variables that fun accepts.
Example: x0 = [1,2,3,4]
Data Types: double
options — Optimization options
Optimization options, specified as the output of optimoptions or a structure such as optimset returns.
Some options apply to all algorithms, and others are relevant for particular algorithms. See Optimization Options Reference for detailed information.
Some options are absent from the optimoptions
display.
These options are listed in italics. For details, see View Options.
All Algorithms

Algorithm — Choose the fminunc optimization algorithm. The choices are 'quasi-newton' (default) or 'trust-region'. The 'trust-region' algorithm requires you to provide the gradient (see the description of fun), or else fminunc uses the 'quasi-newton' algorithm.

CheckGradients — Compare user-supplied derivatives (gradient of the objective) to finite-differencing derivatives. The choices are false (default) or true.

Diagnostics — Display diagnostic information about the function to be minimized or solved. The choices are 'off' (default) or 'on'.

DiffMaxChange — Maximum change in variables for finite-difference gradients (a positive scalar). The default is Inf.

DiffMinChange — Minimum change in variables for finite-difference gradients (a positive scalar). The default is 0.

Display — Level of display (see Iterative Display). 'off' or 'none' displays no output; 'iter' displays output at each iteration; 'notify' displays output only if the function does not converge; 'final' (default) displays just the final output.

FiniteDifferenceStepSize — Scalar or vector step size factor for finite differences. When you set FiniteDifferenceStepSize to a vector v, the forward finite differences delta are
delta = v.*sign′(x).*max(abs(x),TypicalX);
where sign′(x) = sign(x) except sign′(0) = 1. Central finite differences are
delta = v.*max(abs(x),TypicalX);
A scalar FiniteDifferenceStepSize expands to a vector. The default is sqrt(eps) for forward finite differences, and eps^(1/3) for central finite differences.

FiniteDifferenceType — Finite differences, used to estimate gradients, are either 'forward' (default) or 'central' (centered). 'central' takes twice as many function evaluations but should be more accurate.

FunValCheck — Check whether objective function values are valid. The default setting, false, does not perform a check. The true setting displays an error when the objective function returns a value that is complex, Inf, or NaN.

MaxFunctionEvaluations — Maximum number of function evaluations allowed, a positive integer. The default value is 100*numberOfVariables.

MaxIterations — Maximum number of iterations allowed, a positive integer. The default value is 400.

OptimalityTolerance — Termination tolerance on the first-order optimality, a positive scalar. The default is 1e-6.

OutputFcn — Specify one or more user-defined functions that an optimization function calls at each iteration. Pass a function handle or a cell array of function handles. The default is none ([]).

PlotFcn — Plots various measures of progress while the algorithm executes; select from predefined plots or write your own. Pass a built-in plot function name, a function handle, or a cell array of built-in plot function names or function handles. For custom plot functions, pass function handles. The default is none ([]). For information on writing a custom plot function, see Plot Functions.

SpecifyObjectiveGradient — Gradient for the objective function defined by the user. See the description of fun to see how to define the gradient. Set to true to have fminunc use a user-defined gradient of the objective function. The default, false, causes fminunc to estimate gradients using finite differences.

StepTolerance — Termination tolerance on x, a positive scalar. The default value is 1e-6.

TypicalX — Typical x values. The number of elements in TypicalX is equal to the number of elements in x0, the starting point. The default value is ones(numberOfVariables,1). fminunc uses TypicalX for scaling finite differences for gradient estimation.

trust-region Algorithm

FunctionTolerance — Termination tolerance on the function value, a positive scalar. The default is 1e-6.

HessianFcn — If set to [] (the default), fminunc approximates the Hessian using finite differences. If set to 'objective', fminunc uses a user-defined Hessian for the objective function, defined as the third output of the objective function.

HessianMultiplyFcn — Hessian multiply function, specified as a function handle. For large-scale structured problems, this function computes the Hessian matrix product H*Y without actually forming H. The function is of the form
W = hmfun(Hinfo,Y)
where Hinfo contains the matrix used to compute H*Y. The first argument is the same as the third argument returned by the objective function, for example
[f,g,Hinfo] = fun(x)
Note: To use the HessianMultiplyFcn option, HessianFcn must be set to [].
For an example, see Minimization with Dense Structured Hessian, Linear Equalities.

HessPattern — Sparsity pattern of the Hessian for finite differencing. Set HessPattern(i,j) = 1 when you can have a nonzero second partial derivative of fun with respect to x(i) and x(j); otherwise, set HessPattern(i,j) = 0. Use HessPattern when it is inconvenient to compute the Hessian in fun, but you can determine (say, by inspection) when the ith component of the gradient of fun depends on x(j). When the structure is unknown, do not set HessPattern; the default behavior is as if HessPattern is a dense matrix of ones.

MaxPCGIter — Maximum number of preconditioned conjugate gradient (PCG) iterations, a positive scalar. The default is max(1,floor(numberOfVariables/2)).

PrecondBandWidth — Upper bandwidth of the preconditioner for PCG, a nonnegative integer. By default, fminunc uses diagonal preconditioning (upper bandwidth of 0). For some problems, increasing the bandwidth reduces the number of PCG iterations. Setting PrecondBandWidth to Inf uses a direct factorization (Cholesky) rather than the conjugate gradients (CG).

SubproblemAlgorithm — Determines how the iteration step is calculated. The default, 'cg', takes a faster but less accurate step than 'factorization'.

TolPCG — Termination tolerance on the PCG iteration, a positive scalar. The default is 0.1.

quasi-newton Algorithm

HessUpdate — Method for choosing the search direction in the quasi-Newton algorithm. The choices are 'bfgs' (default), 'dfp', and 'steepdesc'.

ObjectiveLimit — A tolerance (stopping criterion) that is a scalar. If the objective function value at an iteration is less than or equal to ObjectiveLimit, the iterations halt because the problem is presumably unbounded. The default value is -1e20.

UseParallel — When true, fminunc estimates gradients in parallel. Disable by setting to the default, false.
Example: options = optimoptions('fminunc','SpecifyObjectiveGradient',true)
problem — Problem structure
Problem structure, specified as a structure with the following fields:

Field Name — Entry
objective — Objective function
x0 — Initial point for x
solver — 'fminunc'
options — Options created with optimoptions
The simplest way to obtain a problem structure is to export the problem from the Optimization app.
Data Types: struct
x — Solution
Solution, returned as a real vector or real array. The size of x is the same as the size of x0.
Typically, x
is a local solution to the problem
when exitflag
is positive. For information on
the quality of the solution, see When the Solver Succeeds.
fval — Objective function value at solution
Objective function value at the solution, returned as a real number. Generally, fval = fun(x).
exitflag — Reason fminunc stopped
Reason fminunc stopped, returned as an integer.

1 — Magnitude of gradient is smaller than the OptimalityTolerance tolerance.
2 — Change in x was smaller than the StepTolerance tolerance.
3 — Change in the objective function value was less than the FunctionTolerance tolerance.
5 — Predicted decrease in the objective function was less than the FunctionTolerance tolerance.
0 — Number of iterations exceeded MaxIterations or number of function evaluations exceeded MaxFunctionEvaluations.
-1 — Algorithm was terminated by the output function.
-3 — Objective function at current iteration went below ObjectiveLimit.
output — Information about the optimization process
Information about the optimization process, returned as a structure with fields:

iterations — Number of iterations taken
funcCount — Number of function evaluations
firstorderopt — Measure of first-order optimality
algorithm — Optimization algorithm used
cgiterations — Total number of PCG iterations ('trust-region' algorithm only)
lssteplength — Size of line search step relative to search direction ('quasi-newton' algorithm only)
stepsize — Final displacement in x
message — Exit message
grad — Gradient at the solution
Gradient at the solution, returned as a real vector. grad gives the gradient of fun at the point x(:).
hessian — Approximate Hessian
Approximate Hessian, returned as a real matrix. For the meaning of hessian, see Hessian.
The 'quasi-newton' algorithm uses the BFGS quasi-Newton method with a cubic line search procedure. This quasi-Newton method uses the BFGS ([1],[5],[8], and [9]) formula for updating the approximation of the Hessian matrix. You can select the DFP ([4],[6], and [7]) formula, which approximates the inverse Hessian matrix, by setting the HessUpdate option to 'dfp' (and the Algorithm option to 'quasi-newton'). You can select a steepest descent method by setting HessUpdate to 'steepdesc' (and Algorithm to 'quasi-newton'), although this setting is usually inefficient. See fminunc quasi-newton Algorithm.
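For instance, to try the DFP update on a simple convex quadratic (a toy objective chosen for illustration, not taken from this page):

```matlab
options = optimoptions('fminunc','Algorithm','quasi-newton','HessUpdate','dfp');
fun = @(x)x(1)^2 + 2*x(2)^2;     % toy convex quadratic with minimum at the origin
x = fminunc(fun,[3,3],options);
```

The same call with 'steepdesc' typically needs many more iterations, which is why the steepest descent setting is usually inefficient.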
The 'trust-region' algorithm requires that you supply the gradient in fun and set SpecifyObjectiveGradient to true using optimoptions. This algorithm is a subspace trust-region method and is based on the interior-reflective Newton method described in [2] and [3]. Each iteration involves the approximate solution of a large linear system using the method of preconditioned conjugate gradients (PCG). See fminunc trust-region Algorithm, Trust-Region Methods for Nonlinear Minimization, and Preconditioned Conjugate Gradient Method.
[1] Broyden, C. G. “The Convergence of a Class of Double-Rank Minimization Algorithms.” Journal Inst. Math. Applic., Vol. 6, 1970, pp. 76–90.
[2] Coleman, T. F. and Y. Li. “An Interior, Trust Region Approach for Nonlinear Minimization Subject to Bounds.” SIAM Journal on Optimization, Vol. 6, 1996, pp. 418–445.
[3] Coleman, T. F. and Y. Li. “On the Convergence of Reflective Newton Methods for Large-Scale Nonlinear Minimization Subject to Bounds.” Mathematical Programming, Vol. 67, Number 2, 1994, pp. 189–224.
[4] Davidon, W. C. “Variable Metric Method for Minimization.” A.E.C. Research and Development Report, ANL-5990, 1959.
[5] Fletcher, R. “A New Approach to Variable Metric Algorithms.” Computer Journal, Vol. 13, 1970, pp. 317–322.
[6] Fletcher, R. “Practical Methods of Optimization.” Vol. 1, Unconstrained Optimization, John Wiley and Sons, 1980.
[7] Fletcher, R. and M. J. D. Powell. “A Rapidly Convergent Descent Method for Minimization.” Computer Journal, Vol. 6, 1963, pp. 163–168.
[8] Goldfarb, D. “A Family of Variable Metric Updates Derived by Variational Means.” Mathematics of Computing, Vol. 24, 1970, pp. 23–26.
[9] Shanno, D. F. “Conditioning of Quasi-Newton Methods for Function Minimization.” Mathematics of Computing, Vol. 24, 1970, pp. 647–656.