Find minimum of unconstrained multivariable function
Finds the minimum of a problem specified by
$$\underset{x}{\mathrm{min}}f(x)$$
where f(x) is a function that returns a scalar.
x is a vector or a matrix; see Matrix Arguments.
x = fminunc(fun,x0)
x = fminunc(fun,x0,options)
x = fminunc(problem)
[x,fval] = fminunc(...)
[x,fval,exitflag] = fminunc(...)
[x,fval,exitflag,output] = fminunc(...)
[x,fval,exitflag,output,grad] = fminunc(...)
[x,fval,exitflag,output,grad,hessian] = fminunc(...)
fminunc attempts to find a minimum of a scalar function of several variables, starting at an initial estimate. This is generally referred to as unconstrained nonlinear optimization.
Note: Passing Extra Parameters explains how to pass extra parameters to the objective function, if necessary. 
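As a minimal sketch of the standard technique, a fixed parameter can be captured in an anonymous function (the objective and the parameter a here are illustrative, not from this reference page):

```matlab
% Objective with an extra parameter a, captured from the workspace
a = 3;                                       % illustrative parameter
fun = @(x) a*x(1)^2 + 2*x(1)*x(2) + x(2)^2;  % a is fixed when fun is created
x0 = [1,1];
x = fminunc(fun,x0);
```

Changing a after creating fun has no effect; the anonymous function stores the value it captured.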
x = fminunc(fun,x0) starts at the point x0 and attempts to find a local minimum x of the function described in fun. x0 can be a scalar, vector, or matrix.

x = fminunc(fun,x0,options) minimizes with the optimization options specified in options. Use optimoptions to set these options.

x = fminunc(problem) finds the minimum for problem, where problem is a structure described in Input Arguments. Create the problem structure by exporting a problem from the Optimization app, as described in Exporting Your Work.
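As an alternative to exporting from the Optimization app, you can build the problem structure by hand at the command line (a sketch; the objective shown is illustrative, and the field names follow the Input Arguments description):

```matlab
% Build a problem structure directly (fields: objective, x0, solver, options)
problem.objective = @(x) x(1)^2 + x(2)^2;    % illustrative objective function
problem.x0       = [1,1];                    % initial point
problem.solver   = 'fminunc';                % solver name
problem.options  = optimoptions('fminunc');  % default options
x = fminunc(problem);
```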
[x,fval] = fminunc(...) returns in fval the value of the objective function fun at the solution x.

[x,fval,exitflag] = fminunc(...) returns a value exitflag that describes the exit condition.

[x,fval,exitflag,output] = fminunc(...) returns a structure output that contains information about the optimization.

[x,fval,exitflag,output,grad] = fminunc(...) returns in grad the value of the gradient of fun at the solution x.

[x,fval,exitflag,output,grad,hessian] = fminunc(...) returns in hessian the value of the Hessian of the objective function fun at the solution x. See Hessian.
Function Arguments contains general descriptions of arguments passed into fminunc. This section provides function-specific details for fun, options, and problem:
fun - The function to be minimized. fun is a function that accepts a vector x and returns a scalar f, the objective function evaluated at x. You can specify fun as a function handle for a file:

x = fminunc(@myfun,x0)

where myfun is a MATLAB function such as

function f = myfun(x)
f = ... % Compute function value at x

You can also specify fun as a function handle for an anonymous function:

x = fminunc(@(x)norm(x)^2,x0);

If the gradient of fun can also be computed and the GradObj option is 'on', as set by

options = optimoptions(@fminunc,'GradObj','on')

then fun must return the gradient vector g(x) in a second output argument. If the Hessian matrix can also be computed and the Hessian option is 'on', fun must also return the Hessian value H(x), a symmetric matrix, in a third output argument. Writing Scalar Objective Functions explains how to "conditionalize" the gradients and Hessians for use in solvers that do not accept them. Passing Extra Parameters explains how to parameterize fun, if necessary.

options - Options provides the function-specific details for the options values.

problem - A structure with these fields:

objective - Objective function
x0 - Initial point for x
solver - 'fminunc'
options - Options created with optimoptions
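As a sketch, a fun that returns the objective, gradient, and Hessian for a simple quadratic (for use when GradObj and Hessian are 'on') might look like this; the function name quadobj is illustrative:

```matlab
function [f,g,H] = quadobj(x)
% Quadratic objective with analytic gradient and Hessian
f = 3*x(1)^2 + 2*x(1)*x(2) + x(2)^2;
if nargout > 1
    g = [6*x(1)+2*x(2); 2*x(1)+2*x(2)];  % gradient vector
end
if nargout > 2
    H = [6 2; 2 2];                      % Hessian (constant for a quadratic)
end
```

Guarding the extra outputs with nargout lets the same file serve solvers that request only the function value.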
Function Arguments contains general descriptions of arguments returned by fminunc. This section provides function-specific details for exitflag and output:
exitflag - Integer identifying the reason the algorithm terminated. The following lists the values of exitflag and the corresponding exit conditions:

1 - Magnitude of gradient smaller than the TolFun tolerance.
2 - Change in x was smaller than the TolX tolerance.
3 - Change in the objective function value was less than the TolFun tolerance.
5 - Predicted decrease in the objective function was less than the TolFun tolerance.
0 - Number of iterations exceeded MaxIter or number of function evaluations exceeded MaxFunEvals.
-1 - Algorithm was terminated by the output function.
-3 - Objective function at current iteration went below ObjectiveLimit.

grad - Gradient at the solution x.
hessian - Hessian at the solution x.
output - Structure containing information about the optimization. The fields of the structure are:

iterations - Number of iterations taken
funcCount - Number of function evaluations
firstorderopt - Measure of first-order optimality
algorithm - Optimization algorithm used
cgiterations - Total number of PCG iterations ('trust-region' algorithm only)
stepsize - Final displacement in x
message - Exit message
fminunc computes the output argument hessian as follows:

When using the 'quasi-newton' algorithm, the function computes a finite-difference approximation to the Hessian at x using
- the gradient grad, if you supply it
- the objective function fun, if you do not supply the gradient

When using the 'trust-region' algorithm, the function uses
- options.Hessian, if you supply it, to compute the Hessian at x
- a finite-difference approximation to the Hessian at x, if you supply only the gradient
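Conceptually, a finite-difference Hessian approximation from a supplied gradient works like the following sketch (this illustrates the idea only, not fminunc's actual implementation; the example gradient is for the quadratic 3*x(1)^2 + 2*x(1)*x(2) + x(2)^2):

```matlab
% Approximate the Hessian of f at x by forward differences of its gradient
gradf = @(x) [6*x(1)+2*x(2); 2*x(1)+2*x(2)];  % illustrative gradient
x = [1;1];
h = sqrt(eps);                % forward-difference step size
n = numel(x);
H = zeros(n);
for j = 1:n
    e = zeros(n,1); e(j) = h;
    H(:,j) = (gradf(x+e) - gradf(x)) / h;  % j-th column of the Hessian
end
H = (H + H')/2;               % symmetrize the approximation
```

Because the example gradient is linear, the differences here are exact up to rounding, giving H close to [6 2; 2 2].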
fminunc uses these optimization options. Some options apply to all algorithms, some are relevant only when you are using the trust-region algorithm, and others are relevant only when you are using the quasi-newton algorithm. Use optimoptions to set or change options. See Optimization Options Reference for detailed information.

All fminunc algorithms use the following options:
Algorithm - Choose the fminunc algorithm. The choices are 'quasi-newton' (the default) or 'trust-region'. The 'trust-region' algorithm requires you to supply the gradient (see the description of fun).

DerivativeCheck - Compare user-supplied derivatives (gradient of objective) to finite-differencing derivatives. The choices are 'off' (default) or 'on'.

Diagnostics - Display diagnostic information about the function to be minimized or solved. The choices are 'off' (default) or 'on'.

DiffMaxChange - Maximum change in variables for finite-difference gradients (a positive scalar). The default is Inf.

DiffMinChange - Minimum change in variables for finite-difference gradients (a positive scalar). The default is 0.

Display - Level of display: 'off' or 'none' displays no output; 'iter' displays output at each iteration; 'notify' displays output only if the function does not converge; 'final' (default) displays just the final output.

FinDiffRelStep - Scalar or vector step size factor for finite differences. When you set FinDiffRelStep to a vector v, forward finite differences take steps delta = v.*sign(x).*max(abs(x),TypicalX), and central finite differences take steps delta = v.*max(abs(x),TypicalX). A scalar FinDiffRelStep expands to a vector. The trust-region algorithm uses FinDiffRelStep only when DerivativeCheck is 'on'.

FinDiffType - Finite differences, used to estimate gradients, are either 'forward' (the default) or 'central' (centered). 'central' takes twice as many function evaluations, but should be more accurate.

FunValCheck - Check whether objective function values are valid. 'on' displays an error when the objective function returns a value that is complex, Inf, or NaN. The default, 'off', displays no error.

GradObj - Gradient for the objective function defined by the user. See the preceding description of fun to see how to define the gradient in fun.

MaxFunEvals - Maximum number of function evaluations allowed, a positive integer. The default value is 100*numberOfVariables.

MaxIter - Maximum number of iterations allowed, a positive integer. The default value is 400.

OutputFcn - Specify one or more user-defined functions that an optimization function calls at each iteration, either as a function handle or as a cell array of function handles. The default is none ([]).

PlotFcns - Plots various measures of progress while the algorithm executes. Select from predefined plots or write your own. Pass a function handle or a cell array of function handles. The default is none ([]). For information on writing a custom plot function, see Plot Functions.

TolFun - Termination tolerance on the function value, a positive scalar. The default is 1e-6.

TolX - Termination tolerance on x, a positive scalar. The default is 1e-6.

TypicalX - Typical x values, a vector. The default is ones(numberOfVariables,1). The trust-region algorithm uses TypicalX to scale finite differences for gradient estimation.
trust-region Algorithm Only

The trust-region algorithm uses the following options:

Hessian - If 'on', fminunc uses a user-defined Hessian (defined in fun). If 'off' (the default), fminunc approximates the Hessian using finite differences.

HessMult - Function handle for a Hessian multiply function. For large-scale structured problems, this function computes the Hessian matrix product H*Y without actually forming H. The function is of the form

W = hmfun(Hinfo,Y)

where Hinfo contains the matrix used to compute H*Y. The first argument must be the same as the third argument returned by the objective function, for example

[f,g,Hinfo] = fun(x)

The default is [] (none). See Minimization with Dense Structured Hessian, Linear Equalities for an example.

HessPattern - Sparsity pattern of the Hessian for finite differencing. Set HessPattern(i,j) = 1 when you can have a nonzero second partial derivative of fun with respect to x(i) and x(j); otherwise, set HessPattern(i,j) = 0. Use HessPattern when it is inconvenient to compute the Hessian in fun but its sparsity structure is known. In the worst case, when the structure is unknown, do not set HessPattern; fminunc then treats the Hessian as dense.

MaxPCGIter - Maximum number of PCG (preconditioned conjugate gradient) iterations, a positive scalar. The default is max(1,floor(numberOfVariables/2)).

PrecondBandWidth - Upper bandwidth of the preconditioner for PCG, a nonnegative integer. By default, fminunc uses diagonal preconditioning (upper bandwidth of 0). For some problems, increasing the bandwidth reduces the number of PCG iterations. Setting PrecondBandWidth to Inf uses a direct factorization (Cholesky) rather than conjugate gradients.

TolPCG - Termination tolerance on the PCG iteration, a positive scalar. The default is 0.1.
quasi-newton Algorithm Only

The quasi-newton algorithm uses the following options:

HessUpdate - Method for choosing the search direction in the Quasi-Newton algorithm. The choices are 'bfgs' (the default), 'dfp', and 'steepdesc'.

InitialHessMatrix - This option will be removed in a future release. Initial quasi-Newton matrix. This option is available only if you set InitialHessType to 'user-supplied'.

InitialHessType - This option will be removed in a future release. Initial quasi-Newton matrix type. The options are 'identity', 'scaled-identity' (the default), and 'user-supplied'.

ObjectiveLimit - A tolerance (stopping criterion) that is a scalar. If the objective function value at an iteration is less than or equal to ObjectiveLimit, the iterations halt because the problem is presumably unbounded. The default value is -1e20.
Minimize the function $$f(x)=3{x}_{1}^{2}+2{x}_{1}{x}_{2}+{x}_{2}^{2}$$.

Create a file myfun.m:

function f = myfun(x)
f = 3*x(1)^2 + 2*x(1)*x(2) + x(2)^2; % Cost function

Then call fminunc to find a minimum of myfun near [1,1]:

x0 = [1,1];
[x,fval] = fminunc(@myfun,x0);

After a few iterations, fminunc returns the solution, x, and the value of the function at x, fval:

x,fval

x =
   1.0e-006 *
    0.2541    0.2029

fval =
   1.3173e-013
To minimize this function with the gradient provided, modify myfun.m so the gradient is the second output argument:

function [f,g] = myfun(x)
f = 3*x(1)^2 + 2*x(1)*x(2) + x(2)^2; % Cost function
if nargout > 1
    g(1) = 6*x(1)+2*x(2);
    g(2) = 2*x(1)+2*x(2);
end

Indicate that the gradient value is available by creating optimization options with the GradObj option set to 'on' using optimoptions. Choose the 'trust-region' algorithm, which requires a gradient.

options = optimoptions('fminunc','GradObj','on','Algorithm','trust-region');
x0 = [1,1];
[x,fval] = fminunc(@myfun,x0,options);

After several iterations, fminunc returns the solution, x, and the value of the function at x, fval:

x,fval

x =
   1.0e-015 *
    0.1110    0.8882

fval =
   6.2862e-031
To minimize the function f(x) = sin(x) + 3 using an anonymous function:

f = @(x)sin(x)+3;
x = fminunc(f,4);

fminunc returns a solution:

x =
    4.7124
fminunc is not the preferred choice for solving problems that are sums of squares, that is, of the form

$$\underset{x}{\mathrm{min}}{\Vert f(x)\Vert}_{2}^{2}=\underset{x}{\mathrm{min}}\left({f}_{1}{(x)}^{2}+{f}_{2}{(x)}^{2}+\mathrm{...}+{f}_{n}{(x)}^{2}\right)$$

Instead use the lsqnonlin function, which has been optimized for problems of this form.
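For example, to minimize such a sum of squares, pass the vector of residuals to lsqnonlin rather than the scalar sum to fminunc (a sketch; the residuals shown are illustrative):

```matlab
% Residual vector f(x) = [f1; f2] for a least-squares problem
res = @(x) [x(1)-1; 10*(x(2)-x(1)^2)];  % illustrative residuals
x0 = [-1,1];
x = lsqnonlin(res,x0);                  % minimizes f1^2 + f2^2
```

lsqnonlin exploits the structure of the residual vector, which fminunc cannot do when given only the summed scalar value.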
To use the trust-region method, you must provide the gradient in fun (and set the GradObj option to 'on' using optimoptions). A warning is given if no gradient is provided and the Algorithm option is 'trust-region'.
The function to be minimized must be continuous. fminunc might only give local solutions.

fminunc minimizes only over the real numbers, that is, x must consist only of real numbers and f(x) must return only real numbers. When x has complex variables, they must be split into real and imaginary parts.
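For instance, to minimize a function of one complex variable z, optimize over its real and imaginary parts (a sketch with an illustrative objective):

```matlab
% Minimize |z - (1+2i)|^2 over complex z by writing z = x(1) + 1i*x(2)
fun = @(x) abs((x(1) + 1i*x(2)) - (1+2i))^2;  % real-valued objective of real x
x = fminunc(fun,[0,0]);
z = x(1) + 1i*x(2);                           % recover the complex solution
```

The objective passed to fminunc takes a real vector and returns a real scalar, as required; the complex value is reconstructed only after the solve.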
To use the trust-region algorithm, you must supply the gradient in fun (and GradObj must be set to 'on' in options).
Trust-Region Algorithm Coverage and Requirements

Additional Information Needed: Must provide the gradient for fun.
For Large Problems: Provide the sparsity structure of the Hessian in HessPattern, or compute the Hessian in fun.
[1] Broyden, C.G., "The Convergence of a Class of Double-Rank Minimization Algorithms," Journal Inst. Math. Applic., Vol. 6, pp. 76-90, 1970.
[2] Coleman, T.F. and Y. Li, "An Interior, Trust Region Approach for Nonlinear Minimization Subject to Bounds," SIAM Journal on Optimization, Vol. 6, pp. 418-445, 1996.
[3] Coleman, T.F. and Y. Li, "On the Convergence of Reflective Newton Methods for Large-Scale Nonlinear Minimization Subject to Bounds," Mathematical Programming, Vol. 67, Number 2, pp. 189-224, 1994.
[4] Davidon, W.C., "Variable Metric Method for Minimization," A.E.C. Research and Development Report, ANL-5990, 1959.
[5] Fletcher, R., "A New Approach to Variable Metric Algorithms," Computer Journal, Vol. 13, pp. 317-322, 1970.
[6] Fletcher, R., "Practical Methods of Optimization," Vol. 1, Unconstrained Optimization, John Wiley and Sons, 1980.
[7] Fletcher, R. and M.J.D. Powell, "A Rapidly Convergent Descent Method for Minimization," Computer Journal, Vol. 6, pp. 163-168, 1963.
[8] Goldfarb, D., "A Family of Variable Metric Updates Derived by Variational Means," Mathematics of Computing, Vol. 24, pp. 23-26, 1970.
[9] Shanno, D.F., "Conditioning of Quasi-Newton Methods for Function Minimization," Mathematics of Computing, Vol. 24, pp. 647-656, 1970.