Find minimum of unconstrained multivariable function using derivative-free method
x = fminsearch(fun,x0)
x = fminsearch(fun,x0,options)
x = fminsearch(problem)
[x,fval] = fminsearch(...)
[x,fval,exitflag] = fminsearch(...)
[x,fval,exitflag,output] = fminsearch(...)
fminsearch finds the minimum of a scalar
function of several variables, starting at an initial estimate. This
is generally referred to as unconstrained nonlinear optimization.
x = fminsearch(fun,x0) starts
at the point x0 and returns a value x that
is a local minimizer of the function described in fun. x0 can
be a scalar, vector, or matrix. fun is a function
handle; see Function Handles.
x = fminsearch(fun,x0,options) minimizes
with the optimization parameters specified in the structure options.
You can define these parameters using the optimset function.
fminsearch uses these options structure fields:
Display - Level of display. 'off' displays no output; 'iter' displays
output at each iteration; 'final' displays just the final output;
'notify' (the default) displays output only if the function does not converge.

FunValCheck - Check whether objective function values are valid. 'on'
displays an error when the objective function returns a value that is
complex or NaN. 'off' (the default) displays no error.

MaxFunEvals - Maximum number of function evaluations allowed, a positive
integer. The default is 200*numberOfVariables.

MaxIter - Maximum number of iterations allowed, a positive integer.
The default value is 200*numberOfVariables.

OutputFcn - Specify one or more user-defined functions that an optimization
function calls at each iteration, either as a function handle or as
a cell array of function handles. The default is none ([]).

PlotFcns - Plots various measures of progress while the algorithm
executes; select from predefined plots or write your own. Pass a function
handle or a cell array of function handles. The default is none ([]).
See Plot Functions in MATLAB Mathematics for more information.

TolFun - Termination tolerance on the function value, a positive
scalar. The default is 1e-4.

TolX - Termination tolerance on x, a positive scalar. The default is 1e-4.
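For comparison only, SciPy's Nelder-Mead implementation exposes roughly analogous controls. The mapping sketched in the comments below is an assumed analogy: the option names (maxiter, maxfev, xatol, fatol, disp) are SciPy's, not MATLAB's.

```python
# Sketch: rough SciPy analogues of the fminsearch options above.
from scipy.optimize import minimize

def sphere(x):
    # Simple test objective: sum of squares, minimum at the origin.
    return x[0]**2 + x[1]**2

res = minimize(
    sphere, [1.0, 1.0],
    method='Nelder-Mead',
    options={
        'maxiter': 400,   # ~ MaxIter
        'maxfev': 400,    # ~ MaxFunEvals
        'xatol': 1e-4,    # ~ TolX
        'fatol': 1e-4,    # ~ TolFun
        'disp': False,    # ~ Display
    },
)
print(res.x, res.fun)
```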
x = fminsearch(problem) finds the minimum for problem,
a structure with the following fields:

objective - Objective function
x0 - Initial point for x
solver - 'fminsearch'
options - Options structure created using optimset
[x,fval] = fminsearch(...) returns
in fval the value of the objective function fun at the solution x.
[x,fval,exitflag] = fminsearch(...) returns a value
exitflag that describes the exit condition of fminsearch:

1 - fminsearch converged to a solution x.
0 - Maximum number of function evaluations or iterations was reached.
-1 - Algorithm was terminated by the output function.
[x,fval,exitflag,output] = fminsearch(...) returns a structure
output that contains information about
the optimization in the following fields:

algorithm - 'Nelder-Mead simplex direct search'
funcCount - Number of function evaluations
iterations - Number of iterations
message - Exit message
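SciPy's Nelder-Mead returns similar diagnostics in an OptimizeResult; the field correspondence noted in the comments is an assumed analogy, not part of MATLAB's interface.

```python
from scipy.optimize import minimize

# One-dimensional test problem with its minimum at x = 3.
res = minimize(lambda x: (x[0] - 3.0)**2, [0.0], method='Nelder-Mead')

# Assumed rough correspondence to fminsearch's outputs:
#   res.x      ~ x          res.fun     ~ fval
#   res.status ~ exitflag   res.nit     ~ output.iterations
#   res.nfev   ~ funcCount  res.message ~ exit message
print(res.success, res.nit, res.nfev, res.message)
```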
fun is the function to be minimized. It
accepts an input x and returns a scalar f,
the objective function evaluated at x. The function fun can
be specified as a function handle for a function file:

x = fminsearch(@myfun, x0)

where myfun is a function file such as

function f = myfun(x)
f = ...            % Compute function value at x

or as a function handle for an anonymous function, such as

x = fminsearch(@(x)sin(x^2), x0);
Other arguments are described in the syntax descriptions above.
The Rosenbrock banana function is a classic test example for multidimensional minimization:

f(x) = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2

The minimum is at (1,1), where the function value is 0. The traditional
starting point is (-1.2,1). The
anonymous function shown here defines the function and returns a function
handle:
banana = @(x)100*(x(2)-x(1)^2)^2+(1-x(1))^2;
Pass the function handle to fminsearch:
[x,fval] = fminsearch(banana,[-1.2, 1])
x =
    1.0000    1.0000

fval =
  8.1777e-010
This indicates that the minimizer was found to at least four decimal places with a value near zero.
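As a cross-check, the same banana minimization can be sketched with SciPy's Nelder-Mead simplex method, which implements the same family of algorithm; this is an illustration in SciPy's interface, not MATLAB's.

```python
from scipy.optimize import minimize

# Rosenbrock banana function, as above, in Python indexing.
banana = lambda x: 100.0*(x[1] - x[0]**2)**2 + (1.0 - x[0])**2

res = minimize(banana, [-1.2, 1.0], method='Nelder-Mead')
print(res.x)    # close to [1, 1]
print(res.fun)  # near zero
```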
When fun is parameterized, you can use anonymous
functions to capture the problem-dependent parameters. For example,
suppose you want to minimize the objective function x(1)^2 + a*x(2)^2,
parameterized by a, given by the following function file:
function f = myfun(x,a)
f = x(1)^2 + a*x(2)^2;
Note that myfun has an extra parameter a,
so you cannot pass it directly to fminsearch. To
optimize for a specific value of a, such as
a = 1.5, first assign the value to a:
a = 1.5; % define parameter first
Then create a one-argument anonymous function that captures that value of
a and calls myfun with two arguments:
x = fminsearch(@(x) myfun(x,a),[0,1])
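The same capture idiom can be sketched in Python with a closure over the parameter; the SciPy call is an illustrative analogue of the MATLAB pattern above.

```python
from scipy.optimize import minimize

def myfun(x, a):
    # Parameterized objective: x1^2 + a*x2^2, minimum at the origin.
    return x[0]**2 + a*x[1]**2

a = 1.5
# A one-argument lambda captures the current value of a,
# just like the MATLAB anonymous function above.
res = minimize(lambda x: myfun(x, a), [0.0, 1.0], method='Nelder-Mead')
print(res.x)  # near [0, 0]
```

(SciPy also offers an args=(a,) parameter that serves the same purpose.)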
You can modify the first example by adding a parameter a to the second term of the banana function:
This changes the location of the minimum to the point [a,a^2].
To minimize this function for a specific value of a,
for example a = sqrt(2),
create a one-argument anonymous function that captures the value of a:
a = sqrt(2);
banana = @(x)100*(x(2)-x(1)^2)^2+(a-x(1))^2;
Then the statement
[x,fval] = fminsearch(banana, [-1.2, 1], ...
    optimset('TolX',1e-8));
seeks the minimum [sqrt(2), 2] to an accuracy
higher than the default on x.
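A sketch of the same tightened-tolerance run in SciPy, where xatol and fatol play roughly the role of TolX and TolFun (an assumed mapping; iteration limits are raised here to accommodate the tighter tolerances).

```python
import math
from scipy.optimize import minimize

a = math.sqrt(2.0)
banana = lambda x: 100.0*(x[1] - x[0]**2)**2 + (a - x[0])**2

# Tighten the termination tolerances beyond their defaults.
res = minimize(banana, [-1.2, 1.0], method='Nelder-Mead',
               options={'xatol': 1e-8, 'fatol': 1e-8,
                        'maxiter': 1000, 'maxfev': 2000})
print(res.x)  # near [sqrt(2), 2]
```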
fminsearch only minimizes over the real numbers,
that is, x must only consist of real numbers and f(x) must
only return real numbers. When x has complex variables,
they must be split into real and imaginary parts.
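A minimal sketch of the split-into-real-and-imaginary-parts idiom, shown with SciPy's Nelder-Mead for concreteness; the objective and target value are illustrative inventions.

```python
from scipy.optimize import minimize

# Simplex-style solvers work over real vectors only, so a complex
# unknown z is optimized as the real pair [Re(z), Im(z)].
target = 1.0 + 2.0j

def objective(v):
    z = v[0] + 1j*v[1]           # rebuild the complex variable
    return abs(z - target)**2    # real-valued objective

res = minimize(objective, [0.0, 0.0], method='Nelder-Mead')
z = res.x[0] + 1j*res.x[1]
print(z)  # near 1+2j
```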
fminsearch uses the simplex search method of Lagarias et al. (1998). This is a direct search method that does not use numerical or analytic gradients.
If n is the length of x,
a simplex in
n-dimensional space is characterized by
n+1 distinct vectors that are its vertices.
In two-space, a simplex is a triangle; in three-space, it is a pyramid.
At each step of the search, a new point in or near the current simplex
is generated. The function value at the new point is compared with
the function's values at the vertices of the simplex and, usually,
one of the vertices is replaced by the new point, giving a new simplex.
This step is repeated until the diameter of the simplex is less than
the specified tolerance.
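The reflect/expand/contract/shrink loop described above can be sketched as follows. This is a generic textbook Nelder-Mead with the standard coefficients (reflection 1, expansion 2, contraction 0.5, shrink 0.5), not fminsearch's actual implementation.

```python
import numpy as np

def nelder_mead(f, x0, xtol=1e-6, ftol=1e-6, max_iter=1000):
    """Minimal Nelder-Mead simplex sketch (standard coefficients)."""
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    # Initial simplex: x0 plus a small step along each coordinate axis.
    simplex = [x0.copy()]
    for i in range(n):
        v = x0.copy()
        v[i] += 0.05 if v[i] != 0.0 else 0.00025
        simplex.append(v)
    fvals = [f(v) for v in simplex]

    for _ in range(max_iter):
        # Order the vertices from best to worst.
        order = np.argsort(fvals)
        simplex = [simplex[i] for i in order]
        fvals = [fvals[i] for i in order]
        # Stop when the simplex is small in both x and f.
        if (max(np.max(np.abs(v - simplex[0])) for v in simplex[1:]) < xtol
                and fvals[-1] - fvals[0] < ftol):
            break
        centroid = np.mean(simplex[:-1], axis=0)  # centroid of the best n
        xr = centroid + (centroid - simplex[-1])  # reflect the worst vertex
        fr = f(xr)
        if fr < fvals[0]:                         # best so far: try expanding
            xe = centroid + 2.0*(centroid - simplex[-1])
            fe = f(xe)
            simplex[-1], fvals[-1] = (xe, fe) if fe < fr else (xr, fr)
        elif fr < fvals[-2]:                      # plain reflection accepted
            simplex[-1], fvals[-1] = xr, fr
        else:                                     # contract
            if fr < fvals[-1]:
                xc = centroid + 0.5*(xr - centroid)           # outside
            else:
                xc = centroid + 0.5*(simplex[-1] - centroid)  # inside
            fc = f(xc)
            if fc < min(fr, fvals[-1]):
                simplex[-1], fvals[-1] = xc, fc
            else:                                 # shrink toward best vertex
                simplex = [simplex[0]] + [simplex[0] + 0.5*(v - simplex[0])
                                          for v in simplex[1:]]
                fvals = [fvals[0]] + [f(v) for v in simplex[1:]]
    order = np.argsort(fvals)
    return simplex[order[0]], fvals[order[0]]

# Quadratic test problem with its minimum at (1, -2).
x, fx = nelder_mead(lambda p: (p[0] - 1.0)**2 + (p[1] + 2.0)**2, [0.0, 0.0])
print(x, fx)
```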
For more information, see fminsearch Algorithm.
Lagarias, J. C., J. A. Reeds, M. H. Wright, and P. E. Wright, "Convergence Properties of the Nelder-Mead Simplex Method in Low Dimensions," SIAM Journal on Optimization, Vol. 9, Number 1, pp. 112-147, 1998.