There are four general categories of Optimization Toolbox™ solvers:
Minimizers: This group of solvers attempts to find a local minimum of the objective function near a starting point x0. These solvers address problems of unconstrained optimization, linear programming, quadratic programming, and general nonlinear programming.
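As a minimal sketch of this category, an unconstrained minimization near a starting point can be run with fminunc (the objective and starting point here are illustrative, not from this section):

```matlab
% Minimize Rosenbrock's function starting from x0 = [-1; 2].
fun = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
x0 = [-1; 2];
[x, fval] = fminunc(fun, x0);
% x should approach [1; 1], where the minimum value 0 is attained
```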
Multiobjective minimizers: This group of solvers attempts either to minimize the maximum value of a set of functions (fminimax), or to find a location where a collection of functions is below some prespecified values (fgoalattain).
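A minimax problem of this kind can be sketched with fminimax; both component objectives below are illustrative assumptions:

```matlab
% Minimize the larger of two objective values.
fun = @(x) [x(1)^2 + x(2)^2;          % first objective
            (x(1) - 2)^2 + x(2)^2];   % second objective
x0 = [0; 0];
x = fminimax(fun, x0);
% The two objectives balance where x(1) = 1, so x should be near [1; 0]
```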
Equation solvers: This group of solvers attempts to find a solution to a scalar- or vector-valued nonlinear equation f(x) = 0 near a starting point x0. Equation solving can be considered a form of optimization because it is equivalent to finding the minimum norm of f(x) near x0.
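For instance, a vector-valued system f(x) = 0 can be solved with fsolve; the system below is an illustrative assumption:

```matlab
% Solve the system x1 + x2 = 3, x1*x2 = 2 near x0 = [0; 0].
fun = @(x) [x(1) + x(2) - 3;    % first equation, written as f1(x) = 0
            x(1)*x(2) - 2];     % second equation, written as f2(x) = 0
x0 = [0; 0];
x = fsolve(fun, x0);
% The system has the roots [1; 2] and [2; 1]; fsolve returns one of them
```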
Least-squares (model-fitting) solvers: This group of solvers attempts to minimize a sum of squares. This type of problem frequently arises in fitting a model to data. The solvers address problems of finding nonnegative solutions, bounded or linearly constrained solutions, and fitting parameterized nonlinear models to data.
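Fitting a parameterized nonlinear model to data can be sketched with lsqcurvefit; the model and data below are illustrative assumptions:

```matlab
% Fit y = a*exp(b*t) to data, where p = [a; b].
tdata = [0 1 2 3 4];
ydata = [2.0 4.1 8.3 16.6 33.0];          % roughly 2*exp(t*log(2))
model = @(p, t) p(1) .* exp(p(2) .* t);
p0 = [1; 1];                              % initial parameter guess
p = lsqcurvefit(model, p0, tdata, ydata);
% p(1) should be near 2 and p(2) near log(2)
```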
For more information, see Problems Handled by Optimization Toolbox Functions. See Optimization Decision Table for aid in choosing among solvers for minimization.
Minimizers formulate optimization problems in the form

min_x f(x),
possibly subject to constraints. f(x) is called an objective function. In general, f(x) is a scalar function of type double, and x is a vector or scalar of type double. However, multiobjective optimization, equation solving, and some sum-of-squares minimizers can have vector or matrix objective functions F(x) of type double. To use Optimization Toolbox solvers for maximization instead of minimization, see Maximizing an Objective.
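The usual approach to maximization is to minimize the negative of the objective; a minimal sketch, assuming a scalar function handle g:

```matlab
% Maximize g(x) = -(x - 3)^2 + 5 by minimizing its negative.
g = @(x) -(x - 3)^2 + 5;
[x, negval] = fminunc(@(x) -g(x), 0);
gmax = -negval;   % maximum value of g
% x should be near 3 and gmax near 5
```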
Write the objective function for a solver in the form of a function file or anonymous function handle. You can supply a gradient ∇f(x) for many solvers, and you can supply a Hessian for several solvers. See Writing Objective Functions. Constraints have a special form, as described in Writing Constraints.
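A function file that also supplies the gradient typically returns it as a second output when the solver requests one; a minimal sketch, with an assumed file name myfun.m and an illustrative objective:

```matlab
% Objective function file returning the value and, when asked, the gradient.
% Save as myfun.m (file name illustrative).
function [f, g] = myfun(x)
    f = x(1)^2 + 3*x(1)*x(2);    % objective value
    if nargout > 1               % solver requested the gradient
        g = [2*x(1) + 3*x(2);    % df/dx1
             3*x(1)];            % df/dx2
    end
end
```

To have a solver such as fminunc use the supplied gradient, set the SpecifyObjectiveGradient option to true via optimoptions.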