Optimization techniques are used to find a set of design parameters or decisions that give the best possible result. An optimization problem is a model of a design or decision problem. You can model the design parameters or decisions as optimization variables. The other components of an optimization problem include objective functions and, if applicable, constraints. The objective function calculates the desired quantity to be minimized or maximized. Constraints limit the possible values for the optimization variables.
Least-squares problems and systems of nonlinear equations can be treated as optimization problems because both can be solved by minimizing a sum of squared residuals.
In the problem-based workflow, you define your objective and constraints as expressions built from optimization variables. This approach can be used for linear and mixed-integer linear optimization problems.
In the solver-based workflow, you define your objective and constraints as the functions or matrices that your selected solver requires. This approach can be used with all the solvers in Optimization Toolbox™.
Optimization Toolbox contains different solvers for different types of objectives and constraints. The Optimization Decision Table helps you choose the best solver for your problem when using the solver-based approach. When using the problem-based approach, the appropriate solver is automatically selected for you.
Optimization Toolbox provides a broad range of functions so you can solve many types of optimization problems using a single function call. You can also use these functions to build a more complex optimization algorithm, such as a column generation algorithm, or in algorithms where optimization is one of the steps, such as motion planning for robots.
Optimization Toolbox solvers minimize nonlinear functions by estimating the partial derivatives of the objective function using finite differences. You can significantly reduce the overhead of this derivative estimation step by supplying functions that calculate the partial derivatives directly.
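The effect of supplying derivatives can be sketched in Python with SciPy's analogous minimize function (an illustrative analogue, not the toolbox API); the objective and its gradient here are invented for the example:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Simple convex objective, invented for illustration
    return x[0] ** 2 + 10.0 * x[1] ** 2

def grad(x):
    # Exact partial derivatives, supplied so the solver need not estimate them
    return np.array([2.0 * x[0], 20.0 * x[1]])

with_grad = minimize(f, [3.0, -4.0], jac=grad, method="BFGS")
without = minimize(f, [3.0, -4.0], method="BFGS")  # gradient by finite differences
print(with_grad.nfev, without.nfev)  # fewer objective evaluations when jac is given
```

Each finite-difference gradient costs one extra objective evaluation per variable, which is exactly the overhead the supplied gradient avoids.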
Calculating partial derivatives of an objective function can be tedious. You can use built-in functions for automatically calculating objective function partial derivatives by expressing the problem symbolically using Symbolic Math Toolbox™. You can then generate MATLAB® code that you can use with Optimization Toolbox solvers.
You can use Optimization Toolbox solvers with MATLAB Compiler™ to create decision support tools that can be shared with users who do not have MATLAB. These standalone applications can be deployed royalty-free to an unlimited number of end users. You can also integrate MATLAB optimization algorithms with other languages, such as C++, Java®, Python®, and .NET using MATLAB Compiler SDK™.
Optimization Toolbox provides widely used optimization algorithms for solving nonlinear programming problems in MATLAB. The toolbox includes solvers for unconstrained and constrained nonlinear optimization.
Optimization Toolbox uses three algorithms to solve unconstrained nonlinear minimization problems: a quasi-Newton algorithm, a Nelder-Mead simplex algorithm, and a trust-region algorithm.
Constrained nonlinear optimization problems are composed of linear or nonlinear objective functions and may be subject to linear and nonlinear constraints. Optimization Toolbox uses four algorithms to solve these problems: interior-point, sequential quadratic programming (SQP), active-set, and trust-region-reflective.
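The shape of a constrained nonlinear problem can be sketched with SciPy's general-purpose minimize as an analogue of a constrained nonlinear solver; the objective, constraint, and bounds below are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative problem: a nonlinear objective subject to a nonlinear
# inequality constraint and bounds on the variables.
def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

# Constraint in the form g(x) >= 0: stay inside a disk of radius 3
constraints = [{"type": "ineq", "fun": lambda x: 9.0 - x[0] ** 2 - x[1] ** 2}]
bounds = [(0, None), (0, None)]  # both variables nonnegative

result = minimize(objective, x0=[2.0, 0.0], method="SLSQP",
                  bounds=bounds, constraints=constraints)
print(result.x)  # near [1.0, 2.5], which satisfies all constraints
```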
The interior-point and trust-region-reflective algorithms enable you to estimate Hessian matrices using different approaches.
For the interior-point algorithm, you can estimate Hessian matrices using a dense quasi-Newton approximation, a limited-memory large-scale quasi-Newton approximation, or finite differences of gradient functions.
For the trust-region-reflective algorithm, you can use finite differences of gradient functions or an actual, user-supplied Hessian.
To lower memory usage, the interior-point and trust-region-reflective algorithms let you supply a function that calculates Hessian-times-vector products, without ever forming the Hessian matrix explicitly.
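The Hessian-times-vector idea can be illustrated with SciPy's Newton-CG method, which likewise accepts a Hessian-product callback (hessp) instead of a full Hessian; the Rosenbrock helper functions used here ship with SciPy:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess_prod

# Newton-CG only ever needs products H(x) @ p, so rosen_hess_prod can
# supply them without the Hessian matrix ever being formed in memory.
x0 = np.zeros(10)
result = minimize(rosen, x0, method="Newton-CG",
                  jac=rosen_der, hessp=rosen_hess_prod)
print(result.x[:3])  # the Rosenbrock minimizer is the all-ones vector
```

For a problem with n variables, this replaces an n-by-n matrix with a callback whose memory cost is a single length-n vector.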
Optimization Toolbox can solve large-scale linear and quadratic programming problems.
Linear programming problems involve minimizing or maximizing a linear objective function subject to bounds, linear equality, and inequality constraints. Linear programming is used in finance, energy, operations research, and other applications where relationships between variables can be expressed linearly.
Optimization Toolbox includes two algorithms used to solve linear programming problems: a dual-simplex algorithm and an interior-point algorithm.
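A minimal linear program can be sketched with SciPy's linprog as an analogue; the production-planning numbers below are invented for illustration:

```python
from scipy.optimize import linprog

# Hypothetical production-planning LP: maximize 3x + 5y subject to
# resource limits and nonnegativity. linprog minimizes, so negate c.
c = [-3.0, -5.0]
A_ub = [[1.0, 0.0],   # x <= 4
        [0.0, 2.0],   # 2y <= 12
        [3.0, 2.0]]   # 3x + 2y <= 18
b_ub = [4.0, 12.0, 18.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimum at x=2, y=6 with objective value 36
```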
Quadratic programming problems involve minimizing a multivariate quadratic function subject to bounds, linear equality, and inequality constraints. Quadratic programming is used for portfolio optimization in finance, power generation optimization for electrical utilities, design optimization in engineering, and other applications.
Optimization Toolbox includes two algorithms for solving quadratic programs: interior-point-convex and trust-region-reflective.
Both the interior-point-convex and trust-region-reflective algorithms are large scale, meaning they can handle large, sparse problems. Furthermore, the interior-point-convex algorithm has optimized internal linear algebra routines and a presolve module that can improve speed, numerical stability, and the detection of infeasibility.
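A quadratic program can be sketched in SciPy, which has no dedicated QP solver, by handing the quadratic objective and its exact gradient to a general constrained method; the matrix H, vector f, and constraint below are illustrative:

```python
import numpy as np
from scipy.optimize import LinearConstraint, minimize

# Illustrative QP: minimize 0.5 * x'Hx + f'x subject to x0 + x1 <= 1.
H = np.array([[2.0, 0.5], [0.5, 1.0]])   # symmetric positive definite
f = np.array([-1.0, -1.0])

def quad(x):
    return 0.5 * x @ H @ x + f @ x

def quad_grad(x):
    return H @ x + f           # exact gradient of the quadratic objective

lc = LinearConstraint([[1.0, 1.0]], -np.inf, 1.0)
res = minimize(quad, np.zeros(2), jac=quad_grad,
               method="trust-constr", constraints=[lc])
print(res.x)  # the constraint is active; the optimum is [0.25, 0.75]
```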
For some optimization problems, the variables should not take on fractional values. For instance, if a variable represents the number of stock shares to purchase, it should take on only integer values. Similarly, if a variable represents the on/off state of a generator, it should take on only binary values (0 or 1). The mixed-integer linear programming problem models this behavior by adding the constraint that these variables take on only integer values in the optimal solution.
Optimization Toolbox solves mixed-integer linear programming problems using an algorithm that reduces the problem size with preprocessing, solves relaxed linear programming subproblems, adds cuts to tighten the formulation, applies heuristics to search for integer-feasible points, and performs a branch-and-bound search.
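The modeling idea, minus the algorithmic details, can be sketched with SciPy's milp function (available in SciPy 1.9 and later) as an analogue of a mixed-integer linear solver; the small maximization problem is invented for illustration:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Illustrative mixed-integer LP: maximize 2x + y subject to x + y <= 3.5,
# with x restricted to integer values and both variables nonnegative.
c = np.array([-2.0, -1.0])             # milp minimizes, so negate to maximize
constraint = LinearConstraint([[1.0, 1.0]], -np.inf, 3.5)
integrality = np.array([1, 0])         # 1 marks x as integer, 0 leaves y continuous
res = milp(c, constraints=[constraint], integrality=integrality,
           bounds=Bounds(0.0, np.inf))
print(res.x)  # x = 3 (integer), y = 0.5, objective value 6.5
```

Without the integrality restriction the optimum would be x = 3.5; the integer constraint rounds production down to 3 and shifts the remainder to y.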
Multiobjective optimization is concerned with the minimization of multiple objective functions that are subject to a set of constraints. Optimization Toolbox provides functions for solving two formulations of multiobjective optimization problems: goal attainment problems and minimax problems.
Optimization Toolbox transforms both types of multiobjective problems into standard constrained optimization problems and then solves them using an active-set approach.
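One such transformation can be shown concretely for a tiny minimax problem: introducing an auxiliary variable t turns "minimize the maximum of several objectives" into a standard constrained problem, here a linear program solved with SciPy's linprog (the two objectives are invented):

```python
from scipy.optimize import linprog

# Minimax problem: minimize max(f1, f2) with f1 = x and f2 = 1 - x on [0, 1].
# Transformation: minimize t subject to f1 <= t and f2 <= t.
# Decision vector is [x, t]; the objective picks out t.
c = [0.0, 1.0]
A_ub = [[1.0, -1.0],    #  x - t <= 0    (f1 <= t)
        [-1.0, -1.0]]   # -x - t <= -1   (1 - x <= t)
b_ub = [0.0, -1.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0), (None, None)])
print(res.x)  # x = 0.5, t = 0.5: the two objectives are exactly balanced
```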
Global Optimization Toolbox provides an additional multiobjective solver for nonsmooth problems.
The toolbox includes two algorithms for solving constrained linear least-squares problems: trust-region-reflective and active-set.
The toolbox includes two algorithms for solving nonlinear least-squares problems: trust-region-reflective and Levenberg-Marquardt.
The toolbox provides a specialized interface for data fitting problems in which you want to find the member of a family of nonlinear functions that best fits a set of data points. The toolbox uses the same algorithms for data fitting problems that it uses for nonlinear least-squares problems.
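Data fitting as nonlinear least squares can be sketched with SciPy's least_squares function, whose default method is itself a trust-region-reflective variant; the exponential model and synthetic data are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

# Fit y = a * exp(b * t) to synthetic data by minimizing the sum of
# squared residuals, the same formulation a nonlinear least-squares
# solver uses internally.
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)           # noiseless data generated from a=2, b=-1.5

def residuals(p):
    a, b = p
    return a * np.exp(b * t) - y

fit = least_squares(residuals, x0=[1.0, 0.0])
print(fit.x)  # recovers approximately [2.0, -1.5]
```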
Optimization Toolbox implements a trust-region dogleg algorithm for solving systems of nonlinear equations in which there are as many equations as unknowns. The toolbox can also solve these systems using trust-region and Levenberg-Marquardt algorithms.
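A square system of this kind can be sketched with SciPy's root function, whose default "hybr" method is a Powell hybrid scheme in the same dogleg family (the two equations are invented):

```python
import numpy as np
from scipy.optimize import root

# Square system: as many equations as unknowns.
def equations(v):
    x, y = v
    return [x + y - 3.0,          # x + y = 3
            x * y - 2.0]          # x * y = 2

sol = root(equations, x0=[0.0, 1.0])
print(sol.x)  # the solutions are {x, y} = {1, 2} in some order
```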
Optimization Toolbox can be used with Parallel Computing Toolbox to solve problems that benefit from parallel computation. You can decrease time to solution by enabling built-in parallel computing support or by defining a custom parallel computing implementation of an optimization problem.
Built-in support for parallel computing in Optimization Toolbox lets you accelerate the gradient estimation step in solvers for general nonlinear optimization problems, nonlinear least-squares problems, nonlinear systems of equations, and multiobjective problems.
Alternatively, you can customize a parallel computing implementation by defining your objective or constraint functions to use parallel computing explicitly, decreasing the time required to evaluate them.
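A custom parallel implementation can be sketched in Python: the hypothetical gradient callback below evaluates forward finite differences concurrently (a thread pool is used for simplicity; a process pool would give true parallelism for CPU-bound Python objectives):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from scipy.optimize import minimize

# Stand-in for an expensive simulation-based objective (invented example).
def expensive_objective(x):
    return float(np.sum((x - 1.0) ** 2))

def parallel_fd_gradient(x, h=1e-7):
    # Forward finite differences: one perturbed point per coordinate,
    # all evaluated concurrently instead of one after another.
    f0 = expensive_objective(x)
    points = [x.copy() for _ in range(x.size)]
    for i, p in enumerate(points):
        p[i] += h
    with ThreadPoolExecutor() as pool:
        fvals = list(pool.map(expensive_objective, points))
    return np.array([(fi - f0) / h for fi in fvals])

res = minimize(expensive_objective, np.zeros(4),
               jac=parallel_fd_gradient, method="BFGS")
print(res.x)  # converges to the all-ones minimizer
```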
Optimization Toolbox solvers for nonlinear problems use gradient-based methods for minimizing or maximizing an objective. Gradient information can be estimated by the solver using finite differences or supplied to the solver by the user.