Optimization Toolbox

Key Features

  • Nonlinear and multiobjective optimization
  • Solvers for nonlinear least squares, data fitting, and nonlinear equations
  • Quadratic and linear programming
  • Mixed-integer linear programming
  • Optimization app for defining and solving optimization problems and monitoring solution progress
  • Acceleration of constrained nonlinear solvers with Parallel Computing Toolbox™
Using a mixed-integer linear programming problem to determine the best way to supply sales locations from warehouses and factories.

Defining and Solving Optimization Problems

Defining an Optimization Problem

Optimization techniques are used to find the set of design parameters that gives the best possible result. An optimization problem has two key components:

The objective function calculates the quantity to be minimized or maximized. Constraints can be added that limit the possible values of the design parameters.
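As a minimal sketch of these two components (the objective function and bounds here are illustrative, not from the text), a bound-constrained problem can be defined and solved with the toolbox's fmincon solver:

```matlab
% Illustrative problem: minimize the Rosenbrock function subject to bounds.
objective = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;  % quantity to minimize
x0 = [-1 2];             % initial guess for the design parameters
lb = [0 0]; ub = [2 2];  % bound constraints limit the design parameters
x = fmincon(objective, x0, [], [], [], [], lb, ub);
```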

Mathematical Modeling with Optimization, Part 1 8:51
Transform a problem description into a mathematical program that can be solved using optimization, using a steam and electric power plant example.

Mathematical Modeling with Optimization, Part 2 10:46
Solve a linear program using Optimization Toolbox™ solvers, using a steam and electric power plant example.

Using the Optimization App

You can access Optimization Toolbox functions and solver options programmatically, or with the Optimization app.

The Optimization app simplifies common optimization tasks. It enables you to:

  • Select a solver and define an optimization problem
  • Set and inspect optimization options and their default values for the selected solver
  • Run problems and visualize intermediate and final results
  • View solver-specific documentation in the optional quick reference window
  • Import and export problem definitions, algorithm options, and results between the MATLAB® workspace and the Optimization app
  • Automatically generate MATLAB code to capture work and automate tasks
  • Access Global Optimization Toolbox solvers

Introduction to Optimization Graphical User Interface 6:08
Set up and run optimization problems and visualize intermediate and final results.

Choosing a Solver

Optimization Toolbox contains different solvers for different types of objectives and constraints. The Optimization Decision Table helps you choose the best solver for your problem.

Setting Options

Solver options enable you to tune or modify the optimization process and to visualize solver progress. You can set options programmatically or with the Optimization app.

Setting Options for Optimizations 4:48
Set options with optimoptions in Optimization Toolbox™ to tune solvers and monitor optimization progress.
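For example, a hedged sketch of tuning a solver with optimoptions (option names as used in recent releases; the resulting options structure is passed as the last argument to the solver):

```matlab
% Sketch: tune fmincon and monitor its progress.
options = optimoptions('fmincon', ...
    'Algorithm','sqp', ...        % select the SQP algorithm
    'Display','iter', ...         % print one line of output per iteration
    'PlotFcns',@optimplotfval);   % plot the objective value as the solver runs
```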

Nonlinear Optimization

Optimization Toolbox provides widely used optimization algorithms for solving nonlinear programming problems in MATLAB. The toolbox includes solvers for unconstrained and constrained nonlinear optimization and solvers for least-squares optimization.

Unconstrained Nonlinear Optimization

Optimization Toolbox uses three algorithms to solve unconstrained nonlinear minimization problems:

  • The Quasi-Newton algorithm uses a mixed quadratic and cubic line search procedure and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) formula for updating the approximation of the Hessian matrix.
  • The Nelder-Mead algorithm (or downhill simplex) is a direct-search algorithm that uses only function values (does not require derivatives) and handles nonsmooth objective functions. Global Optimization Toolbox provides additional derivative-free optimization algorithms for nonlinear optimization.
  • The trust-region algorithm is used for unconstrained nonlinear problems and is especially useful for large-scale problems where sparsity or structure can be exploited.
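A brief sketch of the first two approaches (the objective here is illustrative): fminunc runs the quasi-Newton algorithm, while fminsearch runs Nelder-Mead.

```matlab
fun = @(x) x(1)^2 + 3*x(2)^2 - 2*x(1)*x(2) + x(2);  % illustrative objective
x0 = [1 1];                                         % starting point

% Quasi-Newton (BFGS) minimization
opts = optimoptions('fminunc','Algorithm','quasi-newton');
xQN = fminunc(fun, x0, opts);

% Derivative-free Nelder-Mead minimization (no gradients required)
xNM = fminsearch(fun, x0);
```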
Unconstrained nonlinear optimization used to search an engine performance map for peak efficiency.

Constrained Nonlinear Optimization

Constrained nonlinear optimization problems are composed of linear or nonlinear objective functions and may be subject to linear and nonlinear constraints. Optimization Toolbox uses four algorithms to solve these problems:

  • The interior point algorithm is used for general nonlinear optimization. It is especially useful for large-scale problems that have sparsity or structure, and tolerates user-defined objective and constraint function evaluation failures. It is based on a barrier function, and optionally keeps all iterations strictly feasible with respect to bounds during the optimization run.
  • The SQP algorithm is used for general nonlinear optimization. It honors bounds at all iterations and tolerates user-defined objective and constraint function evaluation failures.
  • The active-set algorithm is used for general nonlinear optimization.
  • The trust-region reflective algorithm is used for bound constrained problems or linear equalities only. It is especially useful for large-scale problems.
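These algorithms are selected through fmincon's Algorithm option. A minimal sketch with an illustrative nonlinear inequality constraint:

```matlab
fun = @(x) x(1)^2 + x(2)^2;              % illustrative objective
nonlcon = @(x) deal(x(1)*x(2) - 1, []);  % inequality c(x) <= 0, no equalities
opts = optimoptions('fmincon','Algorithm','interior-point');
x = fmincon(fun, [2 2], [], [], [], [], [], [], nonlcon, opts);
```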

The interior point and trust-region reflective algorithms enable you to estimate Hessian matrices using different approaches.

For the interior point algorithm, you can estimate Hessian matrices using:

  • BFGS (dense)
  • Limited memory BFGS (for large-scale problems)
  • Hessian-multiply function
  • Actual Hessian (sparse or dense)
  • Finite difference of gradients, without requiring knowledge of sparsity structure

For the trust-region reflective algorithm, you can use:

  • Finite difference of gradients, sparsity structure of the Hessian
  • Actual Hessian (sparse or dense)
  • Hessian-multiply function

Additionally, the interior point and trust-region reflective algorithms enable you to calculate Hessian-times-vector products in a function without having to form the Hessian matrix explicitly.

Constrained nonlinear optimization used to design an optimal suspension system.

Linear and Quadratic Programming

Optimization Toolbox can solve large-scale linear and quadratic programming problems.

Linear Programming

Linear programming problems involve minimizing or maximizing a linear objective function subject to bounds, linear equality, and inequality constraints. Linear programming is used in finance, energy, operations research, and other applications where relationships between variables can be expressed linearly.

Optimization Toolbox includes three algorithms used to solve linear programming problems:

  • The simplex algorithm is a systematic procedure for generating and testing candidate vertex solutions to a linear program. The simplex algorithm is the most widely used algorithm for linear programming.
  • The interior point algorithm is based on a primal-dual predictor-corrector algorithm used for solving linear programming problems. Interior point is especially useful for large-scale problems that have structure or can be defined using sparse matrices.
  • The active-set algorithm minimizes the objective at each iteration over the active set (a subset of the constraints that are locally active) until it reaches a solution.
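A small illustrative linear program solved with linprog (the coefficients are made up for the example):

```matlab
% Maximize 5*x1 + 4*x2 by minimizing its negation.
f = [-5; -4];                 % objective coefficients (linprog minimizes f'*x)
A = [6 4; 1 2]; b = [24; 6];  % linear inequality constraints A*x <= b
lb = [0; 0];                  % nonnegativity bounds
x = linprog(f, A, b, [], [], lb);
```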
Linear programming used in the design of a plant for generating steam and electrical power.

Quadratic Programming

Quadratic programming problems involve minimizing a multivariate quadratic function subject to bounds, linear equality, and inequality constraints. Quadratic programming is used for portfolio optimization in finance, power generation optimization for electrical utilities, design optimization in engineering, and other applications.

Optimization Toolbox includes three algorithms for solving quadratic programs:

  • The interior-point-convex algorithm solves convex problems with any combination of constraints.
  • The trust-region-reflective algorithm solves bound constrained problems or linear equality constrained problems.
  • The active-set algorithm solves problems with any combination of constraints.
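As a sketch, a small convex quadratic program solved with quadprog (the matrices are illustrative):

```matlab
% Minimize (1/2)*x'*H*x + f'*x subject to a linear inequality.
H = [2 -1; -1 2];  % symmetric positive definite, so the problem is convex
f = [-2; -6];
A = [1 1]; b = 2;  % linear inequality x1 + x2 <= 2
opts = optimoptions('quadprog','Algorithm','interior-point-convex');
x = quadprog(H, f, A, b, [], [], [], [], [], opts);
```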

Optimization in MATLAB: An Introduction to Quadratic Programming 36:35
In this webinar, you will learn how MATLAB can be used to solve optimization problems using an example quadratic optimization problem and the symbolic math tools in MATLAB.

Both the interior-point-convex and trust-region-reflective algorithms are large scale, meaning they can handle large, sparse problems. Furthermore, the interior-point-convex algorithm has optimized internal linear algebra routines and a new presolve module that can improve speed, numerical stability, and the detection of infeasibility.

Quadratic programming used to perform a returns-based style analysis for three mutual funds.

Mixed-Integer Linear Programming

Mixed-integer linear programming expands the linear programming problem with the additional constraint that some or all of the variables in the optimal solution must be integers. 

For some optimization problems, the variables should not take on fractional values. For instance, if a variable represents the number of stock shares to purchase, it should take on only integer values. Similarly, if a variable represents the on/off state of a generator, it should take on only binary values (0 or 1). The mixed-integer linear programming problem allows this behavior to be modeled by adding the constraint that these variables should take on only integers, or whole numbers, in the optimal solution.
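A hedged sketch with intlinprog, using made-up numbers in the spirit of an investment decision: combining integer constraints with bounds of 0 and 1 makes each variable a binary buy/don't-buy decision.

```matlab
f = [-100; -150; -90];            % negated expected returns (intlinprog minimizes)
intcon = 1:3;                     % all three variables must take integer values
A = [20 30 15]; b = 40;           % single budget constraint A*x <= b
lb = zeros(3,1); ub = ones(3,1);  % integer + [0,1] bounds => binary decisions
x = intlinprog(f, intcon, A, b, [], [], lb, ub);
```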

Mixed-Integer Linear Programming in MATLAB 34:08
Learn how to use the new optimization solver for mixed-integer linear programming in Release 2014a. This new solver enables you to solve optimization problems in which some or all of the variables are constrained to take on integer values.

Optimization Toolbox solves mixed-integer linear programming problems using an algorithm that:

  • Performs integer programming preprocessing to tighten the feasible region
  • Applies cutting planes to tighten the feasible region
  • Uses heuristics to search for integer feasible solutions
  • Verifies that no better feasible solution is possible with a branch and bound algorithm that solves a series of linear programming relaxation problems
Using an integer programming problem to determine which investments should be made.

Deployment

You can use Optimization Toolbox solvers with MATLAB Compiler™ to create decision support tools that can be shared with users who do not have MATLAB. These standalone applications can be deployed royalty-free to an unlimited number of end users. You can also integrate MATLAB optimization algorithms with other languages, such as Java® and .NET, using MATLAB Builder™ products.

Multiobjective Optimization

Multiobjective optimization is concerned with the minimization of multiple objective functions that are subject to a set of constraints. Optimization Toolbox provides functions for solving two formulations of multiobjective optimization problems:

  • The goal attainment problem involves reducing the value of a linear or nonlinear vector function to attain the goal values given in a goal vector. The relative importance of the goals is indicated using a weight vector. The goal attainment problem may also be subject to linear and nonlinear constraints.
  • The minimax problem involves minimizing the worst-case value of a set of multivariate functions, possibly subject to linear and nonlinear constraints.

Optimization Toolbox transforms both types of multiobjective problems into standard constrained optimization problems and then solves them using an active-set approach.
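A minimal goal attainment sketch with two illustrative competing objectives:

```matlab
fun = @(x) [x(1)^2 + x(2)^2;  (x(1)-1)^2 + x(2)^2];  % vector of two objectives
goal   = [0.5 0.5];  % target value for each objective
weight = [1 1];      % equal relative importance of the goals
x = fgoalattain(fun, [1 1], goal, weight);
```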

Global Optimization Toolbox provides an additional multiobjective solver for nonsmooth problems.

Multiobjective optimization used to design a low-pass filter.

Nonlinear Least Squares, Data Fitting, and Nonlinear Equations

Optimization Toolbox can solve linear and nonlinear least-squares problems, data fitting problems, and nonlinear equations.

Linear and Nonlinear Least-Squares Optimization

The toolbox uses two algorithms for solving constrained linear least-squares problems:

  • The active-set algorithm is used to solve problems with bounds and linear inequalities or equalities.
  • The trust-region-reflective algorithm is used to solve large-scale problems that have only bound constraints.

The toolbox uses two algorithms for solving nonlinear least-squares problems:

  • The trust-region-reflective algorithm implements the Levenberg-Marquardt algorithm using a trust-region approach. It is used for unconstrained and bound-constrained problems.
  • The Levenberg-Marquardt algorithm implements a standard Levenberg-Marquardt method. It is used for unconstrained problems.
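A short sketch of nonlinear least squares with lsqnonlin, fitting an illustrative exponential decay; lsqnonlin minimizes the sum of squares of the residual vector.

```matlab
t = (0:9)';  y = 2*exp(-0.5*t);      % illustrative data
resid = @(p) p(1)*exp(p(2)*t) - y;   % residual vector, one entry per data point
p0 = [1 -1];                         % starting parameter estimates
p = lsqnonlin(resid, p0);            % should recover p close to [2 -0.5]
```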
Fitting a transcendental equation using nonlinear least squares.

Data Fitting

The toolbox provides a specialized interface for data fitting problems in which you want to find the member of a family of nonlinear functions that best fits a set of data points. The toolbox uses the same algorithms for data fitting problems that it uses for nonlinear least-squares problems.
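That interface is lsqcurvefit; a minimal sketch with an illustrative exponential family of functions:

```matlab
xdata = (0:0.5:4)';
ydata = 3*exp(-1.2*xdata);         % illustrative data points to fit
model = @(p, x) p(1)*exp(p(2)*x);  % family of nonlinear functions, parameters p
p = lsqcurvefit(model, [1 -1], xdata, ydata);
```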

Fitting a nonlinear exponential equation using least-squares curve fitting.

Nonlinear Equation Solving

Optimization Toolbox implements a dogleg trust-region algorithm for solving a system of nonlinear equations where there are as many equations as unknowns. The toolbox can also solve this problem using the trust-region reflective and Levenberg-Marquardt algorithms.
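A minimal fsolve sketch on an illustrative 2-by-2 system F(x) = 0, selecting the dogleg algorithm explicitly:

```matlab
F = @(x) [2*x(1) - x(2) - exp(-x(1));    % as many equations ...
          -x(1) + 2*x(2) - exp(-x(2))];  % ... as unknowns
opts = optimoptions('fsolve','Algorithm','trust-region-dogleg');
x = fsolve(F, [-5 -5], opts);
```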

Solving an n-dimensional Rosenbrock function using the nonlinear equation solver.

Parallel Computing and Derivatives

Optimization Toolbox solvers for nonlinear problems use gradient-based methods for minimizing or maximizing an objective. Information about the gradient of the objective function can be either estimated by the solver using finite differences, or supplied to the solver by the user.

Parallel Computing

Optimization Toolbox can be used with Parallel Computing Toolbox to solve problems that benefit from parallel computation. You can use parallel computing to decrease time to solution by enabling built-in parallel computing support or by defining a custom parallel computing implementation of an optimization problem.

Built-in support for parallel computing in Optimization Toolbox enables you to accelerate the gradient estimation step in select solvers for constrained nonlinear optimization problems and for multiobjective goal attainment and minimax problems.
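Enabling this built-in support is a one-option change, sketched here for fmincon (option name as in recent releases; requires Parallel Computing Toolbox and an open parallel pool):

```matlab
% parpool;  % open a pool of workers first (Parallel Computing Toolbox)
opts = optimoptions('fmincon','UseParallel',true);  % parallel gradient estimation
```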

Accelerating time to solution for an electrostatics problem using the built-in support for parallel computing in a nonlinear optimization solver. The built-in functionality is enabled by specifying the UseParallel option (left) for the objective (middle right) and constraint (bottom right) functions, with the solution shown in the top right.

You can customize a parallel computing implementation by explicitly defining the optimization problem to use parallel computing functionality. You can define either an objective function or a constraint function to use parallel computing, enabling you to decrease the time required to evaluate the objective or constraint.

Accelerating time to solution (top right) for a suspension system design (bottom left and bottom right) subject to uncertainty by customizing the objective function with a single line change in code (top left).

Speeding Up Optimization Problems Using Parallel Computing 55:41
In this webinar, we will use two case studies to demonstrate how you can use parallel computing to speed up single-level and multilevel optimization problems in MATLAB.

Supplying Derivatives

Optimization Toolbox solvers minimize nonlinear functions by estimating the partial derivatives of the objective function using finite differences. Alternatively, you can define functions that calculate the values of the partial derivatives, significantly reducing the overhead of the derivative estimation step. 
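A sketch of this second approach for fminunc (option name as in recent releases): return the gradient as a second output and tell the solver that derivatives are supplied.

```matlab
opts = optimoptions('fminunc', ...
    'Algorithm','trust-region', ...    % trust-region algorithm requires a gradient
    'SpecifyObjectiveGradient',true);  % use the supplied derivatives
x = fminunc(@objWithGrad, [1 1], opts);

function [f, g] = objWithGrad(x)
    f = x(1)^2 + 2*x(2)^2;     % objective value
    if nargout > 1
        g = [2*x(1); 4*x(2)];  % analytic partial derivatives
    end
end
```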

Calculating partial derivatives of an objective function can be a tedious task. By expressing the problem symbolically using Symbolic Math Toolbox™, you can use built-in functions for automatically calculating objective function partial derivatives. MATLAB code can then be generated for use with Optimization Toolbox solvers.

Optimization Using Symbolic Derivatives (Technical Article)
