# Documentation

Quadratic programming is the problem of finding a vector x that minimizes a quadratic function, possibly subject to linear constraints:

`$\underset{x}{\mathrm{min}}\frac{1}{2}{x}^{T}Hx+{c}^{T}x$`

such that A·x ≤ b, Aeq·x = beq, l ≤ x ≤ u.
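To make the notation concrete, here is a minimal NumPy sketch (illustrative only, not `quadprog` itself) of the simplest instance: when H is positive definite and there are no constraints, the minimizer satisfies Hx = –c.

```python
import numpy as np

# Hypothetical problem data: minimize 0.5*x'Hx + c'x with no constraints.
H = np.array([[2.0, 0.0],
              [0.0, 2.0]])
c = np.array([-2.0, -4.0])

# With H positive definite and no constraints, setting the gradient
# Hx + c to zero gives the minimizer directly.
x = np.linalg.solve(H, -c)   # x = [1, 2]
```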

### `interior-point-convex` `quadprog` Algorithm

The `interior-point-convex` algorithm performs the following steps:

### Note

The algorithm has two code paths. It takes one when the Hessian matrix H is an ordinary (full) matrix of doubles, and it takes the other when H is a sparse matrix. For details of the sparse data type, see Sparse Matrices (MATLAB). Generally, the algorithm is faster for large problems that have relatively few nonzero terms when you specify H as `sparse`. Similarly, the algorithm is faster for small or relatively dense problems when you specify H as `full`.

#### Presolve/Postsolve

The algorithm begins by attempting to simplify the problem by removing redundancies and simplifying constraints. The tasks performed during the presolve step include:

• Check if any variables have equal upper and lower bounds. If so, check for feasibility, and then fix and remove the variables.

• Check if any linear inequality constraint involves just one variable. If so, check for feasibility, and change the linear constraint to a bound.

• Check if any linear equality constraint involves just one variable. If so, check for feasibility, and then fix and remove the variable.

• Check if any linear constraint matrix has zero rows. If so, check for feasibility, and delete the rows.

• Check if the bounds and linear constraints are consistent.

• Check if any variables appear only as linear terms in the objective function and do not appear in any linear constraint. If so, check for feasibility and boundedness, and fix the variables at their appropriate bounds.

• Change any linear inequality constraints to linear equality constraints by adding slack variables.

If the algorithm detects an infeasible or unbounded problem, it halts and issues an appropriate exit message.

The algorithm might arrive at a single feasible point, which represents the solution.

If the algorithm does not detect an infeasible or unbounded problem in the presolve step, it continues, if necessary, with the other steps. At the end, the algorithm reconstructs the original problem, undoing any presolve transformations. This final step is the postsolve step.
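One of the presolve tasks listed above can be sketched as follows (a hypothetical Python/NumPy helper, not `quadprog`'s actual code): fixing and removing variables whose lower and upper bounds coincide leaves a smaller QP in the free variables with an adjusted linear term.

```python
import numpy as np

def presolve_fixed_vars(H, c, lb, ub, tol=1e-12):
    """Sketch of one presolve task: fix and remove variables whose
    lower and upper bounds coincide (hypothetical helper)."""
    fixed = np.abs(ub - lb) <= tol           # variables with equal bounds
    free = ~fixed
    xf = lb[fixed]                           # values the fixed variables take
    # Substituting the fixed values into 0.5*x'Hx + c'x leaves a smaller
    # QP in the free variables with an adjusted linear term.
    H_red = H[np.ix_(free, free)]
    c_red = c[free] + H[np.ix_(free, fixed)] @ xf
    return H_red, c_red, fixed, xf
```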

For details, see Gould and Toint [63].

#### Generate Initial Point

The initial point `x0` for the algorithm is:

1. Initialize `x0` to `ones(n,1)`, where `n` is the number of rows in H.

2. For components that have both an upper bound `ub` and a lower bound `lb`, if a component of `x0` is not strictly inside the bounds, the component is set to `(ub + lb)/2`.

3. For components that have only one bound, modify the component if necessary to lie strictly inside the bound.

4. Take a predictor step (see Predictor-Corrector), with minor corrections for feasibility, not a full predictor-corrector step. This places the initial point closer to the central path without entailing the overhead of a full predictor-corrector step. For details of the central path, see Nocedal and Wright [7], page 397.
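Steps 1–3 can be sketched as follows (Python/NumPy, illustrative only; the offset of 1 used to move strictly inside a single bound is an assumption, since the documentation does not specify the shift). The predictor step of step 4 is omitted.

```python
import numpy as np

def initial_point(H, lb, ub):
    """Sketch of steps 1-3 of the initial-point heuristic.
    The unit offset for one-sided bounds is an assumption."""
    n = H.shape[0]
    x0 = np.ones(n)
    for i in range(n):
        has_lb, has_ub = np.isfinite(lb[i]), np.isfinite(ub[i])
        if has_lb and has_ub:
            if not (lb[i] < x0[i] < ub[i]):
                x0[i] = (ub[i] + lb[i]) / 2.0   # midpoint of the bounds
        elif has_lb and x0[i] <= lb[i]:
            x0[i] = lb[i] + 1.0                 # move strictly inside
        elif has_ub and x0[i] >= ub[i]:
            x0[i] = ub[i] - 1.0                 # move strictly inside
    return x0
```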

#### Predictor-Corrector

The sparse and full `interior-point-convex` algorithms differ mainly in the predictor-corrector phase; they are otherwise similar. For the basic algorithm description, see Mehrotra [47].

Sparse Predictor-Corrector.  Similar to the `fmincon` interior-point algorithm, the sparse `interior-point-convex` algorithm tries to find a point where the Karush-Kuhn-Tucker (KKT) conditions hold. For the quadratic programming problem described in Quadratic Programming Definition, these conditions are:

`$\begin{array}{c}Hx+c-{A}_{eq}^{T}y-{\overline{A}}^{T}z=0\\ \overline{A}x-\overline{b}-s=0\\ {A}_{eq}x-{b}_{eq}=0\\ {s}_{i}{z}_{i}=0,\text{ }i=1,2,...,m\\ s\ge 0\\ z\ge 0.\end{array}$`

Here

• $\overline{A}$ is the extended linear inequality matrix that includes bounds written as linear inequalities. $\overline{b}$ is the corresponding linear inequality vector, including bounds.

• s is the vector of slacks that convert inequality constraints to equalities. s has length m, the number of linear inequalities and bounds.

• z is the vector of Lagrange multipliers corresponding to s.

• y is the vector of Lagrange multipliers associated with the equality constraints.

The algorithm first predicts a step from the Newton-Raphson formula, and then computes a corrector step. The corrector attempts to better enforce the nonlinear constraint ${s}_{i}{z}_{i}=0$.

Definitions for the predictor step:

• rd, the dual residual:

`${r}_{d}=Hx+c-{A}_{eq}^{T}y-{\overline{A}}^{T}z.$`
• req, the primal equality constraint residual:

`${r}_{eq}={A}_{eq}x-{b}_{eq}.$`
• rineq, the primal inequality constraint residual, which includes bounds and slacks:

`${r}_{ineq}=\overline{A}x-\overline{b}-s.$`
• rsz, the complementarity residual:

`${r}_{sz}=Sz.$`

S is the diagonal matrix of slack terms, and z is the column vector of Lagrange multipliers.

• rc, the average complementarity:

`${r}_{c}=\frac{{s}^{T}z}{m}.$`
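Under the sign conventions above, the residuals can be computed as in this illustrative Python/NumPy sketch (hypothetical helper and argument names, not `quadprog`'s code):

```python
import numpy as np

def residuals(H, c, Aeq, beq, Abar, bbar, x, s, y, z):
    """Sketch of the predictor-step residuals, following the
    definitions in the text (hypothetical helper)."""
    rd    = H @ x + c - Aeq.T @ y - Abar.T @ z   # dual residual
    req   = Aeq @ x - beq                        # primal equality residual
    rineq = Abar @ x - bbar - s                  # primal inequality residual
    rsz   = s * z                                # complementarity residual S z
    rc    = s @ z / len(s)                       # average complementarity
    return rd, req, rineq, rsz, rc
```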

In a Newton step, the changes in x, s, y, and z, are given by:

 $\left(\begin{array}{cccc}H& 0& -{A}_{eq}^{T}& -{\overline{A}}^{T}\\ {A}_{eq}& 0& 0& 0\\ \overline{A}& -I& 0& 0\\ 0& Z& 0& S\end{array}\right)\left(\begin{array}{c}\Delta x\\ \Delta s\\ \Delta y\\ \Delta z\end{array}\right)=-\left(\begin{array}{c}{r}_{d}\\ {r}_{eq}\\ {r}_{ineq}\\ {r}_{sz}\end{array}\right).$ (10-1)

However, a full Newton step might be infeasible, because of the positivity constraints on s and z. Therefore, `quadprog` shortens the step, if necessary, to maintain positivity.
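The block system of Equation 10-1 and the positivity-preserving step shortening can be sketched densely as follows (illustrative only: `quadprog` actually solves a reordered symmetric system, and the 0.99 fraction-to-boundary factor is an assumption):

```python
import numpy as np

def newton_step(H, Aeq, Abar, rd, req, rineq, rsz, s, z):
    """Sketch of Equation 10-1 assembled as a dense block matrix,
    plus a ratio test that shortens the step to keep s and z positive."""
    n, p, m = H.shape[0], Aeq.shape[0], Abar.shape[0]
    S, Z = np.diag(s), np.diag(z)
    K = np.block([
        [H,                np.zeros((n, m)), -Aeq.T,           -Abar.T],
        [Aeq,              np.zeros((p, m)), np.zeros((p, p)), np.zeros((p, m))],
        [Abar,             -np.eye(m),       np.zeros((m, p)), np.zeros((m, m))],
        [np.zeros((m, n)), Z,                np.zeros((m, p)), S],
    ])
    rhs = -np.concatenate([rd, req, rineq, rsz])
    d = np.linalg.solve(K, rhs)
    dx, ds, dy, dz = d[:n], d[n:n+m], d[n+m:n+m+p], d[n+m+p:]
    # Ratio test: largest alpha in (0, 1] keeping s + alpha*ds > 0 and
    # z + alpha*dz > 0 (0.99 backoff factor is an assumption).
    alpha = 1.0
    for vec, dvec in ((s, ds), (z, dz)):
        neg = dvec < 0
        if neg.any():
            alpha = min(alpha, 0.99 * np.min(-vec[neg] / dvec[neg]))
    return dx, ds, dy, dz, alpha
```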

Additionally, to maintain a “centered” position in the interior, instead of trying to solve ${s}_{i}{z}_{i}=0$, the algorithm takes a positive parameter σ, and tries to solve

`${s}_{i}{z}_{i}=\sigma {r}_{c}.$`

`quadprog` replaces ${r}_{sz}$ in the Newton step equation with ${r}_{sz}+\Delta s\Delta z-\sigma {r}_{c}\mathbf{1}$, where $\mathbf{1}$ is the vector of ones. Also, `quadprog` reorders the Newton equations to obtain a symmetric, more numerically stable system for the predictor step calculation.

After calculating the corrected Newton step, the algorithm performs more calculations to get both a longer current step, and to prepare for better subsequent steps. These multiple correction calculations can improve both performance and robustness. For details, see Gondzio [4].

Full Predictor-Corrector.  The full predictor-corrector algorithm does not combine bounds into linear constraints, so it has another set of slack variables corresponding to the bounds. The algorithm shifts lower bounds to zero. If a variable has only an upper bound, the algorithm converts it to a lower bound of zero by negating the inequality.

$\overline{A}$ is the extended linear matrix that includes both linear inequalities and linear equalities. $\overline{b}$ is the corresponding linear equality vector. $\overline{A}$ also includes terms for extending the vector x with slack variables s that turn inequality constraints to equality constraints:

`$\overline{A}x=\left(\begin{array}{cc}{A}_{eq}& 0\\ A& I\end{array}\right)\left(\begin{array}{c}{x}_{0}\\ s\end{array}\right),$`

where x0 means the original x vector.

The KKT conditions are

 `$\begin{array}{c}Hx+c-{\overline{A}}^{T}y-v+w=0\\ \overline{A}x=\overline{b}\\ x+t=u\\ {v}_{i}{x}_{i}=0\\ {w}_{i}{t}_{i}=0\\ x,v,w,t\ge 0,\end{array}$` (10-2)

where v and w are the vectors of Lagrange multipliers associated with the lower and upper bounds, respectively, and t is the vector of slacks for the upper bounds.

To find the solution x, along with the slack and dual variables, of Equation 10-2, the algorithm essentially considers a Newton-Raphson step:

 $\left(\begin{array}{ccccc}H& -{\overline{A}}^{T}& 0& -I& I\\ \overline{A}& 0& 0& 0& 0\\ -I& 0& -I& 0& 0\\ V& 0& 0& X& 0\\ 0& 0& W& 0& T\end{array}\right)\left(\begin{array}{c}\Delta x\\ \Delta y\\ \Delta t\\ \begin{array}{l}\Delta v\\ \Delta w\end{array}\end{array}\right)=-\left(\begin{array}{c}Hx+c-{\overline{A}}^{T}y-v+w\\ \overline{A}x-\overline{b}\\ u-x-t\\ \begin{array}{l}VX\\ WT\end{array}\end{array}\right)=-\left(\begin{array}{c}{r}_{d}\\ {r}_{p}\\ {r}_{ub}\\ \begin{array}{l}{r}_{vx}\\ {r}_{wt}\end{array}\end{array}\right),$ (10-3)

where X, V, W, and T are diagonal matrices corresponding to the vectors x, v, w, and t respectively. The residual vectors on the far right side of the equation are:

• rd, the dual residual

• rp, the primal residual

• rub, the upper bound residual

• rvx, the lower bound complementarity residual

• rwt, the upper bound complementarity residual

The algorithm solves Equation 10-3 by first converting it to the symmetric matrix form

 $\left(\begin{array}{cc}-D& {\overline{A}}^{T}\\ \overline{A}& 0\end{array}\right)\left(\begin{array}{c}\Delta x\\ \Delta y\end{array}\right)=-\left(\begin{array}{c}R\\ {r}_{p}\end{array}\right),$ (10-4)

where

`$\begin{array}{l}D=H+{X}^{-1}V+{T}^{-1}W\\ R=-{r}_{d}-{X}^{-1}{r}_{vx}+{T}^{-1}{r}_{wt}+{T}^{-1}W{r}_{ub}.\end{array}$`

All the matrix inverses in the definitions of D and R are simple to compute because the matrices are diagonal.

To derive Equation 10-4 from Equation 10-3, notice that the second row of Equation 10-4 is the same as the second matrix row of Equation 10-3. The first row of Equation 10-4 comes from solving the last two rows of Equation 10-3 for Δv and Δw, and then solving for Δt.

To solve Equation 10-4, the algorithm follows the essential elements of Altman and Gondzio [1]. The algorithm solves the symmetric system by an LDL decomposition. As pointed out by authors such as Vanderbei and Carpenter [2], this decomposition is numerically stable without any pivoting, so can be fast.
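Forming D and R and solving Equation 10-4 can be sketched as follows (dense NumPy illustration with hypothetical helper names; `quadprog` uses a sparse LDL' factorization rather than a dense solve). The diagonal inverses reduce to elementwise divisions:

```python
import numpy as np

def reduced_system_step(H, Abar, rd, rp, rub, rvx, rwt, x, v, t, w):
    """Sketch of forming D and R, then solving the symmetric system
    of Equation 10-4 with a dense factorization (illustrative only)."""
    D = H + np.diag(v / x) + np.diag(w / t)        # H + X^{-1}V + T^{-1}W
    R = -rd - rvx / x + rwt / t + (w / t) * rub    # all inverses are diagonal
    n, m = H.shape[0], Abar.shape[0]
    K = np.block([[-D, Abar.T],
                  [Abar, np.zeros((m, m))]])       # symmetric saddle system
    d = np.linalg.solve(K, -np.concatenate([R, rp]))
    return d[:n], d[n:]                            # (dx, dy)
```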

After calculating the corrected Newton step, the algorithm performs more calculations to get both a longer current step, and to prepare for better subsequent steps. These multiple correction calculations can improve both performance and robustness. For details, see Gondzio [4].

The full `quadprog` predictor-corrector algorithm is largely the same as that in the `linprog` `'interior-point'` algorithm, but includes quadratic terms as well. See Predictor-Corrector.

## References

[1] Altman, Anna, and J. Gondzio. “Regularized symmetric indefinite systems in interior point methods for linear and quadratic optimization.” Optimization Methods and Software, 1999.

[2] Vanderbei, R. J., and T. J. Carpenter. “Symmetric indefinite systems for interior point methods.” Mathematical Programming 58, 1993, pp. 1–32.

#### Stopping Conditions

The predictor-corrector algorithm iterates until it reaches a point that is feasible (satisfies the constraints to within tolerances) and where the relative step sizes are small. Specifically, define

`$\rho =\mathrm{max}\left(1,‖H‖,‖\overline{A}‖,‖{A}_{eq}‖,‖c‖,‖\overline{b}‖,‖{b}_{eq}‖\right)$`

The algorithm stops when all of these conditions are satisfied:

`$\begin{array}{c}{‖{r}_{p}‖}_{1}+{‖{r}_{ub}‖}_{1}\le \rho \text{TolCon}\\ {‖{r}_{d}‖}_{\infty }\le \rho \text{TolFun}\\ {r}_{c}\le \text{TolFun,}\end{array}$`

where

`${r}_{c}=\underset{i}{\mathrm{max}}\left(\mathrm{min}\left(|{x}_{i}{v}_{i}|,|{x}_{i}|,|{v}_{i}|\right),\mathrm{min}\left(|{t}_{i}{w}_{i}|,|{t}_{i}|,|{w}_{i}|\right)\right).$`

rc essentially measures the size of the complementarity residuals xv and tw, which are each vectors of zeros at a solution.
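The stopping test can be sketched as follows (illustrative; Frobenius norms stand in for whatever matrix norm the implementation uses, and the tolerance arguments mirror `TolCon` and `TolFun`):

```python
import numpy as np

def converged(H, Abar, Aeq, c, bbar, beq, rp, rub, rd, rc,
              tol_con=1e-8, tol_fun=1e-8):
    """Sketch of the stopping test: the norms of the problem data
    define the scale factor rho (illustrative helper)."""
    rho = max(1.0, np.linalg.norm(H), np.linalg.norm(Abar),
              np.linalg.norm(Aeq), np.linalg.norm(c),
              np.linalg.norm(bbar), np.linalg.norm(beq))
    return (np.linalg.norm(rp, 1) + np.linalg.norm(rub, 1) <= rho * tol_con
            and np.linalg.norm(rd, np.inf) <= rho * tol_fun
            and rc <= tol_fun)
```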

#### Infeasibility Detection

`quadprog` calculates a merit function φ at every iteration. The merit function is a measure of feasibility. `quadprog` stops if the merit function grows too large. In this case, `quadprog` declares the problem to be infeasible.

The merit function is related to the KKT conditions for the problem—see Predictor-Corrector. Use the following definitions:

`$\begin{array}{c}\rho =\mathrm{max}\left(1,‖H‖,‖\overline{A}‖,‖{A}_{eq}‖,‖c‖,‖\overline{b}‖,‖{b}_{eq}‖\right)\\ {r}_{\text{eq}}={A}_{\text{eq}}x-{b}_{\text{eq}}\\ {r}_{\text{ineq}}=\overline{A}x-\overline{b}+s\\ {r}_{\text{d}}=Hx+c+{A}_{\text{eq}}^{T}{\lambda }_{\text{eq}}+{\overline{A}}^{T}{\overline{\lambda }}_{\text{ineq}}\\ g={x}^{T}Hx+{c}^{T}x-{\overline{b}}^{T}{\overline{\lambda }}_{\text{ineq}}-{b}_{\text{eq}}^{T}{\lambda }_{\text{eq}}.\end{array}$`

The notation $\overline{A}$ and $\overline{b}$ means the linear inequality coefficients, augmented with terms to represent bounds for the sparse algorithm. The notation ${\overline{\lambda }}_{\text{ineq}}$ similarly represents Lagrange multipliers for the linear inequality constraints, including bound constraints. This was called z in Predictor-Corrector, and ${\lambda }_{\text{eq}}$ was called y.

The merit function φ is

`$\frac{1}{\rho }\left(\mathrm{max}\left({‖{r}_{\text{eq}}‖}_{\infty },{‖{r}_{\text{ineq}}‖}_{\infty },{‖{r}_{\text{d}}‖}_{\infty }\right)+g\right).$`

If this merit function becomes too large, `quadprog` declares the problem to be infeasible and halts with exit flag `-2`.
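The merit function can be computed as in this illustrative sketch (Python/NumPy, hypothetical helper; c is used for the linear objective coefficient as in the problem definition, and Frobenius norms stand in for the implementation's norm choice):

```python
import numpy as np

def merit(H, c, Abar, bbar, Aeq, beq, x, s, lam_ineq, lam_eq):
    """Sketch of the merit function phi used for infeasibility
    detection, following the definitions in the text."""
    rho = max(1.0, np.linalg.norm(H), np.linalg.norm(Abar),
              np.linalg.norm(Aeq), np.linalg.norm(c),
              np.linalg.norm(bbar), np.linalg.norm(beq))
    req = Aeq @ x - beq
    rineq = Abar @ x - bbar + s
    rd = H @ x + c + Aeq.T @ lam_eq + Abar.T @ lam_ineq
    g = x @ H @ x + c @ x - bbar @ lam_ineq - beq @ lam_eq
    return (max(np.linalg.norm(req, np.inf),
                np.linalg.norm(rineq, np.inf),
                np.linalg.norm(rd, np.inf)) + g) / rho
```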

### `trust-region-reflective` `quadprog` Algorithm

Many of the methods used in Optimization Toolbox™ solvers are based on trust regions, a simple yet powerful concept in optimization.

To understand the trust-region approach to optimization, consider the unconstrained minimization problem, minimize f(x), where the function takes vector arguments and returns scalars. Suppose you are at a point x in n-space and you want to improve, i.e., move to a point with a lower function value. The basic idea is to approximate f with a simpler function q, which reasonably reflects the behavior of function f in a neighborhood N around the point x. This neighborhood is the trust region. A trial step s is computed by minimizing (or approximately minimizing) q over N. This is the trust-region subproblem,

 $\underset{s}{\mathrm{min}}\left\{q\left(s\right),\text{ }s\in N\right\}.$ (10-5)

The current point is updated to be x + s if f(x + s) < f(x); otherwise, the current point remains unchanged and N, the region of trust, is shrunk and the trial step computation is repeated.

The key questions in defining a specific trust-region approach to minimizing f(x) are how to choose and compute the approximation q (defined at the current point x), how to choose and modify the trust region N, and how accurately to solve the trust-region subproblem. This section focuses on the unconstrained problem. Later sections discuss additional complications due to the presence of constraints on the variables.

In the standard trust-region method ([48]), the quadratic approximation q is defined by the first two terms of the Taylor approximation to f at x; the neighborhood N is usually spherical or ellipsoidal in shape. Mathematically, the trust-region subproblem is typically stated

 $\mathrm{min}\left\{\frac{1}{2}{s}^{T}Hs+{s}^{T}g\text{ such that }‖Ds‖\le \Delta \right\},$ (10-6)

where g is the gradient of f at the current point x, H is the Hessian matrix (the symmetric matrix of second derivatives), D is a diagonal scaling matrix, Δ is a positive scalar, and ∥ . ∥ is the 2-norm. Good algorithms exist for solving Equation 10-6 (see [48]); such algorithms typically involve the computation of a full eigensystem and a Newton process applied to the secular equation

`$\frac{1}{\Delta }-\frac{1}{‖s‖}=0.$`

Such algorithms provide an accurate solution to Equation 10-6. However, they require time proportional to several factorizations of H. Therefore, for large-scale problems a different approach is needed. Several approximation and heuristic strategies, based on Equation 10-6, have been proposed in the literature ([42] and [50]). The approximation approach followed in Optimization Toolbox solvers is to restrict the trust-region subproblem to a two-dimensional subspace S ([39] and [42]). Once the subspace S has been computed, the work to solve Equation 10-6 is trivial even if full eigenvalue/eigenvector information is needed (since in the subspace, the problem is only two-dimensional). The dominant work has now shifted to the determination of the subspace.

The two-dimensional subspace S is determined with the aid of a preconditioned conjugate gradient process described below. The solver defines S as the linear space spanned by s1 and s2, where s1 is in the direction of the gradient g, and s2 is either an approximate Newton direction, i.e., a solution to

 $H\cdot {s}_{2}=-g,$ (10-7)

or a direction of negative curvature,

 ${s}_{2}^{T}\cdot H\cdot {s}_{2}<0.$ (10-8)

The philosophy behind this choice of S is to force global convergence (via the steepest descent direction or negative curvature direction) and achieve fast local convergence (via the Newton step, when it exists).
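The two-dimensional restriction can be illustrated with the following Python/NumPy sketch, assuming a positive definite H (so s2 is the Newton direction). The real solvers solve the 2-D problem exactly via an eigenvalue computation; here the trust-region boundary is simply sampled densely, which suffices to show why the projected problem is cheap:

```python
import numpy as np

def two_dim_trust_region(H, g, delta):
    """Illustrative solve of the trust-region subproblem restricted
    to the 2-D subspace S = span{g, H^{-1}g} (assumes H pos. def.)."""
    s1 = g / np.linalg.norm(g)                    # steepest-descent direction
    sN = np.linalg.solve(H, -g)                   # approximate Newton direction
    s2 = sN - (sN @ s1) * s1                      # Gram-Schmidt orthogonalization
    nrm = np.linalg.norm(s2)
    B = np.stack([s1, s2 / nrm], axis=1) if nrm > 1e-12 else s1[:, None]
    Hs, gs = B.T @ H @ B, B.T @ g                 # project model onto S
    q = lambda y: 0.5 * y @ Hs @ y + gs @ y       # tiny 2-D quadratic model
    cands = []
    y = np.linalg.solve(Hs, -gs)                  # interior stationary point
    if np.linalg.norm(y) <= delta:
        cands.append(y)
    for th in np.linspace(0.0, 2 * np.pi, 721):   # sampled boundary ||y|| = delta
        cands.append(delta * np.array([np.cos(th), np.sin(th)])[:B.shape[1]])
    return B @ min(cands, key=q)                  # lift best step back to R^n
```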

A sketch of unconstrained minimization using trust-region ideas is now easy to give:

1. Formulate the two-dimensional trust-region subproblem.

2. Solve Equation 10-6 to determine the trial step s.

3. If f(x + s) < f(x), then x = x + s.

4. Adjust Δ.

These four steps are repeated until convergence. The trust-region dimension Δ is adjusted according to standard rules. In particular, it is decreased if the trial step is not accepted, i.e., f(x + s) ≥ f(x). See [46] and [49] for a discussion of this aspect.
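The trust-region iteration sketched above can be written as a short loop (illustrative Python/NumPy; the crudely clipped Newton step, the doubling/halving rules, and the cap on Δ are assumptions standing in for the "standard rules"):

```python
import numpy as np

def trust_region_minimize(f, grad, hess, x, delta=1.0, iters=50):
    """Sketch of the four-step trust-region loop: build the model, take
    a trial step within the region, accept on decrease, adjust delta."""
    for _ in range(iters):
        g, H = grad(x), hess(x)
        s = np.linalg.solve(H, -g)           # model minimizer (assumes H pos. def.)
        ns = np.linalg.norm(s)
        if ns > delta:
            s = s * (delta / ns)             # stay inside the trust region
        if f(x + s) < f(x):
            x = x + s                        # accept the trial step
            delta = min(2.0 * delta, 1e3)    # expand the region (assumed rule)
        else:
            delta = 0.5 * delta              # reject and shrink (assumed rule)
    return x
```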

Optimization Toolbox solvers treat a few important special cases of f with specialized functions: nonlinear least-squares, quadratic functions, and linear least-squares. However, the underlying algorithmic ideas are the same as for the general case. These special cases are discussed in later sections.

The subspace trust-region method is used to determine a search direction. However, instead of restricting the step to (possibly) one reflection step, as in the nonlinear minimization case, a piecewise reflective line search is conducted at each iteration. See [45] for details of the line search.

A popular way to solve large, symmetric, positive definite systems of linear equations Hp = –g is the method of Preconditioned Conjugate Gradients (PCG). This iterative approach requires the ability to calculate matrix-vector products of the form H·v, where v is an arbitrary vector. The symmetric positive definite matrix M is a preconditioner for H. That is, $M={C}^{2}$, where ${C}^{-1}H{C}^{-1}$ is a well-conditioned matrix or a matrix with clustered eigenvalues.

In a minimization context, you can assume that the Hessian matrix H is symmetric. However, H is guaranteed to be positive definite only in the neighborhood of a strong minimizer. Algorithm PCG exits when it encounters a direction of negative (or zero) curvature, that is, ${d}^{T}Hd\le 0$. The PCG output direction p is either a direction of negative curvature or an approximate solution to the Newton system Hp = –g (tol controls how approximate). In either case, p is used to help define the two-dimensional subspace used in the trust-region approach discussed in Trust-Region Methods for Nonlinear Minimization.
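A PCG iteration with this negative-curvature exit can be sketched as follows (illustrative Python/NumPy; `M_inv` applies the inverse preconditioner, and the helper name is hypothetical):

```python
import numpy as np

def pcg(H, g, M_inv, tol=1e-10, maxit=200):
    """Sketch of preconditioned conjugate gradients for Hp = -g that
    exits when a direction of nonpositive curvature d'Hd <= 0 appears."""
    n = len(g)
    p = np.zeros(n)
    r = -g - H @ p                       # residual of Hp = -g
    y = M_inv(r)                         # preconditioned residual
    d = y.copy()
    for _ in range(maxit):
        curv = d @ H @ d
        if curv <= 0:
            return d, False              # negative-curvature direction found
        alpha = (r @ y) / curv
        p = p + alpha * d
        r_new = r - alpha * (H @ d)
        if np.linalg.norm(r_new) <= tol:
            return p, True               # approximate Newton solution
        y_new = M_inv(r_new)
        beta = (r_new @ y_new) / (r @ y)
        d = y_new + beta * d
        r, y = r_new, y_new
    return p, True
```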

#### Linear Equality Constraints

Linear constraints complicate the situation described for unconstrained minimization. However, the underlying ideas described previously can be carried through in a clean and efficient way. The trust-region methods in Optimization Toolbox solvers generate strictly feasible iterates.

The general linear equality constrained minimization problem can be written

 $\underset{x}{\mathrm{min}}\left\{f\left(x\right)\text{ such that }Ax=b\right\},$ (10-9)

where A is an m-by-n matrix (m ≤ n). Some Optimization Toolbox solvers preprocess A to remove strict linear dependencies using a technique based on the LU factorization of ${A}^{T}$ [46]. Here A is assumed to be of rank m.

The method used to solve Equation 10-9 differs from the unconstrained approach in two significant ways. First, an initial feasible point x0 is computed, using a sparse least-squares step, so that Ax0 = b. Second, Algorithm PCG is replaced with Reduced Preconditioned Conjugate Gradients (RPCG), see [46], in order to compute an approximate reduced Newton step (or a direction of negative curvature in the null space of A). The key linear algebra step involves solving systems of the form

 $\left[\begin{array}{cc}C& {\stackrel{˜}{A}}^{T}\\ \stackrel{˜}{A}& 0\end{array}\right]\left[\begin{array}{c}s\\ t\end{array}\right]=\left[\begin{array}{c}r\\ 0\end{array}\right],$ (10-10)

where $\stackrel{˜}{A}$ approximates A (small nonzeros of A are set to zero, provided the rank is not lost) and C is a sparse symmetric positive-definite approximation to H, i.e., C ≈ H. See [46] for more details.

#### Box Constraints

The box constrained problem is of the form

 $\underset{x}{\mathrm{min}}\left\{f\left(x\right)\text{ such that }l\le x\le u\right\},$ (10-11)

where l is a vector of lower bounds, and u is a vector of upper bounds. Some (or all) of the components of l can be equal to –∞ and some (or all) of the components of u can be equal to ∞. The method generates a sequence of strictly feasible points. Two techniques are used to maintain feasibility while achieving robust convergence behavior. First, a scaled modified Newton step replaces the unconstrained Newton step (to define the two-dimensional subspace S). Second, reflections are used to increase the step size.

The scaled modified Newton step arises from examining the Kuhn-Tucker necessary conditions for Equation 10-11,

 ${\left(D\left(x\right)\right)}^{-2}g=0,$ (10-12)

where

`$D\left(x\right)=\text{diag}\left({|{v}_{k}|}^{-1/2}\right),$`

and the vector v(x) is defined below, for each 1 ≤ i ≤ n:

• If gi < 0 and ui < ∞ then vi = xi – ui

• If gi ≥ 0 and li > –∞ then vi = xi – li

• If gi < 0 and ui = ∞ then vi = –1

• If gi ≥ 0 and li = –∞ then vi = 1

The nonlinear system Equation 10-12 is not differentiable everywhere. Nondifferentiability occurs when vi = 0. You can avoid such points by maintaining strict feasibility, i.e., restricting l < x < u.
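The four-case definition of v(x) and the scaling matrix D(x) can be sketched as follows (illustrative Python/NumPy, hypothetical helper):

```python
import numpy as np

def scaling(x, g, l, u):
    """Sketch of the case analysis defining v(x) and
    D(x) = diag(|v|^(-1/2)) for strictly feasible x."""
    v = np.empty_like(x)
    for i in range(len(x)):
        if g[i] < 0:
            v[i] = x[i] - u[i] if np.isfinite(u[i]) else -1.0
        else:
            v[i] = x[i] - l[i] if np.isfinite(l[i]) else 1.0
    D = np.diag(np.abs(v) ** -0.5)
    return v, D
```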

The scaled modified Newton step sk for the nonlinear system of equations given by Equation 10-12 is defined as the solution to the linear system

 $\stackrel{^}{M}D{s}^{N}=-\stackrel{^}{g}$ (10-13)

at the kth iteration, where

 $\stackrel{^}{g}={D}^{-1}g=\text{diag}\left({|v|}^{1/2}\right)g,$ (10-14)

and

 $\stackrel{^}{M}={D}^{-1}H{D}^{-1}+\text{diag}\left(g\right){J}^{v}.$ (10-15)

Here Jv plays the role of the Jacobian of |v|. Each diagonal component of the diagonal matrix Jv equals 0, –1, or 1. If all the components of l and u are finite, Jv = diag(sign(g)). At a point where gi = 0, vi might not be differentiable. ${J}_{ii}^{v}=0$ is defined at such a point. Nondifferentiability of this type is not a cause for concern because, for such a component, it is not significant which value vi takes. Further, |vi| will still be discontinuous at this point, but the function |vi|·gi is continuous.

Second, reflections are used to increase the step size. A (single) reflection step is defined as follows. Given a step p that intersects a bound constraint, consider the first bound constraint crossed by p; assume it is the ith bound constraint (either the ith upper or ith lower bound). Then the reflection step pR = p except in the ith component, where pRi = –pi.
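A single reflection step can be sketched as follows (illustrative Python/NumPy, hypothetical helper; x is assumed strictly feasible):

```python
import numpy as np

def reflect(p, x, l, u):
    """Sketch of a single reflection: find the first bound crossed
    along p from x, then flip the sign of that component of p."""
    with np.errstate(divide='ignore', invalid='ignore'):
        t_u = np.where(p > 0, (u - x) / p, np.inf)   # time to hit each upper bound
        t_l = np.where(p < 0, (l - x) / p, np.inf)   # time to hit each lower bound
    t = np.minimum(t_u, t_l)
    i = int(np.argmin(t))                            # first bound crossed
    pr = p.copy()
    if np.isfinite(t[i]) and t[i] < 1.0:             # p actually intersects a bound
        pr[i] = -pr[i]
    return pr
```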