Quadratic Programming Algorithms

Quadratic Programming Definition

Quadratic programming is the problem of finding a vector x that minimizes a quadratic function, possibly subject to linear constraints:

$$\min_x \tfrac{1}{2}x^T H x + c^T x$$

such that A·x ≤ b, Aeq·x = beq, l ≤ x ≤ u.
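
For example, the following call solves a small instance of this problem (a minimal sketch with illustrative data; note that quadprog names the linear term f rather than c):

    H = [2 0; 0 2];  c = [-4; -6];     % objective: 0.5*x'*H*x + c'*x
    A = [1 1];  b = 1;                 % linear inequality A*x <= b
    lb = [0; 0];  ub = [10; 10];       % bounds l <= x <= u
    x = quadprog(H, c, A, b, [], [], lb, ub)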

interior-point-convex quadprog Algorithm

The interior-point-convex algorithm performs the following steps, each described in a section below:

  1. Presolve/Postsolve

  2. Generate Initial Point

  3. Predictor-Corrector

  4. Stopping Conditions

  5. Infeasibility Detection

    Note:   The algorithm has two code paths. It takes one when the Hessian matrix H is an ordinary (full) matrix of doubles, and it takes the other when H is a sparse matrix. For details of the sparse data type, see Sparse Matrices (MATLAB). Generally, the algorithm is faster for large problems that have relatively few nonzero terms when you specify H as sparse. Similarly, the algorithm is faster for small or relatively dense problems when you specify H as full.
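
For instance, passing H as a sparse matrix selects the sparse code path (a sketch with illustrative data):

    n = 2000;
    H = speye(n);                        % sparse Hessian selects the sparse path
    f = -ones(n, 1);
    lb = zeros(n, 1);  ub = ones(n, 1);
    x = quadprog(H, f, [], [], [], [], lb, ub);
    % Passing full(H) instead would select the full (dense) code path.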


Presolve/Postsolve

The algorithm begins by attempting to simplify the problem by removing redundancies and simplifying constraints. The tasks performed during the presolve step include:

  • Check if any variables have equal upper and lower bounds. If so, check for feasibility, and then fix and remove the variables.

  • Check if any linear inequality constraint involves just one variable. If so, check for feasibility, and change the linear constraint to a bound.

  • Check if any linear equality constraint involves just one variable. If so, check for feasibility, and then fix and remove the variable.

  • Check if any linear constraint matrix has zero rows. If so, check for feasibility, and delete the rows.

  • Check if the bounds and linear constraints are consistent.

  • Check if any variables appear only as linear terms in the objective function and do not appear in any linear constraint. If so, check for feasibility and boundedness, and fix the variables at their appropriate bounds.

  • Change any linear inequality constraints to linear equality constraints by adding slack variables (see the sketch following this list).
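
A minimal sketch of that slack-variable conversion, assuming A and b hold the remaining inequalities A·x ≤ b:

    [m, n] = size(A);
    Aaug = [A, eye(m)];     % augmented matrix acting on the vector [x; s]
    % The inequality A*x <= b becomes the equality Aaug*[x; s] = b with s >= 0.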

If the algorithm detects an infeasible or unbounded problem during presolve, it halts and issues an appropriate exit message.

The presolve step might also reduce the problem to a single feasible point, which then represents the solution.

If the algorithm does not detect an infeasible or unbounded problem in the presolve step, it continues, if necessary, with the other steps. At the end, the algorithm reconstructs the original problem, undoing any presolve transformations. This final step is the postsolve step.

For details, see Gould and Toint [63].

Generate Initial Point

The algorithm constructs the initial point x0 as follows (steps 1–3 are sketched in code after the list):

  1. Initialize x0 to ones(n,1), where n is the number of rows in H.

  2. For components that have both an upper bound ub and a lower bound lb, if a component of x0 is not strictly inside the bounds, the component is set to (ub + lb)/2.

  3. For components that have only one bound, modify the component if necessary to lie strictly inside the bound.

  4. Take a predictor step (see Predictor-Corrector), with minor corrections for feasibility, not a full predictor-corrector step. This places the initial point closer to the central path without entailing the overhead of a full predictor-corrector step. For details of the central path, see Nocedal and Wright [7], page 397.
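
The following is a minimal sketch of steps 1–3 (not the quadprog source); it assumes lb and ub are vectors that use -Inf and Inf for absent bounds:

    n  = size(H, 1);
    x0 = ones(n, 1);                         % step 1
    for i = 1:n
        if isfinite(lb(i)) && isfinite(ub(i))
            if x0(i) <= lb(i) || x0(i) >= ub(i)
                x0(i) = (ub(i) + lb(i))/2;   % step 2: center between bounds
            end
        elseif isfinite(lb(i)) && x0(i) <= lb(i)
            x0(i) = lb(i) + 1;               % step 3: strictly inside lower bound
        elseif isfinite(ub(i)) && x0(i) >= ub(i)
            x0(i) = ub(i) - 1;               % step 3: strictly inside upper bound
        end
    end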


Predictor-Corrector

The sparse and full variants of the interior-point-convex algorithm differ mainly in the predictor-corrector phase. The variants are similar, but differ in some details. For the basic algorithm description, see Mehrotra [47].

Sparse Predictor-Corrector.  Similar to the fmincon interior-point algorithm, the sparse interior-point-convex algorithm tries to find a point where the Karush-Kuhn-Tucker (KKT) conditions hold. For the quadratic programming problem described in Quadratic Programming Definition, these conditions are:

$$\begin{aligned}
Hx + c - A_{eq}^T y - \bar{A}^T z &= 0\\
\bar{A}x - \bar{b} - s &= 0\\
A_{eq}x - b_{eq} &= 0\\
s_i z_i &= 0, \quad i = 1,2,\ldots,m\\
s &\ge 0\\
z &\ge 0.
\end{aligned}$$


  • A¯ is the extended linear inequality matrix that includes bounds written as linear inequalities. b¯ is the corresponding linear inequality vector, including bounds.

  • s is the vector of slacks that convert inequality constraints to equalities. s has length m, the number of linear inequalities and bounds.

  • z is the vector of Lagrange multipliers corresponding to s.

  • y is the vector of Lagrange multipliers associated with the equality constraints.

The algorithm first predicts a step from the Newton-Raphson formula, then computes a corrector step. The corrector attempts to better enforce the nonlinear constraint sizi = 0.

Definitions for the predictor step:

  • rd, the dual residual:

    $$r_d = Hx + c - A_{eq}^T y - \bar{A}^T z.$$

  • req, the primal equality constraint residual:

    $$r_{eq} = A_{eq}x - b_{eq}.$$

  • rineq, the primal inequality constraint residual, which includes bounds and slacks:

    $$r_{ineq} = \bar{A}x - \bar{b} - s.$$

  • rsz, the complementarity residual:

    $$r_{sz} = Sz.$$

    S is the diagonal matrix of slack terms, and z is the column vector of Lagrange multipliers.

  • rc, the average complementarity:

    $$r_c = \frac{s^T z}{m}.$$
In a Newton step, the changes in x, s, y, and z are given by:

$$\begin{pmatrix} H & 0 & -A_{eq}^T & -\bar{A}^T \\ \bar{A} & -I & 0 & 0 \\ A_{eq} & 0 & 0 & 0 \\ 0 & Z & 0 & S \end{pmatrix} \begin{pmatrix} \Delta x \\ \Delta s \\ \Delta y \\ \Delta z \end{pmatrix} = -\begin{pmatrix} r_d \\ r_{ineq} \\ r_{eq} \\ r_{sz} \end{pmatrix}.$$
However, a full Newton step might be infeasible, because of the positivity constraints on s and z. Therefore, quadprog shortens the step, if necessary, to maintain positivity.
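
A common way to implement this shortening is a ratio test; the following sketch (assuming step components ds and dz have already been computed) keeps s and z strictly positive:

    alphaS = min([1; -s(ds < 0) ./ ds(ds < 0)]);   % largest step keeping s > 0
    alphaZ = min([1; -z(dz < 0) ./ dz(dz < 0)]);   % largest step keeping z > 0
    alpha  = 0.995 * min(alphaS, alphaZ);          % back off to stay strictly interior
    s = s + alpha*ds;
    z = z + alpha*dz;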

Additionally, to maintain a "centered" position in the interior, instead of trying to solve $s_i z_i = 0$, the algorithm takes a positive parameter σ and tries to solve

$$s_i z_i = \sigma r_c.$$

quadprog replaces $r_{sz}$ in the Newton step equation with $r_{sz} + \Delta s \Delta z - \sigma r_c \mathbf{1}$, where $\mathbf{1}$ is the vector of ones. Also, quadprog reorders the Newton equations to obtain a symmetric, more numerically stable system for the predictor step calculation.

After calculating the corrected Newton step, the algorithm performs more calculations to get both a longer current step, and to prepare for better subsequent steps. These multiple correction calculations can improve both performance and robustness. For details, see Gondzio [4].

Full Predictor-Corrector.  The full predictor-corrector algorithm does not combine bounds into linear constraints, so it has another set of slack variables corresponding to the bounds. The algorithm shifts lower bounds to zero. If a variable has only an upper bound, the algorithm negates the corresponding inequality to turn it into a lower bound of zero.

A¯ is the extended linear matrix that includes both linear inequalities and linear equalities. b¯ is the corresponding linear equality vector. A¯ also includes terms for extending the vector x with slack variables s that turn inequality constraints to equality constraints:

$$\bar{A}x = \begin{pmatrix} A_{eq} & 0 \\ A & I \end{pmatrix} \begin{pmatrix} x_0 \\ s \end{pmatrix}, \qquad \bar{b} = \begin{pmatrix} b_{eq} \\ b \end{pmatrix},$$

where x0 means the original x vector.

The KKT conditions are

$$\begin{aligned}
Hx + c - \bar{A}^T y - v + w &= 0\\
\bar{A}x &= \bar{b}\\
x + t &= u\\
v_i x_i &= 0, \quad i = 1,2,\ldots,m\\
w_i t_i &= 0, \quad i = 1,2,\ldots,n\\
x, v, w, t &\ge 0.
\end{aligned} \qquad (9\text{-}2)$$

To find the solution x, slack variables, and dual variables for Equation 9-2, the algorithm basically considers a Newton-Raphson step:

$$\begin{pmatrix} H & 0 & -I & I & -\bar{A}^T \\ \bar{A} & 0 & 0 & 0 & 0 \\ I & I & 0 & 0 & 0 \\ V & 0 & X & 0 & 0 \\ 0 & W & 0 & T & 0 \end{pmatrix} \begin{pmatrix} \Delta x \\ \Delta t \\ \Delta v \\ \Delta w \\ \Delta y \end{pmatrix} = -\begin{pmatrix} r_d \\ r_p \\ r_{ub} \\ r_{vx} \\ r_{wt} \end{pmatrix}, \qquad (9\text{-}3)$$
where X, V, W, and T are diagonal matrices corresponding to the vectors x, v, w, and t respectively. The residual vectors on the far right side of the equation are:

  • rd, the dual residual

  • rp, the primal residual

  • rub, the upper bound residual

  • rvx, the lower bound complementarity residual

  • rwt, the upper bound complementarity residual

The algorithm solves Equation 9-3 by first converting it to the symmetric matrix form

$$\begin{pmatrix} D & -\bar{A}^T \\ -\bar{A} & 0 \end{pmatrix} \begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix} = \begin{pmatrix} R \\ r_p \end{pmatrix}, \qquad (9\text{-}4)$$

where

$$D = H + X^{-1}V + T^{-1}W$$

$$R = -r_d - X^{-1}r_{vx} + T^{-1}r_{wt} - T^{-1}W\, r_{ub}.$$
All the matrix inverses in the definitions of D and R are simple to compute because the matrices are diagonal.
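
For example, under the definitions above, forming D requires only elementwise divisions (a sketch; x, v, t, and w are the current iterate vectors, and H is sparse):

    n = numel(x);
    D = H + spdiags(v./x + w./t, 0, n, n);   % H + X^{-1}*V + T^{-1}*W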

To derive Equation 9-4 from Equation 9-3, notice that the second row of Equation 9-4 is the same as the second matrix row of Equation 9-3. The first row of Equation 9-4 comes from solving the last two rows of Equation 9-3 for Δv and Δw, and then solving for Δt.

To solve Equation 9-4, the algorithm follows the essential elements of Altman and Gondzio [1]. The algorithm solves the symmetric system by an LDL decomposition. As pointed out by authors such as Vanderbei and Carpenter [2], this decomposition is numerically stable without any pivoting, so it can be fast.
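
MATLAB's ldl function computes this kind of factorization; a sketch of using it to solve the system, where K and rhs stand for the matrix and right-hand side of Equation 9-4:

    [L, Dmat, P] = ldl(K);                        % P'*K*P = L*Dmat*L'
    sol = P * (L' \ (Dmat \ (L \ (P' * rhs))));   % solve K*sol = rhs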

After calculating the corrected Newton step, the algorithm performs more calculations to get both a longer current step, and to prepare for better subsequent steps. These multiple correction calculations can improve both performance and robustness. For details, see Gondzio [4].

The full quadprog predictor-corrector algorithm is largely the same as that in the linprog 'interior-point' algorithm, but includes quadratic terms as well. See Predictor-Corrector.


[1] Altman, Anna, and J. Gondzio. "Regularized symmetric indefinite systems in interior point methods for linear and quadratic optimization." Optimization Methods and Software, 1999.

[2] Vanderbei, R. J., and T. J. Carpenter. "Symmetric indefinite systems for interior point methods." Mathematical Programming 58, 1993, pp. 1–32.

Stopping Conditions

The predictor-corrector algorithm iterates until it reaches a point that is feasible (satisfies the constraints to within tolerances) and where the relative step sizes are small. Specifically, define

$$\rho = \max\left(1, \|H\|, \|\bar{A}\|, \|A_{eq}\|, \|c\|, \|\bar{b}\|, \|b_{eq}\|\right).$$

The algorithm stops when all of these conditions are satisfied:

$$\|r_p\|_1 + \|r_{ub}\|_1 \le \rho \cdot \text{TolCon}$$

$$\|r_d\|_\infty \le \rho \cdot \text{TolFun}$$

$$r_c \le \text{TolFun},$$

where

$$r_c = \max_i \left( \min\left(|x_i v_i|, |x_i|, |v_i|\right), \min\left(|t_i w_i|, |t_i|, |w_i|\right) \right).$$

rc essentially measures the size of the complementarity residuals xv and tw, which are each vectors of zeros at a solution.
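
TolFun and TolCon correspond to the quadprog options OptimalityTolerance and ConstraintTolerance (older releases use the names TolFun and TolCon directly); for example:

    opts = optimoptions('quadprog', ...
        'OptimalityTolerance', 1e-10, 'ConstraintTolerance', 1e-10);
    x = quadprog(H, f, A, b, [], [], [], [], [], opts);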

Infeasibility Detection

quadprog calculates a merit function φ at every iteration. The merit function is a measure of feasibility. quadprog stops if the merit function grows too large. In this case, quadprog declares the problem to be infeasible.

The merit function is related to the KKT conditions for the problem—see Predictor-Corrector. Use the following definitions, consistent with the KKT conditions stated there:

$$r_{eq} = A_{eq}x - b_{eq}$$

$$r_{ineq} = \bar{A}x - \bar{b} - s$$

$$r_d = Hx + c - A_{eq}^T \lambda_{eq} - \bar{A}^T \bar{\lambda}_{ineq}.$$

The notation A¯ and b¯ means the linear inequality coefficients, augmented with terms to represent bounds for the sparse algorithm. The notation λ¯ineq similarly represents Lagrange multipliers for the linear inequality constraints, including bound constraints. λ¯ineq was called z in Predictor-Corrector, and λeq was called y.

If the merit function φ becomes too large, quadprog declares the problem to be infeasible and halts with exit flag -2.
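
For example, the following small problem has contradictory inequalities, so quadprog halts with exitflag -2:

    H = eye(2);  f = [0; 0];
    A = [1 0; -1 0];  b = [-1; -1];   % requires x(1) <= -1 and x(1) >= 1
    [x, fval, exitflag] = quadprog(H, f, A, b);
    % exitflag is -2: no feasible point exists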

trust-region-reflective quadprog Algorithm

Many of the methods used in Optimization Toolbox™ solvers are based on trust regions, a simple yet powerful concept in optimization.

To understand the trust-region approach to optimization, consider the unconstrained minimization problem, minimize f(x), where the function takes vector arguments and returns scalars. Suppose you are at a point x in n-space and you want to improve, i.e., move to a point with a lower function value. The basic idea is to approximate f with a simpler function q, which reasonably reflects the behavior of function f in a neighborhood N around the point x. This neighborhood is the trust region. A trial step s is computed by minimizing (or approximately minimizing) q over N. This is the trust-region subproblem,

$$\min_s \left\{ q(s), \ s \in N \right\}. \qquad (9\text{-}5)$$

The current point is updated to be x + s if f(x + s) < f(x); otherwise, the current point remains unchanged and N, the region of trust, is shrunk and the trial step computation is repeated.

The key questions in defining a specific trust-region approach to minimizing f(x) are how to choose and compute the approximation q (defined at the current point x), how to choose and modify the trust region N, and how accurately to solve the trust-region subproblem. This section focuses on the unconstrained problem. Later sections discuss additional complications due to the presence of constraints on the variables.

In the standard trust-region method ([48]), the quadratic approximation q is defined by the first two terms of the Taylor approximation to f at x; the neighborhood N is usually spherical or ellipsoidal in shape. Mathematically the trust-region subproblem is typically stated

$$\min\left\{ \tfrac{1}{2}s^T H s + s^T g \ \text{ such that } \ \|Ds\| \le \Delta \right\}, \qquad (9\text{-}6)$$

where g is the gradient of f at the current point x, H is the Hessian matrix (the symmetric matrix of second derivatives), D is a diagonal scaling matrix, Δ is a positive scalar, and ∥ . ∥ is the 2-norm. Good algorithms exist for solving Equation 9-6 (see [48]); such algorithms typically involve the computation of a full eigensystem and a Newton process applied to the secular equation

$$\frac{1}{\Delta} - \frac{1}{\|s\|} = 0.$$
Such algorithms provide an accurate solution to Equation 9-6. However, they require time proportional to several factorizations of H. Therefore, for large-scale problems a different approach is needed. Several approximation and heuristic strategies, based on Equation 9-6, have been proposed in the literature ([42] and [50]). The approximation approach followed in Optimization Toolbox solvers is to restrict the trust-region subproblem to a two-dimensional subspace S ([39] and [42]). Once the subspace S has been computed, the work to solve Equation 9-6 is trivial even if full eigenvalue/eigenvector information is needed (since in the subspace, the problem is only two-dimensional). The dominant work has now shifted to the determination of the subspace.

The two-dimensional subspace S is determined with the aid of a preconditioned conjugate gradient process described below. The solver defines S as the linear space spanned by s1 and s2, where s1 is in the direction of the gradient g, and s2 is either an approximate Newton direction, i.e., a solution to

$$H \cdot s_2 = -g,$$

or a direction of negative curvature,

$$s_2^T \cdot H \cdot s_2 < 0.$$
The philosophy behind this choice of S is to force global convergence (via the steepest descent direction or negative curvature direction) and achieve fast local convergence (via the Newton step, when it exists).

A sketch of unconstrained minimization using trust-region ideas is now easy to give:

  1. Formulate the two-dimensional trust-region subproblem.

  2. Solve Equation 9-6 to determine the trial step s.

  3. If f(x + s) < f(x), then x = x + s.

  4. Adjust Δ.

These four steps are repeated until convergence. The trust-region dimension Δ is adjusted according to standard rules. In particular, it is decreased if the trial step is not accepted, i.e., f(x + s) ≥ f(x). See [46] and [49] for a discussion of this aspect. A simplified sketch of this loop follows.
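
The following MATLAB sketch illustrates the four-step loop. For brevity it approximately solves the subproblem with only the Cauchy (steepest-descent) step rather than the two-dimensional subspace method, and f, gradf, and hessf are assumed function handles:

    function x = trustRegionSketch(f, gradf, hessf, x, delta)
    % Simplified trust-region loop: Cauchy step, accept/reject, adjust delta.
    % Example: x = trustRegionSketch(@(x) x.'*x, @(x) 2*x, ...
    %                                @(x) 2*eye(numel(x)), [3;4], 1)
    for iter = 1:100
        g = gradf(x);  H = hessf(x);
        if norm(g) < 1e-8, break; end
        % Step 1-2: approximately minimize the model along -g within delta
        gHg = g'*H*g;
        tau = 1;
        if gHg > 0
            tau = min(1, norm(g)^3/(delta*gHg));
        end
        s = -tau*(delta/norm(g))*g;
        % Step 3: accept the trial step only if it reduces f
        if f(x + s) < f(x)
            x = x + s;
            delta = 2*delta;      % step 4: expand the trust region
        else
            delta = delta/2;      % step 4: shrink and retry
        end
    end
    end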

Optimization Toolbox solvers treat a few important special cases of f with specialized functions: nonlinear least-squares, quadratic functions, and linear least-squares. However, the underlying algorithmic ideas are the same as for the general case. These special cases are discussed in later sections.

The subspace trust-region method is used to determine a search direction. However, instead of restricting the step to (possibly) one reflection step, as in the nonlinear minimization case, a piecewise reflective line search is conducted at each iteration. See [45] for details of the line search.

Preconditioned Conjugate Gradient Method

A popular way to solve large symmetric positive definite systems of linear equations Hp = –g is the method of Preconditioned Conjugate Gradients (PCG). This iterative approach requires the ability to calculate matrix-vector products of the form H·v where v is an arbitrary vector. The symmetric positive definite matrix M is a preconditioner for H. That is, M = C2, where C–1HC–1 is a well-conditioned matrix or a matrix with clustered eigenvalues.

In a minimization context, you can assume that the Hessian matrix H is symmetric. However, H is guaranteed to be positive definite only in the neighborhood of a strong minimizer. Algorithm PCG exits when a direction of negative (or zero) curvature is encountered, i.e., dTHd ≤ 0. The PCG output direction, p, is either a direction of negative curvature or an approximate (tol controls how approximate) solution to the Newton system Hp = –g. In either case p is used to help define the two-dimensional subspace used in the trust-region approach discussed in Trust-Region Methods for Nonlinear Minimization.
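
MATLAB's built-in pcg function implements this method; a small example with a sparse symmetric positive definite H and a diagonal (Jacobi) preconditioner M:

    H = gallery('poisson', 31);          % sparse SPD matrix of order 961
    g = ones(size(H, 1), 1);
    M = spdiags(diag(H), 0, size(H,1), size(H,1));   % diagonal preconditioner
    p = pcg(H, -g, 1e-6, 200, M);        % approximate solution of H*p = -g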

Linear Equality Constraints

Linear constraints complicate the situation described for unconstrained minimization. However, the underlying ideas described previously can be carried through in a clean and efficient way. The trust-region methods in Optimization Toolbox solvers generate strictly feasible iterates.

The general linear equality constrained minimization problem can be written

min{f(x)  such that  Ax=b},(9-9)

where A is an m-by-n matrix (m ≤ n). Some Optimization Toolbox solvers preprocess A to remove strict linear dependencies using a technique based on the LU factorization of AT [46]. Here A is assumed to be of rank m.

The method used to solve Equation 9-9 differs from the unconstrained approach in two significant ways. First, an initial feasible point x0 is computed, using a sparse least-squares step, so that Ax0 = b. Second, Algorithm PCG is replaced with Reduced Preconditioned Conjugate Gradients (RPCG), see [46], in order to compute an approximate reduced Newton step (or a direction of negative curvature in the null space of A). The key linear algebra step involves solving systems of the form

$$\begin{pmatrix} C & \tilde{A}^T \\ \tilde{A} & 0 \end{pmatrix} \begin{pmatrix} s \\ t \end{pmatrix} = \begin{pmatrix} r \\ 0 \end{pmatrix},$$

where A˜ approximates A (small nonzeros of A are set to zero provided rank is not lost) and C is a sparse symmetric positive-definite approximation to H, i.e., C = H. See [46] for more details.
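
For instance, one way to compute such a feasible starting point is a minimum-norm least-squares step (a sketch):

    x0 = lsqminnorm(A, b);    % minimum-norm solution of A*x0 = b
    % (equivalently, x0 = A' * ((A*A') \ b) when A has full row rank)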

Box Constraints

The box constrained problem is of the form

$$\min\left\{ f(x) \ \text{ such that } \ l \le x \le u \right\}, \qquad (9\text{-}11)$$

where l is a vector of lower bounds, and u is a vector of upper bounds. Some (or all) of the components of l can be equal to –∞ and some (or all) of the components of u can be equal to ∞. The method generates a sequence of strictly feasible points. Two techniques are used to maintain feasibility while achieving robust convergence behavior. First, a scaled modified Newton step replaces the unconstrained Newton step (to define the two-dimensional subspace S). Second, reflections are used to increase the step size.

The scaled modified Newton step arises from examining the Kuhn-Tucker necessary conditions for Equation 9-11,

$$\left(D(x)\right)^{-2} g = 0, \qquad (9\text{-}12)$$

where

$$D(x) = \operatorname{diag}\left(|v_k|^{-1/2}\right),$$
and the vector v(x) is defined below, for each 1 ≤ i ≤ n:

  • If gi < 0 and ui < ∞ then vi = xi – ui

  • If gi ≥ 0 and li > –∞ then vi = xi – li

  • If gi < 0 and ui = ∞ then vi = –1

  • If gi ≥ 0 and li = –∞ then vi = 1

The nonlinear system Equation 9-12 is not differentiable everywhere. Nondifferentiability occurs when vi = 0. You can avoid such points by maintaining strict feasibility, i.e., restricting l < x < u.

The scaled modified Newton step sk for the nonlinear system of equations given by Equation 9-12 is defined as the solution to the linear system

$$\hat{M} D s^N = -\hat{g}$$

at the kth iteration, where

$$\hat{g} = D^{-1}g = \operatorname{diag}\left(|v|^{1/2}\right) g,$$

and

$$\hat{M} = D^{-1} H D^{-1} + \operatorname{diag}(g) J^v.$$
Here Jv plays the role of the Jacobian of |v|. Each diagonal component of the diagonal matrix Jv equals 0, –1, or 1. If all the components of l and u are finite, Jv = diag(sign(g)). At a point where gi = 0, vi might not be differentiable, so $J^v_{ii} = 0$ is defined at such a point. Nondifferentiability of this type is not a cause for concern because, for such a component, it is not significant which value vi takes. Further, |vi| will still be discontinuous at this point, but the function |vi|·gi is continuous.

Second, reflections are used to increase the step size. A (single) reflection step is defined as follows. Given a step p that intersects a bound constraint, consider the first bound constraint crossed by p; assume it is the ith bound constraint (either the ith upper or ith lower bound). Then the reflection step $p^R = p$ except in the ith component, where $p^R_i = -p_i$. A sketch in code follows.
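
A sketch of the single reflection step, assuming the step p first crosses the ith bound:

    pR = p;
    pR(i) = -p(i);   % reflect the ith component; all others are unchanged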
