lsqlin

Solve constrained linear least-squares problems

Description

Linear least-squares solver with bounds or linear constraints.

Solves least-squares curve fitting problems of the form

$$\min_x \tfrac{1}{2}\|Cx-d\|_2^2 \quad \text{such that} \quad \begin{cases} A\,x \le b,\\ \text{Aeq}\cdot x = \text{beq},\\ \text{lb} \le x \le \text{ub}. \end{cases}$$

Note

lsqlin applies only to the solver-based approach. For a discussion of the two optimization approaches, see First Choose Problem-Based or Solver-Based Approach.

x = lsqlin(C,d,A,b) solves the linear system C*x = d in the least-squares sense, subject to A*x ≤ b.

x = lsqlin(C,d,A,b,Aeq,beq,lb,ub) adds linear equality constraints Aeq*x = beq and bounds lb ≤ x ≤ ub. If you do not need certain constraints such as Aeq and beq, set them to []. If x(i) is unbounded below, set lb(i) = -Inf, and if x(i) is unbounded above, set ub(i) = Inf.

x = lsqlin(C,d,A,b,Aeq,beq,lb,ub,x0,options) minimizes with an initial point x0 and the optimization options specified in options. Use optimoptions to set these options. If you do not want to include an initial point, set the x0 argument to []; however, a nonempty x0 is required for the 'active-set' algorithm.

x = lsqlin(problem) finds the minimum for problem, a structure described in problem. Create the problem structure using dot notation or the struct function. Or create a problem structure from an OptimizationProblem object by using prob2struct.
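
For example, a minimal sketch of the dot-notation approach; the matrices and vectors here are arbitrary and chosen only for demonstration.

problem.C = [1 0; 0 1; 1 1];
problem.d = [2; 3; 1];
problem.Aineq = [1 1];
problem.bineq = 4;
problem.solver = 'lsqlin';
problem.options = optimoptions('lsqlin');
x = lsqlin(problem);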

[x,resnorm,residual,exitflag,output,lambda] = lsqlin(___), for any input arguments described above, returns:

  • The squared 2-norm of the residual resnorm = $\|Cx-d\|_2^2$

  • The residual residual = C*x - d

  • A value exitflag describing the exit condition

  • A structure output containing information about the optimization process

  • A structure lambda containing the Lagrange multipliers

    The factor ½ in the definition of the problem affects the values in the lambda structure.

[wsout,resnorm,residual,exitflag,output,lambda] = lsqlin(C,d,A,b,Aeq,beq,lb,ub,ws) starts lsqlin from the data in the warm start object ws, using the options in ws. The returned argument wsout contains the solution point in wsout.X. By using wsout as the initial warm start object in a subsequent solver call, lsqlin can work faster.

Examples

Find the x that minimizes the norm of C*x - d for an overdetermined problem with linear inequality constraints.

Specify the problem and constraints.

C = [0.9501    0.7620    0.6153    0.4057
    0.2311    0.4564    0.7919    0.9354
    0.6068    0.0185    0.9218    0.9169
    0.4859    0.8214    0.7382    0.4102
    0.8912    0.4447    0.1762    0.8936];
d = [0.0578
    0.3528
    0.8131
    0.0098
    0.1388];
A = [0.2027    0.2721    0.7467    0.4659
    0.1987    0.1988    0.4450    0.4186
    0.6037    0.0152    0.9318    0.8462];
b = [0.5251
    0.2026
    0.6721];

Call lsqlin to solve the problem.

x = lsqlin(C,d,A,b)
Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in 
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.
x = 4×1

    0.1299
   -0.5757
    0.4251
    0.2438

Find the x that minimizes the norm of C*x - d for an overdetermined problem with linear equality and inequality constraints and bounds.

Specify the problem and constraints.

C = [0.9501    0.7620    0.6153    0.4057
    0.2311    0.4564    0.7919    0.9354
    0.6068    0.0185    0.9218    0.9169
    0.4859    0.8214    0.7382    0.4102
    0.8912    0.4447    0.1762    0.8936];
d = [0.0578
    0.3528
    0.8131
    0.0098
    0.1388];
A = [0.2027    0.2721    0.7467    0.4659
    0.1987    0.1988    0.4450    0.4186
    0.6037    0.0152    0.9318    0.8462];
b = [0.5251
    0.2026
    0.6721];
Aeq = [3 5 7 9];
beq = 4;
lb = -0.1*ones(4,1);
ub = 2*ones(4,1);

Call lsqlin to solve the problem.

x = lsqlin(C,d,A,b,Aeq,beq,lb,ub)
Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in 
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.
x = 4×1

   -0.1000
   -0.1000
    0.1599
    0.4090

This example shows how to use nondefault options for linear least squares.

Set options to use the 'interior-point' algorithm and to give iterative display.

options = optimoptions('lsqlin','Algorithm','interior-point','Display','iter');

Set up a linear least-squares problem.

C = [0.9501    0.7620    0.6153    0.4057
    0.2311    0.4564    0.7919    0.9354
    0.6068    0.0185    0.9218    0.9169
    0.4859    0.8214    0.7382    0.4102
    0.8912    0.4447    0.1762    0.8936];
d = [0.0578
    0.3528
    0.8131
    0.0098
    0.1388];
A = [0.2027    0.2721    0.7467    0.4659
    0.1987    0.1988    0.4450    0.4186
    0.6037    0.0152    0.9318    0.8462];
b = [0.5251
    0.2026
    0.6721];

Run the problem.

x = lsqlin(C,d,A,b,[],[],[],[],[],options)
 Iter         Resnorm  Primal Infeas    Dual Infeas  Complementarity  
    0    6.545534e-01   1.600492e+00   6.150431e-01     1.000000e+00  
    1    6.545534e-01   8.002458e-04   3.075216e-04     2.430833e-01  
    2    1.757343e-01   4.001229e-07   1.537608e-07     5.945636e-02  
    3    5.619277e-02   2.000615e-10   2.036997e-08     1.370933e-02  
    4    2.587604e-02   9.994783e-14   1.006816e-08     2.548273e-03  
    5    1.868939e-02   2.775558e-17   2.955102e-09     4.295807e-04  
    6    1.764630e-02   2.775558e-17   1.237758e-09     3.102850e-05  
    7    1.758561e-02   2.498002e-16   1.645864e-10     1.138719e-07  
    8    1.758538e-02   2.498002e-16   2.399331e-13     5.693290e-11  

Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in 
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.
x = 4×1

    0.1299
   -0.5757
    0.4251
    0.2438

Obtain and interpret all lsqlin outputs.

Define a problem with linear inequality constraints and bounds. The problem is overdetermined because the C matrix has five rows but only four columns. This means the problem has four unknowns and five conditions, even before including the linear constraints and bounds.

C = [0.9501    0.7620    0.6153    0.4057
    0.2311    0.4564    0.7919    0.9354
    0.6068    0.0185    0.9218    0.9169
    0.4859    0.8214    0.7382    0.4102
    0.8912    0.4447    0.1762    0.8936];
d = [0.0578
    0.3528
    0.8131
    0.0098
    0.1388];
A = [0.2027    0.2721    0.7467    0.4659
    0.1987    0.1988    0.4450    0.4186
    0.6037    0.0152    0.9318    0.8462];
b = [0.5251
    0.2026
    0.6721];
lb = -0.1*ones(4,1);
ub = 2*ones(4,1);

Set options to use the 'interior-point' algorithm.

options = optimoptions('lsqlin','Algorithm','interior-point');

The 'interior-point' algorithm does not use an initial point, so set x0 to [].

x0 = [];

Call lsqlin with all outputs.

[x,resnorm,residual,exitflag,output,lambda] = ...
    lsqlin(C,d,A,b,[],[],lb,ub,x0,options)
Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in 
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.
x = 4×1

   -0.1000
   -0.1000
    0.2152
    0.3502

resnorm = 0.1672
residual = 5×1

    0.0455
    0.0764
   -0.3562
    0.1620
    0.0784

exitflag = 1
output = struct with fields:
            message: 'Minimum found that satisfies the constraints....'
          algorithm: 'interior-point'
      firstorderopt: 4.3374e-11
    constrviolation: 0
         iterations: 6
       linearsolver: 'dense'
       cgiterations: []

lambda = struct with fields:
    ineqlin: [3x1 double]
      eqlin: [0x1 double]
      lower: [4x1 double]
      upper: [4x1 double]

Examine the nonzero Lagrange multiplier fields in more detail. First examine the Lagrange multipliers for the linear inequality constraint.

lambda.ineqlin
ans = 3×1

    0.0000
    0.2392
    0.0000

Lagrange multipliers are nonzero exactly when the solution is on the corresponding constraint boundary. In other words, Lagrange multipliers are nonzero when the corresponding constraint is active. lambda.ineqlin(2) is nonzero. This means that the second element in A*x should equal the second element in b, because the constraint is active.

[A(2,:)*x,b(2)]
ans = 1×2

    0.2026    0.2026

Now examine the Lagrange multipliers for the lower and upper bound constraints.

lambda.lower
ans = 4×1

    0.0409
    0.2784
    0.0000
    0.0000

lambda.upper
ans = 4×1

     0
     0
     0
     0

The first two elements of lambda.lower are nonzero. You see that x(1) and x(2) are at their lower bounds, -0.1. All elements of lambda.upper are essentially zero, and you see that all components of x are less than their upper bound, 2.

Create a warm start object so you can solve a modified problem quickly. Set options to use the 'active-set' algorithm, which supports warm start, and turn off the display.

rng default % For reproducibility
options = optimoptions('lsqlin','Algorithm','active-set','Display','off');
n = 15;
x0 = 5*rand(n,1);
ws = optimwarmstart(x0,options);

Create and solve the first problem. Find the solution time.

r = 1:n-1; % Index for making vectors
v(n) = (-1)^(n+1)/n; % Allocating the vector v
v(r) = (-1).^(r+1)./r;
C = gallery('circul',v);
C = [C;C];
r = 1:2*n;
d(r) = n-r;
lb = -5*ones(1,n);
ub = 5*ones(1,n);
tic
[ws,fval,~,exitflag,output] = lsqlin(C,d,[],[],[],[],lb,ub,ws)
toc
Elapsed time is 0.005117 seconds.

Add a linear constraint and solve again.

A = ones(1,n);
b = -10;
tic
[ws,fval,~,exitflag,output] = lsqlin(C,d,A,b,[],[],lb,ub,ws)
toc
Elapsed time is 0.001491 seconds.

Input Arguments

Multiplier matrix, specified as a matrix of doubles. C represents the multiplier of the solution x in the expression C*x - d. C is M-by-N, where M is the number of equations, and N is the number of elements of x.

Example: C = [1,4;2,5;7,8]

Data Types: double

Constant vector, specified as a vector of doubles. d represents the additive constant term in the expression C*x - d. d is M-by-1, where M is the number of equations.

Example: d = [5;0;-12]

Data Types: double

Linear inequality constraints, specified as a real matrix. A is an M-by-N matrix, where M is the number of inequalities, and N is the number of variables (number of elements in x0). For large problems, pass A as a sparse matrix.

A encodes the M linear inequalities

A*x <= b,

where x is the column vector of N variables x(:), and b is a column vector with M elements.

For example, consider these inequalities:

x1 + 2x2 ≤ 10
3x1 + 4x2 ≤ 20
5x1 + 6x2 ≤ 30.

Specify the inequalities by entering the following constraints.

A = [1,2;3,4;5,6];
b = [10;20;30];

Example: To specify that the x components sum to 1 or less, use A = ones(1,N) and b = 1.

Data Types: double

Linear inequality constraints, specified as a real vector. b is an M-element vector related to the A matrix. If you pass b as a row vector, solvers internally convert b to the column vector b(:). For large problems, pass b as a sparse vector.

b encodes the M linear inequalities

A*x <= b,

where x is the column vector of N variables x(:), and A is a matrix of size M-by-N.

For example, consider these inequalities:

x1 + 2x2 ≤ 10
3x1 + 4x2 ≤ 20
5x1 + 6x2 ≤ 30.

Specify the inequalities by entering the following constraints.

A = [1,2;3,4;5,6];
b = [10;20;30];

Example: To specify that the x components sum to 1 or less, use A = ones(1,N) and b = 1.

Data Types: double

Linear equality constraints, specified as a real matrix. Aeq is an Me-by-N matrix, where Me is the number of equalities, and N is the number of variables (number of elements in x0). For large problems, pass Aeq as a sparse matrix.

Aeq encodes the Me linear equalities

Aeq*x = beq,

where x is the column vector of N variables x(:), and beq is a column vector with Me elements.

For example, consider these equalities:

x1 + 2x2 + 3x3 = 10
2x1 + 4x2 + x3 = 20.

Specify the equalities by entering the following constraints.

Aeq = [1,2,3;2,4,1];
beq = [10;20];

Example: To specify that the x components sum to 1, use Aeq = ones(1,N) and beq = 1.

Data Types: double

Linear equality constraints, specified as a real vector. beq is an Me-element vector related to the Aeq matrix. If you pass beq as a row vector, solvers internally convert beq to the column vector beq(:). For large problems, pass beq as a sparse vector.

beq encodes the Me linear equalities

Aeq*x = beq,

where x is the column vector of N variables x(:), and Aeq is a matrix of size Me-by-N.

For example, consider these equalities:

x1 + 2x2 + 3x3 = 10
2x1 + 4x2 + x3 = 20.

Specify the equalities by entering the following constraints.

Aeq = [1,2,3;2,4,1];
beq = [10;20];

Example: To specify that the x components sum to 1, use Aeq = ones(1,N) and beq = 1.

Data Types: double

Lower bounds, specified as a vector or array of doubles. lb represents the lower bounds elementwise in lb ≤ x ≤ ub.

Internally, lsqlin converts an array lb to the vector lb(:).

Example: lb = [0;-Inf;4] means x(1) ≥ 0, x(3) ≥ 4.

Data Types: double

Upper bounds, specified as a vector or array of doubles. ub represents the upper bounds elementwise in lb ≤ x ≤ ub.

Internally, lsqlin converts an array ub to the vector ub(:).

Example: ub = [Inf;4;10] means x(2) ≤ 4, x(3) ≤ 10.

Data Types: double

Initial point for the solution process, specified as a real vector or array.

  • The 'interior-point' algorithm does not use x0.

  • x0 is an optional argument for the 'trust-region-reflective' algorithm. By default, this algorithm sets x0 to an all-zero vector. If any component of this default x0 violates the bounds, lsqlin resets the entire default x0 to a vector that satisfies the bounds. If any component of a nondefault x0 violates the bounds, lsqlin sets those components to satisfy the bounds.

  • A nonempty x0 is a required argument for the 'active-set' algorithm. If any component of x0 violates the bounds, lsqlin sets those components to satisfy the bounds.

Example: x0 = [4;-3]

Data Types: double

Options for lsqlin, specified as the output of the optimoptions function or as a structure such as one created by optimset.

Some options are absent from the optimoptions display. These options appear in italics in the following table. For details, see View Optimization Options.

All Algorithms

Algorithm

Choose the algorithm:

  • 'interior-point' (default)

  • 'trust-region-reflective'

  • 'active-set'

The 'trust-region-reflective' algorithm allows only upper and lower bounds, no linear inequalities or equalities. If you specify both the 'trust-region-reflective' algorithm and linear constraints, lsqlin uses the 'interior-point' algorithm.

The 'trust-region-reflective' algorithm does not allow equal upper and lower bounds.

When the problem has no constraints, lsqlin calls mldivide internally.

If you have a large number of linear constraints and not a large number of variables, try the 'active-set' algorithm.

For more information on choosing the algorithm, see Choosing the Algorithm.

Diagnostics

Display diagnostic information about the function to be minimized or solved. The choices are 'on' or the default 'off'.

Display

Level of display returned to the command line.

  • 'off' or 'none' displays no output.

  • 'final' displays just the final output (default).

The 'interior-point' and 'active-set' algorithms allow additional values:

  • 'iter' gives iterative display.

  • 'iter-detailed' gives iterative display with a detailed exit message.

  • 'final-detailed' displays just the final output, with a detailed exit message.

MaxIterations

Maximum number of iterations allowed, a positive integer. The default value is 2000 for the 'active-set' algorithm, and 200 for the other algorithms.

For optimset, the option name is MaxIter. See Current and Legacy Option Names.

trust-region-reflective Algorithm Options

FunctionTolerance

Termination tolerance on the function value, a positive scalar. The default is 100*eps, about 2.2204e-14.

For optimset, the option name is TolFun. See Current and Legacy Option Names.

JacobianMultiplyFcn

Jacobian multiply function, specified as a function handle. For large-scale structured problems, this function should compute the Jacobian matrix product C*Y, C'*Y, or C'*(C*Y) without actually forming C. Write the function in the form

W = jmfun(Jinfo,Y,flag)

where Jinfo contains a matrix used to compute C*Y (or C'*Y, or C'*(C*Y)).

jmfun must compute one of three different products, depending on the value of flag that lsqlin passes:

  • If flag == 0 then W = C'*(C*Y).

  • If flag > 0 then W = C*Y.

  • If flag < 0 then W = C'*Y.

In each case, jmfun need not form C explicitly. lsqlin uses Jinfo to compute the preconditioner. See Passing Extra Parameters for information on how to supply extra parameters if necessary.

See Jacobian Multiply Function with Linear Least Squares for an example.

For optimset, the option name is JacobMult. See Current and Legacy Option Names.
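
A minimal sketch of such a multiply function, assuming Jinfo is a matrix from which the products can be formed directly (in the simplest case, C itself); in practice, Jinfo can be any compact representation that lets you form these products without storing the full C:

function W = jmfun(Jinfo,Y,flag)
% Form products with C without building C explicitly.
if flag == 0
    W = Jinfo'*(Jinfo*Y);   % C'*(C*Y)
elseif flag > 0
    W = Jinfo*Y;            % C*Y
else
    W = Jinfo'*Y;           % C'*Y
end
end

Pass the handle to lsqlin through the options, for example:

options = optimoptions('lsqlin','Algorithm','trust-region-reflective', ...
    'JacobianMultiplyFcn',@jmfun);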

MaxPCGIter

Maximum number of PCG (preconditioned conjugate gradient) iterations, a positive scalar. The default is max(1,floor(numberOfVariables/2)). For more information, see Trust-Region-Reflective Algorithm.

OptimalityTolerance

Termination tolerance on the first-order optimality, a positive scalar. The default is 100*eps, about 2.2204e-14. See First-Order Optimality Measure.

For optimset, the option name is TolFun. See Current and Legacy Option Names.

PrecondBandWidth

Upper bandwidth of preconditioner for PCG (preconditioned conjugate gradient). By default, diagonal preconditioning is used (upper bandwidth of 0). For some problems, increasing the bandwidth reduces the number of PCG iterations. Setting PrecondBandWidth to Inf uses a direct factorization (Cholesky) rather than the conjugate gradients (CG). The direct factorization is computationally more expensive than CG, but produces a better quality step toward the solution. For more information, see Trust-Region-Reflective Algorithm.

SubproblemAlgorithm

Determines how the iteration step is calculated. The default, 'cg', takes a faster but less accurate step than 'factorization'. See Trust-Region-Reflective Least Squares.

TolPCG

Termination tolerance on the PCG (preconditioned conjugate gradient) iteration, a positive scalar. The default is 0.1.

TypicalX

Typical x values. The number of elements in TypicalX is equal to the number of variables. The default value is ones(numberofvariables,1). lsqlin uses TypicalX internally for scaling. TypicalX has an effect only when x has unbounded components, and when a TypicalX value for an unbounded component is larger than 1.

interior-point Algorithm Options

ConstraintTolerance

Tolerance on the constraint violation, a positive scalar. The default is 1e-8.

For optimset, the option name is TolCon. See Current and Legacy Option Names.

LinearSolver

Type of internal linear solver in the algorithm:

  • 'auto' (default) — Use 'sparse' if the C matrix is sparse, 'dense' otherwise.

  • 'sparse' — Use sparse linear algebra. See Sparse Matrices.

  • 'dense' — Use dense linear algebra.
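
For example, a minimal sketch that selects sparse linear algebra explicitly instead of letting the solver choose:

options = optimoptions('lsqlin','Algorithm','interior-point','LinearSolver','sparse');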

OptimalityTolerance

Termination tolerance on the first-order optimality, a positive scalar. The default is 1e-8. See First-Order Optimality Measure.

For optimset, the option name is TolFun. See Current and Legacy Option Names.

StepTolerance

Termination tolerance on x, a positive scalar. The default is 1e-12.

For optimset, the option name is TolX. See Current and Legacy Option Names.

'active-set' Algorithm Options

ConstraintTolerance

Tolerance on the constraint violation, a positive scalar. The default value is 1e-8.

For optimset, the option name is TolCon. See Current and Legacy Option Names.

ObjectiveLimit

A tolerance (stopping criterion) that is a scalar. If the objective function value goes below ObjectiveLimit and the current point is feasible, the iterations halt because the problem is presumably unbounded. The default value is -1e20.

OptimalityTolerance

Termination tolerance on the first-order optimality, a positive scalar. The default value is 1e-8. See First-Order Optimality Measure.

For optimset, the name is TolFun. See Current and Legacy Option Names.

StepTolerance

Termination tolerance on x, a positive scalar. The default value is 1e-8.

For optimset, the option name is TolX. See Current and Legacy Option Names.

Optimization problem, specified as a structure with the following fields.

C

Matrix multiplier in the term C*x - d

d

Additive constant in the term C*x - d

Aineq

Matrix for linear inequality constraints

bineq

Vector for linear inequality constraints

Aeq

Matrix for linear equality constraints

beq

Vector for linear equality constraints
lb

Vector of lower bounds

ub

Vector of upper bounds

x0

Initial point for x

solver

'lsqlin'

options

Options created with optimoptions

Note

You cannot use warm start with the problem argument.

Data Types: struct

Warm start object, specified as an object created using optimwarmstart. The warm start object contains the start point and options, and optional data for memory size in code generation. See Warm Start Best Practices.

Example: ws = optimwarmstart(x0,options)

Output Arguments

Solution, returned as a vector that minimizes the norm of C*x-d subject to all bounds and linear constraints.

Solution warm start object, returned as a LsqlinWarmStart object. The solution point is wsout.X.

You can use wsout as the input warm start object in a subsequent lsqlin call.

Objective value, returned as the scalar value norm(C*x-d)^2.

Solution residuals, returned as the vector C*x-d.

Algorithm stopping condition, returned as an integer identifying the reason the algorithm stopped. The following lists the values of exitflag and the corresponding reasons lsqlin stopped.

3

Change in the residual was smaller than the specified tolerance options.FunctionTolerance. (trust-region-reflective algorithm)

2

Step size smaller than options.StepTolerance, constraints satisfied. (interior-point algorithm)

1

Function converged to a solution x.

0

Number of iterations exceeded options.MaxIterations.

-2

The problem is infeasible. Or, for the interior-point algorithm, step size smaller than options.StepTolerance, but constraints are not satisfied.

-3

The problem is unbounded.

-4

Ill-conditioning prevents further optimization.

-8

Unable to compute a step direction.

The exit message for the interior-point algorithm can give more details on the reason lsqlin stopped, such as exceeding a tolerance. See Exit Flags and Exit Messages.

Solution process summary, returned as a structure containing information about the optimization process.

iterations

Number of iterations the solver took.

algorithm

One of these algorithms:

  • 'interior-point'

  • 'trust-region-reflective'

  • 'active-set'

  • 'mldivide' for an unconstrained problem

For an unconstrained problem, iterations = 0, and the remaining entries in the output structure are empty.

constrviolation

Constraint violation that is positive for violated constraints (not returned for the 'trust-region-reflective' algorithm).

constrviolation = max([0;norm(Aeq*x-beq, inf);(lb-x);(x-ub);(A*x-b)])

message

Exit message.

firstorderopt

First-order optimality at the solution. See First-Order Optimality Measure.

linearsolver

Type of internal linear solver, 'dense' or 'sparse' ('interior-point' algorithm only)

cgiterations

Number of conjugate gradient iterations the solver performed. Nonempty only for the 'trust-region-reflective' algorithm.

See Output Structures.

Lagrange multipliers, returned as a structure with the following fields.

lower

Lower bounds lb

upper

Upper bounds ub

ineqlin

Linear inequalities

eqlin

Linear equalities

See Lagrange Multiplier Structures.

Tips

  • For problems with no constraints, you can use mldivide (matrix left division). When you have no constraints, lsqlin returns x = C\d.

  • Because the problem being solved is always convex, lsqlin finds a global, although not necessarily unique, solution.

  • If your problem has many linear constraints and few variables, try using the 'active-set' algorithm. See Quadratic Programming with Many Linear Constraints.

  • Better numerical results are likely if you specify equalities explicitly, using Aeq and beq, instead of implicitly, using lb and ub. See the sketch after this list.

  • The trust-region-reflective algorithm does not allow equal upper and lower bounds. Use another algorithm for this case.

  • If the specified input bounds for a problem are inconsistent, the output x is x0 and the outputs resnorm and residual are [].

  • You can solve some large structured problems, including those where the C matrix is too large to fit in memory, using the trust-region-reflective algorithm with a Jacobian multiply function. For information, see trust-region-reflective Algorithm Options.
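
As a sketch of the tip about specifying equalities explicitly, suppose you want to fix x(2) = 3 in a three-variable problem (the values here are arbitrary). The two formulations below are mathematically equivalent, but the first is likely to give better numerical results, and it is required by the trust-region-reflective algorithm, which does not allow equal bounds.

% Explicit equality (preferred)
Aeq = [0 1 0];
beq = 3;

% Implicit equality through bounds
lb = [-Inf; 3; -Inf];
ub = [ Inf; 3;  Inf];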

Algorithms

Trust-Region-Reflective Algorithm

This method is a subspace trust-region method based on the interior-reflective Newton method described in [1]. Each iteration involves the approximate solution of a large linear system using the method of preconditioned conjugate gradients (PCG). See Trust-Region-Reflective Least Squares, and in particular Large Scale Linear Least Squares.

Interior-Point Algorithm

The 'interior-point' algorithm is based on the quadprog 'interior-point-convex' algorithm. See Linear Least Squares: Interior-Point or Active-Set.

Active-Set Algorithm

The 'active-set' algorithm is based on the quadprog 'active-set' algorithm. For more information, see Linear Least Squares: Interior-Point or Active-Set and active-set quadprog Algorithm.

References

[1] Coleman, T. F. and Y. Li. “A Reflective Newton Method for Minimizing a Quadratic Function Subject to Bounds on Some of the Variables,” SIAM Journal on Optimization, Vol. 6, Number 4, pp. 1040–1058, 1996.

[2] Gill, P. E., W. Murray, and M. H. Wright. Practical Optimization, Academic Press, London, UK, 1981.

Warm Start

A warm start object maintains a list of active constraints from the previous solved problem. The solver carries over as much active constraint information as possible to solve the current problem. If the previous problem is too different from the current one, no active set information is reused. In this case, the solver effectively executes a cold start in order to rebuild the list of active constraints.

Alternative Functionality

App

The Optimize Live Editor task provides a visual interface for lsqlin.

Extended Capabilities

Version History

Introduced before R2006a