Solve constrained linear least-squares problems
x = lsqlin(C,d,A,b)
x = lsqlin(C,d,A,b,Aeq,beq)
x = lsqlin(C,d,A,b,Aeq,beq,lb,ub)
x = lsqlin(C,d,A,b,Aeq,beq,lb,ub,x0)
x = lsqlin(C,d,A,b,Aeq,beq,lb,ub,x0,options)
x = lsqlin(problem)
[x,resnorm] = lsqlin(...)
[x,resnorm,residual] = lsqlin(...)
[x,resnorm,residual,exitflag] = lsqlin(...)
[x,resnorm,residual,exitflag,output] = lsqlin(...)
[x,resnorm,residual,exitflag,output,lambda] = lsqlin(...)
x = lsqlin(C,d,A,b,Aeq,beq,lb,ub) defines a set of lower and upper bounds on the design variables in x so that the solution is always in the range lb ≤ x ≤ ub. Set Aeq = [] and beq = [] if no equalities exist.
x = lsqlin(C,d,A,b,Aeq,beq,lb,ub,x0,options) minimizes with the optimization options specified in options. Use optimoptions to set these options.
x = lsqlin(problem) finds the minimum for problem, where problem is a structure described in Input Arguments.
Create the problem structure by exporting a problem from the Optimization app, as described in Exporting Your Work.
Note: If the specified input bounds for a problem are inconsistent, the output x is x0 and the outputs resnorm and residual are [].
Components of x0 that violate the bounds lb ≤ x ≤ ub are reset to the interior of the box defined by the bounds. Components that respect the bounds are not changed.
If no x0 is provided, x0 is set to the zero vector. If any component of this zero vector x0 violates the bounds, x0 is set to a point in the interior of the box defined by the bounds.
lsqlin minimizes ½·‖C·x − d‖² subject to the constraints. The factor ½ in this definition of the problem affects the values in the lambda structure.
You can solve some large structured problems, including those where the C matrix is too large to fit in memory, using the trust-region-reflective algorithm with a Jacobian multiply function. For information, see Trust-Region-Reflective Algorithm Only.
|A|Matrix for linear inequality constraints|
|b|Vector for linear inequality constraints|
|Aeq|Matrix for linear equality constraints|
|beq|Vector for linear equality constraints|
|lb|Vector of lower bounds|
|ub|Vector of upper bounds|
|x0|Initial point for x|
|options|Options created with optimoptions|
Function Arguments contains general descriptions of arguments returned by lsqlin. This section provides function-specific details for exitflag, lambda, and output:
Integer identifying the reason the algorithm terminated. The following lists the values of exitflag and the corresponding reasons the algorithm terminated:
|1|Function converged to a solution x.|
|3|Change in the residual was smaller than the specified tolerance.|
|0|Number of iterations exceeded options.MaxIter.|
|-2|The problem is infeasible.|
|-4|Ill-conditioning prevents further optimization.|
|-7|Magnitude of search direction became too small. No further progress could be made.|
Structure containing the Lagrange multipliers at the solution x (separated by constraint type). The fields are
|lower|Lower bounds lb|
|upper|Upper bounds ub|
|ineqlin|Linear inequalities|
|eqlin|Linear equalities|
Structure containing information about the optimization. The fields are
|iterations|Number of iterations taken|
|algorithm|Optimization algorithm used|
|cgiterations|Total number of PCG iterations (trust-region-reflective algorithm, [] for active-set)|
|firstorderopt|Measure of first-order optimality (trust-region-reflective algorithm, [] for active-set)|
Optimization options used by lsqlin. Set or change the values of these options using the optimoptions function. Some options apply to all algorithms; others are relevant only when you are using the trust-region-reflective algorithm. See Optimization Options Reference for detailed information.
All algorithms use the following options:
Algorithm
Choose the lsqlin algorithm. Choices are 'active-set' and 'trust-region-reflective' (default). The trust-region-reflective algorithm allows only upper and lower bounds, meaning no linear inequalities or equalities. Otherwise, lsqlin uses the active-set algorithm. For more information on choosing the algorithm, see Choosing the Algorithm. If you use optimset (not recommended), use LargeScale instead of Algorithm.
Diagnostics
Display diagnostic information about the function to be minimized or solved. The choices are 'on' or the default 'off'.
Display
Level of display. 'off' or 'none' displays no output; 'final' (default) displays just the final output.
LargeScale
Use the large-scale algorithm if possible when set to 'on' (default). Use the medium-scale algorithm when set to 'off'. The large-scale algorithm allows only upper and lower bounds, meaning no linear inequalities or equalities. Otherwise, lsqlin uses the medium-scale algorithm. For more information on choosing the algorithm, see Choosing the Algorithm. If you use optimoptions (recommended), use Algorithm instead of LargeScale.
MaxIter
Maximum number of iterations allowed, a positive integer. The default value is 200.
The trust-region-reflective algorithm uses the following options:
JacobMult
Function handle for Jacobian multiply function. For large-scale structured problems, this function should compute the Jacobian matrix product C*Y, C'*Y, or C'*(C*Y) without actually forming C. Write the function in the form
W = jmfun(Jinfo,Y,flag)
where Jinfo contains a matrix used to compute C*Y (or C'*Y, or C'*(C*Y)).
jmfun must compute one of three different products, depending on the value of flag that lsqlin passes:
|If flag == 0 then|W = C'*(C*Y)|
|If flag > 0 then|W = C*Y|
|If flag < 0 then|W = C'*Y|
In each case, jmfun need not form C explicitly. lsqlin uses Jinfo to compute the preconditioner. See Passing Extra Parameters for information on how to supply extra parameters if necessary.
See Jacobian Multiply Function with Linear Least Squares for an example.
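A minimal sketch of such a Jacobian multiply function, assuming for simplicity that Jinfo is the matrix C itself (in a genuinely large-scale problem, Jinfo would instead hold whatever compact data lets you form the products without storing C):

```matlab
function W = jmfun(Jinfo,Y,flag)
% Jacobian multiply function for lsqlin.
% Here Jinfo is assumed to be C itself; real large-scale uses would
% exploit structure (e.g., a sparse or circulant factor) instead.
if flag > 0
    W = Jinfo*Y;            % C*Y
elseif flag < 0
    W = Jinfo'*Y;           % C'*Y
else
    W = Jinfo'*(Jinfo*Y);   % C'*(C*Y)
end
end
```

You would then pass @jmfun through the JacobMult option and supply Jinfo in place of C in the call to lsqlin.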
MaxPCGIter
Maximum number of PCG (preconditioned conjugate gradient) iterations, a positive scalar. The default is max(1,floor(numberOfVariables/2)). For more information, see Algorithms.
PrecondBandWidth
Upper bandwidth of preconditioner for PCG. By default, diagonal preconditioning is used (upper bandwidth of 0). For some problems, increasing the bandwidth reduces the number of PCG iterations. Setting PrecondBandWidth to Inf uses a direct factorization (Cholesky) rather than the conjugate gradients (CG). The direct factorization is computationally more expensive than CG, but produces a better quality step toward the solution.
TolFun
Termination tolerance on the function value, a positive scalar. The default is 100*eps, about 2.2204e-14.
TolPCG
Termination tolerance on the PCG iteration, a positive scalar. The default is 0.1.
TypicalX
Typical x values. The number of elements in TypicalX is equal to the number of elements in x0, the starting point. The default value is ones(numberOfVariables,1). lsqlin uses TypicalX internally for scaling. TypicalX has an effect only when x has unbounded components, and when a TypicalX value for an unbounded component is larger than 1.
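As an illustration of setting these options, the following sketch uses only option names documented on this page; the problem data (C, d, A, b, lb, ub) are assumed to be defined:

```matlab
% Select the active-set algorithm, keep the default final display,
% and raise the iteration cap above the default of 200.
options = optimoptions('lsqlin', ...
    'Algorithm','active-set', ...
    'Display','final', ...
    'MaxIter',500);
x = lsqlin(C,d,A,b,[],[],lb,ub,[],options);
```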
Find the least-squares solution to the overdetermined system C·x = d, subject to A·x ≤ b and lb ≤ x ≤ ub.
First, enter the coefficient matrices and the lower and upper bounds.
C = [0.9501    0.7620    0.6153    0.4057
     0.2311    0.4564    0.7919    0.9354
     0.6068    0.0185    0.9218    0.9169
     0.4859    0.8214    0.7382    0.4102
     0.8912    0.4447    0.1762    0.8936];
d = [0.0578
     0.3528
     0.8131
     0.0098
     0.1388];
A = [0.2027    0.2721    0.7467    0.4659
     0.1987    0.1988    0.4450    0.4186
     0.6037    0.0152    0.9318    0.8462];
b = [0.5251
     0.2026
     0.6721];
lb = -0.1*ones(4,1);
ub = 2*ones(4,1);
Next, call the constrained linear least-squares routine.
[x,resnorm,residual,exitflag,output,lambda] = ...
    lsqlin(C,d,A,b,[],[],lb,ub);
Examine x, lambda.ineqlin, lambda.lower, and lambda.upper:
x

x =
   -0.1000
   -0.1000
    0.2152
    0.3502

lambda.ineqlin

ans =
         0
    0.2392
         0

lambda.lower

ans =
    0.0409
    0.2784
         0
         0

lambda.upper

ans =
     0
     0
     0
     0
Nonzero elements of the vectors in the fields of lambda indicate active constraints at the solution. In this case, the second inequality constraint (in lambda.ineqlin) and the first and second lower bound constraints (in lambda.lower) are active constraints (i.e., the solution is on their constraint boundaries).
For problems with no constraints, use \ (matrix left division). For example, x = A\b.
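As a sketch (reusing the C and d defined in the example above), the two approaches agree up to numerical tolerance when no constraints are supplied:

```matlab
% Unconstrained least squares: backslash and lsqlin solve the same problem.
x_backslash = C\d;               % mldivide least-squares solution
x_lsqlin    = lsqlin(C,d,[],[]); % lsqlin with no constraints
% norm(x_backslash - x_lsqlin) should be near zero
```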
Because the problem being solved is always convex, lsqlin will find a global, although not necessarily unique, solution.
Better numerical results are likely if you specify equalities explicitly, using Aeq and beq, instead of implicitly, using lb and ub.
If x0 is not strictly feasible, lsqlin chooses a new strictly feasible (centered) starting point.
If components of x have no upper (or lower) bounds, set the corresponding components of ub (or lb) to Inf (or -Inf for lb) as opposed to an arbitrary but very large positive (or negative in the case of lower bounds) number.
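For instance, a brief sketch of this advice (the specific bound values are illustrative; C, d, A, b as above):

```matlab
% Use Inf/-Inf for unbounded components instead of, say, 1e10:
lb = [-0.1; -Inf; -0.1; -Inf];   % no lower bound on components 2 and 4
ub = Inf(4,1);                   % no upper bounds at all
x = lsqlin(C,d,A,b,[],[],lb,ub);
```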
The trust-region-reflective algorithm does not allow equal upper and lower bounds. For example, if lb(2) == ub(2), then lsqlin gives the following error:
Equal upper and lower bounds not permitted in this large-scale method. Use equality constraints and the medium-scale method instead.
At this time, you must use the active-set algorithm to solve equality constrained problems.
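A minimal sketch of an equality-constrained call that forces the active-set algorithm (the constraint shown is illustrative, not taken from this page; C and d as above):

```matlab
% Require the four solution components to sum to 1, via Aeq*x = beq:
Aeq  = ones(1,4);
beq  = 1;
opts = optimoptions('lsqlin','Algorithm','active-set');
x = lsqlin(C,d,[],[],Aeq,beq,[],[],[],opts);
```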
If the matrices C, A, or Aeq are sparse, and the problem formulation is not solvable using the trust-region-reflective algorithm, lsqlin warns that the matrices are converted to full.
Warning: This problem formulation not yet available for sparse matrices. Converting to full to solve.
When a problem is infeasible, lsqlin gives a warning:
Warning: The constraints are overly stringent; there is no feasible solution.
In this case, lsqlin produces a result that minimizes the worst case constraint violation.
When the equality constraints are inconsistent, lsqlin gives
Warning: The equality constraints are overly stringent; there is no feasible solution.
At this time, the only levels of display, using the Display option in options, are 'off' and 'final'; iterative output using 'iter' is not available.
Trust-Region-Reflective Algorithm Requirements
|For Large Problems|C should be sparse.|
When the problem given to lsqlin has only upper and lower bounds (i.e., no linear inequalities or equalities are specified) and the matrix C has at least as many rows as columns, the default algorithm is trust-region-reflective. This method is a subspace trust-region method based on the interior-reflective Newton method described in [1]. Each iteration involves the approximate solution of a large linear system using the method of preconditioned conjugate gradients (PCG). See Trust-Region Methods for Nonlinear Minimization and Preconditioned Conjugate Gradient Method.
lsqlin uses the active-set algorithm when you specify it with optimoptions, or when you give linear inequalities or equalities. The algorithm is based on quadprog, which uses an active set method similar to that described in [2]. It finds an initial feasible solution by first solving a linear programming problem. See the active-set quadprog Algorithm.
[1] Coleman, T.F. and Y. Li, "A Reflective Newton Method for Minimizing a Quadratic Function Subject to Bounds on Some of the Variables," SIAM Journal on Optimization, Vol. 6, Number 4, pp. 1040-1058, 1996.
[2] Gill, P.E., W. Murray, and M.H. Wright, Practical Optimization, Academic Press, London, UK, 1981.