# lsqlin

Solve constrained linear least-squares problems

Linear least-squares solver with bounds or linear constraints.

Solves least-squares curve fitting problems of the form

$$\min_{x}\ \tfrac{1}{2}\,{\|C\cdot x-d\|}_{2}^{2}\quad\text{such that}\quad A\cdot x\le b,\quad Aeq\cdot x=beq,\quad lb\le x\le ub.$$

### Note

`lsqlin` applies only to the solver-based approach. For a discussion of the two optimization approaches, see First Choose Problem-Based or Solver-Based Approach.

## Syntax

``x = lsqlin(C,d,A,b)``
``x = lsqlin(C,d,A,b,Aeq,beq,lb,ub)``
``x = lsqlin(C,d,A,b,Aeq,beq,lb,ub,x0,options)``
``x = lsqlin(problem)``
``[x,resnorm,residual,exitflag,output,lambda] = lsqlin(___)``

## Description


`x = lsqlin(C,d,A,b)` solves the linear system `C*x = d` in the least-squares sense, subject to `A*x ≤ b`.


`x = lsqlin(C,d,A,b,Aeq,beq,lb,ub)` adds linear equality constraints `Aeq*x = beq` and bounds `lb ≤ x ≤ ub`. If you do not need certain constraints such as `Aeq` and `beq`, set them to `[]`. If `x(i)` is unbounded below, set `lb(i) = -Inf`, and if `x(i)` is unbounded above, set `ub(i) = Inf`.
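For instance, here is a minimal sketch (with made-up data) of a call that has one equality constraint and a lower bound on the first component only, passing `[]` for the unused inequality constraints and upper bounds:

```
% Hypothetical data, sized so that C*x - d makes sense for a 2-element x
C = rand(5,2);  d = rand(5,1);
Aeq = [1 1];  beq = 1;     % one equality: x(1) + x(2) = 1
lb = [0; -Inf];            % x(1) >= 0, x(2) unbounded below

% No linear inequalities and no upper bounds, so pass [] for A, b, and ub
x = lsqlin(C,d,[],[],Aeq,beq,lb,[]);
```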


`x = lsqlin(C,d,A,b,Aeq,beq,lb,ub,x0,options)` minimizes with an initial point `x0` and the optimization options specified in `options`. Use `optimoptions` to set these options. If you do not want to include an initial point, set the `x0` argument to `[]`.

`x = lsqlin(problem)` finds the minimum for `problem`, where `problem` is a structure. Create the `problem` structure by exporting a problem from the Optimization app, as described in Exporting Your Work. Or create a `problem` structure from an `OptimizationProblem` object by using `prob2struct`.
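As a rough sketch of the structure route, you can also assemble `problem` by hand, using the field names listed for `problem` under Input Arguments (the data `C`, `d`, `A`, and `b` is assumed to exist already, for example from the examples below):

```
% Hand-built problem structure (sketch); omit fields you do not need
problem.C = C;
problem.d = d;
problem.Aineq = A;
problem.bineq = b;
problem.solver = 'lsqlin';                   % solver field is required
problem.options = optimoptions('lsqlin');    % options field is required
x = lsqlin(problem);
```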


`[x,resnorm,residual,exitflag,output,lambda] = lsqlin(___)`, for any input arguments described above, returns:

- The squared 2-norm of the residual, `resnorm` = ${\|C\cdot x-d\|}_{2}^{2}$
- The residual `residual = C*x - d`
- A value `exitflag` describing the exit condition
- A structure `output` containing information about the optimization process
- A structure `lambda` containing the Lagrange multipliers

The factor ½ in the definition of the problem affects the values in the `lambda` structure.

## Examples


### Least Squares with Linear Inequality Constraints

Find the `x` that minimizes the norm of `C*x - d` for an overdetermined problem with linear inequality constraints.

Specify the problem and constraints.

```
C = [0.9501    0.7620    0.6153    0.4057
     0.2311    0.4564    0.7919    0.9354
     0.6068    0.0185    0.9218    0.9169
     0.4859    0.8214    0.7382    0.4102
     0.8912    0.4447    0.1762    0.8936];
d = [0.0578
     0.3528
     0.8131
     0.0098
     0.1388];
A = [0.2027    0.2721    0.7467    0.4659
     0.1987    0.1988    0.4450    0.4186
     0.6037    0.0152    0.9318    0.8462];
b = [0.5251
     0.2026
     0.6721];
```

Call `lsqlin` to solve the problem.

`x = lsqlin(C,d,A,b)`
```
Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in
feasible directions, to within the default value of the optimality tolerance,
and constraints are satisfied to within the default value of the constraint tolerance.
```
```
x = 4×1

    0.1299
   -0.5757
    0.4251
    0.2438
```

### Least Squares with Linear Constraints and Bounds

Find the `x` that minimizes the norm of `C*x - d` for an overdetermined problem with linear equality and inequality constraints and bounds.

Specify the problem and constraints.

```
C = [0.9501    0.7620    0.6153    0.4057
     0.2311    0.4564    0.7919    0.9354
     0.6068    0.0185    0.9218    0.9169
     0.4859    0.8214    0.7382    0.4102
     0.8912    0.4447    0.1762    0.8936];
d = [0.0578
     0.3528
     0.8131
     0.0098
     0.1388];
A = [0.2027    0.2721    0.7467    0.4659
     0.1987    0.1988    0.4450    0.4186
     0.6037    0.0152    0.9318    0.8462];
b = [0.5251
     0.2026
     0.6721];
Aeq = [3 5 7 9];
beq = 4;
lb = -0.1*ones(4,1);
ub = 2*ones(4,1);
```

Call `lsqlin` to solve the problem.

`x = lsqlin(C,d,A,b,Aeq,beq,lb,ub)`
```
Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in
feasible directions, to within the default value of the optimality tolerance,
and constraints are satisfied to within the default value of the constraint tolerance.
```
```
x = 4×1

   -0.1000
   -0.1000
    0.1599
    0.4090
```

### Linear Least Squares with Nondefault Options

This example shows how to use nondefault options for linear least squares.

Set options to use the `'interior-point'` algorithm and to give iterative display.

`options = optimoptions('lsqlin','Algorithm','interior-point','Display','iter');`

Set up a linear least-squares problem.

```
C = [0.9501    0.7620    0.6153    0.4057
     0.2311    0.4564    0.7919    0.9354
     0.6068    0.0185    0.9218    0.9169
     0.4859    0.8214    0.7382    0.4102
     0.8912    0.4447    0.1762    0.8936];
d = [0.0578
     0.3528
     0.8131
     0.0098
     0.1388];
A = [0.2027    0.2721    0.7467    0.4659
     0.1987    0.1988    0.4450    0.4186
     0.6037    0.0152    0.9318    0.8462];
b = [0.5251
     0.2026
     0.6721];
```

Run the problem.

`x = lsqlin(C,d,A,b,[],[],[],[],[],options)`
```
 Iter            Fval  Primal Infeas   Dual Infeas   Complementarity
    0   -7.687420e-02   1.600492e+00   6.150431e-01     1.000000e+00
    1   -7.687419e-02   8.002458e-04   3.075216e-04     2.430833e-01
    2   -3.162837e-01   4.001229e-07   1.537608e-07     5.945636e-02
    3   -3.760545e-01   2.000616e-10   2.036997e-08     1.370933e-02
    4   -3.912129e-01   1.000866e-13   1.006816e-08     2.548273e-03
    5   -3.948062e-01   2.220446e-16   2.955102e-09     4.295807e-04
    6   -3.953277e-01   2.775558e-17   1.237758e-09     3.102850e-05
    7   -3.953581e-01   2.775558e-17   1.645862e-10     1.138719e-07
    8   -3.953582e-01   2.775558e-17   2.399608e-13     5.693290e-11

Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in
feasible directions, to within the default value of the optimality tolerance,
and constraints are satisfied to within the default value of the constraint tolerance.
```
```
x = 4×1

    0.1299
   -0.5757
    0.4251
    0.2438
```

### Return All Outputs

Obtain and interpret all `lsqlin` outputs.

Define a problem with linear inequality constraints and bounds. The problem is overdetermined because the `C` matrix has five rows and only four columns. This means the problem has four unknowns and five conditions, even before including the linear constraints and bounds.

```
C = [0.9501    0.7620    0.6153    0.4057
     0.2311    0.4564    0.7919    0.9354
     0.6068    0.0185    0.9218    0.9169
     0.4859    0.8214    0.7382    0.4102
     0.8912    0.4447    0.1762    0.8936];
d = [0.0578
     0.3528
     0.8131
     0.0098
     0.1388];
A = [0.2027    0.2721    0.7467    0.4659
     0.1987    0.1988    0.4450    0.4186
     0.6037    0.0152    0.9318    0.8462];
b = [0.5251
     0.2026
     0.6721];
lb = -0.1*ones(4,1);
ub = 2*ones(4,1);
```

Set options to use the `'interior-point'` algorithm.

`options = optimoptions('lsqlin','Algorithm','interior-point');`

The `'interior-point'` algorithm does not use an initial point, so set `x0` to `[]`.

`x0 = [];`

Call `lsqlin` with all outputs.

```
[x,resnorm,residual,exitflag,output,lambda] = ...
    lsqlin(C,d,A,b,[],[],lb,ub,x0,options)
```
```
Minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in
feasible directions, to within the default value of the optimality tolerance,
and constraints are satisfied to within the default value of the constraint tolerance.
```
```
x = 4×1

   -0.1000
   -0.1000
    0.2152
    0.3502
```
```
resnorm = 0.1672
```
```
residual = 5×1

    0.0455
    0.0764
   -0.3562
    0.1620
    0.0784
```
```
exitflag = 1
```
```
output = struct with fields:
            message: 'Minimum found that satisfies the constraints....'
          algorithm: 'interior-point'
      firstorderopt: 4.3374e-11
    constrviolation: 0
         iterations: 6
       linearsolver: 'dense'
       cgiterations: []
```
```
lambda = struct with fields:
    ineqlin: [3x1 double]
      eqlin: [0x1 double]
      lower: [4x1 double]
      upper: [4x1 double]
```

Examine the nonzero Lagrange multiplier fields in more detail. First examine the Lagrange multipliers for the linear inequality constraint.

`lambda.ineqlin`
```
ans = 3×1

    0.0000
    0.2392
    0.0000
```

Lagrange multipliers are nonzero exactly when the solution is on the corresponding constraint boundary. In other words, Lagrange multipliers are nonzero when the corresponding constraint is active. `lambda.ineqlin(2)` is nonzero. This means that the second element in `A*x` should equal the second element in `b`, because the constraint is active.

`[A(2,:)*x,b(2)]`
```
ans = 1×2

    0.2026    0.2026
```

Now examine the Lagrange multipliers for the lower and upper bound constraints.

`lambda.lower`
```
ans = 4×1

    0.0409
    0.2784
    0.0000
    0.0000
```
`lambda.upper`
```
ans = 4×1

     0
     0
     0
     0
```

The first two elements of `lambda.lower` are nonzero. You see that `x(1)` and `x(2)` are at their lower bounds, `-0.1`. All elements of `lambda.upper` are essentially zero, and you see that all components of `x` are less than their upper bound, `2`.
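As a quick check, reusing the variables from this example, you can display the solution next to its bounds; the first two rows show `x(1)` and `x(2)` sitting at the lower bound:

```
[x lb ub]    % columns: solution, lower bound, upper bound
```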

## Input Arguments


`C`: Multiplier matrix, specified as a matrix of doubles. `C` represents the multiplier of the solution `x` in the expression `C*x - d`. `C` is `M`-by-`N`, where `M` is the number of equations, and `N` is the number of elements of `x`.

Example: `C = [1,4;2,5;7,8]`

Data Types: `double`

`d`: Constant vector, specified as a vector of doubles. `d` represents the additive constant term in the expression `C*x - d`. `d` is `M`-by-`1`, where `M` is the number of equations.

Example: `d = [5;0;-12]`

Data Types: `double`

`A`: Linear inequality constraint matrix, specified as a matrix of doubles. `A` represents the linear coefficients in the constraints `A*x ≤ b`. `A` has size `Mineq`-by-`N`, where `Mineq` is the number of constraints and `N` is the number of elements of `x`. To save memory, pass `A` as a sparse matrix.

Example: `A = [4,3;2,0;4,-1];` means three linear inequalities (three rows) for two decision variables (two columns).

Data Types: `double`

`b`: Linear inequality constraint vector, specified as a vector of doubles. `b` represents the constant vector in the constraints `A*x ≤ b`. `b` has length `Mineq`, where `A` is `Mineq`-by-`N`.

Example: `[4,0]`

Data Types: `double`

`Aeq`: Linear equality constraint matrix, specified as a matrix of doubles. `Aeq` represents the linear coefficients in the constraints `Aeq*x = beq`. `Aeq` has size `Meq`-by-`N`, where `Meq` is the number of constraints and `N` is the number of elements of `x`. To save memory, pass `Aeq` as a sparse matrix.

Example: `Aeq = [4,3;2,0;4,-1];` means three linear equalities (three rows) for two decision variables (two columns).

Data Types: `double`

`beq`: Linear equality constraint vector, specified as a vector of doubles. `beq` represents the constant vector in the constraints `Aeq*x = beq`. `beq` has length `Meq`, where `Aeq` is `Meq`-by-`N`.

Example: `[4,0]`

Data Types: `double`

`lb`: Lower bounds, specified as a vector or array of doubles. `lb` represents the lower bounds elementwise in `lb ≤ x ≤ ub`.

Internally, `lsqlin` converts an array `lb` to the vector `lb(:)`.

Example: `lb = [0;-Inf;4]` means `x(1) ≥ 0`, `x(3) ≥ 4`.

Data Types: `double`

`ub`: Upper bounds, specified as a vector or array of doubles. `ub` represents the upper bounds elementwise in `lb ≤ x ≤ ub`.

Internally, `lsqlin` converts an array `ub` to the vector `ub(:)`.

Example: `ub = [Inf;4;10]` means `x(2) ≤ 4`, `x(3) ≤ 10`.

Data Types: `double`

`x0`: Initial point for the solution process, specified as a vector or array of doubles. `x0` is used only by the `'trust-region-reflective'` algorithm, and is optional.

If you do not provide an `x0` for the `'trust-region-reflective'` algorithm, `lsqlin` sets `x0` to the zero vector. If any component of this zero vector `x0` violates the bounds, `lsqlin` sets `x0` to a point in the interior of the box defined by the bounds.

Example: `x0 = [4;-3]`

Data Types: `double`

`options`: Options for `lsqlin`, specified as the output of the `optimoptions` function or the Optimization app.

Some options are absent from the `optimoptions` display. These options appear in italics in the following lists. For details, see View Options.

All Algorithms

`Algorithm`: Choose the algorithm, either `'interior-point'` (default) or `'trust-region-reflective'`. The `'trust-region-reflective'` algorithm allows only upper and lower bounds, meaning no linear inequalities or equalities. If you specify both the `'trust-region-reflective'` algorithm and linear constraints, `lsqlin` uses the `'interior-point'` algorithm instead. The `'trust-region-reflective'` algorithm does not allow equal upper and lower bounds. For more information on choosing the algorithm, see Choosing the Algorithm and the sketch after this list.

*Diagnostics*: Display diagnostic information about the function to be minimized or solved. The choices are `'on'` or the default `'off'`.

`Display`: Level of display returned to the command line. `'off'` or `'none'` displays no output. `'final'` (default) displays just the final output. The `'interior-point'` algorithm allows additional values: `'iter'` gives iterative display, `'iter-detailed'` gives iterative display with a detailed exit message, and `'final-detailed'` displays just the final output with a detailed exit message.

`MaxIterations`: Maximum number of iterations allowed, a positive integer. The default value is `200`. For `optimset`, the name is `MaxIter`. See Current and Legacy Option Name Tables.
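For example, a minimal sketch of the fallback behavior described for the `Algorithm` option, reusing `C`, `d`, `A`, and `b` from the examples above:

```
% Requesting 'trust-region-reflective' while also passing linear inequality
% constraints makes lsqlin use the 'interior-point' algorithm instead
opts = optimoptions('lsqlin','Algorithm','trust-region-reflective');
x = lsqlin(C,d,A,b,[],[],[],[],[],opts);
```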

`trust-region-reflective` Algorithm Options

`FunctionTolerance`: Termination tolerance on the function value, a positive scalar. The default is `100*eps`, about `2.2204e-14`. For `optimset`, the name is `TolFun`. See Current and Legacy Option Name Tables.

`JacobianMultiplyFcn`: Jacobian multiply function, specified as a function handle. For large-scale structured problems, this function should compute the Jacobian matrix product `C*Y`, `C'*Y`, or `C'*(C*Y)` without actually forming `C`. Write the function in the form `W = jmfun(Jinfo,Y,flag)`, where `Jinfo` contains a matrix used to compute `C*Y` (or `C'*Y`, or `C'*(C*Y)`). `jmfun` must compute one of three different products, depending on the value of `flag` that `lsqlin` passes:

- If `flag == 0`, then `W = C'*(C*Y)`.
- If `flag > 0`, then `W = C*Y`.
- If `flag < 0`, then `W = C'*Y`.

In each case, `jmfun` need not form `C` explicitly. `lsqlin` uses `Jinfo` to compute the preconditioner. See Passing Extra Parameters for information on how to supply extra parameters if necessary. See Jacobian Multiply Function with Linear Least Squares for an example, and the sketch after this list. For `optimset`, the name is `JacobMult`. See Current and Legacy Option Name Tables.

*MaxPCGIter*: Maximum number of PCG (preconditioned conjugate gradient) iterations, a positive scalar. The default is `max(1,floor(numberOfVariables/2))`. For more information, see Trust-Region-Reflective Algorithm.

`OptimalityTolerance`: Termination tolerance on the first-order optimality, a positive scalar. The default is `100*eps`, about `2.2204e-14`. See First-Order Optimality Measure. For `optimset`, the name is `TolFun`. See Current and Legacy Option Name Tables.

*PrecondBandWidth*: Upper bandwidth of the preconditioner for PCG (preconditioned conjugate gradient). By default, diagonal preconditioning is used (upper bandwidth of 0). For some problems, increasing the bandwidth reduces the number of PCG iterations. Setting `PrecondBandWidth` to `Inf` uses a direct factorization (Cholesky) rather than conjugate gradients (CG). The direct factorization is computationally more expensive than CG, but produces a better quality step toward the solution. For more information, see Trust-Region-Reflective Algorithm.

`SubproblemAlgorithm`: Determines how the iteration step is calculated. The default, `'cg'`, takes a faster but less accurate step than `'factorization'`. See Trust-Region-Reflective Least Squares.

*TolPCG*: Termination tolerance on the PCG (preconditioned conjugate gradient) iteration, a positive scalar. The default is `0.1`.

`TypicalX`: Typical `x` values. The number of elements in `TypicalX` is equal to the number of variables. The default value is `ones(numberofvariables,1)`. `lsqlin` uses `TypicalX` internally for scaling. `TypicalX` has an effect only when `x` has unbounded components, and when a `TypicalX` value for an unbounded component is larger than `1`.
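A minimal sketch of a Jacobian multiply function follows. Purely for illustration, `Jinfo` is assumed to be the matrix `C` itself; in a genuinely large-scale problem, `Jinfo` would instead hold whatever compact data lets you apply `C` and `C'` without forming `C` explicitly.

```
function W = jmfun(Jinfo,Y,flag)
% Sketch of a Jacobian multiply function for lsqlin.
% Here Jinfo plays the role of C (an assumption for illustration).
    if flag == 0
        W = Jinfo'*(Jinfo*Y);   % C'*(C*Y)
    elseif flag > 0
        W = Jinfo*Y;            % C*Y
    else
        W = Jinfo'*Y;           % C'*Y
    end
end
```

You would then pass the handle through `optimoptions('lsqlin','Algorithm','trust-region-reflective','JacobianMultiplyFcn',@jmfun)`; see Jacobian Multiply Function with Linear Least Squares for the full pattern.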

`interior-point` Algorithm Options

`ConstraintTolerance`: Tolerance on the constraint violation, a positive scalar. The default is `1e-8`. For `optimset`, the name is `TolCon`. See Current and Legacy Option Name Tables.

`LinearSolver`: Type of internal linear solver in the algorithm (see the sketch after this list):

- `'auto'` (default): Use `'sparse'` if the `C` matrix is sparse, `'dense'` otherwise.
- `'sparse'`: Use sparse linear algebra. See Sparse Matrices (MATLAB).
- `'dense'`: Use dense linear algebra.

`OptimalityTolerance`: Termination tolerance on the first-order optimality, a positive scalar. The default is `1e-8`. See First-Order Optimality Measure. For `optimset`, the name is `TolFun`. See Current and Legacy Option Name Tables.

`StepTolerance`: Termination tolerance on `x`, a positive scalar. The default is `1e-12`. For `optimset`, the name is `TolX`. See Current and Legacy Option Name Tables.
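For instance, a sketch of choosing the internal solver explicitly, reusing the example data with `C` converted to sparse storage:

```
% Store C as a sparse matrix and force the sparse internal solver
Cs = sparse(C);
opts = optimoptions('lsqlin','Algorithm','interior-point', ...
    'LinearSolver','sparse');
x = lsqlin(Cs,d,A,b,[],[],[],[],[],opts);
```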

`problem`: Optimization problem, specified as a structure with the following fields.

- `C`: Matrix multiplier in the term `C*x - d`
- `d`: Additive constant in the term `C*x - d`
- `Aineq`: Matrix for linear inequality constraints
- `bineq`: Vector for linear inequality constraints
- `Aeq`: Matrix for linear equality constraints
- `beq`: Vector for linear equality constraints
- `lb`: Vector of lower bounds
- `ub`: Vector of upper bounds
- `x0`: Initial point for `x`
- `solver`: `'lsqlin'`
- `options`: Options created with `optimoptions`

Create the `problem` structure by exporting a problem from the Optimization app, as described in Exporting Your Work.

Data Types: `struct`

## Output Arguments


`x`: Solution, returned as a vector that minimizes the norm of `C*x-d` subject to all bounds and linear constraints.

`resnorm`: Objective value, returned as the scalar value `norm(C*x-d)^2`.

`residual`: Solution residuals, returned as the vector `C*x-d`.
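The two quantities are consistent with each other; a small sketch of the check, assuming `x`, `resnorm`, and `residual` come from an earlier `lsqlin` call with data `C` and `d`:

```
max(abs(residual - (C*x - d)))   % expected to be near zero
abs(resnorm - norm(C*x - d)^2)   % expected to be near zero
```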

`exitflag`: Algorithm stopping condition, returned as an integer identifying the reason the algorithm stopped. The following lists the values of `exitflag` and the corresponding reasons `lsqlin` stopped.

- `3`: Change in the residual was smaller than the specified tolerance `options.FunctionTolerance` (`'trust-region-reflective'` algorithm).
- `2`: Step size smaller than `options.StepTolerance`, constraints satisfied (`'interior-point'` algorithm).
- `1`: Function converged to a solution `x`.
- `0`: Number of iterations exceeded `options.MaxIterations`.
- `-2`: The problem is infeasible. Or, for the `'interior-point'` algorithm, step size smaller than `options.StepTolerance`, but constraints are not satisfied.
- `-4`: Ill-conditioning prevents further optimization.
- `-8`: Unable to compute a step direction.

The exit message for the `interior-point` algorithm can give more details on the reason `lsqlin` stopped, such as exceeding a tolerance. See Exit Flags and Exit Messages.

`output`: Solution process summary, returned as a structure containing information about the optimization process.

- `iterations`: Number of iterations the solver took.
- `algorithm`: Optimization algorithm used, either `'interior-point'` or `'trust-region-reflective'`.
- `constrviolation`: Constraint violation that is positive for violated constraints (not returned for the `'trust-region-reflective'` algorithm): `constrviolation = max([0;norm(Aeq*x-beq, inf);(lb-x);(x-ub);(A*x-b)])`
- `message`: Exit message.
- `firstorderopt`: First-order optimality at the solution. See First-Order Optimality Measure.
- `linearsolver`: Type of internal linear solver, `'dense'` or `'sparse'` (`'interior-point'` algorithm only).
- `cgiterations`: Number of conjugate gradient iterations the solver performed. Nonempty only for the `'trust-region-reflective'` algorithm.

`lambda`: Lagrange multipliers, returned as a structure with the following fields.

- `lower`: Lagrange multipliers for the lower bounds `lb`
- `upper`: Lagrange multipliers for the upper bounds `ub`
- `ineqlin`: Lagrange multipliers for the linear inequalities
- `eqlin`: Lagrange multipliers for the linear equalities

## Tips

• For problems with no constraints, you can use `mldivide` (matrix left division). When you have no constraints, `lsqlin` returns `x = C\d` (see the sketch after this list).

• Because the problem being solved is always convex, `lsqlin` finds a global, although not necessarily unique, solution.

• Better numerical results are likely if you specify equalities explicitly, using `Aeq` and `beq`, instead of implicitly, using `lb` and `ub`.

• The `trust-region-reflective` algorithm does not allow equal upper and lower bounds. Use another algorithm for this case.

• If the specified input bounds for a problem are inconsistent, the output `x` is `x0` and the outputs `resnorm` and `residual` are `[]`.

• You can solve some large structured problems, including those where the `C` matrix is too large to fit in memory, using the `trust-region-reflective` algorithm with a Jacobian multiply function. For information, see trust-region-reflective Algorithm Options.
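For the first tip, here is a minimal sketch comparing the two approaches on unconstrained data (reusing `C` and `d` from the examples above):

```
% With no constraints, lsqlin and matrix left division give the same answer
x_lsq = lsqlin(C,d,[],[]);
x_mld = C\d;
norm(x_lsq - x_mld)   % expected to be near zero
```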

## Algorithms


### Trust-Region-Reflective Algorithm

This method is a subspace trust-region method based on the interior-reflective Newton method described in [1]. Each iteration involves the approximate solution of a large linear system using the method of preconditioned conjugate gradients (PCG). See Trust-Region-Reflective Least Squares, and in particular Large Scale Linear Least Squares.

### Interior-Point Algorithm

The `'interior-point'` algorithm is based on the `quadprog` `'interior-point-convex'` algorithm. See Interior-Point Linear Least Squares.

## References

[1] Coleman, T. F. and Y. Li. “A Reflective Newton Method for Minimizing a Quadratic Function Subject to Bounds on Some of the Variables,” SIAM Journal on Optimization, Vol. 6, Number 4, pp. 1040–1058, 1996.

[2] Gill, P. E., W. Murray, and M. H. Wright. Practical Optimization, Academic Press, London, UK, 1981.
