
Compute solution to given system of LMIs

[tmin,xfeas] = feasp(lmisys,options,target)

The function `feasp` computes a solution `xfeas` (if any) of the system of LMIs described by `lmisys`. The vector `xfeas` is a particular value of the decision variables for which all LMIs are satisfied.

Given the LMI system

$${N}^{T}L(x)N\le {M}^{T}R(x)M,$$ | (1-3) |

`xfeas` is computed by solving the auxiliary convex program:

$$\text{Minimize }t\text{ subject to }{N}^{T}L(x)N-{M}^{T}R(x)M\le tI.$$

The global minimum of this program is the scalar value `tmin` returned as first output argument by `feasp`. The LMI constraints are feasible if `tmin` ≤ 0 and strictly feasible if `tmin` < 0. If the problem is feasible but not strictly feasible, `tmin` is positive and very small. Some post-analysis may then be required to decide whether `xfeas` is close enough to feasible.

The optional argument `target` sets a target value for `tmin`. The optimization code terminates as soon as a value of *t* below this target is reached. The default value is `target = 0`.

Note that `xfeas` is a solution in terms of the decision variables and not in terms of the matrix variables of the problem. Use `dec2mat` to derive feasible values of the matrix variables from `xfeas`.

The optional argument `options` gives access to certain control parameters of the optimization algorithm. This five-entry vector is organized as follows:

- `options(1)` is not used.
- `options(2)` sets the maximum number of iterations allowed to be performed by the optimization procedure (100 by default).
- `options(3)` resets the *feasibility radius*. Setting `options(3)` to a value *R* > 0 further constrains the decision vector *x* = (*x*_{1}, . . ., *x*_{N}) to lie within the ball

  $$\sum _{i=1}^{N}{x}_{i}^{2}<{R}^{2}$$

  In other words, the Euclidean norm of `xfeas` should not exceed *R*. The feasibility radius is a simple means of controlling the magnitude of solutions. Upon termination, `feasp` displays the *f-radius saturation*, that is, the norm of the solution as a percentage of the feasibility radius *R*. The default value is *R* = 10^{9}. Setting `options(3)` to a negative value activates the "flexible bound" mode. In this mode, the feasibility radius is initially set to 10^{8}, and increased if necessary during the course of optimization.
- `options(4)` helps speed up termination. When set to an integer value *J* > 0, the code terminates if *t* did not decrease by more than one percent in relative terms during the last *J* iterations. The default value is 10. This parameter trades off speed vs. accuracy: if set to a small value (< 10), the code terminates quickly but without guarantee of accuracy; a large value results in natural convergence at the expense of a possibly large number of iterations.
- `options(5) = 1` turns off the trace of execution of the optimization procedure. Resetting `options(5)` to zero (default value) turns it back on.

Setting `options(i)` to zero is equivalent to setting the corresponding control parameter to its default value. Consequently, there is no need to redefine the entire vector when changing just one control parameter. To set the maximum number of iterations to 10, for instance, it suffices to type

options = zeros(1,5)   % default value for all parameters
options(2) = 10

When the least-squares problem solved at each iteration becomes ill conditioned, the `feasp` solver switches from Cholesky-based to QR-based linear algebra (see Memory Problems for details). Since the QR mode typically requires much more memory, MATLAB^{®} may run out of memory and display the message

??? Error using ==> feaslv
Out of memory. Type HELP MEMORY for your options.

You should then ask your system manager to increase your swap space or, if no additional swap space is available, set `options(4) = 1`. This will prevent switching to QR, and `feasp` will terminate when Cholesky fails due to numerical instabilities.
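For instance, a call with this setting might look as follows (a sketch; `lmisys` stands for any LMI system previously created with `getlmis`):

```
options = zeros(1,5);     % defaults for all other control parameters
options(4) = 1;           % J = 1: terminate as soon as t stalls; prevents the QR switch
[tmin,xfeas] = feasp(lmisys,options);
```

As noted above, this trades accuracy for speed, so the returned `tmin` may be less tight than with the default `options(4) = 10`.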

Consider the problem of finding *P* > *I* such
that

$${A}_{1}^{T}P+P{A}_{1}<0$$ | (1-4) |

$${A}_{2}^{T}P+P{A}_{2}<0$$ | (1-5) |

$${A}_{3}^{T}P+P{A}_{3}<0$$ | (1-6) |

with data

$$A_{1}=\left(\begin{array}{cc}-1& 2\\ 1& -3\end{array}\right),\quad A_{2}=\left(\begin{array}{cc}-0.8& 1.5\\ 1.3& -2.7\end{array}\right),\quad A_{3}=\left(\begin{array}{cc}-1.4& 0.9\\ 0.7& -2.0\end{array}\right)$$

This problem arises when studying the quadratic stability of
the polytope of matrices Co{*A*_{1}, *A*_{2}, *A*_{3}}.
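The `lmiterm` calls in the example below assume that the data matrices are already in the MATLAB workspace as `a1`, `a2`, and `a3`; for instance:

```
a1 = [-1    2  ;  1    -3  ];   % A1
a2 = [-0.8  1.5;  1.3  -2.7];   % A2
a3 = [-1.4  0.9;  0.7  -2.0];   % A3
```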

To assess feasibility with `feasp`, first enter the LMIs Equation 1-4 to Equation 1-6:

setlmis([])
p = lmivar(1,[2 1])

lmiterm([1 1 1 p],1,a1,'s')    % LMI #1
lmiterm([2 1 1 p],1,a2,'s')    % LMI #2
lmiterm([3 1 1 p],1,a3,'s')    % LMI #3
lmiterm([-4 1 1 p],1,1)        % LMI #4: P
lmiterm([4 1 1 0],1)           % LMI #4: I
lmis = getlmis

Then call `feasp` to find a feasible decision vector:

[tmin,xfeas] = feasp(lmis)

This returns `tmin = -3.1363`. Hence Equation 1-4 to Equation 1-6 is feasible and the dynamical system $$\dot{x}$$ = *A*(*t*)*x* is quadratically stable for *A*(*t*) ∊ Co{*A*_{1}, *A*_{2}, *A*_{3}}.

To obtain a Lyapunov matrix *P* proving the
quadratic stability, type

P = dec2mat(lmis,xfeas,p)

This returns

$$P=\left(\begin{array}{cc}270.8& 126.4\\ 126.4& 155.1\end{array}\right)$$
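This solution can be checked directly, for instance by verifying that each Lyapunov inequality holds and that *P* > *I* (a sketch, assuming `a1`, `a2`, `a3`, and `P` are in the workspace as above):

```
eig(a1'*P + P*a1)   % should all be negative (LMI 1-4)
eig(a2'*P + P*a2)   % should all be negative (LMI 1-5)
eig(a3'*P + P*a3)   % should all be negative (LMI 1-6)
eig(P - eye(2))     % should all be positive (P > I)
```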

It is possible to add further constraints on this feasibility problem. For instance, you can bound the Frobenius norm of *P* by 10 while asking `tmin` to be less than or equal to –1. This is done by

[tmin,xfeas] = feasp(lmis,[0,0,10,0,0],-1)

The third entry `10` of `options` sets the feasibility radius to 10 while the third argument `-1` sets the target value for `tmin`. This yields `tmin = -1.1745` and a matrix `P` with largest eigenvalue λ_{max}(*P*) = 9.6912.
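The quoted eigenvalue can be reproduced from `xfeas` (a sketch, reusing `lmis` and `p` from the example above):

```
P = dec2mat(lmis,xfeas,p)   % Lyapunov matrix for the constrained problem
max(eig(P))                 % largest eigenvalue of P
norm(P,'fro')               % Frobenius norm of P, kept below the feasibility radius 10
```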

The feasibility solver `feasp` is based on Nesterov and Nemirovski's Projective Method described in:

Nesterov, Y., and A. Nemirovski, *Interior Point Polynomial
Methods in Convex Programming: Theory and Applications*,
SIAM, Philadelphia, 1994.

Nemirovski, A., and P. Gahinet, "The Projective Method for Solving Linear Matrix Inequalities," *Proc. Amer. Contr. Conf.*, 1994, Baltimore, Maryland, pp. 840–844.

The optimization is performed by the C-MEX file `feaslv.mex`.
