
A *linear matrix inequality* (LMI) is any constraint of the form

$$A\left(x\right):={A}_{0}+{x}_{1}{A}_{1}+\dots +{x}_{N}{A}_{N}<0 \qquad \text{(3-1)}$$

where

- *x* = (*x*_{1}, . . . , *x*_{N}) is a vector of unknown scalars (the *decision* or *optimization* variables)
- *A*_{0}, . . . , *A*_{N} are given *symmetric* matrices
- < 0 stands for "negative definite," i.e., the largest eigenvalue of *A*(*x*) is negative
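The definitions above can be sketched numerically. The following is a minimal illustration (not part of the LMI Lab itself): it evaluates *A*(*x*) for a given decision vector and tests negative definiteness via the largest eigenvalue; the example matrices are invented for demonstration.

```python
import numpy as np

def lmi_value(A0, As, x):
    """Evaluate A(x) = A0 + x1*A1 + ... + xN*AN for symmetric matrices Ai."""
    return A0 + sum(xi * Ai for xi, Ai in zip(x, As))

def is_negative_definite(M, tol=1e-9):
    """A symmetric matrix is negative definite iff its largest eigenvalue is < 0."""
    return np.max(np.linalg.eigvalsh(M)) < -tol

# Hypothetical data with two decision variables:
A0 = np.array([[-1.0, 0.0], [0.0, -1.0]])
A1 = np.array([[ 1.0, 0.0], [0.0,  0.0]])
A2 = np.array([[ 0.0, 0.0], [0.0,  1.0]])

x = [0.5, 0.5]                  # a feasible point
M = lmi_value(A0, [A1, A2], x)  # A(x) = diag(-0.5, -0.5)
print(is_negative_definite(M))  # True
```

Choosing `x = [2.0, 0.0]` instead gives *A*(*x*) = diag(1, −1), which is indefinite, so the LMI fails there.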

Note that the constraints *A*(*x*)
> 0 and *A*(*x*) < *B*(*x*)
are special cases of Equation 3-1 since they can be rewritten as –*A*(*x*) < 0 and *A*(*x*) – *B*(*x*) < 0, respectively.

The LMI of Equation 3-1 is
a convex constraint on *x* since *A*(*y*)
< 0 and *A*(*z*) < 0 imply
that $$A\left(\frac{y+z}{2}\right)<0$$. As a result:

- Its solution set, called the *feasible set*, is a convex subset of *R*^{N}
- Finding a solution *x* to Equation 3-1, if any, is a convex optimization problem.
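The midpoint claim follows in one line from the definitions above: because *A*(·) is affine in *x*, its value at the midpoint is the average of the endpoint values, and the average of two negative definite matrices is negative definite:

```latex
A\left(\frac{y+z}{2}\right)
  = A_0 + \sum_{i=1}^{N} \frac{y_i + z_i}{2}\, A_i
  = \frac{1}{2}\bigl(A(y) + A(z)\bigr) < 0
```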

Convexity has an important consequence: even though Equation 3-1 has no analytical solution in general, it can be solved numerically with guarantees of finding a solution when one exists. Note that a system of LMI constraints can be regarded as a single LMI since

$$\{\begin{array}{c}{A}_{1}\left(x\right)<0\\ \vdots \\ {A}_{K}\left(x\right)<0\end{array}$$

is equivalent to

$$A\left(x\right):=\text{diag}\left({\text{A}}_{\text{1}}\left(x\right),\dots ,{\text{A}}_{\text{K}}\left(x\right)\right)<0$$

where diag (*A*_{1}(*x*),
. . . , *A*_{K}(*x*))
denotes the block-diagonal matrix with *A*_{1}(*x*),
. . . , *A*_{K}(*x*)
on its diagonal. Hence multiple LMI constraints can be imposed on
the vector of decision variables *x* without destroying
convexity.
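The diag(·) construction can be checked directly. Below is a sketch (with invented block values, not toolbox code) showing that the stacked LMI holds exactly when every block holds, since the eigenvalues of a block-diagonal matrix are the union of the blocks' eigenvalues.

```python
import numpy as np
from scipy.linalg import block_diag

# Hypothetical values of two LMI constraints at some fixed x:
A1x = np.array([[-2.0, 1.0], [1.0, -2.0]])   # A1(x), eigenvalues -1 and -3
A2x = np.array([[-1.0]])                     # A2(x)

# Stack them into the single LMI A(x) = diag(A1(x), A2(x)) < 0
Ax = block_diag(A1x, A2x)

# Each block negative definite  <=>  the block-diagonal matrix is too
each_block = all(np.max(np.linalg.eigvalsh(B)) < 0 for B in (A1x, A2x))
combined   = np.max(np.linalg.eigvalsh(Ax)) < 0
print(each_block, combined)  # True True
```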

In most control applications, LMIs do not naturally arise in the canonical form of Equation 3-1, but rather in the form

*L*(*X*_{1}, . . . , *X*_{n}) < *R*(*X*_{1}, . . . , *X*_{n})

where *L*(.) and *R*(.)
are affine functions of some structured *matrix* variables *X*_{1},
. . . , *X*_{n}. A simple
example is the Lyapunov inequality

$${A}^{T}X+XA<0 \qquad \text{(3-2)}$$

where the unknown *X* is a symmetric matrix.
Defining *x*_{1}, . . . , *x*_{N} as
the independent scalar entries of *X*, this LMI
could be rewritten in the form of Equation 3-1. Yet it is more convenient and efficient
to describe it in its natural form Equation 3-2, which is the approach taken in the
LMI Lab.
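The LMI Lab itself is a MATLAB interface, but the Lyapunov inequality of Equation 3-2 is easy to illustrate outside it. For a stable *A* (all eigenvalues in the open left half-plane), a minimal sketch in Python finds a feasible *X* by solving the Lyapunov *equation* *A*^{T}*X* + *XA* = –*I*; the matrix *A* below is invented for the example.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# A hypothetical stable matrix (eigenvalues -1 and -3)
A = np.array([[-1.0,  2.0],
              [ 0.0, -3.0]])

# Solve A^T X + X A = -I; for stable A the solution X is symmetric
# positive definite, hence a feasible point of the Lyapunov LMI (3-2).
X = solve_continuous_lyapunov(A.T, -np.eye(2))

residual = A.T @ X + X @ A                       # equals -I by construction
print(np.allclose(residual, -np.eye(2)))         # True
print(np.all(np.linalg.eigvalsh(X) > 0))         # True: X is positive definite
print(np.max(np.linalg.eigvalsh(residual)) < 0)  # True: the LMI (3-2) holds
```

Treating the entries of *X* as the scalar unknowns *x*_{1}, . . . , *x*_{N} would recover the canonical form of Equation 3-1, but as the text notes, working with the matrix variable directly is more convenient.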
