A linear matrix inequality (LMI) is any constraint of the form

    A(x) := A0 + x1A1 + . . . + xNAN < 0    (3-1)

where

- x = (x1, . . . , xN) is a vector of unknown scalars (the decision or optimization variables)
- A0, . . . , AN are given symmetric matrices
- < 0 stands for "negative definite," i.e., the largest eigenvalue of A(x) is negative
Note that the constraints A(x) > 0 and A(x) < B(x) are special cases of Equation 3-1 since they can be rewritten as –A(x) < 0 and A(x) – B(x) < 0, respectively.
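As a minimal numerical sketch of the definition, the affine map A(x) = A0 + x1A1 + x2A2 can be evaluated and tested for negative definiteness directly. The 2x2 matrices and the decision vector below are made up for illustration; a symmetric 2x2 matrix [[a, b], [b, c]] is negative definite iff a < 0 and its determinant is positive.

```python
def affine_map(A0, As, x):
    """Return A(x) = A0 + sum_i x[i] * As[i] as a 2x2 nested list."""
    A = [row[:] for row in A0]
    for xi, Ai in zip(x, As):
        for r in range(2):
            for c in range(2):
                A[r][c] += xi * Ai[r][c]
    return A

def is_neg_def_2x2(A):
    """Negative definiteness test for a symmetric 2x2 matrix:
    top-left entry negative and determinant positive."""
    return A[0][0] < 0 and A[0][0] * A[1][1] - A[0][1] * A[1][0] > 0

# Made-up data: A0 = -I, with A1 and A2 small symmetric perturbations.
A0 = [[-1.0, 0.0], [0.0, -1.0]]
A1 = [[0.5, 0.1], [0.1, 0.0]]
A2 = [[0.0, 0.2], [0.2, 0.5]]

x_feasible = (0.5, 0.5)    # A(x) stays negative definite at this x
print(is_neg_def_2x2(affine_map(A0, [A1, A2], x_feasible)))
```

Moving x far enough along A1 (e.g., x = (4, 0)) makes the top-left entry positive, so the constraint is violated; the feasible set is the region of x where the test succeeds.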
The LMI of Equation 3-1 is a convex constraint on x since A(y) < 0 and A(z) < 0 imply that A((y + z)/2) = (A(y) + A(z))/2 < 0. As a result,
Its solution set, called the feasible set, is a convex subset of RN
Finding a solution x to Equation 3-1, if any, is a convex optimization problem.
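The convexity argument can be checked numerically: for a one-variable LMI A(x1) = A0 + x1A1 < 0 (matrices made up for illustration), two feasible points y and z and their midpoint are all feasible, since A(.) is affine and the average of two negative definite matrices is negative definite.

```python
def is_neg_def_2x2(A):
    """Symmetric 2x2 matrix is negative definite iff the top-left
    entry is negative and the determinant is positive."""
    return A[0][0] < 0 and A[0][0] * A[1][1] - A[0][1] * A[1][0] > 0

def A_of(x1):
    """Affine map in one scalar variable: A(x1) = A0 + x1*A1 (made-up data)."""
    A0 = [[-2.0, 0.0], [0.0, -2.0]]
    A1 = [[1.0, 0.3], [0.3, 1.0]]
    return [[A0[r][c] + x1 * A1[r][c] for c in range(2)] for r in range(2)]

y, z = 0.2, 1.0        # two feasible points
mid = (y + z) / 2.0    # their midpoint is feasible too
print(all(is_neg_def_2x2(A_of(p)) for p in (y, z, mid)))
```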
Convexity has an important consequence: even though Equation 3-1 has no analytical solution in general, it can be solved numerically with guarantees of finding a solution when one exists. Note that a system of LMI constraints can be regarded as a single LMI since

    A1(x) < 0, . . . , AK(x) < 0

is equivalent to

    A(x) := diag (A1(x), . . . , AK(x)) < 0

where diag (A1(x), . . . , AK(x)) denotes the block-diagonal matrix with A1(x), . . . , AK(x) on its diagonal. Hence multiple LMI constraints can be imposed on the vector of decision variables x without destroying convexity.
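The block-diagonal stacking can be sketched with two made-up 2x2 blocks standing in for A1(x) and A2(x) at some fixed x. Negative definiteness of the 4x4 stack is tested with the leading-principal-minor criterion: M < 0 iff (-1)^k times the k-th leading principal minor is positive for every k.

```python
def block_diag(A, B):
    """Block-diagonal of two 2x2 matrices -> 4x4 nested list."""
    M = [[0.0] * 4 for _ in range(4)]
    for r in range(2):
        for c in range(2):
            M[r][c] = A[r][c]
            M[r + 2][c + 2] = B[r][c]
    return M

def is_neg_def(M):
    """Negative definite iff (-1)^k * (k-th leading principal minor) > 0
    for all k.  Uses a small recursive cofactor determinant."""
    def det(S):
        if len(S) == 1:
            return S[0][0]
        return sum((-1) ** j * S[0][j]
                   * det([row[:j] + row[j + 1:] for row in S[1:]])
                   for j in range(len(S)))
    return all((-1) ** k * det([row[:k] for row in M[:k]]) > 0
               for k in range(1, len(M) + 1))

A1x = [[-1.0, 0.2], [0.2, -1.0]]   # illustrative value of A1(x)
A2x = [[-0.5, 0.0], [0.0, -2.0]]   # illustrative value of A2(x)
print(is_neg_def(block_diag(A1x, A2x)))
```

The stack is negative definite exactly when every block is, which is why the system of K constraints and the single block-diagonal LMI have the same feasible set.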
In most control applications, LMIs do not naturally arise in the canonical form of Equation 3-1, but rather in the form
L(X1, . . . , Xn) < R(X1, . . . , Xn)
where L(.) and R(.) are affine functions of some structured matrix variables X1, . . . , Xn. A simple example is the Lyapunov inequality
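For the standard continuous-time Lyapunov inequality A'X + XA < 0 (with symmetric matrix variable X and given A), a candidate pair can be checked numerically. The matrices below are made up: A is a stable upper-triangular example, and X = I works here because A + A' is itself negative definite.

```python
def is_neg_def_2x2(M):
    """Symmetric 2x2 negative definiteness: top-left entry negative
    and determinant positive."""
    return M[0][0] < 0 and M[0][0] * M[1][1] - M[0][1] * M[1][0] > 0

def matmul(P, Q):
    return [[sum(P[r][k] * Q[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

def matadd(P, Q):
    return [[P[r][c] + Q[r][c] for c in range(2)] for r in range(2)]

A = [[-1.0, 0.5], [0.0, -2.0]]   # made-up stable matrix (eigenvalues -1, -2)
X = [[1.0, 0.0], [0.0, 1.0]]     # candidate symmetric variable X = I
AT = [[A[c][r] for c in range(2)] for r in range(2)]

# L(X) = A'X + XA, the left-hand side of the Lyapunov inequality
L = matadd(matmul(AT, X), matmul(X, A))
print(is_neg_def_2x2(L))
```

Note that the left-hand side is affine (in fact linear) in the entries of X, which is what places the Lyapunov inequality in the L(X) < R(X) form above.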