Approximately solve constant-matrix, upper bound µ-synthesis problem
[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure)
[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure,opt)
[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure,opt,qinit)
[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure,opt,'random',N)
cmsclsyn approximately solves the constant-matrix, upper bound µ-synthesis problem by the minimization

    min over Q ∊ C^(r×t) of the upper bound for µ_Δ(R + UQV)

for given matrices R ∊ C^(n×m), U ∊ C^(n×r), V ∊ C^(t×m), and a set Δ ⊂ C^(m×n). This applies to constant matrix data in R, U, and V.
[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure) minimizes, by choice of Q, the upper bound of mussv(R+U*Q*V,BlockStructure), returned as BND. QOPT is the optimum value of Q. The matrices R, U, and V are constant matrices of the appropriate dimensions. BlockStructure is a matrix specifying the perturbation blockstructure as defined for mussv.
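As a minimal usage sketch (the matrix sizes and the block structure below are illustrative assumptions for the example, not values prescribed by this page):

```matlab
% Illustrative data: R is 4-by-4, U is 4-by-2, V is 3-by-4, so Q is 2-by-3.
% BlockStructure = [2 2; 1 1; 1 1] declares one 2-by-2 complex full block
% and two scalar blocks, following the mussv conventions.
rng(0);                                  % reproducible example
R = randn(4,4) + 1i*randn(4,4);
U = randn(4,2);
V = randn(3,4);
BlockStructure = [2 2; 1 1; 1 1];

[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure);

% BND is the minimized mussv upper bound of R+U*QOPT*V; compare it with
% the upper bound of the unmodified matrix:
bnds0 = mussv(R,BlockStructure);
```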
[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure,OPT) uses the options specified by OPT in the calls to mussv. See mussv for more information on the option characters and the default value.
[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure,OPT,QINIT) initializes the iterative computation from Q = QINIT. Because of the nonconvexity of the overall problem, different starting points often yield different final answers. If QINIT is an N-D array, then the iterative computation is performed multiple times - the i'th optimization is initialized at QINIT(:,:,i). The output arguments are associated with the best solution obtained in this brute-force approach.
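As an illustrative sketch of supplying multiple starting points (the sizes, block structure, and the use of [] to request default options are assumptions for this example):

```matlab
rng(1);                                  % reproducible example
R = randn(4,4); U = randn(4,2); V = randn(3,4);
BlockStructure = [2 2; 1 1; 1 1];

% Stack 5 starting points Q = QINIT(:,:,i) into a 2-by-3-by-5 array;
% the best of the 5 local solutions is returned.
QINIT = randn(2,3,5);
[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure,[],QINIT);

% Alternatively, random starting points can be requested directly:
% [QOPT,BND] = cmsclsyn(R,U,V,BlockStructure,[],'random',5);
```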
[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure,OPT,'random',N) initializes the iterative computation from N random instances of QINIT. If NCU is the number of columns of U, and NRV is the number of rows of V, then each such starting point is an NCU-by-NRV matrix. The approximation
to solving the constant-matrix µ-synthesis problem is two-fold: only the upper bound for µ is minimized, and the minimization is not convex, hence the optimum is generally not found. If U is full column rank, or V is full row rank, then the problem can (and is) cast as a convex problem [Packard, Zhou, Pandey and Becker], and the global optimizer (for the upper bound for µ) is calculated.
The cmsclsyn algorithm is iterative, alternately holding Q fixed and computing the upper bound, followed by holding the upper bound multipliers fixed and minimizing the bound implied by the choice of Q. If U and V are square and invertible, then the optimization is reformulated (exactly) as a linear matrix inequality, and solved directly, without resorting to the iteration.
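A rough sketch of this alternation, using the D-scalings returned by mussv and a Frobenius-norm least-squares surrogate for the Q-step (the sizes, the fixed iteration count, and the least-squares update are illustrative assumptions, not the toolbox's internal code):

```matlab
rng(2);
R = randn(4,4); U = randn(4,2); V = randn(3,4);
BLK = [2 2; 1 1; 1 1];

Q = zeros(size(U,2), size(V,1));          % start from Q = 0
for k = 1:10
    % Step 1: with Q fixed, compute the mussv upper bound and extract
    % the scaling matrices (Dl*M/Dr has norm at most the upper bound).
    [bnds, muinfo] = mussv(R + U*Q*V, BLK);
    [~, VSigma] = mussvextract(muinfo);
    Dl = VSigma.DLeft;  Dr = VSigma.DRight;

    % Step 2: with the multipliers fixed, reduce the scaled norm by
    % choice of Q. Here a Frobenius-norm least-squares step stands in
    % for the exact norm minimization:
    Q = -pinv(Dl*U) * (Dl*R/Dr) * pinv(V/Dr);
end
```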
Packard, A.K., K. Zhou, P. Pandey, and G. Becker, "A collection of robust control problems leading to LMI's," 30th IEEE Conference on Decision and Control, Brighton, UK, 1991, pp. 1245-1250.