Approximately solve constant-matrix, upper bound µ-synthesis problem
[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure)
[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure,opt)
[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure,opt,qinit)
[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure,opt,'random',N)
cmsclsyn approximately solves the constant-matrix, upper bound µ-synthesis problem by the minimization

    min over Q ∊ C^{r×t} of the µ upper bound of (R + UQV)

for given matrices R ∊ C^{n×m}, U ∊ C^{n×r}, V ∊ C^{t×m}, and a perturbation set Δ ⊂ C^{m×n}. This applies to constant matrix data in R, U, and V.
[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure) minimizes, by choice of Q, the upper bound of mussv(R+U*Q*V,BlockStructure). QOPT is the optimizing value of Q, and BND is the resulting upper bound. The matrices R, U, and V are constant matrices of the appropriate dimensions. BlockStructure is a matrix specifying the perturbation block structure, as defined for mussv.
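A minimal usage sketch (the data and dimensions below are arbitrary, chosen only so that R + U*Q*V is square and compatible with the block structure):

```matlab
% R is 4-by-4, U is 4-by-2, V is 3-by-4, so Q is 2-by-3
% and R + U*Q*V is 4-by-4.
rng(0);                          % fix the random data for repeatability
R = randn(4) + 1i*randn(4);
U = randn(4,2);
V = randn(3,4);
BlockStructure = [2 2; 2 2];     % two 2-by-2 complex full blocks
[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure);
% BND should agree with the mussv upper bound evaluated at QOPT:
bnds = mussv(R + U*QOPT*V, BlockStructure);
% bnds(1) is the mussv upper bound, bnds(2) the lower bound
```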
[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure,OPT) uses the options specified by OPT in the calls to mussv; see mussv for more information and for the default value of OPT.
[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure,OPT,QINIT) initializes the iterative computation from Q = QINIT. Because of the nonconvexity of the overall problem, different starting points often yield different final answers. If QINIT is an N-D array, the iterative computation is performed multiple times: the ith optimization is initialized at Q = QINIT(:,:,i). The output arguments correspond to the best solution obtained in this brute-force approach.
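A sketch of the multiple-start form, using a 3-D QINIT array (the data and dimensions are arbitrary; 's' is assumed here to be the mussv option that suppresses the progress display — see mussv):

```matlab
rng(1);
R = randn(4) + 1i*randn(4);
U = randn(4,2);
V = randn(3,4);
BlockStructure = [2 2; 2 2];     % two 2-by-2 complex full blocks
% Ten starting points for Q; Q is 2-by-3, matching U and V above
QINIT = randn(2,3,10) + 1i*randn(2,3,10);
[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure,'s',QINIT);
% QOPT and BND correspond to the best of the ten local optimizations
```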
[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure,OPT,'random',N) initializes the iterative computation from N random instances of Q, each of dimension NCU-by-NRV, where NCU is the number of columns of U and NRV is the number of rows of V.

The approximation to solving the constant-matrix µ-synthesis problem is two-fold: only the upper bound for µ is minimized, and the minimization is not convex, so the optimum is generally not found. However, if U is full column rank, or V is full row rank, then the problem can be (and is) cast as a convex problem [Packard, Zhou, Pandey and Becker], and the global optimizer (for the upper bound for µ) is calculated.
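The same brute-force search with internally generated starting points can be sketched as follows (arbitrary data; 's' is assumed to be the mussv option that suppresses the progress display):

```matlab
rng(2);
R = randn(4) + 1i*randn(4);
U = randn(4,2);
V = randn(3,4);
BlockStructure = [2 2; 2 2];     % two 2-by-2 complex full blocks
% Let cmsclsyn draw 10 random starting values of Q (2-by-3) itself:
[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure,'s','random',10);
```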
The cmsclsyn algorithm is iterative, alternately holding Q fixed and computing the mussv upper bound, then holding the upper bound multipliers fixed and minimizing the bound implied by the choice of Q. If V is square and invertible, the optimization is reformulated (exactly) as a linear matrix inequality and solved directly, without resorting to the iteration.
Packard, A.K., K. Zhou, P. Pandey, and G. Becker, “A collection of robust control problems leading to LMI's,” Proceedings of the 30th IEEE Conference on Decision and Control, Brighton, UK, 1991, pp. 1245–1250.