Optimization techniques are used to find a set of design parameters, *x* = {*x*_{1},*x*_{2},...,*x*_{n}}, that can in some way be defined as optimal. In a simple case this might be the minimization or maximization of some system characteristic that is dependent on *x*.

A General Problem (GP) description is stated as

$$\underset{x}{\mathrm{min}}\,f(x), \tag{2-1}$$

subject to

$$\begin{array}{cc}{G}_{i}(x)=0& i=1,\mathrm{...},{m}_{e},\\ {G}_{i}(x)\le 0& i={m}_{e}+1,\mathrm{...},m,\end{array}$$

where *x* is the vector of *n* design parameters, *f*(*x*) is the objective function, which returns a scalar value, and the vector function *G*(*x*) returns a vector of length *m* containing the values of the equality and inequality constraints evaluated at *x*.
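As a sketch of this formulation, the following hypothetical example (using Python and NumPy purely for illustration, not the toolbox API) defines an objective *f* and a constraint vector *G* for a small problem with *n* = 2 design parameters, *m*_{e} = 1 equality constraint, and *m* = 2 constraints in total:

```python
import numpy as np

def f(x):
    # Objective function: returns a scalar value.
    return (x[0] - 1.0)**2 + (x[1] - 2.0)**2

def G(x):
    # Constraint function: returns a vector of length m = 2.
    # G[0] = 0 is the equality constraint (i = 1, ..., m_e);
    # G[1] <= 0 is the inequality constraint (i = m_e + 1, ..., m).
    return np.array([
        x[0] + x[1] - 2.0,   # equality:   x1 + x2 = 2
        x[0]**2 - 4.0,       # inequality: x1^2 <= 4
    ])

x = np.array([0.5, 1.5])
print(f(x))   # scalar objective value at x: 0.5
print(G(x))   # constraint values at x: [0.0, -3.75]
```

A solver for the GP would search over *x* to reduce `f(x)` while driving `G(x)` to satisfy the equality and inequality conditions.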

An efficient and accurate solution to this problem depends not only on the size of the problem in terms of the number of constraints and design variables but also on characteristics of the objective function and constraints. When both the objective function and the constraints are linear functions of the design variables, the problem is known as a Linear Programming (LP) problem. Quadratic Programming (QP) concerns the minimization or maximization of a quadratic objective function that is linearly constrained. For both the LP and QP problems, reliable solution procedures are readily available. More difficult to solve is the Nonlinear Programming (NP) problem, in which the objective function and constraints can be nonlinear functions of the design variables. A solution of the NP problem generally requires an iterative procedure to establish a direction of search at each major iteration. This is usually achieved by the solution of an LP, a QP, or an unconstrained subproblem.
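To make the NP case concrete, the sketch below solves a small nonlinear problem with an SQP-type method, which establishes its search direction at each major iteration by solving a QP subproblem. SciPy's SLSQP solver is used here purely as an illustration of the technique, not as part of the toolbox described above; note that SciPy's `'ineq'` convention means the constraint function must be nonnegative, the opposite sign of the *G*(*x*) ≤ 0 convention used earlier.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Nonlinear (quadratic) objective; constraints below are what make
    # this a small constrained NP example.
    return x[0]**2 + x[1]**2

cons = [
    {'type': 'eq',   'fun': lambda x: x[0] + x[1] - 1.0},  # x1 + x2 = 1
    {'type': 'ineq', 'fun': lambda x: x[0] - 0.2},         # x1 >= 0.2
]

# SLSQP solves a QP subproblem at each major iteration to choose a
# search direction, in the spirit of the iterative procedure above.
res = minimize(f, x0=np.array([2.0, 2.0]), method='SLSQP', constraints=cons)
print(res.x)   # approximately [0.5, 0.5]
```

At the solution the equality constraint is active and the inequality is inactive, so the minimizer of *x*_{1}² + *x*_{2}² on the line *x*_{1} + *x*_{2} = 1 is the symmetric point (0.5, 0.5).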

All optimization takes place in real numbers. However, unconstrained least squares problems and equation-solving can be formulated and solved using complex analytic functions. See Complex Numbers in Optimization Toolbox Solvers.
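One way such a complex problem can be reduced to real arithmetic is to split the complex residual into its real and imaginary parts and optimize over the real and imaginary components of the unknown. The sketch below (an assumption-laden illustration using SciPy's `least_squares`, not the toolbox interface) solves the analytic equation *z*² + 1 = 0 this way:

```python
import numpy as np
from scipy.optimize import least_squares

def residual(v):
    # Pack two real variables into one complex unknown z.
    z = v[0] + 1j * v[1]
    r = z**2 + 1.0
    # Split the complex residual into real components, so the
    # optimization itself takes place entirely in real numbers.
    return [r.real, r.imag]

sol = least_squares(residual, x0=[0.5, 0.5])
z = sol.x[0] + 1j * sol.x[1]
print(z)   # close to a root of z^2 + 1 = 0 (either 1j or -1j)
```

Which of the two roots ±*i* is found depends on the starting point; the residual at the computed *z* is near zero either way.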
