Solve minimax constraint problem
Finds the minimum of a problem specified by

min_x max_i Fi(x)

subject to

c(x) ≤ 0
ceq(x) = 0
A*x ≤ b
Aeq*x = beq
lb ≤ x ≤ ub

where b and beq are vectors, A and Aeq are matrices, and c(x), ceq(x), and F(x) are functions that return vectors. F(x), c(x), and ceq(x) can be nonlinear functions.
x, lb, and ub can be passed as vectors or matrices; see Matrix Arguments.
You can also solve max-min problems with fminimax, using the identity

max_x min_i Fi(x) = -min_x max_i (-Fi(x))
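For example, the following sketch (using two hypothetical objectives) maximizes the smaller of two functions by negating them and negating the result:

% Max-min via fminimax: max min{F1,F2} = -min max{-F1,-F2}
negF = @(x) [-(-(x(1)-1)^2 + 3);    % -F1(x), hypothetical objective
             -(-(x(2)+2)^2 + 5)];   % -F2(x), hypothetical objective
x0 = [0; 0];
[x,negfval] = fminimax(negF,x0);
maxminValue = -max(negfval)         % recover the max-min value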
You can solve problems of the form

min_x max_i |Fi(x)|

by using the AbsoluteMaxObjectiveCount option; see Notes.
x = fminimax(fun,x0)
x = fminimax(fun,x0,A,b)
x = fminimax(fun,x0,A,b,Aeq,beq)
x = fminimax(fun,x0,A,b,Aeq,beq,lb,ub)
x = fminimax(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon)
x = fminimax(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)
x = fminimax(problem)
[x,fval] = fminimax(...)
[x,fval,maxfval] = fminimax(...)
[x,fval,maxfval,exitflag] = fminimax(...)
[x,fval,maxfval,exitflag,output] = fminimax(...)
[x,fval,maxfval,exitflag,output,lambda] = fminimax(...)
fminimax minimizes the worst-case (largest) value of a set of multivariable functions, starting at an initial estimate. This is generally referred to as the minimax problem.
Passing Extra Parameters explains how to pass extra parameters to the objective functions and nonlinear constraint functions, if necessary.
x = fminimax(fun,x0) starts at x0 and finds a minimax solution x to the functions described in fun.
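For example, a minimal call with an anonymous two-element objective (the functions here are hypothetical):

fun = @(x) [x(1)^2 + x(2)^2; (x(1)-2)^2 + (x(2)-1)^2]; % two competing objectives
x0 = [1; 1];
x = fminimax(fun,x0)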
x = fminimax(fun,x0,A,b) solves the minimax problem subject to the linear inequalities A*x ≤ b.
x = fminimax(fun,x0,A,b,Aeq,beq) solves the minimax problem subject to the linear equalities Aeq*x = beq as well. Set A = [] and b = [] if no inequalities exist.
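For example, a sketch with one linear equality and no linear inequalities (the objectives are hypothetical):

fun = @(x) [x(1)^2; x(2)^2];       % hypothetical objectives
x0  = [0.5; 0.5];
Aeq = [1 1]; beq = 1;              % require x(1) + x(2) = 1
x = fminimax(fun,x0,[],[],Aeq,beq) % A = [] and b = []: no inequalities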
x = fminimax(fun,x0,A,b,Aeq,beq,lb,ub) defines a set of lower and upper bounds on the design variables in x, so that the solution is always in the range lb ≤ x ≤ ub.
x = fminimax(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon) subjects the minimax problem to the nonlinear inequalities c(x) or equality constraints ceq(x) defined in nonlcon. fminimax optimizes such that c(x) ≤ 0 and ceq(x) = 0. Set lb = [] and/or ub = [] if no bounds exist.
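For example, a hypothetical constraint function unitdisk that keeps the solution inside the unit disk:

function [c,ceq] = unitdisk(x)
c   = x(1)^2 + x(2)^2 - 1;   % nonlinear inequality c(x) <= 0
ceq = [];                    % no nonlinear equalities

Pass it as x = fminimax(fun,x0,[],[],[],[],[],[],@unitdisk).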
x = fminimax(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options) minimizes with the optimization options specified in options. Use optimoptions to set these options.
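For example, a sketch that turns on iterative display and tightens the function tolerance (the objectives and option values are illustrative, not recommendations):

fun = @(x) [x(1)^2; (x(1)-2)^2 + x(2)^2];   % hypothetical objectives
x0  = [1; 1];
options = optimoptions('fminimax','Display','iter','FunctionTolerance',1e-8);
x = fminimax(fun,x0,[],[],[],[],[],[],[],options);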
x = fminimax(problem) finds the minimum for problem, where problem is a structure described in Input Arguments. Create the problem structure by exporting a problem from the Optimization app, as described in Exporting Your Work.
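You can also assemble the structure manually; a minimal sketch, assuming only the required fields (objective, x0, solver, options) listed in Input Arguments:

problem.objective = @(x) [x^2; (x-2)^2];     % hypothetical objectives
problem.x0        = 1;
problem.solver    = 'fminimax';
problem.options   = optimoptions('fminimax');
x = fminimax(problem);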
[x,fval] = fminimax(...) returns the value of the objective function fun at the solution x.
[x,fval,maxfval] = fminimax(...) returns the maximum of the objective functions in the input fun evaluated at the solution x.
[x,fval,maxfval,exitflag] = fminimax(...) returns a value exitflag that describes the exit condition of fminimax.
[x,fval,maxfval,exitflag,output] = fminimax(...) returns a structure output with information about the optimization.
[x,fval,maxfval,exitflag,output,lambda] = fminimax(...) returns a structure lambda whose fields contain the Lagrange multipliers at the solution x.
If the specified input bounds for a problem are inconsistent, the output x is x0 and the output fval is [].
Function Arguments contains general descriptions of arguments passed into fminimax. This section provides function-specific details for fun, nonlcon, and problem:
fun | The function to be minimized. fun accepts a vector x and returns a vector F, the objective functions evaluated at x. Specify fun as a function handle for a file:

x = fminimax(@myfun,x0)

where myfun is a MATLAB function such as

function F = myfun(x)
F = ...   % Compute function values at x

fun can also be a function handle for an anonymous function:

x = fminimax(@(x)sin(x.*x),x0);

If the user-defined values for x and F are matrices, they are converted to vectors using linear indexing.

To minimize the worst-case absolute values of any of the elements of the vector F(x) (i.e., min{max abs{F(x)}}), partition those objectives into the first elements of F and use optimoptions to set the AbsoluteMaxObjectiveCount option to the number of these objectives.

If the gradient of the objective function can also be computed and the SpecifyObjectiveGradient option is true, as set by

options = optimoptions('fminimax','SpecifyObjectiveGradient',true)

then the function fun must return, in its second output argument, the gradient value G (a matrix) at x. By checking the value of nargout, the function can avoid computing G when fminimax calls fun with only one output argument:

function [F,G] = myfun(x)
F = ...            % Compute the function values at x
if nargout > 1     % Two output arguments
    G = ...        % Gradients evaluated at x
end

Note: Setting SpecifyObjectiveGradient to true is effective only when the problem has no nonlinear constraint, or when the nonlinear constraint has SpecifyConstraintGradient set to true as well. |
nonlcon | The function that computes the nonlinear inequality constraints c(x) ≤ 0 and the nonlinear equality constraints ceq(x) = 0. nonlcon accepts a vector x and returns the two vectors c and ceq. Specify nonlcon as a function handle:

x = fminimax(@myfun,x0,A,b,Aeq,beq,lb,ub,@mycon)

where mycon is a MATLAB function such as

function [c,ceq] = mycon(x)
c = ...     % Compute nonlinear inequalities at x
ceq = ...   % Compute nonlinear equalities at x

If the gradients of the constraints can also be computed and the SpecifyConstraintGradient option is true, as set by

options = optimoptions('fminimax','SpecifyConstraintGradient',true)

then the function nonlcon must also return, in its third and fourth output arguments, GC, the gradient of c(x), and GCeq, the gradient of ceq(x).

Note: Setting SpecifyConstraintGradient to true is effective only when SpecifyObjectiveGradient is set to true as well.

Note: Because Optimization Toolbox™ functions accept only inputs of type double, user-supplied objective and nonlinear constraint functions must return outputs of type double. |
problem | Structure with the following fields:

objective | Objective function |
x0 | Initial point for x |
Aineq | Matrix for linear inequality constraints |
bineq | Vector for linear inequality constraints |
Aeq | Matrix for linear equality constraints |
beq | Vector for linear equality constraints |
lb | Vector of lower bounds |
ub | Vector of upper bounds |
nonlcon | Nonlinear constraint function |
solver | 'fminimax' |
options | Options created with optimoptions |
Function Arguments contains general descriptions of arguments returned by fminimax. This section provides function-specific details for exitflag, lambda, maxfval, and output:
exitflag | Integer identifying the reason the algorithm terminated. The following lists the values of exitflag and the corresponding reasons:

1 | Function converged to a solution x. |
4 | Magnitude of the search direction less than the specified tolerance and constraint violation less than options.ConstraintTolerance. |
5 | Magnitude of directional derivative less than the specified tolerance and constraint violation less than options.ConstraintTolerance. |
0 | Number of iterations exceeded options.MaxIterations or number of function evaluations exceeded options.MaxFunctionEvaluations. |
-1 | Algorithm was terminated by the output function. |
-2 | No feasible point was found. |
lambda | Structure containing the Lagrange multipliers at the solution x (separated by constraint type). The fields of the structure are:

lower | Lower bounds lb |
upper | Upper bounds ub |
ineqlin | Linear inequalities |
eqlin | Linear equalities |
ineqnonlin | Nonlinear inequalities |
eqnonlin | Nonlinear equalities |
maxfval | Maximum of the function values evaluated at the solution x, that is, maxfval = max{fun(x)}. |
output | Structure containing information about the optimization. The fields of the structure are:

iterations | Number of iterations taken |
funcCount | Number of function evaluations |
lssteplength | Size of line search step relative to search direction |
stepsize | Final displacement in x |
algorithm | Optimization algorithm used |
firstorderopt | Measure of first-order optimality |
constrviolation | Maximum of constraint functions |
message | Exit message |
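For example, the following call (with a hypothetical scalar problem) retrieves all outputs and inspects two of them:

[x,fval,maxfval,exitflag,output,lambda] = fminimax(@(x)[x^2; (x-2)^2],1);
output.iterations   % number of iterations taken
exitflag            % 1 indicates convergence to a solution x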
Optimization options used by fminimax. Use optimoptions to set or change options. See Optimization Options Reference for detailed information.

Some options are absent from the optimoptions display. These options appear in italics in the following table. For details, see View Options.
AbsoluteMaxObjectiveCount | Number of elements of Fi(x) for which to minimize the maximum absolute value of Fi. See Notes. The default is 0. For optimset, the name is MinAbsMax. |
Diagnostics | Display diagnostic information about the function to be minimized or solved. The choices are 'on' or 'off' (default). |
DiffMaxChange | Maximum change in variables for finite-difference gradients (a positive scalar). The default is Inf. |
DiffMinChange | Minimum change in variables for finite-difference gradients (a positive scalar). The default is 0. |
Display | Level of display (see Iterative Display): 'off' or 'none' displays no output; 'iter' displays output at each iteration; 'iter-detailed' also gives the technical exit message; 'notify' displays output only if the function does not converge; 'notify-detailed' also gives the technical exit message; 'final' (default) displays just the final output; 'final-detailed' also gives the technical exit message. |
ConstraintTolerance | Termination tolerance on the constraint violation, a positive scalar. The default is 1e-6. For optimset, the name is TolCon. |
FiniteDifferenceStepSize | Scalar or vector step size factor for finite differences. When you set FiniteDifferenceStepSize to a vector v, the forward finite differences delta are

delta = v.*sign′(x).*max(abs(x),TypicalX);

where sign′(x) = sign(x) except sign′(0) = 1. Central finite differences are

delta = v.*max(abs(x),TypicalX);

A scalar FiniteDifferenceStepSize expands to a vector. The default is sqrt(eps) for forward finite differences, and eps^(1/3) for central finite differences.

For optimset, the name is FinDiffRelStep. |
FiniteDifferenceType | Finite differences, used to estimate gradients, are either 'forward' (default) or 'central' (centered). 'central' takes twice as many function evaluations, but should be more accurate.

The algorithm is careful to obey bounds when estimating both types of finite differences. So, for example, it could take a backward, rather than a forward, difference to avoid evaluating at a point outside bounds.

For optimset, the name is FinDiffType. |
FunctionTolerance | Termination tolerance on the function value, a positive scalar. The default is 1e-6. For optimset, the name is TolFun. |
FunValCheck | Check whether objective function and constraints values are valid. 'on' displays an error when the objective function or constraints return a value that is complex, Inf, or NaN. The default 'off' displays no error. |
MaxFunctionEvaluations | Maximum number of function evaluations allowed, a positive integer. The default value is 100*numberOfVariables. For optimset, the name is MaxFunEvals. |
MaxIterations | Maximum number of iterations allowed, a positive integer. The default value is 400. For optimset, the name is MaxIter. |
MaxSQPIter | Maximum number of SQP iterations allowed, a positive integer. The default is 10*max(numberOfVariables, numberOfInequalities + numberOfBounds). |
MeritFunction | Use the goal attainment/minimax merit function if set to 'multiobj' (default). Use the fmincon merit function if set to 'singleobj'. |
OptimalityTolerance | Termination tolerance on the first-order optimality, a positive scalar. The default is 1e-6. For optimset, the name is TolFun. |
OutputFcn | Specify one or more user-defined functions that an optimization function calls at each iteration. Pass a function handle or a cell array of function handles. The default is none ([]). |
PlotFcn | Plots various measures of progress while the algorithm executes; select from predefined plots or write your own. Pass a name, a function handle, or a cell array of names or function handles. For custom plot functions, pass function handles. The default is none ([]):

'optimplotx' plots the current point.
'optimplotfunccount' plots the function count.
'optimplotfval' plots the function value.
'optimplotconstrviolation' plots the maximum constraint violation.
'optimplotstepsize' plots the step size.
'optimplotfirstorderopt' plots the first-order optimality measure.

For information on writing a custom plot function, see Plot Functions. For optimset, the name is PlotFcns. |
RelLineSrchBnd | Relative bound (a real nonnegative scalar value) on the line search step length such that the total displacement in x satisfies |Δx(i)| ≤ relLineSrchBnd·max(|x(i)|,|typicalx(i)|). This option provides control over the magnitude of the displacements in x when the solver takes steps that are considered too large. The default is no bounds ([]). |
RelLineSrchBndDuration | Number of iterations for which the bound specified in RelLineSrchBnd should be active. The default is 1. |
SpecifyConstraintGradient | Gradient for the user-defined constraints. When set to true, fminimax expects the constraint function to have four outputs, as described in nonlcon. When set to false (default), fminimax estimates gradients of the nonlinear constraints by finite differences. For optimset, the name is GradConstr and the values are 'on' or 'off'. |
SpecifyObjectiveGradient | Gradient for the user-defined objective function. See the preceding description of fun for how to define the gradient. Set to true to have fminimax use a user-defined gradient of the objective function. The default, false, causes fminimax to estimate gradients using finite differences. For optimset, the name is GradObj and the values are 'on' or 'off'. |
StepTolerance | Termination tolerance on x, a positive scalar. The default is 1e-6. For optimset, the name is TolX. |
TolConSQP | Termination tolerance on inner iteration SQP constraint violation, a positive scalar. The default is 1e-6. |
TypicalX | Typical x values. The number of elements in TypicalX is equal to the number of elements in x0, the starting point. The default value is ones(numberOfVariables,1). fminimax uses TypicalX for scaling finite differences for gradient estimation. |
UseParallel | When true, fminimax estimates gradients in parallel. Disable by setting to the default, false. |
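For example, a sketch combining several of these options in one optimoptions call (the values are illustrative):

options = optimoptions('fminimax', ...
    'AbsoluteMaxObjectiveCount',2, ...     % minimize max abs value of first 2 objectives
    'FiniteDifferenceType','central', ...  % more accurate, twice the evaluations
    'MaxIterations',200, ...
    'Display','final');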
Find values of x that minimize the maximum value of

[f1(x), f2(x), f3(x), f4(x), f5(x)]

where

f1(x) = 2*x1^2 + x2^2 - 48*x1 - 40*x2 + 304
f2(x) = -x1^2 - 3*x2^2
f3(x) = x1 + 3*x2 - 18
f4(x) = -x1 - x2
f5(x) = x1 + x2 - 8
First, write a file that computes the five functions at x.
function f = myfun(x)
f(1) = 2*x(1)^2 + x(2)^2 - 48*x(1) - 40*x(2) + 304;  % Objectives
f(2) = -x(1)^2 - 3*x(2)^2;
f(3) = x(1) + 3*x(2) - 18;
f(4) = -x(1) - x(2);
f(5) = x(1) + x(2) - 8;
Next, invoke an optimization routine.
x0 = [0.1; 0.1];               % Make a starting guess at solution
[x,fval] = fminimax(@myfun,x0);
After seven iterations, the solution is
x,fval

x =
    4.0000
    4.0000

fval =
    0.0000  -64.0000   -2.0000   -8.0000   -0.0000
You can solve problems of the form

min_x max_i Gi(x)

where

Gi(x) = |Fi(x)| for 1 ≤ i ≤ m, and Gi(x) = Fi(x) for i > m.

Here m is the value of the AbsoluteMaxObjectiveCount option. The advantage of this formulation is that you can minimize the absolute value of some components of F, even though the absolute value function is not smooth.
To use this option, reorder the elements of F, if necessary, so that the first elements are those for which you want the minimum absolute value.
For example, consider the problem in Examples. Modify the problem to find the minimum of the maximum absolute values of all fi(x). Solve this problem by invoking fminimax with the commands
x0 = [0.1; 0.1];    % Make a starting guess at the solution
options = optimoptions('fminimax','AbsoluteMaxObjectiveCount',5); % Minimize abs. values
[x,fval] = fminimax(@myfun,x0,...
    [],[],[],[],[],[],[],options);
After seven iterations, the solution is
x =
    4.9256
    2.0796

fval =
   37.2356  -37.2356   -6.8357   -7.0052   -0.9948
The function to be minimized must be continuous. fminimax might give only local solutions.
fminimax internally reformulates the minimax problem into an equivalent nonlinear programming problem by appending additional (reformulation) constraints of the form Fi(x) ≤ γ to the constraints given above, and then minimizing γ over x.
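As a conceptual sketch (not fminimax's internal code), the same epigraph reformulation can be written for fmincon by appending γ as an extra variable; the objectives below are hypothetical:

% z = [x; gamma]: minimize gamma subject to Fi(x) - gamma <= 0
F = @(x) [x(1)^2 + x(2)^2; (x(1)-2)^2];        % hypothetical objectives
confun = @(z) deal(F(z(1:2)) - z(3), []);      % c(z) <= 0, no equalities
z0 = [1; 1; 10];
z = fmincon(@(z) z(3), z0, [],[],[],[],[],[], confun);
x = z(1:2);                                     % minimax point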
fminimax uses a sequential quadratic programming (SQP) method [1] to solve this problem. Modifications are made to the line search and Hessian. In the line search an exact merit function (see [2] and [4]) is used together with the merit function proposed by [3] and [5]. The line search is terminated when either merit function shows improvement. The function uses a modified Hessian that takes advantage of the special structure of this problem. Using optimoptions to set the MeritFunction option to 'singleobj' uses the merit function and Hessian used in fmincon.
See also SQP Implementation for more details on the algorithm used and the types of procedures printed under the Procedures heading when you set the Display option to 'iter'.
[1] Brayton, R. K., S. W. Director, G. D. Hachtel, and L. Vidigal, “A New Algorithm for Statistical Circuit Design Based on Quasi-Newton Methods and Function Splitting,” IEEE Trans. Circuits and Systems, Vol. CAS-26, pp. 784-794, Sept. 1979.
[2] Grace, A. C. W., “Computer-Aided Control System Design Using Optimization Techniques,” Ph.D. Thesis, University of Wales, Bangor, Gwynedd, UK, 1989.
[3] Han, S. P., “A Globally Convergent Method For Nonlinear Programming,” Journal of Optimization Theory and Applications, Vol. 22, p. 297, 1977.
[4] Madsen, K. and H. Schjaer-Jacobsen, “Algorithms for Worst Case Tolerance Optimization,” IEEE Trans. of Circuits and Systems, Vol. CAS-26, Sept. 1979.
[5] Powell, M. J. D., “A Fast Algorithm for Nonlinearly Constrained Optimization Calculations,” Numerical Analysis, ed. G. A. Watson, Lecture Notes in Mathematics, Vol. 630, Springer Verlag, 1978.