Solve minimax constraint problem
Finds the minimum of a problem specified by
$$\underset{x}{\mathrm{min}}\,\underset{i}{\mathrm{max}}\,{F}_{i}(x)\text{ such that }\begin{cases}c(x)\le 0\\ ceq(x)=0\\ A\cdot x\le b\\ Aeq\cdot x=beq\\ lb\le x\le ub\end{cases}$$
where b and beq are vectors, A and Aeq are matrices, and c(x), ceq(x), and F(x) are functions that return vectors. F(x), c(x), and ceq(x) can be nonlinear functions.
x, lb, and ub can be passed as vectors or matrices; see Matrix Arguments.
You can also solve max-min problems with fminimax, using the identity
$$\underset{x}{\mathrm{max}}\,\underset{i}{\mathrm{min}}\,{F}_{i}(x)=-\underset{x}{\mathrm{min}}\,\underset{i}{\mathrm{max}}\left(-{F}_{i}(x)\right).$$
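For instance, a max-min problem can be posed for fminimax by negating the objectives. A minimal sketch with made-up objectives (the functions and starting point here are illustrative only):

```matlab
% Illustrative maximin: maximize the smaller of two concave objectives
% via max_x min_i F_i(x) = -min_x max_i(-F_i(x)).
negfun = @(x) [ (x(1)-1)^2 - 4;     % -F_1(x), where F_1(x) = 4 - (x(1)-1)^2
                (x(2)+2)^2 - 9 ];   % -F_2(x), where F_2(x) = 9 - (x(2)+2)^2
x0 = [0; 0];
[x,negF] = fminimax(negfun,x0);
maximinValue = -max(negF);          % value of max_x min_i F_i(x)
```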
You can solve problems of the form
$$\underset{x}{\mathrm{min}}\,\underset{i}{\mathrm{max}}\left|{F}_{i}(x)\right|$$
by using the AbsoluteMaxObjectiveCount option; see Notes.
x = fminimax(fun,x0)
x = fminimax(fun,x0,A,b)
x = fminimax(fun,x0,A,b,Aeq,beq)
x = fminimax(fun,x0,A,b,Aeq,beq,lb,ub)
x = fminimax(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon)
x = fminimax(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)
x = fminimax(problem)
[x,fval] = fminimax(...)
[x,fval,maxfval] = fminimax(...)
[x,fval,maxfval,exitflag] = fminimax(...)
[x,fval,maxfval,exitflag,output] = fminimax(...)
[x,fval,maxfval,exitflag,output,lambda] = fminimax(...)
fminimax minimizes the worst-case (largest) value of a set of multivariable functions, starting at an initial estimate. This is generally referred to as the minimax problem.
Note: Passing Extra Parameters explains how to pass extra parameters to the objective functions and nonlinear constraint functions, if necessary. 
x = fminimax(fun,x0) starts at x0 and finds a minimax solution x to the functions described in fun.
x = fminimax(fun,x0,A,b) solves the minimax problem subject to the linear inequalities A*x ≤ b.
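As a sketch of this syntax, the following hypothetical problem constrains the solution to x(1) + x(2) ≤ 1 (the objectives and starting point are made up for illustration):

```matlab
% Two made-up objectives, constrained by one linear inequality.
fun = @(x) [ x(1)^2; (x(2)-1)^2 ];  % illustrative objective vector
x0 = [0.5; 0.5];
A = [1 1];                          % A*x <= b encodes x(1) + x(2) <= 1
b = 1;
x = fminimax(fun,x0,A,b);
```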
x = fminimax(fun,x0,A,b,Aeq,beq) solves the minimax problem subject to the linear equalities Aeq*x = beq as well. Set A = [] and b = [] if no inequalities exist.
x = fminimax(fun,x0,A,b,Aeq,beq,lb,ub) defines a set of lower and upper bounds on the design variables in x, so that the solution is always in the range lb ≤ x ≤ ub.
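A minimal sketch of a bound-constrained call, with made-up objectives, passing [] for the unused linear-constraint arguments:

```matlab
% Hypothetical problem with bound constraints only.
fun = @(x) [ sin(x(1)); cos(x(2)) ];  % illustrative objectives
x0 = [1; 1];
lb = [0; 0];                          % keep both variables in [0, 2]
ub = [2; 2];
x = fminimax(fun,x0,[],[],[],[],lb,ub);
```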
Note: See Iterations Can Violate Constraints. 
x = fminimax(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon) subjects the minimax problem to the nonlinear inequalities c(x) or equality constraints ceq(x) defined in nonlcon. fminimax optimizes such that c(x) ≤ 0 and ceq(x) = 0. Set lb = [] and/or ub = [] if no bounds exist.
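As a sketch, a hypothetical nonlinear constraint function that keeps the solution inside the unit disk could look like this:

```matlab
% unitdisk.m -- hypothetical nonlinear constraint: x(1)^2 + x(2)^2 <= 1
function [c,ceq] = unitdisk(x)
c = x(1)^2 + x(2)^2 - 1;   % nonlinear inequality, c(x) <= 0
ceq = [];                  % no nonlinear equalities
end
```

With your own objective fun and starting point x0, you would pass it as x = fminimax(fun,x0,[],[],[],[],[],[],@unitdisk);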
x = fminimax(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options) minimizes with the optimization options specified in options. Use optimoptions to set these options.
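A minimal sketch of the options syntax, using made-up objectives and illustrative option values:

```matlab
% Show per-iteration output and tighten the function tolerance
% (values chosen purely for demonstration).
fun = @(x) [ x(1)^2; (x(2)-1)^2 ];  % illustrative objectives
x0 = [0.5; 0.5];
options = optimoptions('fminimax', ...
    'Display','iter', ...
    'FunctionTolerance',1e-8);
x = fminimax(fun,x0,[],[],[],[],[],[],[],options);
```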
x = fminimax(problem) finds the minimum for problem, where problem is a structure described in Input Arguments. Create the problem structure by exporting a problem from the Optimization app, as described in Exporting Your Work.
[x,fval] = fminimax(...) returns the value of the objective function fun at the solution x.
[x,fval,maxfval] = fminimax(...) returns the maximum of the objective functions in the input fun evaluated at the solution x.
[x,fval,maxfval,exitflag] = fminimax(...) returns a value exitflag that describes the exit condition of fminimax.
[x,fval,maxfval,exitflag,output] = fminimax(...) returns a structure output with information about the optimization.
[x,fval,maxfval,exitflag,output,lambda] = fminimax(...) returns a structure lambda whose fields contain the Lagrange multipliers at the solution x.
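A sketch requesting every output, again with made-up objectives; the lambda fields are named by constraint type:

```matlab
% Illustrative call capturing all six outputs.
fun = @(x) [ x(1)^2; (x(2)-1)^2 ];  % made-up objectives
x0 = [0.5; 0.5];
[x,fval,maxfval,exitflag,output,lambda] = fminimax(fun,x0);
disp(exitflag)            % 1 indicates convergence to a solution
disp(output.iterations)   % number of iterations taken
disp(lambda.lower)        % multipliers for lower bounds (empty if none)
```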
Note: If the specified input bounds for a problem are inconsistent, the output x is x0 and the output fval is [].
Function Arguments contains general descriptions of arguments passed into fminimax. This section provides function-specific details for fun, nonlcon, and problem:
fun — The function to be minimized. fun is a function that accepts a vector x and returns a vector F, the objective functions evaluated at x. Specify fun as a function handle for a file:

x = fminimax(@myfun,x0)

where myfun is a MATLAB function such as

function F = myfun(x)
F = ... % Compute function values at x

fun can also be a function handle for an anonymous function:

x = fminimax(@(x)sin(x.*x),x0);

If the user-defined values for x and F are matrices, they are converted to vectors using linear indexing.

To minimize the worst case absolute values of any of the elements of the vector F(x) (i.e., min{max abs{F(x)}}), partition those objectives into the first elements of F and use optimoptions to set the AbsoluteMaxObjectiveCount option to the number of these objectives.

If the gradient of the objective function can also be computed and the SpecifyObjectiveGradient option is true, as set by

options = optimoptions('fminimax','SpecifyObjectiveGradient',true)

then the function fun must return, in the second output argument, the gradient value G, a matrix, at x. By checking the value of nargout, the function can avoid computing G when fun is called with only one output argument (in the case where the optimization algorithm only needs the value of F but not G):

function [F,G] = myfun(x)
F = ...          % Compute the function values at x
if nargout > 1   % Two output arguments
   G = ...       % Gradients evaluated at x
end
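Filling in that skeleton, a hypothetical objective with analytic gradients might look like this. For F of length m and x of length n, G is an n-by-m matrix whose column j is the gradient of F(j):

```matlab
% Hypothetical objective with analytic gradients: G(i,j) = dF(j)/dx(i).
function [F,G] = myfunWithGrad(x)
F = [ x(1)^2 + x(2);
      x(1) - x(2)^2 ];
if nargout > 1            % gradients requested
    G = [ 2*x(1),  1;     % column 1: gradient of F(1)
          1,      -2*x(2) ];  % column 2: gradient of F(2)
end
end
```

Enable it with options = optimoptions('fminimax','SpecifyObjectiveGradient',true).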

nonlcon — The function that computes the nonlinear inequality constraints c(x) ≤ 0 and the nonlinear equality constraints ceq(x) = 0. nonlcon accepts a vector x and returns two vectors, c and ceq. Specify nonlcon as a function handle:

x = fminimax(@myfun,x0,A,b,Aeq,beq,lb,ub,@mycon)

where mycon is a MATLAB function such as

function [c,ceq] = mycon(x)
c = ...   % Compute nonlinear inequalities at x
ceq = ... % Compute nonlinear equalities at x

If the gradients of the constraints can also be computed and the SpecifyConstraintGradient option is true, as set by

options = optimoptions('fminimax','SpecifyConstraintGradient',true)

then the function nonlcon must also return, in the third and fourth output arguments, GC, the gradient of c(x), and GCeq, the gradient of ceq(x). nonlcon can use nargout to avoid computing GC and GCeq when it is called with only two output arguments.
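A hypothetical constraint function with gradients might look like this. GC is n-by-p for the p inequalities and GCeq is n-by-q for the q equalities:

```matlab
% Hypothetical constraint function returning gradients when requested.
function [c,ceq,GC,GCeq] = myconWithGrad(x)
c = x(1)^2 + x(2)^2 - 1;     % one nonlinear inequality (unit disk)
ceq = [];                    % no nonlinear equalities
if nargout > 2               % gradients requested
    GC = [ 2*x(1); 2*x(2) ]; % gradient of c as a column vector
    GCeq = [];
end
end
```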
 
problem —
objective — Objective function
x0 — Initial point for x
Aineq — Matrix for linear inequality constraints
bineq — Vector for linear inequality constraints
Aeq — Matrix for linear equality constraints
beq — Vector for linear equality constraints
lb — Vector of lower bounds
ub — Vector of upper bounds
nonlcon — Nonlinear constraint function
solver — 'fminimax'
options — Options created with optimoptions
Function Arguments contains general descriptions of arguments returned by fminimax. This section provides function-specific details for exitflag, lambda, maxfval, and output:
exitflag — Integer identifying the reason the algorithm terminated. The following lists the values of exitflag and the corresponding reasons:
1 — Function converged to a solution x.
4 — Magnitude of the search direction less than the specified tolerance and constraint violation less than options.ConstraintTolerance.
5 — Magnitude of directional derivative less than the specified tolerance and constraint violation less than options.ConstraintTolerance.
0 — Number of iterations exceeded options.MaxIterations or number of function evaluations exceeded options.MaxFunctionEvaluations.
-1 — Algorithm was terminated by the output function.
-2 — No feasible point was found.
lambda — Structure containing the Lagrange multipliers at the solution x (separated by constraint type). The fields of the structure are:
lower — Lower bounds lb
upper — Upper bounds ub
ineqlin — Linear inequalities
eqlin — Linear equalities
ineqnonlin — Nonlinear inequalities
eqnonlin — Nonlinear equalities
maxfval — Maximum of the function values evaluated at the solution x, that is, maxfval = max{fun(x)}.
output — Structure containing information about the optimization. The fields of the structure are:
iterations — Number of iterations taken
funcCount — Number of function evaluations
lssteplength — Size of line search step relative to search direction
stepsize — Final displacement in x
algorithm — Optimization algorithm used
firstorderopt — Measure of first-order optimality
constrviolation — Maximum of constraint functions
message — Exit message
Optimization options used by fminimax. Use optimoptions to set or change options. See Optimization Options Reference for detailed information. Some options are absent from the optimoptions display. These options are listed in italics. For details, see View Options.
AbsoluteMaxObjectiveCount — Number of elements of F_{i}(x) for which to minimize the maximum absolute value of F_{i}. See Notes. The default is 0.

Diagnostics — Display diagnostic information about the function to be minimized or solved. The choices are 'on' or the default 'off'.

DiffMaxChange — Maximum change in variables for finite-difference gradients (a positive scalar). The default is Inf.

DiffMinChange — Minimum change in variables for finite-difference gradients (a positive scalar). The default is 0.

Display — Level of display (see Iterative Display): 'off' or 'none' displays no output; 'iter' displays output at each iteration; 'final' (default) displays just the final output.

ConstraintTolerance — Termination tolerance on the constraint violation, a positive scalar. The default is 1e-6.

FiniteDifferenceStepSize — Scalar or vector step size factor for finite differences. When you set FiniteDifferenceStepSize to a vector v, the forward finite differences use a step of v.*max(abs(x),TypicalX) (with the sign of x applied), and central finite differences use v.*max(abs(x),TypicalX). A scalar FiniteDifferenceStepSize expands to a vector. The default is sqrt(eps) for forward finite differences and eps^(1/3) for central finite differences.

FiniteDifferenceType — Finite differences, used to estimate gradients, are either 'forward' (the default) or 'central' (centered). 'central' takes twice as many function evaluations, but should be more accurate. The algorithm is careful to obey bounds when estimating both types of finite differences. So, for example, it could take a backward, rather than a forward, difference to avoid evaluating at a point outside bounds.

FunctionTolerance — Termination tolerance on the function value, a positive scalar. The default is 1e-6.

FunValCheck — Check whether objective function and constraints values are valid. 'on' displays an error when the objective function or constraints return a value that is complex, Inf, or NaN. The default, 'off', displays no error.

MaxFunctionEvaluations — Maximum number of function evaluations allowed, a positive integer. The default value is 100*numberOfVariables.

MaxIterations — Maximum number of iterations allowed, a positive integer. The default value is 400.

MaxSQPIter — Maximum number of SQP iterations allowed, a positive integer. The default is 10*max(numberOfVariables, numberOfInequalities + numberOfBounds).

MeritFunction — Use the goal attainment/minimax merit function if set to 'multiobj', the default. Use the fmincon merit function if set to 'singleobj'.

OptimalityTolerance — Termination tolerance on the first-order optimality, a positive scalar. The default is 1e-6.

OutputFcn — Specify one or more user-defined functions that an optimization function calls at each iteration, either as a function handle or as a cell array of function handles. The default is none ([]).

PlotFcn — Plots various measures of progress while the algorithm executes; select from predefined plots or write your own. Pass a function handle or a cell array of function handles. The default is none ([]). For information on writing a custom plot function, see Plot Functions.

RelLineSrchBnd — Relative bound (a real nonnegative scalar value) on the line search step length, limiting the total displacement in x relative to the magnitudes of x and TypicalX. This option provides control over the step size for cases in which the solver takes steps that are considered too large. The default is none ([]).

RelLineSrchBndDuration — Number of iterations for which the bound specified in RelLineSrchBnd should be active. The default is 1.

SpecifyConstraintGradient — Gradient for the user-defined constraints. When set to true, fminimax expects the constraint function to have four outputs, as described in nonlcon in Input Arguments. The default is false.

SpecifyObjectiveGradient — Gradient for the user-defined objective function. See the preceding description of fun to see how to define the gradient in fun. Set to true to have fminimax use a user-defined gradient of the objective function. The default, false, causes fminimax to estimate gradients using finite differences.

StepTolerance — Termination tolerance on x, a positive scalar. The default is 1e-6.

TolConSQP — Termination tolerance on inner iteration SQP constraint violation, a positive scalar. The default is 1e-6.

TypicalX — Typical x values. The number of elements in TypicalX is equal to the number of elements in x0, the starting point. The default value is ones(numberofvariables,1).

UseParallel — When true, estimate gradients in parallel. Disable by setting to the default, false.
Find values of x that minimize the maximum value of
[f_{1}(x), f_{2}(x), f_{3}(x), f_{4}(x), f_{5}(x)]
where
$$\begin{array}{l}{f}_{1}(x)=2{x}_{1}^{2}+{x}_{2}^{2}-48{x}_{1}-40{x}_{2}+304,\\ {f}_{2}(x)=-{x}_{1}^{2}-3{x}_{2}^{2},\\ {f}_{3}(x)={x}_{1}+3{x}_{2}-18,\\ {f}_{4}(x)=-{x}_{1}-{x}_{2},\\ {f}_{5}(x)={x}_{1}+{x}_{2}-8.\end{array}$$
First, write a file that computes the five functions at x.
function f = myfun(x)
f(1) = 2*x(1)^2 + x(2)^2 - 48*x(1) - 40*x(2) + 304;  % Objectives
f(2) = -x(1)^2 - 3*x(2)^2;
f(3) = x(1) + 3*x(2) - 18;
f(4) = -x(1) - x(2);
f(5) = x(1) + x(2) - 8;
Next, invoke an optimization routine.
x0 = [0.1; 0.1];    % Make a starting guess at solution
[x,fval] = fminimax(@myfun,x0);
After seven iterations, the solution is
x,fval

x =
    4.0000
    4.0000

fval =
    0.0000  -64.0000   -2.0000   -8.0000   -0.0000
You can solve problems of the form
$$\underset{x}{\mathrm{min}}\underset{i}{\mathrm{max}}{G}_{i}(x),$$
where
$${G}_{i}(x)=\begin{cases}\left|{F}_{i}(x)\right| & 1\le i\le m\\ {F}_{i}(x) & i>m.\end{cases}$$
Here m is the value of the AbsoluteMaxObjectiveCount option.
The advantage of this formulation is that you can minimize the absolute value of some components of F, even though the absolute value function is not smooth.
In order to use this option, reorder the elements of F, if necessary, so the first elements are those for which you want the minimum absolute value.
For example, consider the problem in Examples. Modify the problem to find the minimum of the maximum
absolute values of all f_{i}(x).
Solve this problem by invoking fminimax with the commands
x0 = [0.1; 0.1];    % Make a starting guess at the solution
options = optimoptions('fminimax','AbsoluteMaxObjectiveCount',5); % Minimize abs. values
[x,fval] = fminimax(@myfun,x0,...
    [],[],[],[],[],[],[],options);
After seven iterations, the solution is
x =
    4.9256
    2.0796

fval =
   37.2356  -37.2356   -6.8357   -7.0052   -0.9948
The function to be minimized must be continuous. fminimax might give only local solutions.
[1] Brayton, R. K., S. W. Director, G. D. Hachtel, and L. Vidigal, "A New Algorithm for Statistical Circuit Design Based on Quasi-Newton Methods and Function Splitting," IEEE Trans. Circuits and Systems, Vol. CAS-26, pp. 784-794, Sept. 1979.
[2] Grace, A. C. W., "Computer-Aided Control System Design Using Optimization Techniques," Ph.D. Thesis, University of Wales, Bangor, Gwynedd, UK, 1989.
[3] Han, S. P., "A Globally Convergent Method For Nonlinear Programming," Journal of Optimization Theory and Applications, Vol. 22, p. 297, 1977.
[4] Madsen, K., and H. Schjaer-Jacobsen, "Algorithms for Worst Case Tolerance Optimization," IEEE Trans. of Circuits and Systems, Vol. CAS-26, Sept. 1979.
[5] Powell, M. J. D., "A Fast Algorithm for Nonlinearly Constrained Optimization Calculations," Numerical Analysis, ed. G. A. Watson, Lecture Notes in Mathematics, Vol. 630, Springer-Verlag, 1978.