OPTIMIZE is an improvement upon the functions FMINSEARCHBND and FMINSEARCHCON written by John D'Errico (also available on the File Exchange). It solves the optimization problem

minimize f(x)
subject to lb <= x <= ub
A * x <= b
Aeq * x = beq
c(x) <= 0
ceq(x) = 0
using a coordinate transformation for the bound constraints, and penalty functions for the other constraints. The penalty functions are pseudo-adaptive: normally, an exponential penalty is used, but a severe constraint violation could then cause numerical overflow. To prevent this, the penalty switches to a linear function whenever the exponential form would grow too large.
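As an illustration, such a pseudo-adaptive penalty might look like the sketch below. The cutoff value and the exact matching of the two branches are assumptions for illustration only; the actual implementation inside OPTIMIZE may differ in detail.

```matlab
% Illustrative sketch of a pseudo-adaptive penalty (NOT the exact code
% used by OPTIMIZE; cutoff value is an assumption)
function P = pseudo_adaptive_penalty(violation)
    % violation: nonnegative amount by which a constraint is violated
    cutoff = 50;                          % assumed switch point
    if violation <= cutoff
        P = exp(violation) - 1;           % exponential penalty near feasibility
    else
        % switch to a linear penalty to avoid numerical overflow,
        % matched in value and slope at the cutoff
        P = (exp(cutoff) - 1) + exp(cutoff)*(violation - cutoff);
    end
end
```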
The main differences between OPTIMIZE and FMINSEARCHCON are:
- (non)linear equality constraints can now be used
- strictness is more controllable
While FMINSEARCHCON does not permit ANY function evaluation outside the feasible domain, OPTIMIZE can either be allowed (the default) or forbidden ('strict' or 'superstrict' options) to evaluate the objective function there.
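For example, following the calling sequence listed further below, strictness could be selected like this (the objective and constraint here are placeholders, and the single-output nonlcon form is an assumption):

```matlab
% Placeholder objective and nonlinear inequality constraint c(x) <= 0;
% 'strict' forbids any function evaluation outside the feasible region
func    = @(x) sum(x.^2);
nonlcon = @(x) x(1) + x(2) - 1;
sol = OPTIMIZE(func, [0 0], [], [], [], [], [], [], nonlcon, 'strict');
```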
Its behavior is similar to that of FMINCON (except for the really fancy stuff), which makes it useful for those who do not have the Optimization Toolbox. It is also particularly useful when your function is hard or impossible to differentiate. In such cases, FMINCON is forced to compute the derivatives numerically, which usually takes more than 60% of the computation time on a sizeable problem. Since FMINSEARCH is the engine behind OPTIMIZE, no derivatives are required, which might make it more efficient than FMINCON.
Some basic examples are included in a published M-file. I recycled most of the examples (and help text) from FMINSEARCHCON -- see its documentation for more underlying theory.
You may now also omit the argument [x0]; when [lb] and [ub] are given, OPTIMIZE will then try to optimize the problem *globally*: a few randomly generated starting points inside [lb, ub] are used, so that the returned minimum is more likely to be the global one. Note, however, that the number of required function evaluations is considerable, so this method should only be used for "cheap" objective functions. See my other release (GODLIKE) for a more robust way to optimize problems globally.
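Assuming the empty matrix is how [x0] is omitted (an assumption; check the function help), such a global run might look like:

```matlab
% Assumed usage: pass [] for x0 and supply finite bounds, so OPTIMIZE
% generates its own random starting points inside [lb, ub]
rosen = @(x) (1 - x(1))^2 + 100*(x(2) - x(1)^2)^2;
lb = [-5 -5];  ub = [5 5];
[sol, fval] = OPTIMIZE(rosen, [], lb, ub);
```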
Also, an additional argument [algorithm] may be provided. When you set it to 'NelderMead', an internal version of the Nelder-Mead simplex method will be used, instead of the one implemented in FMINSEARCH. The internal one is usually less accurate, but slightly more robust and internally more efficient. This is the recommended method for problems of larger dimensionality.
sol = OPTIMIZE(func, x0)
sol = OPTIMIZE(func, x0, lb, ub)
sol = OPTIMIZE(func, x0, lb, ub, A, b)
sol = OPTIMIZE(func, x0, lb, ub, A, b, Aeq, beq)
sol = OPTIMIZE(func, x0, lb, ub, A, b, Aeq, beq, nonlcon)
sol = OPTIMIZE(func, x0, lb, ub, A, b, Aeq, beq, nonlcon, strictness)
sol = OPTIMIZE(func, x0, lb, ub, A, b, Aeq, beq, nonlcon, strictness, options)
sol = OPTIMIZE(func, x0, lb, ub, A, b, Aeq, beq, nonlcon, strictness, options, algorithm)
[sol, fval] = OPTIMIZE(func, ...)
[sol, fval, exitflag] = OPTIMIZE(func, ...)
[sol, fval, exitflag, output] = OPTIMIZE(func, ...)
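Putting the pieces together, an illustrative call might look like the following. This assumes that nonlcon follows the FMINCON convention of returning [c, ceq], and that unused constraint arguments may be passed as empty matrices; both are assumptions, so consult the function help for the exact conventions.

```matlab
% Minimize the Rosenbrock function inside the unit disk, selecting the
% internal Nelder-Mead implementation via the last argument
rosen   = @(x) (1 - x(1))^2 + 100*(x(2) - x(1)^2)^2;
nonlcon = @(x) deal(x(1)^2 + x(2)^2 - 1, []);   % c(x) <= 0, no ceq
x0 = [-1 1];  lb = [-2 -2];  ub = [2 2];
opts = optimset('Display', 'off');
[sol, fval, exitflag, output] = OPTIMIZE(rosen, x0, lb, ub, ...
    [], [], [], [], nonlcon, [], opts, 'NelderMead');
```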
If you find this work useful, please consider a donation: