Minimize constrained functions with FMINSEARCH or FMINLBFGS, globally or locally
28 Downloads
Updated 14 Apr 2017
( NOTE: adding the main folder and its subfolders to the MATLAB search path will enable you to view the extended documentation in the MATLAB help browser. )
MINIMIZE is an improvement upon the functions FMINSEARCHBND and FMINSEARCHCON written by John D'Errico (also available on the File Exchange). It solves the optimization problem
min f(x)
s.t.
lb <= x <= ub
A * x <= b
Aeq * x = beq
c(x) <= 0
ceq(x) = 0
using a coordinate transformation for the bound constraints, and penalty functions for the other constraints. The penalty functions used are pseudo-adaptive, in that they are designed to penalize violations heavily while preventing overflow from ever happening.
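As an aside, the general transformation-plus-penalty idea can be sketched in a few lines. This is only a minimal illustration with hypothetical variable names, not the exact code used by MINIMIZE; the sine transformation is the classic one from FMINSEARCHBND, and the penalty weight is an arbitrary example value:

```matlab
% Sine transformation for bound constraints: the solver works on the
% unconstrained variable y, while the objective only ever sees x,
% which satisfies lb <= x <= ub by construction.
lb  = [0 -1];  ub = [2 1];                      % example bounds
y2x = @(y) lb + (ub - lb) .* (sin(y) + 1)/2;

% Simple quadratic penalty for a nonlinear inequality c(x) <= 0
penalty = @(c) sum( max(c, 0).^2 );

objective = @(x) (x(1) - 1)^2 + x(2)^2;         % example objective
c         = @(x) x(1) + x(2) - 1.5;             % example constraint c(x) <= 0

mu  = 1e6;                                      % example penalty weight
fun = @(y) objective(y2x(y)) + mu*penalty(c(y2x(y)));
sol = y2x( fminsearch(fun, [0 0]) );            % transform back to x-space
```

The pseudo-adaptive penalties in MINIMIZE refine this basic scheme so that the penalty term stays large without overflowing.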
The main differences between MINIMIZE and FMINSEARCHCON are
 rudimentary support for global optimization problems
 it handles (non)linear equality constraints
 strictness is more controllable
 support for FMINLBFGS
While FMINSEARCHCON does not permit ANY function evaluation outside the feasible domain, MINIMIZE can be either allowed (default) or disallowed ('AlwaysHonorConstraints' option) to do so.
Its behavior is similar to that of FMINCON (Optimization Toolbox), which makes it useful for those who do not have the Optimization Toolbox, only have one-off simple problems to solve, or are considering buying the toolbox and want to practice with FMINCON's interface first.
Note that MINIMIZE is by no means intended to be a full replacement of FMINCON, since the algorithms used by FMINCON are simply better. However, it does have a few notable advantages.
For relatively small problems where it is hard or impossible to come up with good initial estimates, MINIMIZE can be used effectively as a global optimization routine, providing a simple means to find good initial estimates.
It is also particularly useful in cases where the objective function is costly to compute and hard or impossible to differentiate analytically. In such cases, FMINCON is forced to compute the derivatives numerically, which usually takes > 60% of the computation time for a sizeable problem. Since FMINSEARCH is the engine for MINIMIZE, no derivatives are required, which can make it more efficient than using FMINCON.
With the addition of FMINLBFGS (included in this publication), it is also useful for extremely large problems (over 3000 variables). Hessian information, even with BFGS solvers, can consume large amounts of memory. This problem is solved by using a limited-memory version of the BFGS routines, implemented nicely in FMINLBFGS.
Usage:
sol = MINIMIZE(func, x0)
sol = MINIMIZE(func, x0, lb,ub)
sol = MINIMIZE(func, x0, A,b)
sol = MINIMIZE(func, x0, A,b, Aeq,beq)
sol = MINIMIZE(func, x0, A,b, Aeq,beq, lb,ub)
sol = MINIMIZE(func, x0, A,b, Aeq,beq, lb,ub, nonlcon, options)
[sol, fval] = MINIMIZE(func, ...)
[sol, fval, exitflag] = MINIMIZE(func, ...)
[sol, fval, exitflag, output] = MINIMIZE(func, ...)
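As a quick illustration of the call signatures above, here is a small runnable example following the argument order shown (the 105 coefficient matches the Rosenbrock variant used elsewhere on this page; the specific bounds and constraint values are made up for illustration):

```matlab
% Minimize a Rosenbrock-type function subject to simple bounds and the
% linear inequality constraint x(1) + x(2) <= 1.
rosen = @(x) (1 - x(1)).^2 + 105*(x(2) - x(1).^2).^2;

A  = [1 1];    b  = 1;        % A*x <= b
lb = [-2 -2];  ub = [2 2];    % lb <= x <= ub

[sol, fval, exitflag] = minimize(rosen, [0 0], A, b, [], [], lb, ub);
```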
If you find this work useful, please consider a donation:
https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=6G3S5UYM7HJ3N
1.7 - [linked to GitHub]
1.6 - Screenshot update
1.5 - MAJOR update; see the changelogs for further details.
1.4 - Updated contact info
1.3 - Removed dependency on the Optimization Toolbox (TolCon). Added global routine, and an associated exitflag (3).
1.2 - Corrected problem with 1-D functions, and included a more robust version of the NM algorithm (see changelog)
1.1 - Two bugs fixed: 1) [x0] can now be a matrix, just as in FMINSEARCH. Also, I cleaned up the code somewhat, and expanded error handling a bit.
Inspired by: fminsearchbnd, fminsearchcon
Inspired: Population Balance Equation Modeling of Precipitation, CORE: Conceptual Optimization of Rotorcraft Environment
Rody Oldenhuis (view profile)
@MohammadRezaAndalibi No, there isn't; the code and its tests are all there is. It builds on a couple of ideas: (1) the Nelder-Mead algorithm (fminsearch), (2) L-BFGS, and (3) transformation and barrier functions, which have been common in the mathematical optimization world for decades. So, in effect, this code provides convenience, but not novelty, which is why it's not published.
Mohammad Reza Andalibi (view profile)
Dear Rody,
I was wondering if aside from citing your code there is some publication that I can cite. Thank you very much.
PS: By the way, this is amazing code! I used it for fitting some experimental data to a computational model (a set of 10 ODEs) and it worked absolutely great. Thank you very much.
Kévin Fontaine (view profile)
Dear Rody,
I have one question:
Is it possible to have a boundary that depends on a variable?
Let me explain: I need a boundary such that x(2) > f(x(1)).
If this condition is not met, one of my nonlinear constraints will lead to an error.
For this reason I can't express this condition as a nonlinear constraint; the condition needs to be met before the nonlinear constraints are evaluated.
I wonder if there is a way to do this.
Thanks for this great function,
Kevin.
Rody Oldenhuis (view profile)
@SimonWoodward Thank you for the kind words :) You can use this citation:
Rody Oldenhuis, orcid.org/0000-0002-3162-3660. "minimize" version 1.7, 2017/April/06. MATLAB minimization algorithm. http://nl.mathworks.com/matlabcentral/fileexchange/24298-minimize.
Simon Woodward (view profile)
Thanks so much for this. How would you like me to cite this in the paper I am writing?
Mohammad Reza Andalibi (view profile)
Good day Rody ,
I have two questions:
1) When I input 'PlotFcns',{@optimplotx,@optimplotfval} into setoptimoptions, the current-point graph does not show the design variables in the correct [lb,ub] domain. It seems they are the transformed variables. Is that the case? Is there any way to plot the untransformed design parameters?
2) I wanted to use your code for my publication, but I did not know how I should cite it. Could you please let me know what the references for your work are and how I should cite your code?
Thank you again,
Reza
Martin Vlcek (view profile)
Thank you very much for this function.
However, when I execute this M-file:
function test_minimize
rosen = @(x) (1-x(1)).^2 + 105*(x(2)-x(1).^2).^2;
options = optimset('TolFun', 1e8, 'TolX', 1e8);
minimize(rosen, [3 3], [],[],[],[],[],[],...
@nonlcon, [], options)
end
function [c, ceq] = nonlcon(x)
c = norm(x) - 1;
ceq = x(1)^2 + x(2)^3 - 0.5;
end
I am getting following errors:
Error using nonlcon
Too many input arguments.
Error in minimize/conFcn (line 676)
[varargout{1:nargout}] = feval(nonlcon , reshape(x,size(new_x)), varargin{:});
Error in minimize/check_input (line 523)
[c, ceq] = conFcn(x0);
Error in minimize (line 300)
checks_OK = check_input;
Error in test_minimize (line 6)
minimize(rosen, [3 3], [],[],[],[],[],[],...
Thank you very much for any help.
Best regards,
Martin
Mateusz (view profile)
Emmanuel Ramasso (view profile)
Very useful, thanks
mathieu (view profile)
Rody,
I experience some trouble with this routine. When I increase the length of the variable vector between two runs of the routine, the change is not directly taken into account. When I use an Optimization Toolbox routine before running your routine a second time, the update is made correctly. Also, in the plot, using "optimplotx", I think I'm not seeing x but delta_x.
Thanks in advance
JuanPablo Ortega (view profile)
Great script! Thank you!
Oguz (view profile)
Christophe (view profile)
Rody,
Thanks for that. I'll look more closely at a wrapper. I did one for "cmaes", maybe I can do one for optimize.
I want my "variables" to change by at least 0.01, because they are "physical" dimensions in millimetres of something I am designing. For my cost function, a variable that changes by less than 0.01 mm will not change the value of the cost function, as I round off to two decimal points in the analysis; my manufacturing accuracy is at best 0.01 mm, but more like 0.02 mm (20 microns) in real life. I'm designing hardware :)
Thanks for taking the time to look at my question. Much appreciated...
Rody Oldenhuis (view profile)
@Christophe,
It seems you and Ben have discrete problems, whereas OPTIMIZE() is for continuous problems. In other words, "grid" constraints are not supported.
Indeed you are correct: "DiffMinChange" is not supported by OPTIMIZE(). But that is because it doesn't do what you think it does. From the documentation of FMINCON():
"Minimum change in variables for finite-difference gradients (a positive scalar). The default is 1e-8."
Thus it is an option to control the minimum step in the numerical computation of *derivatives*, something that OPTIMIZE() does not need or use.
As a workaround, you can optimize your function with a wrapper:
myFunc = @(x) yourOriginalFunction( round(x*100)/100 );
Note that this restricts x to a grid with step 0.01 in all dimensions. This may not be what you want, but I trust you get the idea.
Out of general interest: *why* is your x restricted to 0.01 minimum steps?
Rody Oldenhuis (view profile)
@Wieland,
Hmmm, that is strange indeed. Thanks for reporting this bug! I'll try to fix it in the upcoming release.
By the way, 'superstrict' does not seem to have the same problem...
Christophe (view profile)
Dear Rody,
I have the same problem as Ben below. I need a minimum step of 0.01 in my variables within the bounds. I've tried 'DiffMinChange',1e-2 but the program does not seem to register this. Also, is the standard fminsearch used by default, or does the program use a modified version of fminsearch?
Wieland Brendel (view profile)
Dear Oldenhuis,
thank you so much for this wonderful optimization script! I am about to release a data analysis method that needs one nonlinear, bounded optimization step with one linear equality constraint. Your script will make the distribution of the code much easier (compared with having people buy & install the optimization toolbox).
In any case, I stumbled upon a problem: I only get the initial value returned while using both the equality constraint and 'strict'. The code in a nutshell:
L1 = @(x) sum(x.^2);
Aeq = ones(1,3);
beq = 1;
a0 = [0.5 0.25 0.25]';
[sol0,fval0,exit0] = optimize(L1,a0,[],[],[],[],Aeq,beq)
[sol1,fval1,exit1] = optimize(L1,a0,[],[],[],[],Aeq,beq,[],'strict')
Both exit with status 1, but only the first yields the correct result. Would you be able to check what is going wrong? Your help is highly appreciated!
Thanks a lot in advance!
PS: I need the option 'strict' because of a log in my actual objective used in the data analysis method.
PPS: MATLAB 2012b, 64-bit, on Windows 8
ZAFAR (view profile)
Dear Oldenhuis,
Actually my problem (say, a 3-variable problem) has one linear equality constraint, e.g. Aeq = [1 1 1], beq = 200, and also lower and upper bounds for each variable, e.g. lb = [30 40 50] and ub = [80 100 85]. Let x0 = [60 70 80] and let the function be fn. There is no inequality constraint.
I write the optimize statement as follows
sol=optimize(@fn,x0,[],[],Aeq,beq,lb,ub)
But it did not work and the message comes to be
Error using optimize/check_input (line 466)
Given linear constraint vector [b] has incompatible
size with given [x0].
Error in optimize (line 347)
check_input;
Kindly advise.
ZAFAR (view profile)
@Oldenhuis,
What should I do till then? What should be the order of the input arguments? Without this facility, this utility cannot be used for constrained optimization. Kindly advise.
Rody Oldenhuis (view profile)
@ZAFAR:
You are right: in one of the updates, I changed the order of the input arguments so that they correspond with fmincon(). However, I did NOT update the documentation to reflect these changes.
I'm unfortunately very short on time now, but I will do my best to update optimize() soon.
ZAFAR (view profile)
Dear Rody Oldenhuis,
When i try to optimize the following fn (with inequality constraints)
rosen = @(x) (1-x(1)).^2 + 105*(x(2)-x(1).^2).^2;
optimize(rosen,[0 0],[],[],[1 1], 1)
I get the error message
Error using optimize/check_input (line 466)
Given linear constraint vector [b] has incompatible
size with given [x0].
Error in optimize (line 347)
check_input;
Why is this so, although i used the example given with optimize.m
Similar is the problem with equality constraints. Kindly spare a little of your precious time to reply
ZAFAR (view profile)
Please read the first line of my message as
"When i try to optimize the following fn (with inequality constraints)"
Guillaume (view profile)
Thanks so much for this function Rody!
Although the syntax changes a little with respect to the fmincon.m utility, the optimization process runs equally well (and maybe even faster!).
Qu (view profile)
pietro (view profile)
It works well, but sometimes it gets stuck at a strange objective function value (fval = 7.22597e+086), while the actual function value is different, around 1000. I did several tries, but it doesn't change at all. Any suggestion is appreciated.
Shmuel (view profile)
nice, but change line 12 in testoptimize.m
to:
clc, rosen = @(x) (1-x(1))^2 + 105*(x(2)-x(1)^2)^2 - 6000*sinc(x(2)^2+x(1)^2); % adding local minima
and you get a local-minimum solver, just as fminsearch is.
Marco G (view profile)
Very good, it does work! I was using the routine to solve a constrained problem (my constraints are basically non-negativity of the x's and that they sum up to one: sum_i x_i = 1) with some additional parameters in the objective function. I noticed that the sum constraint is not perfectly matched (i.e. the sum is 0.9 or so) even though I used the 'strict' option. Is there a way to force the constraint to be respected, perhaps at the cost of something else (e.g. a less optimal minimum)? Thanks for the help.
Alexei (view profile)
Ryan Webb (view profile)
I use optimize for many problems, however I have found a bug when optimize is used with the distributed computing toolbox. When it is being run on a worker, it gives an "Undefined Function Handle" error.
Unfortunately I am unable to trace exactly which line this error comes from, but I can say that it is after line 398. The likely culprit is when optimize calls fminsearch and funfncP is somehow out of scope...
Sky Sartorius (view profile)
Andre Guy Tranquille (view profile)
Nagendra (view profile)
please specify the algorithm journal paper which is used to write this algorithm
Ben (view profile)
Code seems to work great. Thanks.
Is there any way to set the increment size for the x0 range? Allow me to clarify: let's say I have a lower bound of -2 and an upper bound of 2. I want the guesses to increment by 0.1, so effectively I want to check the values [-2:0.1:2]. Where in your code can I set the 0.1 increment? Also, can that increment be different for each variable in x0?
Rody Oldenhuis (view profile)
Thierry: it's basically already included (the included HTML is a published M-file, so all commands can basically be copy-pasted). But once I get home tonight, I'll include it nonetheless for completeness. As for the demo, that would indeed be great! I'll see to it. Thanks for the suggestion!
As for the dependencies: To my knowledge, FMINSEARCH is indeed included in MATLAB's standard library (in my version it's located in R2008b\toolbox\matlab\funfun\fminsearch.m), but of course FMINCON is only included in the optimization toolbox (located in R2008b\toolbox\shared\optimlib\fmincon.m), so OPTIMIZE is for everyone in need of a "basic" FMINCON ^_^
Thierry Dalon (view profile)
Hi!
looks GREAT!
could you add to the package the source for the testoptimize.m, please ?
maybe add it as well as a demo alike TMW optimdemos.
John D'Errico (view profile)
Well done.