Rody,
Thanks for that. I'll look more closely at a wrapper. I did one for "cmaes", maybe I can do one for optimize.
I want my "variables" to change by at list 0.01 because they are "physical" dimensions in millimetres of something I am designing. For my cost-fucntion, a variable that changes by less than 0.01mm will not change the value of the cost-function as I round off to two decimal points in the analysis as my manufacture accuracy is at best 0.01mm, but more line 0.02mm (20 microns) in real life. I'm designing hardware :)
Thanks for taking the time to look at my question. Much appreciated...
Comment only
21 Feb 2014
minimize
Minimize constrained functions with FMINSEARCH or FMINLBFGS, globally or locally
@Christophe,
It seems you and Ben have discrete problems, whereas OPTIMIZE() is for continuous problems. In other words, "grid" constraints are not supported.
Indeed you are correct -- "DiffMinChange" is not supported by OPTIMIZE(). But that is because it doesn't do what you think it does. From the documentation of FMINCON():
"Minimum change in variables for finite-difference gradients (a positive scalar). The default is 1e-8."
Thus it is an option that controls the minimum step size in the numerical computation of *derivatives* -- something that OPTIMIZE() does not need or use.
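For context, here is the kind of option it is -- a finite-difference control passed through OPTIMSET for derivative-based solvers, not a grid constraint (a minimal sketch; the tolerance value is just an example):

```matlab
% 'DiffMinChange' only bounds the step used when estimating gradients by
% finite differences in derivative-based solvers such as FMINCON; it does
% NOT quantize the decision variables themselves.
opts = optimset('DiffMinChange', 1e-2);   % example value, not a recommendation
% x = fmincon(fun, x0, A, b, [], [], lb, ub, [], opts);
```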
As a workaround, you can optimize your function with a wrapper:
myFunc = @(x) yourOriginalFunction( round(x*100)/100 );
Note that this restricts x to a grid with a step of 0.01 in all dimensions. This may not be exactly what you want, but I trust you get the idea.
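Spelled out a bit more (a sketch only; yourOriginalFunction, x0, lb and ub are placeholders for your own cost function, start point and bounds):

```matlab
% Round the variables to the manufacturing grid (0.01 mm) before the cost
% function ever sees them; any two points closer together than 0.01 then
% map to the same cost value.
gridStep = 0.01;
myFunc   = @(x) yourOriginalFunction( round(x/gridStep)*gridStep );

% Then optimize the wrapped function as usual:
% [sol, fval] = optimize(myFunc, x0, lb, ub);
```

The solver itself still moves through continuous space; only the cost function becomes piecewise constant on the grid, which is usually good enough for this kind of tolerance argument.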
Out of general interest -- *why* is your X restricted to minimum steps of 0.01?
20 Feb 2014
@Wieland,
Hmmm that is strange indeed. Thanks for reporting this bug! I'll try to fix this in the upcoming release.
By the way: 'superstrict' does not seem to have the same problem...
20 Feb 2014
Dear Rody,
I have the same problem as Ben below. I need to have a minimum step of 0.01 in my variables within the bounds. I've tried 'DiffMinChange',1e-2 but the program does not seem to register this. Also, is the standard fminsearch used by default or does the program use a modified version of fminsearch?
24 Oct 2013
Dear Oldenhuis,
thank you so much for this wonderful optimization script! I am about to release a data analysis method that needs one nonlinear, bounded optimization step with one linear equality constraint. Your script will make distributing the code much easier (compared with having people buy and install the Optimization Toolbox).
In any case, I stumbled upon a problem: I only get the initial value returned while using both the equality constraint and 'strict'. The code in a nutshell:
L1 = @(x) sum(x.^2);
Aeq = ones(1,3);
beq = 1;
a0 = [0.5 0.25 0.25]';
[sol0,fval0,exit0] = optimize(L1,a0,[],[],[],[],Aeq,beq)
[sol1,fval1,exit1] = optimize(L1,a0,[],[],[],[],Aeq,beq,[],'strict')
Both exit with status 1, but only the first yields the correct result. Would you be able to check what is going wrong? Your help is highly appreciated!
Thanks a lot in advance!
PS: I need the option 'strict' because of a logarithm in the actual objective function used in the data analysis method.
PPS: Matlab 2012b 64bit on Windows 8