Code covered by the BSD License  

minimize


29 May 2009 (Updated )

Minimize constrained functions with FMINSEARCH or FMINLBFGS, globally or locally


File Information
Description

( NOTE: adding the main folder and its sub-folders to the MATLAB search path will enable you to view the extended documentation in the MATLAB help browser. )
MINIMIZE is an improvement upon the functions FMINSEARCHBND and FMINSEARCHCON written by John D'Errico (also available on the File Exchange). It solves the optimization problem

min f(x)

s.t.

lb <= x <= ub
A * x <= b
Aeq * x = beq
c(x) <= 0
ceq(x) = 0

using a coordinate transformation for the bound constraints, and penalty functions for the other constraints. The penalty functions used are pseudo-adaptive, in that they are designed to penalize heavily yet prevent overflow from ever happening.
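As an illustration, the bound handling works along these lines (a minimal sketch in the spirit of FMINSEARCHBND; the names objfun, xfrm, y0 and the example bounds are illustrative, not taken from the actual code):

objfun = @(x) (1-x(1))^2 + 105*(x(2)-x(1)^2)^2;    % example objective (Rosenbrock-type)
lb = [-2 -2];   ub = [2 2];   x0 = [0 0];          % bounds and initial estimate
xfrm = @(y) lb + (ub - lb).*(sin(y) + 1)/2;        % maps unconstrained y into [lb, ub]
y0 = asin( 2*(x0 - lb)./(ub - lb) - 1 );           % transform the initial estimate
ysol = fminsearch(@(y) objfun(xfrm(y)), y0);       % unconstrained search in y
xsol = xfrm(ysol);                                 % transform back to the original space

The remaining constraint types are then handled by adding penalty terms to the transformed objective rather than by transforming the variables.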

The main differences between MINIMIZE and FMINSEARCHCON are

 - rudimentary support for global optimization problems
 - support for (non)linear equality constraints
 - more controllable strictness
 - support for FMINLBFGS

While FMINSEARCHCON does not permit ANY function evaluation outside the feasible domain, MINIMIZE can either be allowed (the default) or disallowed (via the 'AlwaysHonorConstraints' option) to do so.

Its behavior is similar to that of FMINCON (Optimization Toolbox), which makes it useful for those who do not have the Optimization Toolbox and only need to solve simple one-off problems, or who are considering buying the toolbox and want to practice with FMINCON's interface first.

Note that MINIMIZE is by no means intended to be a full replacement of FMINCON, since the algorithms used by FMINCON are simply better. However, it does have a few notable advantages.

For relatively small problems where it is hard or impossible to come up with good initial estimates, MINIMIZE can be used effectively as a global optimization routine, providing a simple means to find good initial estimates.

It is also particularly useful in cases where the objective function is costly to compute and hard or impossible to differentiate analytically. In such cases, FMINCON is forced to compute the derivatives numerically, which usually takes more than 60% of the computation time for a sizeable problem. Since FMINSEARCH is the engine behind MINIMIZE, no derivatives are required, which can make it more efficient than FMINCON.

With the addition of FMINLBFGS (included in this publication), it is also useful for extremely large problems (over 3000 variables). Hessian information, even with BFGS solvers, can consume large amounts of memory; this problem is avoided by using a limited-memory version of the BFGS routines, nicely implemented in FMINLBFGS.

Usage:

sol = MINIMIZE(func, x0)
sol = MINIMIZE(func, x0, lb,ub)
sol = MINIMIZE(func, x0, A,b)
sol = MINIMIZE(func, x0, A,b, Aeq,beq)
sol = MINIMIZE(func, x0, A,b, Aeq,beq, lb,ub)
sol = MINIMIZE(func, x0, A,b, Aeq,beq, lb,ub, nonlcon, options)

[sol, fval] = MINIMIZE(func, ...)
[sol, fval, exitflag] = MINIMIZE(func, ...)
[sol, fval, exitflag, output] = MINIMIZE(func, ...)
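For example, a typical call with bounds and a nonlinear inequality constraint might look like the sketch below (the objective, bounds and constraint are illustrative only; nonlcon is assumed to return [c, ceq], mirroring FMINCON's interface):

rosen = @(x) (1-x(1))^2 + 105*(x(2)-x(1)^2)^2;     % Rosenbrock-type objective
lb = [-5 -5];   ub = [5 5];                        % bound constraints
nonlcon = @(x) deal( x(1)^2 + x(2)^2 - 4, [] );    % c(x) <= 0, no equality constraints
[sol, fval, exitflag] = minimize(rosen, [0 0], [],[], [],[], lb, ub, nonlcon);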

If you find this work useful, please consider a donation:
https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=6G3S5UYM7HJ3N

Acknowledgements

Fminsearchbnd, Fminsearchcon and Fminlbfgs: Fast Limited Memory Optimizer inspired this file.

This file inspired Core: Conceptual Optimization Of Rotorcraft Environment.

MATLAB release: MATLAB 7.10 (R2010a)
Comments and Ratings (24)
23 Feb 2014 Christophe

Rody,

Thanks for that. I'll look more closely at a wrapper. I did one for "cmaes", maybe I can do one for optimize.

I want my "variables" to change by at least 0.01 because they are "physical" dimensions in millimetres of something I am designing. For my cost function, a variable that changes by less than 0.01 mm will not change the value of the cost function, since I round off to two decimal points in the analysis: my manufacturing accuracy is at best 0.01 mm, but more like 0.02 mm (20 microns) in real life. I'm designing hardware :)

Thanks for taking the time to look at my question. Much appreciated...

21 Feb 2014 Rody Oldenhuis

@Christophe,

It seems you and Ben have discrete problems, whereas OPTIMIZE() is for continuous problems. In other words, "grid" constraints are not supported.

Indeed you are correct -- "DiffMinChange" is not supported by OPTIMIZE(). But that is because it doesn't do what you think it does. From the documentation of FMINCON():

"Minimum change in variables for finite-difference gradients (a positive scalar). The default is 1e-8."

Thus it is an option to control the minimum step in the numerical computation of *derivatives*, something that OPTIMIZE() does not need or use.

As a workaround, you can optimize your function with a wrapper:

myFunc = @(x) yourOriginalFunction( round(x*100)/100 );

Note that this restricts X to a grid with step 0.01 in all dimensions. This may not be what you want, but I trust you get the idea.
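If you need a different step for each variable, the same trick generalizes (the step sizes below are, of course, only illustrative):

steps = [0.01 0.5];                              % one grid step per variable
snap = @(x) round(x./steps).*steps;              % assumes x and steps are both 1-by-N
myFunc = @(x) yourOriginalFunction( snap(x) );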

As a general interest -- *why* is your X restricted to 0.01 minimum steps?

20 Feb 2014 Rody Oldenhuis

@Wieland,

Hmmm that is strange indeed. Thanks for reporting this bug! I'll try to fix this in the upcoming release.

By the way; 'superstrict' does not seem to have the same problem...

20 Feb 2014 Christophe

Dear Rody,
I have the same problem as Ben below. I need to have a minimum step of 0.01 in my variables within the bounds. I've tried 'DiffMinChange',1e-2 but the program does not seem to register this. Also, is the standard fminsearch used by default or does the program use a modified version of fminsearch?

24 Oct 2013 Wieland Brendel

Dear Oldenhuis,
thank you so much for this wonderful optimization script! I am about to release a data analysis method that needs one nonlinear, bounded optimization step with one linear equality constraint. Your script will make the distribution of the code much easier (compared with having people buy & install the optimization toolbox).

In any case, I stumbled upon a problem: I only get the initial value returned while using both the equality constraint and 'strict'. The code in a nutshell:

L1 = @(x) sum(x.^2);
Aeq = ones(1,3);
beq = 1;

a0 = [0.5 0.25 0.25]';
[sol0,fval0,exit0] = optimize(L1,a0,[],[],[],[],Aeq,beq)
[sol1,fval1,exit1] = optimize(L1,a0,[],[],[],[],Aeq,beq,[],'strict')

Both exit with status 1, but only the first yields the correct result. Would you be able to check what is going wrong? Your help is highly appreciated!

Thanks a lot in advance!

PS: I need the option 'strict' because of a log in my actual objective used in the data analysis method.

PPS: Matlab 2012b 64bit on Windows 8

09 Apr 2013 ZAFAR

Dear Oldenhuis,
Actually my problem (say, a 3-variable problem) has one linear equality constraint, e.g. Aeq = [1 1 1], beq = 200, and also lower and upper bounds for each variable, e.g. lb = [30 40 50] and ub = [80 100 85]. Let x0 = [60 70 80] and let the function be fn. There is no inequality constraint.
I write the optimize statement as follows

sol=optimize(@fn,x0,[],[],Aeq,beq,lb,ub)

But it did not work, and the error message was

Error using optimize/check_input (line 466)
Given linear constraint vector [b] has incompatible
size with given [x0].

Error in optimize (line 347)
check_input;

Kindly advise.

09 Apr 2013 ZAFAR

@Oldenhuis,
What should I do until then? What should the order of input arguments be? Without this, the utility cannot be used for constrained optimization. Kindly give your advice.

09 Apr 2013 Rody Oldenhuis

@ZAFAR:

You are right -- In one of the updates, I changed the order of the input arguments so that they correspond with fmincon(). However, I did NOT update the documentation to reflect these changes.

I'm unfortunately very short on time now, but I will do my best to update optimize() soon.

08 Apr 2013 ZAFAR

Dear Rody Oldenhuis,

When I try to optimize the following fn (with inequality constraints)

rosen = @(x) (1-x(1)).^2 + 105*(x(2)-x(1).^2).^2;

optimize(rosen,[0 0],[],[],[1 1], 1)

I get the error message

Error using optimize/check_input (line 466)
Given linear constraint vector [b] has incompatible
size with given [x0].

Error in optimize (line 347)
check_input;

Why is this so, although I used the example given with optimize.m?
The problem is similar with equality constraints. Kindly spare a little of your precious time to reply.

08 Apr 2013 ZAFAR

Please read the first line of my message as

"When I try to optimize the following fn (with inequality constraints)"

22 Aug 2012 Guillaume

Thanks so much for this function Rody!
Although the syntax changes a little bit with respect to the fmincon.m utility, the optimization process runs just as well (and maybe even faster!).

16 Apr 2012 Qu  
15 Feb 2012 pietro

It works well, but sometimes it gets stuck at a strange objective function value (fval = 7.22597e+086), while the actual function value is different, around 1000. I tried several times, but it doesn't change at all. Any suggestions are appreciated.

11 Dec 2011 Shmuel

Nice, but change line 12 in testoptimize to:
clc, rosen = @(x) (1-x(1))^2 + 105*(x(2)-x(1)^2)^2 - 6000*sinc(x(2)^2+x(1)^2); % adding local minima
and you get a local minimum solver, just as fminsearch is.

29 Jul 2011 Marco G

Very good, it does work! I was using the routine to solve a constrained problem (my constraints are basically non-negativity of the x's and that they sum to one: sum_i x_i = 1) with some additional parameters in the objective function. I noticed that the sum constraint is not perfectly satisfied (i.e. the sum is 0.9 or so) even though I used the 'strict' option. Is there a way to force the constraint to be respected, perhaps at the cost of a less optimal minimum? Thanks for the help.

13 Feb 2011 Alexei  
31 Aug 2010 Ryan Webb

I use optimize for many problems; however, I have found a bug when optimize is used with the Distributed Computing Toolbox. When it is being run on a worker, it gives an "Undefined Function Handle" error.

Unfortunately I am unable to trace exactly which line this error comes from, but I can say that it is after line 398. The likely culprit is when optimize calls fminsearch and funfncP is somehow out of scope...

07 Jul 2010 Sky Sartorius  
15 Feb 2010 Andre Guy Tranquille  
09 Feb 2010 Nagendra

Please specify the journal paper on which this algorithm is based.

14 Oct 2009 Ben

Code seems to work great. Thanks.
Is there any way to set the increment size for the x0 range? Allow me to clarify: let's say I have a lower bound of -2 and an upper bound of 2, and I want the guesses to increment by 0.1. So effectively, I want to check the values [-2:0.1:2]. Where in your code can I set the 0.1 increment? Also, can that increment be different for each variable in x0?

02 Jun 2009 Rody Oldenhuis

Thierry: it's basically already included (the included HTML is a published M-file, so all commands can basically be copy-pasted). But once I get home tonight, I'll include it nonetheless for completeness. As for the demo, that would indeed be great! I'll see to it. Thanks for the suggestion!

As for the dependencies: To my knowledge, FMINSEARCH is indeed included in MATLAB's standard library (in my version it's located in R2008b\toolbox\matlab\funfun\fminsearch.m), but of course FMINCON is only included in the optimization toolbox (located in R2008b\toolbox\shared\optimlib\fmincon.m), so OPTIMIZE is for everyone in need of a "basic" FMINCON ^_^

02 Jun 2009 Thierry Dalon

Hi!
looks GREAT!
Could you add the source for testoptimize.m to the package, please?
Maybe also add it as a demo, like TMW's optimdemos.

29 May 2009 John D'Errico

Well done.

Updates
30 Jul 2009

Two bugs fixed:

1) [x0] can now be a matrix, just as in FMINSEARCH.
2) Fixed a minor issue with the strictness setting

Also, I cleaned up the code somewhat, and expanded error handling a bit.

04 Aug 2009

Corrected a problem with 1D functions, and included a more robust version of the Nelder-Mead algorithm (see changelog)

05 Aug 2009

Removed dependency on the optimization toolbox (TolCon). Added global routine, and an associated exitflag (-3).

19 Feb 2014

- Updated contact info
- Minor changes to deal with uncaught code analyzer messages

13 Mar 2014

MAJOR update; see the changelogs for further details.

13 Mar 2014

- Screenshot update
