File Exchange


FMINLBFGS: Fast Limited Memory Optimizer

version 1.8 (9.82 KB) by Dirk-Jan Kroon

Quasi-Newton limited-memory BFGS and steepest descent optimizer for large numbers of unknowns


FMINLBFGS is a memory-efficient optimizer for problems with large numbers of unknowns and CPU-expensive gradients, such as image registration.

Supported:
- Quasi Newton Broyden–Fletcher–Goldfarb–Shanno (BFGS).
- Limited memory BFGS (L-BFGS).
- Steepest Gradient Descent optimization.

Advantages:
- Quasi-Newton, so there is no need to provide a Hessian; the Hessian approximation is instead updated by analyzing successive gradient vectors.
- L-BFGS never explicitly forms or stores the Hessian matrix, which can be quite expensive when the number of dimensions becomes large.
- There is an input option to replace gradient calls during the line search with normal function calls, useful if the gradient is CPU-expensive.

This function uses the same interface and options as MATLAB functions such as fminunc and fminsearch, but optimset can be replaced by a plain struct when the Optimization Toolbox is not available.

Example:
options = optimset('GradObj', 'on', 'Display', 'iter', 'HessUpdate', 'bfgs', 'GoalsExactAchieve', 1);
[x,fval] = fminlbfgs(@myfun, [1 1 1 1 1], options);
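For completeness, here is a sketch of what the objective for the example above could look like; myfun here is hypothetical, and assumes that with 'GradObj' set to 'on' the function returns the gradient as a second output:

```matlab
% Hypothetical objective for the example above.
% With 'GradObj' = 'on', the optimizer calls [f, g] = myfun(x).
function [f, g] = myfun(x)
    f = sum((x - 2).^2);   % cost: must return a scalar
    g = 2*(x - 2);         % gradient: same size as x
end
```

When the Optimization Toolbox is not available, the optimset call can be replaced by a plain struct with the same field names, e.g. options = struct('GradObj', 'on', 'Display', 'iter', 'HessUpdate', 'bfgs', 'GoalsExactAchieve', 1);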

Comments and Ratings (23)

shaojiang wu

Amro

Rody Oldenhuis

Hi Dirk-Jan,

After lots of life happened, I now *finally* picked up work again to get my OPTIMIZE() to officially support your optimizer. So, getting back to that collaboration on OPTIMIZE() we were discussing 4 years ago... :)

Here's a small list of issues I encountered:

- when optim.Display is empty ([]), it is NOT the same as 'off': it still results in at least the display of the final results. As people will probably be creating "optim" with optimset(), we should expect empties and interpret them as 'off'.

- The output structure contains two fields with a slightly different name from those of FMINSEARCH:
-- yours: output.funccount vs. FMINSEARCH: output.funcCount (capital C)
-- yours: output.iteration vs. FMINSEARCH: output.iterations ('s' at the end)

- Your optim.GradConstr has a different meaning than in the optimization toolbox: in FMINCON for example, this setting means that you provide gradients of the non-linear constraint functions. As I need that setting, I propose a name change for your setting: optim.AvoidGrad (or something similar)

That's it. As I said, it's really basic stuff. If you could update FMINLBFGS() with these little things, I'd be very grateful.

Best regards,

Rody Oldenhuis

Tang

Hi, as a student I am trying to use fminlbfgs to estimate some parameters, but the exit message is "line search cannot find an acceptable point along the current search". Does that mean the results are wrong? What should I do to improve the results? Thanks a lot; I hope for your reply.

Thanks for providing this useful piece of numerical optimization code. I implemented it for fitting tomographic data to a model. I found that the step sizes for computing the gradient (DiffMinChange) seemed quite crucial for convergence. Since my set of optimization variables has different domains (e.g. [0...0.5] or [0...2*PI]), I had to modify the code to account for different DiffMin/MaxChange values.
BTW, is there a fast MEX-version of the (L)BFGS optimizer?

TT

Hi, as a beginner I have a problem when there are two unknowns. Say the objective function is f = sin(x) + cos(y), so the gradient is g = [cos(x), -sin(y)]. How should I modify example.m? I have no idea how to assign the initial values: when I set x and y both equal to 1, the result suggests that the dimensions do not match. Can anybody help me modify myfun.m and example.m to give the right answer? Thank you so much.

Stephen Becker

@marco: the standard way to do this is with anonymous functions. e.g. if your function "myfun" accepts arguments (xo, addarg1, addarg2), then define a new function

newfun = @(x) myfun( x, addarg1, addarg2 );

-Stephen
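A self-contained sketch of this pattern (the objective and the extra arguments here are hypothetical):

```matlab
% Bind extra arguments with an anonymous function, then pass the
% resulting single-argument handle to the optimizer.
scale  = 0.5;                                   % hypothetical extra argument
offset = [1 2];                                 % hypothetical extra argument
myfun  = @(x, s, off) s * sum((x - off).^2);    % objective needing extra args
newfun = @(x) myfun(x, scale, offset);          % fix scale and offset
[x, fval] = fminlbfgs(newfun, [0 0], struct('Display', 'off'));
```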

Marco G

Hi. I would like to add in the optimization additional external arguments that are needed by the 'myfun', e.g.:
[RegistrationParams] = fminlbfgs(@myfun, x0, options, addarg1, addarg2, ...)

Is it possible with this function? When I try, I get 'Too many input arguments' error.

Thanks

Dirk-Jan Kroon

*Chris Barber,
Constraints on the variables are not a built-in feature.
But you can constrain your problem by adding a penalty term to your cost function, like:
C = C + abs(log(max(x-a,eps)))*1e-5
In this way x will never go lower than the constant 'a'.
You can also have a look at the constraining code of 'Rody Oldenhuis' (comment above).
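The penalty idea above can be sketched as a wrapped cost function (the base cost and the bound a are hypothetical; the barrier term keeps every element of x above a):

```matlab
% Soft lower-bound constraint via a log-barrier penalty term.
a = 0;                                   % hypothetical lower bound
basecost  = @(x) sum((x - 0.5).^2);      % hypothetical unconstrained cost
penalized = @(x) basecost(x) + sum(abs(log(max(x - a, eps)))) * 1e-5;
[x, fval] = fminlbfgs(penalized, [1 1], struct('Display', 'off'));
```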

Chris Barber

Fantastic code! Just wondering, is there a way to put constraints on the variables?

Siavash

Dirk-Jan Kroon

Thanks Atiyah, I fixed the example in the newest version.

Atiyah

I found the mistake. The small example that comes with the source code is the cause. The objective function is supposed to return a scalar, but in the example @objfunc returns a vector. I changed it and it works fine now. Overall, thanks for the contribution.

Atiyah

Such software is very useful for time-consuming optimization problems. Unfortunately I am not able to use it. Running the script example.m produces a lot of errors. I tried to fix the errors one after another, but there are many: comparison operations between scalars and vectors (e.g. s < v), and empty index vectors (e.g. x(ind)). I stopped fixing it because I don't know what the right thing to do is.

Tomonori

I still have the same problem of the function returning an empty variable...

Tomonori

I encountered a situation that potentially indicates a bug. When the initial parameter set already gives the minimum value of the objective function, the return values (at least x and fval) of fminlbfgs are empty, which can halt applications that rely on this function. I'm not familiar with the internal workings of this function and could not figure out which line needs to be modified, but from the observed input-output relationship, I guess zero iterations result in empty output.

If you have references for these optimization methods, would you mind letting us know?

LI guang

Dirk-Jan Kroon

Rody Oldenhuis, thank you for your comment and compliments.

I have no experience with (in)equality-constrained optimization, nor am I an expert on optimization; I just needed a good optimizer for my image registration.

Your OPTIMIZE() wrapper must be very good, because John D'Errico says well done ;-).

Please explain which options or other things have to be changed to let FMINLBFGS work with your function; then I will try to change those things in FMINLBFGS and link to your code in the description, because people like Sergio Rossi here need constraints.

In my image registration problems I always adjust my delta-step for finite differences to be on the order of 1% of the current line-search step length. Those derivatives are more valid for larger step lengths than for infinitesimally small delta-steps, especially in the case of noise or small oscillating functions. Maybe this is also the reason in your case.

Rody Oldenhuis

I found a minor bug:
- when the maximum number of function evaluations has been reached, the output message is 'Algorithm was terminated by the output function.'

Just a general question: I find that when passing explicit derivatives instead of using the central-difference ones, FMINLBFGS()'s basin of attraction seems to shrink significantly. The end results are more accurate, and my derivatives are good (I checked them on paper, and double-checked them with central differences), which hints that they DO work, but convergence is much harder to attain. Is there something I'm missing?

Also - I'm currently adjusting my own OPTIMIZE() wrapper function (see my profile) to do (non)linear (in)equality-constrained optimization with FMINLBFGS(). To that end, I want to ask you to make a few changes. For example, my OPTIMIZE() must of course get the gradients of the constraint functions, which is normally set through the option field GradConstr. However, you use that option in a different way: to constrain CPU usage on calculations. Also, you use output.funccount instead of FMINSEARCH's output.funcCount (capital C), and many more minor things.

Anyway, if you're interested in collaboration on OPTIMIZE, let me know. Otherwise, I'll just change these little things myself and post it with the next update of OPTIMIZE() (giving you the credit of course).

But, aside from those minor things, a great function! Well commented and documented, intuitive variable names and in all...very readable. Good work!

Sergio Rossi

This code is excellent: optimizations with around 100 parameters, which were impossible using fminsearch on my machine and very slow even on a dedicated server, are now a matter of minutes on the server. Thank you, Dirk, for the brilliant work. If anybody wrote a wrapper function for this that implemented linear/nonlinear constraints, it would become a state-of-the-art optimizer.

Jackson Shen

Brilliant! Excellent Work!

Updates

1.8

Fixed the example and out-of-memory issues.

1.7

output.fval is now also set in case the initial position is already the minimum

1.5

Added option to visualize the line search. Fewer iterations used in simple line search. Fixed imaginary-value warning.

1.4

Fixed a small NaN error.

1.3

Help extended; all parameters are now available as options.

1.1

Fixed an "oscillation" bug in simple-linesearch which could occur with a non-smooth error function.

MATLAB Release
MATLAB 7.7 (R2008b)
