
4.6 | 13 ratings Rate this file 39 Downloads (last 30 days) File Size: 9.82 KB File ID: #23245 Version: 1.8

FMINLBFGS: Fast Limited Memory Optimizer


Dirk-Jan Kroon (view profile)


10 Mar 2009 (Updated)

Quasi-Newton limited-memory BFGS and steepest-descent optimizer for large numbers of unknowns


File Information

FMINLBFGS is a memory-efficient optimizer for problems such as image registration, with large numbers of unknowns and CPU-expensive gradients.

- Quasi Newton Broyden–Fletcher–Goldfarb–Shanno (BFGS).
- Limited memory BFGS (L-BFGS).
- Steepest Gradient Descent optimization.

- Quasi-Newton, so there is no need to provide a Hessian; the Hessian approximation is instead updated by analyzing successive gradient vectors.
- L-BFGS never explicitly forms or stores the Hessian matrix, which can be quite expensive when the number of dimensions becomes large.
- There is an input option to replace gradient calls during the line search with normal function calls, if the gradient is CPU-expensive.

This function uses the same interface and options as MATLAB functions such as fminunc and fminsearch, but optimset can be replaced by a plain struct when the Optimization Toolbox is not available.

options = optimset('GradObj', 'on' ,'Display', 'iter', 'HessUpdate', 'bfgs', 'GoalsExactAchieve',1);
[x,fval] = fminlbfgs(@myfun, [1 1 1 1 1], options);
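The objective @myfun is not shipped with the snippet above; because 'GradObj' is 'on', it must return both the scalar function value and the gradient. A hypothetical sketch (this myfun is illustrative, not the author's):

```matlab
% Hypothetical objective for the call above: a simple sum of squares.
% With 'GradObj' set to 'on', the function must return the scalar value
% and, when requested, the gradient (same size as x).
function [f, g] = myfun(x)
    f = sum((x - 1).^2);      % scalar objective value
    if nargout > 1
        g = 2*(x - 1);        % analytic gradient
    end
end
```

With this objective the call above should converge to x = [1 1 1 1 1], the minimizer of the sum of squares.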


MATLAB release MATLAB 7.7 (R2008b)
Comments and Ratings (22)
27 Jun 2016 Amro

Amro (view profile)

26 Feb 2014 Rody Oldenhuis

Rody Oldenhuis (view profile)

Hi Dirk-Jan,

After lots of life happened, I now *finally* picked up work again to get my OPTIMIZE() to officially support your optimizer. So, getting back to that collaboration on OPTIMIZE() we were discussing 4 years ago... :)

Here's a small list of issues I encountered:

- when optim.Display is empty ([]), it is NOT the same as being 'off'. It results in at least the display of the final results. As people will probably be creating "optim" with optimset(), we should expect empties, and interpret them as 'off'.

- The output structure contains two fields with a slightly different name from those of FMINSEARCH:
-- you: output.funccount FMINSEARCH: output.funcCount (capital)
-- you: output.iteration FMINSEARCH: output.iterations ('s' at the end)

- Your optim.GradConstr has a different meaning than in the optimization toolbox: in FMINCON for example, this setting means that you provide gradients of the non-linear constraint functions. As I need that setting, I propose a name change for your setting: optim.AvoidGrad (or something similar)

That's it. As I said, it's really basic stuff. If you could update FMINLBFGS() with these little things, I'd be very grateful.

Best regards,

Rody Oldenhuis

Comment only
06 Jan 2014 Tang

Tang (view profile)

Hi, as a student I am trying to use fminlbfgs to estimate some parameters, but the exit message is "line search cannot find an acceptable point along the current search". Does that mean the results are wrong? What should I do to improve the results? Thanks a lot; I hope for your reply.

05 Jul 2013 Max Neudecker

Thanks for providing this useful piece of numerical optimization code. I implemented it for fitting tomographic data to a model. I found that the step sizes for computing the gradient (DiffMinChange) seemed quite crucial for the convergence. Since my set of variables for optimization has different domains (e.g. [0...0.5] or [0....2*PI] ...), I had to modify the code so to account for different DiffMin/MaxChanges.
BTW, is there a fast MEX-version of the (L)BFGS optimizer?

27 Mar 2013 TT

TT (view profile)

Hi, as a beginner I have some problems with the two-variable case. Say the objective function is f = sin(x)+cos(y), so the gradient is g = [cos(x), -sin(y)]. How should I modify example.m? I have no idea how to assign the initial values: when I set x and y both equal to 1, the result suggests that the dimensions do not match. Can anybody help me modify myfun.m and example.m to give the right answer? Thank you so much.

Comment only
02 Nov 2012 Stephen Becker

Stephen Becker (view profile)

@marco: the standard way to do this is with anonymous functions. e.g. if your function "myfun" accepts arguments (xo, addarg1, addarg2), then define a new function

newfun = @(x) myfun( x, addarg1, addarg2 );
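The wrapped handle is then passed in place of the original function; a sketch (x0 and options as in a normal call):

```matlab
% Sketch: fix the extra arguments in the handle, then optimize as usual.
newfun = @(x) myfun(x, addarg1, addarg2);
[x, fval] = fminlbfgs(newfun, x0, options);
```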


Comment only
25 Jul 2012 Marco G

Hi. I would like to add in the optimization additional external arguments that are needed by the 'myfun', e.g.:
[RegistrationParams] = fminlbfgs( @myfun, x0 , options, addarg1, addarg2, ...)

Is it possible with this function? When I try, I get 'Too many input arguments' error.


Comment only
02 Mar 2011 Dirk-Jan Kroon

Dirk-Jan Kroon (view profile)

*Chris Barber,
Constraints on the variables are not a built-in feature.
But you can constrain your problem by adding a penalty term to your cost function, for example:
C = C + abs(log(max(x-a,eps)))*1e-5
This way x will never go lower than the constant 'a'.
You can also have a look at the constraining code of 'Rody Oldenhuis' (comment above)
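A minimal sketch of this penalty approach (the base cost, the bound a, and the weight are illustrative assumptions, not from the original code):

```matlab
% Keep x above the lower bound a by adding the log-barrier-style penalty
% from the comment above to a hypothetical example cost function.
a = 0.5;                                  % assumed lower bound
basecost = @(x) sum((x - 2).^2);          % hypothetical objective
C = @(x) basecost(x) + sum(abs(log(max(x - a, eps))))*1e-5;

options = struct('Display', 'off');       % plain struct instead of optimset
x = fminlbfgs(C, [1 1 1], options);       % x should stay above a
```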

Comment only
02 Mar 2011 Chris Barber

Fantastic code! Just wondering, is there a way to put constraints on the variables?

24 Feb 2011 Siavash

25 Jan 2011 Siavash

02 Nov 2010 Dirk-Jan Kroon

Dirk-Jan Kroon (view profile)

Thanks Atiyah, I fixed the example in the newest version

Comment only
26 Aug 2010 Atiyah

Atiyah (view profile)

I found the mistake. The small example that comes with the source code is the cause. The objective function is supposed to return a scalar, but in the example @objfunc returns a vector. I changed it and it works fine now. Overall, thanks for the contribution.

Comment only
25 Aug 2010 Atiyah

Atiyah (view profile)

Such software is very useful for time-consuming optimization problems. Unfortunately I am not able to use it. Running the script example.m produces a lot of errors. I tried to fix the errors one after another, but there are a lot of them: many comparison operations between scalars and vectors (e.g. s < v), and also an empty index vector (e.g. x(ind)). I stopped fixing it because I don't know what the right thing to do is.

Comment only
03 May 2010 Tomonori

03 May 2010 Tomonori

I still have the same problem of the function returning an empty variable...

14 Feb 2010 Tomonori

I encountered an occasion which potentially indicates a bug. When the initial parameter set already gives the minimum value of the objective function, the return values (at least x and fval) of fminlbfgs are empty, which could halt applications using this function. I'm not familiar with its internal workings and could not figure out which line needs to be modified, but from the observed input-output relationship, I guess zero iterations result in empty output.

If you have references for these optimization methods, would you mind letting us know?

03 Feb 2010 LI guang

11 Aug 2009 Dirk-Jan Kroon

Dirk-Jan Kroon (view profile)

Rody Oldenhuis, thank you for your comment and compliments.

I have no experience with (in)equality-constrained optimization, nor am I an expert on optimization; I just needed a good optimizer for my image registration.

Your OPTIMIZE() wrapper must be very good, because John D'Errico says well done ;-).

Please explain which options or other things have to be changed to let FMINLBFGS work with your function; then I will try to change those things in FMINLBFGS and link to your code in the description, because people like Sergio Rossi here need constraints.

In my image registration problems I always adjust my delta-step for finite differences to be on the order of 1% of the current line-search step length. Those derivatives are more valid for larger step lengths than infinitesimally small delta-steps, especially in the case of noise or small oscillating functions. Maybe this is also the reason in your case.

Comment only
11 Aug 2009 Rody Oldenhuis

Rody Oldenhuis (view profile)

I found a minor bug:
- when the maximum number of function evaluations has been reached, the output message is 'Algorithm was terminated by the output function.'

Just a general question: I find that when passing explicit derivatives instead of using the central-difference ones, FMINLBFGS()'s basin of attraction seems to shrink significantly. The end results are more accurate, and my derivatives are good (I checked them on paper, and double-checked them with central differences), which hints that they DO work, but convergence is much harder to attain. Is there something I'm missing?

Also, I'm currently adjusting my own OPTIMIZE() wrapper function (see my profile) to do (non)linear (in)equality-constrained optimization with FMINLBFGS(). To that end, I want to ask you to make a few changes. For example, my OPTIMIZE() must of course get the gradients of the constraint functions, which is normally set through the option field GradConstr. However, you use that option in a different way: to limit CPU usage during calculations. Also, you use output.funccount instead of FMINSEARCH's output.funcCount (capital C), and many more minor things.

Anyway, if you're interested in collaboration on OPTIMIZE, let me know. Otherwise, I'll just change these little things myself and post it with the next update of OPTIMIZE() (giving you the credit of course).

But, aside from those minor things, a great function! Well commented and documented, intuitive variable names and in all...very readable. Good work!

26 May 2009 Sergio Rossi

This code is excellent: optimizations with around 100 parameters which were impossible using fminsearch on my machine, and very slow even on a dedicated server, are now a matter of minutes on the server. Thank you, Dirk, for the brilliant work. If anybody wrote a wrapper function for this which implemented linear/nonlinear constraints, it would become a state-of-the-art optimizer.

25 Mar 2009 Jackson Shen

Brilliant! Excellent Work!

11 Mar 2009 1.1

Fixed an "oscillation" bug in simple-linesearch which could occur with a non-smooth error function.

12 Mar 2009 1.3

Help extended and all parameters are now available as options

12 Mar 2009 1.4

Small NaN error solved...

13 Mar 2009 1.5

Added option to visualize the line search. Fewer iterations used in simple line search. Solved imaginary-number warning.

03 May 2010 1.7

output.fval is now also set in case the initial position is already the minimum

02 Nov 2010 1.8

Fixed Example and out of memory issues.
