fmincon: less optimal results with user-supplied analytic gradient?

Hi there, I have a question concerning the fmincon optimization procedure in general: I am minimizing a very complicated (unfortunately not necessarily convex or continuous) functional using fmincon, currently with the interior-point algorithm.
Until a few days ago, I used finite differencing to estimate the gradients. Since I have an analytic expression for the functional, I went through the tedious work of deriving an analytic expression for the gradient as well, mainly to accelerate the optimization. The computation is indeed significantly faster, but the final function value is larger, i.e. less optimal, which I don't understand. I have already compared the finite-difference and analytic gradients, and that does not seem to be the problem (they agree to within ~1e-6).
Do you know of any parameters I should adjust to tune the algorithm towards using analytic gradients, or can you imagine where the problem might be? Thanks a lot for your help. And by the way: thanks a lot to the community in general. Although this is my first active participation, I have been using the forums extensively and successfully for years... :-)
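For reference, this is roughly how I set up the two runs. This is only a minimal sketch: myObjective, G0, lb, and ub are placeholders for my actual functional, start point, and bounds, and in newer MATLAB releases the gradient option is called SpecifyObjectiveGradient instead of GradObj.
% Sketch of the two setups being compared (myObjective, G0, lb, ub are
% placeholders for the actual functional, start point, and bounds).

% (a) gradients estimated by finite differencing (what I used before)
optsFD = optimoptions('fmincon', ...
    'Algorithm', 'interior-point', ...
    'GradObj',   'off');          % 'SpecifyObjectiveGradient' in newer releases

% (b) user-supplied analytic gradient: the objective returns [f, g]
optsAG = optimoptions('fmincon', ...
    'Algorithm', 'interior-point', ...
    'GradObj',   'on');

% The objective must return the gradient as its second output when asked:
%   function [f, g] = myObjective(G)
%       f = ...;            % scalar value of the functional
%       if nargout > 1
%           g = ...;        % N-by-1 analytic gradient
%       end
%   end

[G_opt, f_opt] = fmincon(@myObjective, G0, [], [], [], [], lb, ub, [], optsAG);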
Best Mathias
  4 Comments
Matt J
Matt J on 13 Oct 2013
Edited: Matt J on 13 Oct 2013
"the function only becomes badly conditioned (noncontinuous) at the border of the search space."
Not sure what you mean by the "search space". Even when constraints are specified, fmincon searches over all of R^n, which has no borders. Constraints are only satisfied asymptotically.
Anyway, we still need to see more information. How much do the G_opts differ in each case? Why should I consider 0.0147 a large final value for the gradient? Maybe it started at 1e7. Also, you said the finite differencing gave "almost perfect results", yet in what you show above it stopped with a much larger max gradient of 2.130e+02.
Similarly, why should I consider the difference between 9.62976 and 10.15389 significant? Maybe the typical/starting value of the function was 1e20.
It would help to actually show your objective function and constraints, and the code you used to initialize and call fmincon.
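For instance, something along the following lines would show whether your analytic gradient really agrees with finite differences. This is only a sketch: myObjective, G0, lb, and ub stand in for your actual code, and the 'DerivativeCheck' option is called 'CheckGradients' in newer releases.
% Let fmincon compare your analytic gradient against finite differences
% at the initial point (it stops with an error if they disagree too much)
opts = optimoptions('fmincon', ...
    'Algorithm',       'interior-point', ...
    'GradObj',         'on', ...
    'DerivativeCheck', 'on');
fmincon(@myObjective, G0, [], [], [], [], lb, ub, [], opts);

% Or compare by hand at a random feasible point
G   = lb + (ub - lb) .* rand(size(G0));
[~, gAna] = myObjective(G);
gFD = zeros(size(G));
h   = 1e-6;
for k = 1:numel(G)
    e      = zeros(size(G));
    e(k)   = h;
    gFD(k) = (myObjective(G + e) - myObjective(G - e)) / (2*h);
end
maxRelErr = max(abs(gAna - gFD) ./ max(abs(gFD), 1))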
Mathias
Mathias on 13 Oct 2013
Edited: Mathias on 13 Oct 2013
Okay, yeah, sorry, that was careless of me not to provide any details about the problem. The functional f depends on N variables; G varies within the bounds
G in [-Gmax, +Gmax]^N, with Gmax = 25.0
The G_opt differs by something like 13.2403, which is more than a quarter of the full range of possible values given by Gmax. In addition, I have added a few lines of the iteration output to my first reply, to give some idea of the function values in the first iterations.
Oh, and the optimization has only these bound constraints:
lb = -Gmax * ones(N, 1);   % same lower bound for each of the N variables
ub = +Gmax * ones(N, 1);   % same upper bound for each of the N variables
No further constraints.
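So the call itself is essentially just the following (again only a minimal sketch; myObjective and the start point G0 stand in for my actual code):
% Sketch of the full call; only the bound constraints above, nothing else
opts = optimoptions('fmincon', 'Algorithm', 'interior-point', 'GradObj', 'on');
[G_opt, f_opt, exitflag, output] = fmincon(@myObjective, G0, ...
    [], [], [], [], lb, ub, [], opts);
exitflag                 % why fmincon stopped
output.firstorderopt     % final first-order optimality measure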


Answers (1)

Matt J
Matt J on 13 Oct 2013
Edited: Matt J on 13 Oct 2013
My best appraisal, without seeing the details of your objective function, is that you have unbounded (or very large) second derivatives at or near your solution. That's the only thing that would explain why the algorithm starts to take very tiny Newton-like steps even in regions where the gradients are still pretty large. Note that this happens in both the finite-differencing and the analytic case; it doesn't appear that either version finished the optimization successfully.
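One way to probe this, sketched below with myObjective and the returned point G_opt as placeholders for your actual code and result, is to evaluate the objective along a short one-dimensional slice through the solution; a jump in the values, or an exploding finite-difference curvature estimate, would support this picture.
% Probe the objective along a random direction through the returned point
d = randn(size(G_opt));
d = d / norm(d);                           % random unit direction
t = linspace(-1e-3, 1e-3, 201);            % small steps around G_opt
f = arrayfun(@(s) myObjective(G_opt + s*d), t);

plot(t, f, '.-');
xlabel('step along random direction');
ylabel('objective value');

% Crude second-derivative estimate along the slice; very large values are
% consistent with unbounded curvature near the solution
h   = t(2) - t(1);
d2f = diff(f, 2) / h^2;
max(abs(d2f))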
  8 Comments
Mathias
Mathias on 15 Oct 2013
Yeah, the objective function is unfortunately 1500 lines of code; it is a highly complicated, highly nonlinear thing... I will try to have a more detailed look at it tomorrow or the day after; maybe then I will be able to ask some more concrete questions. Right now I am not really sure how to proceed. But thank you very much for the moment, you have already brought some helpful insight into the problems that I am probably having...
Matt J
Matt J on 15 Oct 2013
A mathematical description of the problem would help then, showing what you're trying to minimize and why/where the discontinuities occur.

