How to make FMINUNC work faster

Karthik on 8 Jan 2015
Edited: Matt J on 11 Jan 2015
Hello, is there any way to make FMINUNC run faster than normal? I am using the following command.
c(:,i) = fminunc(@(c) (chi1(c, K0_com)), cint, options);
chi1 is the objective function, defined in a separate function file. Thanks.

Accepted Answer

Matt J on 8 Jan 2015
Edited: Matt J on 8 Jan 2015
Lots of potential ways, but only broad recommendations are possible given the minimal info provided about your problem. You could use the 'UseParallel' option in conjunction with the Parallel Computing Toolbox, if you have it. You could also do your own calculation of the gradient and Hessian with the 'GradObj' and, if applicable, the 'Hessian' options. Not only can this speed convergence, but you can often recycle intermediate quantities from your objective function calculation, making the derivative computations cheaper than the default finite differencing.
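Here is a minimal sketch of both suggestions. The objective myfun, the starting point x0, and the toy quadratic inside myfun are placeholders for your own chi1 and data, not your actual problem:

% Option A: keep finite differencing, but parallelize it
% (requires the Parallel Computing Toolbox).
optsA = optimoptions('fminunc', 'UseParallel', true);

% Option B: supply the gradient and Hessian yourself.
optsB = optimoptions('fminunc', ...
    'Algorithm', 'trust-region', ... % derivative-based algorithm
    'GradObj',   'on', ...           % myfun returns [f, g]
    'Hessian',   'on');              % myfun also returns H
x = fminunc(@myfun, x0, optsB);

% In its own file, myfun.m:
function [f, g, H] = myfun(x)
    % Recycle the intermediate r for f, g, and H rather than
    % recomputing it in separate derivative routines.
    r = x - 1;               % shared intermediate (illustrative)
    f = 0.5*(r.'*r);         % objective: 0.5*||x-1||^2
    if nargout > 1
        g = r;               % its gradient
    end
    if nargout > 2
        H = eye(numel(x));   % its Hessian (identity here)
    end
end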
  4 Comments
John D'Errico on 9 Jan 2015
Edited: John D'Errico on 9 Jan 2015
Often, supplying an analytical gradient is about as computationally intensive as it would cost fminunc to estimate that same gradient by finite differences, and it often gives little gain in accuracy. This is not always the case, but you should look at that gradient. Do some timing tests (see the sketch after this comment). Does a gradient call cost roughly as much as n calls to the basic objective function? If so, you may be getting no gain. The point is, do some timing tests. Too many people just assume that because they supply the analytical gradient, it will run faster and more accurately. This is not always true.
Far more likely to give a gain is optimizing the function itself, or finding better starting values. If a better start point cuts the number of function evaluations, you come out ahead.
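A crude version of that timing test might look like the following, where chi1_grad is a hypothetical variant of chi1 that also returns the gradient, and n is the number of unknowns:

n = numel(cint);                    % problem dimension

tic
for k = 1:n
    f = chi1(cint, K0_com);         % n plain objective evaluations,
end                                 % roughly what finite differencing costs
tObj = toc;

tic
[f, g] = chi1_grad(cint, K0_com);   % one call that also returns the gradient
tGrad = toc;

fprintf('%d objective calls: %.3g s; one gradient call: %.3g s\n', ...
    n, tObj, tGrad);
% If tGrad is not much smaller than tObj, the analytical gradient
% is buying you little.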
Matt J on 9 Jan 2015
Edited: Matt J on 11 Jan 2015
That doesn't look right. In your grad and hess calculation, you are pretending that G is constant, independent of x, and that fun() is therefore quadratic. In fact, however, G depends on x through Hv. Moreover, the dependence is non-differentiable, since the heaviside function is non-differentiable. That's a problem, I'm afraid. FMINUNC is a derivative-based solver, so the function has to be totally differentiable.
In any case, there are things you can do to vectorize the code better, e.g.,
K0_com_t=K0_com.'; %avoid repeated transposition
Hv=heaviside(K0_com_t*x);
Also, instead of creating the really big matrix diag(Hv), you could do
G = bsxfun(@times, K0_com, Hv(:).')*K0_com_t;
Finally, you should be using speye() instead of eye().
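Assembled, those three changes might look like this; x is the current iterate, and lambda is a placeholder for whatever weight your code uses with eye():

K0_com_t = K0_com.';                        % transpose once, reuse below
Hv = heaviside(K0_com_t*x);                 % vectorized heaviside

% Scale the columns of K0_com by Hv without forming diag(Hv):
G = bsxfun(@times, K0_com, Hv(:).')*K0_com_t;

% Sparse identity instead of a dense one:
G = G + lambda*speye(size(G,1));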


