Asked by Phil
on 21 Aug 2013

I am having trouble passing optimization information, namely the gradient and the iteration number, into my objective function so that I can use it there.

Currently, I am simply referencing optimValues like this:

```matlab
[x,fval,output,grad,hessian] = fminunc(@nestedfcn, x)

function [fval, fgrad] = nestedfcn(x)
    fval  = fcnCalc(x);
    iter  = 0.01*optimValues.iteration;
    fgrad = optimValues.gradient;
    g = -x + 0.1;
    r = 10^iter;
    fval = fval + r*fgrad/g;
end
```

Unfortunately, this does not produce the desired result, even when I try passing optimValues into nestedfcn as an input argument.

If anyone has an idea of how I might use the optimValues information in my objective function, I would greatly appreciate it.

Answer by Matt J
on 21 Aug 2013

Edited by Matt J
on 21 Aug 2013

There is no reason your objective function or gradient calculation should depend on the iteration number or any other field of optimValues. The gradient and value calculations should be derived purely from x and from the mathematical form of the function you are minimizing.
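For example, with a closed-form objective, the gradient is computed directly from x and returned as a second output. This is a minimal sketch (myObjective and x0 are placeholders, not Phil's actual fcnCalc; 'GradObj' is the optimset flag that tells fminunc the objective supplies its own gradient):

```matlab
% Objective and its analytic gradient, both derived purely from x.
% Example: f(x) = sum(x.^2), so grad f(x) = 2*x.
function [fval, fgrad] = myObjective(x)
    fval  = sum(x.^2);  % objective value
    fgrad = 2*x;        % gradient from the formula, not from optimValues
end

% Tell fminunc that the objective returns its own gradient:
opts = optimset('GradObj', 'on');
[x, fval] = fminunc(@myObjective, x0, opts);
```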

The optimValues structure (which you obtained from OutputFcn?) is meant purely for managing the optimization algorithm, i.e., examining its progress and imposing any additional stopping criteria on the algorithm that you might wish.
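As a sketch of that intended use, an output function receives optimValues at every iteration and can log progress or halt the solver. The field names (iteration, fval, gradient) follow the Optimization Toolbox documentation; the stopping threshold here is made up for illustration:

```matlab
function stop = myOutputFcn(x, optimValues, state)
    % Called by fminunc at each stage of the algorithm.
    stop = false;
    if strcmp(state, 'iter')
        % Examine progress using optimValues fields
        fprintf('iter %d: f = %g, ||grad|| = %g\n', ...
            optimValues.iteration, optimValues.fval, ...
            norm(optimValues.gradient));
        % Impose an additional stopping criterion
        if norm(optimValues.gradient) < 1e-8
            stop = true;
        end
    end
end

% Register the output function with the solver:
opts = optimset('OutputFcn', @myOutputFcn);
[x, fval] = fminunc(@myObjective, x0, opts);
```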
