I am currently creating a computer-based lesson for students. For this I wanted to show the optimization of a function with different algorithms. As a function I used the six-hump camelback function. Starting from the point (x,y) = (-1,-0.5), the algorithm should find the minimum (x,y) = (0.0898,-0.7126). Fminsearch and fminunc are working just fine, but I cannot get the Levenberg-Marquardt algorithm (lsqnonlin) to work. It just stops halfway.
Here is a short example of how I am using the Levenberg-Marquardt algorithm.
x0 = [-1;-0.5];
f = @(x)(4-2.1*x(1)^2+1/3*x(1)^4)*x(1)^2+x(1)*x(2)+(-4+4*x(2)^2)*x(2)^2;
OPTIONS = optimset('Algorithm','levenberg-marquardt','Jacobian','off','MaxFunEvals',20000,'MaxIter',20000);
[x,resnorm,residual,exitflag,output] = lsqnonlin(f,x0,[],[],OPTIONS);
str = sprintf(' Number of iterations: %g. Number of function evaluations: %g. Solution: (%g, %g).',...
    output.iterations, output.funcCount, x(1), x(2));
I hope you can help me.
The problem is a misunderstanding of what lsqnonlin does. It attempts to minimize the sum of squares of the components of the objective function. In fact, it did minimize that sum of squares, and found a root of the function.
When I just ran your code I got the result:
Local minimum found.
Optimization completed because the size of the gradient is less than the default value of the function tolerance.
Number of iterations: 5. Number of function evaluations: 18. Solution: (-0.27025, -0.918189).
To see that it found a root:
Root found, NOT a minimum.
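A minimal way to verify this (reusing the anonymous function f and the solution printed above) is to evaluate f at the returned point and compare it with the value at the true local minimum. The residual at the returned point should be essentially zero, which is exactly what lsqnonlin's squared objective rewards:

```matlab
% f is the six-hump camelback function from the question.
f = @(x)(4-2.1*x(1)^2+1/3*x(1)^4)*x(1)^2+x(1)*x(2)+(-4+4*x(2)^2)*x(2)^2;
f([-0.27025; -0.918189])   % near 0: lsqnonlin stopped at a root of f
f([0.0898;  -0.7126])      % about -1.0316: the local minimum the question asks for
```

Since f can go negative, f(x)^2 is smaller at a root (value 0) than at the minimizer of f itself, so lsqnonlin has no reason to continue past the root.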
But is there a way to modify my code so that it finds the local minimum and not just a root? It was working just fine for the banana function (with a defined Jacobian) and for the peaks function (without a defined Jacobian).
Since you know the minimum value fmin already (e.g., because you already found it with fminunc), you can use it to shift the objective uniformly, and make it strictly positive.
lsqnonlin(@(x) f(x)-fmin+1, x0, [], [], OPTIONS)
Because the objective is scalar-valued and the shifted residual f(x)-fmin+1 is at least 1 everywhere, squaring it is monotone, so the squared residual is minimized exactly where f is minimized. This converts the root-finding problem into a minimization problem.
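Putting the shift trick together with the code from the question gives a sketch like the following (fmin = -1.0316 is the known minimum value of the six-hump camelback function, e.g. taken from a prior fminunc run; the exact digits are an assumption here):

```matlab
% Shifted objective: f(x)-fmin+1 >= 1 everywhere, so lsqnonlin's squared
% residual is minimized exactly at the minimizer of f, not at a root.
f    = @(x)(4-2.1*x(1)^2+1/3*x(1)^4)*x(1)^2+x(1)*x(2)+(-4+4*x(2)^2)*x(2)^2;
fmin = -1.0316;            % known minimum value (from an earlier fminunc run)
x0   = [-1;-0.5];
OPTIONS = optimset('Algorithm','levenberg-marquardt','Jacobian','off');
[x,resnorm] = lsqnonlin(@(x) f(x)-fmin+1, x0, [], [], OPTIONS);
% x should now be near (0.0898, -0.7126); resnorm should be near 1,
% the square of the smallest achievable shifted residual.
```

Note that the Levenberg-Marquardt algorithm in lsqnonlin does not accept bound constraints, which is why the lower- and upper-bound arguments are passed as [].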
Incidentally, since you are not applying any constraints, FSOLVE seems more appropriate than LSQNONLIN here. Both have a Levenberg-Marquardt solver. You would still have to apply the same tactic though to force it to do min-finding as opposed to root-finding.
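For completeness, the same tactic with FSOLVE would look roughly like this (a hedged sketch: since the shifted function has no root, fsolve's Levenberg-Marquardt solver will report that no root was found, but it should still stop at the least-squares point, i.e. the minimizer of f):

```matlab
% fsolve with Levenberg-Marquardt also minimizes the sum of squares of
% the residual when no root exists, so the same shift applies.
f    = @(x)(4-2.1*x(1)^2+1/3*x(1)^4)*x(1)^2+x(1)*x(2)+(-4+4*x(2)^2)*x(2)^2;
fmin = -1.0316;            % known minimum value, as above
x0   = [-1;-0.5];
opts = optimset('Algorithm','levenberg-marquardt');
[x,fval,exitflag] = fsolve(@(x) f(x)-fmin+1, x0, opts);
% Expect a negative exitflag ("equation not solved"), with x near the
% minimizer (0.0898, -0.7126) and fval near 1.
```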