
Problem using lsqnonlin on Six-hump camel back

Asked by Jens on 12 Dec 2012


I am currently creating a computer-based lesson for students. For it I want to show the optimization of a function with different algorithms, using the six-hump camel back function as the example. Starting from the point (x,y) = (-1,-0.5), the algorithm should find the minimum (x,y) = (0.0898,-0.7126). fminsearch and fminunc work just fine, but I cannot get the Levenberg-Marquardt algorithm (lsqnonlin) to work. It just stops halfway.

Here is a short example of how I am calling the Levenberg-Marquardt algorithm.

x0 = [-1;-0.5];
% Six-hump camel back function (scalar-valued)
f = @(x)(4-2.1*x(1)^2+1/3*x(1)^4)*x(1)^2+x(1)*x(2)+(-4+4*x(2)^2)*x(2)^2;
%% Levenberg-Marquardt:
OPTIONS = optimset('Algorithm','levenberg-marquardt','Jacobian','off','MaxFunEvals',20000,'MaxIter',20000);
[x,resnorm,residual,exitflag,output] = lsqnonlin(f,x0,[],[],OPTIONS);
str = sprintf(' Number of iterations: %g.  Number of function evaluations: %g.  Solution: (%g, %g).',...
  output.iterations, output.funcCount, x(1), x(2));
disp(str)

I hope you can help me.






3 Answers

Answer by Alan Weiss on 12 Dec 2012
Edited by Alan Weiss on 12 Dec 2012

The problem is a misunderstanding of what lsqnonlin does. It minimizes the sum of squares of the components of the objective function. In your case it did exactly that: it drove the sum of squares to zero by finding a root of the function.
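To make this concrete, here is a minimal sketch (using the same objective as the question) of why a scalar objective turns lsqnonlin into a root finder: for scalar f, the sum of squares is simply f(x)^2, and any root of f is a global minimizer of that.

```matlab
% For a scalar-valued objective, lsqnonlin minimizes f(x)^2,
% so any root of f is a global minimizer of the least-squares problem.
f = @(x)(4-2.1*x(1)^2+1/3*x(1)^4)*x(1)^2+x(1)*x(2)+(-4+4*x(2)^2)*x(2)^2;
x = lsqnonlin(f, [-1;-0.5]);
f(x)   % essentially zero: the solver stopped at a root of f, not at its minimum
```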

When I ran your code, I got this result:

Local minimum found.
Optimization completed because the size of the gradient is less than
the default value of the function tolerance.
   Number of iterations: 5.  Number of function evaluations: 18.  Solution: (-0.27025, -0.918189).

To see that it found a root, check the objective value at the returned point:

if abs(f(x)) < 1e-6
    'Root found, NOT a minimum.'
end

ans =

Root found, NOT a minimum.

Alan Weiss

MATLAB mathematical toolbox documentation


Answer by Jens on 13 Dec 2012

But is there a way to modify my code so that it finds the local minimum and not just a root? It was working just fine for the banana function (with a defined Jacobian) and for the peaks function (without a defined Jacobian).



Matt J on 13 Dec 2012

You should delete what you wrote here and re-post it as a Comment to Alan's Answer.

Alan Weiss on 14 Dec 2012

lsqnonlin works well for the banana function because that function is a sum of squares, and lsqnonlin minimizes sums of squares. The six-hump camel back function is not a sum of squares, so lsqnonlin is an inappropriate solver.

Matt's workaround, where you know the value of the minimum, is clever, but beside the point. In my opinion you should teach your students to use appropriate techniques. The optimization decision table has its entries for good reasons. Since you are using the six-hump camel back function, you might want to show your students the information in the Global Optimization decision table, too, and the Solver Characteristics table.

Alan Weiss

MATLAB mathematical toolbox documentation
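To illustrate Alan's point about the banana function: the Rosenbrock (banana) function 100*(x2 - x1^2)^2 + (1 - x1)^2 is exactly a sum of two squares, so it can be posed for lsqnonlin as a two-component residual vector (a sketch, not from the original thread):

```matlab
% Rosenbrock's banana function, 100*(x2-x1^2)^2 + (1-x1)^2, posed as
% a residual vector whose sum of squares lsqnonlin minimizes.
banana = @(x)[10*(x(2)-x(1)^2); 1-x(1)];
x = lsqnonlin(banana, [-1.9; 2])   % converges to the minimizer [1; 1]
```

Here the minimizer [1; 1] is also a root of the residual vector, which is why the least-squares formulation and the original minimization agree.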

Matt J on 14 Dec 2012

"The six-hump camel back function is not a sum of squares, so lsqnonlin is an inappropriate solver."

Except, Alan, that the OP's ulterior motive sounded like it was to apply Levenberg-Marquardt to this function and currently the toolbox only makes Levenberg-Marquardt accessible through LSQNONLIN and FSOLVE. So, it is necessary to recast the problem into a form that they will process properly.

I don't think it's clear in advance (at least not to me) that Levenberg-Marquardt would perform badly for this function, if it were available in its more general form through FMINUNC. Regardless, sometimes you do want to run inappropriate algorithms for illustration purposes, which also sounds like what the OP was after.

Answer by Matt J on 13 Dec 2012
Edited by Matt J on 14 Dec 2012

Since you know the minimum value fmin already (e.g., because you already found it with fminunc), you can use it to shift the objective uniformly, and make it strictly positive.

    lsqnonlin(@(x) f(x)-fmin +1, x0,[],[],OPTIONS)

Because the shifted objective is strictly positive (its minimum value is 1), it has no root, and minimizing its square is equivalent to minimizing f itself. This converts the root-finding behavior back into minimum finding.

Incidentally, since you are not applying any constraints, FSOLVE seems more appropriate than LSQNONLIN here. Both have a Levenberg-Marquardt solver. You would still have to apply the same tactic though to force it to do min-finding as opposed to root-finding.
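A sketch of that same tactic with fsolve's Levenberg-Marquardt solver; fmin = -1.0316 is the known minimum value of the six-hump camel back function, assumed here to have been found beforehand (e.g., with fminunc):

```matlab
% Shift the objective so its minimum value is 1 (strictly positive).
% fsolve then cannot find a root, but its Levenberg-Marquardt iterations
% minimize the squared residual, which minimizes f itself.
f = @(x)(4-2.1*x(1)^2+1/3*x(1)^4)*x(1)^2+x(1)*x(2)+(-4+4*x(2)^2)*x(2)^2;
fmin = -1.0316;                     % assumed known, e.g. from fminunc
opts = optimset('Algorithm','levenberg-marquardt');
x = fsolve(@(x) f(x)-fmin+1, [-1;-0.5], opts)
```

fsolve will report that no root was found, but the returned x is the local minimizer of f.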


Matt J
