
Levenberg-Marquardt in LSQNONLIN vs. FSOLVE

Asked by Matt J on 4 Jan 2013

In the documentation for LSQNONLIN, it says that the Levenberg-Marquardt algorithm option can't be used in conjunction with bound constraints. That being the case, is there ever any reason to run Levenberg-Marquardt under LSQNONLIN as opposed to FSOLVE? When bound constraints are omitted from LSQNONLIN, it is solving the same problem as FSOLVE. So, one would think it should just be calling the same engine.


1 Answer

Answer by Shashank Prasanna on 12 Jan 2013

LSQNONLIN and FSOLVE solve different types of optimization problems. LSQNONLIN expects f(x) to be vector valued, but it implicitly squares and sums the components, so the quantity it minimizes is the scalar sum of squares sum(f(x).^2).

FSOLVE, on the other hand, is for systems of equations F(x) = 0: it tries to find a root (zero) of a system of nonlinear equations. The goal of equation solving is to find a vector x that makes every component Fi(x) = 0 individually.

http://www.mathworks.com/help/optim/ug/equation-solving-algorithms.html
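To make the relationship concrete, here is a minimal sketch (the residual function and starting point are invented for illustration); with no bound constraints, both solvers accept exactly the same inputs:

    % Residual / equation function: maps R^2 to R^2
    fun = @(x) [2*x(1) - x(2) - exp(-x(1));
                -x(1) + 2*x(2) - exp(-x(2))];
    x0 = [-5; -5];                 % arbitrary starting point

    xLsq = lsqnonlin(fun, x0);     % minimizes sum(fun(x).^2)
    xFsl = fsolve(fun, x0);        % tries to make fun(x) = 0 componentwise

Because this particular system has an exact root, the two calls should return essentially the same x; the conceptual difference only matters when no exact root exists.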

4 Comments

Shashank Prasanna on 14 Jan 2013

You are right that 'levenberg-marquardt' minimizes the sum of squares of the vector-valued output of F(x), and in that case there is effectively no difference between LSQNONLIN and FSOLVE.

However, 'trust-region-dogleg', the default algorithm for FSOLVE, is specific to solving such systems: it is designed to solve nonlinear equations rather than to minimize a sum of squares. The other options, namely 'trust-region-reflective' and 'levenberg-marquardt', do minimize the sum of squares. You can find this information in the following link: http://www.mathworks.com/help/optim/ug/choosing-a-solver.html#bsbwxrw
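For example, switching FSOLVE away from its default dogleg algorithm is just an options setting (shown with the optimset syntax current in 2013; fun and x0 as in the earlier sketch):

    opts = optimset('Algorithm', 'levenberg-marquardt');
    x = fsolve(fun, x0, opts);     % FSOLVE now runs Levenberg-Marquardt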

For details on how dogleg is different, the following link shows that its objective function is defined by Eq. 6-121, which is not the plain sum of squares: http://www.mathworks.com/help/optim/ug/equation-solving-algorithms.html

Matt J on 14 Jan 2013

Eq. 6-121 is only the definition of a trust-region sub-problem at iteration k. The overall objective still looks like it's norm(F(x))^2.

In fact, your link compares its performance to Gauss-Newton, which is a nonlinear least squares algorithm.
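For reference, the standard textbook form of a trust-region subproblem for equation solving, of which Eq. 6-121 is presumably an instance, is

    minimize over d:   0.5 * norm(F(xk) + J(xk)*d)^2
    subject to:        norm(d) <= Delta_k

which is local to iteration k; the quantity being driven down across iterations is still norm(F(x))^2.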

Shashank Prasanna on 14 Jan 2013

It is probably explained in reference [34], mentioned under Trust-Region Dogleg Implementation; you may have to open up that reference to find the implementation details. However, 'trust-region-dogleg' is offered only for FSOLVE for that particular reason, and here is the extract from the earlier link:

'trust-region-dogleg' is the only algorithm that is specially designed to solve nonlinear equations. The others attempt to minimize the sum of squares of the function.

The answer to your question about the implementation, i.e., whether dogleg ultimately minimizes norm(F(x))^2 or not, would have to be found in the details of the dogleg algorithm itself.
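A quick way to probe this empirically (a sketch; the test system is invented for the example) is to give FSOLVE a system with no exact root and compare the two algorithms:

    fun = @(x) [x(1)^2 + x(2)^2 + 1;   % always >= 1, so no root exists
                x(1) - x(2)];
    x0 = [1; 1];

    optsDL = optimset('Algorithm', 'trust-region-dogleg');
    optsLM = optimset('Algorithm', 'levenberg-marquardt');

    [xDL, fDL, flagDL] = fsolve(fun, x0, optsDL);
    [xLM, fLM, flagLM] = fsolve(fun, x0, optsLM);

    % Neither run can return a root; comparing the residuals fDL and fLM
    % and the exit flags shows what each algorithm does when only a
    % least-squares solution exists.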
