Levenberg-Marquardt in LSQNONLIN vs. FSOLVE
In the documentation for LSQNONLIN, it says that the Levenberg-Marquardt algorithm option can't be used in conjunction with bound constraints. That being the case, is there ever any reason to run Levenberg-Marquardt under LSQNONLIN as opposed to FSOLVE? When bound constraints are omitted from LSQNONLIN, it is solving the same problem as FSOLVE. So, one would think it should just be calling the same engine.
Answers (1)
Shashank Prasanna
on 12 Jan 2013
LSQNONLIN and FSOLVE solve different types of problems. LSQNONLIN expects f(x) to be vector-valued, but it implicitly sums the squares of the components and minimizes that sum of squares.
FSOLVE, on the other hand, is for systems of equations F(x) = 0: it tries to find a root (zero) of a system of nonlinear equations. The goal of equation solving is to find a vector x that makes every Fi(x) = 0 individually.
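The distinction matters most for overdetermined systems. A minimal sketch (my own toy example, not MathWorks code, written in Python for illustration): with two residuals and one unknown there may be no root at all, yet the least-squares problem still has a well-defined minimizer.

```python
# Toy residual with more equations than unknowns:
#   F(x) = [x - 1, x + 1]
# No x satisfies F(x) = 0, so a root-finder in the FSOLVE sense
# has no true solution to find. A least-squares solver in the
# LSQNONLIN sense instead minimizes
#   f(x) = (x - 1)^2 + (x + 1)^2,
# whose minimum is at x = 0.

def residuals(x):
    return [x - 1.0, x + 1.0]

def sum_of_squares(x):
    return sum(r * r for r in residuals(x))

# Setting f'(x) = 2(x - 1) + 2(x + 1) = 4x = 0 gives x = 0.
x_star = 0.0
print(sum_of_squares(x_star))  # 2.0 -- a minimum, but not a root
```

At the minimizer the residual norm is still positive, which is exactly the case a pure equation solver cannot report as "solved".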
Shashank Prasanna
on 14 Jan 2013
The details are probably in reference [34] cited under Trust-Region Dogleg Implementation; you may have to open the file to find the implementation details. However, 'trust-region-dogleg' is offered only for FSOLVE for exactly that reason. Here is the extract from the earlier link:
'trust-region-dogleg' is the only algorithm that is specially designed to solve nonlinear equations. The others attempt to minimize the sum of squares of the function.
So the answer to your implementation question comes down to whether the dogleg algorithm minimizes the norm ||F(x)||^2 or works on F(x) = 0 directly.
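One reason the "sum of squares" framing still works for square systems, sketched below with my own numbers (an illustration, not the actual dogleg implementation): when the Jacobian J is square and nonsingular, the Newton step for root-finding, J d = -F, coincides with the Gauss-Newton step for minimizing ||F(x)||^2, (J'J) d = -J'F.

```python
import numpy as np

# Arbitrary example values (assumed for illustration only):
# residual vector F and a square, nonsingular Jacobian J.
F = np.array([0.5, -1.2])
J = np.array([[2.0, 1.0],
              [0.5, 3.0]])

# Newton step for solving F(x) = 0:       J d = -F
d_newton = np.linalg.solve(J, -F)

# Gauss-Newton step for min ||F(x)||^2:   (J'J) d = -J'F
d_gauss_newton = np.linalg.solve(J.T @ J, -J.T @ F)

print(np.allclose(d_newton, d_gauss_newton))  # True
```

For square nonsingular systems the two formulations therefore take the same step; they only diverge when J is rank-deficient or the system has no root, which is where the choice of algorithm matters.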