Why do lsqnonlin and fmincon give different optimization results when solving a set of nonlinear equations?

I tried to solve a set of nonlinear equations using lsqnonlin and fmincon. All the input parameters (including initial values) are the same, and I tried all the built-in algorithms each solver offers, but they gave different optimization results. The difference is not minor; it is relatively large. I am wondering why this happens.
The code below includes two functions, modeltest1 and modeltest2. modeltest1 uses fmincon, while modeltest2 uses lsqnonlin. All the input parameters (including initial values) are the same. modeltest1 gives this result:
Pvalsliding =
0.013820918065572
0.574406473683740
0.003205160530740
fval =
0.158094349821487
exitflag =
2
Since the first-order optimality measure is 1244, which is not near 0, I doubt this is the right solution.
modeltest2 gives this result:
Pvalsliding =
0.031362654332190
0.600494583113902
0.005429484251305
resnorm =
1.668533048658531
residual =
-0.000006852621127
-1.291690209823506
-0.008333694119595
exitflag =
2
Since the first-order optimality measure is 46.5, which is not near 0, I doubt this is the right solution either.
Is it possible that a solution is the right answer even though the first-order optimality is not near 0?
By the way, these are nonlinear implicit equations, so it's quite difficult to get a stable and reliable solution. Is it reliable to solve this kind of problem using MATLAB built-in functions like lsqnonlin and fmincon? Should I try an iterative method such as Newton's iteration (the Newton-Raphson method) when solving a set of nonlinear implicit equations? Are there more efficient methods for solving a set of nonlinear implicit equations?
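For reference, the Newton-Raphson iteration mentioned above can be sketched in a few lines of MATLAB. The system F and Jacobian J below are a hypothetical toy example (a circle intersected with a parabola), not the actual equations from this question; you would substitute your own residual and Jacobian:

```matlab
% Newton-Raphson sketch for a square system F(x) = 0.
% F and J here are placeholder examples, not the poster's equations.
F = @(x) [x(1)^2 + x(2)^2 - 1;   % circle of radius 1
          x(1) - x(2)^2];        % parabola x = y^2
J = @(x) [2*x(1),  2*x(2);       % Jacobian of F, rows match F's rows
          1,      -2*x(2)];
x = [0.5; 0.5];                  % initial guess
for k = 1:50
    r = F(x);
    if norm(r) < 1e-12           % converged: residual essentially zero
        break
    end
    x = x - J(x) \ r;            % Newton step: solve J*dx = -r
end
```

Note that plain Newton-Raphson only applies to square systems (as many equations as unknowns) and needs a good initial guess; it does not by itself cure an ill-conditioned or under-determined problem.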
If anyone can help, I appreciate it very much.

Accepted Answer

Matt J on 16 Apr 2013
Try relaxing TolFun and TolX to something a bit less stringent. Is there a reason you're not using their default values of 1e-6?
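For example, the tolerances can be set back to their defaults via an options structure; the sketch below uses optimset, and the objective handles, initial point, and bounds (myobj, myres, x0, lb, ub) are placeholders for your own problem:

```matlab
% Sketch: passing default tolerances (1e-6) to both solvers.
% myobj, myres, x0, lb, ub are hypothetical placeholders.
opts = optimset('TolFun', 1e-6, 'TolX', 1e-6, 'Display', 'iter');

% fmincon minimizes a scalar objective subject to bounds:
[x1, fval, exitflag1] = fmincon(@myobj, x0, [], [], [], [], lb, ub, [], opts);

% lsqnonlin minimizes the sum of squares of a residual vector:
[x2, resnorm, residual, exitflag2] = lsqnonlin(@myres, x0, lb, ub, opts);
```

In newer MATLAB releases, optimoptions('fmincon', ...) is the preferred way to build solver options, but optimset works for both solvers here.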
  7 Comments
Matt J on 16 Apr 2013
I think you already proved that your first solution was wrong. When you relaxed the tolerances, you got a better exitflag, a better value for the first order optimality measure, and more physically sensible values.
The right solution definitely should have a first-order optimality near zero, but deciding when it is near enough to zero can be tricky. For one thing, it depends on how you scale your function. If you divide your objective function by 1000, you make the optimality measure 1000 times smaller everywhere without changing the location of the minima. It is therefore not immediately clear from your numbers whether 1244 and 46.5 are "large".
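The scaling point is easy to see on a toy one-dimensional objective (this example is illustrative only and unrelated to the poster's model): dividing f by 1000 divides its gradient, and hence the first-order optimality measure, by 1000, while the minimizer stays at the same point.

```matlab
% Toy quadratic: the minimizer of both f and g is x = 2,
% but their gradients differ by a factor of 1000.
f = @(x) (x - 2).^2;      % analytic gradient: 2*(x - 2)
g = @(x) f(x) / 1000;     % same minimizer, gradient 1000x smaller

% Central-difference gradients at x = 3:
h = 1e-6;
grad_f = (f(3 + h) - f(3 - h)) / (2*h);   % approximately 2
grad_g = (g(3 + h) - g(3 - h)) / (2*h);   % approximately 0.002
```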
Those equations are nonlinear and implicit, so it's quite difficult to get a stable and reliable solution. Should I try some iterative method like Newton's iteration (the Newton-Raphson method)?
I'm not sure what your reasoning is here, or why you think nonlinear/implicit equations are always unstable. If the solution to the problem is poorly defined, using a different solver like Newton-Raphson won't make it better defined. What you would want to do is rewrite the problem so that the solution is better defined.


More Answers (0)
