Code covered by the BSD License  

5.0 | 2 ratings | 261 Downloads (last 30 days) | File Size: 372 KB | File ID: #43097

Newton-Raphson solver

by Mark Mikofski

19 Aug 2013 (Updated 27 Jan 2014)

Yet another solver that uses the backslash function to solve a set of non-linear equations.


File Information
Description

Although this is the most basic non-linear solver, it is surprisingly powerful. It is based on the Newton-Raphson method described in sections 9.6-9.7 of Numerical Recipes in C. For well-behaved functions and decent initial guesses, its convergence is quadratic. However, it may fail if there are local minima, the Jacobian is poorly conditioned, or the initial guess is relatively far from the solution. When a step fails to reduce the residual, the solver backtracks and performs a line search. It was validated against fsolve from the MATLAB Optimization Toolbox and IPOPT (https://projects.coin-or.org/Ipopt). Please see the help comments and the example.
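For illustration, here is a minimal sketch of the core idea, a Newton iteration that solves for the step with the backslash operator, applied to a made-up two-equation toy problem (this is not the file's actual code, which also adds scaling and a backtracking line search):

% minimal Newton-Raphson sketch on a toy problem (illustration only)
F = @(x) [x(1)^2 + x(2)^2 - 4; ...  % residuals, F(x) = 0 at the solution
          x(1)*x(2) - 1];
J = @(x) [2*x(1), 2*x(2); ...       % analytic Jacobian of the toy problem
          x(2),   x(1)];
x = [2; 0.5];                       % initial guess
for iter = 1:20
    f = F(x);
    if norm(f) < 1e-9, break, end
    dx = -J(x)\f;                   % Newton step from the backslash operator
    x = x + dx;
end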

Note: LSQ curve-fit type problems can also be solved using newtonraphson. These are problems where there are many data points for a single function, but the coefficients of the function are unknown. Since there are more data points than unknowns, and the residual is minimized at each data point, the Jacobian is not square. These problems usually exit with flag 2: "May have converged.", and the solution is the best fit in the "least-squares" sense.
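For example, a hypothetical exponential-decay fit could be posed this way (the call below assumes an fsolve-like interface; check the file's help for the actual signature):

% hypothetical least-squares style problem: fit y = c1*exp(c2*t) to data
t = (0:0.1:2)';
y = 2.5*exp(-1.3*t) + 0.01*randn(size(t));  % synthetic noisy data
res = @(c) c(1)*exp(c(2)*t) - y;            % one residual per data point, so the Jacobian is not square
c0 = [1; -1];                               % initial guess for the coefficients
c = newtonraphson(res, c0);                 % assuming an fsolve-like call signature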

Credit: Moody diagram is from Wikipedia

http://upload.wikimedia.org/wikipedia/commons/8/80/Moody_diagram.jpg

Required Products: MATLAB
MATLAB release: MATLAB 8.1 (R2013a)
Comments and Ratings (16)
04 Feb 2014 Mark Mikofski

@Regan Dhakshnamurthy, the latest changes to newtonraphson now allow sparse matrices, and it will also quit if your Jacobian is singular, but there is no preconditioning. You could hack into lines 98-99, where it says `% TODO: let user set weights` and `weight = ones(numel(FUN(x0)),1);`, and change the weights to something different. It's not really the same thing, but this is what scales the Jacobian, as you can see on line 112: `Jstar = J./J0; % scale Jacobian`. Good luck, let me know if it works, and I will make the weights argument callable.
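To sketch the general idea of such weighting (not newtonraphson's exact internals), the weights would row-scale the residuals and Jacobian before the step is computed:

% hypothetical row-scaling sketch with made-up values, not the file's exact code
F = [1e6; 2e-3];                    % badly scaled residual vector (made up)
J = [1e6, 3e5; 4e-4, 2e-3];         % badly scaled Jacobian (made up)
weight = 1./max(abs(J), [], 2);     % one crude choice: scale each row by its largest entry
Jw = bsxfun(@times, weight, J);     % row-scaled Jacobian (R2013a-era syntax)
dx = -Jw\(weight.*F);               % Newton step from the scaled system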

03 Jan 2014 Mark Mikofski

@ Regan: My guess is no, because if your matrix is so large that inverting it is expensive enough to force you to use pcg instead of the backslash operator, then it will also be too big for newtonraphson, which uses the backslash operator as well. If you want to try it anyway and you have sparse matrices, then you'll have to comment out L108 and L181 or change `cond()` to `condest()`. Also, you'll have to rewrite your objectives as `Ax - b = 0`, so that the residuals are zero.
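For example, a sparse Laplacian-type system A*x = b would be posed as a zero-residual problem roughly like this (illustration only, assuming an fsolve-like interface):

% hypothetical example: pose a sparse linear system as residuals that are zero at the solution
A = gallery('poisson', 10);     % sparse 100x100 discrete Laplacian test matrix
b = ones(size(A, 1), 1);
fun = @(x) A*x - b;             % residuals, zero when A*x = b
x0 = zeros(size(b));
x = newtonraphson(fun, x0);     % assuming an fsolve-like call signature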

02 Jan 2014 Regan Dhakshnamurthy

Could I substitute this solver for PCG to solve a Laplacian matrix system?

27 Nov 2013 Mark Mikofski

There is a minor bug (or feature, depending on how you look at it). If your Jacobian has any `Inf` or `NaN`, then `cond` throws a confusing exception: "Input to SVD must not contain NaN or Inf." That's because `cond` calls `svd`, the singular value decomposition.

I can't remember why I chose `1/cond(J)` instead of `rcond(J)`, which would have returned `NaN`. Regardless, this error means your Jacobian has `NaN` or `Inf`, and is therefore badly conditioned.

It's possible that the characteristic `J0` is the culprit, because this is `1/TYPX`, a positive non-zero value less than or equal to one (`TYPX = max(abs(x0), 1)`). Therefore if `x0` is **very** large, then `1/x0` will approach zero, causing `Jstar` to contain `Inf`.

EG: `1/(1/(1e999)) -> Inf`

That will cause this error - so lessons learned until a patch arrives:

(1) x0 must not be **very** big, EG: x0 < 1e99,

(2) if you get the `svd` error, your Jacobian is badly conditioned, pick better initial guesses and/or better governing equations and

(3) in the future this exception will be caught and return `rcond = NaN`.
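To illustrate the failure mode with made-up values (J0 and TYPX as described above):

% illustration only, with made-up values
x0 = 1e999;              % overflows to Inf in double precision
TYPX = max(abs(x0), 1);  % characteristic size of x, here Inf
J0 = 1./TYPX;            % underflows to 0
J = 1;                   % any finite Jacobian entry
Jstar = J./J0            % Inf, which makes cond (via svd) throw the error above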

22 Nov 2013 Mark Mikofski

@Joe The way I've implemented it in the past is a loop around both the fast and slow systems, or a nested loop as you described. Either *may* work. EG: system 1 is a chemical kinetics problem, F+O+M -> P+M; system 2 is mass transport, dY/dt + u*dY/dx = D*d2Y/dx^2. Assume transport is slow (diffusion limited) so the kinetics reach steady state (SS) instantly: solve the kinetics assuming initial concentrations and removing the d/dt terms from all equations, i.e. the SS solution. Then solve the diffusion problem to find the concentrations, and try the kinetics again: does it change the solution? If so, repeat until both solutions converge to each other. If the problem is super stiff, then they can be completely decoupled, or you might have to decrease your time-step to a very small value to make the steady-state assumption true. Note that certain systems of equations are inherently unstable; avoid 1st-derivative advection problems and backward/forward difference approximations. In general, if your problem is implicitly formulated, it should be stable. Decrease the timestep and mesh size, try a staggered mesh, and always use center difference approximations. Hope that helps! Good luck!
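A rough outline of that alternating iteration, using a made-up pair of scalar equations with one "fast" and one "slow" unknown:

% toy sketch of the hybrid/alternating approach (made-up coupled equations)
tol = 1e-9; maxiter = 100;
c = 0; y = 0; resnorm = Inf; iter = 0;       % c = fast unknown, y = slow unknown
while resnorm > tol && iter < maxiter
    c_new = 0.5*y;                           % "fast" equation solved with y frozen (pseudo steady state)
    y_new = 1 - 0.5*c_new;                   % "slow" equation solved using the updated fast solution
    resnorm = norm([c_new - c; y_new - y]);  % change between outer iterations
    c = c_new; y = y_new;
    iter = iter + 1;
end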

22 Nov 2013 Joe

Hi,
This is a very useful program!
In the comments and ratings listed below, there was your suggestion of using two solution methods to relax the stiffness.

But I could not succeed in implementing the first solution method, the hybrid approach (as you called it), by iterating separate calculations for the slowest and fastest equations. I have some questions on this approach.
1. Should the two equations be solved in their own independent while-loops with different values for tolfun? That would mean two while-loops, an inner and an outer one.
2. If that is the case, should the calculation that solves the fastest equation be placed in the inner loop or in the outer loop?

25 Oct 2013 Mark Mikofski

The Git repository has been moved from Gist to Github: https://github.com/mikofski/NewtonRaphson. You can report issues there on the issues page. The Gist will be deleted in the near future.

17 Oct 2013 Zoe

very useful

19 Sep 2013 Mark Mikofski

Fixed a huge bug: the residual function should not be scaled by F(typx), because F(typx) --> 0 and therefore Ftyp --> Inf. Unfortunately, previous solutions may have been erroneous. Sorry! The latest is in this Gist
https://gist.github.com/mikofski/6381323
and should be online here as soon as TMW reviews it. I've also added a backtracking line search from Numerical Recipes to improve convergence with poor initial guesses.

17 Sep 2013 Mark Mikofski

Ugh, there is an error in funwrapper! You can't read nargout of an anonymous function, but ducktyping works! Just replace
`if abs(nargout(fun))>=2`
with `try`, and `else` with `catch`, and voila, it works! I will upload a new version when I get a chance, but the fix is already in the Gist.
https://gist.github.com/mikofski/6381323
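A sketch of that ducktyping idea (not the actual funwrapper code): try asking the function for a Jacobian, and fall back if it only returns residuals.

% sketch of the ducktyping idea, not the actual funwrapper code
function [F, J] = funwrapper_sketch(fun, x)
try
    [F, J] = fun(x);    % works if fun also returns the Jacobian
catch
    F = fun(x);         % otherwise just get the residuals ...
    J = [];             % ... and estimate J later, e.g. by finite differences
end
end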

06 Sep 2013 Mark Mikofski

The pipe-flow example uses `IAPWS_IF97()`, which can be found on the FEX here:
http://www.mathworks.com/matlabcentral/fileexchange/35710-iapwsif97-functional-form-with-no-slip

01 Sep 2013 Mark Mikofski

If you have a very old version of MATLAB, you may not be able to use "anonymous functions", but you can replace them with "inline functions".
EG: Anonymous Function
>> f = @(x)cosd(x)
>> f(90)
ans =
0
EG: Inline Function
>> f = inline('cosd(x)','x')
>> f(90)
ans =
0

29 Aug 2013 Mark Mikofski

I've uploaded a new version without the Optimization-Toolbox-only options: FinDiffRelStep is fixed at eps^(1/3) and TypicalX is set to x0, except for zeros, which are replaced by ones. I also improved the example with a pipe flow problem that uses the Colebrook and Darcy-Weisbach equations, which are implicitly coupled (i.e. non-linear). It should be online within a week.
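For reference, the standard Colebrook equation posed as a zero residual looks roughly like this (illustrative values, not necessarily the parameters used in the example; the call assumes an fsolve-like interface):

% Colebrook equation as a residual in the Darcy friction factor f (illustration only)
Re = 1e5;                   % Reynolds number
ed = 1e-4;                  % relative roughness, epsilon/D
colebrook = @(f) 1./sqrt(f) + 2*log10(ed/3.7 + 2.51./(Re*sqrt(f)));
f0 = 0.02;                  % initial guess
f = newtonraphson(colebrook, f0);   % assuming an fsolve-like call signature
% Darcy-Weisbach then gives the head loss: hL = f*(L/D)*v^2/(2*g)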

For those who can't wait, here is the gist:

https://gist.github.com/mikofski/6381323

29 Aug 2013 Mark Mikofski

Also, evidently TypicalX is not in base MATLAB either. :(

29 Aug 2013 Mark Mikofski

I just discovered that 'FinDiffRelStep' is not available in base MATLAB; it is only part of the Optimization Toolbox. My apologies. Until I have a chance to update newtonraphson.m, please delete 'FinDiffRelStep', eps^(1/3) from oldopts and set dx = eps^(1/3) in the Jacobian function.
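A generic forward-difference Jacobian with that relative step would look roughly like this (an illustration of the idea, not the file's internal code):

% generic forward-difference Jacobian with relative step eps^(1/3), illustration only
function J = jacobian_fd(fun, x)
dx = eps^(1/3);                 % relative finite-difference step
f0 = fun(x);
J = zeros(numel(f0), numel(x));
for k = 1:numel(x)
    h = dx*max(abs(x(k)), 1);   % scale the step by the size of x(k)
    xk = x;
    xk(k) = xk(k) + h;
    J(:, k) = (fun(xk) - f0)/h; % forward difference for column k
end
end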

21 Aug 2013 Mark Mikofski

NOTE: if the Jacobian is badly conditioned, then there are very flat gradients relative to the steepest gradients, IE: there are very fast-changing values and very slow-changing values. Another way of saying this is that the problem is "stiff". MATLAB has 3 tools to check the condition of the Jacobian: cond [1], condest [2] & rcond [3]. NOTE: rcond is approximately 1/condest. An even faster estimate of the condition is just max(abs(J(:)))/min(abs(J(:))). A larger condition number means a stiffer system of equations, which may be harder to solve.
What to do? The classic solution is to use a pseudo (or quasi) steady-state (or equilibrium) assumption. Either (a) solve the fastest equations first with everything else held constant, assuming they are so fast that they jump straight to their solution, so they are solved explicitly and can be removed from the problem, or (b) assume the slowest equations are so slow that they are effectively constant, and so can be removed from the problem. If the solutions don't converge, take a hybrid approach and iterate between the two extremes: while resnorm > tol && niter < maxiter, solve(a); solve(b); end.
Another approach is to use damping, in other words limit the size of the change in x from guess to guess, IE: dx_star = min(dx_star, dx_max). A common variant is to scale dx with the norm of the residuals, so that when the norm is big dx is small, and as the problem converges it is damped less and less, IE: dx = dx * tol / resnorm. NOTE: resnorm/tol --> 1 from above, since resnorm > tol until convergence.
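A short sketch of those damping ideas, with made-up values:

% damping sketch with made-up values (illustration only)
J = [4, 1; 1, 3]; F = [10; -6];        % stand-in Jacobian and residuals
tol = 1e-9; resnorm = norm(F); dx_max = 0.5;
dx = -J\F;                             % raw Newton step
dx = dx*min(1, dx_max/norm(dx));       % cap the step length at dx_max
% alternatively: dx = dx*tol/resnorm;  % scale the damping with the residual norm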
For an excellent reference on Newton methods see Ch. 23 of Non-linear Finite Element Methods [4].
[1] http://www.mathworks.com/help/matlab/ref/cond.html
[2] http://www.mathworks.com/help/matlab/ref/condest.html
[3] http://www.mathworks.com/help/matlab/ref/rcond.html
[4] http://www.colorado.edu/engineering/cas/courses.d/NFEM.d/NFEM.Ch23.d/NFEM.Ch23.pdf

Updates
03 Sep 2013

remove TypicalX and FinDiffRelStep, change feval to evalf & new example solves pipe flow problem using implicit Colebrook equation

19 Sep 2013

v0.3: Display RCOND and lambda each step. Use ducktyping in funwrapper. Remove Ftyp and F scaling. Use backtracking line search. Output messages, exitflag and min relative step.

19 Sep 2013

(1) fix bug: dummy vars dx and convergence are needed to display the 0th iteration, (2) fix bug: if f is NaN or Inf, lambda2 and f2 need to be set before continue, and (3) reattach the Moody chart

26 Sep 2013

version 0.4 - exit if any dx is NaN or Inf; allow LSQ curve-fit type problems.

27 Jan 2014

* allow sparse matrices, replace cond() with condest()
* check if Jstar has NaN or Inf, return NaN or Inf for cond() and return exitflag: -1, matrix is singular.
* fix bug: max iteration detection and exitflag reporting typos
