version 1.0.0.0 (3.3 KB) by Matthew Kelly

Solves an initial value problem. Good for very accurate solutions for smooth systems.

% [z, info] = BulirschStoer(dynFun,t,z0,tol)
%
% Solves an initial value problem using the Bulirsch-Stoer method. This
% method is ideal for high-accuracy solutions to smooth initial value
% problems.
%
% Computes z(t) such that dz/dt = dynFun(t,z), starting from the initial
% state z0. The solution at the grid-points will be accurate to within tol.
%
% If the provided grid is insufficient, this function will automatically
% introduce intermediate grid points to achieve the required accuracy.
%
% INPUTS:
%   dynFun = function handle for the system dynamics
%     dz = dynFun(t,z)
%       t = scalar time
%       z = [nz,1] = state as a column vector
%       dz = [nz,1] = derivative of the state as a column vector
%   t = [1,nt] = time grid-point vector
%   z0 = [nz,1] = initial state vector
%   tol = [nz,1] = error tolerance along each dimension. If tol is a
%     scalar, then all dimensions will satisfy that error tolerance.
%
% OUTPUTS:
%   z = [nz,nt] = solution to the initial value problem
%   info
%     .error = [nz,nt] = error estimate at each grid point
%     .nFunEval = [1,nt] = number of function evaluations for each point
%
% NOTES:
%   Implementation details:
%   http://web.mit.edu/ehliu/Public/Spring2006/18.304/implementation_bulirsch_stoer.pdf
%
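The automatic grid refinement described above can be sketched with generic step-doubling logic (a minimal Python sketch under stated assumptions, not the MATLAB submission: `rk4_step` stands in for the modified-midpoint step, and `integrate` plays the role of the grid-refinement driver):

```python
def rk4_step(f, t, z, h):
    """One classical RK4 step (a stand-in for the modified-midpoint step)."""
    k1 = f(t, z)
    k2 = f(t + h / 2, z + h / 2 * k1)
    k3 = f(t + h / 2, z + h / 2 * k2)
    k4 = f(t + h, z + h * k3)
    return z + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, z0, t1, tol):
    """Advance z from t0 to t1, inserting intermediate grid points
    (by bisection) until a step-doubling error estimate meets tol."""
    h = t1 - t0
    tm = t0 + h / 2
    coarse = rk4_step(f, t0, z0, h)                            # one full step
    fine = rk4_step(f, tm, rk4_step(f, t0, z0, h / 2), h / 2)  # two half steps
    if abs(fine - coarse) < tol:
        return fine
    # Estimate too large: treat the midpoint as a new grid point.
    z_mid = integrate(f, t0, z0, tm, tol / 2)
    return integrate(f, tm, z_mid, t1, tol / 2)

# dz/dt = z with z(0) = 1, so z(1) should be close to e = 2.71828...
z1 = integrate(lambda t, z: z, 0.0, 1.0, 1.0, 1e-6)
```

The actual submission uses the Bulirsch-Stoer error estimate rather than step doubling, but the refinement logic is the same: any interval that cannot meet its tolerance is split, and each half is solved to a tighter tolerance.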

Matthew Kelly (2021). Bulirsch-Stoer (https://www.mathworks.com/matlabcentral/fileexchange/55528-bulirsch-stoer), MATLAB Central File Exchange. Retrieved .

Created with R2012a

Compatible with any release


Matthew Kelly: It looks like the reference in my original implementation is now a dead link. There is a good discussion of this method in "Numerical Recipes in C"; in the second edition it is in Chapter 16, Section 4.

The code is admittedly a bit complicated. This is due in part to some optimization to make the code faster, and in part because this is simply a complicated algorithm. Here is the overall idea:

Let's suppose that you don't care about the "trajectory" of the solution to an ODE. You only care about accurately knowing the final state. You first try a single Euler step to approximate it, and find the solution is inaccurate. Next you try a single "modified midpoint method" step. This is better, but still not great. Now you take two steps. That's going to be more accurate. Here is the idea behind this method: approximate the final state (at the end of the integration step) by taking 2, then 4, then 8, then 16, ... points. Now imagine making a plot that shows "final state" as a function of "number of integration steps". You can actually make this plot to help understand how the method works. You'll observe that the data points asymptotically approach some state: the state that would be reached with an infinite number of integration steps. Now take this idea one step further: compute that asymptote each time a new data point is added to the data set. You can do some math to show that the result of that calculation is guaranteed to converge. The change in the "value at an infinite number of steps" between iterations (as you add more data to the plot) is an upper bound on the absolute error accumulated across your integration step. That is what allows this method to provide extremely accurate results.
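The plot-and-extrapolate idea above can be reproduced in a few lines (a minimal Python sketch, not the MATLAB submission; `modified_midpoint` and the Neville-style table follow the standard textbook recurrences):

```python
import math

def modified_midpoint(f, t0, z0, H, n):
    """Cross one big step of size H using n modified-midpoint substeps."""
    h = H / n
    zm = z0                        # z_{k-1}
    zk = z0 + h * f(t0, z0)        # z_1 via an Euler start
    for k in range(1, n):
        zm, zk = zk, zm + 2.0 * h * f(t0 + k * h, zk)
    return 0.5 * (zk + zm + h * f(t0 + H, zk))

# Test problem: dz/dt = -z, z(0) = 1, so z(1) = exp(-1).
f = lambda t, z: -z
exact = math.exp(-1.0)

# "Final state" as a function of "number of integration steps":
# each raw estimate is closer to the true value than the last.
steps = [2, 4, 8, 16]
est = [modified_midpoint(f, 0.0, 1.0, 1.0, n) for n in steps]

# Extrapolate in powers of h^2 toward "infinitely many steps"
# (Neville-style recurrence, as used to build the Bulirsch-Stoer table).
T = [[e] for e in est]
for i in range(1, len(est)):
    for j in range(1, i + 1):
        r = (steps[i] / steps[i - j]) ** 2
        T[i].append(T[i][j - 1] + (T[i][j - 1] - T[i - 1][j - 1]) / (r - 1.0))

extrapolated = T[-1][-1]
# The change in the extrapolated value between iterations bounds the error.
err_est = abs(T[-1][-1] - T[-2][-1])
```

The extrapolated value ends up orders of magnitude more accurate than the best raw estimate, which is exactly why the method is so effective for smooth, high-accuracy problems.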

Matthew Kelly: How to speed up this computation? It depends on your goals...

- If you want high accuracy (better than 1e-9 or so), then this is likely the best method. If you want an intermediate accuracy, say 1e-6, then you might get faster results with ode45. Let's assume that you want high accuracy, so we will stick with this method.

- How to make the code faster? Use the MATLAB profiler - it is really good. Look for functions or lines that take a long time and that can be modified easily.

- Check your function: is it well vectorized? The profiler might give some suggestions as well.

- Are your dynamics smooth? If not, then this method will take a long time to converge. If there are known discontinuities, then you need to break up the calculation into chunks that are smooth.

- There is some rough memory allocation at the start of this function, based on `nRefineMax`. I doubt it is the bottleneck in the code, but you might be able to tweak that for a small improvement.

- The lines marked "Compute the extrapolation table entries" might also be able to be cleaned up, if they are too slow.
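On the point about discontinuities, the chunking can be illustrated with a toy example (Python; `rk4` here is a stand-in for any integrator that assumes smooth dynamics, not code from the submission):

```python
def rk4(f, t0, z0, t1, n):
    """Integrate dz/dt = f(t, z) from t0 to t1 with n fixed RK4 steps
    (a stand-in for any integrator that assumes smooth dynamics)."""
    h = (t1 - t0) / n
    t, z = t0, z0
    for _ in range(n):
        k1 = f(t, z)
        k2 = f(t + h / 2, z + h / 2 * k1)
        k3 = f(t + h / 2, z + h / 2 * k2)
        k4 = f(t + h, z + h * k3)
        z += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return z

# Toy dynamics with a known jump at t = 1: dz/dt = +1 before, -1 after.
# Exact solution: z(1) = 1 and z(2) = 0.
f_jump = lambda t, z: 1.0 if t < 1.0 else -1.0

# Naive: integrate straight across [0, 2]; one step straddles the jump,
# so the smoothness assumption is violated and accuracy is lost.
naive = rk4(f_jump, 0.0, 0.0, 2.0, 7)

# Chunked: integrate each smooth piece separately, stopping exactly at
# the known discontinuity and restarting from the state reached there.
z_mid = rk4(lambda t, z: 1.0, 0.0, 0.0, 1.0, 4)       # smooth piece on [0, 1]
chunked = rk4(lambda t, z: -1.0, 1.0, z_mid, 2.0, 4)  # smooth piece on [1, 2]
```

Here the chunked run recovers z(2) = 0 essentially exactly, while the naive run is off by roughly 0.19 because of the single step that crosses the jump. The effect is even worse for Bulirsch-Stoer, where a non-smooth right-hand side ruins the extrapolation, not just one step.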

Good luck!

ryan comeau: Works well. I set the tolerance to 1e-10 and decreased the computation time by a little bit. I have a 42+ dimension vector and I'm wondering if there's a way I can speed this up further?
