% [z, info] = BulirschStoer(dynFun,t,z0,tol)
% Solves an initial value problem using the Bulirsch-Stoer method. This
% method is ideal for high-accuracy solutions to smooth initial value
% problems.
% Computes z(t) such that dz/dt = dynFun(t,z), starting from the initial
% state z0. The solution at the grid-points will be accurate to within tol.
% If the provided grid is insufficient, this function will automatically
% introduce intermediate grid points to achieve the required accuracy.
% INPUTS:
% dynFun = function handle for the system dynamics
% dz = dynFun(t,z)
% t = scalar time
% z = [nz,1] = state as column vector
% dz = [nz,1] = derivative of state as column vector
% t = [1,nt] = time grid-point vector
% z0 = [nz,1] = initial state vector
% tol = [nz,1] = error tolerance along each dimension. If tol is a
% scalar, then all dimensions will satisfy that error tolerance.
% OUTPUTS: (nt = n+1, where n is the number of time steps)
% z = [nz,nt] = solution to the initial value problem
% info = struct with solver diagnostics:
%   .error = [nz,nt] = error estimate at each grid point
%   .nFunEval = [1,nt] = number of function evaluations for each point
% Reference:
%   Matthew Kelly (2020). Bulirsch-Stoer
%   (https://www.mathworks.com/matlabcentral/fileexchange/55528-bulirsch-stoer),
%   MATLAB Central File Exchange. Retrieved .
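The interface above can be exercised with a short script. This is a minimal usage sketch, assuming `BulirschStoer.m` is on the MATLAB path and follows the signature documented in the header; the damped-oscillator dynamics are just an illustrative example.

```matlab
% Integrate a damped oscillator: dz/dt = [z(2); -z(1) - 0.1*z(2)]
dynFun = @(t,z) [z(2); -z(1) - 0.1*z(2)];   % smooth dynamics, column vector in/out
t   = linspace(0, 10, 26);                  % time grid-point vector, [1,nt]
z0  = [1; 0];                               % initial state, [nz,1]
tol = 1e-10;                                % scalar tolerance applied to all states
[z, info] = BulirschStoer(dynFun, t, z0, tol);
plot(t, z(1,:));                            % first state vs. time
disp(max(abs(info.error(:))));              % worst-case error estimate
```

Because `tol` is a scalar here, every state dimension is held to the same tolerance; pass an `[nz,1]` vector to set per-dimension tolerances.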
How to speed up this computation? It depends on your goals...
- If you want high accuracy (better than 1e-9 or so), then this is likely the best method. If you want intermediate accuracy, say 1e-6, then you might get faster results with ode45. Let's assume that you want high accuracy, so we will stick with this method.
- How can you make the code faster? Use the MATLAB profiler - it is really good. Look for functions or lines that take a long time and that can be modified easily.
- Check your dynamics function: is it well vectorized? The profiler might give some suggestions here as well.
- Are your dynamics smooth? If not, then this method will take a long time to converge. If there are known discontinuities, then you need to break up the calculation into chunks that are smooth.
- There is some rough memory allocation at the start of this function, based on `nRefineMax`. I doubt it is the bottleneck in the code, but you might be able to tweak that for a small improvement.
- The lines marked "Compute the extrapolation table entries" might also be able to be cleaned up, if they are too slow.
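The advice above about discontinuities can be sketched as follows. This is an illustrative example, not part of the original submission: the switch time, the two dynamics functions, and the grids are all made up for the sketch. The key idea is to integrate each smooth segment separately and restart from the final state of the previous segment.

```matlab
% Handle a known discontinuity at t = tSwitch by splitting the
% integration into two smooth chunks (all names here are illustrative).
tSwitch = 5.0;
dynA = @(t,z) -z;             % dynamics valid for t <  tSwitch
dynB = @(t,z) -0.2*z;         % dynamics valid for t >= tSwitch
tol  = 1e-10;
tGridA = linspace(0, tSwitch, 11);
tGridB = linspace(tSwitch, 10, 11);
zA = BulirschStoer(dynA, tGridA, [1; 0.5], tol);
zB = BulirschStoer(dynB, tGridB, zA(:,end), tol);  % restart from final state
t = [tGridA, tGridB(2:end)];                       % drop duplicated grid point
z = [zA, zB(:,2:end)];
```

Within each chunk the dynamics are smooth, so the extrapolation underlying the Bulirsch-Stoer method converges quickly; letting the solver step across the discontinuity instead would force many refinements near tSwitch.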
Works well. I set the tolerance to 1e-10 and decreased the computation time a little. I have a 42+ dimension state vector and I'm wondering if there's a way I can speed this up further?