Adaptive Simpson method for numerical integration (MATLAB code)

I've created a simple adaptive Simpson routine that uses my own simpson function. My simpson function is correct, but the adaptive method does not seem to work for the integral of sin(2*pi*x)^2 from -1 to 1. The following code implements the adaptive Simpson method. The parameters are the function f, the integration interval [a,b], and the required precision e.
function I = simpson_adaptief(f,a,b,e)
% compare a coarse and a finer composite Simpson estimate on [a,b]
I1 = simpson(f,a,b,2);
I2 = simpson(f,a,b,4);
if (abs(I1-I2) < e)
    I = I2;
else
    % not converged: recurse on the two halves of the interval
    I = simpson_adaptief(f,a,(a+b)/2,e) + simpson_adaptief(f,(a+b)/2,b,e);
end
end
Here n is the number of subintervals the interval is split into.
function I = simpson(f,a,b,n)
% composite Simpson's rule on [a,b] with n subintervals (n must be even)
h = (b-a)/n;
p = 0;   % sum of f at odd-indexed interior nodes (weight 4)
q = 0;   % sum of f at even-indexed interior nodes (weight 2)
for k = 1:2:(n-1)
    x = a + h*k;
    p = p + f(x);
end
for k = 2:2:(n-1)
    x = a + h*k;
    q = q + f(x);
end
I = h/3*(f(a) + f(b) + 4*p + 2*q);
end
Do you have any suggestions as to what the cause of the problem could be? Other functions seem to work.
I think it has something to do with the test if abs(I1-I2) < e: when I change it to abs(I1-I2) > e, it works, as my program then performs the recursion step first.
Thanks in advance!
  1 Comment
Friðrik Hover on 10 Mar 2018
I don't know if this helps. A code from my lecture notes:
% Executes composite Simpson's rule
function x = simpson(f,a,b,m)
% integrates f on [a,b] with 2m subdivisions
x = f(a);                 % y0
h = (b-a)/(2*m);
for i = 1:m-1
    a = a + h;
    x = x + 4*f(a);       % odd-numbered x_i
    a = a + h;
    x = x + 2*f(a);       % even x_i
end
a = a + h;
x = x + 4*f(a) + f(b);    % last two x_i
x = h/3*x;
end
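For completeness, here is a small usage example (my own, not from the lecture notes): with m = 4 the rule uses 2*4 = 8 subdivisions.
f = @(x) exp(x);
I = simpson(f, 0, 1, 4)   % composite Simpson with 8 subdivisions
% exact value is exp(1) - 1 = 1.7183...; the error here should be on the order of 1e-6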


Answers (1)

John D'Errico on 26 Apr 2014
Edited: John D'Errico on 6 Dec 2021
Ugh. To start with, your adaptive Simpson code should always strive to reuse function evaluations. (Sorry, but this is important.) A virtue of such a code is that it should NOT evaluate the function at the same point multiple times. Yours has no such property. In fact, it is far worse in this respect than any non-adaptive code would be. So you need to seriously re-think how you are doing it.
Really, as you have written it, this is NOT a good implementation of recursion, and an adaptive Simpson rule is a good problem for recursion. So back to the drawing board for you. However, since you have made a serious effort, I'll offer a few hints.
Hint 1: An adaptive Simpson code can pass into the recursive call the function values it already knows on that interval, so it need NEVER re-evaluate the function at those points.
Hint 2: As a refinement of hint 1, suppose your top-level function evaluates the function at the end points and at the midpoint. Now build a recursive function that is additionally passed the values at those three points. The recursive call then need only evaluate the function at TWO new points, and the recursive part can compare the two estimates on that interval and, if the tolerance test fails, call itself twice.
Hint 3: Be careful about the tolerance. Should sub-intervals use a smaller tolerance than the whole interval? How should that tolerance be refined?
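To make those hints concrete, here is a minimal sketch of that structure (the function names, the 15*e acceptance test, and the halved tolerance are my own illustrative choices, not code from this thread): the driver evaluates f once at the endpoints and midpoint, and the recursive helper evaluates f at only TWO new points per call.
function I = adaptive_sketch(f,a,b,e)
% driver: evaluate f at the endpoints and the midpoint exactly once (Hint 2)
fa = f(a);  fm = f((a+b)/2);  fb = f(b);
I = adapt_step(f,a,b,fa,fm,fb,e);
end
function I = adapt_step(f,a,b,fa,fm,fb,e)
% recursive step: only two NEW function evaluations per call (Hint 1)
m  = (a+b)/2;
fl = f((a+m)/2);                                % new point in the left half
fr = f((m+b)/2);                                % new point in the right half
h  = b - a;
I1 = h/6 *(fa + 4*fm + fb);                     % 1-panel Simpson on [a,b]
I2 = h/12*(fa + 4*fl + 2*fm + 4*fr + fb);       % 2-panel Simpson on [a,b]
if abs(I2 - I1) < 15*e
    I = I2 + (I2 - I1)/15;                      % accept, with a Richardson correction
else
    % not converged: recurse on the halves with a reduced tolerance (Hint 3)
    I = adapt_step(f,a,m,fa,fl,fm,e/2) + adapt_step(f,m,b,fm,fr,fb,e/2);
end
end
Note that even this sketch can be fooled by the integrand in the question, because all five of its initial sample points happen to be zeros of sin(2*pi*x).^2. That is exactly why the full code further down enforces a minimum recursion depth (opts.mindepth) before it is allowed to accept an estimate.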
In terms of your actual problem, think about that function. Then look at how it will be called. ALWAYS THINK about what you are doing. Test it by looking at what happens. Use the command line. Don't just throw code at something without using your mind. (Sorry, but this is an important lesson.)
fun = @(x) sin(2*pi*x).^2;
simpson(fun,-1,1,2)
ans =
3.9994e-32
simpson(fun,-1,1,4)
ans =
3.9994e-32
Now, think about why that happens on THIS particular function. Where will fun be evaluated? Are there any interesting properties about the function at those points?
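To spell out what those calls actually see, here is a quick check at the command line (the node lists follow from h = (b-a)/n in the posted simpson code):
fun = @(x) sin(2*pi*x).^2;
x2 = linspace(-1,1,3)    % nodes used by simpson(fun,-1,1,2): -1, 0, 1
x4 = linspace(-1,1,5)    % nodes used by simpson(fun,-1,1,4): -1, -0.5, 0, 0.5, 1
fun(x4)                  % essentially zero (roundoff level) at every node
Every sample point lands on a zero of sin(2*pi*x).^2, so both I1 and I2 come back as numerically zero, the test abs(I1-I2) < e passes immediately, and the adaptive routine returns 0 instead of the true value 1.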
Finally, I would point out that the debugger would have helped you here. Learn to use it.
It is now long since I answered this question. How would I implement an adaptive Simpson quadrature?
I might write the code below. As implemented, I never evaluate the same point twice. The test for convergence compares a 2-panel Simpson's rule against the Richardson extrapolant over the same interval, which reduces to Boole's rule.
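In case it is not obvious where the extrapolant weights used in recsimpson come from, here is a quick check (an aside of mine, not part of the original answer); take h as the half-interval and list the nodes in left-to-right order x1, xq1, x2, xq2, x3:
wS1  = [1 0 4 0 1]/3;          % 1-panel Simpson, node spacing h
wS2  = [1 4 2 4 1]/6;          % 2-panel Simpson, node spacing h/2
wExt = (16*wS2 - wS1)/15;      % Richardson extrapolation of the two
wExt*90                        % [14 64 24 64 14], i.e. Boole's rule, as used below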
As a simple test, we can test it out on a simple quadratic function.
syms X
int(X.^2,0,3)
ans =
9
adsimpson should get this exactly right, converging as quickly as possible, since Simpson's rule is already exact for polynomials of degree up to 3.
F = @(x) x.^2;
[I,nf] = adsimpson(F,0,3)
I =
9
nf =
9
So after only 9 function evaluations, it finds the correct result, to within essentially machine precision.
I - 9
ans =
0
Next, consider the function 2 - abs(x), on the interval [-2,3]. The integral over this interval is 3.5.
syms X
int(2 - abs(X),-2,3)
ans =
7/2
So 3.5, as I said. How does adsimpson do?
opts.abstol = 1e-6;
opts.plot = true;
F = @(x) 2 - abs(x);
format long g
[I,nf] = adsimpson(F,-2,3,opts)
I =
3.50000005298191
nf =
49
So it took 49 function evaluations to produce that result. Because opts.plot was set to true, the evaluation points are plotted; it should be no surprise that they cluster around the point of the derivative singularity at x = 0.
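The next test is an oscillatory integrand over a long interval, sin(x) on [0, 25*pi]. As a cross-check in the same style as the other examples (this check is my addition), the exact value is 2:
syms X
int(sin(X),0,25*pi)   % 1 - cos(25*pi) = 2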
opts.abstol = 1e-7;
F = @sin;
[I,nf] = adsimpson(F,0,25*pi,opts)
I =
2.00000000000081
nf =
1977
So many more function evaluations are needed, but again they are clustered around the peaks and valleys of sin(x). Of course, the integral over this interval must be 2: the integral of the sine function from 0 to any EVEN multiple of pi is zero, since the positive and negative lobes cancel, and the remaining half-period from 24*pi to 25*pi contributes exactly 2.
Finally, how well does it do on the test case posed by @Nicholas?
syms X
int(sin(2*pi*X).^2,-1,1)
ans =
1
F = @(x) sin(2*pi*x).^2;
[I,nf] = adsimpson(F,-1,1)
I =
1
nf =
33
So after 33 function evaluations, it got the correct result.
function [intf,nfuns,exitflag] = adsimpson(F,a,b,opts)
% adsimpson: adaptive simpson quadrature, over the interval [a,b]
% usage: intf = adsimpson(F,a,b,opts)
%
% F - function handle, assumed to be vectorized, so F(X) can return a
% result for vector input x.
%
% a - lower limit of integration
% b - upper limit of integration. If b < a, this is not a problem
%
% opts - options structure to control the behavior of adsimpson
% opts.abstol - absolute tolerance on the integral estimate. If none
% supplied, the default is 1e-8.
%
% opts.plot - boolean, if true, then the points evaluated will be
% plotted. If false, then no plot is done. This option may help
% to diagnose problems in your kernel, but it can also be useful to
% learn about how an adaptive method works.
%
% opts.mindepth - scalar integer, controls the minimum recursive
% search depth for the code. opts.mindepth=3 is the default, with
% 2 as the minimum allowed. At a depth of 3, it will result in 9
% function evaluations, at the least. A larger value for mindepth
% may help to insure a proper result for spiky kernels. Many simple
% functions will be handled quite well with only a depth of 3.
%
% opts.maxdepth - scalar integer, controls the maximum search depth,
% and thus the maximum number of function evaluations.
% opts.maxdepth must be at least opts.mindepth. The default
% value of opts.maxdepth is 15. (That much depth should rarely, if
% ever, be necessary. If you are exceeding that depth, then your
% objective is likely noisy, and so not really amenable to an
% adaptive numerical integration.)
%
% opts.isvectorized - a boolean flag that tells the code whether F is vectorized.
% If opts.isvectorized is false, then a loop is used to evaluate F
% at new points as needed.
% If opts.isvectorized is true, then F is called once on the whole
% vector of new points, with no loop.
%
% NO test is done to verify that F is or is not properly vectorized.
%
% output arguments:
% intf - returned estimate of the integral over [a,b]
%
% nfuns - total number of function evaluations performed
%
% exitflag - 0 --> convergence was observed without exceeding the maximum
% search depth.
% - 1 --> the maximum search depth was hit at some point.
% convergence failure is possible.
if nargin < 4 || isempty(opts)
% default values for options
opts.abstol = 1e-8;
opts.plot = false;
opts.mindepth = 3;
opts.maxdepth = 15;
opts.isvectorized = true;
else
% check for any missing options. Better error checking would be smart
% here. Que sera, sera.
if ~isfield(opts,'abstol')
opts.abstol = 1e-8;
end
if ~isfield(opts,'plot')
opts.plot = false;
end
if ~isfield(opts,'mindepth')
opts.mindepth = 3;
else
% at least insure it is an integer, and at least 2.
opts.mindepth = ceil(max(opts.mindepth,2));
end
if ~isfield(opts,'maxdepth')
opts.maxdepth = max(opts.mindepth,15);
else
opts.maxdepth = max(opts.mindepth,ceil(opts.maxdepth));
end
if ~isfield(opts,'isvectorized')
opts.isvectorized = true;
end
end
% Is a < b?
if a == b
% no need to look any further
intf = 0;
nfuns = 0;      % no function evaluations were needed
exitflag = 0;
return
elseif a > b
% just swap the endpoints to make things logically simpler
[a,b] = deal(b,a);
intsign = -1;
else
intsign = 1;
end
% depth MUST be at least 2, then there MUST be at least 5 points at
% which to evaluate f.
depth = 1;
nfuns = 3;
% Do we plot?
if opts.plot
figure
hold on
grid on
end
% 3 points at the top level
x = [a,(a+b)/2,b];
Fx = Fxeval(F,x,opts);
% do the recursive integration
[intf,nrecfuns,exitflag] = recsimpson(F,x,Fx,opts,depth);
% total function evals
nfuns = nfuns + nrecfuns;
% multiply by intsign, in case a and b were swapped initially
intf = intsign*intf;
end
function [intrec,nrecfuns,exitflag] = recsimpson(F,x,Fx,opts,depth)
% recursive estimation
% increment the search depth count
depth = depth + 1;
% sample two new points, at the quarter points of the current interval
xq = x*[0.75 0.25;0 0;0.25 0.75];
% evaluate F at the new points
Fxq = Fxeval(F,xq,opts);
% keep a count of function calls
nrecfuns = 2;
% current step
h = (x(3) - x(1))/2;
% compute two estimates of the integral on the current interval
% the 5 point, 2 panel Simpson's rule
i5 = [Fx,Fxq]*[1 2 1 4 4]'*h/6;
% the Richardson extrapolant reduces to Boole's rule on the 5 point panel.
iext = [Fx,Fxq]*[14 24 14 64 64]'*h/90;
% compare I5 to the Richardson extrapolant on the same set,
% but only if the search depth is at least the min allowed
if depth >= opts.maxdepth
% the max depth has been achieved, so just return the current best
% estimate
intrec = iext;
exitflag = 1;
return
elseif (depth >= opts.mindepth) && (abs(i5 - iext) < opts.abstol)
% we see convergence, and we are at a sufficient depth
intrec = iext;
exitflag = 0;
return
else
% no convergence yet, or the depth was not sufficiently deep
% adjust the tolerance
opts.abstol = opts.abstol/sqrt(2);
% split the intervals into two calls
[intrec1,nrecfuns1,exitflag1] = recsimpson(F,[x(1),xq(1),x(2)],[Fx(1),Fxq(1),Fx(2)],opts,depth);
[intrec2,nrecfuns2,exitflag2] = recsimpson(F,[x(2),xq(2),x(3)],[Fx(2),Fxq(2),Fx(3)],opts,depth);
intrec = intrec1 + intrec2;
nrecfuns = nrecfuns + nrecfuns1 + nrecfuns2;
exitflag = exitflag1 | exitflag2;
end
end
function Fx = Fxeval(F,x,opts)
% evaluation
if opts.isvectorized
Fx = reshape(F(x),[1,numel(x)]);
else
Fx = zeros(1,numel(x));
for i = 1:numel(x)
Fx(i) = F(x(i));
end
end
% Do we plot?
if opts.plot
plot(x,Fx,'r.')
end
end
At least, that is how I would write such a code. Given more time, I would do a better job on some things. I might do more exhaustive, more careful error checks. And there are some neat tricks I might try to produce an error estimate, using multiple-level Richardson extrapolants.
An important thing to see is I was very careful to not evaluate the kernel at the same point twice, thus carefully reusing any previous values. This is important when your kernel function is expensive to evaluate.
Of course, you could make things simpler. For example, drop out some of the error checks. And you could simply compare a 1 panel, 3 point Simpson's rule, to a 2 panel 5 point Simpson's rule, without employing any Richardson extrapolation.
Finally, can you cause this code to fail? Well, yes. An artfully constructed integration kernel would do so. But that is always the case for any adaptive rule.
