
Thread Subject:
Fast and accurate numerical integration

Subject: Fast and accurate numerical integration

From: Eduardo Montoya

Date: 7 May, 2011 00:42:05

Message: 1 of 5

Hello,

I was hoping to get some opinions on the fastest (but accurate) way to evaluate a quadruple integral numerically in Matlab, over a non-rectangular region.

Based on a few tests, my sense is that the adaptive methods in the quad functions are not that fast. (I think partly this is because internally dblquad and triplequad are not vectorized?)

I'd be willing to implement my own algorithm as long as it was fast and I could bound the errors. So... should I use built-in Matlab functions, modify the built-in functions, implement some algorithm that Matlab has not implemented, or should I do all of this in C or some other language?

Thanks in advance for any comments.
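For concreteness, "a quadruple integral over a non-rectangular region" means an iterated integral whose inner limits depend on the outer variables. A minimal sketch, written here in Python/SciPy purely for illustration (the thread is about MATLAB; the region, integrand, and use of `nquad` are stand-in examples, not from the original post):

```python
from scipy import integrate

# Integrate f(x, y) = 1 over the triangle 0 <= y <= 1 - x, 0 <= x <= 1.
# nquad accepts callable limits, which is how a non-rectangular region
# is expressed as an iterated integral. The first range belongs to the
# innermost variable (y here) and may depend on the outer ones.
f = lambda y, x: 1.0
result, err = integrate.nquad(f, [lambda x: (0.0, 1.0 - x), (0.0, 1.0)])
print(result)  # ~0.5, the area of the triangle
```

The same shape of problem extends to four variables by nesting two more callable limits, at rapidly growing cost.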

Subject: Fast and accurate numerical integration

From: John D'Errico

Date: 7 May, 2011 01:59:05

Message: 2 of 5

"Eduardo Montoya" wrote in message <iq24gt$5vj$1@newscl01ah.mathworks.com>...
> Hello,
>
> I was hoping to get some opinions on the fastest (but accurate) way to evaluate a quadruple integral numerically in Matlab, over a non-rectangular region.
>
> Based on a few tests, my sense is that the adaptive methods in the quad functions are not that fast. (I think partly this is because internally dblquad and triplequad are not vectorized?)
>
> I'd be willing to implement my own algorithm as long as it was fast and I could bound the errors. So... should I use built-in Matlab functions, modify the built-in functions, implement some algorithm that Matlab has not implemented, or should I do all of this in C or some other language?
>
> Thanks in advance for any comments.

Yep. That is it. Nobody has ever bothered to write
high quality, accurate, efficient, vectorized code for a
quadruple integral over a fully general non-rectangular
domain, even though it is quite easy to write. I'm sure
they just never had the time to do it.

Set(matlabroot,'Sarcasm','off')

The fact is, it may be less trivial to write than you imply.
(Having played with several versions of such a code
myself.)

John

Subject: Fast and accurate numerical integration

From: Eduardo Montoya

Date: 7 May, 2011 02:41:05

Message: 3 of 5

"John D'Errico" <woodchips@rochester.rr.com> wrote in message <iq2919$f1m$1@newscl01ah.mathworks.com>...
>
> Yep. That is it. Nobody has ever bothered to write
> high quality, accurate, efficient, vectorized code for a
> quadruple integral over a fully general non-rectangular
> domain, even though it is quite easy to write. I'm sure
> they just never had the time to do it.
>
> Set(matlabroot,'Sarcasm','off')
>
> The fact is, it may be less trivial to write than you imply.
> (Having played with several versions of such a code
> myself.)
>
> John

Thanks for the reply, John. I've gotten a laugh from your comments before, and I did from this one too. No sarcasm. :)

I didn't mean to imply it was trivial. Maybe there's not much I can do. What I'm hoping to understand from those of you with more experience is, what's my best shot?

I've fooled around with non-adaptive CSR code with a fixed number of nodes and seen speed improvements over the quad functions on lower-dimensional problems (though with trouble on higher-dimensional ones). Someone mentioned 'cumsum' in another post on optimizing numerical integration. Any thoughts on best bets vs. dead ends would be helpful. And yes, I know my question is really general at the moment, and I apologize...
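The non-adaptive, fixed-node, fully vectorized approach described above can be sketched as a tensor-product Gauss-Legendre rule; this hypothetical Python/NumPy version (function name and node count are illustrative, not from the post) evaluates the integrand at all nodes in one vectorized call:

```python
import numpy as np

# Fixed-node tensor-product Gauss-Legendre rule on [0,1]^2, vectorized:
# one call evaluates the integrand at every node at once. There is no
# adaptive error control; the node count n per dimension is a knob.
def gauss2d(f, n=20):
    x, w = np.polynomial.legendre.leggauss(n)    # nodes/weights on [-1, 1]
    x = 0.5 * (x + 1.0)                          # map nodes to [0, 1]
    w = 0.5 * w                                  # rescale weights
    X, Y = np.meshgrid(x, x, indexing="ij")
    return np.einsum("i,j,ij->", w, w, f(X, Y))  # sum_ij w_i w_j f(x_i, x_j)

# Smooth test integrand: the integral of x*y over the unit square is 1/4.
approx = gauss2d(lambda x, y: x * y)
print(approx)  # ~0.25
```

This is fast per node but, as noted above, the tensor-product cost grows as n^d, which is exactly why it strains in higher dimensions.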

Subject: Fast and accurate numerical integration

From: John D'Errico

Date: 7 May, 2011 10:39:07

Message: 4 of 5

"Eduardo Montoya" wrote in message <iq2bg1$k5b$1@newscl01ah.mathworks.com>...
> Thanks for the reply, John. I've gotten a laugh from your comments before, and I did from this one too. No sarcasm. :)
>
> I didn't mean to imply it was trivial. Maybe there's not much I can do. What I'm hoping to understand from those of you with more experience is, what's my best shot?
>
> I've fooled around with non-adaptive CSR code with fixed numbers of nodes and seen speed improvements in lower-dimensional problems compared to the quad functions (with problems on higher-dimensional ones). Someone mentioned 'cumsum' in another post on optimising numerical integration. Any thoughts on best bets vs dead ends would be helpful. And yes, I know my question is really general at the moment, and I apologize...

I really do have a code that does adaptive numerical
integration over a 2-d domain. (And I know how, in
theory, to extend this to 3-d.) It adaptively refines the
domain, placing new points where they are needed,
much in the way that quad (and its cousins) does. A
logical choice here seems to be a scheme based on a
simplicial tessellation, which works nicely on a
general domain.

The trick in these things is multifaceted; the time
is consumed in a few places.

- If you write recursive code, this simplifies some of
the bookwork, but it may be inefficient because of
function call overhead, recursion limits, etc. If your
code does error checking, then you don't want to
recheck the inputs of your problem EVERY time a
recursive call is made. That wastes time. A solution
is to write the code in a way that is effectively
recursive, but does not actually call itself. This can be
done with simple loops and some extra bookkeeping.
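The "effectively recursive" idea above, with an explicit work list replacing the recursive call, might look like this 1-D adaptive Simpson sketch (Python for illustration; the names and the 1-D setting are mine, not John's code):

```python
import math

# Adaptive Simpson quadrature with no actual recursion: intervals that
# fail the local error test go back on a work list with a halved
# tolerance. Input checking would happen once, up front, and endpoint
# function values are passed along so they are never recomputed.
def adaptive_simpson(f, a, b, tol=1e-10):
    def simpson(fa, fm, fb, h):
        return h / 6.0 * (fa + 4.0 * fm + fb)

    total = 0.0
    m = 0.5 * (a + b)
    stack = [(a, m, b, f(a), f(m), f(b), tol)]
    while stack:
        a0, m0, b0, fa, fm, fb, t = stack.pop()
        whole = simpson(fa, fm, fb, b0 - a0)
        lm, rm = 0.5 * (a0 + m0), 0.5 * (m0 + b0)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, m0 - a0)
        right = simpson(fm, frm, fb, b0 - m0)
        if abs(left + right - whole) <= 15.0 * t:   # standard acceptance test
            total += left + right
        else:                                        # refine both halves
            stack.append((a0, lm, m0, fa, flm, fm, 0.5 * t))
            stack.append((m0, rm, b0, fm, frm, fb, 0.5 * t))
    return total

print(adaptive_simpson(math.sin, 0.0, math.pi))  # ~2.0
```

The same loop-plus-worklist structure carries over to a simplex-based scheme in higher dimensions, with the work list holding triangles or simplices instead of intervals.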

- As you point out, is the code vectorized? Loops can
be a CPU hog in MATLAB.

- A good, efficient algorithm is locally high order, and
makes re-use of previous function evaluations. There are
a few variants of Gaussian rules in multiple dimensions
on a triangle that DO allow re-use of previous function
evaluations. Choosing the best such rule (i.e., highest
order relative to the overall number of function
evaluations employed at full depth) and doing so in
several dimensions is an interesting mathematical task.

- If you do so carefully, the rule employed may allow
for a Richardson extrapolation. This will give you a
bump up in the effective order of the integration, thus
faster convergence. It is well worth doing in my work,
often giving you several more digits of accuracy, but
you need to do the bookwork to maintain the Richardson
extrapolant.
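The Richardson bump described above is visible in a tiny example: combining two trapezoid estimates at spacings h and h/2 cancels the leading O(h^2) error term, raising the effective order to O(h^4). (This is the first step of classical Romberg integration; the example is illustrative, not John's code.)

```python
import math

# Composite trapezoid rule on n subintervals.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

f, a, b = math.sin, 0.0, math.pi     # exact integral is 2
T_h  = trapezoid(f, a, b, 16)
T_h2 = trapezoid(f, a, b, 32)
richardson = (4.0 * T_h2 - T_h) / 3.0   # cancels the leading h^2 error term

print(abs(T_h2 - 2.0))        # error before extrapolation
print(abs(richardson - 2.0))  # several digits smaller after extrapolation
```

Note the bookkeeping cost mentioned above: the extrapolant needs estimates at two refinement levels kept in step with each other.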

- Convergence itself is an issue. How do you know that
the algorithm has converged? The Richardson scheme
itself is an option, allowing you to predict the approximate
error in the estimate. The problem is, each refinement you
make costs you additional function evaluations. You don't
want to double the number of function evaluations at the
end, only to realize that your previous estimate of the
integral was indeed sufficient to meet the convergence
tolerance. Another method of determining convergence
is to employ two different parallel schemes for the
integration. This can be done efficiently using several
choices of point placement/refinement, so that you can
compare two estimates of the integral. When they agree,
you are done. This too may cause you to make more
function evaluations than absolutely necessary, depending
on the differing polynomial orders of the schemes employed.
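The "two parallel schemes" idea above can be sketched with the simplest embedded pair: on a uniform grid, the trapezoid and Simpson estimates reuse the exact same function values, and their disagreement serves as a cheap convergence proxy. (An illustrative sketch with made-up names, and subject to the caveat above that agreement can cost more evaluations than strictly necessary.)

```python
import math

# Refine a uniform grid until two rules that share the same function
# evaluations (trapezoid and Simpson) agree to tolerance, then return
# the higher-order estimate. Disagreement acts as the error estimate.
def integrate_paired(f, a, b, tol=1e-8, max_iter=20):
    n = 4
    for _ in range(max_iter):
        h = (b - a) / n
        ys = [f(a + i * h) for i in range(n + 1)]
        trap = h * (0.5 * ys[0] + sum(ys[1:-1]) + 0.5 * ys[-1])
        simp = h / 3.0 * (ys[0] + ys[-1]
                          + 4.0 * sum(ys[1:-1:2]) + 2.0 * sum(ys[2:-1:2]))
        if abs(simp - trap) <= tol:   # the two rules agree: accept
            return simp
        n *= 2                        # otherwise refine and retry
    return simp

print(integrate_paired(math.sin, 0.0, math.pi))  # ~2.0
```

Because the pair differ in polynomial order, the test is conservative: the lower-order rule dominates the disagreement, so the returned higher-order estimate is usually well inside the tolerance.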

The MOST difficult thing I found, and the biggest reason
why the code I wrote is not out there, is the convergence
issue. I can beat tools like dblquad to an accurate estimate,
but getting my code to KNOW that it has converged, and
to do so reliably, is the sticking point I had not gotten
past. I've put it on the back burner for a bit, planning
to return to it with a fresh mind some time in the future.

John

Subject: Fast and accurate numerical integration

From: Eduardo Montoya

Date: 8 May, 2011 06:10:20

Message: 5 of 5


Thanks John. Really appreciate your taking the time.

Lots to mull over... I have some research to do before I post again, but this was very helpful.

Best,
Ed
