"Eduardo Montoya" wrote in message <iq2bg1$k5b$1@newscl01ah.mathworks.com>...
> "John D'Errico" <woodchips@rochester.rr.com> wrote in message <iq2919$f1m$1@newscl01ah.mathworks.com>...
> > "Eduardo Montoya" wrote in message <iq24gt$5vj$1@newscl01ah.mathworks.com>...
> > > Hello,
> > >
> > > I was hoping to get some opinions on the fastest (but accurate) way to evaluate a quadruple integral numerically in Matlab, over a nonrectangular region.
> > >
> > > Based on a few tests, my sense is that the adaptive methods in the quad functions are not that fast. (I think partly this is because internally dblquad and triplequad are not vectorized?)
> > >
> > > I'd be willing to implement my own algorithm as long as it was fast and I could bound the errors. So... should I use builtin Matlab functions, modify the builtin functions, implement some algorithm that Matlab has not implemented, or should I do all of this in C or some other language?
> > >
> > > Thanks in advance for any comments.
> >
> > Yep. That is it. Nobody has ever bothered to write
> > high quality, accurate, efficient, vectorized code for a
> > quadruple integral over a fully general nonrectangular
> > domain, even though it is quite easy to write. I'm sure
> > they just never had the time to do it.
> >
> > Set(matlabroot,'Sarcasm','off')
> >
> > The fact is, it may be less trivial to write than you imply.
> > (Having played with several versions of such a code
> > myself.)
> >
> > John
>
> Thanks for the reply, John. I've gotten a laugh from your comments before, and I did from this one too. No sarcasm. :)
>
> I didn't mean to imply it was trivial. Maybe there's not much I can do. What I'm hoping to understand from those of you with more experience is, what's my best shot?
>
> I've fooled around with non-adaptive CSR code with fixed numbers of nodes and seen speed improvements on lower-dimensional problems compared to the quad functions (with problems on higher-dimensional ones). Someone mentioned 'cumsum' in another post on optimizing numerical integration. Any thoughts on best bets vs. dead ends would be helpful. And yes, I know my question is really general at the moment, and I apologize...
I really do have a code that does adaptive numerical
integration over a 2-d domain. (And I know how, in
theory, to extend this to 3-d.) It adaptively refines the
domain, placing new points where they are needed,
much in the way that quad (and its cousins) do. A
logical choice here seems to be a scheme based on a
simplicial tessellation, which works nicely on a general
domain. The trick in these things is multifaceted; the
time is spent in a few places.
 If you write recursive code, this simplifies some of
the bookwork, but it may be inefficient because of
function call overhead, recursion limits, etc. If your
code does error checking, you don't want to re-check
the inputs of your problem EVERY time a recursive
call is made. That wastes time. A solution is to write
the code in a way that is effectively recursive, but
does not actually call itself. This can be done with
simple loops and some extra bookkeeping.
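To illustrate the idea (a 1-d Python sketch, NOT the 2-d code I am describing; the same structure carries over to MATLAB): adaptive Simpson quadrature driven by an explicit work stack, so nothing ever calls itself and the input checks run exactly once.

```python
def adaptive_simpson(f, a, b, tol=1e-10):
    """Adaptive Simpson quadrature driven by an explicit work stack,
    so the routine is effectively recursive but never calls itself.
    Input checks run once, up front -- not on every refinement."""
    if not b > a:
        raise ValueError("require b > a")
    m = 0.5 * (a + b)
    fa, fm, fb = f(a), f(m), f(b)
    whole = (b - a) * (fa + 4.0 * fm + fb) / 6.0
    # stack entries: (ends, f at ends/midpoint, coarse estimate, error budget)
    stack = [(a, b, fa, fm, fb, whole, tol)]
    total = 0.0
    while stack:
        a0, b0, fa0, fm0, fb0, whole0, tol0 = stack.pop()
        m0 = 0.5 * (a0 + b0)
        lm, rm = 0.5 * (a0 + m0), 0.5 * (m0 + b0)
        flm, frm = f(lm), f(rm)        # only two NEW evaluations per split
        left = (m0 - a0) * (fa0 + 4.0 * flm + fm0) / 6.0
        right = (b0 - m0) * (fm0 + 4.0 * frm + fb0) / 6.0
        if abs(left + right - whole0) < 15.0 * tol0:
            # accept, adding the usual Richardson-style correction term
            total += left + right + (left + right - whole0) / 15.0
        else:
            stack.append((a0, m0, fa0, flm, fm0, left, 0.5 * tol0))
            stack.append((m0, b0, fm0, frm, fb0, right, 0.5 * tol0))
    return total
```

Note that each accepted or split interval inherits its parent's function values, so a split costs only two new evaluations.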
 As you point out, is the code vectorized? Loops can
be a CPU hog in MATLAB.
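For instance, a fixed composite Gauss-Legendre rule can be arranged so the integrand is called exactly once, on the full array of nodes (a Python/NumPy sketch; the same reshaping works with MATLAB arrays):

```python
import numpy as np

def gauss_panels(f, a, b, npanels=64, order=5):
    """Composite Gauss-Legendre rule arranged so the integrand is
    called ONCE on the full 2-d array of nodes -- no per-node loop."""
    x, w = np.polynomial.legendre.leggauss(order)  # nodes/weights on [-1, 1]
    edges = np.linspace(a, b, npanels + 1)
    mid = 0.5 * (edges[:-1] + edges[1:])           # panel midpoints
    half = 0.5 * (edges[1:] - edges[:-1])          # panel half-widths
    nodes = mid[:, None] + half[:, None] * x[None, :]  # (npanels, order)
    vals = f(nodes)                                # one vectorized call
    return float(np.sum(half[:, None] * w[None, :] * vals))
```

The integrand must accept an array, but that is exactly the vectorization requirement being discussed.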
 A good, efficient algorithm is locally high order, and
reuses previous function evaluations. There are a few
variants of Gaussian rules on a triangle (and on its
higher-dimensional analogues) that DO allow reuse of
previous function evaluations. Choosing the best such
rule (i.e., highest order relative to the overall number
of function evaluations employed at full depth), and
doing so in several dimensions, is an interesting
mathematical task.
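As a toy illustration of node reuse (a Python sketch with a hypothetical cache helper, not my code): the classic 7-point, degree-3 rule on a triangle places nodes at the vertices, edge midpoints, and centroid, and when a triangle is split by its edge midpoints, many of the children's nodes are points the parent already paid for.

```python
import numpy as np

class CachedIntegrand:
    """Memoizes point evaluations, so nodes a parent triangle has
    already paid for are free to its children.  (Hypothetical helper.)"""
    def __init__(self, f):
        self.f, self.cache = f, {}
    def __call__(self, p):
        key = (round(float(p[0]), 12), round(float(p[1]), 12))
        if key not in self.cache:
            self.cache[key] = self.f(*key)
        return self.cache[key]

def degree3_rule(f, tri):
    """7-point, degree-3 rule on a triangle: weights 1/20 at the
    vertices, 2/15 at the edge midpoints, 9/20 at the centroid."""
    a, b, c = (np.asarray(v, float) for v in tri)
    area = 0.5 * abs((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))
    mids = [0.5*(a + b), 0.5*(b + c), 0.5*(c + a)]
    return area * (sum(f(v) for v in (a, b, c)) / 20.0
                   + 2.0/15.0 * sum(f(m) for m in mids)
                   + 9.0/20.0 * f((a + b + c) / 3.0))

def refine(tri):
    """Split a triangle into 4 via its edge midpoints: every child
    VERTEX is a node where the parent rule already evaluated f."""
    a, b, c = (np.asarray(v, float) for v in tri)
    ab, bc, ca = 0.5*(a + b), 0.5*(b + c), 0.5*(c + a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
```

Integrating f(x,y) = x^2*y over the unit triangle, the coarse rule costs 7 evaluations; evaluating all 4 children costs only 12 more (instead of 28), and the center child needs no new evaluations at all.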
 If you do so carefully, the rule employed may allow
for a Richardson extrapolation. This will give you a
bump up in the effective order of the integration, thus
faster convergence. It is well worth doing in my work,
often giving you several more digits of accuracy, but
you need to do the bookwork to maintain the Richardson
extrapolant.
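The simplest instance of the idea, sketched in Python for the 1-d trapezoid rule: combining the h and h/2 estimates cancels the O(h^2) error term, lifting the order from 2 to 4, and the difference of the two estimates predicts the error.

```python
def trapezoid(f, a, b, n):
    """Composite trapezoid rule on n equal panels (error is O(h^2))."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n))
                + 0.5 * f(b))

def richardson_step(f, a, b, n):
    """Combine the h and h/2 trapezoid estimates; the O(h^2) error
    terms cancel, leaving an O(h^4) result (Simpson's rule, in fact),
    plus a computable prediction of the h/2 estimate's error."""
    coarse = trapezoid(f, a, b, n)
    fine = trapezoid(f, a, b, 2 * n)
    return (4.0 * fine - coarse) / 3.0, abs(fine - coarse) / 3.0
```

Note the trapezoid nodes at step h are a subset of those at step h/2, so the extrapolation itself costs no extra function evaluations.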
 Convergence itself is an issue. How do you know that
the algorithm has converged? The Richardson scheme
itself is an option, allowing you to predict the approximate
error in the estimate. The problem is, each refinement you
make costs you additional function evaluations. You don't
want to double the number of function evaluations at the
end, only to realize that your previous estimate of the
integral was indeed sufficient to meet the convergence
tolerance. Another method of determining convergence
is to employ two different parallel schemes for the
integration. This can be done efficiently using several
choices of point placement/refinement, so that you can
compare two estimates of the integral. When they agree,
you are done. This too may cause you to make more
function evaluations than absolutely necessary, depending
on the differing polynomial orders of the schemes employed.
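A minimal sketch of the parallel-schemes idea (hypothetical Python, not my code): form a trapezoid estimate and a Simpson estimate from the SAME five function values, and use their disagreement as the convergence test.

```python
import numpy as np

def embedded_pair(f, a, b):
    """Two estimates of different order from the SAME five function
    values: their disagreement is a free, computable error estimate."""
    y = f(np.linspace(a, b, 5))
    h = (b - a) / 4.0
    trap = h * (0.5 * y[0] + y[1] + y[2] + y[3] + 0.5 * y[4])      # order 2
    simp = h / 3.0 * (y[0] + 4*y[1] + 2*y[2] + 4*y[3] + y[4])      # order 4
    return simp, abs(simp - trap)

def integrate(f, a, b, tol=1e-8):
    """Bisect until the pair agrees, budgeting tolerance by width."""
    stack, total = [(a, b)], 0.0
    while stack:
        a0, b0 = stack.pop()
        est, err = embedded_pair(f, a0, b0)
        if err < tol * (b0 - a0) / (b - a):
            total += est
        else:
            m = 0.5 * (a0 + b0)
            stack += [(a0, m), (m, b0)]
    return total
```

Because the lower-order member of the pair dominates the disagreement, the error estimate is conservative, which is exactly the over-evaluation cost mentioned above.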
The MOST difficult thing I found, and the biggest reason
why the code I wrote is not out there, is the convergence
issue. I can beat tools like dblquad to an accurate estimate,
but getting my code to KNOW that it has converged, and
to do so reliably, is the sticking point I have not yet
resolved. I've put it on the back burner for a bit, planning
on returning to it with a fresh mind some time in the future.
John
