Is it possible to compute the 10-d integral shown here? Or something like it, but not shown, and far more complicated? Almost always, when someone asks a question posed as you have, they really have a far different and far more complicated function in mind.
So rather trivially, the 10-fold integral can be done symbolically, yielding 155/6. (Do I really need to write it out?) TRY IT!
syms a b c d e f g h i j
vars = [a b c d e f g h i j];
K = expand(sum(vars)^2);
% integrate out each variable over [0,1], innermost (j) first
for v = fliplr(vars)
    K = int(K, v, [0 1]);
end
K
Can you solve the problem numerically, as a nested integration of some far more general nonlinear function? Well, in theory yes, but not really in practice.
Numerical integration tools are typically adaptive routines. Call them in nested form, though, and the cost multiplies. Even a 1-d integration of a quite simple function takes on the order of 100 function evaluations (and it may require many more). Expect a 2-d nested integration to require something like 100^2 = 10000 function evals, and so on. So I would predict that a 10-fold nested adaptive integration will use something on the rough order of 100^10 = 1e20 function evaluations.
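To make the nesting concrete, here is a 2-d sketch of the same idea (my own toy example, using the kernel (x+y)^2; integral expects a vectorized integrand, hence the arrayfun wrapper). Every point the outer integral touches triggers a complete inner integration, which is exactly where the multiplicative cost comes from:

```matlab
% 2-d nested adaptive quadrature on a toy kernel, (x+y)^2 over the unit square
% inner integral over y, wrapped so the outer integral can pass in a vector of x
inner = @(x) arrayfun(@(xi) integral(@(y) (xi + y).^2, 0, 1), x);
I2 = integral(inner, 0, 1)   % exact value is 7/6
```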
That is a LOT of function evaluations. If some of those dimensions are a bit nastier to integrate over, or if your real kernel is more complex, it will take even longer. Even if your computer could evaluate that kernel a billion times per second, we are still talking thousands of years to finish the task. Of course, a few thousand years from now, if you are still around, you might have a REALLY fast computer.
So no, 10-fold nested adaptive integrations are a waste of time to even try.
Typically when you start to look for very high order nested integrations, Monte Carlo integration is a far better bet. So in way less than a second of computing time, I get:
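A simple Monte Carlo estimate might look like this (a sketch; the sample size N = 1e6 is an arbitrary choice):

```matlab
% Monte Carlo estimate of the integral of (x1+...+x10)^2 over the unit 10-cube
N = 1e6;            % number of random samples (arbitrary choice)
X = rand(N,10);     % uniform samples in the unit hypercube
F = sum(X,2).^2;    % kernel evaluated at each sample point
Khat = mean(F)      % should land near the exact answer, 155/6
```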
Not bad for maybe a second of CPU time.
A nice thing about Monte Carlo is you can even compute uncertainty estimates around the result. That's basic statistics (and I have the flu, so my mind is a bit wasted). You can learn to do that.
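That uncertainty estimate is just the standard error of the sample mean of the kernel values. A self-contained sketch for this same toy kernel:

```matlab
% standard error of a Monte Carlo estimate of the same 10-d integral
N = 1e6;
F = sum(rand(N,10),2).^2;    % kernel values at N uniform samples
Khat = mean(F);              % the Monte Carlo estimate
SE = std(F)/sqrt(N)          % its standard error, roughly a 1-sigma uncertainty
```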
The reason Monte Carlo is so good for high dimensional problems is that while nested integration requires time exponential in the number of levels of nesting, the Monte Carlo error decreases like 1/sqrt(N), where N is the number of function evals taken, regardless of the dimension.
Could I have done this in a different way? Yes, of course. I could, for example, use a 10-fold nested Gauss-Legendre quadrature. Nested Gauss-Legendre is also exponential in the number of nested levels, but the base is far lower than 100. Here, with 5 points in each dimension, it would require evaluating the kernel at 5^10, roughly 10 million, points. But if your function is adequately well represented by a low order polynomial, that should be accurate enough. The trivially simple kernel you have posed would be integrated exactly by a 2-point rule in each dimension, thus a 2^10 = 1024 point tensor product. I'll get you started with the nodes:
% 2-point Gauss-Legendre nodes, mapped from [-1,1] onto [0,1]
nodes = (sqrt(3)/3*[-1 1]+1)/2
% all 2^10 = 1024 combinations of those two nodes across the 10 dimensions
XGL = nodes(dec2bin(0:1023) - '0' + 1);
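To finish it off: on [0,1] the two Gauss-Legendre weights are both 1/2, so every one of the 1024 tensor-product weights is (1/2)^10, and the quadrature reduces to a simple mean:

```matlab
% tensor-product weights are all (1/2)^10, so the quadrature is just a mean
Kgl = mean(sum(XGL,2).^2)    % agrees with 155/6 to double precision
```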
So pretty much full precision of a double. But that requires knowing the kernel was truly a very low order polynomial.