Request for help computing convolution of random variables via fft

4 views (last 30 days)
Jeff Miller on 26 Aug 2021
Commented: Jeff Miller on 30 Aug 2021
Inspired by the answers of @David Goodmanson to this earlier question, I have been trying to understand exactly how in general to compute the pdf of a convolution of random variables (RVs) using ffts. The attached script shows two of the examples that I have attempted. In one example the fft produces the convolution pdf perfectly, but in the other it doesn't, and I'm hoping someone can identify the problem with my method.
The first example looks at the convolution of several identical exponential RVs. Here, the convolution pdf computed via fft agrees perfectly with the corresponding known gamma pdf for the convolution.
The second example shows the convolution of two different normal RVs, but the convolution pdf computed via fft does not agree with the corresponding known normal convolution. Actually, the fft convolution does have exactly the right shape, but it is offset relative to the true convolution distribution (i.e., it has a different mean).
I've also tried the fft approach with a number of other example distributions, and the same sort of shift problem that I'm having with the normals is quite common. In any particular example I can easily get rid of the shift in an ad hoc manner, but I haven't yet hit on a general method that works for any arbitrary set of RVs (identical or not).
As I have not worked with ffts before, I am probably missing something quite fundamental. It would be great if one of the experts in this group could get me straightened out.
Thanks very much for any hints.

David Goodmanson on 27 Aug 2021
Edited: David Goodmanson on 27 Aug 2021
Hi Jeff,
It's nice to see a contribution of mine from almost four years ago come up; I'm just one of many people who have contributed to this site and get referenced in later questions. Besides providing answers to many queries, the site appears to have become a searchable source of information.
The reason that the first conv example works and the second doesn't is the different x arrays used to create the pdfs. Suppose the fft transforms y = y(x) to z = z(ilam) (ilam being inverse wavelength, for which there seems to be no common symbol). The fft puts z( ilam=0 ), the DC value, as the first element in the resulting z array. Similarly, the fft assumes that y( x=0 ) is the first element in the y array. If x=0 is somewhere else, then z picks up a phase shift in the ilam domain that can lead to incorrect results.
Your first example uses the correct x array and the method works. The second example starts out at a negative value of x to accommodate the gaussian, leading to problems as you can see.
The most straightforward way to allow negative values of x is to start with an x array that has x=0 at the midpoint (element n/2+1 for even n). Define y(x) and then use ifftshift to put the DC y value at y(1). I tend to think of the first array as a plotting array and the shifted array as the real array. After some ffts, and then an ifft to get back into the x domain, fftshift to put y( x=0 ) back in the center. For example,
N = 10000;              % number of grid points
x0 = 5;                 % half-width of the x interval
delx = 2*x0/N;          % x spacing
delilam = 1/(N*delx);   % spacing in the ilam domain
x = (-N/2:N/2-1)*delx;  % x = 0 falls at element N/2+1
mu = .1;
sig = .2;
m = 6; % number of convolutions
mp1 = m+1;
y = (1/(sqrt(pi)*sig))*exp(-(x-mu).^2/sig^2);
% exact result
yconv = (1/(sqrt(pi*mp1)*sig))*exp(-(x-mp1*mu).^2/(mp1*sig^2));
% by fft
z = fft(ifftshift(y)).^mp1*delx^mp1;
yconvfft = fftshift(ifft(z))*N*delilam;
% in the following plot the original gaussian is scaled down
figure(1); grid on
plot(x,y/2,x,yconvfft,x(1:60:end),yconv(1:60:end),'o')
I1 = trapz(x,y)
I2 = trapz(x,yconv)
I3 = trapz(x,yconvfft)
For normalization, since you are using the fft (which only does sums) to approximate an integral, there is a factor of the array spacing del_x for each forward transform. Coming back, there is one ifft, so there is a single factor of del_ilam; except that ifft automatically divides by the number of points N, so you have to multiply by N to make up for that.
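The same bookkeeping can be checked in a few lines of NumPy (a sketch for illustration, not part of the original MATLAB script; the parameter values are copied from the example above). Each forward fft carries a factor of delx, and the single ifft carries N*delilam, because NumPy's ifft, like MATLAB's, divides by N:

```python
import numpy as np

# Sketch: (m+1)-fold convolution of a gaussian via fft, tracking the
# continuous-transform scale factors described above.
N = 10000
x0 = 5.0
delx = 2 * x0 / N            # x spacing
delilam = 1 / (N * delx)     # spacing in the inverse-wavelength domain
x = np.arange(-N // 2, N // 2) * delx

mu, sig, m = 0.1, 0.2, 6     # same parameters as the example above
mp1 = m + 1
y = np.exp(-(x - mu) ** 2 / sig ** 2) / (np.sqrt(np.pi) * sig)

# One factor of delx per forward transform; one factor of delilam coming
# back, times N because ifft divides by N.
z = np.fft.fft(np.fft.ifftshift(y)) ** mp1 * delx ** mp1
yconvfft = np.fft.fftshift(np.fft.ifft(z)).real * N * delilam

# Exact (m+1)-fold convolution for comparison
yconv = np.exp(-(x - mp1 * mu) ** 2 / (mp1 * sig ** 2)) / (np.sqrt(np.pi * mp1) * sig)

print(yconvfft.sum() * delx)           # integrates to about 1
print(np.abs(yconvfft - yconv).max())  # small discretization error
```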
I do have one question. In the second example you have 2^13 = 8192 points instead of, say, 10000. Could you let me know the rationale behind using 2^n points?
Jeff Miller on 30 Aug 2021
Yes, every pdf is evaluated over the same vector of x increments, offset to start at its own minimum value. As I understand it, that's required to get the input needed by fft.
And yes, the end result will be a piecewise linear approximation over a finite support interval. To address concerns about missing the tails, the user can extend the bounds out further and pay the associated computation time penalty. With some trial and error it would presumably be clear that the results didn't change much once the bounds were extended out "far enough", so the analyst could be satisfied that the results were a useful approximation.
I imagine that the user would always want to pick NxPoints depending on the distributions and the computational goal, probably after trying various values to see how many are enough to achieve the required resolution. My rough impression is that computation time increases only about linearly with NxPoints (the fft itself is O(N log N)), so it isn't necessarily that computationally expensive to get a very good approximation within the finite interval examined.
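That trial-and-error check on the bounds can be sketched as follows (Python/NumPy rather than MATLAB, and conv_pdf_fft is a hypothetical helper, not code from the thread): double the support while keeping the spacing fixed and confirm the answer stops changing on the shared grid.

```python
import numpy as np

def conv_pdf_fft(pdf, x0, n, m):
    """Hypothetical helper: fft-based pdf of the sum of m+1 iid copies of
    pdf, on a grid of n points covering [-x0, x0)."""
    delx = 2 * x0 / n
    x = np.arange(-n // 2, n // 2) * delx
    y = pdf(x)
    # m convolutions -> m factors of delx (matches the normalization above)
    z = np.fft.fft(np.fft.ifftshift(y)) ** (m + 1) * delx ** m
    return x, np.fft.fftshift(np.fft.ifft(z)).real

gauss = lambda x: np.exp(-(x - 0.1) ** 2 / 0.2 ** 2) / (np.sqrt(np.pi) * 0.2)

# Double the support (and the point count, keeping delx fixed) and check
# that the answer stops changing once the tails are enclosed.
x1, p1 = conv_pdf_fft(gauss, 5.0, 10000, 6)
x2, p2 = conv_pdf_fft(gauss, 10.0, 20000, 6)
mid = slice(10000 - 5000, 10000 + 5000)  # x1 is the middle half of x2
print(np.abs(p1 - p2[mid]).max())        # tiny once bounds are "far enough"
```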
FWIW, what I'm actually trying to do with this machinery is to get maximum likelihood parameter estimates with some convolutions for which I can't find closed-form pdf expressions. I was previously doing that by numerical integration, but this is much, much faster with hardly any loss of accuracy.
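For what it's worth, the likelihood-lookup step can be sketched like this (a hypothetical NumPy illustration, not the actual estimation code; the distributions, rates, and helper name are made up for the example): build the convolution pdf on a grid by fft, then interpolate it at the observed values to get a log-likelihood.

```python
import numpy as np

# Hypothetical example: log-likelihood for the sum of two exponential RVs,
# whose pdf is computed on a grid via fft rather than in closed form.
rng = np.random.default_rng(0)
data = rng.exponential(1.0, 500) + rng.exponential(2.0, 500)  # simulated sums

def loglik(rate1, rate2, data, x_max=50.0, n=2 ** 14):
    delx = x_max / n
    x = np.arange(n) * delx                    # exponentials live on x >= 0
    f1 = rate1 * np.exp(-rate1 * x)
    f2 = rate2 * np.exp(-rate2 * x)
    # one convolution -> one factor of delx
    pdf = np.fft.ifft(np.fft.fft(f1) * np.fft.fft(f2)).real * delx
    pdf = np.maximum(pdf, 1e-300)              # guard against log(0)
    return np.log(np.interp(data, x, pdf)).sum()

# The likelihood should be higher near the true rates (1.0 and 0.5)
print(loglik(1.0, 0.5, data), loglik(2.0, 2.0, data))
```

A maximizer such as scipy.optimize.minimize could then be run on the negative of this function to get the parameter estimates.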

Paul on 28 Aug 2021
Here is one option, illustrated with two widely separated normal pdfs:
mu1 = 10; sigma1 = 2;
mu2 = -5; sigma2 = 1;
Zextreme = 4;
delx = 0.01;
% Determine range of x values
min1 = mu1 - Zextreme*sigma1;
min2 = mu2 - Zextreme*sigma2;
max1 = mu1 + Zextreme*sigma1;
max2 = mu2 + Zextreme*sigma2;
x1 = min1:delx:max1;
x2 = min2:delx:max2;
x = mu1 + mu2 + (-Zextreme*(sigma1 + sigma2):delx:Zextreme*(sigma1 + sigma2));
% Get the pdfs of the individual distributions and of the sum, which is the
% convolution of the individuals
M1 = normpdf(x1,mu1,sigma1);
M2 = normpdf(x2,mu2,sigma2);
truepdf = normpdf(x,mu1+mu2,sqrt(sigma1^2+sigma2^2));
% Check the computations so far by examining the true distributions
% of the individual RVs and their convolution
plot(x1,M1);
hold on
plot(x2,M2);
plot(x,truepdf);grid
legend('Normal 1', 'Normal 2', 'Convolution');
% approximate the continuous convolution with a convolution sum in the time domain
convpdf = conv(M1,M2)*delx;
shift1 = round(x1(1)/delx);
shift2 = round(x2(1)/delx);
% approximate the continuous convolution with a convolution sum computed via the frequency domain
N = numel(convpdf);
fftM1 = fft(M1,N);
fftM2 = fft(M2,N);
fftpdf = fftM2.*fftM1;
fftpdf = real(ifft(fftpdf));
% compare
subplot(311); plot(x,truepdf);grid
subplot(312); plot( ((0:N-1) + shift1 + shift2)*delx,convpdf),grid
subplot(313); plot( ((0:N-1) + shift1 + shift2)*delx,fftpdf*delx),grid
set(get(gcf,'Children'),'XLim',[x(1) x(end)]);

Jeff Miller on 28 Aug 2021
I definitely want to handle cases where the RVs are independent but have different distributions. David's method seems to be working with the appropriate transformations (see my reply to him).