MATLAB fft speed with and without padding

XC on 26 Mar 2015
Answered: Adam on 26 Mar 2015
Hello,
I was trying to time how long fft() takes using MATLAB's tic and toc functions.
The data I have is a real-valued vector B with 20e6 elements. I thought that padding the vector with zeros to the next power-of-2 length would speed up the fft, but my test results seem to show the opposite.
Please see the command lines and results below. Without zero padding, fft() took 1.5 seconds; with zero padding to the next power-of-2 length, it took 2.5 seconds instead.
In addition, if the original data length is slightly different, e.g. a real-valued vector A with length 20.833339e6, then fft(A,NFFT) is much faster than fft(A): 2.5 s vs. 7.7 s. In both cases NFFT is the same.
Can someone please explain why this is? I have some flexibility in choosing the size of the data to collect and then process with fft(). Is there a guideline on which size and padding combination costs the least time in fft()?
Thanks in advance!
XC
>> length(B)
ans =
20000000
>> NFFT=2^nextpow2(length(B))
NFFT =
33554432
>> tic;fft(B);toc
Elapsed time is 1.484898 seconds.
>> tic;fft(B,NFFT);toc
Elapsed time is 2.514710 seconds.
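For completeness, here is the same comparison using timeit instead of tic and toc (a sketch assuming R2013b or later, where timeit is available; the randn vector is just a stand-in for my real data B):

% Time fft with timeit, which calls the function several times and is less
% sensitive to warm-up effects than a single tic/toc measurement.
B = randn(20e6, 1);                  % stand-in for the measured data
NFFT = 2^nextpow2(length(B));        % next power of 2, i.e. 2^25 = 33554432
tNoPad = timeit(@() fft(B));         % fft at the original length
tPad   = timeit(@() fft(B, NFFT));   % fft zero-padded to 2^25
fprintf('no padding: %.3f s, padded: %.3f s\n', tNoPad, tPad);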

Answers (2)

dpb on 26 Mar 2015
>> 33554432/20000000
ans =
1.6777
>> 2.514710/1.484898
ans =
1.6935
>>
Seems to make sense to me.
Read the Algorithms section in the documentation (doc fft) to see that it's not just a simple power-of-2 question.
The answer to your fundamental question of how much data to collect is really one you need to answer based on what you need from the results. That long a sample record isn't likely the optimum choice, but we have no information on which to base any recommendation.
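As a rough illustration of why those ratios line up, you can compare the measured time ratio with a simple N*log2(N) cost model (a back-of-the-envelope sketch only, not an exact model of what the FFTW library underlying fft actually does):

% Compare a crude N*log2(N) cost model against the measured timing ratio.
N1 = 20000000;                               % original length
N2 = 33554432;                               % padded length, 2^25
lengthRatio = N2/N1                          % ~1.68
modelRatio  = (N2*log2(N2))/(N1*log2(N1))    % ~1.73, same ballpark as the measured 1.69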
  1 Comment
XC on 26 Mar 2015
Thanks for the reply.
There must be more to it than the data-length effect, though, since the length ratio doesn't match the fft time ratio in the second example from the original post, where A has length 20,833,339 and is padded up to 2^25.
I did read the fft documentation as you suggested, and I now understand that the fft of 20,833,339 data points takes much longer because that length N has a very large prime factor, which prevents fft from breaking the problem into smaller FFTs.
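A quick check with MATLAB's factor function seems to confirm this (just listing the largest prime factor of each length):

% Largest prime factor of each candidate length.
max(factor(20833339))   % very large prime factor -> slow fft
max(factor(20000000))   % 2^8 * 5^7, largest factor is 5 -> fast fft
max(factor(33554432))   % 2^25, largest factor is 2 -> fast fft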
The question remains why the fft of 20,000,000 data points took less time than when padding it up to 2^25 (33,554,432) data points. I thought the general guideline was to pad the data up to the next power of 2 so that fft would be faster.
If this rule is not so 'general', then how do I decide whether the data should be padded or not?



Adam on 26 Mar 2015
20,833,339 has only two prime factors, one of them very large. 20,000,000 has just 2 and 5 as its prime factors, so according to the documentation it will be fast. As such it is likely to be faster than the next power of 2 up, where padding means a more than 60% increase in the fft length.
The padded versions of both sizes take roughly the same time since they end up the same length, but the unpadded fft of 20,833,339 points is especially slow due to its large prime factor.
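Along those lines, one way to decide whether to pad a given record is to look at the largest prime factor of the original length first (only a heuristic sketch, not an official guideline; the cutoff and variable names here are made up):

% Pad only when the original length has an awkward factorization.
N    = 20833339;              % original data length (the second example)
Npad = 2^nextpow2(N);         % next power of 2
if max(factor(N)) > 7         % crude cutoff: only small factors 2, 3, 5, 7 count as fast
    NFFT = Npad;              % awkward length, so pad up to a fast length
else
    NFFT = N;                 % already a fast length, so skip the extra points
end
% Then use NFFT as the second argument to fft.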
