On Saturday, April 20, 2013 9:22:09 AM UTC+12, Eva Bal wrote:
> Hi,
>
> I have been looking at this example for quite some time, but I still cannot figure out why, when the power spectral density is calculated, it is multiplied by (1/(Fs*N)):
>
> http://www.mathworks.se/help/signal/ug/psdestimateusingfft.html
>
> The part of the code that troubles me is marked below:
>
> rng default;
> Fs = 1000;
> t = linspace(0,1,1000);
> x = cos(2*pi*100*t)+randn(size(t));
> N = length(x);
> xdft = fft(x);
> xdft = xdft(1:N/2+1);
> psdx = (1/(Fs*N)).*abs(xdft).^2; % HERE is the point that I do not understand
> psdx(2:end-1) = 2*psdx(2:end-1);
> freq = 0:Fs/length(x):Fs/2;
> plot(freq,10*log10(psdx));
>
> I have been reading a lot of answers on FFT scaling, but this is the first time I have seen the sampling frequency (Fs) used as a scaling factor. What purpose does it serve? Is it just for normalisation?
>
> Thank you,
> Eva
The area under the PSD must equal the signal's variance (Parseval's theorem). By Parseval, the mean power is (1/N^2)*sum(abs(X(k)).^2), so each bin contributes abs(X(k)).^2/N^2; dividing that by the frequency bin width, df = Fs/N, converts power-per-bin into power-per-Hz, which is where the overall factor 1/(Fs*N) comes from.
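You can check this numerically. Here is a sketch of the same computation translated into NumPy (the parameter values follow the post; the random generator differs from MATLAB's `rng default`, so individual values will not match, but the Parseval identity holds regardless):

```python
import numpy as np

# NumPy translation of the MATLAB snippet from the post (a sketch).
Fs = 1000.0
N = 1000
t = np.arange(N) / Fs                 # conventional uniform sampling t = n/Fs
x = np.cos(2 * np.pi * 100 * t) + np.random.randn(N)

xdft = np.fft.fft(x)[: N // 2 + 1]    # keep the one-sided spectrum
psdx = (1.0 / (Fs * N)) * np.abs(xdft) ** 2
psdx[1:-1] *= 2                       # double all bins except DC and Nyquist

df = Fs / N                           # frequency bin width in Hz
# Parseval check: integrating the one-sided PSD over frequency
# recovers the signal's mean power.
print(np.sum(psdx) * df)              # area under the PSD
print(np.mean(np.abs(x) ** 2))        # mean power of x -- same value
```

The two printed numbers agree to floating-point precision, which is exactly why the 1/(Fs*N) factor is there: without the 1/Fs, the area under the curve would be Fs times the power, not the power.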
BTW, note what psdx(2:end-1) does: it doubles every bin except the first and the last, because the DC bin (the mean) and the Nyquist bin each appear only once in the two-sided spectrum. As written, the frequency vector freq = 0:Fs/N:Fs/2 lines up with the N/2+1 points of psdx. Only if you actually threw away the zero-frequency bin (i.e., the mean) would the frequencies start at df instead: freq = df:df:Fs/2.
There are a couple of other things to fix as well:
- No windowing, so spectral leakage will smear energy between bins.
- No ensemble or frequency averaging, so a single periodogram is a very noisy estimate of the PSD (its variance does not decrease with record length).
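Both fixes together are essentially Welch's method: window each segment, compute its periodogram, and average. A minimal NumPy sketch (the segment length, 50% overlap, and Hann window are my choices for illustration, not from the post; note the normalization uses the window power sum(win.^2) in place of N):

```python
import numpy as np

def welch_psd(x, Fs, nperseg=256):
    """Averaged, windowed one-sided PSD estimate (Welch's method, sketch)."""
    win = np.hanning(nperseg)
    U = np.sum(win ** 2)                   # window power, corrects the bias
    step = nperseg // 2                    # 50% overlap between segments
    segs = []
    for start in range(0, len(x) - nperseg + 1, step):
        seg = x[start:start + nperseg] * win
        X = np.fft.rfft(seg)
        p = np.abs(X) ** 2 / (Fs * U)      # replaces 1/(Fs*N) for a windowed segment
        p[1:-1] *= 2                       # one-sided: double all but DC and Nyquist
        segs.append(p)
    psd = np.mean(segs, axis=0)            # ensemble averaging reduces the variance
    freqs = np.arange(nperseg // 2 + 1) * Fs / nperseg
    return freqs, psd

np.random.seed(0)                          # reproducible noise for this sketch
Fs = 1000.0
t = np.arange(2000) / Fs
x = np.cos(2 * np.pi * 100 * t) + np.random.randn(len(t))
freqs, psd = welch_psd(x, Fs)
print(freqs[np.argmax(psd)])               # peak near 100 Hz
```

Averaging over the segments trades frequency resolution (df grows from Fs/N to Fs/nperseg) for a much lower variance of the estimate, which is the standard remedy for the noisy single-record periodogram above.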
