How to estimate non-Gaussianity using negentropy?

I would truly appreciate it if someone could direct me. I am trying to estimate non-Gaussianity using negentropy. I have already measured the entropy, but I don't know how to extract the negentropy. Is negentropy the inverse of entropy? I would appreciate your help.

 Accepted Answer

You will find several questions and answers about negentropy if you search MATLAB Answers for negentropy.
Negentropy is not the reverse of entropy. Suppose your signal is y. The negentropy of the signal, J(y), is the entropy of a Gaussian noise signal with the same mean and variance as y, minus the entropy of y:
J(y) = H(yGauss) - H(y)
where H(y) = entropy of y and H(yGauss) = entropy of the Gaussian signal. A Gaussian signal has the maximum possible entropy for a given variance, so J(y) will be non-negative. The bigger J(y) is, the more non-Gaussian y is. You said you already have code to measure the entropy of your signal. Make a Gaussian signal, yGauss, with the same mean and variance as y, use your code to measure its entropy and the entropy of your signal y, and compute J(y). Here's how to make a Gaussian signal with the same mean and variance as y:
yGauss=randn(size(y))*std(y)+mean(y);
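For readers outside MATLAB, the recipe above can be sketched in plain Python (standard library only). The `hist_entropy` helper, the bin count, and the test signal are my own illustrative choices, not part of the original answer:

```python
import math
import random

def hist_entropy(x, nbins=10):
    """Histogram-based differential entropy estimate, in bits."""
    lo, hi = min(x), max(x)
    width = (hi - lo) / nbins
    counts = [0] * nbins
    for v in x:
        counts[min(int((v - lo) / width), nbins - 1)] += 1
    n = len(x)
    h = 0.0
    for c in counts:
        if c > 0:
            p = c / (n * width)            # density estimate in this bin
            h -= p * math.log2(p) * width  # -integral of p*log2(p)
    return h

random.seed(0)
y = [random.random() for _ in range(10000)]              # the "signal" (uniform noise)
mu = sum(y) / len(y)
sd = math.sqrt(sum((v - mu) ** 2 for v in y) / (len(y) - 1))
y_gauss = [random.gauss(mu, sd) for _ in range(len(y))]  # same mean and variance as y
J = hist_entropy(y_gauss) - hist_entropy(y)              # negentropy J(y)
print(J)
```

Because the same estimator is applied to both signals, much of its bias cancels in the subtraction; for a uniform signal J should come out positive, roughly 0.25 bits in theory.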
Good luck.

5 Comments

Thank you so much for your direction. My data is attached. I used this code to estimate the entropy and negentropy, and I got the exact opposite value. I then replaced the first calculation with the new one and got a new value. Is that right? Sorry for the inconvenience in advance, as I am really new to MATLAB :(
clc;
clear all;
close all;
data=xlsread('1');
RC=data(:,1);
r=RC(1:1:end);
p1 = hist(r);
e6 = -sum(p1.*log2(p1)) %entropy
Var=var(r); %variance(RCs)
negen= 1/2*log(2*pi*Var)-e6 %negentropy
% but now, I should replace this???
clc;
clear all;
close all;
data=xlsread('1');
RC=data(:,1);
r=RC(1:1:end);
p1 = hist(r);
e1 = -sum(p1.*log2(p1))
SD=std(r);
Mu=mean(r);
yGauss=randn(size(r))*SD+Mu; %randn (Gaussian), not rand (uniform)
p2 = hist(yGauss);
e2 = -sum(p2.*log2(p2)) %entropy Gauss
negen= e2-e1 %negentropy
I had not worked with the entropy of a signal, or negentropy, until reading your post. Where did you find the code fragment below?
p1 = hist(r);
e6 = -sum(p1.*log2(p1)) %entropy
Var=var(r); %variance(RCs)
negen= 1/2*log(2*pi*Var)-e6 %negentropy
It is evident that e6 in your code is supposed to equal what I called H(y), the entropy of y. It is also evident that 1/2*log(2*pi*Var) is supposed to equal what I called H(yGauss), the entropy of a Gaussian signal of equal variance. If those two suppositions are correct, then your "negen" calculation is also correct.
That is why I am curious about the source of the code above.
The line
e6 = -sum(p1.*log2(p1)) %entropy
will compute the entropy, if p1 is a probability density, and if the width of the bins is unity. But I don't think p1 is a density, because if I add it up, I don't get unity, and I should, if it is a density. Look at this example and result:
>> x=randn(1,1000);
>> p1=hist(x);
>> disp(sum(p1))
1000
Therefore I am worried that e6, as calculated in your code, is not really the entropy.
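The normalization point generalizes beyond MATLAB. Here is a small Python sketch (illustrative, standard library only; the bin count is arbitrary) showing that raw histogram counts sum to N, while counts divided by N times the bin width integrate to one, as a density must:

```python
import random

random.seed(1)
x = [random.gauss(0, 1) for _ in range(1000)]
nbins = 10
lo, hi = min(x), max(x)
width = (hi - lo) / nbins
counts = [0] * nbins
for v in x:
    counts[min(int((v - lo) / width), nbins - 1)] += 1

print(sum(counts))                        # 1000: raw counts, like MATLAB's hist()
density = [c / (len(x) * width) for c in counts]
print(sum(d * width for d in density))    # sums to 1 (up to rounding): a proper density
```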
@William Rose, Thank you for your explanations. I got the entropy code from this site. Please see here.
"It is evident that e6 in your code is supposed to equal what I called H(y), the entropy of y. It is also evident that 1/2*log(2*pi*Var) is supposed to equal what I called , the entropy of a Gaussian signal of equal variance ". Yes. That is right.
It seems I should replace the probability and recalculate the entropy. I truly appreciate your valuable tips.
If the formulas in the code are correct, then negentropy should equal 0 if the signal is Gaussian random noise, which means e6 should equal 1/2*log(2*pi*Var). Let's try it.
y=randn(1,1000); %Gaussian noise with var=stdev=1
p1=hist(y);
e6 = -sum(p1.*log2(p1)); %entropy
Var=var(y); %variance(RCs)
fprintf('e6=%.3f, 1/2*log(2*pi*Var)=%.3f\n',e6,0.5*log(2*pi*Var));
e6=-7498.099, 1/2*log(2*pi*Var)=0.894
We see that e6 is not the expected value, so something is wrong with the formula.
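For comparison, here is a Python sketch (illustrative; the sample size and bin count are arbitrary choices) of the same check with a properly normalized histogram. With a density-based estimate, the measured entropy lands close to the theoretical Gaussian value of 0.5*log2(2*pi*e*Var) bits:

```python
import math
import random

random.seed(2)
n = 100000
y = [random.gauss(0, 1) for _ in range(n)]   # Gaussian noise, var ~ 1
nbins = 50
lo, hi = min(y), max(y)
width = (hi - lo) / nbins
counts = [0] * nbins
for v in y:
    counts[min(int((v - lo) / width), nbins - 1)] += 1

# Entropy in bits from the density-normalized histogram
Hent = -sum((c / (n * width)) * math.log2(c / (n * width)) * width
            for c in counts if c > 0)
mu = sum(y) / n
Var = sum((v - mu) ** 2 for v in y) / (n - 1)
theory = 0.5 * math.log2(2 * math.pi * math.e * Var)   # ~2.047 bits for Var ~ 1
print(Hent, theory)
```

Unlike the unnormalized count-based e6 above, the two printed values agree closely.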
@William Rose, I got the equation from a distinguished paper, and I am looking for another equation. Thank you for your point!


More Answers (1)

I figured it out. Here is an example of a signal y, and how to calculate its entropy (Hent) and its negentropy (J). You may get the values for y from a file - as in your code, where you read it from a spreadsheet. In this case, I am generating the signal y by calling the uniform random number generator, rand().
N=1024; %signal length
y=sqrt(12)*(rand(1,N)-.5); %uniform random noise, mean=0, stdev=1
h=histogram(y,'Normalization','pdf'); %estimate f(x)=the p.d.f.
Hent=-h.BinWidth*sum(h.Values(h.Values>0).*log(h.Values(h.Values>0))/log(2)); %compute entropy
J=log(std(y))/log(2)+2.0471-Hent; %compute negentropy
fprintf('Uniform random noise: Hent=%.4f, J=%.4f\n',Hent,J); %display results
The attached document explains the rationale for the code above.
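For reference, the constant 2.0471 in the code appears to be one half of log2(2*pi*e): the differential entropy, in bits, of a Gaussian with standard deviation sigma is

```latex
H(y_{\mathrm{Gauss}}) = \tfrac{1}{2}\log_2\!\left(2\pi e\,\sigma^2\right)
                      = \log_2\sigma + \tfrac{1}{2}\log_2(2\pi e)
                      \approx \log_2\sigma + 2.0471,
```

so the line `J=log(std(y))/log(2)+2.0471-Hent` is simply J = H(yGauss) - Hent, with log2(std(y)) written as log(std(y))/log(2).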
I am attaching a script which generates four signals: square wave, sine wave, uniform noise, Gaussian noise. The probability density of each one is displayed (below). It is evident from the plots that the density is most un-Gaussian at the top and gradually becomes Gaussian at the bottom. The entropy and negentropy of each signal are computed using the formulas above. The values are as expected: negentropy is biggest for the top plot (most un-Gaussian) and smallest (approximately zero) for the bottom. The signals all have the same mean and standard deviation.

3 Comments

@William Rose, Really appreciated. I don't know how I should express my appreciation for your brilliant and wonderful instruction. I never expected this fantastic solution. I applied it to my dataset and the outcomes are very reasonable. Super perfect!!
You're welcome @Maria Amr! If you like the answers you get on this site, then give them a thumbs-up vote.
I have realized something else: Entropy and negentropy are insensitive to the time order of the data. If I take all the values from a sine wave and shuffle their order randomly, it will look like random noise, but the histogram of values will be the same, so its entropy and negentropy will also be the same. Likewise, I could take all the values from a Gaussian random noise signal and re-order them into ascending order. This would look like a very different, and totally non-random, signal. But its entropy and negentropy would be unchanged by this transformation. Therefore entropy and negentropy are not sufficient for evaluating Gaussian randomness. How can you evaluate the time-ordering aspect of randomness? You could estimate the autocorrelation function. You could compute the entropy of the power spectrum of the signal.
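The order-insensitivity claim is easy to verify. Here is an illustrative Python sketch (standard library only; the helper is my own) showing that shuffling a signal leaves its histogram, and hence any histogram-based entropy estimate, exactly unchanged:

```python
import math
import random

def hist_entropy(x, nbins=20):
    """Histogram entropy estimate in bits (density normalization)."""
    lo, hi = min(x), max(x)
    width = (hi - lo) / nbins
    counts = [0] * nbins
    for v in x:
        counts[min(int((v - lo) / width), nbins - 1)] += 1
    n = len(x)
    return -sum((c / (n * width)) * math.log2(c / (n * width)) * width
                for c in counts if c > 0)

y = [math.sin(2 * math.pi * k / 1000) for k in range(1000)]  # one cycle of a sine
shuffled = y[:]
random.seed(3)
random.shuffle(shuffled)                    # time order destroyed, values identical
print(hist_entropy(y) == hist_entropy(shuffled))   # True: identical histograms
```

The shuffled signal looks like noise in the time domain, yet its entropy and negentropy are bit-for-bit the same as the sine wave's.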
Good luck!
@William Rose, Excellent! I am going to try it with different sampling rates. Thank you!


Asked: on 22 May 2021
Commented: on 28 May 2021
