KLDIV
File ID: #13089 | File Size: 4.5 KB


23 Nov 2006 (Updated)

Kullback-Leibler or Jensen-Shannon divergence between two distributions.


File Information
Description

KLDIV Kullback-Leibler or Jensen-Shannon divergence between two distributions.

KLDIV(X,P1,P2) returns the Kullback-Leibler divergence between two distributions specified over the M variable values in vector X. P1 is a length-M vector of probabilities representing distribution 1, and P2 is a length-M vector of probabilities representing distribution 2. Thus, the probability of value X(i) is P1(i) for distribution 1 and P2(i) for distribution 2. The Kullback-Leibler divergence is given by:

   KL(P1(x),P2(x)) = sum[P1(x).log(P1(x)/P2(x))]
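
In MATLAB terms this is a one-line sum over the probability vectors. A minimal sketch, assuming base-2 logarithms (which the example result further down is consistent with) and no zero-valued probabilities:

   % Hypothetical direct evaluation of the sum above (no zero handling)
   KL = sum( P1 .* log2(P1 ./ P2) );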

If X contains duplicate values, there will be a warning message, and these values will be treated as distinct values. (I.e., the actual values do not enter into the computation, but the probabilities for the two duplicate values will be considered as probabilities corresponding to two unique values.) The elements of probability vectors P1 and P2 must each sum to 1 +/- .00001.

A "log of zero" warning will be thrown for zero-valued probabilities. Handle this however you wish. Adding 'eps' or some other small value to all probabilities seems reasonable. (Renormalize if necessary.)

KLDIV(X,P1,P2,'sym') returns a symmetric variant of the Kullback-Leibler divergence, given by [KL(P1,P2)+KL(P2,P1)]/2. See Johnson and Sinanovic (2001).

KLDIV(X,P1,P2,'js') returns the Jensen-Shannon divergence, given by [KL(P1,Q)+KL(P2,Q)]/2, where Q = (P1+P2)/2. See the Wikipedia article for "Kullback–Leibler divergence". This is equal to 1/2 the so-called "Jeffrey divergence." See Rubner et al. (2000).
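
As a sanity check, both optional variants can be reproduced with plain KL calls. A minimal sketch, assuming X, P1, and P2 are defined as in the calling syntax above:

   % Hypothetical checks for the two optional variants
   KLsym = (kldiv(X,P1,P2) + kldiv(X,P2,P1))/2;   % should match kldiv(X,P1,P2,'sym')
   Q     = (P1 + P2)/2;                           % mixture distribution
   KLjs  = (kldiv(X,P1,Q) + kldiv(X,P2,Q))/2;     % should match kldiv(X,P1,P2,'js')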

EXAMPLE: Let the event set and probability sets be as follows:
   X = [1 2 3 3 4]';
   P1 = ones(5,1)/5;
   P2 = [0 0 .5 .2 .3]' + eps;
 
Note that the event set here has duplicate values (two 3's). These will be treated as DISTINCT events by KLDIV. If you want these to be treated as the SAME event, you will need to collapse their probabilities together before running KLDIV (a short sketch follows the example below). One way to do this is to use UNIQUE to find the set of unique events, and then iterate over that set, summing probabilities for each instance of each unique event. Here, we just leave the duplicate values to be treated independently (the default):
   KL = kldiv(X,P1,P2);
   KL =
        19.4899

Note also that we avoided the log-of-zero warning by adding 'eps' to all probability values in P2. We didn't need to renormalize because we're still within the sum-to-one tolerance.
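
If you do want the two 3's treated as the same event, here is a minimal sketch of the collapsing step described above, using ACCUMARRAY to sum the probabilities instead of an explicit loop (the variable names Xu, P1u, P2u are illustrative only):

   % Hypothetical pre-processing: merge probabilities of duplicate events
   [Xu,ia,idx] = unique(X);          % unique event values; idx maps X onto Xu
   P1u = accumarray(idx, P1);        % summed probabilities per unique event
   P2u = accumarray(idx, P2);
   KLu = kldiv(Xu, P1u, P2u);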

REFERENCES:
1) Cover, T.M. and J.A. Thomas. "Elements of Information Theory," Wiley, 1991.
2) Johnson, D.H. and S. Sinanovic. "Symmetrizing the Kullback-Leibler distance." IEEE Transactions on Information Theory (Submitted).
3) Rubner, Y., Tomasi, C., and Guibas, L. J., 2000. "The Earth Mover's distance as a metric for image retrieval." International Journal of Computer Vision, 40(2): 99-121.
4) "Kullback–Leibler divergence." Wikipedia, The Free Encyclopedia. http://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence

See also: MUTUALINFO, ENTROPY

MATLAB release: MATLAB 7.3 (R2006b)
Comments and Ratings (9)
12 Oct 2012 Wajahat

Thanks a lot. It's a good implementation.

16 Feb 2012 Tomasz Galka  
18 Oct 2011 Omid G  
01 Jun 2010 Dushyant Kumar

Thanks for writing KLD for two discrete random variables and thanks for sharing.

27 Apr 2010 Yuval Aviel

Bao: See "A 'log of zero' warning will be thrown for zero-valued probabilities. " in the text above.

13 Apr 2010 Do Quoc Bao

Has anyone taken care of the log of 0? Because in this formula, we have log(pi) - log(qi)!

30 Sep 2008 Bin Liu

This implementation may be fine for two discrete multinomial distributions. A very good set of notes about this:
http://www.snl.salk.edu/~shlens/pub/notes/kl.pdf
I have not found any general solution to this problem.

09 Sep 2008 Atif Evren  
08 Apr 2008 Alfred Ultsch

This is NOT an implementation of the Kullback-Leibler divergence (KLD) for probability densities P1 and P2. It may be an implementation of KLD for 2 discrete random variables. In this case the vector X is unnecessary; X is in fact only used by the code to check the uniqueness of the P-values.
