Code covered by the BSD License
15 Oct 2010
Fully vectorized implementation of normalized mutual information
Normalized mutual information is often used for evaluating clustering results, information retrieval, feature selection, etc. This is an optimized implementation of the function that contains no for-loops.
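For readers asking what the function computes: NMI is the mutual information of two labelings divided by a normalizer built from their entropies. A minimal sketch of the idea, written here in Python/NumPy rather than the original MATLAB (the function name `nmi` and the square-root normalization are assumptions; the .m file may normalize differently):

```python
import numpy as np

def nmi(labels, results):
    """Normalized mutual information between two labelings.

    Vectorized: builds a contingency table of (label, result) pairs
    and computes MI / sqrt(H(labels) * H(results)) without loops.
    Illustrative sketch only, not the original .m implementation.
    """
    labels = np.asarray(labels).ravel()
    results = np.asarray(results).ravel()
    assert labels.size == results.size, "inputs must have equal length"
    n = labels.size
    # Map arbitrary label values to 0..k-1 indices
    _, li = np.unique(labels, return_inverse=True)
    _, ri = np.unique(results, return_inverse=True)
    # Joint count table (contingency matrix)
    C = np.zeros((li.max() + 1, ri.max() + 1))
    np.add.at(C, (li, ri), 1)
    Pxy = C / n                             # joint distribution
    Px = Pxy.sum(axis=1, keepdims=True)     # marginal of labels
    Py = Pxy.sum(axis=0, keepdims=True)     # marginal of results
    with np.errstate(divide='ignore', invalid='ignore'):
        MI = np.nansum(Pxy * np.log(Pxy / (Px * Py)))
        Hx = -np.nansum(Px * np.log(Px))
        Hy = -np.nansum(Py * np.log(Py))
    return MI / np.sqrt(Hx * Hy)
```

For identical labelings NMI is 1; for independent labelings it is 0, regardless of how the cluster indices are numbered.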
This file inspired Information Theory Toolbox.
Why is it important to verify the integrity of the inputs, i.e., that length(labels)==length(results)?
Isn't it possible to use NMI for a clustering whose k differs from the number of labels?
I changed the script for my own use so that it accepts this. Should I upload it?
Please add more comments, especially for the inputs. Thanks.
It doesn't work for large matrices, as it runs out of memory at lines 16 and 17.
I would be grateful if more comments were provided.
Good job, I used your function to validate mine :). Sometimes in research you cannot trust even yourself.
Can you give a detailed, documented .m file so that we can get an easy understanding of NMI?
I agree, an example... and a little description of the inputs would be strongly appreciated! Thank you!
Could you provide some examples of running your code to prove it is correct?