Reliable and extremely fast kernel density estimator for one-dimensional data;
Gaussian kernel is assumed and the bandwidth is chosen automatically;
Unlike many other implementations, this one is immune to problems
caused by multimodal densities with widely separated modes (see example). The
estimation does not deteriorate for multimodal densities, because we never assume
a parametric model for the data (such as the models underlying common rules of thumb).
data - a vector of data from which the density estimate is constructed;
n - the number of mesh points used in the uniform discretization of the
interval [MIN, MAX]; n has to be a power of two; if n is not a power of two, then
n is rounded up to the next power of two, i.e., n is set to n=2^ceil(log2(n));
the default value of n is n=2^12;
MIN, MAX - define the interval [MIN,MAX] on which the density estimate is constructed;
the default values of MIN and MAX are:
MIN=min(data)-Range/10 and MAX=max(data)+Range/10, where Range=max(data)-min(data);
bandwidth - the optimal bandwidth (Gaussian kernel assumed);
density - column vector of length 'n' with the values of the density
estimate at the grid points;
xmesh - the grid over which the density estimate is computed;
- If no output is requested, then the code automatically plots a graph of
the density estimate.
cdf - column vector of length 'n' with the values of the cdf evaluated at the grid points 'xmesh';
Kernel density estimation via diffusion
Z. I. Botev, J. F. Grotowski, and D. P. Kroese (2010)
Annals of Statistics, Volume 38, Number 5, pages 2916-2957
Example (run in command window):
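A minimal sketch of a typical call (the mesh size and the MIN/MAX padding of 5 below are just illustrative choices), using the trimodal test data quoted later in the comments:
data = [randn(100,1); randn(100,1)*2 + 35; randn(100,1) + 55];   % three well-separated modes
[bandwidth, density, xmesh, cdf] = kde(data, 2^14, min(data) - 5, max(data) + 5);
plot(xmesh, density);   % plot the estimated pdf over the mesh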
This function is useful and fast for estimating the density and CDF; how can I obtain the PDF from this method, other than plot(xmesh, density)?
Can I ask just for the bandwidth and then use it inside ksdensity to estimate the cdf?
Quick bug. If you only ask for one output (the bandwidth), the code throws an error. The problem is the line
"density(density<0)=eps; % remove negatives due to round-off error"
By moving this line within the "if (nargout>1)|(nargout==0)" statement, I was able to solve the problem and the code appears to be working well (a sketch of the guard follows below).
Thank you for this function!
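A sketch of the guard described in the comment above (not the author's exact code; the line that actually builds 'density' inside kde.m is elided here):
if (nargout > 1) | (nargout == 0)
    % ... 'density' is computed here ...
    density(density < 0) = eps;   % remove negatives due to round-off error
end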
Is there any way to calculate a performance measure of the estimate, e.g. the ISE, MISE, etc.?
Thank you, I learned a lot today from the paper, really appreciate it!
Hello, everybody!
I am new to MATLAB. I am estimating the density of 100 data points, but it returns a 128 x 128 density matrix. How can I obtain the density only at my 100 data points?
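One possible approach for the one-dimensional kde (a 128-by-128 output suggests the two-dimensional kde2d routine may have been used instead, in which case the same idea applies with interp2): evaluate the gridded estimate back at your own sample points.
[bw, density, xmesh] = kde(data);              % density on a uniform grid
f_at_data = interp1(xmesh, density, data);     % density evaluated at the original data points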
Thanks, very useful. Strangely I get very different results on Matlab 2011b and 2013b with the same data. On the recent version, the density distribution is more smooth and has a stronger tendency to not go to 0 at the ends of the distribution. I'm guessing this is due to changes in a Matlab function. Any ideas?
My data has no meaning for negative values, but the kde estimate assigns density to negative values, and even if I set the lower limit of x to zero it returns a large value at zero (I expect my density to start like x^2).
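One workaround (outside kde itself) is to estimate the density of log(data) and transform back, which tends to force the estimate to vanish at zero. A rough sketch, assuming all data are strictly positive:
y = log(data);                 % work on the log scale
[bw, fy, ymesh] = kde(y);
xmesh0 = exp(ymesh);
fx = fy ./ xmesh0;             % change of variables: f_X(x) = f_Y(log x) / x
plot(xmesh0, fx);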
I have encountered a problem with your implementation and am seeking your help. The PDFs obtained from translated versions of the signal (an image histogram, in this case) are not the same.
data = [23 23 23 22 22 22 21 22 23];   % first version
data = [53 53 53 52 52 52 51 52 53];   % same signal shifted by 30
MIN = 0;
MAX = 255;
n = 256;
The first vector gives a good unimodal estimate, whereas the second gives an incomprehensible result.
Please take a look at the density plots in each case.
This might be a problem with the bandwidth estimation but I don't know how to solve it.
Any help is appreciated.
Brilliant! Saves me a lot of computation time and I gain in precision :-)
Hi Steven. It is the integral of the pdf that should be 1, so if your x-interval is very small, the y-values of the pdf can be larger than 1. E.g. for a uniform distribution on x = [0, 0.01], y needs to be 100 to make the integral 1.
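A quick numerical illustration of that point (the sample below is just an arbitrary example):
data = 0.01*rand(1e4, 1);                  % samples packed into [0, 0.01]
[bw, density, xmesh] = kde(data);
max(density)                               % far larger than 1
trapz(xmesh, density)                      % still integrates to roughly 1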
I am using the Botev tools and do not understand why the density function has values greater than one. I am new to KDE and don't understand this yet. I figure a density function is supposed to add up to 1 when you integrate it?
Hi all, I have a problem with a pdf estimate and need your help. I am asked to verify that the probability of the rv Z (number of trials), given that there are exactly 2 successes, follows a negative binomial distribution. I generate random numbers many times and record the number of trials required for 2 successes each time. Then I find the frequency at which each value of Z occurs (e.g., repeating the experiment 1000 times, I find that the number of trials is 10 for exactly 2 successes only 60 times, hence the frequency is 60/1000 = 0.06). After that, I try to estimate the pdf of Z using a kernel and compare it with the plot from nbinpdf available in MATLAB, but the result is terrible. I'm thinking of using the kde function but do not know how to use it. I would really appreciate your help.
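A sketch of the direct check (Statistics Toolbox assumed; p = 0.2 is an arbitrary success probability). Since Z is discrete, relative frequencies are compared against nbinpdf rather than a kernel density estimate; note that Z - 2 is the number of failures before the 2nd success, which is what nbinpdf parameterizes:
p = 0.2;  nrep = 1e4;  Z = zeros(nrep, 1);
for i = 1:nrep
    successes = 0;  trials = 0;
    while successes < 2
        trials = trials + 1;
        successes = successes + (rand < p);
    end
    Z(i) = trials;                              % trials needed for exactly 2 successes
end
z    = (2:max(Z))';
freq = histc(Z, z) / nrep;                      % empirical P(Z = z)
plot(z, freq, 'o', z, nbinpdf(z - 2, 2, p), '-');
legend('simulated frequencies', 'nbinpdf(z-2, 2, p)');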
Zdravko's kernel density estimator works a lot quicker than traditional methods, although I am getting spurious artifacts because too low a bandwidth of 0.02 is selected (a third smaller than when I used another selector that minimised the expected L2 loss between the estimate and the underlying density). The latter bandwidth works smoothly but takes a bit longer. Also, I get negative densities at the outliers, so I adjusted the min/max boundaries. Is there a way to alter the estimator to avoid this issue?
Hi, it's really a fast and robust script. I have a question about its time complexity in terms of the data size n: is it O(n) or O(n^2)? Could someone provide some time complexity analysis? Many thanks!
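No formal analysis here, but a quick empirical check of how the running time grows with the sample size (grid size left at its default):
for N = [1e4 1e5 1e6]
    data = randn(N, 1);
    tic; [bw, density] = kde(data); t = toc;    % two outputs requested, so nothing is plotted
    fprintf('N = %8d   time = %.3f s\n', N, t);
end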
Question: is there any way to incorporate observation weights? I have calculated a weight based on other considerations (measurement error and goodness of fit, e.g.) for each data point in my distribution and want to incorporate this into the density estimate. Thanks in advance.
Thanks, It helps a lot.
Thanks a lot! It works very well.
Excellent script, very fast and efficient. I have a question: is there a similar script for m-dimensional data (with m > 2)?
My apologies... I think the 13 Jan 2011 update fixed that crash (the 100 length vector now works).
This has been excellent in general. A few times it has crashed at line 57
because "??? Error using ==> fzero at 283
The function values at the interval endpoints must differ in sign."
The data doesn't look obviously bad in these cases. A short example vector is [14.0534 13.2851 13.0951 13.1159 14.2221] (this is a shortened version of a length 100 vector that also crashed).
Due to numerical round-off error from the fft.m function, it is possible to get density values of -1.38e-018 (instead of 0) and cdf values slightly larger than 1.
If this is a problem, one can correct the output from kde by overwriting:
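For example, one way to do the overwriting mentioned above (clipping only; nothing else is changed):
[bandwidth, density, xmesh, cdf] = kde(data);
density(density < 0) = 0;            % clip tiny negative values caused by round-off
cdf = min(max(cdf, 0), 1);           % keep the cdf inside [0, 1]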
The author fixed the bug and it works without a problem. Good job!
The code crashes at line 57 when length(data) is small, e.g. kde(rand(100,1)) or kde(randn(30,1)).
Dear George, the kde function works as it should. There is no problem with the kde. What you call a problem is actually one of the main strengths of the routine.
By typing data = [d1;d1;d1;d1;d1;d2;d3];
you are creating DISCRETE data, because you create ties (the same values appear multiple times). For truly continuous data, there can be no ties or repeated values!
If you have ties, then the data CANNOT be continuous, by definition.
The kde.m CORRECTLY recognizes that the data you have provided is perfectly discrete, and since discrete data does not need smoothing, the selected bandwidth should be zero. kde.m is the only routine I am aware of that does this correctly; every other routine fails this BASIC theoretical test.
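If the underlying quantity really is continuous but was recorded on a coarse grid (integer image intensities, for instance), one pragmatic workaround is to break the ties with a small amount of jitter before calling kde; the jitter scale of half a quantization step below is only an illustrative choice:
jitter = 0.5;                                       % half an intensity step
data_c = data + jitter*(2*rand(size(data)) - 1);    % break the exact ties
kde(data_c, 256, 0, 255);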
Botev's kernel density estimator works admirably for me, except with weighted data, where the bandwidth selector "fails".
With data = [d1;d2;d3] (samples from three Gaussians), kde finds a bandwidth of about 0.6, which is reasonable.
Now weight the first Gaussian 5 times:
data = [d1;d1;d1;d1;d1;d2;d3];
kde now finds a bandwidth of 0.001, which is not reasonable.
Is there a way to enter weighted data sets or change the bandwidth estimator to avoid this problem?
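One hypothetical workaround (not a feature of kde itself): resample the data with replacement in proportion to the weights and add a little jitter so the resample contains no exact ties; randsample requires the Statistics Toolbox, and the resample size and jitter scale below are arbitrary choices.
M      = 5*numel(data);                              % size of the weighted resample
idx    = randsample(numel(data), M, true, w);        % w = vector of observation weights
data_w = data(idx) + 0.01*std(data)*randn(M, 1);     % tiny jitter to avoid exact ties
kde(data_w);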
Extremely fast and easy to use.
Could someone provide me with a code for nonparametric bayesian density estimation using a dirichlet prior? I am stuck.
Fantastic script - fast and easy to use!
Can someone provide me with the hierarchical token bucket (HTB) algorithm used to optimize bandwidth? Kind of stuck.
I am using your code above and my data produces density values well over 1 (i.e. > 500). I looked at your example
% data=[randn(100,1);randn(100,1)*2+35 ;randn(100,1)+55];
but even then, sum(density) = 235.6368, which obviously is greater than 1. It should be 1 if it's a pdf, right?
So, does your code generate a pdf? Or is it scaled in some other way? If it is not a pdf, do you know how to convert it to one? (Do you just normalize by sum(density)?)
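For what it's worth, the grid spacing is what reconciles the two views: sum(density) ignores dx, while the numerical integral over xmesh is approximately 1.
dx = xmesh(2) - xmesh(1);       % uniform grid spacing
sum(density) * dx               % roughly 1
trapz(xmesh, density)           % roughly 1 as well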
I think the new version just missed the heading line. Please check it. But still, good job. Thanks.
Yes, the method seems to scale the function so that it becomes a pdf. But my data do not represent a pdf. How can I modify the method so that it works for general (nondensity) estimation?
I was incorrect, but there does seem to be a scale factor on the density functions.
Great code but I believe line 83 should be:
in order to get an accurately scaled density function.
Thanks for sharing this code. However, using MATLAB 6.5 (R13) I had to debug it: input arguments I, a2, N not specified for the function fixed_point (for example).
Not bad, but this program only handles 1-D data. It is still useful for some problems, though. Anyway, thanks for sharing.
thanks a lot! it's good.
Highly recommend this! Very fast and robust.
Check this out. Much better than the currently available density estimation procedures!
corrected the title back to "kernel density estimator"; updated reference
bug fixes: 1) in some rare cases with small 'n', fzero used to fail; code now deals with these failures;
- the updated version additionally provides a cdf estimator as an output argument
- Published in the Annals of Statistics, 2010; see Section 5.
As pointed out by Dazhi Jiang in the comments section, the headline line was missing; this has been fixed.
updated the reference - now a journal paper submitted to the Annals of Statistics
Using higher order asymptotic approximations to achieve superior estimation accuracy for problems with few data points.