version 1.0.0.0 (5.38 KB) by Mo Chen

Variational Bayes method (mean field) for GMM that can automatically determine the number of components

This is a variational Bayesian inference method for the Gaussian mixture model. Unlike the EM algorithm (maximum likelihood estimation), it can automatically determine the number of mixture components k. Please try the following code for a demo:

close all; clear;
d = 2;
k = 3;
n = 2000;
[X,z] = mixGaussRnd(d,k,n);
plotClass(X,z);

m = floor(n/2);
X1 = X(:,1:m);
X2 = X(:,(m+1):end);

% VB fitting
[y1, model, L] = mixGaussVb(X1,10);
figure;
plotClass(X1,y1);
figure;
plot(L)

% Predict testing data
[y2, R] = mixGaussVbPred(model,X2);
figure;
plotClass(X2,y2);

The data set has 3 clusters. You only need to set a number (say 10) that is larger than the intrinsic number of clusters; the algorithm will automatically find the proper k.
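One quick way to verify this on the demo above (a sketch that uses only the demo's own outputs; y1 is the vector of cluster labels returned by mixGaussVb):

```matlab
% Count the distinct labels the VB fit actually used;
% components whose responsibility collapsed get no labels at all
kFound = numel(unique(y1));
fprintf('VB recovered %d components\n', kFound);
```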

A detailed description of the algorithm can be found in the reference:

Pattern Recognition and Machine Learning by Christopher M. Bishop (p. 474)

Upon request, I provided the prediction function for out-of-sample inference.

This function is now a part of the PRML toolbox (http://www.mathworks.com/matlabcentral/fileexchange/55826-pattern-recognition-and-machine-learning-toolbox).

Mo Chen (2021). Variational Bayesian Inference for Gaussian Mixture Model (https://www.mathworks.com/matlabcentral/fileexchange/35362-variational-bayesian-inference-for-gaussian-mixture-model), MATLAB Central File Exchange. Retrieved .

Created with
R2016a

Compatible with any release

**Inspired by:**
EM Algorithm for Gaussian Mixture Model (EM GMM), Pattern Recognition and Machine Learning Toolbox

**Inspired:**
GMMVb_SB(X), Dirichlet Process Gaussian Mixture Model, EM Algorithm for Gaussian Mixture Model (EM GMM)


Carlas: Hi Mo,

What do I cite?

Best,

Carlas

Hamed Nikbakht: Thank you very much for the code. I have a question in this regard.

How can we extract the mixing coefficients in your code (like the ones in a standard Gaussian mixture model)? I cannot find them, unfortunately. Your help would be greatly appreciated.

Manjushree Aithal: I tried running the code but I'm getting an error: gmrnd function not found. Can someone guide me with this?

Thanks!

imene ben rejeb: Thank you for this helpful topic.

Rini: Hi,

I have two-cluster data in 1D. Can I use it as is, just by changing k? I ran the code, and when I plot L the curve is not stable after the initial burn-in.

Did I miss something?

Thank you,

Sharmi

catuitous: I checked Bishop and it appears the mixing coefficient/weight can be calculated by simply averaging the appropriate model.R column.

By adding this code at line 36, I added pre-calculated weights to the output model.

% Addition which calculates component mixing coefficients and stores them
model.weights = zeros(1, size(model.R, 2));
for i = 1:size(model.R, 2)
    model.weights(1, i) = sum(model.R(:,i)) / size(model.R, 1);
end
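The loop above can be collapsed into a single call that averages each column of model.R (same result, assuming model.R holds the per-point responsibilities as described above):

```matlab
% Column means of the responsibilities = estimated mixing weights
model.weights = mean(model.R, 1);
```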

catuitous: Did anyone ever manage to interpret the model data that this method provides? Reverse engineering the code is very difficult!

I believe that model.m holds the Gaussian means and that model.M provides the covariance matrices (I am just guessing).

However, I cannot determine how to get/calculate the weights for each mixture component. It's driving me crazy. Anyone have any ideas?

Johannes: Hi,

I solved my problem by removing components whose weight approaches zero, by adding this after line 128:

% remove components that are zero
nk = sum(R,1); % 10.51
idx = find(nk < 0.01);
% idx = find(nk < 4);
if ~isempty(idx)
    R(:,idx) = [];
    alpha(:,idx) = [];
    kappa(:,idx) = [];
    m(:,idx) = [];
    v(:,idx) = [];
    M(:,:,idx) = [];
    % Renormalization
    tmp = sum(R,2);
    R = bsxfun(@times, R, 1./tmp);
    logR = log(R);
    model.alpha = alpha;
    model.kappa = kappa;
    model.m = m;
    model.v = v;
    model.M = M; % Wishart: M = inv(W)
end

I also added some noise to the matrix at line 176, as suggested below:

V = chol(Xs*Xs'/nk(i) + diag(ones(1,d)*1e-3)); % add some noise to avoid singular matrix

Hope this helps somebody else too. By the way, is there a better way to exchange this information? I would have sent you the updated file, but I couldn't find out how to do that here.

Johannes: Hi,

This is a great help for me. However, I also have the problem with multidimensional data.

Here is how to reproduce the error:

I create some dummy data (10 Gaussian clusters in 3D):

numClusters = 10;
allClusters = [];
for ii = 1:numClusters
    sigmaX = 20;
    sigmaY = 20;
    sigmaZ = 80;
    sigmas = diag([sigmaX sigmaY sigmaZ]);
    imageSize = diag([1000 1000 20]);
    simGauss = sigmas * randn(3,2e2);
    mu = imageSize * rand(3,1);
    cluster = bsxfun(@plus, simGauss, mu);
    allClusters = cat(2, allClusters, cluster);
end
label = vbgm(allClusters,20);

Error using chol
Matrix must be positive definite.

Sometimes it works, but most of the time it doesn't. I traced the problem to entries of nk becoming zero and a subsequent division by zero.

I have tried replacing the zero values with small ones, but that didn't help. I have no solution so far :-/

Bogdan: Thanks for this! Is there any way to use it with multi-dimensional data points? It's giving me an error about the matrix not being positive definite.

Thanks.

Jose Caballero: Thanks for the code!

Could you explain how to derive the variances of the components from the output?

Jia Tsing Ng: In response to Venkat R,

It's common to have that problem. The easiest 'hack' is to add some noise to the diagonal of your matrix before chol-ing it.
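A minimal sketch of that jitter trick, applied to the scatter term that fails in the error trace below (the jitter scaling is illustrative, not from the toolbox):

```matlab
% Add a small ridge to the diagonal so chol succeeds on a
% near-singular scatter matrix; scale the jitter to the data
S = Xs*Xs'/nk(i);                       % the term chol rejects in vbound
jitter = 1e-6 * trace(S)/size(S,1);     % tiny fraction of the average variance
V = chol(S + jitter*eye(size(S,1)));    % now positive definite
```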

Paul: Easy to use and quick for my data (~4000 pts, yielding ~20 clusters).

That having been said, is there any chance of getting more documentation on the outputs?

I need to take the identified clusters and use the model to classify future data. I've read and understood Bishop, but it is still difficult to reverse engineer the code (the variable names are terse) to actually use the identified model.

Andrew: Hi Chandra,

It will likely work if you replace every ~ in the file with an unused variable name (e.g. 'trash'). In newer versions of MATLAB, a ~ can be used in place of an output variable when none is desired.

Chandra Shekhar: Hello Mo Chen.

When I run this code in MATLAB R2009a, I get the following error:

??? Error: File: vbgm.m Line: 33 Column: 3
Expression or statement is incorrect--possibly unbalanced (, {, or [.

Could you explain this to me, please?

Venkat R: Hi,

I was able to run the test case successfully. But when I give my data as input, I get the error:

Error using ==> chol
Matrix must be positive definite.
Error in ==> vbgm>vbound at 176
V = chol(Xs*Xs'/nk(i));
Error in ==> vbgm at 28
L(t) = vbound(X,model,prior)/n;

My data (286x162) doesn't contain any all-zero row or column, although it does contain zeros in a few places. Does this method have a limitation?

regards,

Venkat

Nicolás de la Maza: Hello Michael,

Could you explain the outputs of this algorithm, please? I want to know where to find the mean, covariance, and mixture component weight vectors in order to compare with MATLAB's function gmdistribution.fit.

Best regards!