Expectation Maximization of Gaussian Mixture Models via CUDA
Updated 21 May 2009
This is a parallel implementation of the Expectation-Maximization (EM) algorithm for multidimensional Gaussian mixture models, designed to run on NVIDIA graphics cards that support CUDA. On my machine it yields speedups of up to 170x (16 dimensions, 16 clusters, 1,000,000 data points).
See the report available at http://andrewharp.com/gmmcuda for more information.
The interesting code is all in gpugaumixmod.h and gpugaumixmod_kernel.h.
The reference CPU implementation is in cpugaumixmod.h.
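To illustrate what the reference implementation computes, here is a minimal sketch of the EM loop for a one-dimensional, diagonal-variance mixture in plain Python. This is illustrative only: the actual cpugaumixmod.h / gpugaumixmod.h code is multidimensional and CUDA-parallelized, and the function name here is invented.

```python
import math

def em_gmm_1d(data, k, iters=50):
    """Illustrative 1-D EM loop for a Gaussian mixture (not the project's API)."""
    xs = sorted(data)
    # Initialize means spread across the sorted data; unit variances; uniform weights.
    mu = [xs[i * (len(xs) - 1) // max(k - 1, 1)] for i in range(k)]
    var = [1.0] * k
    w = [1.0 / k] * k
    n = len(data)
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in data:
            p = [w[j] * math.exp(-(x - mu[j]) ** 2 / (2 * var[j]))
                 / math.sqrt(2 * math.pi * var[j]) for j in range(k)]
            s = sum(p)
            resp.append([pj / s for pj in p])
        # M-step: re-estimate weights, means, and variances from responsibilities.
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / n
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var[j] = sum(r[j] * (x - mu[j]) ** 2 for r, x in zip(resp, data)) / nj
            var[j] = max(var[j], 1e-6)  # guard against variance collapse
    return w, mu, var
```

The GPU version parallelizes the per-point work of the E-step (and the per-component reductions of the M-step) across CUDA threads, which is where the speedup comes from.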
It can be integrated into any C program on a CUDA-enabled system. Additionally, MATLAB integration is provided in gmm.cu.
Since the initial release I have added simultaneous random restarts; experiment1 now takes advantage of this.
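The restart-selection idea can be sketched as follows. This is a hypothetical host-side harness (`best_of_restarts`, `fit`, and `score` are invented names, not the project's API); the actual implementation evaluates the restarts simultaneously on the GPU rather than in a serial loop.

```python
import math
import random

def best_of_restarts(fit, score, n_restarts, seed=0):
    """Run `fit` from several random initializations and keep the best result.

    Hypothetical harness illustrating random restarts: `fit` maps a random
    draw to fitted parameters, `score` rates them (e.g. log-likelihood).
    """
    rng = random.Random(seed)
    best, best_score = None, -math.inf
    for _ in range(n_restarts):
        params = fit(rng.random())   # one restart from a fresh random init
        s = score(params)
        if s > best_score:           # keep the highest-scoring restart
            best, best_score = params, s
    return best, best_score
```

Since EM only converges to a local optimum, running several restarts and keeping the highest-likelihood fit is the standard remedy; doing them simultaneously amortizes the GPU launch overhead.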
The config files are set up to run on my 64-bit Windows Vista machine, but it's just a standard CUDA kernel underneath, so it should be portable. A precompiled 64-bit Windows version is included.
See compile.m for the command I use to compile the CUDA/Mex files.
The toolkit containing the files you'll need to compile on your platform is available here: http://developer.nvidia.com/object/matlab_cuda.html
Once compiled, start off by running gmm_example in Matlab to see it in action.
See experiment1, experiment2, and experiment3 for ready-to-run speed-analysis experiments.
Cite as: Andrew Harp (2023). Expectation Maximization of Gaussian Mixture Models via CUDA (https://www.mathworks.com/matlabcentral/fileexchange/24020-expectation-maximization-of-gaussian-mixture-models-via-cuda), MATLAB Central File Exchange.
Changelog:
- Now handles simultaneous random restarts.
- Fixed synchronization issues in the kernel.