Naive Bayes models assume that observations have some multivariate distribution given class membership, but that the predictors or features composing the observation are independent given the class. This framework can accommodate a complete feature set such that an observation is a set of multinomial counts.
To train a naive Bayes model, use the fitcnb function at the command line. After training, predict labels or estimate posterior probabilities by passing the model and predictor data to the predict function.
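The train-then-predict workflow can be illustrated outside MATLAB with a minimal pure-Python sketch of a Gaussian naive Bayes classifier. The function names (`fit_gaussian_nb`, `posterior`) are illustrative, not part of any toolbox API; the sketch assumes numeric features and estimates per-class priors, means, and variances, then combines per-feature Gaussian likelihoods under the independence assumption.

```python
import math

def fit_gaussian_nb(X, y):
    """Estimate class priors and per-feature Gaussian parameters.
    Features are treated as conditionally independent given the class."""
    classes = sorted(set(y))
    model = {}
    for c in classes:
        rows = [x for x, label in zip(X, y) if label == c]
        prior = len(rows) / len(y)
        means = [sum(col) / len(rows) for col in zip(*rows)]
        variances = [sum((v - m) ** 2 for v in col) / len(rows)
                     for col, m in zip(zip(*rows), means)]
        model[c] = (prior, means, variances)
    return model

def posterior(model, x):
    """Return normalized class posteriors P(class | x) for one observation."""
    log_scores = {}
    for c, (prior, means, variances) in model.items():
        log_p = math.log(prior)
        for v, m, var in zip(x, means, variances):
            # Log of the univariate Gaussian density for each feature;
            # summing logs multiplies the independent likelihoods.
            log_p += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
        log_scores[c] = log_p
    total = sum(math.exp(s) for s in log_scores.values())
    return {c: math.exp(s) / total for c, s in log_scores.items()}

# Toy usage: two well-separated classes in two dimensions.
X = [[1.0, 2.0], [1.2, 1.9], [3.0, 4.0], [3.1, 4.2]]
y = [0, 0, 1, 1]
model = fit_gaussian_nb(X, y)
probs = posterior(model, [1.1, 2.0])  # near class 0's training points
```

Working in log space until the final normalization avoids underflow when many features contribute small likelihoods.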
While there are many Statistics and Machine Learning Toolbox™ algorithms for supervised learning, most use the same basic workflow for obtaining a predictive model.
Classification algorithms vary in speed, memory usage, interpretability, and flexibility.
The naive Bayes classifier is designed for use when predictors are independent of one another within each class, but it appears to work well in practice even when that independence assumption is not valid.
Learn how the naive Bayes classification model supports normal (Gaussian), kernel, multinomial, and multivariate multinomial predictor conditional distributions.
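To make the multinomial case concrete, here is a hedged pure-Python sketch of a multinomial naive Bayes classifier, the variant suited to observations that are vectors of counts (for example, word counts in a document). The names `fit_multinomial_nb` and `predict_mnb`, and the `alpha` smoothing parameter, are assumptions for this illustration, not toolbox identifiers.

```python
import math

def fit_multinomial_nb(X, y, alpha=1.0):
    """Estimate class priors and smoothed per-class term probabilities.
    X rows are count vectors; alpha is the Laplace smoothing constant."""
    classes = sorted(set(y))
    n_features = len(X[0])
    model = {}
    for c in classes:
        rows = [x for x, label in zip(X, y) if label == c]
        prior = len(rows) / len(y)
        totals = [sum(col) for col in zip(*rows)]
        denom = sum(totals) + alpha * n_features
        # Smoothed probability of each term within class c.
        theta = [(t + alpha) / denom for t in totals]
        model[c] = (prior, theta)
    return model

def predict_mnb(model, x):
    """Return the class with the highest multinomial log posterior score."""
    best, best_score = None, -math.inf
    for c, (prior, theta) in model.items():
        # Each count multiplies the log-probability of its term.
        score = math.log(prior) + sum(k * math.log(p) for k, p in zip(x, theta))
        if score > best_score:
            best, best_score = c, score
    return best

# Toy usage: three "terms", two classes with different term profiles.
X = [[3, 0, 1], [2, 1, 0], [0, 4, 1], [1, 3, 2]]
y = ["spam", "spam", "ham", "ham"]
model = fit_multinomial_nb(X, y)
label = predict_mnb(model, [2, 0, 0])  # counts concentrated on the first term
```

Laplace smoothing keeps every term probability nonzero, so a term unseen in a class during training does not force that class's score to negative infinity.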