Naive Bayes models assume that observations have some multivariate distribution given class membership, but that the predictors or features composing the observation are independent of one another. This framework can accommodate a complete feature set such that an observation is a set of multinomial counts.
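Under that conditional-independence assumption, the class posterior factorizes into a product of per-predictor conditional densities. A sketch of the standard form, with assumed notation: π(Y = k) is the class prior, P is the number of predictors, and K is the number of classes.

```latex
\hat{P}(Y = k \mid X_1, \ldots, X_P) =
  \frac{\pi(Y = k)\,\prod_{j=1}^{P} P(X_j \mid Y = k)}
       {\sum_{k'=1}^{K} \pi(Y = k')\,\prod_{j=1}^{P} P(X_j \mid Y = k')}
```

The classifier assigns an observation to the class with the largest estimated posterior probability.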
To train a naive Bayes model, use
fitcnb in the command-line interface. After training, predict labels or estimate posterior probabilities by passing the model and predictor data to predict.
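A minimal sketch of this workflow, assuming the Statistics and Machine Learning Toolbox and its bundled fisheriris data set are available:

```matlab
% Load Fisher's iris data: meas (predictors) and species (class labels)
load fisheriris

% Train a naive Bayes classifier (normal distribution per predictor by default)
Mdl = fitcnb(meas, species);

% Predict the label and class posterior probabilities for one observation
[label, posterior] = predict(Mdl, meas(1,:));
```

The second output of predict contains one posterior probability per class, ordered as in Mdl.ClassNames.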
|Classification Learner|Train models to classify data using supervised machine learning|
|crossval|Cross-validate machine learning model|
|kfoldEdge|Classification edge for cross-validated classification model|
|kfoldLoss|Classification loss for cross-validated classification model|
|kfoldfun|Cross-validate function for classification|
|kfoldMargin|Classification margins for cross-validated classification model|
|kfoldPredict|Classify observations in cross-validated classification model|
|loss|Classification loss for naive Bayes classifier|
|resubLoss|Resubstitution classification loss|
|logp|Log unconditional probability density for naive Bayes classifier|
|compareHoldout|Compare accuracies of two classification models using new data|
|edge|Classification edge for naive Bayes classifier|
|margin|Classification margins for naive Bayes classifier|
|resubEdge|Resubstitution classification edge|
|resubMargin|Resubstitution classification margin|
|testckfold|Compare accuracies of two classification models by repeated cross-validation|
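A short sketch of how the cross-validation and resubstitution functions above fit together, again assuming the fisheriris data set is available:

```matlab
load fisheriris
Mdl = fitcnb(meas, species);       % trained naive Bayes classifier

% 10-fold cross-validation: crossval returns a partitioned model,
% and kfoldLoss estimates the generalization error from it
CVMdl = crossval(Mdl);
genError = kfoldLoss(CVMdl);

% Resubstitution loss: error on the same data used for training,
% typically optimistic compared to the cross-validated estimate
resubError = resubLoss(Mdl);
```

The cross-validated estimate is the one to report; the resubstitution error mainly serves as a quick sanity check.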
Create and compare naive Bayes classifiers, and export trained models to make predictions for new data.
Understand the steps for supervised learning and the characteristics of nonparametric classification and regression functions.
Categorical response data
The naive Bayes classifier is designed for use when predictors are independent of one another within each class, but it appears to work well in practice even when that independence assumption is not valid.
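Because each predictor is modeled independently within each class, the fitted distribution parameters can be inspected per class and per predictor. A sketch, assuming a model trained on fisheriris with the default normal distributions:

```matlab
load fisheriris
Mdl = fitcnb(meas, species);

% Cell array with one row per class and one column per predictor;
% for normal distributions, each cell holds [mean; standard deviation]
params = Mdl.DistributionParameters;
```

Inspecting these parameters shows how the classifier summarizes each class as a product of univariate distributions rather than a joint multivariate fit.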
This example shows how to visualize classification probabilities for the naive Bayes classification algorithm.
This example shows how to perform classification using discriminant analysis, naive Bayes classifiers, and decision trees.
This example shows how to visualize the decision surface for different classification algorithms.