Naive Bayes models assume that observations follow some multivariate distribution given class membership, but that the predictors or features composing an observation are conditionally independent of one another given the class. This framework can also accommodate a complete feature set in which an observation is a vector of multinomial counts.
To train a naive Bayes model, use fitcnb at the command line. After training, predict labels or estimate posterior probabilities by passing the model and predictor data to predict.
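This documentation describes the MATLAB workflow; as a language-neutral illustration of what training and prediction involve, here is a minimal pure-Python sketch of a Gaussian naive Bayes classifier. The helper names `fit_gaussian_nb` and `predict_nb` are hypothetical, not part of any toolbox; the sketch assumes continuous features modeled with one normal distribution per (class, feature) pair.

```python
import math
from collections import defaultdict

def fit_gaussian_nb(X, y):
    """Estimate per-class priors and per-feature Gaussian parameters.

    Under the naive Bayes assumption, each feature is modeled
    independently within each class, so only a mean and a variance
    per (class, feature) pair are needed.
    """
    by_class = defaultdict(list)
    for row, label in zip(X, y):
        by_class[label].append(row)
    model = {}
    n = len(X)
    for label, rows in by_class.items():
        cols = list(zip(*rows))
        means = [sum(c) / len(c) for c in cols]
        varis = [sum((v - m) ** 2 for v in c) / len(c) or 1e-9
                 for c, m in zip(cols, means)]
        model[label] = (len(rows) / n, means, varis)
    return model

def predict_nb(model, x):
    """Return (predicted label, posterior probabilities) for one observation."""
    log_post = {}
    for label, (prior, means, varis) in model.items():
        lp = math.log(prior)
        for v, m, s2 in zip(x, means, varis):
            # Conditional independence lets us sum per-feature log densities.
            lp += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        log_post[label] = lp
    # Normalize in a numerically stable way to get posterior probabilities.
    mx = max(log_post.values())
    exp = {k: math.exp(v - mx) for k, v in log_post.items()}
    z = sum(exp.values())
    post = {k: v / z for k, v in exp.items()}
    return max(post, key=post.get), post

# Toy usage: two well-separated classes.
X = [[1.0, 2.0], [1.2, 1.9], [8.0, 9.0], [7.8, 9.2]]
y = ['a', 'a', 'b', 'b']
model = fit_gaussian_nb(X, y)
label, post = predict_nb(model, [1.1, 2.1])
```

The two-step shape mirrors the MATLAB workflow: one call fits the per-class distribution parameters, and a second call combines them with the priors to produce labels and posterior probabilities.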
- Cross-validated naive Bayes classifier
- Classification edge for observations not used for training
- Classification loss for observations not used for training
- Cross-validation function
- Classification margins for observations not used for training
- Predict response for observations not used for training
- Classification error for naive Bayes classifier
- Classification loss for naive Bayes classifier by resubstitution
- Log unconditional probability density for naive Bayes classifier
- Compare accuracies of two classification models using new data
- Classification edge for naive Bayes classifier
- Classification margins for naive Bayes classifier
- Classification edge for naive Bayes classifier by resubstitution
- Classification margins for naive Bayes classifier by resubstitution
Understand the steps for supervised learning and the characteristics of nonparametric classification and regression functions.
Categorical response data
The naive Bayes classifier is designed for use when predictors are independent of one another within each class, but it appears to work well in practice even when that independence assumption is not valid.
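This robustness can be checked empirically. The following self-contained Python sketch (an illustration, not toolbox code) generates two classes whose two features are strongly correlated, deliberately violating the independence assumption, then fits per-feature Gaussians anyway and measures test accuracy.

```python
import math
import random

random.seed(1)

def make_class(mean, n):
    # x2 is strongly correlated with x1, violating the naive assumption.
    pts = []
    for _ in range(n):
        x1 = random.gauss(mean, 1.0)
        x2 = x1 + random.gauss(0.0, 0.3)
        pts.append((x1, x2))
    return pts

def stats(pts):
    # Per-feature mean and variance, ignoring any correlation between features.
    cols = list(zip(*pts))
    means = [sum(c) / len(c) for c in cols]
    varis = [sum((v - m) ** 2 for v in c) / len(c)
             for c, m in zip(cols, means)]
    return means, varis

def loglik(x, s):
    # Sum of independent per-feature Gaussian log densities.
    means, varis = s
    return sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
               for xi, m, v in zip(x, means, varis))

train0, train1 = make_class(0.0, 200), make_class(3.0, 200)
s0, s1 = stats(train0), stats(train1)

test0, test1 = make_class(0.0, 100), make_class(3.0, 100)
correct = sum(loglik(x, s0) > loglik(x, s1) for x in test0)
correct += sum(loglik(x, s1) > loglik(x, s0) for x in test1)
accuracy = correct / 200
```

Even though the model wrongly treats the two features as independent, the class means are well separated along both features, so the classifier still recovers most labels correctly.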
This example shows how to visualize classification probabilities for the naive Bayes classification algorithm.
This example shows how to perform classification using discriminant analysis, naive Bayes classifiers, and decision trees.
This example shows how to visualize the decision surface for different classification algorithms.