Cross-validated classification model
ClassificationPartitionedModel is a set of classification models trained on cross-validated folds. Estimate the quality of classification by cross-validation using one or more "kfold" methods: kfoldPredict, kfoldLoss, kfoldMargin, kfoldEdge, and kfoldfun.
Every "kfold" method uses models trained on in-fold observations to predict the response for out-of-fold observations. For example, suppose you cross-validate using five folds. In this case, every training fold contains roughly 4/5 of the data and every test fold contains roughly 1/5 of the data. The first model stored in Trained{1} was trained on X and Y with the first 1/5 excluded, the second model stored in Trained{2} was trained on X and Y with the second 1/5 excluded, and so on. When you call kfoldPredict, it computes predictions for the first 1/5 of the data using the first model, for the second 1/5 of the data using the second model, and so on. In short, kfoldPredict computes the response for every observation using the model trained without that observation.
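This fold scheme can be sketched as follows using the Fisher iris data that ships with the toolbox (the variable names here are illustrative):

```matlab
% Sketch: 5-fold cross-validation of a classification tree.
load fisheriris                      % predictors meas, labels species
tree   = fitctree(meas,species);     % model trained on all the data
cvtree = crossval(tree,'KFold',5);   % 5 compact models, one per fold

numel(cvtree.Trained)                % 5 -- one model per training fold
label  = kfoldPredict(cvtree);       % label(i) comes from the model whose
                                     % training fold excluded observation i
```

Each element of cvtree.Trained is a compact model that never saw its own test fold, so kfoldPredict yields honest out-of-fold predictions for every row of meas.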
CVModel = crossval(Model) creates a cross-validated classification model from a classification model (Model).
Alternatively:
CVDiscrModel = fitcdiscr(X,Y,Name,Value)
CVEnsModel = fitensemble(X,Y,Name,Value)
CVKNNModel = fitcknn(X,Y,Name,Value)
CVSVMModel = fitcsvm(X,Y,Name,Value)
CVTreeModel = fitctree(X,Y,Name,Value)
create a cross-validated model when Name is one of 'CrossVal', 'KFold', 'Holdout', 'Leaveout', or 'CVPartition'. For syntax details, see fitcdiscr, fitensemble, fitcknn, fitcsvm, and fitctree.
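For example, passing one of these name-value pairs at fitting time returns the partitioned model directly, without a separate call to crossval (a sketch using the fisheriris data; variable names are illustrative):

```matlab
load fisheriris
% Cross-validate while fitting instead of calling crossval afterward.
CVTreeModel  = fitctree(meas,species,'KFold',10);     % 10-fold partition
HoldoutModel = fitctree(meas,species,'Holdout',0.3);  % 30% holdout set

L = kfoldLoss(CVTreeModel)   % estimate of the generalization error
```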
Model 
A classification model. Model can be any of the following: ClassificationDiscriminant, ClassificationEnsemble, ClassificationKNN, ClassificationSVM, or ClassificationTree.
CategoricalPredictors 
List of categorical predictors. If Model is a trained classification tree, then CategoricalPredictors is a numeric vector with indices from 1 to p, where p is the number of columns of X. If Model is a trained discriminant analysis or support vector machine classifier, then CategoricalPredictors is an empty vector ([]).  
ClassNames 
List of elements in Y with duplicates removed. ClassNames has the same data type as the data in the argument Y, and therefore can be a categorical or character array, logical or numeric vector, or cell array of strings.  
Cost 
Square matrix, where Cost(i,j) is the cost of classifying a point into class j if its true class is i. If CVModel is a cross-validated ClassificationDiscriminant model, then you can change its cost matrix to, e.g., CostMatrix, using dot notation: CVModel.Cost = CostMatrix
CrossValidatedModel 
Name of the cross-validated model, a string.
KFold 
Number of folds used in the cross-validated model, a positive integer.
ModelParameters 
Object holding parameters of CVModel.  
Partition 
The partition, of class cvpartition, used in creating the cross-validated model.
PredictorNames 
Cell array of strings containing the predictor names, in the order that they appear in X.  
Prior 
Prior probabilities for each class. Prior is a numeric vector whose entries relate to the corresponding ClassNames property. If CVModel is a cross-validated ClassificationDiscriminant model, then you can change its vector of priors to, e.g., priorVector, using dot notation: CVModel.Prior = priorVector
ResponseName 
String describing the response variable Y.  
ScoreTransform 
String representing a built-in transformation function, or a function handle for transforming predicted classification scores. To change the score transformation function to, e.g., function, use dot notation: CVModel.ScoreTransform = 'function'
Trained 
The trained learners, a cell array of compact classification models.  
W 
The scaled weights, a vector with length n, the number of rows in X.  
X 
Numeric matrix of predictor values. Each column of X represents one variable, and each row represents one observation.  
Y 
Categorical or character array, logical or numeric vector, or cell array of strings specifying the class labels for each observation. Y has the same number of rows as X, and each entry of Y is the response to the data in the corresponding row of X. 
kfoldEdge  Classification edge for observations not used for training 
kfoldfun  Cross-validate function
kfoldLoss  Classification loss for observations not used for training 
kfoldMargin  Classification margins for observations not used for training 
kfoldPredict  Predict response for observations not used for training 
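Of these, kfoldfun is the most open-ended: it applies a custom function to each fold. A minimal sketch (the function-handle signature follows the kfoldfun documentation; the statistic computed here is illustrative):

```matlab
load fisheriris
cvtree = crossval(fitctree(meas,species));   % default 10-fold partition

% kfoldfun calls fun once per fold, passing the fold's compact model CMP
% together with that fold's training and test data (and weights), and
% collects one row of output per fold.
fun = @(CMP,Xtrain,Ytrain,Wtrain,Xtest,Ytest,Wtest) ...
      mean(~strcmp(predict(CMP,Xtest),Ytest));   % fold misclassification rate
foldErr = kfoldfun(cvtree,fun)   % 10-by-1 vector, one error rate per fold
```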
Value. To learn how value classes affect copy operations, see Copying Objects in the MATLAB documentation.
To estimate posterior probabilities of trained, cross-validated SVM classifiers, use fitSVMPosterior.
Evaluate the k-fold cross-validation error for a classification tree model of the Fisher iris data:
load fisheriris
tree = ClassificationTree.fit(meas,species);
cvtree = crossval(tree);
L = kfoldLoss(cvtree)

L =
    0.0600
Determine positive class posterior probabilities for the test set of an SVM algorithm.
Load the ionosphere data set.
load ionosphere
Train an SVM classifier. Specify a 20% holdout sample. It is good practice to standardize the predictors and specify the class order.
rng(1) % For reproducibility
CVSVMModel = fitcsvm(X,Y,'Holdout',0.2,'Standardize',true,...
    'ClassNames',{'b','g'});
CVSVMModel is a trained ClassificationPartitionedModel cross-validated classifier.
Estimate the optimal score function for mapping observation scores to posterior probabilities of an observation being classified as 'g'.
ScoreCVSVMModel = fitSVMPosterior(CVSVMModel);
ScoreCVSVMModel is a trained ClassificationPartitionedModel cross-validated classifier containing the optimal score transformation function estimated from the training data.
Estimate the out-of-sample positive class posterior probabilities. Display the results for the first 10 out-of-sample observations.
[~,OOSPostProbs] = kfoldPredict(ScoreCVSVMModel);
indx = ~isnan(OOSPostProbs(:,2));
hoObs = find(indx);    % Holdout observation numbers
OOSPostProbs = [hoObs, OOSPostProbs(indx,2)];
table(OOSPostProbs(1:10,1),OOSPostProbs(1:10,2),...
    'VariableNames',{'ObservationIndex','PosteriorProbability'})
ans =

    ObservationIndex    PosteriorProbability
    ________________    ____________________

     6                  0.17379
     7                  0.89639
     8                  0.0076593
     9                  0.91603
    16                  0.026714
    22                  4.607e-06
    23                  0.9024
    24                  2.413e-06
    38                  0.0004266
    41                  0.86427
ClassificationDiscriminant | ClassificationEnsemble | ClassificationKNN | ClassificationPartitionedEnsemble | ClassificationSVM | ClassificationTree | CompactClassificationDiscriminant | CompactClassificationEnsemble | CompactClassificationSVM | CompactClassificationTree | fitcdiscr | fitcknn | fitcsvm | fitctree | fitensemble | fitSVMPosterior | RegressionPartitionedModel