ClassificationPartitionedModel class

Cross-validated classification model

Description

ClassificationPartitionedModel is a set of classification models trained on cross-validated folds. Estimate the quality of classification by cross validation using one or more "kfold" methods: kfoldPredict, kfoldLoss, kfoldMargin, kfoldEdge, and kfoldfun.

Every "kfold" method uses models trained on in-fold observations to predict the response for out-of-fold observations. For example, suppose you cross validate using five folds. In this case, the software randomly assigns each observation into five roughly equally sized groups. The training fold contains four of the groups (i.e., roughly 4/5 of the data) and the test fold contains the other group (i.e., roughly 1/5 of the data). In this case, cross validation proceeds as follows:

  • The software trains the first model (stored in CVMdl.Trained{1}) using the observations in the last four groups and reserves the observations in the first group for validation.

  • The software trains the second model (stored in CVMdl.Trained{2}) using the observations in the first group and last three groups, and reserves the observations in the second group for validation.

  • The software proceeds in a similar fashion for the third to fifth models.

If you validate by calling kfoldPredict, the software computes predictions for the observations in group 1 using the first model, for the observations in group 2 using the second model, and so on. In short, the software estimates a response for every observation using the model trained without that observation.
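
This procedure can be sketched as follows (using the fisheriris sample data set that ships with the toolbox; the fold count and random seed are arbitrary choices for illustration):

```matlab
% Load sample data and train a classification tree
load fisheriris
Mdl = fitctree(meas,species);

% Cross-validate using five folds
rng(1)                          % For reproducibility
CVMdl = crossval(Mdl,'KFold',5);

% Each prediction comes from the one model that was
% trained without that observation
label = kfoldPredict(CVMdl);
```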

Construction

CVMdl = crossval(Mdl) creates a cross-validated classification model from a classification model (Mdl).

Alternatively:

  • CVDiscrMdl = fitcdiscr(X,Y,Name,Value)

  • CVEnsMdl = fitensemble(X,Y,Name,Value)

  • CVKNNMdl = fitcknn(X,Y,Name,Value)

  • CVNBMdl = fitcnb(X,Y,Name,Value)

  • CVSVMMdl = fitcsvm(X,Y,Name,Value)

  • CVTreeMdl = fitctree(X,Y,Name,Value)

create a cross-validated model when Name is one of 'CrossVal', 'KFold', 'Holdout', 'Leaveout', or 'CVPartition'. For syntax details, see fitcdiscr, fitensemble, fitcknn, fitcnb, fitcsvm, and fitctree.
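
For instance, passing 'KFold' or 'CrossVal' to fitctree returns a cross-validated model directly, without a separate call to crossval (a sketch using the fisheriris sample data; the fold count is arbitrary):

```matlab
load fisheriris

% Request ten-fold cross-validation at fitting time
CVTreeMdl = fitctree(meas,species,'KFold',10);

% Equivalently, 'CrossVal','on' uses the default ten folds
CVTreeMdl2 = fitctree(meas,species,'CrossVal','on');
```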

For a cross-validated, error-correcting output code multiclass model, see ClassificationPartitionedECOC and fitcecoc.

Input Arguments

Mdl

A classification model. Mdl can be any of the following:

  • A classification tree trained using fitctree

  • A classification ensemble trained using fitensemble

  • A discriminant analysis classifier trained using fitcdiscr

  • A naive Bayes classifier trained using fitcnb

  • A nearest-neighbor classifier trained using fitcknn

  • A support vector machine classifier trained using fitcsvm

Properties

CategoricalPredictors

List of categorical predictors.

If the underlying model is a trained classification tree, then CategoricalPredictors is a numeric vector with indices from 1 through p, where p is the number of columns of X.

If the underlying model is a trained discriminant analysis or support vector machine classifier, then CategoricalPredictors is an empty vector ([]).

ClassNames

List of elements in Y with duplicates removed. ClassNames has the same data type as the data in the argument Y, and therefore can be a categorical or character array, logical or numeric vector, or cell array of strings.

Cost

Square matrix, where Cost(i,j) is the cost of classifying a point into class j if its true class is i.

If CVModel is a cross-validated ClassificationDiscriminant, ClassificationKNN, or ClassificationNaiveBayes model, then you can change its cost matrix (to, e.g., CostMatrix) using dot notation:

CVModel.Cost = CostMatrix
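
For example, the following sketch reweights misclassification costs for a cross-validated k-nearest neighbor classifier (the cost values are arbitrary; rows of the matrix correspond to true classes and columns to predicted classes, in the order given by ClassNames):

```matlab
load fisheriris

% Cross-validate a k-nearest neighbor classifier
CVModel = fitcknn(meas,species,'CrossVal','on');

% Penalize misclassifying the third class more heavily
CostMatrix = [0 1 2; 1 0 2; 1 1 0];
CVModel.Cost = CostMatrix;
```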

CrossValidatedModel

Name of the cross-validated model, which is a string.

KFold

Number of folds used in the cross-validated model, which is a positive integer.

ModelParameters

Object holding parameters of CVModel.

Partition

The partition of class CVPartition used in creating the cross-validated model.

PredictorNames

Cell array of strings containing the predictor names, in the order that they appear in X.

Prior

Prior probabilities for each class. Prior is a numeric vector whose entries correspond to the elements of the ClassNames property.

If CVModel is a cross-validated ClassificationDiscriminant or ClassificationNaiveBayes model, then you can change its vector of priors (to, e.g., priorVector) using dot notation:

CVModel.Prior = priorVector

ResponseName

String describing the response variable Y.

ScoreTransform

String representing a built-in transformation function, or a function handle for transforming predicted classification scores.

To change the score transformation function to, e.g., function, use dot notation.

  • For a built-in function, enter a string.

    SVMModel.ScoreTransform = 'function';

    This table contains the available, built-in functions.

    String              Formula
    'doublelogit'       1/(1 + e^(–2x))
    'invlogit'          log(x / (1 – x))
    'ismax'             Set the score for the class with the largest score to 1, and the scores for all other classes to 0.
    'logit'             1/(1 + e^(–x))
    'none'              x (no transformation)
    'sign'              –1 for x < 0, 0 for x = 0, 1 for x > 0
    'symmetric'         2x – 1
    'symmetriclogit'    2/(1 + e^(–x)) – 1
    'symmetricismax'    Set the score for the class with the largest score to 1, and the scores for all other classes to –1.

  • For a MATLAB® function, or a function that you define, enter its function handle.

    SVMModel.ScoreTransform = @function;

    function should accept a matrix (the original scores) and return a matrix of the same size (the transformed scores).
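
For example, the following sketch sets a built-in transformation and then an equivalent anonymous-function handle on an SVM classifier (using the ionosphere sample data set):

```matlab
load ionosphere
SVMModel = fitcsvm(X,Y);

% Built-in transformation, specified as a string
SVMModel.ScoreTransform = 'logit';

% Equivalent custom transformation: a handle that maps a
% score matrix to a matrix of the same size
SVMModel.ScoreTransform = @(s) 1./(1 + exp(-s));
```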

Trained

The trained learners, which is a cell array of compact classification models.

W

The scaled weights, which is a vector with length n, the number of rows in X.

X

Numeric matrix of predictor values. Each column of X represents one variable, and each row represents one observation.

Y

Categorical or character array, logical or numeric vector, or cell array of strings specifying the class labels for each observation. Y has the same number of rows as X, and each entry of Y is the response to the data in the corresponding row of X.

Methods

kfoldEdge      Classification edge for observations not used for training
kfoldfun       Cross-validate function
kfoldLoss      Classification loss for observations not used for training
kfoldMargin    Classification margins for observations not used for training
kfoldPredict   Predict response for observations not used for training
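
Of these, kfoldfun is the most general: it applies a function you supply to every fold. A sketch computing the per-fold misclassification rate of a cross-validated tree (the supplied function receives the compact model trained on the in-fold data, followed by the training and test data for that fold):

```matlab
load fisheriris
CVMdl = crossval(fitctree(meas,species));

% One misclassification rate per fold
foldErr = kfoldfun(CVMdl, ...
    @(CMP,Xtrain,Ytrain,Wtrain,Xtest,Ytest,Wtest) ...
    mean(~strcmp(predict(CMP,Xtest),Ytest)));
```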

Copy Semantics

Value. To learn how value classes affect copy operations, see Copying Objects in the MATLAB documentation.

Tips

To estimate posterior probabilities of trained, cross-validated SVM classifiers, use fitSVMPosterior.

Examples


Evaluate the Classification Error of a Classification Tree Classifier

Evaluate the k-fold cross-validation error for a classification tree model.

Load Fisher's iris data set.

load fisheriris

Train a classification tree using default options.

Mdl = fitctree(meas,species);

Cross-validate the classification tree model.

CVMdl = crossval(Mdl);

Estimate the 10-fold cross-validation loss.

L = kfoldLoss(CVMdl)
L =

    0.0533
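
kfoldLoss averages the error over folds by default. To see how the error varies from fold to fold, pass the 'Mode' name-value pair (continuing with the CVMdl computed above):

```matlab
% One loss value per fold instead of the average
Lfold = kfoldLoss(CVMdl,'Mode','individual');
```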

Estimate Posterior Probabilities for Test Samples

Estimate positive class posterior probabilities for the test set of an SVM algorithm.

Load the ionosphere data set.

load ionosphere

Train an SVM classifier. Specify a 20% holdout sample. It is good practice to standardize the predictors and specify the class order.

rng(1) % For reproducibility
CVSVMModel = fitcsvm(X,Y,'Holdout',0.2,'Standardize',true,...
    'ClassNames',{'b','g'});

CVSVMModel is a trained ClassificationPartitionedModel cross-validated classifier.

Estimate the optimal score function for mapping observation scores to posterior probabilities of an observation being classified as 'g'.

ScoreCVSVMModel = fitSVMPosterior(CVSVMModel);

ScoreCVSVMModel is a trained ClassificationPartitionedModel cross-validated classifier containing the optimal score transformation function estimated from the training data.

Estimate the out-of-sample positive class posterior probabilities. Display the results for the first 10 out-of-sample observations.

[~,OOSPostProbs] = kfoldPredict(ScoreCVSVMModel);
indx = ~isnan(OOSPostProbs(:,2));
hoObs = find(indx); % Holdout observation numbers
OOSPostProbs = [hoObs, OOSPostProbs(indx,2)];
table(OOSPostProbs(1:10,1),OOSPostProbs(1:10,2),...
    'VariableNames',{'ObservationIndex','PosteriorProbability'})
ans = 

    ObservationIndex    PosteriorProbability
    ________________    ____________________

     6                    0.17379           
     7                    0.89639           
     8                  0.0076593           
     9                    0.91603           
    16                   0.026714           
    22                  4.607e-06           
    23                     0.9024           
    24                  2.413e-06           
    38                  0.0004266           
    41                    0.86427           
