Class: CompactClassificationEnsemble

Predict classification


labels = predict(ens,X)
[labels,score] = predict(ens,X)
[labels,...] = predict(ens,X,Name,Value)


labels = predict(ens,X) returns a vector of predicted class labels for a matrix X, based on ens, a trained full or compact classification ensemble.

[labels,score] = predict(ens,X) also returns scores for all classes.

[labels,...] = predict(ens,X,Name,Value) predicts classifications with additional options specified by one or more Name,Value pair arguments.

Input Arguments


ens

A classification ensemble created by fitensemble, or a compact classification ensemble created by compact.


X

A matrix where each row represents one observation and each column represents one predictor. The number of columns in X must equal the number of predictors used to train ens.

Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.


'learners'

Indices of the weak learners that predict uses to compute responses, specified as a numeric vector.

Default: 1:T, where T is the number of weak learners in ens
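For example, a minimal sketch of restricting prediction to a subset of the weak learners (assuming ens and X are the trained ensemble and predictor matrix described above, and that ens contains at least 50 learners):

```matlab
% Predict using only the first 50 weak learners of the ensemble.
% Using a subset of learners is faster than evaluating the full ensemble.
labels50 = predict(ens,X,'learners',1:50);
```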


'UseObsForLearner'

A logical matrix of size N-by-T, where:

  • N is the number of rows of X.

  • T is the number of weak learners in ens.

When UseObsForLearner(i,j) is true, learner j is used in predicting the class of row i of X.

Default: true(N,T)
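As an illustrative sketch, the following restricts the first 10 learners to voting only on the first 100 observations, while all other learners vote on every row. The values T = 100 and the row cutoff are assumptions for illustration; T must match the number of weak learners in ens, and X must have at least 100 rows.

```matlab
% Build the N-by-T logical mask described above
N = size(X,1);
T = 100;                          % assumed number of weak learners in ens
useObs = true(N,T);
useObs(101:end,1:10) = false;     % learners 1-10 skip rows 101..N
labels = predict(ens,X,'UseObsForLearner',useObs);
```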

Output Arguments


labels

Vector of classification labels. labels has the same data type as the labels used to train ens.


score

A matrix with one row per observation and one column per class. For each observation and each class, the score generated by each tree is the probability of the observation originating from that class, computed as the fraction of observations of that class in the tree leaf. predict averages these scores over all trees in the ensemble.


Score (ensemble)

For ensembles, a classification score represents the confidence of a classification into a class. The higher the score, the higher the confidence.

Different ensemble algorithms have different definitions for their scores. Furthermore, the range of scores depends on ensemble type. For example:

  • AdaBoostM1 scores range from –∞ to ∞.

  • Bag scores range from 0 to 1.
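Whatever the range, predict assigns each observation to the class whose score is largest, so scores from the same ensemble are directly comparable across classes. A sketch recovering the labels from the score matrix (assuming score and ens come from a call such as [labels,score] = predict(ens,X)):

```matlab
% The predicted label for each row is the class with the maximal score
[~,col] = max(score,[],2);
recovered = ens.ClassNames(col);   % matches the labels output of predict
```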


Examples

Train a boosting ensemble for the ionosphere data, and predict the classification of the mean of the data:

load ionosphere;
ada = fitensemble(X,Y,'AdaBoostM1',100,'tree');
Xbar = mean(X);
[ypredict,score] = predict(ada,Xbar)

ypredict = 
    'g'

score =
   -2.9460    2.9460

See Also

