
# ClassificationDiscriminant class

Superclasses: CompactClassificationDiscriminant

Discriminant analysis classification

## Description

A ClassificationDiscriminant object encapsulates a discriminant analysis classifier, which is a Gaussian mixture model for data generation. A ClassificationDiscriminant object can predict responses for new data using the predict method. Because the object also contains the data used for training, it can compute resubstitution predictions.

## Construction

obj = fitcdiscr(x,y) creates a discriminant classification object based on the input variables (also known as predictors, features, or attributes) x and output (response) y. For syntax details, see fitcdiscr.

obj = fitcdiscr(x,y,Name,Value) creates a classifier with additional options specified by one or more Name,Value pair arguments. If you use one of the following five options, obj is of class ClassificationPartitionedModel: 'CrossVal', 'KFold', 'Holdout', 'Leaveout', or 'CVPartition'. Otherwise, obj is of class ClassificationDiscriminant.
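A minimal sketch of the two constructions, assuming the fisheriris sample data set that ships with the toolbox:

```matlab
% A plain fit returns a ClassificationDiscriminant object.
load fisheriris
obj = fitcdiscr(meas,species);

% Any cross-validation option returns a ClassificationPartitionedModel.
cvobj = fitcdiscr(meas,species,'KFold',5);

class(obj)    % 'ClassificationDiscriminant'
class(cvobj)  % 'ClassificationPartitionedModel'
```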

### Input Arguments

• x: Matrix of numeric predictor values. Each column of x represents one variable, and each row represents one observation. NaN values in x are considered missing values. Observations with missing values for x are not used in the fit.

• y: A categorical array, cell array of strings, character array, logical vector, or a numeric vector with the same number of rows as x. Each row of y represents the classification of the corresponding row of x. NaN values in y are considered missing values. Observations with missing values for y are not used in the fit.

## Methods

• compact: Compact discriminant analysis classifier

• crossval: Cross-validated discriminant analysis classifier

• cvshrink: Cross-validate regularization of linear discriminant

• resubEdge: Classification edge by resubstitution

• resubLoss: Classification error by resubstitution

• resubMargin: Classification margins by resubstitution

• resubPredict: Predict resubstitution response of classifier
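For example, the resubstitution methods operate directly on the training data stored in the object (a sketch, again assuming the fisheriris sample data):

```matlab
load fisheriris
obj = fitcdiscr(meas,species);

err = resubLoss(obj);        % misclassification rate on the training data
labels = resubPredict(obj);  % predicted labels for the training observations
cobj = compact(obj);         % CompactClassificationDiscriminant without the data
```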

### Inherited Methods

• edge: Classification edge

• logP: Log unconditional probability density for discriminant analysis classifier

• loss: Classification error

• mahal: Mahalanobis distance to class means

• margin: Classification margins

• nLinearCoeffs: Number of nonzero linear coefficients

• predict: Predict classification

## Definitions

### Discriminant Classification

The model for discriminant analysis is:

• Each class (Y) generates data (X) using a multivariate normal distribution. That is, the model assumes X has a Gaussian mixture distribution (gmdistribution).

• For linear discriminant analysis, the model has the same covariance matrix for each class, only the means vary.

• For quadratic discriminant analysis, both means and covariances of each class vary.

predict classifies so as to minimize the expected classification cost:

$\hat{y}=\underset{y=1,\dots,K}{\arg\min}\;\sum_{k=1}^{K}\hat{P}(k|x)\,C(y|k),$

where

• $\hat{y}$ is the predicted classification.

• K is the number of classes.

• $\hat{P}(k|x)$ is the posterior probability of class k for observation x.

• $C(y|k)$ is the cost of classifying an observation as y when its true class is k.

For details, see How the predict Method Classifies.
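The three quantities in this formula correspond to the three outputs of predict (a sketch assuming a classifier trained on the fisheriris sample data):

```matlab
load fisheriris
obj = fitcdiscr(meas,species);

% label minimizes the expected cost; posterior holds P(k|x); cost holds the
% expected misclassification cost of assigning x to each class.
[label,posterior,cost] = predict(obj,meas(1,:));
```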

### Regularization

Regularization is the process of finding a small set of predictors that yield an effective predictive model. For linear discriminant analysis, there are two parameters, γ and δ, that control regularization as follows. cvshrink helps you select appropriate values of the parameters.

Let Σ represent the covariance matrix of the data X, and let $\hat{X}$ be the centered data (the data X minus the mean by class). Define

$D=\operatorname{diag}\left(\hat{X}^{T}\hat{X}\right).$

The regularized covariance matrix $\tilde{\Sigma}$ is

$\tilde{\Sigma}=(1-\gamma)\Sigma+\gamma D.$

Whenever γ ≥ MinGamma, $\tilde{\Sigma}$ is nonsingular.

Let $\mu_k$ be the mean vector for those elements of X in class k, and let $\mu_0$ be the global mean vector (the mean of the rows of X). Let C be the correlation matrix of the data X, and let $\tilde{C}$ be the regularized correlation matrix:

$\tilde{C}=(1-\gamma)C+\gamma I,$

where I is the identity matrix.

The linear term in the regularized discriminant analysis classifier for a data point x is

$\left(x-\mu_0\right)^{T}\tilde{\Sigma}^{-1}\left(\mu_k-\mu_0\right)=\left[\left(x-\mu_0\right)^{T}D^{-1/2}\right]\left[\tilde{C}^{-1}D^{-1/2}\left(\mu_k-\mu_0\right)\right].$

The parameter δ enters into this equation as a threshold on the final term in square brackets. Each component of the vector $\tilde{C}^{-1}D^{-1/2}\left(\mu_k-\mu_0\right)$ is set to zero if it is smaller in magnitude than the threshold δ. Therefore, for class k, if component j is thresholded to zero, component j of x does not enter into the evaluation of the posterior probability.

The DeltaPredictor property is a vector related to this threshold. When δ ≥ DeltaPredictor(i), all classes k have

$\left|\tilde{C}^{-1}D^{-1/2}\left(\mu_k-\mu_0\right)\right|\le\delta.$

Therefore, when δ ≥ DeltaPredictor(i), the regularized classifier does not use predictor i.
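A sketch of tuning the two parameters with cvshrink and then applying chosen values by setting the Gamma and Delta properties (the data set and the specific values here are illustrative assumptions, not recommendations):

```matlab
load fisheriris
obj = fitcdiscr(meas,species);

% Cross-validate classification error over a grid of gamma/delta values.
[err,gamma,delta] = cvshrink(obj,'NumGamma',5,'NumDelta',5);

% Apply the chosen regularization by setting the properties directly.
obj.Gamma = 0.5;   % shrink the covariance toward its diagonal
obj.Delta = 0.1;   % zero out small linear coefficients

nCoeffs = nLinearCoeffs(obj,0.1);  % nonzero coefficients at this delta
```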

## Copy Semantics

Value. To learn how value classes affect copy operations, see Copying Objects in the MATLAB® documentation.

## Examples

Create a discriminant analysis classifier for the Fisher iris data (load the sample data set first so that meas and species exist in the workspace):

```matlab
load fisheriris
obj = fitcdiscr(meas,species)
```

```
obj =
ClassificationDiscriminant
       PredictorNames: {'x1'  'x2'  'x3'  'x4'}
         ResponseName: 'Y'
           ClassNames: {'setosa'  'versicolor'  'virginica'}
       ScoreTransform: 'none'
      NumObservations: 150
          DiscrimType: 'linear'
                   Mu: [3x4 double]
               Coeffs: [3x3 struct]
```

## References

[1] Guo, Y., T. Hastie, and R. Tibshirani. "Regularized linear discriminant analysis and its application in microarrays." *Biostatistics*, Vol. 8, No. 1, 2007, pp. 86–100.