# Documentation


# fitSVMPosterior

Fit posterior probabilities

## Syntax

• `ScoreSVMModel = fitSVMPosterior(SVMModel)`
• `ScoreSVMModel = fitSVMPosterior(SVMModel,TBL,ResponseVarName)`
• `ScoreSVMModel = fitSVMPosterior(SVMModel,TBL,Y)`
• `ScoreSVMModel = fitSVMPosterior(SVMModel,X,Y)`
• `ScoreSVMModel = fitSVMPosterior(___,Name,Value)`
• `[ScoreSVMModel,ScoreTransform] = fitSVMPosterior(___)`

## Description


`ScoreSVMModel = fitSVMPosterior(SVMModel)` returns `ScoreSVMModel`, a trained support vector machine (SVM) classifier containing the optimal score-to-posterior-probability transformation function for two-class learning.

The software fits the appropriate transformation function by cross-validating the SVM classifier `SVMModel` using the stored predictor data (`SVMModel.X`) and class labels (`SVMModel.Y`). The transformation function computes the posterior probability that an observation is classified into the positive class (`SVMModel.ClassNames(2)`).

• If the classes are inseparable, then the transformation function is the sigmoid function.

• If the classes are perfectly separable, then the transformation function is the step function.

• In two-class learning, if one of the two classes has a relative frequency of 0, then the transformation function is the constant function. `fitSVMPosterior` is not appropriate for one-class learning.

If `SVMModel` is a `ClassificationSVM` classifier, then the software estimates the optimal transformation function by 10-fold cross-validation, as outlined in [1]. Otherwise, `SVMModel` must be a `ClassificationPartitionedModel` classifier, in which case `SVMModel` specifies the cross-validation method.

The software stores the optimal transformation function in `ScoreSVMModel.ScoreTransform`.
`ScoreSVMModel = fitSVMPosterior(SVMModel,TBL,ResponseVarName)` returns a trained SVM classifier containing the transformation function from the trained, compact SVM classifier `SVMModel`. The software estimates the score transformation function using the predictor data in the table `TBL` and the class labels `TBL.ResponseVarName`.

`ScoreSVMModel = fitSVMPosterior(SVMModel,TBL,Y)` returns a trained SVM classifier containing the transformation function from the trained, compact SVM classifier `SVMModel`. The software estimates the score transformation function using the predictor data in the table `TBL` and the class labels `Y`.


`ScoreSVMModel = fitSVMPosterior(SVMModel,X,Y)` returns a trained SVM classifier containing the transformation function from the trained, compact SVM classifier `SVMModel`. The software estimates the score transformation function using the predictor data `X` and the class labels `Y`.


`ScoreSVMModel = fitSVMPosterior(___,Name,Value)` uses additional options specified by one or more `Name,Value` pair arguments, provided that `SVMModel` is a `ClassificationSVM` classifier. For example, you can specify the number of folds to use in k-fold cross-validation.


`[ScoreSVMModel,ScoreTransform] = fitSVMPosterior(___)` additionally returns the transformation function parameters (`ScoreTransform`) using any of the input arguments in the previous syntaxes.

## Examples


Load Fisher's iris data set. Train the classifier using the petal lengths and widths, and remove the virginica species from the data.

```matlab
load fisheriris
classKeep = ~strcmp(species,'virginica');
X = meas(classKeep,3:4);
y = species(classKeep);
gscatter(X(:,1),X(:,2),y);
title('Scatter Diagram of Iris Measurements')
xlabel('Petal length')
ylabel('Petal width')
legend('Setosa','Versicolor')
```

The classes are perfectly separable. Therefore, the score transformation function is a step function.

Train an SVM classifier using the data. Cross-validate the classifier using 10-fold cross validation (the default).

```matlab
rng(1);
CVSVMModel = fitcsvm(X,y,'CrossVal','on');
```

`CVSVMModel` is a trained `ClassificationPartitionedModel` SVM classifier.

Estimate the step function that transforms scores to posterior probabilities.

```matlab
[ScoreCVSVMModel,ScoreParameters] = fitSVMPosterior(CVSVMModel);
```
```
Warning: Classes are perfectly separated. The optimal
score-to-posterior transformation is a step function.
```

`fitSVMPosterior` does the following:

• Uses the data that the software stored in `CVSVMModel` to fit the transformation function

• Warns whenever the classes are separable

• Stores the step function in `ScoreCVSVMModel.ScoreTransform`

Display the score function type and its parameter values.

```matlab
ScoreParameters
```
```
ScoreParameters = 

  struct with fields:

                        Type: 'step'
                  LowerBound: -0.8431
                  UpperBound: 0.6897
    PositiveClassProbability: 0.5000
```

`ScoreParameters` is a structure array with four fields:

• The score transformation function type (`Type`)

• The score corresponding to the negative class boundary (`LowerBound`)

• The score corresponding to the positive class boundary (`UpperBound`)

• The positive class probability (`PositiveClassProbability`)

Since the classes are separable, the step function transforms the score to either `0` or `1`, which is the posterior probability that an observation is a versicolor iris.

Load the `ionosphere` data set.

```load ionosphere ```

The classes of this data set are not separable.

Train an SVM classifier. Cross validate using 10-fold cross validation (the default). It is good practice to standardize the predictors and specify the class order.

```matlab
rng(1) % For reproducibility
CVSVMModel = fitcsvm(X,Y,'ClassNames',{'b','g'},'Standardize',true,...
    'CrossVal','on');
ScoreTransform = CVSVMModel.ScoreTransform
```
```
ScoreTransform =

    'none'
```

`CVSVMModel` is a trained `ClassificationPartitionedModel` SVM classifier. The positive class is `'g'`. The `ScoreTransform` property is `none`.

Estimate the optimal score function for mapping observation scores to posterior probabilities of an observation being classified as `'g'`.

```matlab
[ScoreCVSVMModel,ScoreParameters] = fitSVMPosterior(CVSVMModel);
ScoreTransform = ScoreCVSVMModel.ScoreTransform
ScoreParameters
```
```
ScoreTransform =

    '@(S)sigmoid(S,-9.482017e-01,-1.218360e-01)'

ScoreParameters = 

  struct with fields:

         Type: 'sigmoid'
        Slope: -0.9482
    Intercept: -0.1218
```

`ScoreTransform` is the optimal score transformation function. `ScoreParameters` contains the transformation function type along with the slope and intercept estimates.

You can estimate test-sample, posterior probabilities by passing `ScoreCVSVMModel` to `kfoldPredict`.

Estimate positive class posterior probabilities for the test set of an SVM algorithm.

Load the `ionosphere` data set.

```load ionosphere ```

Train an SVM classifier. Specify a 20% holdout sample. It is good practice to standardize the predictors and specify the class order.

```matlab
rng(1) % For reproducibility
CVSVMModel = fitcsvm(X,Y,'Holdout',0.2,'Standardize',true,...
    'ClassNames',{'b','g'});
```

`CVSVMModel` is a trained `ClassificationPartitionedModel` cross-validated classifier.

Estimate the optimal score function for mapping observation scores to posterior probabilities of an observation being classified as `'g'`.

```matlab
ScoreCVSVMModel = fitSVMPosterior(CVSVMModel);
```

`ScoreCVSVMModel` is a trained `ClassificationPartitionedModel` cross-validated classifier containing the optimal score transformation function estimated from the training data.

Estimate the out-of-sample positive class posterior probabilities. Display the results for the first 10 out-of-sample observations.

```matlab
[~,OOSPostProbs] = kfoldPredict(ScoreCVSVMModel);
indx = ~isnan(OOSPostProbs(:,2));
hoObs = find(indx); % Holdout observation numbers
OOSPostProbs = [hoObs, OOSPostProbs(indx,2)];
table(OOSPostProbs(1:10,1),OOSPostProbs(1:10,2),...
    'VariableNames',{'ObservationIndex','PosteriorProbability'})
```
```
ans =

    ObservationIndex    PosteriorProbability
    ________________    ____________________

           6                    0.17379
           7                    0.89639
           8                  0.0076593
           9                    0.91603
          16                   0.026714
          22                  4.607e-06
          23                     0.9024
          24                  2.413e-06
          38                  0.0004266
          41                    0.86427
```

## Input Arguments


**`SVMModel` — Trained SVM classifier**

Trained SVM classifier, specified as a `ClassificationSVM`, `CompactClassificationSVM`, or `ClassificationPartitionedModel` classifier.

If `SVMModel` is a `ClassificationSVM` classifier, then you can set optional name-value pair arguments.

If `SVMModel` is a `CompactClassificationSVM` classifier, then you must input predictor data `X` and class labels `Y`.

**`TBL` — Sample data**

Sample data, specified as a table. Each row of `TBL` corresponds to one observation, and each column corresponds to one predictor variable. Optionally, `TBL` can contain additional columns for the response variable and observation weights. `TBL` must contain all of the predictors used to train `SVMModel`. Multi-column variables and cell arrays other than cell arrays of character vectors are not allowed.

If `TBL` contains the response variable used to train `SVMModel`, then you do not need to specify `ResponseVarName` or `Y`.

If you trained `SVMModel` using sample data contained in a `table`, then the input data for this method must also be in a table.

Data Types: `table`

**`X` — Predictor data**

Predictor data used to estimate the score-to-posterior-probability transformation function, specified as a matrix.

Each row of `X` corresponds to one observation (also known as an instance or example), and each column corresponds to one variable (also known as a feature).

The length of `Y` and the number of rows of `X` must be equal.

If you set `'Standardize',true` in `fitcsvm` to train `SVMModel`, then the software standardizes the columns of `X` using the corresponding means in `SVMModel.Mu` and standard deviations in `SVMModel.Sigma`. If the software fits the transformation-function parameter estimates using standardized data, then the estimates might differ from estimation without standardized data.

Data Types: `double` | `single`

**`ResponseVarName` — Response variable name**

Response variable name, specified as the name of a variable in `TBL`. If `TBL` contains the response variable used to train `SVMModel`, then you do not need to specify `ResponseVarName`.

If you specify `ResponseVarName`, then you must do so as a character vector. For example, if the response variable is stored as `TBL.Response`, then specify it as `'Response'`. Otherwise, the software treats all columns of `TBL`, including `TBL.Response`, as predictors.

The response variable must be a categorical or character array, logical or numeric vector, or cell array of character vectors. If the response variable is a character array, then each element must correspond to one row of the array.

**`Y` — Class labels**

Class labels used to estimate the score-to-posterior-probability transformation function, specified as a categorical or character array, logical or numeric vector, or cell array of character vectors.

If `Y` is a character array, then each element must correspond to one class label.

The length of `Y` and the number of rows of `X` must be equal.

### Name-Value Pair Arguments

Specify optional comma-separated pairs of `Name,Value` arguments. `Name` is the argument name and `Value` is the corresponding value. `Name` must appear inside single quotes (`' '`). You can specify several name and value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`.

Example: `'KFold',8` performs 8-fold cross validation when `SVMModel` is a `ClassificationSVM` classifier.


Cross-validation partition used to compute the transformation function, specified as the comma-separated pair consisting of `'CVPartition'` and a `cvpartition` partition as created by `cvpartition`. You can use only one of these four options at a time for creating a cross-validated model: `'KFold'`, `'Holdout'`, `'Leaveout'`, or `'CVPartition'`.

`crossval` splits the data into subsets using `cvpartition`.

Fraction of data for holdout validation used to compute the transformation function, specified as the comma-separated pair consisting of `'Holdout'` and a scalar value in the range (0,1). Holdout validation tests the specified fraction of the data, and uses the remaining data for training.

You can use only one of these four options at a time for creating a cross-validated model: `'KFold'`, `'Holdout'`, `'Leaveout'`, or `'CVPartition'`.

Example: `'Holdout',0.1`

Data Types: `double` | `single`

Number of folds to use when computing the transformation function, specified as the comma-separated pair consisting of `'KFold'` and a positive integer value greater than 1.

You can use only one of these four options at a time for creating a cross-validated model: `'KFold'`, `'Holdout'`, `'Leaveout'`, or `'CVPartition'`.

Example: `'KFold',8`

Data Types: `single` | `double`

Leave-one-out cross-validation flag, specified as the comma-separated pair consisting of `'Leaveout'` and `'on'` or `'off'`. Specify `'on'` to use leave-one-out cross validation to compute the transformation function.

You can use only one of these four options at a time for creating a cross-validated model: `'KFold'`, `'Holdout'`, `'Leaveout'`, or `'CVPartition'`.

Example: `'Leaveout','on'`

## Output Arguments


**`ScoreSVMModel` — Trained SVM classifier**

Trained SVM classifier containing the estimated score transformation function, returned as a `ClassificationSVM`, `CompactClassificationSVM`, or `ClassificationPartitionedModel` classifier.

The `ScoreSVMModel` classifier type is the same as the `SVMModel` classifier type.

To estimate posterior probabilities, pass `ScoreSVMModel` and predictor data to `predict`. If you set `'Standardize',true` in `fitcsvm` to train `SVMModel`, then `predict` standardizes the columns of `X` using the corresponding means in `SVMModel.Mu` and standard deviations in `SVMModel.Sigma`.

**`ScoreTransform` — Transformation function parameters**

Optimal score-to-posterior-probability transformation function parameters, returned as a structure array. If field `Type` is:

• `sigmoid`, then `ScoreTransform` has these fields:

• `Slope` — The value of `A` in the sigmoid function

• `Intercept` — The value of `B` in the sigmoid function

• `step`, then `ScoreTransform` has these fields:

• `PositiveClassProbability`: the value of π in the step function. π represents:

• The probability that an observation is in the positive class.

• The posterior probability that a score is in the interval (`LowerBound`,`UpperBound`).

• `LowerBound`: the value $\underset{{y}_{n}=-1}{\mathrm{max}}{s}_{n}$ in the step function. It is the lower bound of the score interval to which the software assigns the positive class posterior probability `PositiveClassProbability`. Any observation with a score less than `LowerBound` has a positive class posterior probability of `0`.

• `UpperBound`: the value $\underset{{y}_{n}=+1}{\mathrm{min}}{s}_{n}$ in the step function. It is the upper bound of the score interval to which the software assigns the positive class posterior probability `PositiveClassProbability`. Any observation with a score greater than `UpperBound` has a positive class posterior probability of `1`.

• `constant`, then `ScoreTransform.PredictedClass` contains the name of the class prediction.

This result is the same as `SVMModel.ClassNames`. The posterior probability of an observation being in `ScoreTransform.PredictedClass` is always `1`.

## Definitions

### Sigmoid Function

The sigmoid function that maps the score ${s}_{j}$ corresponding to observation $j$ to the positive class posterior probability is

`$P\left({s}_{j}\right)=\frac{1}{1+\mathrm{exp}\left(A{s}_{j}+B\right)}.$`

If the output argument `ScoreTransform.Type` is `sigmoid`, then the parameters $A$ and $B$ correspond to the fields `Slope` and `Intercept` of `ScoreTransform`, respectively.
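The sigmoid transform is easy to evaluate by hand. The sketch below plugs in the slope and intercept reported in the ionosphere example above; substitute the fitted values from your own `ScoreParameters`.

```matlab
% Evaluate the sigmoid score transform directly.
% A (Slope) and B (Intercept) are the fitted values from the
% ionosphere example above; any other fitted values work the same way.
A = -0.9482;
B = -0.1218;
sigmoidPosterior = @(s) 1 ./ (1 + exp(A.*s + B));

% Because A < 0, large positive scores map to posterior probabilities
% near 1, and large negative scores map to probabilities near 0.
p = sigmoidPosterior([-5 0 5]);
```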

### Step Function

The step function that maps the score ${s}_{j}$ corresponding to observation $j$ to the positive class posterior probability is

`$P\left({s}_{j}\right)=\left\{\begin{array}{ll}0;& {s}_{j}<\underset{{y}_{k}=-1}{\mathrm{max}}{s}_{k}\\ \pi ;& \underset{{y}_{k}=-1}{\mathrm{max}}{s}_{k}\le {s}_{j}\le \underset{{y}_{k}=+1}{\mathrm{min}}{s}_{k}\\ 1;& {s}_{j}>\underset{{y}_{k}=+1}{\mathrm{min}}{s}_{k}\end{array}\right.$`

where:

• ${s}_{j}$ is the score of observation $j$.

• +1 and –1 denote the positive and negative classes, respectively.

• π is the prior probability that an observation is in the positive class.

If the output argument `ScoreTransform.Type` is `step`, then the quantities $\underset{{y}_{k}=-1}{\mathrm{max}}{s}_{k}$ and $\underset{{y}_{k}=+1}{\mathrm{min}}{s}_{k}$ correspond to the fields `LowerBound` and `UpperBound` of `ScoreTransform`, respectively.
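The step transform can be evaluated the same way. This sketch reuses the bounds and positive class probability reported in the iris example above:

```matlab
% Evaluate the step score transform directly.
% lowerBound, upperBound, and piPos reuse ScoreParameters.LowerBound,
% .UpperBound, and .PositiveClassProbability from the iris example above.
lowerBound = -0.8431;
upperBound = 0.6897;
piPos      = 0.5000;

stepPosterior = @(s) double(s > upperBound) + ...
    piPos .* double(s >= lowerBound & s <= upperBound);

% Scores below lowerBound map to 0, scores inside the interval to piPos,
% and scores above upperBound to 1.
p = stepPosterior([-2 0 2]);   % returns [0 0.5 1]
```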

### Constant Function

The constant function maps all scores in a sample to posterior probabilities 1 or 0.

If all observations have posterior probability 1, then they are expected to come from the positive class.

If all observations have posterior probability 0, then they are not expected to come from the positive class.

### Tips

Here is one way to predict positive class posterior probabilities.

1. Train an SVM classifier by passing the data to `fitcsvm`. The result is a trained SVM classifier, such as `SVMModel`, that stores the data. The software sets the score transformation function property (`SVMModel.ScoreTransform`) to `'none'`.

2. Pass the trained SVM classifier `SVMModel` to `fitSVMPosterior` or `fitPosterior`. The result, for example `ScoreSVMModel`, is the same trained SVM classifier as `SVMModel`, except that the software sets `ScoreSVMModel.ScoreTransform` to the optimal score transformation function.

If you skip step 2, then `predict` returns the positive class score rather than the positive class posterior probability.

3. Pass the trained SVM classifier containing the optimal score transformation function (`ScoreSVMModel`) and predictor data matrix to `predict`. The second column of the second output argument stores the positive class posterior probabilities corresponding to each row of the predictor data matrix.
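The three steps can be sketched end to end. This assumes the Statistics and Machine Learning Toolbox is available and reuses the separable setosa/versicolor iris subset from the first example, so expect the step-function warning:

```matlab
% Sketch of the three-step workflow on the setosa/versicolor iris subset.
load fisheriris
classKeep = ~strcmp(species,'virginica');
X = meas(classKeep,3:4);
y = species(classKeep);

SVMModel = fitcsvm(X,y);                       % Step 1: ScoreTransform is 'none'
ScoreSVMModel = fitSVMPosterior(SVMModel);     % Step 2: fit the transformation
[labels,postProbs] = predict(ScoreSVMModel,X); % Step 3: posterior probabilities

% postProbs(:,2) holds the positive class (versicolor) posterior
% probability for each row of X.
```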

### Algorithms

If you reestimate the score-to-posterior-probability transformation function, that is, if you pass an SVM classifier to `fitPosterior` or `fitSVMPosterior` and its `ScoreTransform` property is not `none`, then the software:

• Displays a warning

• Resets the original transformation function to `'none'` before estimating the new one

## References

[1] Platt, J. "Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods". In: Advances in Large Margin Classifiers. Cambridge, MA: The MIT Press, 2000, pp. 61–74.