
# resubLoss

Class: ClassificationKNN

Loss of k-nearest neighbor classifier by resubstitution

## Syntax

`L = resubLoss(mdl)`

`L = resubLoss(mdl,Name,Value)`

## Description

`L = resubLoss(mdl)` returns the resubstitution loss, meaning the loss computed for the data that `fitcknn` used to create `mdl`.

`L = resubLoss(mdl,Name,Value)` returns loss statistics with additional options specified by one or more `Name,Value` pair arguments.
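As a minimal sketch (assuming `mdl` is a `ClassificationKNN` model returned by `fitcknn`), the two call forms look like this; the available `'LossFun'` names are listed under Name-Value Pair Arguments below.

```
% Minimal sketch (assumes mdl is a ClassificationKNN model from fitcknn).
L  = resubLoss(mdl);                            % default loss on the training data
Le = resubLoss(mdl,'LossFun','classiferror');   % built-in loss selected by name
```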

## Input Arguments


`mdl` — k-nearest neighbor classifier model, specified as a classifier object created by `fitcknn`.

Note that using the `'CrossVal'`, `'KFold'`, `'Holdout'`, `'Leaveout'`, or `'CVPartition'` options in `fitcknn` results in a model of class `ClassificationPartitionedModel`. You cannot use a partitioned model for prediction, so this kind of model does not have a `predict` method.

Otherwise, `mdl` is of class `ClassificationKNN`, and you can use the `predict` method to make predictions.
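The following sketch illustrates the distinction; the variable names and options are for illustration only. A plain `fitcknn` call returns a `ClassificationKNN` object that `resubLoss` accepts, while a cross-validated call returns a `ClassificationPartitionedModel`, whose loss is typically obtained with `kfoldLoss` instead.

```
% Sketch: resubstitution loss applies to a ClassificationKNN model;
% a cross-validated fit produces a ClassificationPartitionedModel.
load fisheriris
mdl   = fitcknn(meas,species,'NumNeighbors',5);                  % ClassificationKNN
cvmdl = fitcknn(meas,species,'NumNeighbors',5,'CrossVal','on');  % ClassificationPartitionedModel

Lresub = resubLoss(mdl);      % resubstitution loss on the training data
Lcv    = kfoldLoss(cvmdl);    % cross-validated loss for the partitioned model
```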

### Name-Value Pair Arguments

Specify optional comma-separated pairs of `Name,Value` arguments. `Name` is the argument name and `Value` is the corresponding value. `Name` must appear inside single quotes (`' '`). You can specify several name and value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`.


Loss function, specified as the comma-separated pair consisting of `'LossFun'` and either a built-in loss function name or a function handle.

• The following table lists the available built-in loss functions. Specify one using its corresponding character vector.

| Value | Description |
| --- | --- |
| `'binodeviance'` | Binomial deviance |
| `'classiferror'` | Classification error |
| `'exponential'` | Exponential |
| `'hinge'` | Hinge |
| `'logit'` | Logistic |
| `'mincost'` | Minimal expected misclassification cost (for classification scores that are posterior probabilities) |
| `'quadratic'` | Quadratic |

`'mincost'` is appropriate for classification scores that are posterior probabilities. k-nearest neighbor models return posterior probabilities as classification scores by default (see `predict`).

• Specify your own function using function handle notation.

Suppose that `n` is the number of observations in `X` and `K` is the number of distinct classes (`numel(mdl.ClassNames)`). Your function must have this signature:

`lossvalue = lossfun(C,S,W,Cost)`

where:

• The output argument `lossvalue` is a scalar.

• You choose the function name (`lossfun`).

• `C` is an `n`-by-`K` logical matrix with rows indicating the class to which the corresponding observation belongs. The column order corresponds to the class order in `mdl.ClassNames`.

Construct `C` by setting `C(p,q) = 1` if observation `p` is in class `q`, for each row. Set all other elements of row `p` to `0`.

• `S` is an `n`-by-`K` numeric matrix of classification scores. The column order corresponds to the class order in `mdl.ClassNames`. `S` is a matrix of classification scores, similar to the output of `predict`.

• `W` is an `n`-by-1 numeric vector of observation weights. If you pass `W`, the software normalizes them to sum to `1`.

• `Cost` is a `K`-by-`K` numeric matrix of misclassification costs. For example, `Cost = ones(K) - eye(K)` specifies a cost of `0` for correct classification and `1` for misclassification.

Specify your function using `'LossFun',@lossfun`.
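For illustration, here is a hedged sketch of a hypothetical custom loss, a weighted misclassification rate rather than one of the built-in functions, written with the required signature. The function name `weightedError` is an assumption for this example.

```
% weightedError.m -- hypothetical custom loss with the required signature.
% C    : n-by-K logical matrix of true class memberships
% S    : n-by-K matrix of classification scores
% W    : n-by-1 observation weights, normalized to sum to 1
% Cost : K-by-K misclassification cost matrix (unused in this sketch)
function lossvalue = weightedError(C,S,W,Cost)
    [~,trueClass] = max(C,[],2);    % column index of the true class
    [~,predClass] = max(S,[],2);    % class with the highest score
    lossvalue = sum(W .* (predClass ~= trueClass));  % weighted error rate
end
```

You would then call `L = resubLoss(mdl,'LossFun',@weightedError)`.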

For more details on loss functions, see Classification Loss.

Data Types: `char` | `function_handle`

## Output Arguments

`L` — Classification loss, a scalar. The interpretation of the loss depends on the observation weights and on the loss function specified by `'LossFun'`.

## Examples


Construct a k-nearest neighbor classifier for the Fisher iris data, where k = 5.

```
load fisheriris
```

Construct a classifier for 5-nearest neighbors.

```
mdl = fitcknn(meas,species,'NumNeighbors',5);
```

Examine the resubstitution loss of the classifier.

```
L = resubLoss(mdl)
```
```
L = 0.0333
```

The classifier misclassifies 1/30 of its training data (5 of the 150 observations).
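As a cross-check, and assuming, per the statement above, that the reported loss matches the misclassification rate on the training data, the same fraction can be recovered from the resubstitution predictions:

```
% Compare resubstitution predictions with the true labels.
pred = resubPredict(mdl);
misclassRate = mean(~strcmp(pred,species))   % about 0.0333
```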
