# resubPredict

Classify observations in multiclass error-correcting output codes (ECOC) model

## Description

label = resubPredict(Mdl) returns a vector of predicted class labels (label) for the trained multiclass error-correcting output codes (ECOC) model Mdl using the predictor data stored in Mdl.X.

The software predicts the classification of an observation by assigning the observation to the class yielding the largest negated average binary loss (or, equivalently, the smallest average binary loss).

label = resubPredict(Mdl,Name,Value) returns predicted class labels with additional options specified by one or more name-value pair arguments. For example, specify the posterior probability estimation method, decoding scheme, or verbosity level.

[label,NegLoss,PBScore] = resubPredict(___) uses any of the input argument combinations in the previous syntaxes and additionally returns the negated average binary loss per class (NegLoss) for observations, and the positive-class scores (PBScore) for the observations classified by each binary learner.

[label,NegLoss,PBScore,Posterior] = resubPredict(___) additionally returns posterior class probability estimates for observations (Posterior).

To obtain posterior class probabilities, you must set 'FitPosterior',true when training the ECOC model using fitcecoc. Otherwise, resubPredict throws an error.
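
For example, a minimal end-to-end call might look like this sketch, using the fisheriris data set that ships with the toolbox:

load fisheriris
Mdl = fitcecoc(meas,species,'FitPosterior',true);      % fit posteriors at training time
[label,NegLoss,PBScore,Posterior] = resubPredict(Mdl);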

## Examples

Load Fisher's iris data set. Specify the predictor data X, the response data Y, and the order of the classes in Y.

load fisheriris
X = meas;
Y = categorical(species);
classOrder = unique(Y);

Train an ECOC model using SVM binary classifiers. Standardize the predictors using an SVM template, and specify the class order.

t = templateSVM('Standardize',true);
Mdl = fitcecoc(X,Y,'Learners',t,'ClassNames',classOrder);

t is an SVM template object. During training, the software uses default values for empty properties in t. Mdl is a ClassificationECOC model.

Predict the labels of the training data. Print a random subset of true and predicted labels.

labels = resubPredict(Mdl);

rng(1); % For reproducibility
n = numel(Y); % Sample size
idx = randsample(n,10);
table(Y(idx),labels(idx),'VariableNames',{'TrueLabels','PredictedLabels'})
ans=10×2 table
TrueLabels    PredictedLabels
__________    _______________

setosa          setosa
versicolor      versicolor
virginica       virginica
setosa          setosa
versicolor      versicolor
setosa          setosa
versicolor      versicolor
versicolor      versicolor
setosa          setosa
setosa          setosa

Mdl correctly labels the observations with indices idx.

Load Fisher's iris data set. Specify the predictor data X, the response data Y, and the order of the classes in Y.

load fisheriris
X = meas;
Y = categorical(species);
classOrder = unique(Y); % Class order

Train an ECOC model using SVM binary classifiers. Standardize the predictors using an SVM template, and specify the class order.

t = templateSVM('Standardize',true);
Mdl = fitcecoc(X,Y,'Learners',t,'ClassNames',classOrder);

t is an SVM template object. During training, the software uses default values for empty properties in t. Mdl is a ClassificationECOC model.

SVM scores are signed distances from the observation to the decision boundary. Therefore, the domain is $(-\infty,\infty)$. Create a custom binary loss function that does the following:

• Map the coding design matrix (M) and positive-class classification scores (s) for each learner to the binary loss for each observation.

• Use linear loss.

• Aggregate the binary learner loss using the median.

You can create a separate function for the binary loss function, and then save it on the MATLAB® path. Or, you can specify an anonymous binary loss function. In this case, create a function handle (customBL) to an anonymous binary loss function.

customBL = @(M,s)nanmedian(1 - bsxfun(@times,M,s),2)/2;
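
On releases that support implicit expansion (R2016b and later), an equivalent anonymous function without bsxfun is the following sketch:

customBL = @(M,s) median((1 - M.*s)/2, 2, 'omitnan');  % same loss, modern idiom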

Predict labels for the training data and estimate the median binary loss per class. Print the median negated binary losses per class for a random set of 10 observations.

[label,NegLoss] = resubPredict(Mdl,'BinaryLoss',customBL);

rng(1); % For reproducibility
n = numel(Y); % Sample size
idx = randsample(n,10);
classOrder
classOrder = 3x1 categorical array
setosa
versicolor
virginica

table(Y(idx),label(idx),NegLoss(idx,:),'VariableNames',...
{'TrueLabel','PredictedLabel','NegLoss'})
ans=10×3 table
TrueLabel     PredictedLabel                NegLoss
__________    ______________    _______________________________

setosa          versicolor      0.12364      1.9566     -3.5803
versicolor      versicolor       -1.017     0.62911     -1.1122
virginica       virginica       -1.9081    -0.21777     0.62589
setosa          versicolor      0.43829      2.2439     -4.1822
versicolor      versicolor      -1.0733     0.39619    -0.82288
setosa          versicolor      0.26656         2.2     -3.9666
versicolor      versicolor      -1.1234     0.69884     -1.0755
versicolor      versicolor      -1.2709     0.51797    -0.74702
setosa          versicolor      0.35175      2.0676     -3.9194
setosa          versicolor      0.23343      2.1883     -3.9217

The order of the columns corresponds to the elements of classOrder. The software predicts the label based on the maximum negated loss. The results indicate that the median of the linear losses might not perform as well as other losses.

Train an ECOC classifier using SVM binary learners. First predict the training-sample labels and class posterior probabilities. Then predict the maximum class posterior probability at each point in a grid. Visualize the results.

Load Fisher's iris data set. Specify the petal dimensions as the predictors and the species names as the response.

load fisheriris
X = meas(:,3:4);
Y = species;
rng(1); % For reproducibility

Create an SVM template. Standardize the predictors, and specify the Gaussian kernel.

t = templateSVM('Standardize',true,'KernelFunction','gaussian');

t is an SVM template. Most of its properties are empty. When the software trains the ECOC classifier, it sets the applicable properties to their default values.

Train the ECOC classifier using the SVM template. Transform classification scores to class posterior probabilities (which are returned by predict or resubPredict) using the 'FitPosterior' name-value pair argument. Specify the class order using the 'ClassNames' name-value pair argument. Display diagnostic messages during training by using the 'Verbose' name-value pair argument.

Mdl = fitcecoc(X,Y,'Learners',t,'FitPosterior',true,...
'ClassNames',{'setosa','versicolor','virginica'},...
'Verbose',2);
Training binary learner 1 (SVM) out of 3 with 50 negative and 50 positive observations.
Negative class indices: 2
Positive class indices: 1

Fitting posterior probabilities for learner 1 (SVM).
Training binary learner 2 (SVM) out of 3 with 50 negative and 50 positive observations.
Negative class indices: 3
Positive class indices: 1

Fitting posterior probabilities for learner 2 (SVM).
Training binary learner 3 (SVM) out of 3 with 50 negative and 50 positive observations.
Negative class indices: 3
Positive class indices: 2

Fitting posterior probabilities for learner 3 (SVM).

Mdl is a ClassificationECOC model. The same SVM template applies to each binary learner, but you can adjust options for each binary learner by passing in a cell vector of templates.

Predict the training-sample labels and class posterior probabilities. Display diagnostic messages during the computation of labels and class posterior probabilities by using the 'Verbose' name-value pair argument.

[label,~,~,Posterior] = resubPredict(Mdl,'Verbose',1);
Predictions from all learners have been computed.
Loss for all observations has been computed.
Computing posterior probabilities...
Mdl.BinaryLoss
ans = 
'quadratic'

The software assigns an observation to the class that yields the smallest average binary loss. Because all binary learners compute posterior probabilities, the binary loss function is quadratic.

Display a random set of results.

idx = randsample(size(X,1),10,1);
Mdl.ClassNames
ans = 3x1 cell array
{'setosa'    }
{'versicolor'}
{'virginica' }

table(Y(idx),label(idx),Posterior(idx,:),...
'VariableNames',{'TrueLabel','PredLabel','Posterior'})
ans=10×3 table
TrueLabel         PredLabel                     Posterior
______________    ______________    ______________________________________

{'virginica' }    {'virginica' }     0.0039316     0.0039864       0.99208
{'virginica' }    {'virginica' }      0.017065      0.018261       0.96467
{'virginica' }    {'virginica' }      0.014946      0.015854        0.9692
{'versicolor'}    {'versicolor'}    2.2197e-14       0.87318       0.12682
{'setosa'    }    {'setosa'    }         0.999    0.00025091     0.0007464
{'versicolor'}    {'virginica' }    2.2195e-14      0.059423       0.94058
{'versicolor'}    {'versicolor'}    2.2194e-14       0.97002      0.029983
{'setosa'    }    {'setosa'    }         0.999    0.00024989    0.00074741
{'versicolor'}    {'versicolor'}     0.0085637       0.98259     0.0088481
{'setosa'    }    {'setosa'    }         0.999    0.00025012    0.00074719

The columns of Posterior correspond to the class order of Mdl.ClassNames.
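
Each row of Posterior is a probability distribution over the classes, so the rows sum to 1 up to numerical error. A quick check:

max(abs(sum(Posterior,2) - 1))   % expected to be near zero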

Define a grid of values in the observed predictor space. Predict the posterior probabilities for each instance in the grid.

xMax = max(X);
xMin = min(X);

x1Pts = linspace(xMin(1),xMax(1));
x2Pts = linspace(xMin(2),xMax(2));
[x1Grid,x2Grid] = meshgrid(x1Pts,x2Pts);

[~,~,~,PosteriorRegion] = predict(Mdl,[x1Grid(:),x2Grid(:)]);

For each coordinate on the grid, plot the maximum class posterior probability among all classes.

contourf(x1Grid,x2Grid,...
reshape(max(PosteriorRegion,[],2),size(x1Grid,1),size(x1Grid,2)));
h = colorbar;
h.YLabel.String = 'Maximum posterior';
h.YLabel.FontSize = 15;

hold on
gh = gscatter(X(:,1),X(:,2),Y,'krk','*xd',8);
gh(2).LineWidth = 2;
gh(3).LineWidth = 2;

title('Iris Petal Measurements and Maximum Posterior')
xlabel('Petal length (cm)')
ylabel('Petal width (cm)')
axis tight
legend(gh,'Location','NorthWest')
hold off

Train a multiclass ECOC model and estimate the posterior probabilities using parallel computing.

Load the arrhythmia data set. Examine the response data Y, and determine the number of classes.

load arrhythmia
Y = categorical(Y);
tabulate(Y)
Value    Count   Percent
1      245     54.20%
2       44      9.73%
3       15      3.32%
4       15      3.32%
5       13      2.88%
6       25      5.53%
7        3      0.66%
8        2      0.44%
9        9      1.99%
10       50     11.06%
14        4      0.88%
15        5      1.11%
16       22      4.87%
K = numel(unique(Y));

Several classes are not represented in the data, and many other classes have low relative frequencies.

Specify an ensemble learning template that uses the GentleBoost method and 50 weak classification tree learners.

t = templateEnsemble('GentleBoost',50,'Tree');

t is a template object. Most of its properties are empty ([]). The software uses default values for all empty properties during training.

Because the response variable contains many classes, specify a sparse random coding design.

rng(1); % For reproducibility
Coding = designecoc(K,'sparserandom');

Train an ECOC model using parallel computing. Specify to fit posterior probabilities.

pool = parpool;                      % Invokes workers
Starting parallel pool (parpool) using the 'local' profile ...
Connected to the parallel pool (number of workers: 6).
options = statset('UseParallel',true);
Mdl = fitcecoc(X,Y,'Learners',t,'Options',options,'Coding',Coding,...
'FitPosterior',true);

Mdl is a ClassificationECOC model. You can access its properties using dot notation.

The pool invokes six workers, although the number of workers might vary among systems.

Estimate posterior probabilities, and display the posterior probability of being classified as not having arrhythmia (class 1) given a random subset of the training data.

[~,~,~,posterior] = resubPredict(Mdl);

n = numel(Y);
idx = randsample(n,10,1);
table(idx,Y(idx),posterior(idx,1),...
'VariableNames',{'ObservationIndex','TrueLabel','PosteriorNoArrhythmia'})
ans=10×3 table
ObservationIndex    TrueLabel    PosteriorNoArrhythmia
________________    _________    ____________________

79              1                0.93436
248              1                0.95574
398              10              0.032378
207              1                0.97965
340              1                0.93656
206              1                0.97795
345              10              0.015642
296              2                0.13433
391              1                 0.9648
406              1                0.94861

## Input Arguments

Mdl — Full, trained multiclass ECOC model, specified as a ClassificationECOC model trained with fitcecoc.

### Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: resubPredict(Mdl,'BinaryLoss','linear','Decoding','lossbased') specifies a linear binary learner loss function and a loss-based decoding scheme for aggregating the binary losses.

Binary learner loss function, specified as the comma-separated pair consisting of 'BinaryLoss' and a built-in loss function name or function handle.

• This table describes the built-in functions, where $y_j$ is a class label for a particular binary learner (in the set {–1,1,0}), $s_j$ is the score for observation j, and $g(y_j,s_j)$ is the binary loss formula.

| Value | Description | Score Domain | $g(y_j,s_j)$ |
| --- | --- | --- | --- |
| 'binodeviance' | Binomial deviance | $(-\infty,\infty)$ | $\log[1 + \exp(-2y_j s_j)]/[2\log(2)]$ |
| 'exponential' | Exponential | $(-\infty,\infty)$ | $\exp(-y_j s_j)/2$ |
| 'hamming' | Hamming | $[0,1]$ or $(-\infty,\infty)$ | $[1 - \operatorname{sign}(y_j s_j)]/2$ |
| 'hinge' | Hinge | $(-\infty,\infty)$ | $\max(0,1 - y_j s_j)/2$ |
| 'linear' | Linear | $(-\infty,\infty)$ | $(1 - y_j s_j)/2$ |
| 'logit' | Logistic | $(-\infty,\infty)$ | $\log[1 + \exp(-y_j s_j)]/[2\log(2)]$ |

The software normalizes binary losses so that the loss is 0.5 when $y_j = 0$. Also, the software calculates the mean binary loss for each class.

• For a custom binary loss function, for example customFunction, specify its function handle 'BinaryLoss',@customFunction.

customFunction has this form:

bLoss = customFunction(M,s)
where:

• M is the K-by-L coding matrix stored in Mdl.CodingMatrix.

• s is the 1-by-L row vector of classification scores.

• bLoss is the classification loss. This scalar aggregates the binary losses for every learner in a particular class. For example, you can use the mean binary loss to aggregate the loss over the learners for each class.

• K is the number of classes.

• L is the number of binary learners.

For an example of passing a custom binary loss function, see Predict Test-Sample Labels of ECOC Model Using Custom Binary Loss Function.
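
For illustration, a named custom loss in this form might look like the following sketch, which mirrors the anonymous example above but aggregates with the mean (customFunction is a placeholder name; save it as customFunction.m):

function bLoss = customFunction(M,s)
% M is the K-by-L coding matrix; s is a 1-by-L row vector of scores.
% Aggregate the linear binary loss over the learners for each class.
bLoss = mean((1 - M.*s)/2, 2);   % K-by-1 vector of aggregated losses
end

You then pass it as 'BinaryLoss',@customFunction.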

The default BinaryLoss value depends on the score ranges returned by the binary learners. This table describes some default BinaryLoss values based on the given assumptions.

| Assumption | Default Value |
| --- | --- |
| All binary learners are SVMs or linear or kernel classification models of SVM learners. | 'hinge' |
| All binary learners are ensembles trained by AdaBoostM1 or GentleBoost. | 'exponential' |
| All binary learners are ensembles trained by LogitBoost. | 'binodeviance' |
| All binary learners are linear or kernel classification models of logistic regression learners, or you specify to predict class posterior probabilities by setting 'FitPosterior',true in fitcecoc. | 'quadratic' |

To check the default value, use dot notation to display the BinaryLoss property of the trained model at the command line.

Example: 'BinaryLoss','binodeviance'

Data Types: char | string | function_handle

Decoding scheme that aggregates the binary losses, specified as the comma-separated pair consisting of 'Decoding' and 'lossweighted' or 'lossbased'. For more information, see Binary Loss.

Example: 'Decoding','lossbased'

Number of random initial values for fitting posterior probabilities by Kullback-Leibler divergence minimization, specified as the comma-separated pair consisting of 'NumKLInitializations' and a nonnegative integer scalar.

If you do not request the fourth output argument (Posterior) and set 'PosteriorMethod','kl' (the default), then the software ignores the value of NumKLInitializations.

For more details, see Posterior Estimation Using Kullback-Leibler Divergence.

Example: 'NumKLInitializations',5

Data Types: single | double

Estimation options, specified as the comma-separated pair consisting of 'Options' and a structure array returned by statset.

To invoke parallel computing:

• You need a Parallel Computing Toolbox™ license.

• Specify 'Options',statset('UseParallel',true).
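
For instance, a sketch of a parallel resubstitution prediction, assuming a trained Mdl and an open parallel pool:

options = statset('UseParallel',true);            % enable parallel estimation
[label,NegLoss] = resubPredict(Mdl,'Options',options);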

Posterior probability estimation method, specified as the comma-separated pair consisting of 'PosteriorMethod' and 'kl' or 'qp'.

• If PosteriorMethod is 'kl', then the software estimates multiclass posterior probabilities by minimizing the Kullback-Leibler divergence between the predicted and expected posterior probabilities returned by binary learners. For details, see Posterior Estimation Using Kullback-Leibler Divergence.

• If PosteriorMethod is 'qp', then the software estimates multiclass posterior probabilities by solving a least-squares problem using quadratic programming. You need an Optimization Toolbox™ license to use this option. For details, see Posterior Estimation Using Quadratic Programming.

• If you do not request the fourth output argument (Posterior), then the software ignores the value of PosteriorMethod.

Example: 'PosteriorMethod','qp'

Verbosity level, specified as the comma-separated pair consisting of 'Verbose' and 0 or 1. Verbose controls the number of diagnostic messages that the software displays in the Command Window.

If Verbose is 0, then the software does not display diagnostic messages. Otherwise, the software displays diagnostic messages.

Example: 'Verbose',1

Data Types: single | double

## Output Arguments

Predicted class labels, returned as a categorical or character array, logical or numeric vector, or cell array of character vectors.

label has the same data type as Mdl.ClassNames and has the same number of rows as Mdl.X.

The software predicts the classification of an observation by assigning the observation to the class yielding the largest negated average binary loss (or, equivalently, the smallest average binary loss).

Negated average binary losses, returned as a numeric matrix. NegLoss is an n-by-K matrix, where n is the number of observations (size(Mdl.X,1)) and K is the number of unique classes (size(Mdl.ClassNames,1)).
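
For example, continuing from the first example's Mdl, you can confirm that each predicted label corresponds to the column of NegLoss with the largest value (a quick check, assuming no ties):

[label,NegLoss] = resubPredict(Mdl);
[~,col] = max(NegLoss,[],2);            % column index of the largest negated loss
isequal(label, Mdl.ClassNames(col))     % expected to return logical 1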

Positive-class scores for each binary learner, returned as a numeric matrix. PBScore is an n-by-L matrix, where n is the number of observations (size(Mdl.X,1)) and L is the number of binary learners (size(Mdl.CodingMatrix,2)).

Posterior class probabilities, returned as a numeric matrix. Posterior is an n-by-K matrix, where n is the number of observations (size(Mdl.X,1)) and K is the number of unique classes (size(Mdl.ClassNames,1)).

To request Posterior, you must set 'FitPosterior',true when training the ECOC model using fitcecoc. Otherwise, the software throws an error.

## More About

### Binary Loss

A binary loss is a function of the class and classification score that determines how well a binary learner classifies an observation into the class.

Suppose the following:

• $m_{kj}$ is element (k,j) of the coding design matrix M (that is, the code corresponding to class k of binary learner j).

• $s_j$ is the score of binary learner j for an observation.

• g is the binary loss function.

• $\hat{k}$ is the predicted class for the observation.

In loss-based decoding [Escalera et al.], the class producing the minimum sum of the binary losses over binary learners determines the predicted class of an observation, that is,

$\hat{k} = \underset{k}{\operatorname{argmin}} \sum_{j=1}^{L} |m_{kj}| \, g(m_{kj}, s_j).$

In loss-weighted decoding [Escalera et al.], the class producing the minimum average of the binary losses over binary learners determines the predicted class of an observation, that is,

$\hat{k} = \underset{k}{\operatorname{argmin}} \dfrac{\sum_{j=1}^{L} |m_{kj}| \, g(m_{kj}, s_j)}{\sum_{j=1}^{L} |m_{kj}|}.$

Allwein et al. suggest that loss-weighted decoding improves classification accuracy by keeping loss values for all classes in the same dynamic range.
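
Both decoding schemes are easy to compute directly. A minimal sketch with the linear binary loss and hypothetical inputs M (coding matrix) and s (scores of the L binary learners for one observation):

M = [1 1 0; -1 0 1; 0 -1 -1];            % hypothetical K-by-L coding matrix (K = 3, L = 3)
s = [0.5 -0.2 0.9];                      % hypothetical scores s_j
g = (1 - M.*s)/2;                        % linear binary loss g(m_kj, s_j)
lossBased = sum(abs(M).*g, 2);                       % loss-based sums per class
lossWeighted = sum(abs(M).*g, 2) ./ sum(abs(M), 2);  % loss-weighted averages per class
[~,khat] = min(lossWeighted)             % index of the predicted class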

This table summarizes the supported loss functions, where $y_j$ is a class label for a particular binary learner (in the set {–1,1,0}), $s_j$ is the score for observation j, and $g(y_j,s_j)$ is the binary loss formula.

| Value | Description | Score Domain | $g(y_j,s_j)$ |
| --- | --- | --- | --- |
| 'binodeviance' | Binomial deviance | $(-\infty,\infty)$ | $\log[1 + \exp(-2y_j s_j)]/[2\log(2)]$ |
| 'exponential' | Exponential | $(-\infty,\infty)$ | $\exp(-y_j s_j)/2$ |
| 'hamming' | Hamming | $[0,1]$ or $(-\infty,\infty)$ | $[1 - \operatorname{sign}(y_j s_j)]/2$ |
| 'hinge' | Hinge | $(-\infty,\infty)$ | $\max(0,1 - y_j s_j)/2$ |
| 'linear' | Linear | $(-\infty,\infty)$ | $(1 - y_j s_j)/2$ |
| 'logit' | Logistic | $(-\infty,\infty)$ | $\log[1 + \exp(-y_j s_j)]/[2\log(2)]$ |

The software normalizes binary losses such that the loss is 0.5 when $y_j = 0$, and aggregates using the average of the binary learners [Allwein et al.].

Do not confuse the binary loss with the overall classification loss (specified by the 'LossFun' name-value pair argument of the loss and predict object functions), which measures how well an ECOC classifier performs as a whole.

## Algorithms

The software can estimate class posterior probabilities by minimizing the Kullback-Leibler divergence or by using quadratic programming. For the following descriptions of the posterior estimation algorithms, assume that:

• $m_{kj}$ is the element (k,j) of the coding design matrix M.

• I is the indicator function.

• $\hat{p}_k$ is the class posterior probability estimate for class k of an observation, k = 1,...,K.

• $r_j$ is the positive-class posterior probability for binary learner j. That is, $r_j$ is the probability that binary learner j classifies an observation into the positive class, given the training data.

### Posterior Estimation Using Kullback-Leibler Divergence

By default, the software minimizes the Kullback-Leibler divergence to estimate class posterior probabilities. The Kullback-Leibler divergence between the expected and observed positive-class posterior probabilities is

$\Delta(r,\hat{r}) = \sum_{j=1}^{L} w_j \left[ r_j \log\frac{r_j}{\hat{r}_j} + (1 - r_j) \log\frac{1 - r_j}{1 - \hat{r}_j} \right],$

where $w_j = \sum_{S_j} w_i^{\ast}$ is the weight for binary learner j.

• $S_j$ is the set of observation indices on which binary learner j is trained.

• $w_i^{\ast}$ is the weight of observation i.

The software minimizes the divergence iteratively. The first step is to choose initial values $\hat{p}_k^{(0)}$, k = 1,...,K, for the class posterior probabilities.

• If you do not specify 'NumKLInitializations', then the software tries both sets of deterministic initial values described next, and selects the set that minimizes Δ.

• $\hat{p}_k^{(0)} = 1/K$, k = 1,...,K.

• $\hat{p}_k^{(0)}$, k = 1,...,K, is the solution of the system

$M_{01}\hat{p}^{(0)} = r,$

where M01 is M with all mkj = –1 replaced with 0, and r is a vector of positive-class posterior probabilities returned by the L binary learners [Dietterich et al.]. The software uses lsqnonneg to solve the system.

• If you specify 'NumKLInitializations',c, where c is a natural number, then the software does the following to choose the set $\hat{p}_k^{(0)}$, k = 1,...,K, and selects the set that minimizes Δ.

• The software tries both sets of deterministic initial values as described previously.

• The software randomly generates c vectors of length K using rand, and then normalizes each vector to sum to 1.
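
A minimal sketch of the second deterministic initialization above, using lsqnonneg with hypothetical inputs M and r (the system is read per binary learner, hence the transpose):

M = [1 1 0; -1 0 1; 0 -1 -1];    % hypothetical K-by-L coding matrix
r = [0.8 0.7 0.4];               % hypothetical positive-class posteriors (1-by-L)
M01 = M;
M01(M01 == -1) = 0;              % replace all m_kj = -1 entries with 0
p0 = lsqnonneg(M01.', r(:));     % nonnegative least-squares solution
p0 = p0/sum(p0);                 % normalize to a probability vector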

At iteration t, the software completes these steps:

1. Compute

$\hat{r}_j^{(t)} = \dfrac{\sum_{k=1}^{K} \hat{p}_k^{(t)} I(m_{kj} = +1)}{\sum_{k=1}^{K} \hat{p}_k^{(t)} I(m_{kj} = +1 \cup m_{kj} = -1)}.$

2. Estimate the next class posterior probability using

$\hat{p}_k^{(t+1)} = \hat{p}_k^{(t)} \dfrac{\sum_{j=1}^{L} w_j \left[ r_j I(m_{kj} = +1) + (1 - r_j) I(m_{kj} = -1) \right]}{\sum_{j=1}^{L} w_j \left[ \hat{r}_j^{(t)} I(m_{kj} = +1) + \left(1 - \hat{r}_j^{(t)}\right) I(m_{kj} = -1) \right]}.$

3. Normalize $\hat{p}_k^{(t+1)}$, k = 1,...,K, so that they sum to 1.

4. Check for convergence.
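
A minimal sketch of this iteration under the definitions above, using hypothetical inputs M, r, and w, and a fixed iteration count in place of the convergence test:

M = [1 1 0; -1 0 1; 0 -1 -1];    % hypothetical K-by-L coding matrix (K = 3, L = 3)
r = [0.8 0.7 0.4];               % hypothetical positive-class posteriors r_j
w = [1 1 1];                     % hypothetical binary learner weights w_j
p = ones(3,1)/3;                 % initial estimates p_k^(0) = 1/K

Ipos = (M == 1);                 % indicator I(m_kj = +1)
Ineg = (M == -1);                % indicator I(m_kj = -1)
for t = 1:100                    % fixed count; the software checks convergence instead
    rhat = (p.'*Ipos) ./ (p.'*(Ipos | Ineg));      % expected positive-class probs
    num = (Ipos.*r + Ineg.*(1 - r)) * w.';         % weighted observed agreement
    den = (Ipos.*rhat + Ineg.*(1 - rhat)) * w.';   % weighted expected agreement
    p = p .* (num./den);                           % multiplicative update
    p = p/sum(p);                                  % renormalize to sum to 1
end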

For more details, see [Hastie et al.] and [Zadrozny].

### Posterior Estimation Using Quadratic Programming

Posterior probability estimation using quadratic programming requires an Optimization Toolbox license. To estimate posterior probabilities for an observation using this method, the software completes these steps:

1. Estimate the positive-class posterior probabilities, $r_j$, for binary learners j = 1,...,L.

2. Using the relationship between $r_j$ and $\hat{p}_k$ [Wu et al.], minimize

$\sum_{j=1}^{L} \left[ -r_j \sum_{k=1}^{K} \hat{p}_k I(m_{kj} = -1) + (1 - r_j) \sum_{k=1}^{K} \hat{p}_k I(m_{kj} = +1) \right]^{2}$

with respect to $\hat{p}_k$ and the restrictions

$0 \le \hat{p}_k \le 1, \qquad \sum_{k} \hat{p}_k = 1.$

The software performs minimization using quadprog.
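
A minimal sketch of this formulation with quadprog (Optimization Toolbox required; M and r are hypothetical inputs):

M = [1 1 0; -1 0 1; 0 -1 -1];    % hypothetical K-by-L coding matrix
r = [0.8 0.7 0.4];               % hypothetical positive-class posteriors (1-by-L)
K = size(M,1);

% Row j of A encodes the j-th squared term as a linear function of p.
Mt = M.';                                        % L-by-K
A = -r(:).*(Mt == -1) + (1 - r(:)).*(Mt == 1);   % L-by-K coefficient matrix

% Minimize p'*(A'*A)*p subject to 0 <= p <= 1 and sum(p) == 1.
H = 2*(A.'*A);                   % quadprog minimizes 0.5*p'*H*p + f'*p
f = zeros(K,1);
p = quadprog(H, f, [], [], ones(1,K), 1, zeros(K,1), ones(K,1));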

## References

[1] Allwein, E., R. Schapire, and Y. Singer. “Reducing multiclass to binary: A unifying approach for margin classifiers.” Journal of Machine Learning Research. Vol. 1, 2000, pp. 113–141.

[2] Dietterich, T., and G. Bakiri. “Solving Multiclass Learning Problems Via Error-Correcting Output Codes.” Journal of Artificial Intelligence Research. Vol. 2, 1995, pp. 263–286.

[3] Escalera, S., O. Pujol, and P. Radeva. “On the decoding process in ternary error-correcting output codes.” IEEE Transactions on Pattern Analysis and Machine Intelligence. Vol. 32, Issue 7, 2010, pp. 120–134.

[4] Escalera, S., O. Pujol, and P. Radeva. “Separability of ternary codes for sparse designs of error-correcting output codes.” Pattern Recognition Letters. Vol. 30, Issue 3, 2009, pp. 285–297.

[5] Hastie, T., and R. Tibshirani. “Classification by Pairwise Coupling.” Annals of Statistics. Vol. 26, Issue 2, 1998, pp. 451–471.

[6] Wu, T. F., C. J. Lin, and R. Weng. “Probability Estimates for Multi-Class Classification by Pairwise Coupling.” Journal of Machine Learning Research. Vol. 5, 2004, pp. 975–1005.

[7] Zadrozny, B. “Reducing Multiclass to Binary by Coupling Probability Estimates.” Advances in Neural Information Processing Systems 14 (NIPS 2001), 2001, pp. 1041–1048.