E = edge(obj,X,Y)
E = edge(obj,X,Y,Name,Value)
Matrix where each row represents an observation and each column
represents a predictor. The number of columns in X must equal the
number of predictors used to train obj.
Class labels, with the same data type as exists in obj.
Specify optional comma-separated pairs of Name,Value arguments.
Name is the argument name and Value is the corresponding value.
Name must appear inside single quotes (' '). You can specify
several name and value pair arguments in any order as
Name1,Value1,...,NameN,ValueN.
Observation weights, a numeric vector with one element per row of
X (length size(X,1)).
Edge, a scalar representing the weighted average value of the margin.
The edge is the weighted mean value of the classification margin. The weights are class prior probabilities. If you supply additional weights, those weights are normalized to sum to the prior probabilities in the respective classes, and are then used to compute the weighted average.
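The normalization described above can be illustrated outside MATLAB. The following is a minimal sketch in Python/NumPy of the same arithmetic, using hypothetical margins, weights, and priors (none of these numbers come from the iris example): user-supplied weights are rescaled so that the weights within each class sum to that class's prior, and the edge is then the weighted sum of the margins.

```python
import numpy as np

# Hypothetical data: margins and observation weights for 4 observations
# belonging to 2 classes with equal priors.
margin = np.array([0.65, 0.30, -0.05, 0.20])
true_class = np.array([0, 1, 0, 1])   # class index of each observation
weights = np.array([2.0, 1.0, 1.0, 1.0])  # user-supplied observation weights
priors = np.array([0.5, 0.5])             # class prior probabilities

# Normalize the weights so that, within each class, they sum to the prior.
w = weights.astype(float).copy()
for k, prior in enumerate(priors):
    in_class = true_class == k
    w[in_class] *= prior / w[in_class].sum()

# The normalized weights now sum to 1 overall, so this weighted sum
# is the weighted mean of the margins, i.e. the edge.
edge = np.sum(w * margin)
```

Because the priors sum to 1, the normalized weights also sum to 1, which is what makes the final sum a weighted average rather than just a weighted total.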
The classification margin is the difference between the classification score for the true class and the maximal classification score for the false classes.
The classification margin is a column vector with the same number of
rows as the matrix X. A high value of margin indicates a more
reliable prediction than a low value.
For discriminant analysis, the score of a classification is the posterior probability of the classification. For the definition of posterior probability in discriminant analysis, see Posterior Probability.
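The margin computation described above (true-class score minus the best competing score) can be sketched outside MATLAB as well. Below is a small Python/NumPy illustration on a hypothetical matrix of posterior probabilities; the scores and class labels are made up for the sake of the example.

```python
import numpy as np

# Hypothetical posterior scores for 4 observations over 3 classes;
# each row sums to 1, as posterior probabilities do.
scores = np.array([
    [0.80, 0.15, 0.05],
    [0.30, 0.60, 0.10],
    [0.45, 0.50, 0.05],
    [0.20, 0.30, 0.50],
])
true_class = np.array([0, 1, 0, 2])  # index of the true class per row

n = scores.shape[0]
true_score = scores[np.arange(n), true_class]

# Mask out the true class, then take the best score among the others.
masked = scores.copy()
masked[np.arange(n), true_class] = -np.inf
best_other = masked.max(axis=1)

# A positive margin means the true class outscored every other class.
margin = true_score - best_other
```

In the third row the true class (index 0, score 0.45) is beaten by another class (score 0.50), so its margin is negative, matching the negative entries visible in the example output below.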
Compute the classification edge and margin for the Fisher iris data, trained on its first two columns of data, and view the last 11 entries:

    load fisheriris
    X = meas(:,1:2);
    obj = fitcdiscr(X,species);
    E = edge(obj,X,species)

    E =
        0.4980

    M = margin(obj,X,species);
    M(end-10:end)

    ans =
        0.6551
        0.4838
        0.6551
       -0.5127
        0.5659
        0.4611
        0.4949
        0.1024
        0.2787
       -0.1439
       -0.4444
The classifier trained on all the data is better:
    obj = fitcdiscr(meas,species);
    E = edge(obj,meas,species)

    E =
        0.9454

    M = margin(obj,meas,species);
    M(end-10:end)

    ans =
        0.9983
        1.0000
        0.9991
        0.9978
        1.0000
        1.0000
        0.9999
        0.9882
        0.9937
        1.0000
        0.9649