E = edge(ens,tbl,ResponseVarName)
E = edge(ens,tbl,Y)
E = edge(ens,X,Y)
E = edge(___,Name,Value)
E = edge(ens,tbl,ResponseVarName) returns the classification edge for ens with data tbl and classification tbl.ResponseVarName.

E = edge(ens,tbl,Y) returns the classification edge for ens with data tbl and classification Y.

E = edge(ens,X,Y) returns the classification edge for ens with data X and classification Y.

E = edge(___,Name,Value) computes the edge with additional options specified by one or more Name,Value pair arguments, using any of the previous syntaxes.
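As a sketch of the table-based syntax, assuming a MATLAB release in which fitensemble accepts table input and the fisheriris sample data set is available (the predictor variable names SL, SW, PL, PW are illustrative, not part of the data set):

```matlab
% Sketch: edge with a table and a response variable name.
% Assumes the Statistics and Machine Learning Toolbox and the
% fisheriris sample data set; variable names are illustrative.
load fisheriris
tbl = array2table(meas,'VariableNames',{'SL','SW','PL','PW'});
tbl.Species = species;                           % response variable in the table
ens = fitensemble(tbl,'Species','AdaBoostM2',50,'Tree');
E = edge(ens,tbl,'Species')                      % resubstitution edge
```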

ens

A classification ensemble constructed with fitensemble, or a compact classification ensemble constructed with compact.

tbl

Sample data, specified as a table. Each row of tbl corresponds to one observation, and each column corresponds to one predictor variable. If you trained ens using sample data contained in a table, then the input data for this method must also be in a table.

ResponseVarName

Response variable name, specified as the name of a variable in tbl. You must specify ResponseVarName as a character vector, for example 'Response'.

X

A matrix where each row represents an observation, and each column represents a predictor. The number of columns in X must equal the number of predictors used to train ens. If you trained ens using sample data contained in a matrix, then the input data for this method must also be in a matrix.

Y

Class labels, with the same data type as exists in ens. The number of elements of Y must equal the number of rows of tbl or X.
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

'learners'

Indices of weak learners in the ensemble, ranging from 1 to ens.NumTrained. edge uses only these learners for calculating the edge.

Default: 1:NumTrained

'mode'

Meaning of the output E:

'ensemble': E is a scalar value, the edge for the entire ensemble.
'individual': E is a vector with one element per trained learner.
'cumulative': E is a vector in which element J is obtained by using learners 1:J.

Default: 'ensemble'

'UseObsForLearner'

A logical matrix of size N-by-T, where N is the number of observations and T is the number of weak learners in ens. When UseObsForLearner(i,j) is true, learner j is used in computing the edge of observation i.

Default: true(N,T)

'weights'

Observation weights, a numeric vector of length size(X,1). If you supply weights, edge computes the weighted classification edge.

Default: ones(size(X,1),1)
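A brief sketch of the mode name-value pair, assuming the ionosphere sample data set is available:

```matlab
% Sketch: compare the scalar ensemble edge with the per-learner forms.
load ionosphere
ens  = fitensemble(X,Y,'AdaBoostM1',100,'Tree');
Eall = edge(ens,X,Y)                       % 'ensemble' (default): one scalar
Ecum = edge(ens,X,Y,'mode','cumulative');  % 100-by-1: element J uses learners 1:J
Eind = edge(ens,X,Y,'mode','individual');  % 100-by-1: one edge per weak learner
```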

E

The classification edge, a vector or scalar depending on the setting of the mode name-value pair.
The classification margin is the difference between the classification score for the true class and the maximal classification score for the false classes. Margin is a column vector with the same number of rows as the matrix X.
For ensembles, a classification score represents the confidence of a classification into a class. The higher the score, the higher the confidence.
Different ensemble algorithms have different definitions for their scores. Furthermore, the range of scores depends on ensemble type. For example:
AdaBoostM1 scores range from –∞ to ∞.
Bag scores range from 0 to 1.
The edge is the weighted mean value of the classification margin. The weights are the class probabilities in ens.Prior. If you supply weights in the weights name-value pair, those weights are used instead of class probabilities.
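The relationship between edge and margin can be sketched numerically with the ensemble's margin method. This is an illustration only: exact agreement between the two quantities depends on how edge normalizes the supplied weights.

```matlab
% Sketch: the edge as a weighted mean of per-observation margins.
load ionosphere
ens = fitensemble(X,Y,'AdaBoostM1',100,'Tree');
m = margin(ens,X,Y);              % per-observation classification margins
w = ones(size(X,1),1);            % uniform observation weights
E = edge(ens,X,Y,'weights',w);    % weighted classification edge
Emanual = sum(w.*m)/sum(w);       % weighted mean of the same margins
```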
Make a boosted ensemble classifier for the ionosphere data, and find the classification edge for the last few rows:

load ionosphere
ens = fitensemble(X,Y,'AdaBoostM1',100,'Tree');
E = edge(ens,X(end-10:end,:),Y(end-10:end))

E =
    8.3310