L = loss(tree,X,Y)
[L,se] = loss(tree,X,Y)
[L,se,NLeaf] = loss(tree,X,Y)
[L,se,NLeaf,bestlevel] = loss(tree,X,Y)
L = loss(tree,X,Y,Name,Value)
L = loss(tree,X,Y) returns the mean squared error of the predictions of tree applied to the data in X, compared to the true responses Y.

[L,se] = loss(tree,X,Y) also returns the standard error of the loss.

[L,se,NLeaf] = loss(tree,X,Y) also returns the number of leaves (terminal nodes) in the tree.

[L,se,NLeaf,bestlevel] = loss(tree,X,Y) also returns the optimal pruning level for tree.

L = loss(tree,X,Y,Name,Value) computes the error in prediction with additional options specified by one or more Name,Value pair arguments.

tree
Regression tree created with fitrtree.

X
A matrix of predictor values. Each column of X represents one variable, and each row represents one observation.

Y
A numeric column vector with the same number of rows as X, containing the true responses.
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

'LossFun'
Function handle for loss, or the string 'mse' (mean squared error). If you pass a function handle fun, loss calls it as fun(Y,Yfit,W), where Y is the observed response, Yfit is the predicted response, and W is the observation weights. All the vectors have the same number of rows as Y.
Default: 'mse'
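A custom loss function takes the observed responses, the fitted responses, and the observation weights, and returns a scalar. As an illustration of that fun(Y,Yfit,W) contract (a Python sketch, not MATLAB; the weighted mean absolute error here is a hypothetical stand-in for a custom loss):

```python
def weighted_mae(y, yfit, w):
    """Scalar loss in the fun(Y,Yfit,W) shape: observed responses y,
    predicted responses yfit, and observation weights w."""
    if not (len(y) == len(yfit) == len(w)):
        raise ValueError("y, yfit, and w must have the same length")
    # Weighted mean absolute error: sum(w * |y - yfit|) / sum(w).
    total = sum(wi * abs(yi - fi) for yi, fi, wi in zip(y, yfit, w))
    return total / sum(w)
```

In MATLAB, the analogous function handle would be supplied as something like loss(tree,X,Y,'LossFun',@fun).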

'Subtrees'
A vector of nonnegative integers in ascending order, or 'all'. If you specify a vector, then all elements must be at least 0 and at most max(tree.PruneList). If you specify 'all', loss operates on all pruning levels, equivalent to 0:max(tree.PruneList). To invoke 'Subtrees', the PruneList property of tree must be nonempty, so grow tree with pruning enabled or prune tree with the prune method.
Default: 0

'TreeSize'
A string, either:
'se': loss returns bestlevel corresponding to the smallest tree whose mean squared error is within one standard error of the minimum.
'min': loss returns bestlevel corresponding to the tree with the minimal mean squared error.
Default: 'se'


'Weights'
Numeric vector of observation weights with the same number of elements as Y.
Default: ones(size(Y))

L
Mean squared error of the predictions (the loss), a vector the length of Subtrees.

se
Standard error of loss, a vector the length of Subtrees.

NLeaf
Number of leaves (terminal nodes) in the pruned subtrees, a vector the length of Subtrees.

bestlevel
A scalar whose value depends on TreeSize:
'se': bestlevel is the highest pruning level whose loss is within one standard error of the minimum.
'min': bestlevel is the element of Subtrees with the smallest loss, usually the smallest element of Subtrees.

The mean squared error m of the predictions f(X_n) with weight vector w is
$$m=\frac{{\displaystyle \sum_{n} {w}_{n}{\left(f\left({X}_{n}\right)-{Y}_{n}\right)}^{2}}}{{\displaystyle \sum_{n} {w}_{n}}}.$$
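The definition above translates directly into code. A minimal sketch (Python purely as illustration; the function name is my own):

```python
def weighted_mse(y, f, w):
    """Weighted mean squared error m = sum(w_n * (f_n - y_n)^2) / sum(w_n),
    where f holds the predictions f(X_n) and y the true responses Y_n."""
    num = sum(wi * (fi - yi) ** 2 for yi, fi, wi in zip(y, f, w))
    return num / sum(w)
```

With unit weights this reduces to the ordinary (unweighted) mean squared error.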