vals = cvshrink(ens)
[vals,nlearn] = cvshrink(ens)
[vals,nlearn] = cvshrink(ens,Name,Value)
vals = cvshrink(ens) returns an L-by-T matrix with cross-validated values of the mean squared error. L is the number of lambda values in the ens.Regularization structure. T is the number of threshold values on weak learner weights. If ens does not have a Regularization property filled in by the regularize method, pass a lambda name-value pair.
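For instance, assuming ens is a regression ensemble and using an illustrative lambda grid (not taken from the original page), either of the following sketches supplies the lambda values:

```matlab
% Two ways to supply lambda values (grid is illustrative):
ens = regularize(ens,'lambda',[.01 .1 1]);  % fills ens.Regularization
vals = cvshrink(ens);                       % uses the stored lambda values

vals = cvshrink(ens,'lambda',[.01 .1 1]);   % or pass lambda directly
```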
[vals,nlearn] = cvshrink(ens) returns an L-by-T matrix of the mean number of learners in the cross-validated ensemble.
[vals,nlearn] = cvshrink(ens,Name,Value) cross-validates with additional options specified by one or more Name,Value pair arguments. You can specify several name-value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

ens: A regression ensemble, created with fitensemble.
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

'cvpartition': A partition created with cvpartition.

'holdout': Holdout validation tests the specified fraction of the data, and uses the rest of the data for training. Specify a numeric scalar from 0 to 1.

'kfold': Number of folds to use in a cross-validated tree, a positive integer. If you do not supply a cross-validation method, cvshrink uses 10-fold cross validation. Default: 10

'lambda': Vector of nonnegative regularization parameter values for lasso. If empty, cvshrink does not perform cross-validation. Default: []

'leaveout': Use leave-one-out cross validation by setting to 'on'.

'threshold': Numeric vector with lower cutoffs on weights for weak learners. Default: 0
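As a sketch of how these options combine (the lambda grid here is illustrative, and only one cross-validation option should be supplied per call):

```matlab
vals = cvshrink(ens,'lambda',[.01 .1 1],'kfold',5);       % 5-fold
vals = cvshrink(ens,'lambda',[.01 .1 1],'holdout',0.3);   % 30% holdout
vals = cvshrink(ens,'lambda',[.01 .1 1],'leaveout','on'); % leave-one-out
```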




Create a regression ensemble for predicting mileage from the carsmall data. Cross validate the ensemble for three values each of lambda and threshold.

load carsmall
X = [Displacement Horsepower Weight];
ens = fitensemble(X,MPG,'bag',100,'Tree',...
    'type','regression');
[vals nlearn] = cvshrink(ens,'lambda',[.01 .1 1],...
    'threshold',[0 .01 .1])

vals =
   20.0949   19.9007  131.6316
   20.0924   19.8431  128.0989
   19.9759   19.7987  119.5574

nlearn =
   13.3000   11.6000    3.5000
   13.2000   11.5000    3.6000
   13.4000   11.4000    3.9000
Each row of vals and nlearn corresponds to a lambda value, and each column to a threshold value. Clearly, setting a threshold of 0.1 leads to unacceptable errors, while a threshold of 0.01 gives errors similar to those at a threshold of 0. The mean number of learners with a threshold of 0.01 is about 11.5, whereas the mean number is about 13.2 when the threshold is 0.
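One way to act on these results is to select the grid point with the smallest cross-validated error. This follow-up is a sketch, not part of the original example; it assumes vals and the lambda/threshold grids from the call above:

```matlab
lambda = [.01 .1 1];
threshold = [0 .01 .1];
[minErr,idx] = min(vals(:));        % smallest cross-validated MSE
[iL,iT] = ind2sub(size(vals),idx);  % recover row (lambda) and column (threshold)
bestLambda = lambda(iL);
bestThreshold = threshold(iT);
```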