ens1 = regularize(ens)
ens1 = regularize(ens,Name,Value)

ens1 = regularize(ens) finds optimal weights for learners in ens by lasso regularization. regularize returns a regression ensemble identical to ens, but with a populated Regularization property.

ens1 = regularize(ens,Name,Value) computes optimal weights with additional options specified by one or more Name,Value pair arguments. You can specify several name-value pair arguments in any order as Name1,Value1,…,NameN,ValueN.
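For instance, the two documented name-value pairs can be combined in one call. This is only an illustrative sketch: ens is assumed to be an existing regression ensemble, and the lambda values are arbitrary.

```matlab
% Illustrative call only: ens is an assumed, previously created
% regression ensemble, and the lambda values are arbitrary examples.
% 'lambda' and 'verbose' are the pairs described below.
ens1 = regularize(ens,'verbose',1,'lambda',[0 0.001 0.01 0.1]);
```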

A regression ensemble, created by fitensemble.
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

'lambda'

Vector of nonnegative regularization parameter values for lasso.

Default: 

Maximal number of passes for lasso optimization, a positive integer. Default: 

Relative tolerance on the regularized loss for lasso, a numeric positive scalar. Default: 

'verbose'

Verbosity level, either 0 or 1.

Default: 

A regression ensemble. Usually you set ens1 to the same ensemble as ens.
The lasso algorithm finds an optimal set of learner weights α_{t} that minimize

$\sum_{n=1}^{N} w_{n}\, g\left(\left(\sum_{t=1}^{T} \alpha_{t} h_{t}\left(x_{n}\right)\right), y_{n}\right) + \lambda \sum_{t=1}^{T} \left|\alpha_{t}\right|$
Here λ ≥ 0 is a parameter you provide, called the lasso parameter.
h_{t} is a weak learner in the ensemble trained on N observations with predictors x_{n}, responses y_{n}, and weights w_{n}.
g(f,y) = (f – y)^{2} is the squared error.
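The penalized objective above can be written directly as a MATLAB anonymous function. This is only a sketch of the quantity regularize minimizes, not its internal implementation; the variable names are assumptions for illustration.

```matlab
% Sketch of the lasso objective (not the internal implementation).
% H is an assumed N-by-T matrix of learner predictions h_t(x_n),
% w the N-by-1 observation weights, y the N-by-1 responses,
% alpha the T-by-1 learner weights, lambda the lasso parameter.
lassoObjective = @(alpha,H,w,y,lambda) ...
    sum(w .* (H*alpha - y).^2) + lambda*sum(abs(alpha));
```

Larger values of lambda push more entries of alpha to exactly zero, which is what allows the ensemble to be shrunk afterward.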
Regularize an ensemble of bagged trees:
X = rand(2000,20);
Y = repmat(-1,2000,1);
Y(sum(X(:,1:5),2)>2.5) = 1;
bag = fitensemble(X,Y,'Bag',300,'Tree',...
    'type','regression');
bag = regularize(bag,'lambda',[0.001 0.1],'verbose',1);
regularize
reports on its progress.
To see the resulting regularization structure:
bag.Regularization

ans = 

               Method: 'Lasso'
       TrainedWeights: [300x2 double]
               Lambda: [1.0000e-03 0.1000]
    ResubstitutionMSE: [0.0616 0.0812]
       CombineWeights: @classreg.learning.combiner.WeightedSum
See how many learners in the regularized ensemble have positive weights (so would be included in a shrunken ensemble):
sum(bag.Regularization.TrainedWeights > 0)

ans =

   116    91
To shrink the ensemble using the weights from Lambda = 0.1:
cmp = shrink(bag,'weightcolumn',2)

cmp = 
classreg.learning.regr.CompactRegressionEnsemble:
           PredictorNames: {1x20 cell}
    CategoricalPredictors: []
             ResponseName: 'Y'
        ResponseTransform: 'none'
               NumTrained: 91
There are 91 members in the regularized ensemble, which is less than 1/3 of the original 300.