Lasso or elastic net regularization for generalized linear model regression
B = lassoglm(X,Y)
B = lassoglm(X,Y,distr)
B = lassoglm(X,Y,distr,Name,Value)
[B,FitInfo] = lassoglm(___)
B = lassoglm(X,Y) returns penalized, maximum-likelihood fitted coefficients for a generalized linear model of the response Y to the data matrix X. The values in Y are assumed to have a Gaussian probability distribution.
B = lassoglm(X,Y,distr) fits the model using the probability distribution type for Y specified in distr.
B = lassoglm(X,Y,distr,Name,Value) fits regularized generalized linear regressions with additional options specified by one or more Name,Value pair arguments.
[B,FitInfo] = lassoglm(___), for any previous input syntax, also returns a structure containing information about the fits.
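As a quick sketch of the syntaxes above (the data here are synthetic and the Poisson choice is purely illustrative):

```matlab
% Synthetic data: 100 observations, 5 predictors (invented for illustration)
rng default                      % for reproducibility
X  = randn(100,5);
mu = exp(X(:,1) - 2*X(:,3));     % only predictors 1 and 3 matter
Y  = poissrnd(mu);               % Poisson-distributed count responses

% Lasso-regularized Poisson regression over a default Lambda sequence
[B,FitInfo] = lassoglm(X,Y,'poisson');

size(B)                          % p-by-L: one coefficient column per Lambda value
```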

X
Numeric matrix with n rows and p columns: each row represents one observation, and each column represents one predictor variable.

Y
Response values. When distr is not 'binomial', Y is a numeric vector of length n. When distr is 'binomial', Y is either a vector of 0s and 1s indicating failure or success of each trial, or a two-column matrix whose first column contains the number of successes for each observation and whose second column contains the number of trials.
distr
Distributional family for the nonsystematic variation in the responses. Choices: 'normal' (default), 'binomial', 'poisson', 'gamma', or 'inverse gaussian'. By default, lassoglm uses the canonical link function corresponding to distr; specify a different link with the 'Link' name-value pair.
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

'Alpha'
Scalar value in the interval (0,1] representing the weight of lasso (L^1) versus ridge (L^2) optimization. Alpha = 1 represents lasso regression, and values approaching 0 approach ridge regression. Default: 1

'CV'
Method lassoglm uses to estimate the deviance of the fits: 'resubstitution' to use the training data, a positive integer K for K-fold cross-validation, or a cvpartition object. Default: 'resubstitution'

'DFmax'
Maximum number of nonzero coefficients in the model. lassoglm returns results only for Lambda values that satisfy this criterion. Default: Inf

'Lambda'
Vector of nonnegative regularization parameter values for the lasso. Default: geometric sequence of NumLambda values, the largest just sufficient to produce B = 0

'LambdaRatio'
Positive scalar, the ratio of the smallest to the largest value of the Lambda sequence when the software generates the sequence automatically. If you set LambdaRatio = 0, lassoglm generates a default sequence of Lambda values and replaces the smallest one with 0. Default: 1e-4

'Link'
Specify the mapping between the mean µ of the response and the linear predictor Xb. Choices include 'identity', 'log', 'logit', 'probit', 'comploglog', 'loglog', 'reciprocal', a scalar p defining a power link, or a custom link specified as a structure or cell array of three function handles (the link function, its derivative, and its inverse). Default: canonical link for distr

'MaxIter'
Maximum number of iterations allowed, specified as a positive integer. If the algorithm executes MaxIter iterations before reaching the convergence tolerance, the function stops iterating and returns a warning message. Default: 1e4

'MCReps'
Positive integer, the number of Monte Carlo repetitions for cross-validation. If CV is 'resubstitution', MCReps must be 1. Default: 1

'NumLambda'
Positive integer, the number of Lambda values the software uses when you do not supply Lambda. lassoglm can return fewer than NumLambda fits if the deviance of the fits drops below a threshold fraction of the null deviance. Default: 100

'Offset'
Numeric vector with the same number of rows as X. lassoglm uses Offset as an additional predictor variable with a coefficient value fixed at 1.

'Options'
Structure that specifies whether to cross-validate in parallel, and specifies the random stream or streams. Create the Options structure with statset; the relevant fields are UseParallel, UseSubstreams, and Streams.

'PredictorNames'
Cell array of character vectors representing names of the predictor variables, in the order in which they appear in X. Default: {}

'RelTol'
Convergence threshold for the coordinate descent algorithm (see Friedman, Tibshirani, and Hastie [3]). The algorithm terminates when successive estimates of the coefficient vector differ in the L^2 norm by a relative amount less than RelTol. Default: 1e-4

'Standardize'
Boolean value specifying whether lassoglm scales X before fitting the model. Default: true

'Weights'
Observation weights, a nonnegative vector of length n, where n is the number of rows of X. lassoglm normalizes the weights to sum to one. Default: 1/n * ones(n,1)
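A sketch combining several of the name-value pairs above with 10-fold cross-validation (the data and the specific parameter values are illustrative assumptions, not recommendations):

```matlab
rng default
X = randn(200,10);
p = 1 ./ (1 + exp(-(X(:,2) - X(:,5))));   % true logistic model
Y = binornd(1,p);                         % binary responses

% Elastic net (Alpha = 0.5), 25 Lambda values, 10-fold cross-validation
[B,FitInfo] = lassoglm(X,Y,'binomial', ...
    'Alpha',0.5,'NumLambda',25,'CV',10);

% Index of the Lambda value minimizing cross-validated deviance,
% and the number of nonzero coefficients at that Lambda
idx = FitInfo.IndexMinDeviance;
nnz(B(:,idx))
```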

B
Fitted coefficients, a p-by-L matrix, where p is the number of predictors (columns) in X and L is the number of Lambda values. You can control L with the NumLambda name-value pair argument.

FitInfo
Structure containing information about the model fits, including the fields Intercept, Lambda, Alpha, DF, and Deviance. If you set the CV name-value pair to cross-validate, FitInfo also contains cross-validation fields such as SE, LambdaMinDeviance, Lambda1SE, IndexMinDeviance, and Index1SE.
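Because B excludes the intercept, a common pattern (sketched here with synthetic data; glmval is the standard companion function for evaluating a generalized linear model) is to prepend the corresponding Intercept entry before predicting:

```matlab
rng default
X = randn(200,10);
p = 1 ./ (1 + exp(-(X(:,2) - X(:,5))));   % true logistic model
Y = binornd(1,p);
[B,FitInfo] = lassoglm(X,Y,'binomial','CV',10);

idx   = FitInfo.Index1SE;                  % sparsest fit within one SE of minimum deviance
coefs = [FitInfo.Intercept(idx); B(:,idx)];% intercept plus slope coefficients
yhat  = glmval(coefs,X,'logit');           % predicted probabilities
```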

[1] Tibshirani, R. “Regression Shrinkage and Selection via the Lasso.” Journal of the Royal Statistical Society. Series B, Vol. 58, No. 1, 1996, pp. 267–288.
[2] Zou, H. and T. Hastie. “Regularization and Variable Selection via the Elastic Net.” Journal of the Royal Statistical Society. Series B, Vol. 67, No. 2, 2005, pp. 301–320.
[3] Friedman, J., R. Tibshirani, and T. Hastie.
“Regularization Paths for Generalized Linear Models via Coordinate
Descent.” Journal of Statistical Software. Vol.
33, No. 1, 2010. http://www.jstatsoft.org/v33/i01
[4] Hastie, T., R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. 2nd edition. New York: Springer, 2008.
[5] Dobson, A. J. An Introduction to Generalized Linear Models. 2nd edition. New York: Chapman & Hall/CRC Press, 2002.
[6] McCullagh, P., and J. A. Nelder. Generalized Linear Models. 2nd edition. New York: Chapman & Hall/CRC Press, 1989.
[7] Collett, D. Modelling Binary Data, 2nd edition. New York: Chapman & Hall/CRC Press, 2003.