Lasso or elastic net regularization for generalized linear model regression
B = lassoglm(X,Y)
[B,FitInfo] = lassoglm(X,Y)
[B,FitInfo] = lassoglm(X,Y,distr)
[B,FitInfo] = lassoglm(X,Y,distr,Name,Value)

B = lassoglm(X,Y) returns penalized maximum-likelihood fitted coefficients for a generalized linear model of the response Y to the data matrix X. The responses Y are assumed to have a Gaussian probability distribution.

[B,FitInfo] = lassoglm(X,Y) also returns a structure containing information about the fits.

[B,FitInfo] = lassoglm(X,Y,distr) fits the model using the probability distribution type for Y as specified in distr.

[B,FitInfo] = lassoglm(X,Y,distr,Name,Value) fits regularized generalized linear regressions with additional options specified by one or more Name,Value pair arguments.
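As a minimal sketch of the default call (the data below are synthetic and for illustration only):

```matlab
% Minimal sketch of the default (Gaussian) fit; synthetic data.
rng('default')                        % reproducible random numbers
X = randn(100,5);                     % 100 observations, 5 predictors
Y = X*[2;0;0;-3;0] + randn(100,1);    % sparse true model plus noise
[B,FitInfo] = lassoglm(X,Y);          % distr defaults to 'normal'
% B has one column of fitted coefficients per Lambda value
```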

X

Numeric matrix with n rows and p columns, where n is the number of observations and p is the number of predictors.

Y

When distr is not 'binomial', Y is a numeric vector of length n, where n is the number of rows of X. When distr is 'binomial', Y is either a numeric vector of length n with entries 0 (failure) or 1 (success), or a two-column matrix whose first column gives the number of successes and whose second column gives the number of trials.
distr

Distributional family for the nonsystematic variation in the responses, a string. Choices:

'normal'
'binomial'
'poisson'
'gamma'
'inverse gaussian'

By default, lassoglm uses the canonical link function corresponding to distr. Specify a different link with the Link name-value pair.
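A sketch of a binomial (logistic) fit, with synthetic 0/1 responses for illustration only:

```matlab
% Sketch: binomial (logistic) regression with a lasso penalty.
rng('default')
X = randn(200,3);
p = 1./(1 + exp(-X*[1;0;-2]));        % true success probabilities
Y = double(rand(200,1) < p);          % 0/1 Bernoulli responses
[B,FitInfo] = lassoglm(X,Y,'binomial');
```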
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.
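For example, a sketch combining two common pairs — an elastic net mixing weight and K-fold cross validation (synthetic data, illustrative only):

```matlab
% Sketch: elastic net (Alpha = 0.5) with 10-fold cross validation,
% passed as Name,Value pairs; synthetic data.
rng('default')
X = randn(100,5);
Y = X*[2;0;0;-3;0] + randn(100,1);
[B,FitInfo] = lassoglm(X,Y,'normal','Alpha',0.5,'CV',10);
```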

Alpha

Scalar value from 0 to 1 (excluding 0) representing the weight of lasso (L^1) versus ridge (L^2) optimization. Alpha = 1 represents lasso regression, and Alpha close to 0 approaches ridge regression.

Default: 1

CV

Method lassoglm uses to estimate the deviance of the fits. When CV is a positive integer K, lassoglm uses K-fold cross validation. Set CV to a cvpartition object to use other forms of cross validation. When CV is 'resubstitution', lassoglm uses X and Y both to fit the model and to estimate the deviance, without cross validation.

Default: 'resubstitution'

DFmax

Maximum number of nonzero coefficients in the model. lassoglm returns results only for Lambda values that satisfy this criterion.

Default: Inf

Lambda

Vector of nonnegative regularization parameter values for the lasso penalty. By default, lassoglm generates a sequence of Lambda values based on the data (see NumLambda and LambdaRatio).

Default: Geometric sequence of NumLambda values, the largest just sufficient to produce all-zero coefficients B

LambdaRatio

Positive scalar, the ratio of the smallest to the largest value in the automatically generated Lambda sequence. If you set LambdaRatio = 0, lassoglm generates a default sequence of Lambda values and replaces the smallest one with 0.

Default: 1e-4
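A sketch of the two ways to control the penalty sequence — tuning the automatic sequence, or supplying one explicitly (synthetic data, illustrative only):

```matlab
% Sketch: controlling the Lambda sequence; synthetic data.
rng('default')
X = randn(100,5);
Y = X*[2;0;0;-3;0] + randn(100,1);
% shorter, shallower automatic sequence:
[B1,Info1] = lassoglm(X,Y,'normal','NumLambda',25,'LambdaRatio',1e-3);
% or an explicit penalty sequence:
[B2,Info2] = lassoglm(X,Y,'normal','Lambda',logspace(-3,0,20));
```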

Link

Specify the mapping between the mean µ of the response and the linear predictor Xb. Choices include 'identity', 'log', 'logit', 'probit', 'comploglog', 'reciprocal', 'loglog', a positive scalar p (defining the power link µ^p = Xb), or a cell array of three function handles defining the link, its derivative, and its inverse.

Default: the canonical link corresponding to distr

MCReps

Positive integer, the number of Monte Carlo repetitions for cross validation. If CV is 'resubstitution', MCReps must be 1.

Default: 1

NumLambda

Positive integer, the number of Lambda values lassoglm uses when you do not supply Lambda explicitly. lassoglm can return fewer fits than NumLambda if the deviance of the fits drops sufficiently low.

Default: 100

Offset

Numeric vector with the same number of rows as X. lassoglm uses Offset as an additional predictor variable with a coefficient value fixed at 1.

Options

Structure that specifies whether to cross validate in parallel, and specifies the random stream or streams. Create the Options structure with statset. Applicable fields are UseParallel, UseSubstreams, and Streams.
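A sketch of parallel cross validation (assumes Parallel Computing Toolbox is available; synthetic data, illustrative only):

```matlab
% Sketch: cross validating in parallel via an Options structure
% created with statset; requires Parallel Computing Toolbox.
rng('default')
X = randn(100,5);
Y = X*[2;0;0;-3;0] + randn(100,1);
opts = statset('UseParallel',true);
[B,FitInfo] = lassoglm(X,Y,'normal','CV',5,'Options',opts);
```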

PredictorNames

Cell array of strings representing names of the predictor variables, in the order in which they appear in X.

Default: {}

RelTol

Convergence threshold for the coordinate descent algorithm (see Friedman, Tibshirani, and Hastie [3]). The algorithm terminates when successive estimates of the coefficient vector differ in the L^2 norm by a relative amount less than RelTol.

Default: 1e-4

Standardize

Boolean value specifying whether lassoglm scales X before fitting the models.

Default: true

Weights

Observation weights, a nonnegative vector of length n, where n is the number of rows of X. lassoglm normalizes the weights to sum to 1.

Default: 1/n * ones(n,1)

B

Fitted coefficients, a p-by-L matrix, where p is the number of predictors and L is the number of Lambda values. The intercept terms are not included in B; they appear in FitInfo.Intercept.

FitInfo

Structure containing information about the model fits, with fields Intercept, Lambda, Alpha, DF, and Deviance. If you set the CV name-value pair to perform cross validation, FitInfo also contains the fields SE, LambdaMinDeviance, Lambda1SE, IndexMinDeviance, and Index1SE.
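A sketch of using the cross-validation fields to select a fit and predict; the specific FitInfo fields and the use of glmval here are assumptions, and Xnew is hypothetical new data:

```matlab
% Sketch: pick the fit at the largest Lambda within one standard error
% of the minimum cross-validated deviance, then predict probabilities.
rng('default')
X = randn(200,3);
p = 1./(1 + exp(-X*[1;0;-2]));
Y = double(rand(200,1) < p);
[B,FitInfo] = lassoglm(X,Y,'binomial','CV',10);
idx  = FitInfo.Index1SE;                    % column index of chosen fit
coef = [FitInfo.Intercept(idx); B(:,idx)];  % intercept plus slopes
Xnew = randn(5,3);                          % hypothetical new data
yhat = glmval(coef,Xnew,'logit');           % predicted probabilities
```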

[1] Tibshirani, R. Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society, Series B, Vol. 58, No. 1, pp. 267–288, 1996.
[2] Zou, H. and T. Hastie. Regularization and Variable Selection via the Elastic Net. Journal of the Royal Statistical Society, Series B, Vol. 67, No. 2, pp. 301–320, 2005.
[3] Friedman, J., R. Tibshirani, and T. Hastie. Regularization
Paths for Generalized Linear Models via Coordinate Descent. Journal
of Statistical Software, Vol. 33, No. 1, 2010. http://www.jstatsoft.org/v33/i01
[4] Hastie, T., R. Tibshirani, and J. Friedman. The Elements of Statistical Learning, 2nd edition. Springer, New York, 2008.
[5] Dobson, A. J. An Introduction to Generalized Linear Models, 2nd edition. Chapman & Hall/CRC Press, New York, 2002.
[6] McCullagh, P., and J. A. Nelder. Generalized Linear Models, 2nd edition. Chapman & Hall/CRC Press, New York, 1989.
[7] Collett, D. Modelling Binary Data, 2nd edition. Chapman & Hall/CRC Press, New York, 2003.