Regularized least-squares regression using lasso or elastic net algorithms
B = lasso(X,Y)
[B,FitInfo] = lasso(X,Y)
[B,FitInfo] = lasso(X,Y,Name,Value)

B = lasso(X,Y) returns fitted least-squares regression coefficients for a set of regularization coefficients Lambda.

[B,FitInfo] = lasso(X,Y) also returns a structure containing information about the fits.

[B,FitInfo] = lasso(X,Y,Name,Value) fits regularized regressions with additional options specified by one or more Name,Value pair arguments.
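As a hedged illustration (the data are synthetic and the variable names are arbitrary; the call assumes the Statistics Toolbox lasso function is on the path), a basic fit looks like this:

```matlab
% Synthetic data: 100 observations, 5 predictors; only the first
% two predictors actually influence the response.
rng(0);                          % reproducible random numbers
X = randn(100,5);
Y = 2*X(:,1) - 3*X(:,2) + 0.5*randn(100,1);

% Fit lasso models over a default sequence of Lambda values.
[B,FitInfo] = lasso(X,Y);

% B holds one column of coefficients per Lambda value; larger Lambda
% values drive more coefficients exactly to zero.
size(B)                          % p-by-L, here 5-by-numel(FitInfo.Lambda)
```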

X — Numeric matrix with n rows, one row per observation, and p columns, one column per predictor variable.

Y — Numeric vector of length n, where n is the number of rows of X.
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.
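For example (a sketch — the data and the option values shown are invented for illustration, not defaults):

```matlab
% Invented data for the illustration.
rng(1);
X = randn(50,4);
Y = X*[1; 0; -2; 0] + 0.1*randn(50,1);

% Elastic net with equal L1/L2 weighting, 10-fold cross validation,
% and standardization disabled.
[B,FitInfo] = lasso(X,Y, ...
    'Alpha',0.5, ...             % elastic net mixing parameter
    'CV',10, ...                 % 10-fold cross validation
    'Standardize',false);
```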

Alpha — Scalar value from 0 to 1 (excluding 0) representing the weight of lasso (L1) versus ridge (L2) optimization. Alpha = 1 represents lasso regression, and Alpha close to 0 approaches ridge regression. Default: 1
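For reference, the objective that Alpha weights can be written (in the notation of Zou and Hastie [2]; this summary is added here, not part of the original entry) as:

```latex
\min_{\beta_0,\,\beta}\;\frac{1}{2N}\sum_{i=1}^{N}\bigl(y_i-\beta_0-x_i^{T}\beta\bigr)^2
+\lambda\,P_\alpha(\beta),
\qquad
P_\alpha(\beta)=\frac{1-\alpha}{2}\,\|\beta\|_2^2+\alpha\,\|\beta\|_1
```

With Alpha = 1 the penalty reduces to the lasso (pure L1); as Alpha approaches 0 it approaches the ridge (pure L2) penalty.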

CV — Method lasso uses to estimate the mean squared error. Choices are 'resubstitution' (compute the error using X and Y), a positive integer K (use K-fold cross validation), or a cvpartition object. Default: 'resubstitution'

DFmax — Maximum number of nonzero coefficients in the model. lasso returns results only for Lambda values that satisfy this criterion. Default: Inf

Lambda — Vector of nonnegative regularization values. lasso fits one model per Lambda value. Default: Geometric sequence of NumLambda values, the largest just sufficient to produce B = 0

LambdaRatio — Positive scalar, the ratio of the smallest to the largest Lambda value when you do not supply Lambda. If you set LambdaRatio = 0, lasso generates a default sequence of Lambda values. Default: 1e-4
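The interaction of NumLambda and LambdaRatio can be sketched in plain MATLAB (lambdaMax below is a placeholder for the data-dependent largest useful Lambda; the geometric construction is an assumption based on the description above, not the toolbox source):

```matlab
% Hypothetical illustration of a geometric Lambda sequence.
lambdaMax   = 1.5;     % placeholder for the data-dependent maximum
lambdaRatio = 1e-4;    % default LambdaRatio
numLambda   = 100;     % default NumLambda

% Geometric sequence from lambdaMax down to lambdaRatio*lambdaMax.
lambda = lambdaMax * lambdaRatio.^(linspace(0,1,numLambda));

% The endpoints match the ratio specification:
% lambda(end)/lambda(1) equals lambdaRatio.
```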

MCReps — Positive integer, the number of Monte Carlo repetitions for cross validation. Default: 1

NumLambda — Positive integer, the number of Lambda values lasso uses when you do not supply Lambda. Default: 100

Options — Structure that specifies whether to cross validate in parallel, and specifies the random stream or streams. Create the Options structure with statset. Default: Options.UseParallel = false

PredictorNames — Cell array of strings representing names of the predictor variables, in the order in which they appear in X. Default: {}

RelTol — Convergence threshold for the coordinate descent algorithm (see Friedman, Tibshirani, and Hastie [3]). The algorithm terminates when successive estimates of the coefficient vector differ in the L2 norm by a relative amount less than RelTol. Default: 1e-4
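To make the stopping rule concrete, here is a minimal pure-MATLAB sketch of cyclic coordinate descent with soft thresholding for the lasso (Alpha = 1) case, in the spirit of Friedman, Tibshirani, and Hastie [3]. It illustrates the RelTol-style convergence test; it is not the toolbox implementation, and the function names are invented:

```matlab
function b = lassoCoordDescent(X, y, lambda, relTol)
% Minimal lasso coordinate descent sketch (Alpha = 1, no intercept).
% Invented helper for illustration only.
[n, p] = size(X);
b = zeros(p,1);
r = y;                                % residual y - X*b
for iter = 1:1000
    bOld = b;
    for j = 1:p
        % Partial-residual statistic for coordinate j.
        z = X(:,j)'*r/n + (X(:,j)'*X(:,j)/n)*b(j);
        bNew = softThreshold(z, lambda) / (X(:,j)'*X(:,j)/n);
        r = r - X(:,j)*(bNew - b(j));  % update residual incrementally
        b(j) = bNew;
    end
    % Terminate when successive coefficient estimates differ in the
    % L2 norm by a relative amount less than relTol.
    if norm(b - bOld) <= relTol * max(norm(bOld), 1)
        break
    end
end
end

function s = softThreshold(z, gamma)
% Soft-thresholding operator: sign(z)*max(|z| - gamma, 0).
s = sign(z) .* max(abs(z) - gamma, 0);
end
```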

Standardize — Boolean value specifying whether lasso scales X before fitting the models. Default: true

Weights — Observation weights, a nonnegative vector of length n, where n is the number of rows of X. lasso scales Weights to sum to 1. Default: 1/n for each observation

B — Fitted coefficients, a p-by-L matrix, where p is the number of predictors (columns) in X and L is the number of Lambda values.

FitInfo — Structure containing information about the model fits, including the Lambda values, intercepts, degrees of freedom, and mean squared error of each fit. If you set the CV name-value pair to cross validate, FitInfo contains additional cross-validation fields, such as the index of the Lambda value with minimal cross-validated mean squared error.
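As a hedged sketch of how the outputs are typically used together with cross validation (the field names IndexMinMSE and Intercept follow the MathWorks documentation for lasso; verify them against your release):

```matlab
% Invented data for the illustration.
rng(2);
X = randn(80,6);
Y = X*[3; 0; 0; -1; 0; 0] + 0.2*randn(80,1);

% Cross validate and pick the Lambda with minimal estimated MSE.
[B,FitInfo] = lasso(X,Y,'CV',10);

idx    = FitInfo.IndexMinMSE;      % column of B minimizing CV MSE
bestB  = B(:,idx);                 % coefficients at that Lambda
bestB0 = FitInfo.Intercept(idx);   % matching intercept
yhat   = X*bestB + bestB0;         % predictions from the chosen fit
```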

[1] Tibshirani, R. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, Vol. 58, No. 1, pp. 267–288, 1996.
[2] Zou, H. and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, Series B, Vol. 67, No. 2, pp. 301–320, 2005.
[3] Friedman, J., R. Tibshirani, and T. Hastie. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, Vol. 33, No. 1, 2010. http://www.jstatsoft.org/v33/i01
[4] Hastie, T., R. Tibshirani, and J. Friedman. The Elements of Statistical Learning, 2nd edition. Springer, New York, 2008.