LinearModel.stepwise
Class: LinearModel
Create linear regression model by stepwise regression
LinearModel.stepwise will be removed in a future release. Use stepwiselm instead.
Syntax
mdl = LinearModel.stepwise(tbl,modelspec)
mdl = LinearModel.stepwise(X,y,modelspec)
mdl = LinearModel.stepwise(___,modelspec,Name,Value)
Description
mdl = LinearModel.stepwise(tbl,modelspec) returns a linear model of a table or dataset array tbl, using stepwise regression to add or remove predictors. modelspec is the starting model for the stepwise procedure.
mdl = LinearModel.stepwise(X,y,modelspec) creates a linear model of the responses y to a data matrix X, using stepwise regression to add or remove predictors. modelspec is the starting model for the stepwise procedure.
mdl = LinearModel.stepwise(___,modelspec,Name,Value) creates a linear model for any of the inputs in the previous syntaxes, with additional options specified by one or more Name,Value pair arguments. For example, you can specify the categorical variables, the smallest or largest set of terms to use in the model, the maximum number of steps to take, or the criterion LinearModel.stepwise uses to add or remove terms.
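As a quick illustration of the syntax, the following sketch fits a stepwise model to randomly generated data; the data and options here are illustrative only, and the recommended stepwiselm call shown last takes the same arguments.

```matlab
% Sketch: stepwise fit starting from a constant model (illustrative data).
rng(1);                                   % for reproducibility
X = randn(100,3);                         % three predictors
y = 2*X(:,1) - X(:,2) + randn(100,1);     % response depends on x1 and x2 only

% Deprecated syntax (to be removed in a future release):
mdl = LinearModel.stepwise(X,y,'constant','Upper','linear');

% Recommended equivalent:
mdl = stepwiselm(X,y,'constant','Upper','linear');
```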
Tips
You cannot use robust regression with stepwise regression. Check your data for outliers before using LinearModel.stepwise.
For other methods or properties of the LinearModel object, see LinearModel.
Input Arguments
Input data, specified as a table or dataset array. When modelspec is a formula, it specifies the variables to be used as the predictors and response. Otherwise, if you do not specify the predictor and response variables, the last variable is the response variable and the others are the predictor variables by default.
Predictor variables can be numeric, or any grouping variable type, such as logical or categorical (see Grouping Variables). The response must be numeric or logical.
To set a different column as the response variable, use the ResponseVar name-value pair argument. To use a subset of the columns as predictors, use the PredictorVars name-value pair argument.
Data Types: single | double | logical
Predictor variables, specified as an n-by-p matrix, where n is the number of observations and p is the number of predictor variables. Each column of X represents one variable, and each row represents one observation.
By default, there is a constant term in the model, unless you explicitly remove it, so do not include a column of 1s in X.
Data Types: single | double | logical
Response variable, specified as an n-by-1 vector, where n is the number of observations. Each entry in y is the response for the corresponding row of X.
Data Types: single | double
Starting model for the stepwise regression, specified as one of the following:

A character vector specifying the type of starting model:

'constant' — Model contains only a constant (intercept) term.
'linear' — Model contains an intercept and linear terms for each predictor.
'interactions' — Model contains an intercept, linear terms, and all products of pairs of distinct predictors (no squared terms).
'purequadratic' — Model contains an intercept, linear terms, and squared terms.
'quadratic' — Model contains an intercept, linear terms, interactions, and squared terms.
'polyijk' — Model is a polynomial with all terms up to degree i in the first predictor, degree j in the second predictor, and so on. Use the numerals 0 through 9. For example, 'poly2111' has a constant plus all linear and product terms, and also contains terms with predictor 1 squared.

If you want to specify the smallest or largest set of terms in the model, use the Lower and Upper name-value pair arguments.

A t-by-(p + 1) matrix, namely a terms matrix, specifying terms to include in the model, where t is the number of terms, p is the number of predictor variables, and + 1 accounts for the response variable.

A character vector representing a formula in the form 'Y ~ terms', where the terms are in Wilkinson Notation.
Name-Value Pair Arguments
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.
Categorical variables in the fit, specified as the comma-separated pair consisting of 'CategoricalVars' and either a cell array of character vectors of the names of the categorical variables in the table or dataset array tbl, or a logical or numeric index vector indicating which columns are categorical.
If data is in a table or dataset array tbl, then the default is to treat all categorical or logical variables, character arrays, or cell arrays of character vectors as categorical variables.
If data is in matrix X, then the default value of this name-value pair argument is an empty matrix []. That is, no variable is categorical unless you specify it.
For example, you can specify the second and third variables out of six as categorical using either of the following examples.
Example: 'CategoricalVars',[2,3]
Example: 'CategoricalVars',logical([0 1 1 0 0 0])
Data Types: single | double | logical
Criterion to add or remove terms, specified as the comma-separated pair consisting of 'Criterion' and one of the following:
'sse' — Default for stepwiselm. p-value for an F-test of the change in the sum of squared error by adding or removing the term.
'aic' — Change in the value of the Akaike information criterion (AIC).
'bic' — Change in the value of the Bayesian information criterion (BIC).
'rsquared' — Increase in the value of R^2.
'adjrsquared' — Increase in the value of adjusted R^2.
Example: 'Criterion','bic'
Observations to exclude from the fit, specified as the comma-separated pair consisting of 'Exclude' and a logical or numeric index vector indicating which observations to exclude from the fit.
For example, you can exclude observations 2 and 3 out of 6 using either of the following examples.
Example: 'Exclude',[2,3]
Example: 'Exclude',logical([0 1 1 0 0 0])
Data Types: single | double | logical
Indicator for the constant term (intercept) in the fit, specified as the comma-separated pair consisting of 'Intercept' and either true to include or false to remove the constant term from the model.
Use 'Intercept' only when specifying the model using a character vector, not a formula or matrix.
Example: 'Intercept',false
Model specification describing terms that cannot be removed from the model, specified as the comma-separated pair consisting of 'Lower' and one of the options for modelspec naming the model.
Example: 'Lower','linear'
Number of steps to take, specified as the comma-separated pair consisting of 'NSteps' and a positive integer.
Data Types: single | double
Improvement measure for adding a term, specified as the comma-separated pair consisting of 'PEnter' and a scalar value. The default value depends on the 'Criterion' value:

'Deviance' — Default 0.05. If the p-value of the F or chi-squared statistic is smaller than PEnter, add the term to the model.
'SSE' — Default 0.05. If the p-value of the F statistic is smaller than PEnter, add the term to the model.
'AIC' — Default 0. If the change in the AIC of the model is smaller than PEnter, add the term to the model.
'BIC' — Default 0. If the change in the BIC of the model is smaller than PEnter, add the term to the model.
'Rsquared' — Default 0.1. If the increase in the R-squared of the model is larger than PEnter, add the term to the model.
'AdjRsquared' — Default 0. If the increase in the adjusted R-squared of the model is larger than PEnter, add the term to the model.

For more information on the criteria, see the Criterion name-value pair argument.
Example: 'PEnter',0.075
Predictor variables to use in the fit, specified as the comma-separated pair consisting of 'PredictorVars' and either a cell array of character vectors of the variable names in the table or dataset array tbl, or a logical or numeric index vector indicating which columns are predictor variables.
The character vectors should be among the names in tbl, or the names you specify using the 'VarNames' name-value pair argument.
The default is all variables in X, or all variables in tbl except for ResponseVar.
For example, you can specify the second and third variables as the predictor variables using either of the following examples.
Example: 'PredictorVars',[2,3]
Example: 'PredictorVars',logical([0 1 1 0 0 0])
Data Types: single | double | logical | cell
Improvement measure for removing a term, specified as the comma-separated pair consisting of 'PRemove' and a scalar value. The default value depends on the 'Criterion' value:

'Deviance' — Default 0.10. If the p-value of the F or chi-squared statistic is larger than PRemove, remove the term from the model.
'SSE' — Default 0.10. If the p-value of the F statistic is larger than PRemove, remove the term from the model.
'AIC' — Default 0.01. If the change in the AIC of the model is larger than PRemove, remove the term from the model.
'BIC' — Default 0.01. If the change in the BIC of the model is larger than PRemove, remove the term from the model.
'Rsquared' — Default 0.05. If the increase in the R-squared value of the model is smaller than PRemove, remove the term from the model.
'AdjRsquared' — Default 0.05. If the increase in the adjusted R-squared value of the model is smaller than PRemove, remove the term from the model.

At each step, the stepwise algorithm also checks whether any term is redundant (linearly dependent) with the other terms in the current model. Any term that is linearly dependent on the other terms in the current model is removed, regardless of the criterion value.
For more information on the criteria, see the Criterion name-value pair argument.
Example: 'PRemove',0.05
Response variable to use in the fit, specified as the comma-separated pair consisting of 'ResponseVar' and either a character vector containing the variable name in the table or dataset array tbl, or a logical or numeric index vector indicating which column is the response variable. You typically need to use 'ResponseVar' when fitting a table or dataset array tbl.
For example, you can specify the fourth variable, say yield, as the response out of six variables, in one of the following ways.
Example: 'ResponseVar','yield'
Example: 'ResponseVar',[4]
Example: 'ResponseVar',logical([0 0 0 1 0 0])
Data Types: single | double | logical | char
Model specification describing the largest set of terms in the fit, specified as the comma-separated pair consisting of 'Upper' and one of the character vector options for modelspec naming the model.
Example: 'Upper','quadratic'
Names of variables in the fit, specified as the comma-separated pair consisting of 'VarNames' and a cell array of character vectors including the names for the columns of X first, and the name for the response variable y last.
'VarNames' is not applicable to variables in a table or dataset array, because those variables already have names.
For example, if in your data, horsepower, acceleration, and model year of the cars are the predictor variables, and miles per gallon (MPG) is the response variable, then you can name the variables as follows.
Example: 'VarNames',{'Horsepower','Acceleration','Model_Year','MPG'}
Data Types: cell
Control for display of information, specified as the comma-separated pair consisting of 'Verbose' and one of the following:
0 — Suppress all display.
1 — Display the action taken at each step.
2 — Also display the actions evaluated at each step.
Example: 'Verbose',2
Observation weights, specified as the comma-separated pair consisting of 'Weights' and an n-by-1 vector of nonnegative scalar values, where n is the number of observations.
Data Types: single | double
Output Arguments
mdl — Linear model
LinearModel object
Linear model representing a least-squares fit of the response to the data, returned as a LinearModel object.
For the properties and methods of the linear model object, mdl, see the LinearModel class page.
Definitions
Terms Matrix
A terms matrix is a t-by-(p + 1) matrix specifying terms in a model, where t is the number of terms, p is the number of predictor variables, and + 1 accounts for the response variable.
The value of T(i,j) is the exponent of variable j in term i. Suppose there are three predictor variables A, B, and C. The 0 at the end of each term represents the response variable. In general:
If you have the variables in a table or dataset array, then the column of 0s for the response variable must appear in the position of the response variable. The following example illustrates this.
Load the sample data and define the dataset array.
Represent the linear model 'BloodPressure ~ 1 + Sex + Age + Smoker' in a terms matrix. The response variable is in the second column of the dataset array, so there must be a column of 0s for the response variable in the second column of the terms matrix.
T =
0 0 0 0
1 0 0 0
0 0 1 0
0 0 0 1
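The commands that produce this terms matrix are not shown above; a sketch, assuming the hospital sample data shipped with Statistics and Machine Learning Toolbox (BloodPressure is a two-column variable, so only its first column is used here), might look like the following.

```matlab
% Load the sample data and define the dataset array with the response
% (BloodPressure) in the second column.
load hospital
ds = dataset(hospital.Sex,hospital.BloodPressure(:,1),hospital.Age,...
    hospital.Smoker,'VarNames',{'Sex','BloodPressure','Age','Smoker'});

% Terms matrix for 'BloodPressure ~ 1 + Sex + Age + Smoker':
% the second column (the response position) is all zeros.
T = [0 0 0 0     % constant term
     1 0 0 0     % Sex
     0 0 1 0     % Age
     0 0 0 1]    % Smoker
```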
Redefine the dataset array.
Now, the response variable is the first term in the dataset array. Specify the same linear model, 'BloodPressure ~ 1 + Sex + Age + Smoker', using a terms matrix.
T =
0 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
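A sketch of the corresponding commands, again assuming the hospital sample data, might be:

```matlab
% Redefine the dataset array with the response (BloodPressure) first.
load hospital
ds = dataset(hospital.BloodPressure(:,1),hospital.Sex,hospital.Age,...
    hospital.Smoker,'VarNames',{'BloodPressure','Sex','Age','Smoker'});

% Same model, but now the all-zero response column is the first column.
T = [0 0 0 0     % constant term
     0 1 0 0     % Sex
     0 0 1 0     % Age
     0 0 0 1]    % Smoker
```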
If you have the predictor and response variables in a matrix and column vector, then you must include 0 for the response variable at the end of each term. The following example illustrates this.
Load the sample data and define the matrix of predictors.
Specify the model 'MPG ~ Acceleration + Weight + Acceleration:Weight + Weight^2' using a terms matrix and fit the model to the data. This model includes the main effect and two-way interaction terms for the variables Acceleration and Weight, and a second-order term for the variable Weight.
T =
0 0 0
1 0 0
0 1 0
1 1 0
0 2 0
Fit a linear model.
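The fitting commands are not shown above; a self-contained sketch, assuming the carsmall sample data, might be:

```matlab
% Load the sample data and define the matrix of predictors and the response.
load carsmall
X = [Acceleration,Weight];
y = MPG;

% Terms matrix for 'MPG ~ Acceleration + Weight + Acceleration:Weight + Weight^2';
% the trailing 0 in each row is for the response variable.
T = [0 0 0      % constant
     1 0 0      % Acceleration
     0 1 0      % Weight
     1 1 0      % Acceleration:Weight
     0 2 0];    % Weight^2

mdl = fitlm(X,y,T)
```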
mdl =

Linear regression model:
    y ~ 1 + x1*x2 + x2^2

Estimated Coefficients:
                      Estimate            SE         tStat        pValue
    (Intercept)         48.906        12.589        3.8847    0.00019665
    x1                 0.54418       0.57125       0.95261       0.34337
    x2               -0.012781     0.0060312       -2.1192      0.036857
    x1:x2          -0.00010892    0.00017925       -0.6076         0.545
    x2^2            9.7518e-07    7.5389e-07        1.2935       0.19917

Number of observations: 94, Error degrees of freedom: 89
Root Mean Squared Error: 4.1
R-squared: 0.751, Adjusted R-Squared 0.739
F-statistic vs. constant model: 67, p-value = 4.99e-26
Only the intercept and the x2 term, which corresponds to the Weight variable, are significant at the 5% significance level.
Now, perform a stepwise regression with a constant model as the starting model and a linear model with interactions as the upper model.
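The stepwise call is not shown above; a self-contained sketch consistent with this description, assuming the carsmall sample data, might be:

```matlab
% Stepwise regression on the carsmall predictors: start from the constant
% model and allow terms up to linear terms with interactions.
load carsmall
X = [Acceleration,Weight];
y = MPG;
mdl = LinearModel.stepwise(X,y,'constant','Upper','interactions')
```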
1. Adding x2, FStat = 259.3087, pValue = 1.643351e-28
mdl =

Linear regression model:
    y ~ 1 + x2

Estimated Coefficients:
                     Estimate           SE      tStat        pValue
    (Intercept)        49.238       1.6411     30.002    2.7015e-49
    x2             -0.0086119    0.0005348    -16.103    1.6434e-28

Number of observations: 94, Error degrees of freedom: 92
Root Mean Squared Error: 4.13
R-squared: 0.738, Adjusted R-Squared 0.735
F-statistic vs. constant model: 259, p-value = 1.64e-28
The results of the stepwise regression are consistent with the results of fitlm in the previous step.
Formula
A formula for model specification is a character vector of the form 'Y ~ terms', where
Y is the response name.
terms contains
Variable names
+ means include the next variable
- means do not include the next variable
: defines an interaction, a product of terms
* defines an interaction and all lower-order terms
^ raises the predictor to a power, exactly as in * repeated, so ^ includes lower-order terms as well
() groups terms
Note: Formulas include a constant (intercept) term by default. To exclude a constant term from the model, include -1 in the formula.
For example,
'Y ~ A + B + C' means a three-variable linear model with intercept.
'Y ~ A + B + C - 1' is a three-variable linear model without intercept.
'Y ~ A + B + C + B^2' is a three-variable model with intercept and a B^2 term.
'Y ~ A + B^2 + C' is the same as the previous example because B^2 includes a B term.
'Y ~ A + B + C + A:B' includes an A*B term.
'Y ~ A*B + C' is the same as the previous example because A*B = A + B + A:B.
'Y ~ A*B*C - A:B:C' has all interactions among A, B, and C, except the three-way interaction.
'Y ~ A*(B + C + D)' has all linear terms, plus products of A with each of the other variables.
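A sketch of passing such a formula in practice; the table and variable names here are hypothetical, constructed only to illustrate the syntax:

```matlab
% Hypothetical table with response Y and predictors A, B, C.
rng(0);
A = randn(50,1); B = randn(50,1); C = randn(50,1);
Y = 1 + 2*A - B + randn(50,1);
tbl = table(A,B,C,Y);

% Start from a formula; stepwise regression may then add or remove terms.
mdl = LinearModel.stepwise(tbl,'Y ~ A + B');

% The same starting model without an intercept:
mdl0 = LinearModel.stepwise(tbl,'Y ~ A + B - 1');
```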
Wilkinson Notation
Wilkinson notation describes the factors present in models. The notation relates to factors present in models, not to the multipliers (coefficients) of those factors.

Wilkinson Notation — Factors in Standard Notation
1 — Constant (intercept) term
A^k, where k is a positive integer — A, A^2, ..., A^k
A + B — A, B
A*B — A, B, A*B
A:B — A*B only
-B — Do not include B
A*B + C — A, B, C, A*B
A + B + C + A:B — A, B, C, A*B
A*B*C - A:B:C — A, B, C, A*B, A*C, B*C
A*(B + C) — A, B, C, A*B, A*C

Statistics and Machine Learning Toolbox™ notation always includes a constant term unless you explicitly remove the term using -1.
Examples
Fit a Linear Model Using Stepwise Regression
Load the sample data.
hald contains hardening data for 13 different concrete compositions. heat is the heat of hardening after 180 days. ingredients is the percentage of each different ingredient in the cement sample.
Fit a linear model to the data. Set the criterion value to enter the model as 0.06.
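The commands are not shown above; a sketch, assuming the hald sample data, might be:

```matlab
% Load the sample data: ingredients (predictors) and heat (response).
load hald

% Stepwise fit with the entrance tolerance PEnter set to 0.06.
mdl = LinearModel.stepwise(ingredients,heat,'PEnter',0.06)
```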
1. Adding x4, FStat = 22.7985, pValue = 0.000576232
2. Adding x1, FStat = 108.2239, pValue = 1.105281e-06
3. Adding x2, FStat = 5.0259, pValue = 0.051687
4. Removing x4, FStat = 1.8633, pValue = 0.2054
mdl =
Linear regression model:
y ~ 1 + x1 + x2
Estimated Coefficients:
Estimate SE tStat pValue
________ ________ ______ __________
    (Intercept)     52.577      2.2862    22.998    5.4566e-10
    x1              1.4683      0.1213    12.105    2.6922e-07
    x2             0.66225    0.045855    14.442     5.029e-08

Number of observations: 13, Error degrees of freedom: 10
Root Mean Squared Error: 2.41
R-squared: 0.979, Adjusted R-Squared 0.974
F-statistic vs. constant model: 230, p-value = 4.41e-09
By default, the starting model is the constant model. stepwiselm performs forward selection and adds x4, x1, and x2 in that order, because the corresponding p-values are less than the PEnter value of 0.06. stepwiselm then uses backward elimination and removes x4 from the model because, once x2 is in the model, the p-value of x4 is higher than the default value of PRemove, 0.1.
Stepwise Regression Using Specified Model Formula and Variables
Perform stepwise regression using variables stored in a dataset array. Specify the starting model using Wilkinson notation, and identify the response and predictor variables using optional arguments.
Load the sample data.
The hospital dataset array includes the gender, age, weight, and smoking status of patients.
Fit a linear model with a starting model of a constant term and Smoker as the predictor variable. Specify the response variable, Weight, and the categorical predictor variables, Sex, Age, and Smoker.
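The call that produces the output below is not shown above; a sketch consistent with the description, assuming the hospital sample data, might be:

```matlab
load hospital

% Start from 'Weight ~ 1 + Smoker'; name the response variable and the
% categorical predictors explicitly.
mdl = LinearModel.stepwise(hospital,'Weight ~ 1 + Smoker',...
    'ResponseVar','Weight','CategoricalVars',{'Sex','Age','Smoker'})
```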
1. Adding Sex, FStat = 770.0158, pValue = 6.262758e-48
2. Removing Smoker, FStat = 0.21224, pValue = 0.64605
mdl =
Linear regression model:
Weight ~ 1 + Sex
Estimated Coefficients:
Estimate SE tStat pValue
________ ______ ______ ___________
    (Intercept)    130.47    1.1995    108.77    5.2762e-104
    Sex_Male        50.06    1.7496    28.612     2.2464e-49

Number of observations: 100, Error degrees of freedom: 98
Root Mean Squared Error: 8.73
R-squared: 0.893, Adjusted R-Squared 0.892
F-statistic vs. constant model: 819, p-value = 2.25e-49
At each step, stepwiselm searches for terms to add and remove. At the first step, the stepwise algorithm adds Sex to the model with a p-value of 6.26e-48. Then, it removes Smoker from the model, since given Sex in the model, the variable Smoker becomes redundant. stepwiselm includes only Sex in the final linear model. The weights of the patients do not seem to differ significantly according to age or smoking status.
Algorithms
Stepwise regression is a systematic method for adding and removing terms from a linear or generalized linear model based on their statistical significance in explaining the response variable. The method begins with an initial model, specified using modelspec, and then compares the explanatory power of incrementally larger and smaller models.
MATLAB® uses forward and backward stepwise regression to determine a final model. At each step, the method searches for terms to add to or remove from the model based on the value of the 'Criterion' argument.
The default value of 'Criterion' is 'sse', and in this case, stepwiselm uses the p-value of an F-statistic to test models with and without a potential term at each step. If a term is not currently in the model, the null hypothesis is that the term would have a zero coefficient if added to the model. If there is sufficient evidence to reject the null hypothesis, the term is added to the model. Conversely, if a term is currently in the model, the null hypothesis is that the term has a zero coefficient. If there is insufficient evidence to reject the null hypothesis, the term is removed from the model.
Here is how stepwise proceeds when 'Criterion' is 'sse':
1. Fit the initial model.
2. Examine a set of available terms not in the model. If any of these terms have p-values less than an entrance tolerance (that is, if it is unlikely that they would have a zero coefficient if added to the model), add the one with the smallest p-value and repeat this step; otherwise, go to step 3.
3. If any of the available terms in the model have p-values greater than an exit tolerance (that is, the hypothesis of a zero coefficient cannot be rejected), remove the one with the largest p-value and go to step 2; otherwise, end.
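The steps above can be sketched as a loop. This is pseudocode-style MATLAB illustrating the 'sse' procedure, not the toolbox implementation; fitInitialModel, pValuesIfAdded, pValuesIfRemoved, addTerm, and removeTerm are hypothetical helpers standing in for the F-test computations.

```matlab
% Pseudocode sketch of the 'sse' stepwise loop (illustrative only).
% penter and premove are the entrance and exit tolerances
% ('PEnter' and 'PRemove').
model = fitInitialModel();                 % step 1
while true
    % Step 2: try to add the candidate term with the smallest p-value.
    [pAdd, bestAdd] = min(pValuesIfAdded(model, candidateTerms));
    if pAdd < penter
        model = addTerm(model, bestAdd);
        continue                           % repeat step 2
    end
    % Step 3: try to remove the current term with the largest p-value.
    [pDrop, worst] = max(pValuesIfRemoved(model, currentTerms));
    if pDrop > premove
        model = removeTerm(model, worst);
        continue                           % go back to step 2
    end
    break                                  % no single step improves the model
end
```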
At any stage, the function will not add a higher-order term if the model does not also include all lower-order terms that are subsets of it. For example, it will not try to add the term X1:X2^2 unless both X1 and X2^2 are already in the model. Similarly, the function will not remove lower-order terms that are subsets of higher-order terms that remain in the model. For example, it will not try to remove X1 or X2^2 if X1:X2^2 stays in the model.
The default 'Criterion' for stepwiseglm is 'Deviance', and it follows a similar procedure for adding or removing terms.
There are several other criteria available, which you can specify using the 'Criterion' argument. You can use the change in the value of the Akaike information criterion, Bayesian information criterion, R-squared, or adjusted R-squared as a criterion to add or remove terms.
Depending on the terms included in the initial model and the
order in which terms are moved in and out, the method might build
different models from the same set of potential terms. The method
terminates when no single step improves the model. There is no guarantee,
however, that a different initial model or a different sequence of
steps will not lead to a better fit. In this sense, stepwise models
are locally optimal, but might not be globally optimal.
References
[1] Draper, N. R., and H. Smith. Applied Regression Analysis. Hoboken, NJ: Wiley-Interscience, pp. 307–312, 1998.
Alternatives
You can also construct a stepwise linear model using stepwiselm.
You can construct a model using fitlm, then manually adjust the model using step, addTerms, or removeTerms.
Use fitlm for robust regression. You cannot use robust regression and stepwise regression together.