LinearModel.stepwise will be removed in a future release. Use stepwiselm instead.
mdl = LinearModel.stepwise(tbl,modelspec)
mdl = LinearModel.stepwise(X,y,modelspec)
mdl = LinearModel.stepwise(___,Name,Value)
mdl = LinearModel.stepwise(tbl,modelspec) returns a linear model of a table or dataset array tbl, using stepwise regression to add or remove predictors. modelspec is the starting model for the stepwise procedure.
mdl = LinearModel.stepwise(X,y,modelspec) creates a linear model of the responses y to a data matrix X, using stepwise regression to add or remove predictors. modelspec is the starting model for the stepwise procedure.
mdl = LinearModel.stepwise(___,Name,Value) creates a linear model for any of the inputs in the previous syntaxes, with additional options specified by one or more Name,Value pair arguments.
For example, you can specify the categorical variables, the smallest or largest set of terms to use in the model, the maximum number of steps to take, or the criterion LinearModel.stepwise uses to add or remove terms.
You cannot use robust regression with stepwise regression. Check your data for outliers before using LinearModel.stepwise.
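As an illustrative sketch of the Name,Value options (the hospital data and the specific option values here are assumptions for demonstration, not requirements):

```matlab
% Sketch: start from a constant model, allow at most linear terms,
% and cap the procedure at five steps. Data and option values are
% illustrative assumptions.
load hospital
tbl = table(hospital.Age,hospital.Smoker,hospital.BloodPressure(:,1), ...
    'VariableNames',{'Age','Smoker','BloodPressure'});
mdl = LinearModel.stepwise(tbl,'constant', ...
    'ResponseVar','BloodPressure', ...
    'Upper','linear', ...
    'NSteps',5);
```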
For other methods or properties of the LinearModel object, see LinearModel.
A terms matrix is a t-by-(p + 1) matrix specifying terms in a model, where t is the number of terms, p is the number of predictor variables, and plus one is for the response variable.
The value of T(i,j) is the exponent of variable j in term i. Suppose there are three predictor variables A, B, and C:
[0 0 0 0] % Constant term or intercept
[0 1 0 0] % B; equivalently, A^0 * B^1 * C^0
[1 0 1 0] % A*C
[2 0 0 0] % A^2
[0 1 2 0] % B*(C^2)
The 0 at the end of each term represents the response variable. In general:
If you have the variables in a table or dataset array, then the column of 0s in the terms matrix must be in the same position as the response variable in the table or dataset array. The following example illustrates this.
Load the sample data and define the dataset array.
load hospital
ds = dataset(hospital.Sex,hospital.BloodPressure(:,1),hospital.Age,...
    hospital.Smoker,'VarNames',{'Sex','BloodPressure','Age','Smoker'});
Represent the linear model 'BloodPressure ~ 1 + Sex + Age + Smoker' in a terms matrix. The response variable is in the second column of the dataset array, so there must be a column of 0s for the response variable in the second column of the terms matrix.
T = [0 0 0 0;1 0 0 0;0 0 1 0;0 0 0 1]
T =
     0     0     0     0
     1     0     0     0
     0     0     1     0
     0     0     0     1
Redefine the dataset array.
ds = dataset(hospital.BloodPressure(:,1),hospital.Sex,hospital.Age,...
    hospital.Smoker,'VarNames',{'BloodPressure','Sex','Age','Smoker'});
Now, the response variable is the first variable in the dataset array. Specify the same linear model, 'BloodPressure ~ 1 + Sex + Age + Smoker', using a terms matrix.
T = [0 0 0 0;0 1 0 0;0 0 1 0;0 0 0 1]
T =
     0     0     0     0
     0     1     0     0
     0     0     1     0
     0     0     0     1
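A terms matrix can then stand in for a formula when fitting. For example, a minimal sketch that fits the model specified by T to the redefined dataset array ds:

```matlab
% Sketch: fit the linear model specified by the terms matrix T to ds.
% Uses ds and T as defined above, with BloodPressure in the first column.
mdl = fitlm(ds,T)
```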
If you have the predictor and response variables in a matrix and column vector, then you must include 0 for the response variable at the end of each term. The following example illustrates this.
Load the sample data and define the matrix of predictors.
load carsmall
X = [Acceleration,Weight];
Specify the model 'MPG ~ Acceleration + Weight + Acceleration:Weight + Weight^2' using a terms matrix, and fit the model to the data. This model includes the main effects and the two-way interaction term for the variables Acceleration and Weight, and a second-order term for the variable Weight.
T = [0 0 0;1 0 0;0 1 0;1 1 0;0 2 0]
T =
     0     0     0
     1     0     0
     0     1     0
     1     1     0
     0     2     0
Fit a linear model.
mdl = fitlm(X,MPG,T)
mdl =

Linear regression model:
    y ~ 1 + x1*x2 + x2^2

Estimated Coefficients:
                      Estimate            SE       tStat        pValue
    (Intercept)         48.906        12.589      3.8847    0.00019665
    x1                 0.54418       0.57125     0.95261       0.34337
    x2               -0.012781     0.0060312     -2.1192      0.036857
    x1:x2          -0.00010892    0.00017925     -0.6076         0.545
    x2^2            9.7518e-07    7.5389e-07      1.2935       0.19917

Number of observations: 94, Error degrees of freedom: 89
Root Mean Squared Error: 4.1
R-squared: 0.751,  Adjusted R-Squared 0.739
F-statistic vs. constant model: 67, p-value = 4.99e-26
Only the intercept and the x2 term, which corresponds to the Weight variable, are significant at the 5% significance level.
Now, perform a stepwise regression with a constant model as the starting model and a linear model with interactions as the upper model.
T = [0 0 0;1 0 0;0 1 0;1 1 0];
mdl = stepwiselm(X,MPG,[0 0 0],'upper',T)
1. Adding x2, FStat = 259.3087, pValue = 1.643351e-28

mdl =

Linear regression model:
    y ~ 1 + x2

Estimated Coefficients:
                     Estimate           SE      tStat        pValue
    (Intercept)        49.238       1.6411     30.002    2.7015e-49
    x2             -0.0086119    0.0005348    -16.103    1.6434e-28

Number of observations: 94, Error degrees of freedom: 92
Root Mean Squared Error: 4.13
R-squared: 0.738,  Adjusted R-Squared 0.735
F-statistic vs. constant model: 259, p-value = 1.64e-28
The results of the stepwise regression are consistent with the results of fitlm in the previous step.
A formula for model specification is a string of the form 'Y ~ terms'
where
Y is the response name.
terms contains
Variable names
+ means include the next variable
- means do not include the next variable
: defines an interaction, a product of terms
* defines an interaction and all lower-order terms
^ raises the predictor to a power, exactly as in * repeated, so ^ includes lower-order terms as well
() groups terms
Note: Formulas include a constant (intercept) term by default. To exclude a constant term from the model, include -1 in the formula.
For example:
'Y ~ A + B + C' means a three-variable linear model with intercept.
'Y ~ A + B + C - 1' is a three-variable linear model without intercept.
'Y ~ A + B + C + B^2' is a three-variable model with intercept and a B^2 term.
'Y ~ A + B^2 + C' is the same as the previous example because B^2 includes a B term.
'Y ~ A + B + C + A:B' includes an A*B term.
'Y ~ A*B + C' is the same as the previous example because A*B = A + B + A:B.
'Y ~ A*B*C - A:B:C' has all interactions among A, B, and C, except the three-way interaction.
'Y ~ A*(B + C + D)' has all linear terms, plus products of A with each of the other variables.
Wilkinson notation describes the factors present in models. The notation relates to factors present in models, not to the multipliers (coefficients) of those factors.
Wilkinson Notation | Factors in Standard Notation |
---|---|
1 | Constant (intercept) term |
A^k, where k is a positive integer | A, A^2, ..., A^k |
A + B | A, B |
A*B | A, B, A*B |
A:B | A*B only |
-B | Do not include B |
A*B + C | A, B, C, A*B |
A + B + C + A:B | A, B, C, A*B |
A*B*C - A:B:C | A, B, C, A*B, A*C, B*C |
A*(B + C) | A, B, C, A*B, A*C |
Statistics Toolbox™ notation always includes a constant term unless you explicitly remove the term using -1.
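To connect the notation back to the earlier carsmall example, here is a hedged sketch: this formula should specify the same terms as the terms matrix [0 0 0;1 0 0;0 1 0;1 1 0;0 2 0] used above.

```matlab
% Sketch: A*B expands to A + B + A:B, so this formula covers both main
% effects, their interaction, and the squared Weight term.
load carsmall
tbl = table(Acceleration,Weight,MPG);
mdl = fitlm(tbl,'MPG ~ Acceleration*Weight + Weight^2')
```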
Stepwise regression is a systematic method for adding and removing terms from a linear or generalized linear model based on their statistical significance in explaining the response variable. The method begins with an initial model, specified using modelspec, and then compares the explanatory power of incrementally larger and smaller models.
MATLAB^{®} uses forward and backward stepwise regression to determine a final model. At each step, the method searches for terms to add to or remove from the model based on the value of the 'Criterion' argument. The default value of 'Criterion' is 'sse', and in this case, stepwiselm uses the p-value of an F-statistic to test models with and without a potential term at each step. If a term is not currently in the model, the null hypothesis is that the term would have a zero coefficient if added to the model. If there is sufficient evidence to reject the null hypothesis, the term is added to the model. Conversely, if a term is currently in the model, the null hypothesis is that the term has a zero coefficient. If there is insufficient evidence to reject the null hypothesis, the term is removed from the model.
Here is how stepwise proceeds when 'Criterion' is 'sse':
1. Fit the initial model.
2. If any terms not in the model have p-values less than an entrance tolerance (that is, if it is unlikely that they would have a zero coefficient if added to the model), add the one with the smallest p-value and repeat this step; otherwise, go to step 3.
3. If any terms in the model have p-values greater than an exit tolerance (that is, the hypothesis of a zero coefficient cannot be rejected), remove the one with the largest p-value and go to step 2; otherwise, end.
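The entrance and exit tolerances can be set explicitly. A sketch, assuming the carsmall variables from the earlier example:

```matlab
% Sketch: require p < 0.01 to enter a term and p > 0.10 to remove one,
% starting from a constant model. Tolerance values are illustrative.
load carsmall
X = [Acceleration,Weight];
mdl = stepwiselm(X,MPG,'constant','PEnter',0.01,'PRemove',0.10)
```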
The default value of 'Criterion' for stepwiseglm is 'Deviance', and stepwiseglm follows a similar procedure for adding or removing terms.
There are several other criteria available, which you can specify using the 'Criterion' argument. You can use the change in the value of the Akaike information criterion, Bayesian information criterion, R-squared, or adjusted R-squared as the criterion to add or remove terms.
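For instance, a sketch that uses the change in AIC instead of the default p-value criterion (again assuming the carsmall data):

```matlab
% Sketch: select terms by change in AIC rather than F-test p-values.
load carsmall
X = [Acceleration,Weight];
mdl = stepwiselm(X,MPG,'constant','Criterion','aic')
```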
Depending on the terms included in the initial model and the order in which terms are moved in and out, the method might build different models from the same set of potential terms. The method terminates when no single step improves the model. There is no guarantee, however, that a different initial model or a different sequence of steps will not lead to a better fit. In this sense, stepwise models are locally optimal, but might not be globally optimal.
[1] Draper, N. R., and H. Smith. Applied Regression Analysis. Hoboken, NJ: Wiley-Interscience, pp. 307–312, 1998.
You can also construct a stepwise linear model using stepwiselm.
You can construct a model using fitlm, then manually adjust the model using step, addTerms, or removeTerms. Use fitlm for robust regression. You cannot use robust regression and stepwise regression together.
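A minimal sketch of that manual workflow, assuming the carsmall data from the earlier examples:

```matlab
% Sketch: fit a hand-picked model, adjust it term by term, then let
% stepwise take a limited number of steps from the adjusted model.
load carsmall
tbl = table(Acceleration,Weight,MPG);
mdl = fitlm(tbl,'MPG ~ Weight');        % start from a hand-picked model
mdl = addTerms(mdl,'Acceleration');     % add a term manually
mdl = removeTerms(mdl,'Acceleration');  % remove it again
mdl = step(mdl,'NSteps',3);             % up to 3 stepwise steps
```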