mnrfit

Multinomial logistic regression

Syntax

B = mnrfit(X,Y)
B = mnrfit(X,Y,Name,Value)
[B,dev,stats] = mnrfit(___)

Description

B = mnrfit(X,Y) returns a matrix, B, of coefficient estimates for a multinomial logistic regression of the nominal responses in Y on the predictors in X.

B = mnrfit(X,Y,Name,Value) returns a matrix, B, of coefficient estimates for a multinomial model fit with additional options specified by one or more Name,Value pair arguments.

For example, you can fit a nominal, an ordinal, or a hierarchical model, or change the link function.

[B,dev,stats] = mnrfit(___) also returns the deviance of the fit, dev, and the structure stats for any of the previous input arguments. stats contains model statistics such as degrees of freedom, standard errors for coefficient estimates, and residuals.

Examples

Multinomial Regression for Nominal Responses

Fit a multinomial regression for nominal outcomes and interpret the results.

Load the sample data.

load('fisheriris.mat')

The column vector species contains the species of each iris flower: setosa, versicolor, or virginica. The double matrix meas contains four measurements for each flower: the lengths and widths of the sepals and petals, in centimeters.

Define the nominal response variable using a categorical array.

sp = categorical(species);

Fit a multinomial regression model to predict the species using the measurements.

[B,dev,stats] = mnrfit(meas,sp);
B
B =

   13.3860   15.4492
    2.4623    1.8196
    5.2948    2.5700
   -7.4916   -4.3714
   -8.9322   -7.6467

This is a nominal model for the response category relative risks, with separate slopes on all four predictors for each response category. The first row of B contains the intercept terms for the relative risk of the first two response categories, setosa and versicolor, versus the reference category, virginica. The last four rows contain the slopes for the models of the first two categories. mnrfit uses the last category, virginica, as the reference category.

The models for the relative risk of an iris flower being a setosa versus a virginica, and the relative risk of an iris flower being a versicolor versus a virginica, are, respectively,

ln(P(setosa)/P(virginica)) = 13.38 + 2.46*X1 + 5.29*X2 – 7.49*X3 – 8.93*X4

and

ln(P(versicolor)/P(virginica)) = 15.45 + 1.82*X1 + 2.57*X2 – 4.37*X3 – 7.65*X4,

where X1, X2, X3, and X4 are the four measurements in meas.
The coefficients express the effects of the predictor variables on the relative risk or the log odds of being in one category versus the reference category.

For example, the estimated coefficient 2.46 indicates that the relative risk of being species 1 (setosa) versus species 3 (virginica) increases by a factor of exp(2.46) for each unit increase in X1, the first measurement, all else being equal.

In terms of log odds, the relative log odds of being a setosa versus a virginica increases by 2.46 with a one-unit increase in X1, all else being equal.
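To read the slope estimates directly as relative-risk ratios, you can exponentiate them. This is a minimal sketch, reusing the matrix B computed above:

exp(B(2:end,:))

Each entry is the multiplicative change in the relative risk per unit increase in the corresponding measurement; the first column compares setosa with virginica, the second compares versicolor with virginica.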

Check the statistical significance of the model coefficients.

stats.p
ans =

    0.2457    0.0031
    0.4543    0.1048
    0.0773    0.0815
    0.0258    0.0007
    0.1856    0.0002

The p-value of 0.0258 indicates that the third measure is significant on the relative risk of being a setosa versus a virginica (species 1 compared to species 3). The p-values of 0.0007 and 0.0002 indicate that the third and fourth measures are significant on the relative risk of being a versicolor versus a virginica (species 2 compared to species 3).

Request the standard errors of coefficient estimates.

stats.se
ans =

   11.5316    5.2201
    3.2905    1.1218
    2.9976    1.4753
    3.3609    1.2869
    6.7474    2.0846

Calculate the 95% confidence limits for the coefficients.

LL = stats.beta - 1.96.*stats.se;
UL = stats.beta + 1.96.*stats.se;

Display the confidence intervals for the coefficients of the model for the relative risk of being a setosa versus a virginica (the first column of coefficients in B).

[LL(:,1) UL(:,1)]
ans =

   -9.2160   35.9880
   -3.9869    8.9116
   -0.5805   11.1701
  -14.0790   -0.9043
  -22.1570    4.2926

Find the confidence intervals for the coefficients of the model for the relative risk of being a versicolor versus a virginica (the second column of coefficients in B).

[LL(:,2) UL(:,2)]
ans =

    5.2177   25.6807
   -0.3791    4.0184
   -0.3216    5.4615
   -6.8938   -1.8490
  -11.7324   -3.5610
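To express these intervals on the relative-risk (odds-ratio) scale, you can exponentiate the limits as well; a short sketch, reusing LL and UL from above:

exp([LL(:,2) UL(:,2)])

An exponentiated interval that excludes 1 corresponds to an interval on the original scale that excludes 0.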

Multinomial Regression for Ordinal Responses

Fit a multinomial regression model for categorical responses with natural ordering among categories.

Load the sample data and define the predictor variables.

load('carbig.mat')
X = [Acceleration Displacement Horsepower Weight];

The predictor variables are the acceleration, engine displacement, horsepower, and weight of the cars. The response variable is miles per gallon (mpg).

Create an ordinal response variable categorizing MPG into four levels from 9 to 48 mpg by labeling the response values in the range 9–19 as 1, 20–29 as 2, 30–39 as 3, and 40–48 as 4.

miles = ordinal(MPG,{'1','2','3','4'},[],[9,19,29,39,48]);

Fit an ordinal response model for the response variable miles.

[B,dev,stats] = mnrfit(X,miles,'model','ordinal');
B
B =

  -16.6895
  -11.7208
   -8.0606
    0.1048
    0.0103
    0.0645
    0.0017

The first three elements of B are the intercept terms for the models, and the last four elements are the coefficients of the covariates, assumed common across all categories. This model corresponds to parallel regression, also called the proportional odds model, where there are different intercepts but common slopes among categories. You can specify this using the 'interactions','off' name-value pair argument, which is the default for ordinal models.

[B(1:3)'; repmat(B(4:end),1,3)]
ans =
  -16.6895  -11.7208   -8.0606
    0.1048    0.1048    0.1048
    0.0103    0.0103    0.0103
    0.0645    0.0645    0.0645
    0.0017    0.0017    0.0017

The link function in the model is logit ('link','logit'), which is the default for an ordinal model. The coefficients express the relative risk or log odds of the mpg of a car being less than or equal to one value versus greater than that value.

The proportional odds model in this example is

ln(P(mpg ≤ 19)/P(mpg > 19)) = –16.6895 + 0.1048*XA + 0.0103*XD + 0.0645*XH + 0.0017*XW
ln(P(mpg ≤ 29)/P(mpg > 29)) = –11.7208 + 0.1048*XA + 0.0103*XD + 0.0645*XH + 0.0017*XW
ln(P(mpg ≤ 39)/P(mpg > 39)) = –8.0606 + 0.1048*XA + 0.0103*XD + 0.0645*XH + 0.0017*XW

where XA, XD, XH, and XW denote acceleration, displacement, horsepower, and weight. For example, the coefficient estimate of 0.1048 indicates that a unit change in acceleration changes the odds of the mpg of a car being less than or equal to 19 versus more than 19, or less than or equal to 29 versus greater than 29, or less than or equal to 39 versus greater than 39, by a factor of exp(0.1048), all else being equal.
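To turn the fitted coefficients into predicted category probabilities, you can pass B to mnrval with the matching model type; a brief sketch, reusing B and X from the fit above:

pihat = mnrval(B,X,'model','ordinal');
pihat(1:5,:)

Each row of pihat holds the predicted probabilities of the four mpg categories for one car; rows of X that contain NaN values produce NaN probabilities.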

Assess the significance of the coefficients.

stats.p
ans =

    0.0000
    0.0000
    0.0000
    0.1899
    0.0350
    0.0000
    0.0118

The p-values of 0.035, 0.0000, and 0.0118 for engine displacement, horsepower, and weight of a car, respectively, indicate that these factors are significant on the odds of mpg of a car being less than or equal to a certain value versus being greater than that value.

Hierarchical Multinomial Regression Model

Fit a hierarchical multinomial regression model.

Navigate to the folder containing sample data.

cd(matlabroot)
cd('help/toolbox/stats/examples')

Load the sample data.

load smoking

The data set smoking contains the smoking status variable Smoker and five other variables: sex, age, weight, and systolic and diastolic blood pressure. Sex is a binary variable where 1 indicates female patients, and 0 indicates male patients.

Define the response variable.

Y = categorical(smoking.Smoker);

The data in Smoker has four categories:

  • 0: Nonsmoker, 0 cigarettes a day

  • 1: Smoker, 1–5 cigarettes a day

  • 2: Smoker, 6–10 cigarettes a day

  • 3: Smoker, 11 or more cigarettes a day

Define the predictor variables.

X = [smoking.Sex smoking.Age smoking.Weight...
    smoking.SystolicBP smoking.DiastolicBP];

Fit a hierarchical multinomial model.

[B,dev,stats] = mnrfit(X,Y,'model','hierarchical');
B
B =

   43.8148    5.9571   44.0712
    1.8709   -0.0230    0.0662
    0.0188    0.0625    0.1335
    0.0046   -0.0072   -0.0130
   -0.2170    0.0416   -0.0324
   -0.2273   -0.1449   -0.4824

The first column of B includes the intercept and the coefficient estimates for the model of the relative risk of being a nonsmoker versus a smoker. The second column includes the parameter estimates for modeling the log odds of smoking 1–5 cigarettes a day versus more than 5 cigarettes a day, given that a person is a smoker. Finally, the third column includes the parameter estimates for modeling the log odds of a person smoking 6–10 cigarettes a day versus more than 10 cigarettes a day, given that the person smokes more than 5 cigarettes a day.

The coefficients differ across categories. You can specify this using the 'interactions','on' name-value pair argument, which is the default for hierarchical models. So, the model in this example is

ln(P(y = 0)/P(y > 0)) = 43.8148 + 1.8709*XS + 0.0188*XA + 0.0046*XW – 0.2170*XSBP – 0.2273*XDBP
ln(P(y = 1 | y > 0)/P(y > 1 | y > 0)) = 5.9571 – 0.0230*XS + 0.0625*XA – 0.0072*XW + 0.0416*XSBP – 0.1449*XDBP
ln(P(y = 2 | y > 1)/P(y = 3 | y > 1)) = 44.0712 + 0.0662*XS + 0.1335*XA – 0.0130*XW – 0.0324*XSBP – 0.4824*XDBP

where y is the smoking category and XS, XA, XW, XSBP, and XDBP denote sex, age, weight, systolic blood pressure, and diastolic blood pressure.
For example, the coefficient estimate of 1.8709 indicates that the likelihood of being a smoker versus a nonsmoker increases by a factor of exp(1.8709) = 6.49 as sex changes from female to male, all else held constant.
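You can compute these multiplicative effects for all predictors at once by exponentiating the first column of B; a quick sketch, reusing B from the hierarchical fit above:

exp(B(2:end,1))

Each entry is the factor by which the relative risk of being a nonsmoker versus a smoker changes per unit increase in the corresponding predictor; exp(B(2,1)) reproduces the factor of 6.49 discussed above.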

Assess the statistical significance of the terms.

stats.p
ans =

    0.0000    0.5363    0.2149
    0.3549    0.9912    0.9835
    0.6850    0.2676    0.2313
    0.9032    0.8523    0.8514
    0.0009    0.5187    0.8165
    0.0004    0.0483    0.0545

Sex, age, and weight do not appear significant at any level. The p-values of 0.0009 and 0.0004 indicate that both types of blood pressure are significant on the relative risk of a person being a smoker versus a nonsmoker. The p-value of 0.0483 shows that only diastolic blood pressure is significant on the odds of a person smoking 1–5 cigarettes a day versus more than 5 cigarettes a day. Similarly, the p-value of 0.0545 indicates that diastolic blood pressure is borderline significant on the odds of a person smoking 6–10 cigarettes a day versus more than 10 cigarettes a day.

Check if any nonsignificant factors are correlated to each other. Draw a scatterplot of age versus weight grouped by sex.

figure()
gscatter(smoking.Age,smoking.Weight,smoking.Sex)
legend('Male','Female')
xlabel('Age')
ylabel('Weight')

The range of weight of an individual seems to differ according to gender. Age does not seem to have any obvious correlation with sex or weight. Age is insignificant and weight seems to be correlated with sex, so you can eliminate both and reconstruct the model.

Eliminate age and weight from the model and fit a hierarchical model with sex, systolic blood pressure, and diastolic blood pressure as the predictor variables.

X = double([smoking.Sex smoking.SystolicBP...
smoking.DiastolicBP]);
[B,dev,stats] = mnrfit(X,Y,'model','hierarchical');
B
B =

    44.8456    5.3230   25.0248
    1.6045    0.2330    0.4982
   -0.2161    0.0497    0.0179
   -0.2222   -0.1358   -0.3092

Here, the coefficient estimate of 1.6045 indicates that the likelihood of being a nonsmoker versus a smoker increases by a factor of exp(1.6045) = 4.97 as sex changes from male to female. A unit increase in systolic blood pressure multiplies the relative risk of being a nonsmoker versus a smoker by exp(–0.2161) = 0.8056, that is, a decrease. Similarly, a unit increase in diastolic blood pressure multiplies the relative risk of being a nonsmoker versus a smoker by exp(–0.2222) = 0.8007.

Assess the statistical significance of the terms.

stats.p
ans =

    0.0000    0.4715    0.2325
    0.0210    0.7488    0.6362
    0.0010    0.4107    0.8899
    0.0003    0.0483    0.0718

The p-values of 0.0210, 0.0010, and 0.0003 indicate that the terms sex and both types of blood pressure are significant on the relative risk of a person being a nonsmoker versus a smoker, given the other terms in the model. Based on the p-value of 0.0483, diastolic blood pressure appears significant on the relative risk of a person smoking 1–5 cigarettes versus more than 5 cigarettes a day, given that this person is a smoker. Because none of the p-values in the third column are less than 0.05, none of the variables are statistically significant on the relative risk of a person smoking 6–10 cigarettes versus more than 10 cigarettes a day, given that this person smokes more than 5 cigarettes a day.
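As with the other model types, you can convert the hierarchical coefficients into predicted category probabilities; a sketch, reusing B and X from the final fit above:

pihat = mnrval(B,X,'model','hierarchical');
pihat(1:5,:)

Each row holds the predicted probabilities of the four smoking categories for one person. mnrval also accepts 'type','conditional' or 'type','cumulative' if you want conditional or cumulative probabilities instead of category probabilities.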

Input Arguments

X — Observations on predictor variables
n-by-p matrix

Observations on predictor variables, specified as an n-by-p matrix. X contains n observations for p predictors.

    Note:   mnrfit automatically includes a constant term (intercept) in all models. Do not include a column of 1s in X.

Data Types: single | double

Y — Response values
n-by-k matrix | n-by-1 column vector

Response values, specified as a column vector or a matrix. Y can be one of the following:

  • An n-by-k matrix, where Y(i,j) is the number of outcomes of the multinomial category j for the predictor combination given by X(i,:). In this case, multiple observations can be made at each predictor combination, and the sample size for row i is the sum of the counts in Y(i,:). (See the counts-form sketch after this list.)

  • An n-by-1 column vector of scalar integers from 1 to k indicating the value of the response for each observation. In this case, all sample sizes are 1.

  • An n-by-1 categorical array indicating the nominal or ordinal value of the response for each observation. In this case, all sample sizes are 1.
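The examples on this page use the one-observation-per-row forms of Y. As a sketch of the counts form, with small hypothetical data, each row of Y holds the number of outcomes in each of k = 3 categories observed at the corresponding row of X:

x = [10 20 30 40 50]';
Y = [15 3 2; 11 6 3; 8 8 4; 5 9 6; 2 10 8];
B = mnrfit(x,Y)

This fits a nominal model by default; the two columns of B model categories 1 and 2 relative to the reference category 3.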

Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: 'Model','ordinal','Link','probit' specifies an ordinal model with a probit link function.

'Model' — Type of model to fit
'nominal' (default) | 'ordinal' | 'hierarchical'

Type of model to fit, specified as the comma-separated pair consisting of 'Model' and one of the following.

'nominal' — Default. There is no ordering among the response categories.
'ordinal' — There is a natural ordering among the response categories.
'hierarchical' — The choice of response category is sequential/nested.

Example: 'Model','ordinal'

'Interactions' — Indicator for interaction between multinomial categories and coefficients
'on' | 'off'

Indicator for an interaction between the multinomial categories and coefficients, specified as the comma-separated pair consisting of 'Interactions' and one of the following.

'on' — Default for nominal and hierarchical models. Fit a model with different coefficients across categories.
'off' — Default for ordinal models. Fit a model with a common set of coefficients for the predictor variables, across all multinomial categories. This is often described as parallel regression, or the proportional odds model.

In all cases, the model has different intercepts across categories. The choice of 'Interactions' determines the dimensions of the output array B.

Example: 'Interactions','off'

Data Types: logical
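The effect on the dimensions of B is easy to verify; a sketch, reusing meas and sp from the nominal example and X and miles from the ordinal example above:

size(mnrfit(meas,sp))                      % nominal, 'Interactions' is 'on': (p+1)-by-(k-1), here 5-by-2
size(mnrfit(X,miles,'model','ordinal'))    % ordinal, 'Interactions' is 'off': (k-1+p)-by-1, here 7-by-1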

'Link' — Link function
'logit' (default) | 'probit' | 'comploglog' | 'loglog'

Link function to use for ordinal and hierarchical models, specified as the comma-separated pair consisting of 'Link' and one of the following.

'logit' — Default. f(γ) = ln(γ/(1 – γ))
'probit' — f(γ) = Φ⁻¹(γ), where the error term is normally distributed with variance 1
'comploglog' — Complementary log-log: f(γ) = ln(–ln(1 – γ))
'loglog' — Log-log: f(γ) = ln(–ln(γ))

The link function defines the relationship between the response probabilities and the linear combination of predictors, Xβ. The link function applies to cumulative or conditional probabilities depending on whether the model is for an ordinal or a sequential/nested response. For example, for an ordinal model, γ represents the cumulative probability of being in categories 1 to j, and the model with a logit link function is

ln(γ/(1 – γ)) = ln(P(y ≤ j)/P(y > j)) = β0j + β1*X1 + β2*X2 + ... + βp*Xp,   j = 1, ..., k – 1,

where k represents the last category.

You cannot specify the 'Link' parameter for nominal models; these always use a multinomial logit link,

ln(πj/πr) = β0j + β1j*X1 + β2j*X2 + ... + βpj*Xp,   j = 1, ..., k – 1,

where πj stands for the probability of category j, and r corresponds to the reference category. mnrfit uses the last category as the reference category for nominal models.

Example: 'Link','loglog'
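For instance, you could refit the ordinal mpg model with a probit link and compare deviances; a sketch, reusing X and miles from the ordinal example above:

[Blogit,devLogit] = mnrfit(X,miles,'model','ordinal');
[Bprobit,devProbit] = mnrfit(X,miles,'model','ordinal','link','probit');
[devLogit devProbit]

The link with the smaller deviance fits this data better.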

'EstDisp' — Indicator for estimating dispersion parameter
'off' (default) | 'on'

Indicator for estimating a dispersion parameter, specified as the comma-separated pair consisting of 'EstDisp' and one of the following.

'off' — Default. Use the theoretical dispersion value of 1.
'on' — Estimate a dispersion parameter for the multinomial distribution in computing standard errors.

Example: 'EstDisp','on'

Output Arguments

B — Coefficient estimates
vector | matrix

Coefficient estimates for a multinomial logistic regression of the responses in Y, returned as a vector or a matrix.

  • If 'Interactions' is 'off', then B is a (k – 1 + p)-by-1 vector. The first k – 1 rows of B correspond to the intercept terms, one for each of the first k – 1 multinomial categories, and the remaining p rows correspond to the predictor coefficients, which are common across all of the first k – 1 categories.

  • If 'Interactions' is 'on', then B is a (p + 1)-by-(k – 1) matrix. Each column of B corresponds to the estimated intercept term and predictor coefficients for one of the first k – 1 multinomial categories.

The estimates for the kth category are taken to be zero as mnrfit takes the last category as the reference category.
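If it is convenient to carry the reference category explicitly, you can append its coefficients (all zero) yourself; a sketch for the 'Interactions','on' case:

Bfull = [B zeros(size(B,1),1)];

The last column of Bfull represents the kth (reference) category, whose coefficients are fixed at zero.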

dev — Deviance of the fit
scalar value

Deviance of the fit, returned as a scalar value. It is twice the difference between the maximum achievable log likelihood and the log likelihood attained under the fitted model. This corresponds to the sum of the deviance residuals,

dev = Σi rdi,

where rdi are the deviance residuals. For the definition of the deviance residuals, see stats.
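Because the difference in deviance between two nested fits is approximately chi-square distributed, dev supports likelihood-ratio tests; a sketch, reusing meas and sp from the nominal example above, that tests whether the first measurement can be dropped:

[~,devFull,statsFull] = mnrfit(meas,sp);
[~,devRed,statsRed] = mnrfit(meas(:,2:4),sp);
dfDiff = statsRed.dfe - statsFull.dfe;
pValue = 1 - chi2cdf(devRed - devFull,dfDiff)

A small pValue favors keeping the first measurement in the model.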

stats — Model statistics
structure

Model statistics, returned as a structure that contains the following fields.

beta — Coefficient estimates. These are the same as B.
dfe — Degrees of freedom for error:
  • If 'Interactions' is 'off', then the degrees of freedom is n*(k – 1) – (k – 1 + p).

  • If 'Interactions' is 'on', then the degrees of freedom is n*(k – 1) – (k – 1)*(p + 1).

sfit — Estimated dispersion parameter.
s — Theoretical or estimated dispersion parameter:

  • If 'EstDisp' is 'off', then s is the theoretical dispersion parameter, 1.

  • If 'EstDisp' is 'on', then s is equal to the estimated dispersion parameter, sfit.

estdisp — Indicator for a theoretical or estimated dispersion parameter.
se — Standard errors of the coefficient estimates, B.
coeffcorr — Estimated correlation matrix for B.
covb — Estimated covariance matrix for B.
t — t statistics for B.
p — p-values for B.
resid — Raw residuals, that is, observed minus fitted values,

rij = yij – πij*mi,

where πij is the categorical, cumulative, or conditional probability, and mi is the corresponding sample size.

residp — Pearson residuals, which are the raw residuals scaled by the estimated standard deviation,

rpij = (yij – πij*mi)/sqrt(πij*(1 – πij)*mi),

where πij is the categorical, cumulative, or conditional probability, and mi is the corresponding sample size.

residd — Deviance residuals,

rdi = 2*Σj yij*ln(yij/(πij*mi)),

where πij is the categorical, cumulative, or conditional probability, and mi is the corresponding sample size.
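One quick use of these fields is a rough overdispersion check, comparing the sum of squared Pearson residuals with the error degrees of freedom; a sketch, reusing the stats structure from any of the fits above:

pearsonChi2 = sum(stats.residp(:).^2);
pearsonChi2/stats.dfe

A ratio well above 1 suggests overdispersion; see the 'EstDisp' name-value pair argument.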

