ANOVA

One-Way ANOVA

Introduction to One-Way ANOVA

The purpose of one-way ANOVA is to find out whether data from several groups have a common mean; that is, to determine whether the groups actually differ in the measured characteristic.

One-way ANOVA is a simple special case of the linear model. The one-way ANOVA form of the model is

$$y_{ij} = \alpha_{.j} + \epsilon_{ij}$$

where:

  • $y_{ij}$ is a matrix of observations in which each column represents a different group.

  • $\alpha_{.j}$ is a matrix whose columns are the group means. (The "dot j" notation means that $\alpha$ applies to all rows of column j; that is, the value $\alpha_{ij}$ is the same for all i.)

  • $\epsilon_{ij}$ is a matrix of random disturbances.

The model assumes that the columns of y are a constant plus a random disturbance. You want to know if the constants are all the same.
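For concreteness, here is a minimal simulation sketch of this model (the group means and group size are made up for illustration):

alpha = [10 15 20];                  % hypothetical group means, one per column
y = repmat(alpha,6,1) + randn(6,3);  % y(i,j) = alpha(j) + random disturbance
p = anova1(y);                       % tests whether the column constants differ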

Perform One-Way ANOVA

This example shows how to perform one-way ANOVA to determine whether data from several groups have a common mean.

Load and display the sample data.

load hogg
hogg
hogg =

    24    14    11     7    19
    15     7     9     7    24
    21    12     7     4    19
    27    17    13     7    15
    33    14    12    12    10
    23    16    18    18    20

The data comes from a study by Hogg and Ledolter (1987) of bacteria counts in shipments of milk. The columns of the matrix hogg represent different shipments. The rows are bacteria counts from cartons of milk chosen randomly from each shipment.

Test if some shipments have higher counts than others.

[p,tbl,stats] = anova1(hogg);
p
p =

   1.1971e-04

The standard ANOVA table has columns for the sums of squares, degrees of freedom, mean squares (SS/df), F-statistic, and p-value. You can use the F-statistic to do a hypothesis test to find out if the bacteria counts are the same. anova1 returns the p-value from this hypothesis test. In this case the p-value is about 0.0001, a very small value. This is a strong indication that the bacteria counts from the different shipments are not the same. An F-statistic as extreme as the observed F would occur by chance only about once in 10,000 times if the counts were truly equal.

The p-value returned by anova1 depends on assumptions about the random disturbances $\epsilon_{ij}$ in the model equation. For the p-value to be correct, these disturbances need to be independent, normally distributed, and have constant variance.

You can get some graphical assurance that the means are different by looking at the box plots in the second figure window displayed by anova1. Note, however, that the notches are used for a comparison of medians, not a comparison of means. For more information on this display, see boxplot.
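If you want to look at the box plot on its own, you can call boxplot directly on the data matrix (a sketch; it draws one box per column, and the axis labels here are added only for readability):

load hogg
boxplot(hogg)
xlabel('Shipment')
ylabel('Bacteria count')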

Multiple Comparisons

Sometimes you need to determine not just whether there are any differences among the means, but specifically which pairs of means are significantly different. It is tempting to perform a series of t tests, one for each pair of means, but this procedure has a pitfall.

In a t test, you compute a t statistic and compare it to a critical value. The critical value is chosen so that when the means are really the same (any apparent difference is due to random chance), the probability that the t statistic will exceed the critical value is small, say 5%. When the means are different, the probability that the statistic will exceed the critical value is larger.

In this example there are five means, so there are 10 pairs of means to compare. It stands to reason that if all the means are the same, and if there is a 5% chance of incorrectly concluding that there is a difference in one pair, then the probability of making at least one incorrect conclusion among all 10 pairs is much larger than 5%.
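A quick calculation illustrates the pitfall (a sketch that treats the 10 tests as independent, which they are not exactly, so the result is only indicative):

alpha  = 0.05;              % per-comparison error rate
npairs = nchoosek(5,2);     % 10 pairs among five means
famwise = 1 - (1-alpha)^npairs
% famwise is about 0.40 -- roughly an eightfold inflation of the 5% rate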

Fortunately, there are procedures known as multiple comparison procedures that are designed to compensate for multiple tests.

Example: Multiple Comparisons

You can perform a multiple comparison test using the multcompare function and supplying it with the stats output from anova1.

load hogg
[p,tbl,stats] = anova1(hogg);
[c,m] = multcompare(stats)
c =
    1.0000    2.0000    2.4953   10.5000   18.5047
    1.0000    3.0000    4.1619   12.1667   20.1714
    1.0000    4.0000    6.6619   14.6667   22.6714
    1.0000    5.0000   -2.0047    6.0000   14.0047
    2.0000    3.0000   -6.3381    1.6667    9.6714
    2.0000    4.0000   -3.8381    4.1667   12.1714
    2.0000    5.0000  -12.5047   -4.5000    3.5047
    3.0000    4.0000   -5.5047    2.5000   10.5047
    3.0000    5.0000  -14.1714   -6.1667    1.8381
    4.0000    5.0000  -16.6714   -8.6667   -0.6619
m =
   23.8333    1.9273
   13.3333    1.9273
   11.6667    1.9273
    9.1667    1.9273
   17.8333    1.9273

The first output from multcompare has one row for each pair of groups, with an estimate of the difference in group means and a confidence interval for that difference. For example, the second row has the values

1.0000    3.0000    4.1619   12.1667   20.1714

indicating that the mean of group 1 minus the mean of group 3 is estimated to be 12.1667, and a 95% confidence interval for this difference is [4.1619, 20.1714]. This interval does not contain 0, so you can conclude that the means of groups 1 and 3 are different.
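You can also pick out the significant pairs programmatically by keeping the rows of c whose confidence intervals exclude zero (a sketch; columns 3 and 5 of c hold the lower and upper bounds):

sigpairs = c(sign(c(:,3)) == sign(c(:,5)), 1:2)
% sigpairs lists the pairs 1-2, 1-3, 1-4, and 4-5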

The second output contains the mean and its standard error for each group.

It is easier to visualize the difference between group means by looking at the graph that multcompare produces.

There are five groups. The graph instructs you to click the group you want to test; three groups have means significantly different from that of group 1.

The graph shows that group 1 is significantly different from groups 2, 3, and 4. By using the mouse to select group 4, you can determine that it is also significantly different from group 5. Other pairs are not significantly different.

Two-Way ANOVA

Introduction to Two-Way ANOVA

The purpose of two-way ANOVA is to find out whether data from several groups have a common mean. One-way ANOVA and two-way ANOVA differ in that the groups in two-way ANOVA have two categories of defining characteristics instead of one.

Suppose an automobile company has two factories, and each factory makes the same three models of car. It is reasonable to ask if the gas mileage in the cars varies from factory to factory as well as from model to model. There are two predictors, factory and model, to explain differences in mileage.

There could be an overall difference in mileage due to a difference in the production methods between factories. There is probably a difference in the mileage of the different models (irrespective of the factory) due to differences in design specifications. These effects are called additive.

Finally, a factory might make high mileage cars in one model (perhaps because of a superior production line), but not be different from the other factory for other models. This effect is called an interaction. It is impossible to detect an interaction unless there are duplicate observations for some combination of factory and car model.

Two-way ANOVA is a special case of the linear model. The two-way ANOVA form of the model is

$$y_{ijk} = \mu + \alpha_{.j} + \beta_{i.} + \gamma_{ij} + \epsilon_{ijk}$$

where, with respect to the automobile example above:

  • $y_{ijk}$ is a matrix of gas mileage observations (with row index i, column index j, and repetition index k).

  • $\mu$ is a constant matrix of the overall mean gas mileage.

  • $\alpha_{.j}$ is a matrix whose columns are the deviations of each car's gas mileage (from the mean gas mileage $\mu$) that are attributable to the car's model. All values in a given column of $\alpha_{.j}$ are identical, and the values in each row of $\alpha_{.j}$ sum to 0.

  • $\beta_{i.}$ is a matrix whose rows are the deviations of each car's gas mileage (from the mean gas mileage $\mu$) that are attributable to the car's factory. All values in a given row of $\beta_{i.}$ are identical, and the values in each column of $\beta_{i.}$ sum to 0.

  • $\gamma_{ij}$ is a matrix of interactions. The values in each row of $\gamma_{ij}$ sum to 0, and the values in each column of $\gamma_{ij}$ sum to 0.

  • $\epsilon_{ijk}$ is a matrix of random disturbances.

Perform Two-Way ANOVA

This example shows how to perform two-way ANOVA to determine the effect of car model and factory on the mileage rating of cars.

Load and display the sample data.

load mileage
mileage
mileage =

   33.3000   34.5000   37.4000
   33.4000   34.8000   36.8000
   32.9000   33.8000   37.6000
   32.6000   33.4000   36.6000
   32.5000   33.7000   37.0000
   33.0000   33.9000   36.7000

Perform two-way ANOVA.

cars = 3;
[p,tbl,stats] = anova2(mileage,cars);
p
p =

    0.0000    0.0039    0.8411

There are three models of cars (columns) and two factories (rows). The reason there are six rows in mileage instead of two is that each factory provides three cars of each model for the study. The data from the first factory is in the first three rows, and the data from the second factory is in the last three rows.
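For example, you can separate the two factories' observations directly (a sketch):

factory1 = mileage(1:3,:)   % rows 1-3: factory 1, one column per model
factory2 = mileage(4:6,:)   % rows 4-6: factory 2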

The standard ANOVA table has columns for the sums of squares, degrees of freedom, mean squares (SS/df), F-statistics, and p-values.

You can use the F-statistics to do hypothesis tests to find out if the mileage is the same across models, factories, and model-factory pairs (after adjusting for the additive effects). anova2 returns the p-values from these tests.

The p-value for the model effect is zero to four decimal places. This is a strong indication that the mileage varies from one model to another. An F-statistic as extreme as the observed F would occur by chance less than once in 10,000 times if the gas mileage were truly equal from model to model. If you used the multcompare function to perform a multiple comparison test, you would find that each pair of the three models is significantly different.

The p-value for the factory effect is 0.0039, which is also highly significant. This indicates that one factory is outperforming the other in the gas mileage of the cars it produces. The observed p-value indicates that an F-statistic as extreme as the observed F would occur by chance about four out of 1000 times if the gas mileage were truly equal from factory to factory.

There does not appear to be any interaction between factories and models. The p-value, 0.8411, means that the observed result is quite likely (84 out of 100 times) given that there is no interaction.

The p-values returned by anova2 depend on assumptions about the random disturbances $\epsilon_{ijk}$ in the model equation. For the p-values to be correct, these disturbances need to be independent, normally distributed, and have constant variance.

In addition, anova2 requires that data be balanced, which in this case means there must be the same number of cars for each combination of model and factory.

N-Way ANOVA

Introduction to N-Way ANOVA

You can use N-way ANOVA to determine if the means in a set of data differ when grouped by multiple factors. If they do differ, you can determine which factors or combinations of factors are associated with the difference.

N-way ANOVA is a generalization of two-way ANOVA. For three factors, the model can be written

$$y_{ijkl} = \mu + \alpha_{i..} + \beta_{.j.} + \gamma_{..k} + (\alpha\beta)_{ij.} + (\alpha\gamma)_{i.k} + (\beta\gamma)_{.jk} + (\alpha\beta\gamma)_{ijk} + \epsilon_{ijkl}$$

In this notation, parameters with two subscripts, such as $(\alpha\beta)_{ij.}$, represent the interaction effect of two factors. The parameter $(\alpha\beta\gamma)_{ijk}$ represents the three-way interaction. An ANOVA model can have the full set of parameters or any subset, but conventionally it does not include complex interaction terms unless it also includes all simpler terms for those factors. For example, one would generally not include the three-way interaction without also including all two-way interactions.

The anovan function performs N-way ANOVA. Unlike the anova1 and anova2 functions, anovan does not expect data in a tabular form. Instead, it expects a vector of response measurements and a separate vector (or text array) containing the values corresponding to each factor. This input data format is more convenient than matrices when there are more than two factors or when the number of measurements per factor combination is not constant.

N-Way ANOVA with a Small Data Set

Consider the following two-way example using anova2.

m = [23 15 20;27 17 63;43 3 55;41 9 90]
m =
    23    15    20
    27    17    63
    43     3    55
    41     9    90

anova2(m,2)

ans =
    0.0197    0.2234    0.2663

The factor information is implied by the shape of the matrix m and the number of measurements at each factor combination (2). Although anova2 does not actually require arrays of factor values, for illustrative purposes you could create them as follows.

cfactor = repmat(1:3,4,1)

cfactor =
     1     2     3
     1     2     3
     1     2     3
     1     2     3

rfactor = [ones(2,3); 2*ones(2,3)]

rfactor =

     1     1     1
     1     1     1
     2     2     2
     2     2     2

The cfactor matrix shows that each column of m represents a different level of the column factor. The rfactor matrix shows that the top two rows of m represent one level of the row factor, and the bottom two rows of m represent a second level of the row factor. In other words, each value m(i,j) represents an observation at column factor level cfactor(i,j) and row factor level rfactor(i,j).

To solve the above problem with anovan, you need to reshape the matrices m, cfactor, and rfactor to be vectors.

m = m(:);
cfactor = cfactor(:);
rfactor = rfactor(:);

[m cfactor rfactor]

ans =

    23     1     1
    27     1     1
    43     1     2
    41     1     2
    15     2     1
    17     2     1
     3     2     2
     9     2     2
    20     3     1
    63     3     1
    55     3     2
    90     3     2

anovan(m,{cfactor rfactor},2)

ans =

    0.0197
    0.2234
    0.2663

N-Way ANOVA with a Large Data Set

This example shows how to analyze a larger set of car data with mileage and other information on 406 cars made between 1970 and 1982.

Load the sample data and display the variable names.

load carbig
whos
  Name                Size            Bytes  Class     Attributes

  Acceleration      406x1              3248  double              
  Cylinders         406x1              3248  double              
  Displacement      406x1              3248  double              
  Horsepower        406x1              3248  double              
  MPG               406x1              3248  double              
  Mfg               406x13            10556  char                
  Model             406x36            29232  char                
  Model_Year        406x1              3248  double              
  Origin            406x7              5684  char                
  Weight            406x1              3248  double              
  cyl4              406x5              4060  char                
  org               406x7              5684  char                
  when              406x5              4060  char                

The example focuses on four variables. MPG is the number of miles per gallon for each of 406 cars (though some have missing values coded as NaN). The other three variables are factors: cyl4 (four-cylinder car or not), org (car originated in Europe, Japan, or the USA), and when (car was built early in the period, in the middle of the period, or late in the period).

Fit the full model, requesting up to three-way interactions and Type 3 sums-of-squares.

varnames = {'Origin';'4Cyl';'MfgDate'};
anovan(MPG,{org cyl4 when},3,3,varnames)
ans =

    0.0000
       NaN
    0.0000
    0.7032
    0.0001
    0.2072
    0.6990

Note that many terms are marked by a # symbol as not having full rank, and one of them has zero degrees of freedom and is missing a p-value. This can happen when there are missing factor combinations and the model has higher-order terms. In this case, the cross-tabulation below shows that there are no cars made in Europe during the early part of the period with other than four cylinders, as indicated by the 0 in table(2,1,1).

[table, chi2, p, factorvals] = crosstab(org,when,cyl4)
table(:,:,1) =

    82    75    25
     0     4     3
     3     3     4


table(:,:,2) =

    12    22    38
    23    26    17
    12    25    32


chi2 =

  207.7689


p =

   8.0973e-38


factorvals = 

    'USA'       'Early'    'Other'
    'Europe'    'Mid'      'Four' 
    'Japan'     'Late'          []

Consequently it is impossible to estimate the three-way interaction effects, and including the three-way interaction term in the model makes the fit singular.

Using even the limited information available in the ANOVA table, you can see that the three-way interaction has a p-value of 0.699, so it is not significant.

Examine only two-way interactions.

[p,tbl,stats,terms] = anovan(MPG,{org cyl4 when},2,3,varnames);
terms
terms =

     1     0     0
     0     1     0
     0     0     1
     1     1     0
     1     0     1
     0     1     1

Now all terms are estimable. The p-values for interaction term 4 (Origin*4Cyl) and interaction term 6 (4Cyl*MfgDate) are much larger than a typical cutoff value of 0.05, indicating these terms are not significant. You could choose to omit these terms and pool their effects into the error term. The output terms variable returns a matrix of codes, each of which is a bit pattern representing a term.
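For example, you can decode a row of terms back into factor names by using it as a logical index into varnames (a sketch):

terms(5,:)                     % [1 0 1] codes the Origin*MfgDate interaction
varnames(logical(terms(5,:)))  % returns {'Origin';'MfgDate'}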

Omit terms from the model by deleting their entries from terms.

terms([4 6],:) = []
terms =

     1     0     0
     0     1     0
     0     0     1
     1     0     1

Run anovan again, this time supplying the resulting matrix as the model argument.

anovan(MPG,{org cyl4 when},terms,3,varnames)
ans =

   1.0e-03 *

    0.0000
    0.0000
    0.0000
    0.1140

Now you have a more parsimonious model indicating that the mileage of these cars seems to be related to all three factors, and that the effect of the manufacturing date depends on where the car was made.

ANOVA with Random Effects

This example shows how to use anovan to fit models where a factor's levels represent a random selection from a larger (infinite) set of possible levels.

In an ordinary ANOVA model, each grouping variable represents a fixed factor. The levels of that factor are a fixed set of values. The goal is to determine whether different factor levels lead to different response values.

Set Up the Model

Load the sample data.

load mileage

The anova2 function works only with balanced data, and it infers the values of the grouping variables from the row and column numbers of the input matrix. The anovan function, on the other hand, requires you to explicitly create vectors of grouping variable values. Create these vectors in the following way.

Create an array indicating the factory for each value in mileage. This array is 1 for the first column, 2 for the second, and 3 for the third.

factory  = repmat(1:3,6,1);

Create an array indicating the car model for each mileage value. This array is 1 for the first three rows of mileage, and 2 for the remaining three rows.

carmod = [ones(3,3); 2*ones(3,3)];

Turn these matrices into vectors and display them.

mileage = mileage(:);
factory = factory(:);
carmod = carmod(:);
[mileage factory carmod]
ans =

   33.3000    1.0000    1.0000
   33.4000    1.0000    1.0000
   32.9000    1.0000    1.0000
   32.6000    1.0000    2.0000
   32.5000    1.0000    2.0000
   33.0000    1.0000    2.0000
   34.5000    2.0000    1.0000
   34.8000    2.0000    1.0000
   33.8000    2.0000    1.0000
   33.4000    2.0000    2.0000
   33.7000    2.0000    2.0000
   33.9000    2.0000    2.0000
   37.4000    3.0000    1.0000
   36.8000    3.0000    1.0000
   37.6000    3.0000    1.0000
   36.6000    3.0000    2.0000
   37.0000    3.0000    2.0000
   36.7000    3.0000    2.0000

Fit a Random Effects Model

Suppose you are studying a few factories but you want information about what would happen if you build these same car models in a different factory, either one that you already have or another that you might construct. To get this information, fit the analysis of variance model, specifying a model that includes an interaction term and indicating that the factory factor is random.

[pvals,tbl,stats] = anovan(mileage, {factory carmod}, ...
'model',2, 'random',1,'varnames',{'Factory' 'Car Model'});

In the fixed effects version of this fit, which you get by omitting the inputs 'random',1 in the preceding code, the effect of car model is significant, with a p-value of 0.0039. But in this example, which takes into account the random variation of the effect of the variable 'Car Model' from one factory to another, the effect is still significant, but with a higher p-value of 0.0136.
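For comparison, the fixed effects fit is the same call with the 'random' argument omitted (a sketch):

[pvalsFixed,tblFixed,statsFixed] = anovan(mileage, {factory carmod}, ...
    'model',2, 'varnames',{'Factory' 'Car Model'});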

F-Statistics for Models with Random Effects

The F-statistic in a model having random effects is defined differently than in a model having all fixed effects. In the fixed effects model, you compute the F-statistic for any term by taking the ratio of the mean square for that term to the mean square for error. In a random effects model, however, some F-statistics use a different mean square in the denominator.

In the example described in Set Up the Model, the effect of the variable 'Factory' could vary across car models. In this case, the interaction mean square takes the place of the error mean square in the F-statistic.

Find the F-statistic.

F = 26.6756 / 0.02
F =

   1.3338e+03

The degrees of freedom for the statistic are the degrees of freedom for the numerator (2) and denominator (2) mean squares.

Find the p-value.

pval = 1 - fcdf(F,2,2)
pval =

   7.4919e-04

With random effects, the expected value of each mean square depends not only on the variance of the error term, but also on the variances contributed by the random effects. You can see these dependencies by writing the expected values as linear combinations of contributions from the various model terms.

Find the coefficients of these linear combinations.

stats.ems
ans =

    6.0000    0.0000    3.0000    1.0000
    0.0000    9.0000    3.0000    1.0000
    0.0000    0.0000    3.0000    1.0000
         0         0         0    1.0000

This output is the ems field of the stats structure.

Display text representations of the linear combinations.

stats.txtems
ans = 

    '6*V(Factory)+3*V(Factory*Car Model)+V(Error)'
    '9*Q(Car Model)+3*V(Factory*Car Model)+V(Error)'
    '3*V(Factory*Car Model)+V(Error)'
    'V(Error)'

The expected value for the mean square due to car model (second term) includes contributions from a quadratic function of the car model effects, plus three times the variance of the interaction term's effect, plus the variance of the error term. Notice that if the car model effects were all zero, the expression would reduce to the expected mean square for the third term (the interaction term). That is why the F-statistic for the car model effect uses the interaction mean square in the denominator.

In some cases there is no single term whose expected value matches the one required for the denominator of the F-statistic. In that case, the denominator is a linear combination of mean squares. The stats structure contains fields giving the definitions of the denominators for each F-statistic. The txtdenom field, stats.txtdenom, contains a text representation, and the denom field contains a matrix that defines a linear combination of the variances of terms in the model. For balanced models like this one, the denom matrix, stats.denom, contains zeros and ones, because the denominator is just a single term's mean square.

Display the txtdenom field.

stats.txtdenom
ans = 

    'MS(Factory*Car Model)'
    'MS(Factory*Car Model)'
    'MS(Error)'

Display the denom field.

stats.denom
ans =

   -0.0000    1.0000    0.0000
   -0.0000    1.0000         0
    0.0000         0    1.0000

Variance Components

For the model described in Set Up the Model, consider the mileage for a particular car of a particular model made at a random factory. The variance of that mileage is the sum of components, or contributions, one from each of the random terms.

Display the names of the random terms.

stats.rtnames
ans = 

    'Factory'
    'Factory*Car Model'
    'Error'

You do not know the variances, but you can estimate them from the data. Recall that the ems field of the stats structure expresses the expected value of each term's mean square as a linear combination of unknown variances for random terms, and unknown quadratic forms for fixed terms. If you take the expected mean square expressions for the random terms, and equate those expected values to the computed mean squares, you get a system of equations that you can solve for the unknown variances. These solutions are the variance component estimates.

Display the variance component estimate for each term.

stats.varest
ans =

    4.4426
   -0.0313
    0.1139
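You can reproduce these estimates by hand from the expected mean square relationships (a sketch; the Factory and interaction mean squares are the ones quoted earlier from the ANOVA table, and MS(Error) equals the error variance estimate above):

msFactory = 26.6756;                % MS(Factory)
msInter   = 0.02;                   % MS(Factory*Car Model)
msError   = 0.1139;                 % MS(Error)
vError   = msError                  % V(Error)
vInter   = (msInter - msError)/3    % from 3*V(inter) + V(Error) = E[MS(inter)]
vFactory = (msFactory - msInter)/6  % 6*V(Factory) absorbs the rest of MS(Factory)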

Under some conditions, the variability attributed to a term is unusually low, and that term's variance component estimate is negative. In those cases it is common to set the estimate to zero, which you might do, for example, to create a bar graph of the components.

Create a bar graph of the components.

bar(max(0,stats.varest))
set(gca,'xtick',1:3,'xticklabel',stats.rtnames)

You can also compute confidence bounds for the variance estimate. The anovan function does this by computing confidence bounds for the variance expected mean squares, and finding lower and upper limits on each variance component containing all of these bounds. This procedure leads to a set of bounds that is conservative for balanced data. (That is, 95% confidence bounds will have a probability of at least 95% of containing the true variances if the number of observations for each combination of grouping variables is the same.) For unbalanced data, these are approximations that are not guaranteed to be conservative.

Display the variance estimates and the confidence limits for the variance estimates of each component.

[{'Term' 'Estimate' 'Lower' 'Upper'};
 stats.rtnames, num2cell([stats.varest stats.varci])]
ans = 

    'Term'                 'Estimate'    'Lower'     'Upper'   
    'Factory'              [  4.4426]    [1.0736]    [175.6038]
    'Factory*Car Model'    [ -0.0313]    [   NaN]    [     NaN]
    'Error'                [  0.1139]    [0.0586]    [  0.3103]

Other ANOVA Models

The anovan function also has arguments that enable you to specify two other types of model terms. First, the 'nested' argument specifies a matrix that indicates which factors are nested within other factors. A nested factor is one that takes different values within each level of the factor in which it is nested.

For example, the mileage data from the previous section assumed that the two car models produced in each factory were the same. Suppose instead that each factory produces two distinct car models, for a total of six car models, numbered 1 and 2 within each factory for convenience. Then car model is nested in factory. A more accurate and less ambiguous numbering of the car models would be as follows:

Factory    Car Model
1          1
1          2
2          3
2          4
3          5
3          6

However, it is common with nested models to number the nested factor the same way within each level of the factor in which it is nested.

Second, the 'continuous' argument specifies that some factors are to be treated as continuous variables. The remaining factors are categorical variables. Although the anovan function can fit models with multiple continuous and categorical predictors, the simplest model that combines one predictor of each type is known as an analysis of covariance model. The next section describes a specialized tool for fitting this model.
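As a preview (a sketch), you can fit such a combined model directly with anovan, marking the continuous predictor with the 'continuous' argument; this example assumes the carsmall variables Weight, MPG, and Model_Year that the next section loads:

load carsmall
% Weight (grouping variable 1) is treated as continuous; Model_Year stays
% categorical, giving an ANOCOVA-type fit.
p = anovan(MPG, {Weight Model_Year}, 'continuous',1, ...
    'varnames',{'Weight' 'Model_Year'});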

Analysis of Covariance

Introduction to Analysis of Covariance

Analysis of covariance is a technique for analyzing grouped data having a response (y, the variable to be predicted) and a predictor (x, the variable used to do the prediction). Using analysis of covariance, you can model y as a linear function of x, with the coefficients of the line possibly varying from group to group.

Analysis of Covariance Tool

The aoctool function opens an interactive graphical environment for fitting and prediction with analysis of covariance (ANOCOVA) models. It fits the following models for the ith group:

Same mean:        y = α + ε
Separate means:   y = (α + αi) + ε
Same line:        y = α + βx + ε
Parallel lines:   y = (α + αi) + βx + ε
Separate lines:   y = (α + αi) + (β + βi)x + ε

For example, in the parallel lines model the intercept varies from one group to the next, but the slope is the same for each group. In the same mean model, there is a common intercept and no slope. In order to make the group coefficients well determined, the tool imposes the constraints

$$\sum_i \alpha_i = 0, \qquad \sum_i \beta_i = 0$$

The following steps describe the use of aoctool.

  1. Load the data. The Statistics Toolbox™ data set carsmall.mat contains information on cars from the years 1970, 1976, and 1982. This example studies the relationship between the weight of a car and its mileage, and whether this relationship has changed over the years. To start the demonstration, load the data set.

    load carsmall

    The Workspace Browser shows the variables in the data set.

    You can also use aoctool with your own data.

  2. Start the tool. The following command calls aoctool to fit a separate line to the column vectors Weight and MPG for each of the three model year groups defined in Model_Year. The initial fit models the y variable, MPG, as a linear function of the x variable, Weight.

    [h,atab,ctab,stats] = aoctool(Weight,MPG,Model_Year);
    

    See the aoctool function reference page for detailed information about calling aoctool.

  3. Examine the output. The graphical output consists of a main window with a plot, a table of coefficient estimates, and an analysis of variance table. In the plot, each Model_Year group has a separate line. The data points for each group are coded with the same color and symbol, and the fit for each group has the same color as the data points.

    The coefficients of the three lines appear in the figure titled ANOCOVA Coefficients. You can see that the slopes are roughly –0.0078, with a small deviation for each group:

    • Model year 1970: y = (45.9798 – 8.5805) + (–0.0078 + 0.002)x + ε

    • Model year 1976: y = (45.9798 – 3.8902) + (–0.0078 + 0.0011)x + ε

    • Model year 1982: y = (45.9798 + 12.4707) + (–0.0078 – 0.0031)x + ε

    Because the three fitted lines have slopes that are roughly similar, you may wonder if they really are the same. The Model_Year*Weight interaction expresses the difference in slopes, and the ANOVA table shows a test for the significance of this term. With an F-statistic of 5.23 and a p-value of 0.0072, the slopes are significantly different.

  4. Constrain the slopes to be the same. To examine the fits when the slopes are constrained to be the same, return to the ANOCOVA Prediction Plot window and use the Model pop-up menu to select a Parallel Lines model. The window updates to show the following graph.

    Though this fit looks reasonable, it is significantly worse than the Separate Lines model. Use the Model pop-up menu again to return to the original model.

Confidence Bounds

The example in Analysis of Covariance Tool provides estimates of the relationship between MPG and Weight for each Model_Year, but how accurate are these estimates? To find out, you can superimpose confidence bounds on the fits by examining them one group at a time.

  1. In the Model_Year menu at the lower right of the figure, change the setting from All Groups to 82. The data and fits for the other groups are dimmed, and confidence bounds appear around the 82 fit.

    The dashed lines form an envelope around the fitted line for model year 82. Under the assumption that the true relationship is linear, these bounds provide a 95% confidence region for the true line. Note that the fits for the other model years are well outside these confidence bounds for Weight values between 2000 and 3000.

  2. Sometimes it is more valuable to be able to predict the response value for a new observation, not just estimate the average response value. Use the aoctool function Bounds menu to change the definition of the confidence bounds from Line to Observation. The resulting wider intervals reflect the uncertainty in the parameter estimates as well as the randomness of a new observation.

    Like the polytool function, the aoctool function has cross hairs that you can use to manipulate the Weight and watch the estimate and confidence bounds along the y-axis update. These values appear only when a single group is selected, not when All Groups is selected.

Multiple Comparisons

You can perform a multiple comparison test by using the stats output structure from aoctool as input to the multcompare function. The multcompare function can test slopes, intercepts, or population marginal means (the predicted MPG of the mean weight for each group). The example in Analysis of Covariance Tool shows that the slopes are not all the same, but could it be that two are the same and only the third is different? You can test that hypothesis.

multcompare(stats,0.05,'on','','s')

ans =
    1.0000    2.0000   -0.0012    0.0008    0.0029
    1.0000    3.0000    0.0013    0.0051    0.0088
    2.0000    3.0000    0.0005    0.0042    0.0079

This matrix shows that the estimated difference between the slopes of groups 1 and 2 (1970 and 1976) is 0.0008, and a confidence interval for the difference is [–0.0012, 0.0029]. There is no significant difference between the two. There are significant differences, however, between the slope for 1982 and each of the other two. The graph shows the same information.

Note that the stats structure was created in the initial call to the aoctool function, so it is based on the initial model fit (typically a separate-lines model). If you change the model interactively and want to base your multiple comparisons on the new model, you need to run aoctool again to get another stats structure, this time specifying your new model as the initial model.

Nonparametric Methods

Introduction to Nonparametric Methods

Statistics Toolbox functions include nonparametric versions of one-way and two-way analysis of variance. Unlike classical tests, nonparametric tests make only mild assumptions about the data, and are appropriate when the distribution of the data is non-normal. On the other hand, they are less powerful than classical methods for normally distributed data.

Both of the nonparametric functions described here will return a stats structure that can be used as an input to the multcompare function for multiple comparisons.
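For example, using the milk shipment data analyzed below (a sketch):

load hogg
[p,tbl,stats] = kruskalwallis(hogg);
multcompare(stats);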

Kruskal-Wallis Test

The example Perform One-Way ANOVA uses one-way analysis of variance to determine if the bacteria counts of milk varied from shipment to shipment. The one-way analysis rests on the assumption that the measurements are independent, and that each has a normal distribution with a common variance and a mean that is constant within each column. That analysis concluded that the column means were not all the same. The following example repeats the analysis using a nonparametric procedure.

The Kruskal-Wallis test is a nonparametric version of one-way analysis of variance. The assumption behind this test is that the measurements come from a continuous distribution, but not necessarily a normal distribution. The test is based on an analysis of variance using the ranks of the data values, not the data values themselves. Output includes a table similar to an ANOVA table, and a box plot.

You can run this test as follows:

load hogg

p = kruskalwallis(hogg)
p =
    0.0020

The low p-value means the Kruskal-Wallis test results agree with the one-way analysis of variance results.

Friedman's Test

Perform Two-Way ANOVA uses two-way analysis of variance to study the effect of car model and factory on car mileage. The example tests whether either of these factors has a significant effect on mileage, and whether there is an interaction between these factors. The conclusion of the example is there is no interaction, but that each individual factor has a significant effect. The next example examines whether a nonparametric analysis leads to the same conclusion.

Friedman's test is a nonparametric test for data having a two-way layout (data grouped by two categorical factors). Unlike two-way analysis of variance, Friedman's test does not treat the two factors symmetrically and it does not test for an interaction between them. Instead, it is a test for whether the columns are different after adjusting for possible row differences. The test is based on an analysis of variance using the ranks of the data across categories of the row factor. Output includes a table similar to an ANOVA table.

You can run Friedman's test as follows.

load mileage
p = friedman(mileage,3)
p =
  7.4659e-004

Recall that the classical analysis of variance gave a p-value to test column effects, row effects, and interaction effects. This p-value is for column effects. Using either this p-value or the p-value from ANOVA (p < 0.0001), you conclude that there are significant column effects.

In order to test for row effects, you need to rearrange the data to swap the roles of the rows and columns. For a data matrix x with no replications, you could simply transpose the data and type

p = friedman(x')

With replicated data it is slightly more complicated. A simple way is to transform the matrix into a three-dimensional array with the first dimension representing the replicates, swapping the other two dimensions, and restoring the two-dimensional shape.

x = reshape(mileage, [3 2 3]);
x = permute(x,[1 3 2]);
x = reshape(x,[9 2])
x =
   33.3000   32.6000
   33.4000   32.5000
   32.9000   33.0000
   34.5000   33.4000
   34.8000   33.7000
   33.8000   33.9000
   37.4000   36.6000
   36.8000   37.0000
   37.6000   36.7000

friedman(x,3)
ans =
    0.0082

Again, the conclusion is similar to that of the classical analysis of variance. Both this p-value and the one from ANOVA (p = 0.0039) lead you to conclude that there are significant row effects.

You cannot use Friedman's test to test for interactions between the row and column factors.
