`LinearMixedModel` computes the deviance of model *M* as minus two times
the loglikelihood of that model. Let *L*_{M} denote the maximum value of
the likelihood function for model *M*. Then, the deviance of model *M* is

$$-2\,\mathrm{log}\,{L}_{M}.$$
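For a concrete illustration of this definition, the deviance of a simple normal model fit by maximum likelihood can be computed directly from the maximized loglikelihood. This is a minimal Python sketch (the data values are made up, and `scipy.stats.norm` stands in for the model's likelihood; it is not the `LinearMixedModel` implementation):

```python
import numpy as np
from scipy.stats import norm

# Hypothetical sample; any values would do for this illustration.
y = np.array([1.2, 0.8, 1.5, 0.9, 1.1])

# Maximum likelihood estimates for a normal model:
# sample mean and the (biased, ddof=0) standard deviation.
mu_hat, sigma_hat = y.mean(), y.std()

# Maximized loglikelihood L_M, then deviance = -2 * log(L_M).
logL = norm.logpdf(y, mu_hat, sigma_hat).sum()
deviance = -2 * logL
print(deviance)
```

For the normal model this agrees with the closed form *n* log(2*πσ̂*²) + *n*, which provides a quick consistency check.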

A lower value of deviance indicates a better fit. Suppose *M*_{1} and *M*_{2} are
two different models, where *M*_{1} is
nested in *M*_{2}. Then, the
fit of the models can be assessed by comparing the deviances *Dev*_{1} and *Dev*_{2} of
these models. The difference of the deviances is

$$Dev = Dev_{1} - Dev_{2} = 2\left(\mathrm{log}\,{L}_{{M}_{2}} - \mathrm{log}\,{L}_{{M}_{1}}\right).$$

Usually, this difference asymptotically has a chi-square distribution
with degrees of freedom *v* equal to the number of parameters that are
estimated in one model but fixed (typically at 0) in the other. That
is, *v* is equal to the difference in the number of parameters
estimated in *M*_{1} and *M*_{2}. You can get the *p*-value for this
test using `1 - chi2cdf(Dev,v)`,
where *Dev* = *Dev*_{1} – *Dev*_{2}.

However, in mixed-effects models, when some variance components
fall on the boundary of the parameter space, the asymptotic distribution
of this difference is more complicated. For example, consider the
hypotheses

*H*_{0}: $$D=\left(\begin{array}{cc}{D}_{11}& 0\\ 0& 0\end{array}\right),$$ where *D*_{11} is a *q*-by-*q* symmetric
positive semidefinite matrix.

*H*_{1}: *D* is
a (*q*+1)-by-(*q*+1) symmetric positive
semidefinite matrix.

That is, *H*_{1} states
that the last row and column of *D* are different
from zero. Here, the bigger model *M*_{2} has *q* +
1 parameters and the smaller model *M*_{1} has *q* parameters.
In this case, *Dev* asymptotically follows a 50:50 mixture of *χ*^{2}_{q} and *χ*^{2}_{(q +
1)} distributions (Stram and Lee, 1994).
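Under this mixture, the *p*-value is the average of the two chi-square tail probabilities rather than a single tail probability. A minimal Python sketch, again using `scipy.stats.chi2.sf` as the analogue of `1 - chi2cdf` and a made-up test statistic:

```python
from scipy.stats import chi2

# Hypothetical deviance difference and q for the boundary case;
# under H0 the statistic follows a 50:50 mixture of
# chi-square(q) and chi-square(q+1) (Stram and Lee, 1994).
Dev, q = 5.2, 1   # assumed values, for illustration only

# Average the two upper-tail probabilities with equal weights.
p_value = 0.5 * chi2.sf(Dev, q) + 0.5 * chi2.sf(Dev, q + 1)
print(p_value)
```

Using the naive `chi2.sf(Dev, q + 1)` here would be conservative: it overstates the *p*-value and can mask a genuinely significant variance component.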