Mixed-Effects Models

Introduction to Mixed-Effects Models

In statistics, an effect is anything that influences the value of a response variable at a particular setting of the predictor variables. Effects are translated into model parameters. In linear models, effects become coefficients, representing the proportional contributions of model terms. In nonlinear models, effects often have specific physical interpretations, and appear in more general nonlinear combinations.

Fixed effects represent population parameters, assumed to be the same each time data is collected. Estimating fixed effects is the traditional domain of regression modeling. Random effects, by comparison, are sample-dependent random variables. In modeling, random effects act like additional error terms, and their distributions and covariances must be specified.

For example, consider a model of the elimination of a drug from the bloodstream. The model uses time t as a predictor and the concentration of the drug C as the response. The nonlinear model term C0e^(-rt) combines parameters C0 and r, representing, respectively, an initial concentration and an elimination rate. If data is collected across multiple individuals, it is reasonable to assume that the elimination rate is a random variable ri depending on individual i, varying around a population mean r̄. The term C0e^(-rt) becomes

C0e^(-(r̄ + (ri - r̄))t) = C0e^(-(β + bi)t)

where β = r̄ is a fixed effect and bi = ri - r̄ is a random effect.

Random effects are useful when data falls into natural groups. In the drug elimination model, the groups are simply the individuals under study. More sophisticated models might group data by an individual's age, weight, diet, etc. Although the groups are not the focus of the study, adding random effects to a model extends the reliability of inferences beyond the specific sample of individuals.

Mixed-effects models account for both fixed and random effects. As with all regression models, their purpose is to describe a response variable as a function of the predictor variables. Mixed-effects models, however, recognize correlations within sample subgroups. In this way, they provide a compromise between ignoring data groups entirely and fitting each group with a separate model.

Mixed-Effects Model Hierarchy

Suppose data for a nonlinear regression model falls into one of m distinct groups i = 1, ..., m. To account for the groups in a model, write response j in group i as:

yij = f(φ, xij) + εij

yij is the response, xij is a vector of predictors, φ is a vector of model parameters, and εij is the measurement or process error. The index j ranges from 1 to ni, where ni is the number of observations in group i. The function f specifies the form of the model. Often, xij is simply an observation time tij. The errors are usually assumed to be independent and identically normally distributed, with constant variance.

Estimates of the parameters in φ describe the population, assuming those estimates are the same for all groups. If, however, the estimates vary by group, the model becomes

yij = f(φi, xij) + εij

In a mixed-effects model, φi may be a combination of a fixed and a random effect:

φi = β + bi

The random effects bi are usually described as multivariate normally distributed, with mean zero and covariance Ψ. Estimating the fixed effects β and the covariance of the random effects Ψ provides a description of the population that does not assume the parameters φi are the same across groups. Estimating the random effects bi also gives a description of specific groups within the data.

Model parameters do not have to be identified with individual effects. In general, design matrices A and B are used to identify parameters with linear combinations of fixed and random effects:

φi = Aβ + Bbi

If the design matrices differ among groups, the model becomes

φi = Aiβ + Bibi

If the design matrices also differ among observations, the model becomes

φij = Aijβ + Bijbi
yij = f(φij, xij) + εij

Some of the group-specific predictors in xij may not change with observation j. Calling those vi, the model becomes

yij = f(φij, xij, vi) + εij

Specifying Mixed-Effects Models

Suppose data for a nonlinear regression model falls into one of m distinct groups i = 1, ..., m. (Specifically, suppose that the groups are not nested.) To specify a general nonlinear mixed-effects model for this data:

  1. Define group-specific model parameters φi as linear combinations of fixed effects β and random effects bi.

  2. Define response values yi as a nonlinear function f of the parameters and group-specific predictor variables Xi.

The model is:

φi = Aiβ + Bibi
yi = f(φi, Xi) + εi
bi ∼ N(0, Ψ)
εi ∼ N(0, σ2)

This formulation of the nonlinear mixed-effects model uses the following notation:

φi — A vector of group-specific model parameters
β — A vector of fixed effects, modeling population parameters
bi — A vector of multivariate normally distributed group-specific random effects
Ai — A group-specific design matrix for combining fixed effects
Bi — A group-specific design matrix for combining random effects
Xi — A data matrix of group-specific predictor values
yi — A data vector of group-specific response values
f — A general, real-valued function of φi and Xi
εi — A vector of group-specific errors, assumed to be independent, identically normally distributed, and independent of bi
Ψ — A covariance matrix for the random effects
σ2 — The error variance, assumed to be constant across observations

For example, consider a model of the elimination of a drug from the bloodstream. The model incorporates two overlapping phases:

  • An initial phase p during which drug concentrations reach equilibrium with surrounding tissues

  • A second phase q during which the drug is eliminated from the bloodstream

For data on multiple individuals i, the model is

yij = Cpi e^(-rpi tij) + Cqi e^(-rqi tij) + εij

where yij is the observed concentration in individual i at time tij. The model allows for different sampling times and different numbers of observations for different individuals.

The elimination rates rpi and rqi must be positive to be physically meaningful. Enforce this by introducing the log rates Rpi = log(rpi) and Rqi = log(rqi) and reparametrizing the model:

yij = Cpi e^(-e^(Rpi) tij) + Cqi e^(-e^(Rqi) tij) + εij
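
In MATLAB®, this reparametrized model can be written as an anonymous function. The same four-parameter form appears in the fitting example later in this section, with phi(1) and phi(3) playing the roles of Cpi and Cqi, and phi(2) and phi(4) the log rates Rpi and Rqi:

model = @(phi,t)(phi(1)*exp(-exp(phi(2))*t) + ...
                 phi(3)*exp(-exp(phi(4))*t));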
Choosing which parameters to model with random effects is an important consideration when building a mixed-effects model. One technique is to add random effects to all parameters, and use estimates of their variances to determine their significance in the model. An alternative is to fit the model separately to each group, without random effects, and look at the variation of the parameter estimates. If an estimate varies widely across groups, or if confidence intervals for each group have minimal overlap, the parameter is a good candidate for a random effect.
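
A hedged sketch of the second technique, assuming placeholder data vectors x and y, a grouping variable group with values 1, ..., m, a model function fun, and initial estimates phi0:

% Fit the model separately to each group and compare the estimates
m = max(group);                 % number of groups
PHI = zeros(numel(phi0),m);     % one column of estimates per group
for i = 1:m
    idx = (group == i);
    PHI(:,i) = nlinfit(x(idx),y(idx),fun,phi0);
end
PHI  % wide variation across columns flags random-effect candidates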

To introduce fixed effects β and random effects bi for all model parameters, reexpress the model as follows:

yij = (β1 + b1i)e^(-e^(β2 + b2i) tij) + (β3 + b3i)e^(-e^(β4 + b4i) tij) + εij

with Cpi = β1 + b1i, Rpi = β2 + b2i, Cqi = β3 + b3i, and Rqi = β4 + b4i.

In the notation of the general model:

yi = [yi1, ..., yini]^T,   Xi = [ti1, ..., tini]^T,
β = [β1, β2, β3, β4]^T,   bi = [b1i, b2i, b3i, b4i]^T,

where ni is the number of observations of individual i. In this case, the design matrices Ai and Bi are, at least initially, 4-by-4 identity matrices, so that φi = β + bi. Design matrices may be altered, as necessary, to introduce weighting of individual effects, or time dependency.

Fitting the model and estimating the covariance matrix Ψ often leads to further refinements. A relatively small estimate for the variance of a random effect suggests that it can be removed from the model. Likewise, relatively small estimates for covariances among certain random effects suggest that a full covariance matrix is unnecessary. Since random effects are unobserved, Ψ must be estimated indirectly. Specifying a diagonal or block-diagonal covariance pattern for Ψ can improve convergence and efficiency of the fitting algorithm.

Statistics Toolbox™ functions nlmefit and nlmefitsa fit the general nonlinear mixed-effects model to data, estimating the fixed and random effects. The functions also estimate the covariance matrix Ψ for the random effects. Additional diagnostic outputs allow you to assess tradeoffs between the number of model parameters and the goodness of fit.
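
A hedged sketch of a basic call, with placeholder data variables X, y, and group, a model function fun, and initial estimates beta0; the 'CovPattern' value requests the diagonal covariance pattern discussed above:

% Fit the mixed-effects model; outputs are the fixed effects (beta),
% the random-effects covariance (PSI), diagnostics such as the log
% likelihood (stats), and predicted random effects per group (b)
[beta,PSI,stats,b] = nlmefit(X,y,group,[],fun,beta0, ...
                             'CovPattern',eye(numel(beta0)));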

Specifying Covariate Models

If the model in Specifying Mixed-Effects Models assumes a group-dependent covariate such as weight (w), the model becomes:

     [ 1 0 0 wi ]      [ 1 0 0 ]
φi = [ 0 1 0 0  ] β  + [ 0 1 0 ] bi
     [ 0 0 1 0  ]      [ 0 0 1 ]

Thus, the parameter φi for any individual in the ith group is:

φ1i = β1 + β4 wi + b1i
φ2i = β2 + b2i
φ3i = β3 + b3i

To specify a covariate model, use the 'FEGroupDesign' option.

'FEGroupDesign' is a p-by-q-by-m array specifying a different p-by-q fixed-effects design matrix for each of the m groups. Using the previous example, each page A(:,:,i) of the array is the 3-by-4 design matrix shown above, with the ith group's covariate value wi in position (1,4). To build and use the array:

  1. Create the array.

    % Number of parameters in the model (Phi)
    num_params = 3;
    % Number of covariates
    num_cov = 1;
    % Assuming number of groups in the data set is 7
    num_groups = 7;
    % Array of covariate values
    covariates = [75; 52; 66; 55; 70; 58; 62];
    A = repmat(eye(num_params, num_params+num_cov), ...
        [1,1,num_groups]);
    A(1,num_params+1,1:num_groups) = covariates(:,1)
  2. Create a struct with the specified design matrix.

    options.FEGroupDesign = A; 
    
  3. Specify the arguments for nlmefit (or nlmefitsa) as shown in Mixed-Effects Models Using nlmefit and nlmefitsa.
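
For example, a call using this design might look like the following. This is a hedged sketch: the data variables X, y, and group, the model function fun, and the initial estimates beta0 are placeholders, and 'FEGroupDesign' can also be passed directly as a name-value pair:

% Fit using the group-specific fixed-effects design A from step 1
[beta,PSI,stats] = nlmefit(X,y,group,[],fun,beta0, ...
                           'FEGroupDesign',A);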

Choosing nlmefit or nlmefitsa

Statistics Toolbox provides two functions, nlmefit and nlmefitsa, for fitting nonlinear mixed-effects models. The two functions differ in their capabilities, which may help you decide which one to use.

Approximation Methods

nlmefit provides the following four approximation methods for fitting nonlinear mixed-effects models:

  • 'LME' — Use the likelihood for the linear mixed-effects model at the current conditional estimates of beta and B. This is the default.

  • 'RELME' — Use the restricted likelihood for the linear mixed-effects model at the current conditional estimates of beta and B.

  • 'FO' — First-order Laplacian approximation without random effects.

  • 'FOCE' — First-order Laplacian approximation at the conditional estimates of B.

nlmefitsa provides an additional approximation method, Stochastic Approximation Expectation-Maximization (SAEM) [24], which has three steps:

  1. Simulation: Generate simulated values of the random effects b from the posterior density p(b|Σ) given the current parameter estimates.

  2. Stochastic approximation: Update the expected value of the log likelihood function by taking its value from the previous step, and moving part way toward the average value of the log likelihood calculated from the simulated random effects.

  3. Maximization step: Choose new parameter estimates to maximize the log likelihood function given the simulated values of the random effects.

Both nlmefit and nlmefitsa attempt to find parameter estimates to maximize a likelihood function, which is difficult to compute. nlmefit deals with the problem by approximating the likelihood function in various ways, and maximizing the approximate function. It uses traditional optimization techniques that depend on things like convergence criteria and iteration limits.

nlmefitsa, on the other hand, simulates random values of the parameters in such a way that in the long run they converge to the values that maximize the exact likelihood function. The results are random, and traditional convergence tests don't apply. Therefore nlmefitsa provides options to plot the results as the simulation progresses, and to restart the simulation multiple times. You can use these features to judge whether the results have converged to the accuracy you desire.
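
For example, a hedged sketch of this workflow, with placeholder data variables X, y, and group, a model function fun, and initial estimates beta0; 'Replicates' reruns the simulation from different random starts:

% Rerun the stochastic simulation several times and judge convergence
% from the plots produced by the default output function
[beta,PSI,stats] = nlmefitsa(X,y,group,[],fun,beta0, ...
                             'Replicates',3);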

Parameters Specific to nlmefitsa

The following parameters are specific to nlmefitsa. Most control the stochastic algorithm.

  • Cov0 — Initial value for the covariance matrix PSI. Must be an r-by-r positive definite matrix. If empty, the default value depends on the values of BETA0.

  • ComputeStdErrors — true to compute standard errors for the coefficient estimates and store them in the output STATS structure, or false (default) to omit this computation.

  • LogLikMethod — Specifies the method for approximating the log likelihood.

  • NBurnIn — Number of initial burn-in iterations during which the parameter estimates are not recomputed. Default is 5.

  • NIterations — Controls how many iterations are performed for each of three phases of the algorithm.

  • NMCMCIterations — Number of Markov Chain Monte Carlo (MCMC) iterations.
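
These parameters are specified as name-value pairs in the call to nlmefitsa. A hedged sketch, with placeholder data and model variables as before:

% Adjust the stochastic algorithm's iteration schedule
[beta,PSI,stats] = nlmefitsa(X,y,group,[],fun,beta0, ...
    'NBurnIn',10, ...                % extra burn-in iterations
    'NIterations',[150 150 100], ... % iterations for the three phases
    'ComputeStdErrors',true);        % return standard errors in STATS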

Model and Data Requirements

There are some differences in the capabilities of nlmefit and nlmefitsa. Therefore, some data and models are usable with either function, but others may require you to choose just one of them.

  • Error models — nlmefitsa supports a variety of error models. For example, the standard deviation of the response can be constant, proportional to the function value, or a combination of the two. nlmefit fits models under the assumption that the standard deviation of the response is constant. One of the error models, 'exponential', specifies that the log of the response has a constant standard deviation. You can fit such models using nlmefit by providing the log response as input, and by rewriting the model function to produce the log of the nonlinear function value.

  • Random effects — Both functions fit data to a nonlinear function with parameters, and the parameters may be simple scalar values or linear functions of covariates. nlmefit allows any coefficients of the linear functions to have both fixed and random effects. nlmefitsa supports random effects only for the constant (intercept) coefficient of the linear functions, but not for slope coefficients. So in the example in Specifying Covariate Models, nlmefitsa can treat only the first three beta values as random effects.

  • Model form — nlmefit supports a very general model specification, with few restrictions on the design matrices that relate the fixed coefficients and the random effects to the model parameters. nlmefitsa is more restrictive:

    • The fixed effect design must be constant in every group (for every individual), so an observation-dependent design is not supported.

    • The random effect design must be constant for the entire data set, so neither an observation-dependent design nor a group-dependent design is supported.

    • As mentioned under Random Effects, the random effect design must not specify random effects for slope coefficients. This implies that the design must consist of zeros and ones.

    • The random effect design must not use the same random effect for multiple coefficients, and cannot use more than one random effect for any single coefficient.

    • The fixed effect design must not use the same coefficient for multiple parameters. This implies that it can have at most one nonzero value in each column.

    If you want to use nlmefitsa for data in which the covariate effects are random, include the covariates directly in the nonlinear model expression. Don't include the covariates in the fixed or random effect design matrices.

  • Convergence — As described above, nlmefit and nlmefitsa have different approaches to measuring convergence. nlmefit uses traditional optimization measures, and nlmefitsa provides diagnostics to help you judge the convergence of a random simulation.

In practice, nlmefitsa tends to be more robust, and less likely to fail on difficult problems. However, nlmefit may converge faster on problems where it converges at all. Some problems may benefit from a combined strategy, for example by running nlmefitsa for a while to get reasonable parameter estimates, and using those as a starting point for additional iterations using nlmefit.
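
A hedged sketch of this combined strategy, with placeholder data and model variables as before:

% Rough estimates from the more robust stochastic algorithm ...
beta1 = nlmefitsa(X,y,group,[],fun,beta0);
% ... refined by the deterministic optimizer
[beta2,PSI,stats] = nlmefit(X,y,group,[],fun,beta1);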

Using Output Functions with Mixed-Effects Models

The OutputFcn field of the options structure specifies one or more functions that the solver calls after each iteration. Typically, you might use an output function to plot points at each iteration or to display optimization quantities from the algorithm. To set up an output function:

  1. Write the output function as a MATLAB® file function or local function.

  2. Use statset to set the value of OutputFcn to be a function handle, that is, the name of the function preceded by the @ sign. For example, if the output function is outfun.m, the command

     options = statset('OutputFcn', @outfun);

    specifies OutputFcn to be the handle to outfun. To specify multiple output functions, use the syntax:

     options = statset('OutputFcn',{@outfun, @outfun2});
  3. Call the optimization function with options as an input argument.

For an example of an output function, see Sample Output Function.

Structure of the Output Function

The function definition line of the output function has the following form:

stop = outfun(beta,status,state)

where

  • beta is the current vector of fixed-effects estimates.

  • status is a structure containing data from the current iteration. The section Fields in status describes the structure in detail.

  • state is the current state of the algorithm. States of the Algorithm lists the possible values.

  • stop is a flag that is true or false depending on whether the optimization routine should quit or continue. See Stop Flag for more information.

The solver passes the values of the input arguments to outfun at each iteration.
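
For example, a minimal output function might simply report progress and let the fit continue. This sketch relies only on the status fields documented in the next section:

function stop = outfun(beta,status,state)
% Display progress after each iteration; never request a stop
fprintf('%s: iteration %d, log likelihood %g\n', ...
    state, status.iteration, status.fval);
stop = false;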

Fields in status

The following are the fields of the status structure:

procedure — The procedure in progress:

  • 'ALT' — alternating algorithm for the optimization of the linear mixed-effects or restricted linear mixed-effects approximations

  • 'LAP' — optimization of the Laplacian approximation for first order or first order conditional estimation

iteration — An integer starting from 0.

inner — A structure describing the status of the inner iterations within the ALT and LAP procedures, with the fields:
  • procedure — When procedure is 'ALT':

    • 'PNLS' (penalized nonlinear least squares)

    • 'LME' (linear mixed-effects estimation)

    • 'none'

    When procedure is 'LAP',

    • 'PNLS' (penalized nonlinear least squares)

    • 'PLM' (profiled likelihood maximization)

    • 'none'

  • state — one of the following:

    • 'init'

    • 'iter'

    • 'done'

    • 'none'

  • iteration — an integer starting from 0, or NaN. For nlmefitsa with burn-in iterations, the output function is called after each of those iterations with a negative value for STATUS.iteration.

fval — The current log likelihood
Psi — The current random-effects covariance matrix
theta — The current parameterization of Psi
mse — The current error variance

States of the Algorithm

The following are the possible values of state:

'init' — The algorithm is in the initial state before the first iteration.

'iter' — The algorithm is at the end of an iteration.

'done' — The algorithm is in the final state after the last iteration.

The following code illustrates how the output function might use the value of state to decide which tasks to perform at the current iteration:

switch state
    case 'iter'
        % Make updates to plots or GUIs as needed
    case 'init'
        % Set up plots or GUIs
    case 'done'
        % Clean up plots or GUIs, or create a final plot
    otherwise
end

Stop Flag

The output argument stop is a flag that is true or false. The flag tells the solver whether it should quit or continue. The following examples show typical ways to use the stop flag.

Stopping an Optimization Based on Intermediate Results.  The output function can stop the estimation at any iteration based on the values of arguments passed into it. For example, the following code sets stop to true based on the value of the log likelihood stored in the 'fval' field of the status structure:

function stop = outfun(beta,status,state)
stop = false;
% Stop when the log likelihood exceeds -132
if status.fval > -132
    stop = true;
end

Stopping an Iteration Based on GUI Input.  If you design a GUI to perform nlmefit iterations, you can make the output function stop when a user clicks a Stop button on the GUI. For example, the following code implements a dialog to cancel calculations:

function retval = stop_outfcn(beta,str,status)
persistent h stop;
if isequal(str.inner.state,'none')
    switch(status)
        case 'init'
            % Initialize dialog
            stop = false;
            h = msgbox('Press STOP to cancel calculations.',...
                'NLMEFIT: Iteration 0 ');
            button = findobj(h,'type','uicontrol');
            set(button,'String','STOP','Callback',@stopper)
            pos = get(h,'Position');
            pos(3) = 1.1 * pos(3);
            set(h,'Position',pos)
            drawnow
        case 'iter'
            % Display iteration number in the dialog title
            set(h,'Name',sprintf('NLMEFIT: Iteration %d',...
                str.iteration))
            drawnow;
        case 'done'
            % Delete dialog
            delete(h);
    end
end
if stop
    % Stop if the dialog button has been pressed
    delete(h)
end
retval = stop;
 
    function stopper(varargin)
        % Set flag to stop when button is pressed
        stop = true;
        disp('Calculation stopped.')
    end
end

Sample Output Function

nlmefitoutputfcn is the sample Statistics Toolbox output function for nlmefit and nlmefitsa. It initializes or updates a plot with the fixed effects (BETA) and the variance of the random effects (diag(STATUS.Psi)). For nlmefit, the plot also includes the log likelihood (STATUS.fval).

nlmefitoutputfcn is the default output function for nlmefitsa. To use it with nlmefit, specify a function handle for it in the options structure:

opt = statset('OutputFcn', @nlmefitoutputfcn, …)
beta = nlmefit(…, 'Options', opt, …)

To prevent nlmefitsa from using this function, specify an empty value for the output function:

opt = statset('OutputFcn', [], …)
beta = nlmefitsa(…, 'Options', opt, …)

nlmefitoutputfcn stops nlmefit or nlmefitsa if you close the figure that it produces.

Mixed-Effects Models Using nlmefit and nlmefitsa

Load the sample data.

load indomethacin

The data in indomethacin.mat records concentrations of the drug indomethacin in the bloodstream of six subjects over eight hours.

Create a scatter plot of the indomethacin concentrations, grouped by subject.

gscatter(time,concentration,subject)
xlabel('Time (hours)')
ylabel('Concentration (mcg/ml)')
title('{\bf Indomethacin Elimination}')
hold on

The section Specifying Mixed-Effects Models discusses a useful model for this type of data.

Construct the model via an anonymous function.

model = @(phi,t)(phi(1)*exp(-exp(phi(2))*t) + ...
                 phi(3)*exp(-exp(phi(4))*t));

Use the nlinfit function to fit the model to all of the data, ignoring subject-specific effects.

phi0 = [1 2 1 1];
[phi,res] = nlinfit(time,concentration,model,phi0);

Compute the mean squared error.

numObs = length(time);
numParams = 4;
df = numObs-numParams;
mse = (res'*res)/df
mse =

    0.0304

Superimpose the model on the scatter plot of the data.

tplot = 0:0.01:8;
plot(tplot,model(phi,tplot),'k','LineWidth',2)
hold off

Draw the box plot of residuals by subject.

colors = 'rygcbm';
h = boxplot(res,subject,'colors',colors,'symbol','o');
set(h(~isnan(h)),'LineWidth',2)
hold on
boxplot(res,subject,'colors','k','symbol','ko')
grid on
xlabel('Subject')
ylabel('Residual')
hold off

The box plot of residuals by subject shows that the boxes are mostly above or below zero, indicating that the model has failed to account for subject-specific effects.

To account for subject-specific effects, fit the model separately to the data for each subject.

phi0 = [1 2 1 1];
PHI = zeros(4,6);
RES = zeros(11,6);
for I = 1:6
    tI = time(subject == I);
    cI = concentration(subject == I);
    [PHI(:,I),RES(:,I)] = nlinfit(tI,cI,model,phi0);
end
PHI
PHI =

    2.0293    2.8277    5.4683    2.1981    3.5661    3.0023
    0.5794    0.8013    1.7498    0.2423    1.0408    1.0882
    0.1915    0.4989    1.6757    0.2545    0.2915    0.9685
   -1.7878   -1.6354   -0.4122   -1.6026   -1.5069   -0.8731

Compute the mean squared error.

numParams = 24;
df = numObs-numParams;
mse = (RES(:)'*RES(:))/df
mse =

    0.0057

Create a scatter plot of the data and superimpose the model for each subject.

gscatter(time,concentration,subject)
xlabel('Time (hours)')
ylabel('Concentration (mcg/ml)')
title('{\bf Indomethacin Elimination}')
hold on
for I = 1:6
    plot(tplot,model(PHI(:,I),tplot),'Color',colors(I))
end
axis([0 8 0 3.5])
hold off

PHI gives estimates of the four model parameters for each of the six subjects. The estimates vary considerably, but taken as a 24-parameter model of the data, the mean-squared error of 0.0057 is a significant reduction from 0.0304 in the original four-parameter model.

Draw the box plot of residuals by subject.

h = boxplot(RES,'colors',colors,'symbol','o');
set(h(~isnan(h)),'LineWidth',2)
hold on
boxplot(RES,'colors','k','symbol','ko')
grid on
xlabel('Subject')
ylabel('Residual')
hold off

Now the box plot shows that the larger model accounts for most of the subject-specific effects. The spread of the residuals (the vertical scale of the box plot) is much smaller than in the previous box plot, and the boxes are now mostly centered on zero.

While the 24-parameter model successfully accounts for variations due to the specific subjects in the study, it does not consider the subjects as representatives of a larger population. The sampling distribution from which the subjects are drawn is likely more interesting than the sample itself. The purpose of mixed-effects models is to account for subject-specific variations more broadly, as random effects varying around population means.

Use the nlmefit function to fit a mixed-effects model to the data. You can also use nlmefitsa in place of nlmefit.

The following anonymous function, nlme_model, adapts the four-parameter model used by nlinfit to the calling syntax of nlmefit by allowing separate parameters for each individual. By default, nlmefit assigns random effects to all the model parameters. Also by default, nlmefit assumes a diagonal covariance matrix (no covariance among the random effects) to avoid overparametrization and related convergence issues.

nlme_model = @(PHI,t)(PHI(:,1).*exp(-exp(PHI(:,2)).*t) + ...
                      PHI(:,3).*exp(-exp(PHI(:,4)).*t));
phi0 = [1 2 1 1];
[phi,PSI,stats] = nlmefit(time,concentration,subject, ...
                          [],nlme_model,phi0)
phi =

    2.8277
    0.7729
    0.4606
   -1.3459


PSI =

    0.3264         0         0         0
         0    0.0250         0         0
         0         0    0.0124         0
         0         0         0    0.0000


stats = 

           dfe: 57
          logl: 54.5882
           mse: 0.0066
          rmse: 0.0787
    errorparam: 0.0815
           aic: -91.1765
           bic: -93.0506
          covb: [4x4 double]
        sebeta: [0.2558 0.1066 0.1092 0.2244]
          ires: [66x1 double]
          pres: [66x1 double]
         iwres: [66x1 double]
         pwres: [66x1 double]
         cwres: [66x1 double]

The mean-squared error of 0.0066 is comparable to the 0.0057 of the 24-parameter model without random effects, and significantly better than the 0.0304 of the four-parameter model without random effects.

The estimated covariance matrix PSI shows that the variance of the fourth random effect is essentially zero, suggesting that you can remove it to simplify the model. To do this, use the 'REParamsSelect' name-value pair to specify the indices of the parameters to be modeled with random effects in nlmefit.

[phi,PSI,stats] = nlmefit(time,concentration,subject, ...
                          [],nlme_model,phi0, ...
                          'REParamsSelect',[1 2 3])
phi =

    2.8277
    0.7728
    0.4605
   -1.3460


PSI =

    0.3270         0         0
         0    0.0250         0
         0         0    0.0124


stats = 

           dfe: 58
          logl: 54.5875
           mse: 0.0066
          rmse: 0.0780
    errorparam: 0.0815
           aic: -93.1750
           bic: -94.8410
          covb: [4x4 double]
        sebeta: [0.2560 0.1066 0.1092 0.2244]
          ires: [66x1 double]
          pres: [66x1 double]
         iwres: [66x1 double]
         pwres: [66x1 double]
         cwres: [66x1 double]

The log-likelihood logl is almost identical to what it was with random effects for all of the parameters, the Akaike information criterion aic is reduced from -91.1765 to -93.1750, and the Bayesian information criterion bic is reduced from -93.0506 to -94.8410. These measures support the decision to drop the fourth random effect.

Refitting the simplified model with a full covariance matrix allows for identification of correlations among the random effects. To do this, use the CovPattern parameter to specify the pattern of nonzero elements in the covariance matrix.

[phi,PSI,stats] = nlmefit(time,concentration,subject, ...
                          [],nlme_model,phi0, ...
                          'REParamsSelect',[1 2 3], ...
                          'CovPattern',ones(3))
phi =

    2.8164
    0.8254
    0.5555
   -1.1505


PSI =

    0.4745    0.1144    0.0493
    0.1144    0.0323    0.0029
    0.0493    0.0029    0.0224


stats = 

           dfe: 55
          logl: 58.4306
           mse: 0.0061
          rmse: 0.0783
    errorparam: 0.0781
           aic: -94.8611
           bic: -97.1518
          covb: [4x4 double]
        sebeta: [0.3020 0.1103 0.1167 0.1680]
          ires: [66x1 double]
          pres: [66x1 double]
         iwres: [66x1 double]
         pwres: [66x1 double]
         cwres: [66x1 double]

The estimated covariance matrix PSI shows that the random effects on the first two parameters have a relatively strong correlation, and both have a relatively weak correlation with the last random effect. This structure in the covariance matrix is more apparent if you convert PSI to a correlation matrix using corrcov.

RHO = corrcov(PSI)
clf;
imagesc(RHO)
set(gca,'XTick',[1 2 3],'YTick',[1 2 3])
title('{\bf Random Effect Correlation}')
h = colorbar;
set(get(h,'YLabel'),'String','Correlation');
RHO =

    1.0000    0.9243    0.4777
    0.9243    1.0000    0.1063
    0.4777    0.1063    1.0000

Incorporate this structure into the model by changing the specification of the covariance pattern to block-diagonal.

P = [1 1 0;1 1 0;0 0 1] % Covariance pattern
[phi,PSI,stats,b] = nlmefit(time,concentration,subject, ...
                            [],nlme_model,phi0, ...
                            'REParamsSelect',[1 2 3], ...
                            'CovPattern',P)
P =

     1     1     0
     1     1     0
     0     0     1


phi =

    2.7830
    0.8981
    0.6581
   -1.0000


PSI =

    0.5180    0.1069         0
    0.1069    0.0221         0
         0         0    0.0454


stats = 

           dfe: 57
          logl: 58.0804
           mse: 0.0061
          rmse: 0.0768
    errorparam: 0.0782
           aic: -98.1608
           bic: -100.0350
          covb: [4x4 double]
        sebeta: [0.3171 0.1073 0.1384 0.1453]
          ires: [66x1 double]
          pres: [66x1 double]
         iwres: [66x1 double]
         pwres: [66x1 double]
         cwres: [66x1 double]


b =

   -0.8507   -0.1563    1.0427   -0.7559    0.5652    0.1550
   -0.1756   -0.0323    0.2152   -0.1560    0.1167    0.0320
   -0.2756    0.0519    0.2620    0.1064   -0.2835    0.1389

The block-diagonal covariance structure reduces aic from -94.8611 to -98.1608 and bic from -97.1518 to -100.0350 without significantly affecting the log-likelihood. These measures support the covariance structure used in the final model. The output b gives predictions of the three random effects for each of the six subjects. These are combined with the estimates of the fixed effects in phi to produce the mixed-effects model.

Plot the mixed-effects model for each of the six subjects. For comparison, the model without random effects is also shown.

PHI = repmat(phi,1,6) + ...                 % Fixed effects
      [b(1,:);b(2,:);b(3,:);zeros(1,6)];    % Random effects
RES = zeros(11,6); % Residuals
colors = 'rygcbm';
for I = 1:6
    fitted_model = @(t)(PHI(1,I)*exp(-exp(PHI(2,I))*t) + ...
                        PHI(3,I)*exp(-exp(PHI(4,I))*t));
    tI = time(subject == I);
    cI = concentration(subject == I);
    RES(:,I) = cI - fitted_model(tI);

    subplot(2,3,I)
    scatter(tI,cI,20,colors(I),'filled')
    hold on
    plot(tplot,fitted_model(tplot),'Color',colors(I))
    plot(tplot,model(phi,tplot),'k')
    axis([0 8 0 3.5])
    xlabel('Time (hours)')
    ylabel('Concentration (mcg/ml)')
    legend(num2str(I),'Subject','Fixed')
end

If obvious outliers in the data (visible in previous box plots) are ignored, a normal probability plot of the residuals shows reasonable agreement with model assumptions on the errors.

clf; normplot(RES(:))

Examining Residuals for Model Verification

You can examine the stats structure, which is returned by both nlmefit and nlmefitsa, to determine the quality of your model. The stats structure contains fields with conditional weighted residuals (cwres field) and individual weighted residuals (iwres field). Since the model assumes that residuals are normally distributed, you can examine the residuals to see how well this assumption holds.
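
For example, assuming stats is the structure returned by one of the fitting calls in the previous section, a quick check is a normal probability plot of the conditional weighted residuals:

% Points near a straight line indicate approximately normal residuals
normplot(stats.cwres)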

This example generates synthetic data using normal distributions. It shows how the fit statistics look:

  • Good when testing against the same type of model as generates the data

  • Poor when tested against incorrect data models

  1. Initialize a 2-D model with 100 individuals:

    nGroups = 100; % 100 Individuals
    nlmefun = @(PHI,t)(PHI(:,1)*5 + PHI(:,2)^2.*t); % Regression fcn
    REParamSelect = [1  2]; % Both Parameters have random effect
    errorParam = .03; 
    beta0 = [ 1.5  5]; % Parameter means
    psi = [ 0.35  0; ...  % Covariance Matrix
           0   0.51 ];
    time =[0.25;0.5;0.75;1;1.25;2;3;4;5;6];
    nParameters = 2;
    rng(0,'twister') % for reproducibility
  2. Generate the data for fitting with a proportional error model:

    b_i = mvnrnd(zeros(1, numel(REParamSelect)), psi, nGroups);
    individualParameters = zeros(nGroups,nParameters);
    individualParameters(:, REParamSelect) = ...
         bsxfun(@plus,beta0(REParamSelect), b_i);
    
    groups = repmat(1:nGroups,numel(time),1);
    groups = vertcat(groups(:));
    
    y = zeros(numel(time)*nGroups,1);
    x = zeros(numel(time)*nGroups,1);
    for i = 1:nGroups
        idx = groups == i;
        f = nlmefun(individualParameters(i,:), time);
        % Make a proportional error model for y:
        y(idx) = f + errorParam*f.*randn(numel(f),1);
        x(idx) = time;
    end
    
    P = [ 1 0 ; 0 1 ];
  3. Fit the data using the same regression function and error model as the model generator:

    [~,~,stats] = nlmefit(x,y,groups, ...
        [],nlmefun,[1 1],'REParamsSelect',REParamSelect,...
        'ErrorModel','Proportional','CovPattern',P);
  4. Create a plotting routine by copying the following function definition, and creating a file plotResiduals.m on your MATLAB path:

    function plotResiduals(stats)
    pwres = stats.pwres;
    iwres = stats.iwres;
    cwres = stats.cwres;
    figure
    subplot(2,3,1);
    normplot(pwres); title('PWRES')
    subplot(2,3,4);
    createhistplot(pwres);
    
    subplot(2,3,2);
    normplot(cwres); title('CWRES')
    subplot(2,3,5);
    createhistplot(cwres);
    
    subplot(2,3,3);
    normplot(iwres); title('IWRES')
    subplot(2,3,6);
    createhistplot(iwres); title('IWRES')
    
    function createhistplot(pwres)
    [x, n] = hist(pwres);
    d = n(2)- n(1);
    x = x/sum(x*d);
    bar(n,x);
    ylim([0 max(x)*1.05]);
    hold on;
    x2 = -4:0.1:4;
    f2 = normpdf(x2,0,1);
    plot(x2,f2,'r');
    end
    
    end
  5. Plot the residuals using the plotResiduals function:

    plotResiduals(stats);

    The upper probability plots look straight, meaning the residuals are normally distributed. The bottom histogram plots match the superimposed normal density plot. So you can conclude that the error model matches the data.

  6. For comparison, fit the data using a constant error model, instead of the proportional model that created the data:

    [~,~,stats] = nlmefit(x,y,groups, ...
        [],nlmefun,[0 0],'REParamsSelect',REParamSelect,...
        'ErrorModel','Constant','CovPattern',P);
    plotResiduals(stats);

    The upper probability plots are not straight, indicating the residuals are not normally distributed. The bottom histogram plots are fairly close to the superimposed normal density plots.

  7. For another comparison, fit the data to a different structural model from the one that created the data:

    nlmefun2 = @(PHI,t)(PHI(:,1)*5 + PHI(:,2).*t.^4);
    [~,~,stats] = nlmefit(x,y,groups, ...
        [],nlmefun2,[0 0],'REParamsSelect',REParamSelect,...
        'ErrorModel','constant', 'CovPattern',P);
    plotResiduals(stats);

    Not only are the upper probability plots not straight, but the histogram plot is quite skewed compared to the superimposed normal density. These residuals are not normally distributed, and do not match the model.
