Information Criteria

Model comparison tests—such as the likelihood ratio, Lagrange multiplier, or Wald test—are only appropriate for comparing nested models. In contrast, information criteria are model selection tools that you can use to compare any models fit to the same data. That is, the models being compared do not need to be nested.

Information criteria are likelihood-based measures of model fit that include a penalty for complexity (specifically, the number of parameters). Different information criteria are distinguished by the form of this penalty and can prefer different models.

Let logL(θ̂) denote the value of the maximized loglikelihood objective function for a model with k parameters fit to N data points. Two commonly used information criteria are:

  • Akaike information criterion (AIC). The AIC compares models from the perspective of information entropy, as measured by Kullback-Leibler divergence. The AIC for a given model is

        AIC = −2 logL(θ̂) + 2k.
    When comparing AIC values for multiple models, smaller values of the criterion are better.

  • Bayesian information criterion (BIC). The BIC, also known as the Schwarz information criterion, compares models from the perspective of decision theory, as measured by expected loss. The BIC for a given model is

        BIC = −2 logL(θ̂) + k log(N).
    When comparing BIC values for multiple models, smaller values of the criterion are better.
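The two criteria can be sketched in a few lines of Python. The loglikelihood values and parameter counts below are hypothetical, chosen only to illustrate that the two criteria can rank the same pair of models differently:

```python
import math

def aic(logL, k):
    """Akaike information criterion: -2*logL + 2*k (smaller is better)."""
    return -2.0 * logL + 2.0 * k

def bic(logL, k, N):
    """Bayesian (Schwarz) information criterion: -2*logL + k*log(N) (smaller is better)."""
    return -2.0 * logL + k * math.log(N)

# Two hypothetical models fit to the same N = 100 data points:
# (maximized loglikelihood, number of parameters)
models = {"model A": (-146.5, 3), "model B": (-144.2, 5)}
N = 100

for name, (logL, k) in models.items():
    print(f"{name}: AIC = {aic(logL, k):.1f}, BIC = {bic(logL, k, N):.1f}")
```

With these numbers, the larger model B attains the lower AIC, while the BIC's stronger penalty (k·log(100) vs. 2k) favors the smaller model A, showing how different penalties can prefer different models.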


Some references scale information criteria values by the number of observations (N). Econometrics Toolbox™ does not apply this scaling. As a result, the values of the measures the toolbox returns can differ from those reported by other sources by a factor of N.
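As a minimal sketch of that factor-of-N difference (the loglikelihood value is hypothetical):

```python
logL, k, N = -146.5, 3, 100

# Unscaled form, as returned by the toolbox.
aic_unscaled = -2.0 * logL + 2.0 * k

# Per-observation form reported by some references: same criterion
# divided by the sample size N.
aic_scaled = aic_unscaled / N

print(aic_unscaled, aic_scaled)
```

The two forms rank models identically for a fixed N; only the absolute magnitudes differ.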
