Maximum Likelihood Estimation for Conditional Variance Models

Innovation Distribution

For conditional variance models, the innovation process is ε_t = σ_t z_t, where z_t follows a standardized Gaussian or standardized Student's t distribution with ν > 2 degrees of freedom. Specify your distribution choice in the model property Distribution.

The innovation variance, σ_t², can follow a GARCH, EGARCH, or GJR conditional variance process.
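To make the structure ε_t = σ_t z_t concrete, here is a minimal Python sketch of a GARCH(1,1) variance recursion driving Gaussian innovations. The function name `simulate_garch11` and the parameter values (ω = 0.1, α = 0.2, β = 0.7) are illustrative assumptions, not part of the MathWorks implementation:

```python
import numpy as np

def simulate_garch11(omega, alpha, beta, n, seed=0):
    """Simulate innovations e_t = sigma_t * z_t with a GARCH(1,1) variance.

    Illustrative sketch only; assumes alpha + beta < 1 so the
    unconditional variance omega / (1 - alpha - beta) exists.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)                 # standardized Gaussian z_t
    sigma2 = np.empty(n)
    e = np.empty(n)
    sigma2[0] = omega / (1 - alpha - beta)     # start at the unconditional variance
    e[0] = np.sqrt(sigma2[0]) * z[0]
    for t in range(1, n):
        # GARCH(1,1): sigma_t^2 = omega + alpha * e_{t-1}^2 + beta * sigma_{t-1}^2
        sigma2[t] = omega + alpha * e[t - 1] ** 2 + beta * sigma2[t - 1]
        e[t] = np.sqrt(sigma2[t]) * z[t]
    return e, sigma2

e, sigma2 = simulate_garch11(0.1, 0.2, 0.7, 1000)
```

The same skeleton applies to EGARCH and GJR processes; only the variance recursion changes.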

If the model includes a mean offset term, then

$$\varepsilon_t = y_t - \mu.$$
The estimate function for garch, egarch, and gjr models estimates parameters by maximum likelihood. estimate returns fitted values for any parameters in the input model that are set to NaN, honors any equality constraints in the input model, and does not return estimates for parameters fixed by those constraints.
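The NaN-means-free, finite-means-fixed convention can be mimicked in a generic optimizer. The following is a hypothetical Python sketch (the helper `fit_with_fixed` and its starting values are assumptions, not the actual estimate internals): it optimizes only the NaN entries of a parameter template and leaves finite entries untouched.

```python
import numpy as np
from scipy.optimize import minimize

def fit_with_fixed(neg_loglike, template):
    """Minimize neg_loglike over the NaN entries of template.

    Hypothetical sketch of the NaN convention: NaN entries are free
    parameters; finite entries are held fixed (equality constraints).
    """
    template = np.asarray(template, dtype=float)
    free = np.isnan(template)                  # mask of parameters to estimate

    def objective(free_vals):
        full = template.copy()
        full[free] = free_vals                 # splice free values into the template
        return neg_loglike(full)

    x0 = np.full(free.sum(), 0.1)              # crude starting values (assumption)
    res = minimize(objective, x0, method="Nelder-Mead")
    full = template.copy()
    full[free] = res.x
    return full
```

For example, `fit_with_fixed(f, [np.nan, 2.0])` estimates the first parameter while holding the second fixed at 2.0.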

Loglikelihood Functions

Given the history of a process, innovations are conditionally independent. Let H_t denote the history of a process available at time t, t = 1,...,N. The likelihood function for the innovation series is given by

$$f(\varepsilon_1,\ldots,\varepsilon_N \mid H_{N-1}) = \prod_{t=1}^{N} f(\varepsilon_t \mid H_{t-1}),$$
where f is a standardized Gaussian or t density function.

The exact form of the loglikelihood objective function depends on the parametric form of the innovation distribution.

  • If z_t has a standard Gaussian distribution, then the loglikelihood function is

$$\mathrm{LLF} = -\frac{N}{2}\log(2\pi) - \frac{1}{2}\sum_{t=1}^{N}\log\sigma_t^2 - \frac{1}{2}\sum_{t=1}^{N}\frac{\varepsilon_t^2}{\sigma_t^2}.$$
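The Gaussian loglikelihood can be evaluated directly from the innovations and conditional variances. A minimal Python sketch (illustrative, not the estimate internals):

```python
import numpy as np

def gaussian_llf(e, sigma2):
    """Gaussian loglikelihood of innovations e_t with variances sigma_t^2."""
    e = np.asarray(e, dtype=float)
    sigma2 = np.asarray(sigma2, dtype=float)
    n = e.size
    # -(N/2) log(2*pi) - (1/2) sum log sigma_t^2 - (1/2) sum e_t^2 / sigma_t^2
    return (-0.5 * n * np.log(2 * np.pi)
            - 0.5 * np.sum(np.log(sigma2))
            - 0.5 * np.sum(e ** 2 / sigma2))
```

In practice the σ_t² sequence comes from the conditional variance recursion, so the loglikelihood is a function of the model parameters through that recursion.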
  • If z_t has a standardized Student's t distribution with ν > 2 degrees of freedom, then the loglikelihood function is

$$\mathrm{LLF} = N\left[\log\Gamma\!\left(\frac{\nu+1}{2}\right) - \log\Gamma\!\left(\frac{\nu}{2}\right) - \frac{1}{2}\log\bigl(\pi(\nu-2)\bigr)\right] - \frac{1}{2}\sum_{t=1}^{N}\log\sigma_t^2 - \frac{\nu+1}{2}\sum_{t=1}^{N}\log\left[1 + \frac{\varepsilon_t^2}{\sigma_t^2(\nu-2)}\right].$$
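The Student's t loglikelihood differs from the Gaussian case only in the constant and the tail term. A Python sketch of the standardized-t formula, again illustrative rather than the actual estimate code:

```python
import numpy as np
from scipy.special import gammaln

def student_t_llf(e, sigma2, nu):
    """Loglikelihood under a standardized Student's t with nu > 2 dof.

    The t is scaled to unit variance, so sigma_t^2 remains the
    conditional variance of e_t.
    """
    e = np.asarray(e, dtype=float)
    sigma2 = np.asarray(sigma2, dtype=float)
    n = e.size
    # N * [ log G((nu+1)/2) - log G(nu/2) - (1/2) log(pi*(nu-2)) ]
    const = n * (gammaln((nu + 1) / 2) - gammaln(nu / 2)
                 - 0.5 * np.log(np.pi * (nu - 2)))
    return (const
            - 0.5 * np.sum(np.log(sigma2))
            - (nu + 1) / 2 * np.sum(np.log1p(e ** 2 / (sigma2 * (nu - 2)))))
```

As ν grows, this converges to the Gaussian loglikelihood, which is one way to sanity-check an implementation.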
estimate performs covariance matrix estimation for maximum likelihood estimates using the outer product of gradients (OPG) method.
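The OPG estimator builds the covariance from per-observation score vectors: it sums their outer products and inverts the result. A minimal Python sketch of that computation (the scores themselves would come from differentiating the loglikelihood terms at the ML estimates; how estimate obtains them internally is not shown here):

```python
import numpy as np

def opg_covariance(scores):
    """OPG estimate of the parameter covariance matrix.

    scores: (N, k) array whose t-th row is the gradient of the t-th
    loglikelihood term with respect to the k parameters, evaluated
    at the maximum likelihood estimates.
    """
    scores = np.asarray(scores, dtype=float)
    opg = scores.T @ scores            # sum over t of the outer products g_t g_t'
    return np.linalg.inv(opg)          # inverse OPG approximates the covariance
```

The OPG matrix approximates the Fisher information using first derivatives only, which avoids computing the Hessian.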
