Akaike's Final Prediction Error (FPE) criterion provides a measure of model quality by simulating the situation where the model is tested on a different data set. After computing several different models, you can compare them using this criterion. According to Akaike's theory, the most accurate model has the smallest FPE.
Note: If you use the same data set for both model estimation and validation, the fit always improves as you increase the model order and, therefore, the flexibility of the model structure.
Akaike's Final Prediction Error (FPE) is defined by the following equation:
$$FPE=V\left(\frac{1+d/N}{1-d/N}\right)$$
where V is the loss function, d is the number of estimated parameters, and N is the number of values in the estimation data set.
The toolbox assumes that the final prediction error is asymptotic for d<<N and uses the following approximation to compute FPE:
$$FPE=V\left(1+\frac{2d}{N}\right)$$
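The two expressions can be compared directly. The following sketch (ordinary Python, not toolbox code) evaluates the exact FPE and its large-sample approximation; the values of V, d, and N are illustrative examples, not outputs of an actual estimation:

```python
# Compare the exact FPE with its d << N approximation.
# V (loss function value), d (number of estimated parameters), and
# N (number of estimation data values) are example numbers only.
V, d, N = 1.5, 4, 1000

fpe_exact = V * (1 + d / N) / (1 - d / N)
fpe_approx = V * (1 + 2 * d / N)

print(fpe_exact, fpe_approx)  # the two agree closely when d << N
```

For d = 4 parameters and N = 1000 samples the two values differ only in the fifth significant digit, which is why the toolbox can safely use the approximation.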
The loss function V is defined by the following equation:
$$V=\mathrm{det}\left(\frac{1}{N}\sum_{t=1}^{N}\epsilon\left(t,{\widehat{\theta}}_{N}\right)\epsilon{\left(t,{\widehat{\theta}}_{N}\right)}^{T}\right)$$
where $${\widehat{\theta}}_{N}$$ represents the estimated parameters.
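As a numeric illustration of this definition (not toolbox code), the loss function can be computed from a matrix of prediction errors; the residuals below are synthetic example data, not the output of an estimation:

```python
# Sketch of V = det((1/N) * sum_t e(t) e(t)^T) for a model whose
# prediction error e(t, theta_hat) is a 2-dimensional vector.
# The residuals are synthetic white noise, not toolbox output.
import numpy as np

rng = np.random.default_rng(0)
N = 500
eps = rng.standard_normal((N, 2))   # one residual vector per time step t

cov = (eps.T @ eps) / N             # (1/N) * sum of outer products
V = float(np.linalg.det(cov))
print(V)                            # near det(I) = 1 for unit-variance white residuals
```

For a single-output model the determinant reduces to the mean squared prediction error.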
You can compute Akaike's Final Prediction Error (FPE) criterion for linear and nonlinear models.
Note: FPE for nonlinear ARX models that include a tree partition nonlinearity is not supported.
To compute FPE, use the fpe command, as follows:
FPE = fpe(m1,m2,m3,...,mN)
To access the FPE value of an estimated model, m, type m.Report.Fit.FPE.
Akaike's Information Criterion (AIC) provides a measure of model quality by simulating the situation where the model is tested on a different data set. After computing several different models, you can compare them using this criterion. According to Akaike's theory, the most accurate model has the smallest AIC.
Note: If you use the same data set for both model estimation and validation, the fit always improves as you increase the model order and, therefore, the flexibility of the model structure.
Akaike's Information Criterion (AIC) is defined by the following equation:
$$AIC=\mathrm{log}V+\frac{2d}{N}$$
where V is the loss function, d is the number of estimated parameters, and N is the number of values in the estimation data set.
The loss function V is defined by the following equation:
$$V=\mathrm{det}\left(\frac{1}{N}\sum_{t=1}^{N}\epsilon\left(t,{\widehat{\theta}}_{N}\right)\epsilon{\left(t,{\widehat{\theta}}_{N}\right)}^{T}\right)$$
where $${\widehat{\theta}}_{N}$$ represents the estimated parameters.
For d<<N:
$$AIC=\mathrm{log}\left(V\left(1+\frac{2d}{N}\right)\right)$$
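Under the same d<<N assumption, taking the logarithm of the FPE approximation gives a value close to AIC, since log(1 + 2d/N) ≈ 2d/N. A quick numeric check (illustrative example values, not toolbox output):

```python
import math

# Example values for the loss function, parameter count, and sample
# count; these are illustrative numbers, not estimation results.
V, d, N = 1.5, 4, 1000

aic = math.log(V) + 2 * d / N
log_fpe = math.log(V * (1 + d / N) / (1 - d / N))

print(aic, log_fpe)  # nearly identical when d << N
```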
Note: AIC is approximately equal to log(FPE).
Use the aic command to compute Akaike's Information Criterion (AIC) for one or more linear or nonlinear models, as follows:
AIC = aic(m1,m2,m3,...,mN)