Akaike's Final Prediction Error (FPE) criterion provides a measure
of model quality by simulating the situation where the model is tested
on a different data set. After computing several different models,
you can compare them using this criterion. According to Akaike's theory,
the most accurate model has the smallest FPE.

If you use the same data set for both model estimation and validation,
the fit always improves as you increase the model order and, therefore,
the flexibility of the model structure.

Akaike's Final Prediction Error (FPE) is defined by the following
equation:

$$\mathrm{FPE}=\det\left(\frac{1}{N}\sum_{t=1}^{N}e\left(t,\hat{\theta}_N\right)e\left(t,\hat{\theta}_N\right)^{T}\right)\left(\frac{1+d/N}{1-d/N}\right)$$

where:

*N* is the number of values in
the estimation data set.

*e*(*t*) is a *ny*-by-1
vector of prediction errors, where *ny* is the number of model outputs.

$${\hat{\theta}}_{N}$$ represents the
estimated parameters.

*d* is the number of estimated
parameters.
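The formula above can be sketched numerically. The following Python function is an illustrative reimplementation of the FPE expression from a matrix of residuals, not the toolbox's `fpe` command; the argument names `errors` and `d` are chosen here for clarity.

```python
import numpy as np

def fpe(errors, d):
    """Final Prediction Error from an N-by-ny matrix of residuals
    and d estimated parameters (illustrative sketch, not MATLAB's fpe)."""
    e = np.atleast_2d(np.asarray(errors, dtype=float))
    N = e.shape[0]
    if d >= N:
        # Mirrors the documented behavior when the parameter count
        # is not smaller than the sample count.
        return float("nan")
    # det( (1/N) * sum_t e(t) e(t)^T ), computed as e^T e / N
    cov = (e.T @ e) / N
    return np.linalg.det(cov) * (1 + d / N) / (1 - d / N)
```

For a single-output model with constant residuals of 1 over N = 4 samples and d = 1 parameter, the covariance determinant is 1 and the correction factor is (1 + 1/4)/(1 - 1/4) = 5/3.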

If the number of parameters exceeds the number of samples, FPE is
not computed when model estimation is performed (`model.Report.FPE`
is empty), and the `fpe` command returns `NaN`.