During validation, you might find that your model output fits the validation data poorly. You might also find some unexpected or undesirable model characteristics.
If the tips suggested in these sections do not help improve your models, then a good model might not be possible for this data. For example, your data might have poor signal-to-noise ratio, large and nonstationary disturbances, or varying system properties.
When the Model Output plot does not show a good fit, there is a good chance that you need to try a different model order. System identification is largely a trial-and-error process when selecting model structure and model order. Ideally, you want the lowest-order model that adequately captures the system dynamics.
You can estimate the model order as described in Preliminary Step – Estimating Model Orders and Input Delays. Typically, you use the suggested order as a starting point to estimate the lowest possible order with different model structures. After each estimation, you monitor the Model Output and the Residual Analysis plots, and then adjust your settings for the next estimation.
When a low-order model fits the validation data poorly, try estimating a higher-order model to see if the fit improves. For example, if a Model Output plot shows that a fourth-order model gives poor results, try estimating an eighth-order model. When the higher-order model improves the fit, you can conclude that higher orders are required, and that linear models might be sufficient.
You should use an independent data set to validate your models. If you use the same data set to both estimate and validate a model, the fit always improves as you increase model order, and you risk overfitting. However, if you use an independent data set to validate your models, the fit eventually deteriorates if your model orders are too high.
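The order sweep described above can be sketched as follows. This is a minimal example, assuming `ze` and `zv` are `iddata` objects you have already built from separate estimation and validation records:

```matlab
% Sweep ARX model orders, estimating on ze and validating on zv.
% ze and zv are assumed iddata objects (estimation/validation sets).
orders = [2 4 8];
for n = orders
    sys = arx(ze, [n n 1]);        % na = nb = n, input delay nk = 1
    [~, fit] = compare(zv, sys);   % percent fit on validation data
    fprintf('Order %d: %.1f%% fit\n', n, fit);
end
```

Because the fit is computed on `zv` rather than `ze`, it eventually deteriorates as the order grows, which flags the point where you start overfitting.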
High-order models are more expensive to compute and result in greater parameter uncertainty.
In the case of nonlinear ARX and Hammerstein-Wiener models, the Model Output plot does not show a good fit when the nonlinearity estimator has incorrect complexity.
You specify the complexity of piecewise-linear, wavelet, sigmoid, and custom networks using the number of units (the NumberOfUnits nonlinear estimator property). A high number of units indicates a complex nonlinearity estimator. For neural networks, you specify the complexity using the parameters of the network object. For more information, see the Neural Network Toolbox™ documentation.
To select the appropriate complexity of the nonlinearity estimator, start with a low complexity and validate the model output. Next, increase the complexity and validate the model output again. The model fit degrades when the nonlinearity estimator becomes too complex.
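That sweep can be written in a few lines. This is a sketch, assuming `ze` and `zv` are estimation and validation `iddata` sets and a release that provides the `sigmoidnet` estimator (newer releases use `idSigmoidNetwork` instead):

```matlab
% Increase the sigmoid-network complexity of a nonlinear ARX model
% and validate each fit. ze/zv are assumed iddata sets.
for units = [2 5 10 20]
    sys = nlarx(ze, [2 2 1], sigmoidnet('NumberOfUnits', units));
    [~, fit] = compare(zv, sys);   % percent fit on validation data
    fprintf('%2d units: %.1f%% fit\n', units, fit);
end
```

Stop increasing the number of units once the validation fit stops improving or starts to degrade.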
Two indications suggest that your system contains substantial noise and that you might need linear model structures that are better equipped to model noise.
One indication of noise is when a state-space model is better than an ARX model at reproducing the measured output. The state-space structure has sufficient flexibility to model noise, whereas the ARX structure is less able to model noise because the A polynomial must account for both the system dynamics and the noise. The following equation represents the ARX model and shows that A couples the dynamics and the noise by appearing in the denominator of both the dynamics term and the noise term:
$$y=\frac{B}{A}u+\frac{1}{A}e$$
Another indication that a noise model is needed appears in residual analysis plots when you see significant autocorrelation of residuals at nonzero lags. For more information about residual analysis, see Residual Analysis.
To model noise more carefully, use the ARMAX or the Box-Jenkins model structure, where the dynamics term and the noise term are modeled by different polynomials.
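A quick way to test whether a richer noise model helps is to estimate all three structures and compare them. This is a sketch, assuming `ze` and `zv` are estimation and validation `iddata` sets and the orders shown are placeholders:

```matlab
% Compare an ARX model against ARMAX and Box-Jenkins models, which
% model the dynamics and the noise with separate polynomials.
sysARX = arx(ze, [4 4 1]);           % A(q)y = B(q)u + e
sysAMX = armax(ze, [4 4 2 1]);       % adds a C(q) noise polynomial
sysBJ  = bj(ze, [4 2 2 4 1]);        % fully independent noise model
compare(zv, sysARX, sysAMX, sysBJ)   % overlay fits on validation data
resid(zv, sysAMX)                    % check residual autocorrelation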
You can test whether a linear model is unstable by examining the pole-zero plot of the model, which is described in Pole and Zero Plots. The stability threshold for pole values differs for discrete-time and continuous-time models, as follows:
For stable continuous-time models, the real part of the pole is less than 0.
For stable discrete-time models, the magnitude of the pole is less than 1.
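You can also apply these two pole conditions programmatically. This is a minimal sketch, assuming `sys` is an estimated linear model:

```matlab
% Check the poles of an estimated model against the stability
% boundary. sys is an assumed estimated linear model.
p = pole(sys);
if isct(sys)
    stable = all(real(p) < 0);   % continuous time: left half-plane
else
    stable = all(abs(p) < 1);    % discrete time: inside unit circle
end
% isstable(sys) performs the same test directly.
```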
Note: Linear trends might cause linear models to be unstable. However, detrending the data does not guarantee stability.
When an unstable model is OK: In some cases, an unstable model is still a useful model. For example, your system might be unstable without a controller, and you plan to use your model for control design. In this case, you can import your unstable model into Simulink® or Control System Toolbox™ products.
Forcing stability during estimation: If you believe that your system is stable but your model is unstable, you can estimate the model with the Focus estimation option set to Stability. This setting might reduce model quality. For more information about Focus, see the estimation option configuration commands, such as tfestOptions, ssestOptions, and procestOptions.
Allowing for some instability: A more advanced approach to achieving a stable model is to set the stability threshold estimation options to allow a margin of error. The threshold estimation options are advanced properties of an estimation option set.
For continuous-time models, set the value of opt.Advanced.StabilityThreshold.s. The model is considered stable if its rightmost pole lies to the left of the s stability threshold.
For discrete-time models, set the value of opt.Advanced.StabilityThreshold.z. The model is considered stable if all poles lie inside the circle centered at the origin with a radius equal to the z stability threshold.
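Setting these thresholds might look like the following sketch, assuming an option set that exposes Advanced.StabilityThreshold (such as ssestOptions), an estimation data set `ze`, and threshold values chosen only for illustration (defaults can vary by release):

```matlab
% Relax the stability thresholds in an estimation option set.
opt = ssestOptions;
opt.Advanced.StabilityThreshold.s = 0.01;   % continuous-time margin
opt.Advanced.StabilityThreshold.z = 1.05;   % discrete-time radius
sys = ssest(ze, 4, opt);                    % ze: assumed iddata set
```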
To test whether a nonlinear model is unstable, plot the simulated model output on top of the validation data. If the simulated output diverges from the measured output, the model is unstable. However, agreement between model output and measured output does not guarantee stability.
If the Model Output plot and the Residual Analysis plot show a poor fit and you have already tried different structures and orders and modeled noise, one or more inputs that significantly affect the output might be missing.
Try including other measured signals in your input data, and then estimating the models again.
Inputs need not be control signals. Any measurable signal can be considered an input, including measurable disturbances.
If the Model Output plot and the Residual Analysis plot show a poor fit, consider whether nonlinear effects are present in the system.
You can sometimes model the nonlinearities by performing a simple transformation on the signals to make the problem linear in the new variables. For example, if electrical power is the driving stimulus in a heating process and temperature is the output, you can form the product of voltage and current measurements to obtain a power input signal.
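The heating example above can be sketched as follows, assuming `v`, `i`, and `T` are measured voltage, current, and temperature vectors sampled with period `Ts`:

```matlab
% Form an electrical-power input from voltage and current so that a
% linear model can relate power to temperature.
p = v .* i;                  % instantaneous power, the new input
z = iddata(T, p, Ts);        % temperature output, power input
sys = tfest(z, 2);           % e.g., a second-order linear model
```

The product v·i is nonlinear in the raw measurements, but the mapping from power to temperature can now be estimated with an ordinary linear structure.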
If your problem is sufficiently complex and you do not have physical insight into the problem, you might try fitting nonlinear black-box models.