After you identify a model, you can simulate or predict the model response and compare that response with measured input/output data. This comparison helps you choose among candidate models and validate the model you select. You can plot the model response alongside measured output data, using initial conditions estimated from the data. You can also compute a metric that quantifies how well the model response matches the measured output data.
| Function | Description |
| --- | --- |
| `compare` | Compare identified model output and measured output |
| `compareOptions` | Option set for `compare` |
| `goodnessOfFit` | Goodness of fit between test and reference data |
| `predict` | Predict state and state estimation error covariance at next time step using extended or unscented Kalman filter, or particle filter |
| `predictOptions` | Option set for `predict` |
| `sim` | Simulate response of identified model |
| `simOptions` | Option set for `sim` |
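The fit percentage that `compare` reports is the normalized root mean squared error (NRMSE) expressed as a percentage, and `goodnessOfFit` can compute the same measure. As a minimal sketch of that metric in pure Python (the data values here are illustrative only, not from any toolbox example):

```python
from math import sqrt

def nrmse_fit(y_measured, y_model):
    """Fit percentage in the NRMSE sense: 100 * (1 - NRMSE), where
    NRMSE = ||y - y_hat|| / ||y - mean(y)|| using the 2-norm."""
    n = len(y_measured)
    mean_y = sum(y_measured) / n
    num = sqrt(sum((y - yh) ** 2 for y, yh in zip(y_measured, y_model)))
    den = sqrt(sum((y - mean_y) ** 2 for y in y_measured))
    return 100.0 * (1.0 - num / den)

# A perfect match scores 100%; a model that only predicts the
# mean of the data scores 0%.
y = [1.0, 2.0, 3.0, 4.0]
print(nrmse_fit(y, y))          # 100.0
print(nrmse_fit(y, [2.5] * 4))  # 0.0
```

A negative fit is possible and simply means the model response matches the data worse than a constant equal to the data's mean.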
This example shows how to validate an estimated model by comparing the simulated model output with measured data that was not used for the original estimation.
To create one or more plots of your models, select the corresponding check box in the Model Views area of the System Identification app.
Understand the difference between simulated and predicted output and when to use each.
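The core of that distinction: simulation computes the response from the input (and initial conditions) alone, feeding the model's own past outputs back in, while k-step-ahead prediction also uses measured outputs up to k steps in the past. A minimal sketch for a hypothetical first-order model y[t] = a*y[t-1] + b*u[t-1] (coefficients chosen purely for illustration):

```python
def simulate(a, b, u, y0=0.0):
    """Pure simulation: each step reuses the model's own previous output."""
    y_sim = [y0]
    for t in range(1, len(u)):
        y_sim.append(a * y_sim[t - 1] + b * u[t - 1])
    return y_sim

def predict_one_step(a, b, u, y_meas):
    """1-step-ahead prediction: each step restarts from the measured output."""
    y_pred = [y_meas[0]]
    for t in range(1, len(u)):
        y_pred.append(a * y_meas[t - 1] + b * u[t - 1])
    return y_pred
```

With a perfect model the two agree; with model error, the 1-step prediction tracks the data more closely because every step restarts from a measurement. That is why a good predicted fit alone does not establish that a model is adequate for simulation.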
Estimate initial conditions for use in simulations executed from the command line, from Simulink®, and from the System Identification app.
Resolve differences between simulation results obtained at the command line or in the System Identification app and those obtained in Simulink.
Examine available plot types and corresponding supported models.
Compute model parameter uncertainty of linear models.
Set plot preferences that persist from session to session.