**Class:** ssm

Maximum likelihood parameter estimation of state-space models

`EstMdl = estimate(Mdl,Y,params0)`

`EstMdl = estimate(Mdl,Y,params0,Name,Value)`

```
[EstMdl,estParams,EstParamCov,logL,Output]
= estimate(___)
```

`EstMdl = estimate(Mdl,Y,params0,Name,Value)` estimates the state-space model with additional options specified by one or more `Name,Value` pair arguments. For example, you can specify to deflate the observations by a linear regression using predictor data, control how the results appear in the Command Window, and indicate which estimation method to use for the parameter covariance matrix.

`[EstMdl,estParams,EstParamCov,logL,Output] = estimate(___)` uses any of the input arguments in the previous syntaxes and additionally returns:

- `estParams`, a vector containing the estimated parameters
- `EstParamCov`, the estimated variance-covariance matrix of the estimated parameters
- `logL`, the optimized loglikelihood value
- `Output`, an optimization diagnostic information structure
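As a minimal sketch of this calling pattern (the model, its dimensions, and the initial values here are illustrative, not taken from this page), you might estimate an explicitly created univariate model as follows:

```
% Illustrative sketch: a hypothetical univariate state-space model
% x_t = A*x_{t-1} + B*u_t,  y_t = x_t + D*e_t.
A = NaN;            % unknown state-transition coefficient
B = NaN;            % unknown state-disturbance loading
C = 1;              % observation sensitivity (fixed)
D = NaN;            % unknown observation-innovation loading
Mdl = ssm(A,B,C,D); % NaN entries are treated as parameters to estimate

% Y is a vector of observed responses assumed to exist in the Workspace.
params0 = [0.5; 0.1; 0.1];   % initial values for the unknown parameters
[EstMdl,estParams,EstParamCov,logL,Output] = estimate(Mdl,Y,params0);
```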

If the model is time varying with respect to the observed responses, then the software does not support including predictors. If the observation vectors vary in length among periods, then the software cannot determine which coefficients to use to deflate the observed responses.

**Constrained likelihood objective function maximization**

You can specify any combination of linear inequality, linear equality, and upper and lower bound constraints on the parameters.

Good practice is to avoid equality and inequality constraints during optimization. For example, to constrain the parameter *w* to be positive, implicitly specify the state-space model using a parameter-to-matrix mapping function. Within the function, set *w* = exp(*s*). Then, use unconstrained optimization to estimate *s*. Consequently, *s* can assume any real value, but *w* must be positive.
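As an illustrative sketch of this transformation (the mapping function and the model it defines are hypothetical, not from this page), a parameter-to-matrix mapping function can exponentiate an unconstrained parameter so the corresponding model coefficient stays positive:

```
function [A,B,C,D,Mean0,Cov0,StateType] = posParamMap(params)
% Hypothetical mapping: estimate s = params(2) without constraints,
% but use w = exp(s) in the model so that w > 0 always holds.
A = params(1);
w = exp(params(2));   % w is positive for any real s
B = w;                % state-disturbance loading constrained positive
C = 1;
D = 1;
Mean0 = [];           % empty: let the software choose defaults
Cov0 = [];
StateType = [];
end
```

You would then create the model implicitly with `Mdl = ssm(@posParamMap)` and estimate it with unconstrained optimization.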

**Predictors and corresponding coefficients**

To include an overall mean in the observation model, include a column of `1`s in $Z_t$. To account for predictor effects when you simulate, you must deflate the observations manually. To deflate the observations, use

$${W}_{t}={Y}_{t}-{Z}_{t}\widehat{\beta}.$$
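For example, assuming you have an estimated coefficient vector `estBeta` and a predictor matrix `Z` in the Workspace (both names are hypothetical), the deflation is a single matrix operation:

```
% Deflate the observed responses by the estimated regression component.
% Each row implements W_t = Y_t - Z_t*betaHat.
W = Y - Z*estBeta;
```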

If the regression model is complex, then consider implicitly defining the state-space model. For example, define the parameter-to-matrix mapping function using the following syntax pattern.

```
function [A,B,C,D,Mean0,Cov0,StateType,DeflateY] = ParamMap(params,Y,Z)
    ...
    DeflateY = Y - exp(params(9) + params(10)*Z);
    ...
end
```

In this example, `Y` is the matrix of observations and `Z` is the matrix of predictors. The function returns `DeflateY`, which is the matrix of deflated observations. Specify `Y` and `Z` in the MATLAB Workspace first, and then pass `ParamMap` to `ssm` using the following syntax pattern.

```
Mdl = ssm(@(params)ParamMap(params,Y,Z))
```

Implicitly defining the state-space model this way is also useful if each response series requires a distinct set of predictors.

If the state equation requires predictors, then include the predictors as additional state variables. Since predictor data varies with time, a state-space model with predictors as states is time varying.

**Additional Tips**

- The software accommodates missing data. Indicate missing data using `NaN` values in the observed responses (`Y`).
- Good practice is to check the convergence status of the optimization routine by displaying `Output.ExitFlag`.
- If the optimization algorithm does not converge, then you can increase the number of iterations using the `'Options'` name-value pair argument.
- If the optimization algorithm does not converge, then consider using `refine`, which might help you obtain better initial parameter values for optimization.
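A sketch of these convergence checks (the option values and exit-flag test are illustrative):

```
% Check convergence; rerun with more iterations if the optimizer stopped
% without converging (nonpositive exit flags indicate nonconvergence).
if Output.ExitFlag <= 0
    options = optimoptions('fminunc','MaxIterations',2000, ...
        'MaxFunctionEvaluations',5000);
    [EstMdl,estParams] = estimate(Mdl,Y,params0,'Options',options);
end
```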

- The Kalman filter accommodates missing data by not updating filtered state estimates corresponding to missing observations. In other words, suppose there is a missing observation at period *t*. Then, the state forecast for period *t* based on the previous *t* – 1 observations and the filtered state for period *t* are equivalent.
- For explicitly created state-space models, `estimate` applies all predictors to each response series. However, each response series has its own set of regression coefficients.
- If you do not specify optimization constraints, then `estimate` uses `fminunc` for unconstrained numerical estimation. If you specify any pair of optimization constraints, then `estimate` uses `fmincon` for constrained numerical estimation. For either type of optimization, optimization options that you set using the `Options` name-value pair argument must be consistent with the options of the optimization algorithm. `estimate` passes the name-value pair arguments `Options`, `Aineq`, `bineq`, `Aeq`, `beq`, `lb`, and `ub` directly to the optimizer `fmincon` or `fminunc`.
- `estimate` fits regression coefficients along with all other state-space model parameters. You can apply constraints to the regression coefficients using constrained optimization options. For more details, see the `Name,Value` pair arguments and `fmincon`.
- If you set `'Univariate',true`, then during the filtering algorithm the software sequentially updates the filtered state estimates using the individual elements of the observation vector, rather than updating them using the entire vector at once. This practice might accelerate parameter estimation, especially for a low-dimensional, time-invariant model.
- Suppose that you want to create a state-space model using a parameter-to-matrix mapping function with this signature

  ```
  [A,B,C,D,Mean0,Cov0,StateType,DeflateY] = paramMap(params,Y,Z)
  ```

  and you specify the model using an anonymous function

  ```
  Mdl = ssm(@(params)paramMap(params,Y,Z))
  ```

  The observed responses `Y` and predictor data `Z` are not input arguments in the anonymous function. If `Y` and `Z` exist in the MATLAB Workspace before you create `Mdl`, then the software establishes a link to them. Otherwise, if you pass `Mdl` to `estimate`, the software throws an error.

  The link to the data established by the anonymous function overrides all other corresponding input argument values of `estimate`. This distinction is particularly important when conducting a rolling-window analysis. For details, see Rolling-Window Analysis of Time-Series Models.

[1] Durbin J., and S. J. Koopman. *Time Series
Analysis by State Space Methods*. 2nd ed. Oxford: Oxford
University Press, 2012.

`filter` | `fmincon` | `fminunc` | `forecast` | `optimoptions` | `refine` | `simulate` | `smooth` | `ssm`

- Estimate Time-Varying State-Space Model
- Estimate Random Parameter of State-Space Model
- Assess State-Space Model Stability Using Rolling Window Analysis
- Choose State-Space Model Specification Using Backtesting
- What Are State-Space Models?
- What Is the Kalman Filter?
- Rolling-Window Analysis of Time-Series Models