
The following table summarizes the adaptive filter properties and provides a brief description of each. Full descriptions of each property, in alphabetical order, are given in the subsequent section.

Property | Description
---|---
`Algorithm` | Reports the algorithm the object uses for adaptation. When you construct your adaptive filter object, this property is set automatically by the constructor.
`AvgFactor` | Averaging factor used to compute the exponentially windowed estimates of the powers in the transformed signal bins for the coefficient updates.
`BkwdPredErrorPower` | Returns the minimum mean-squared prediction error in the backward direction. Refer to [3] in the bibliography for details about linear prediction.
`BkwdPrediction` | Returns the predicted samples generated during adaptation. Refer to [3] in the bibliography for details about linear prediction.
`BlockLength` | Block length for the coefficient updates. This must be a positive integer such that (`l/blocklen`) is also an integer.
`Coefficients` | Vector containing the initial filter coefficients. It must be a length `l` vector, where `l` is the number of filter coefficients.
`ConversionFactor` | Conversion factor. Defaults to the matrix [1 -1] that specifies soft-constrained initialization. This is the `gamma` input argument for some of the fast transversal algorithms.
`Delay` | Update delay given in time samples. This scalar must be a positive integer; negative delays do not work.
`DesiredSignalStates` | Desired signal states of the adaptive filter.
`EpsilonStates` | Vector of the epsilon values of the adaptive filter.
`ErrorStates` | Vector of the adaptive filter error states.
`FFTCoefficients` | Stores the discrete Fourier transform of the filter coefficients in `coeffs`.
`FFTStates` | Stores the states of the FFT of the filter coefficients during adaptation.
`FilteredInputStates` | Vector of filtered input states with length equal to `l` - 1.
`FilterLength` | Contains the length of the filter. Note that this is not the filter order: filter length is 1 greater than filter order, so a filter of length 10 has order 9.
`ForgettingFactor` | Determines how the RLS adaptive filter uses past data in each iteration. Use the forgetting factor to specify whether old data carries the same weight in the algorithm as more recent data.
`FwdPredErrorPower` | Returns the minimum mean-squared prediction error in the forward direction. Refer to [3] in the bibliography for details about linear prediction.
`FwdPrediction` | Contains the predicted values for samples during adaptation. Compare these to the actual samples to get the error and power.
`InitFactor` | Soft-constrained initialization factor. This scalar should be positive and sufficiently large to prevent an excessive number of Kalman gain rescues. This is the `delta` input argument.
`InvCov` | Upper-triangular Cholesky (square root) factor of the input covariance matrix. Initialize this matrix with a positive definite upper triangular matrix. Dimensions are `l`-by-`l`, where `l` is the filter length.
`KalmanGain` | Empty when you construct the object; populated after you run the filter.
`KalmanGainStates` | Contains the states of the Kalman gain updates during adaptation.
`Leakage` | Contains the leakage setting for the adaptive filter algorithm. A leakage factor other than 1 forces the weights to keep adapting even after they find the minimum-error solution, which can improve the numerical performance of the LMS algorithm.
`OffsetCov` | Contains the offset covariance matrix.
`Offset` | Specifies an optional offset for the denominator of the step-size normalization term. The offset must be a scalar greater than or equal to zero. Nonzero offsets help avoid a divide-by-near-zero condition that causes errors.
`Power` | A vector of `2*l` elements, each initialized with the value `delta` from the input arguments. As you filter data, the filter process updates `Power`.
`ProjectionOrder` | Projection order of the affine projection algorithm.
`ReflectionCoeffs` | Coefficients determined for the reflection portion of the filter during adaptation.
`ReflectionCoeffsStep` | Size of the steps used to determine the reflection coefficients.
`PersistentMemory` | Specifies whether to reset the filter states and memory before each filtering operation. Lets you decide whether the filter retains states and coefficients from previous filtering runs.
`SecondaryPathCoeffs` | Vector containing the coefficient values of the secondary path from the output actuator to the error sensor.
`SecondaryPathEstimate` | An estimate of the secondary path filter model.
`SecondaryPathStates` | The states of the secondary path filter, the unknown system.
`SqrtCov` | Upper-triangular Cholesky (square root) factor of the input covariance matrix. Initialize this matrix with a positive definite upper triangular matrix.
`SqrtInvCov` | Square root of the inverse of the sliding-window input signal covariance matrix. This square matrix should be full rank.
`States` | Vector of the adaptive filter states.
`StepSize` | Reports the size of the step taken between iterations of the adaptive filter process. Each `adaptfilt` object has a default value that suits its algorithm.
`SWBlockLength` | Block length of the sliding window. This integer must be at least as large as the filter length.

Like `dfilt` objects, `adaptfilt` objects
have properties that govern their behavior and store some of the results
of filtering operations. The following sections list, in alphabetical
order, the name of every property associated with `adaptfilt` objects.
Note that not all `adaptfilt` objects have all of
these properties. To view the properties of a particular adaptive
filter, such as an `adaptfilt.bap` filter, use `get` with the object handle, like this:

```
ha = adaptfilt.bap(32,0.5,4,1.0);
get(ha)

    PersistentMemory: false
           Algorithm: 'Block Affine Projection FIR Adaptive Filter'
        FilterLength: 32
        Coefficients: [1x32 double]
              States: [35x1 double]
            StepSize: 0.5000
     ProjectionOrder: 4
           OffsetCov: [4x4 double]
```

`get` shows you the properties for `ha` and
the values of those properties. Entering the object handle by itself returns
the same properties and values, without the list formatting
and the more familiar property names.

Reports the algorithm the object uses for adaptation. When you construct your adaptive filter object, this property is set automatically. You cannot change the value; it is read-only.

Averaging factor used to compute the exponentially-windowed
estimates of the powers in the transformed signal bins for the coefficient
updates. `AvgFactor` should lie between zero and
one. For default filter objects, `AvgFactor` equals
(1 - `step`). `lambda` is the input
argument that represents `AvgFactor`.
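For the transform-domain algorithms, the exponential window means each new bin-power estimate blends the previous estimate with the newest squared magnitude. The recursion can be sketched in NumPy as follows (the function name and values are illustrative, not the toolbox internals):

```python
import numpy as np

def update_bin_powers(powers, transformed_input, avg_factor):
    """Exponentially windowed estimate of per-bin signal power.

    An avg_factor close to 1 gives a long memory (slow tracking);
    close to 0 it follows the instantaneous power almost exactly.
    """
    return avg_factor * powers + (1.0 - avg_factor) * np.abs(transformed_input) ** 2

# A constant-amplitude bin converges to its true power (here 4.0).
powers = np.zeros(8)
x_bins = np.full(8, 2.0)   # constant transformed input, |X|^2 = 4
for _ in range(200):
    powers = update_bin_powers(powers, x_bins, avg_factor=0.9)
print(powers[0])           # approaches 4.0
```

With `avg_factor = 0.9`, roughly the last 1/(1 - 0.9) = 10 blocks dominate each estimate.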

Returns the minimum mean-squared prediction error in the backward direction. Refer to [3] in the bibliography for details about linear prediction.

When you use an adaptive filter that does backward prediction,
such as `adaptfilt.ftf`, one property
of the filter contains the backward prediction coefficients for the
adapted filter. With these coefficients, the forward coefficients,
and the system under test, you have full knowledge of how
the adaptation occurred. Two values, stored as structure fields, compose the `BkwdPrediction` property:

`Coeffs`, which contains the coefficients of the system under test, as determined by the backward prediction process.

`Error`, which is the difference between the filter coefficients determined by backward prediction and the actual coefficients of the sample filter.

In this example, `adaptfilt.ftf` identifies the coefficients of an unknown FIR system:

```
x = randn(1,500);     % Input to the filter
b = fir1(31,0.5);     % FIR system to be identified
n = 0.1*randn(1,500); % Observation noise signal
d = filter(b,1,x)+n;  % Desired signal
N = 31;               % Adaptive filter order
lam = 0.99;           % RLS forgetting factor
del = 0.1;            % Soft-constrained initialization factor
ha = adaptfilt.ftf(32,lam,del);
[y,e] = filter(ha,x,d);
ha

ha =

           Algorithm: 'Fast Transversal Least-Squares Adaptive Filter'
        FilterLength: 32
        Coefficients: [1x32 double]
              States: [31x1 double]
    ForgettingFactor: 0.9900
          InitFactor: 0.1000
       FwdPrediction: [1x1 struct]
      BkwdPrediction: [1x1 struct]
          KalmanGain: [32x1 double]
    ConversionFactor: 0.7338
    KalmanGainStates: [32x1 double]
    PersistentMemory: false

ha.coefficients

ans =

  Columns 1 through 8
   -0.0055    0.0048    0.0045    0.0146   -0.0009    0.0002   -0.0019    0.0008
  Columns 9 through 16
   -0.0142   -0.0226    0.0234    0.0421   -0.0571   -0.0807    0.1434    0.4620
  Columns 17 through 24
    0.4564    0.1532   -0.0879   -0.0501    0.0331    0.0361   -0.0266   -0.0220
  Columns 25 through 32
    0.0231    0.0026   -0.0063   -0.0079    0.0032    0.0082    0.0033    0.0065

ha.bkwdprediction

ans =

    Coeffs: [1x32 double]
     Error: 82.3394

ha.bkwdprediction.coeffs

ans =

  Columns 1 through 8
    0.0067    0.0186    0.1114   -0.0150   -0.0239   -0.0610   -0.1120   -0.1026
  Columns 9 through 16
    0.0093   -0.0399   -0.0045    0.0622    0.0997    0.0778    0.0646   -0.0564
  Columns 17 through 24
    0.0775    0.0814    0.0057    0.0078    0.1271   -0.0576    0.0037   -0.0200
  Columns 25 through 32
   -0.0246    0.0180   -0.0033    0.1222    0.0302   -0.0197   -0.1162    0.0285
```
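For comparison outside MATLAB, the same system-identification experiment can be sketched with a plain LMS update in NumPy (not the fast transversal least-squares algorithm the example uses; the step size `mu` and the shorter filter length are assumed values for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8                                  # filter length (order 7), shortened for the sketch
b = rng.standard_normal(L)             # unknown FIR system to identify
x = rng.standard_normal(4000)          # input signal
d = np.convolve(x, b)[:len(x)]         # desired signal (noise-free here)

w = np.zeros(L)                        # adaptive coefficients
u = np.zeros(L)                        # filter state: most recent inputs, newest first
mu = 0.05                              # LMS step size (an assumed value)
for n in range(len(x)):
    u = np.concatenate(([x[n]], u[:-1]))
    e = d[n] - w @ u                   # error between desired and filter output
    w = w + mu * e * u                 # LMS coefficient update

print(np.max(np.abs(w - b)))           # small: w has converged to b
```

As in the MATLAB example, the adapted coefficients end up matching the unknown system; the fast transversal algorithm reaches the same least-squares solution with far fewer operations per sample.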

Block length for the coefficient updates. This must be a positive
integer such that (`l/blocklen`) is also an integer.
For faster execution, `blocklen` should be a power
of two. `blocklen` defaults to two.

Vector containing the initial filter coefficients. It must be
a length `l` vector where `l` is
the number of filter coefficients. `coeffs` defaults
to length `l` vector of zeros when you do not provide
the argument for input.

Conversion factor defaults to the matrix [1 -1] that specifies
soft-constrained initialization. This is the `gamma` input
argument for some of the fast transversal algorithms.

Update delay given in time samples. This scalar should be a
positive integer — negative delays do not work. `delay` defaults
to 1 for most algorithms.

Desired signal states of the adaptive filter. `dstates` defaults
to a zero vector with length equal to (`blocklen` -
1) or (`swblocklen` - 1) depending on the algorithm.

Vector of the epsilon values of the adaptive filter. `EpsilonStates` defaults
to a vector of zeros with (`projectord` - 1) elements.

Vector of the adaptive filter error states. `ErrorStates` defaults
to a zero vector with length equal to (`projectord` -
1).

Stores the discrete Fourier transform of the filter coefficients
in `coeffs`.

Stores the states of the FFT of the filter coefficients during adaptation.

Vector of filtered input states with length equal to `l` -
1.

Contains the length of the filter. Note that this is not the filter order. Filter length is 1 greater than filter order. Thus a filter with length equal to 10 has filter order equal to 9.

Determines how the RLS adaptive filter uses past data in each iteration. You use the forgetting factor to specify whether old data carries the same weight in the algorithm as more recent data.

This is a scalar and should lie in the range (0, 1]. It defaults
to 1. Setting the forgetting factor to 1 denotes infinite
memory while adapting to find the new filter. Note that `lambda` is the input
argument that represents `ForgettingFactor`.
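Concretely, exponential weighting means a sample that is k iterations old contributes to the least-squares cost with weight `lambda**k`, so the filter's effective memory is roughly 1/(1 - `lambda`) samples. A short sketch:

```python
import numpy as np

lam = 0.99                   # forgetting factor (the lambda input argument)
k = np.arange(5)
weights = lam ** k           # weight on data k iterations in the past
print(weights)               # decays geometrically from 1.0 toward 0

# Effective memory of the exponential window, in samples:
print(1.0 / (1.0 - lam))     # roughly 100
```

At `lam = 1` every past sample keeps full weight, which is the infinite-memory case described above.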

Returns the minimum mean-squared prediction error in the forward direction. Refer to [3] in the bibliography for details about linear prediction.

Contains the predicted values for samples during adaptation. Compare these to the actual samples to get the error and power.

Returns the soft-constrained initialization factor. This scalar
should be positive and sufficiently large to prevent an excessive
number of Kalman gain rescues. `delta` defaults to
one.

Upper-triangular Cholesky (square root) factor of the input
covariance matrix. Initialize this matrix with a positive definite
upper triangular matrix. Dimensions are `l`-by-`l`,
where `l` is the filter length.

Empty when you construct the object, this gets populated after you run the filter.

Contains the states of the Kalman gain updates during adaptation.

Contains the setting for leakage in the adaptive filter algorithm. Using a leakage factor that is not 1 forces the weights to adapt even when they have found the minimum error solution. Forcing the adaptation can improve the numerical performance of the LMS algorithm.
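A minimal NumPy sketch of a leaky LMS iteration (illustrative names, not the toolbox implementation) shows how a leakage factor below 1 shrinks the coefficients whenever the gradient term cannot sustain them:

```python
import numpy as np

def leaky_lms_step(w, u, d, mu, leakage):
    """One leaky LMS iteration: the leakage factor multiplies the old
    coefficients before the gradient term is added."""
    e = d - w @ u                      # a-priori error
    return leakage * w + mu * e * u, e

# With zero input the gradient term vanishes, so leakage alone
# shrinks the coefficients geometrically.
w = np.ones(4)
for _ in range(100):
    w, _ = leaky_lms_step(w, u=np.zeros(4), d=0.0, mu=0.1, leakage=0.95)
print(np.max(np.abs(w)))               # 0.95**100, about 0.006
```

Setting `leakage = 1` recovers the ordinary LMS update.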

Contains the offset covariance matrix.

Specifies an optional offset for the denominator of the step size normalization term. You must specify offset to be a scalar greater than or equal to zero. Nonzero offsets can help avoid a divide-by-near-zero condition that causes errors.

Use this offset to avoid dividing by zero, or by a very small
number, when the input signal amplitude or any of the FFT input
signal powers becomes very small. `offset` defaults to one.
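The following NumPy sketch of a normalized update (illustrative, not the toolbox implementation) shows the offset keeping the step size bounded when an input block is nearly silent:

```python
import numpy as np

def nlms_step(w, u, d, mu, offset):
    """Normalized LMS update with an offset guarding the denominator."""
    e = d - w @ u
    step = mu / (offset + u @ u)   # offset > 0 prevents divide-by-near-zero
    return w + step * e * u, e

w = np.zeros(4)
tiny = np.full(4, 1e-12)           # nearly silent input block
w, e = nlms_step(w, tiny, d=1.0, mu=0.5, offset=1.0)
print(np.max(np.abs(w)))           # stays tiny instead of blowing up
```

With `offset = 0`, the same call would divide by `u @ u` (about 4e-24 here) and the coefficients would jump by many orders of magnitude.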

A vector of 2*`l` elements, each initialized
with the value `delta` from the input arguments.
As you filter data, `Power` gets updated by the
filter process.

Projection order of the affine projection algorithm. `projectord` defines
the size of the input signal covariance matrix and defaults to two.

For adaptive filters that use reflection coefficients, this property stores them.

As the adaptive filter changes coefficient values during adaptation, the step size used between runs is stored here.

Determines whether the filter states and coefficients get restored to their starting values for each filtering operation. The starting values are the values in place when you create the filter.

When `PersistentMemory` is `false`, any property value that the filter
changes during processing is restored to its starting value before the
next filtering operation. Property values that the filter does not change
are not affected. `PersistentMemory` defaults to `false`.
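The value of retaining states between runs can be seen with a plain direct-form FIR filter: when the final state of one block is carried into the next call, block-by-block filtering matches filtering the whole signal at once. A NumPy sketch (illustrative, not the `adaptfilt` internals):

```python
import numpy as np

def fir_filter(b, x, state):
    """Direct-form FIR filtering; returns the output and the final state
    (the last len(b) - 1 input samples) for the next call."""
    xp = np.concatenate((state, x))
    y = np.convolve(xp, b)[len(state) : len(state) + len(x)]
    return y, xp[-(len(b) - 1):]

rng = np.random.default_rng(1)
b = rng.standard_normal(5)             # some FIR coefficients
x = rng.standard_normal(100)

# One run over the whole signal, starting from zero states.
y_full, _ = fir_filter(b, x, np.zeros(4))

# Two runs over halves, carrying the state between them.
y1, z = fir_filter(b, x[:50], np.zeros(4))
y2, _ = fir_filter(b, x[50:], z)
print(np.allclose(np.concatenate((y1, y2)), y_full))   # True
```

Resetting the state to zeros between the two calls instead would introduce a transient at the block boundary, which is what `PersistentMemory = false` does before each filtering operation.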

A vector that contains the coefficient values of your secondary path from the output actuator to the error sensor.

An estimate of the secondary path filter model.

The states of the secondary path filter, the unknown system.

Upper-triangular Cholesky (square root) factor of the input covariance matrix. Initialize this matrix with a positive definite upper triangular matrix.

Square root of the inverse of the sliding window input signal covariance matrix. This square matrix should be full-ranked.

Vector of the adaptive filter states. `states` defaults
to a vector of zeros whose length depends on the chosen algorithm.
Usually the length is a function of the filter length `l` and
another input argument to the filter object, such as `projectord`.

Reports the size of the step taken between iterations of the
adaptive filter process. Each `adaptfilt` object
has a default value that best meets the needs of the algorithm.

Block length of the sliding window. This integer must be at
least as large as the filter length. `swblocklength` defaults
to 16.

[1] Griffiths, L.J., *A Continuously
Adaptive Filter Implemented as a Lattice Structure*, Proc.
IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, Hartford,
CT, pp. 683-686, 1977.

[2] Hayes, M.H., *Statistical Digital
Signal Processing and Modeling*, John Wiley and Sons, 1996.

[3] Haykin, S., *Adaptive Filter
Theory*, Third Edition, Prentice-Hall, Inc., 1996.
