# Documentation

### LMS Methods for adaptfilt Objects

This section provides introductory examples using some of the least mean squares (LMS) adaptive filter functions in the toolbox.

The toolbox provides many adaptive filter design functions that use the LMS algorithms to search for the optimal solution to the adaptive filter, including

• adaptfilt.lms — Implement the LMS algorithm to solve the Wiener-Hopf equation and find the filter coefficients for an adaptive filter.

• adaptfilt.nlms — Implement the normalized variation of the LMS algorithm to solve the Wiener-Hopf equation and determine the filter coefficients of an adaptive filter.

• adaptfilt.sd — Implement the sign-data variation of the LMS algorithm to solve the Wiener-Hopf equation and determine the filter coefficients of an adaptive filter. The correction to the filter weights at each iteration depends on the sign of the input x(k).

• adaptfilt.se — Implement the sign-error variation of the LMS algorithm to solve the Wiener-Hopf equation and determine the filter coefficients of an adaptive filter. The correction applied to the current filter weights for each successive iteration depends on the sign of the error, e(k).

• adaptfilt.ss — Implement the sign-sign variation of the LMS algorithm to solve the Wiener-Hopf equation and determine the filter coefficients of an adaptive filter. The correction applied to the current filter weights for each successive iteration depends on both the sign of x(k) and the sign of e(k).

To demonstrate the differences and similarities among the various LMS algorithms supplied in the toolbox, the LMS and NLMS adaptive filter examples use the same filter for the unknown system. The unknown filter is the constrained lowpass filter from the firgr and fircband examples.

```
[b,err,res] = firgr(12,[0 0.4 0.5 1],[1 1 0 0],[1 0.2],{'w' 'c'});
```

From the figure you see that the filter is indeed lowpass and constrained to 0.2 ripple in the stopband. With this as the baseline, the adaptive LMS filter examples use the adaptive LMS algorithms and their initialization functions to identify this filter in a system identification role.

To review the general model for system ID mode, look at System Identification for the layout.

For the sign variations of the LMS algorithm, the examples use noise cancellation as the demonstration application, as opposed to the system identification application used in the LMS examples.

To use the adaptive filter functions in the toolbox, you need to provide the following:

• An unknown system or process to adapt to. In this example, the filter designed by firgr is the unknown system.

• Appropriate input data to exercise the adaptation process. In terms of the generic LMS model, these are the desired signal d(k) and the input signal x(k).

Start by defining an input signal x.

`x = 0.1*randn(1,250);`

The input is broadband noise. For the unknown system filter, use fircband to create a twelfth-order lowpass filter:

```
[b,err,res] = fircband(12,[0 0.4 0.5 1],[1 1 0 0],[1 0.2],{'w','c'});
```

Although you do not need them here, include the err and res output arguments.

Now filter the signal through the unknown system to get the desired signal.

`d = filter(b,1,x);`

With the unknown filter designed and the desired signal in place you construct and apply the adaptive LMS filter object to identify the unknown.

Preparing the adaptive filter object requires that you provide starting values for estimates of the filter coefficients and the LMS step size. You could start with some set of nonzero estimated coefficients; this example uses zeros for the 13 initial filter weights (a twelfth-order filter has 13 coefficients).

For the step size, 0.8 is a reasonable value — a good compromise between being large enough to converge well within the 250 iterations (250 input sample points) and small enough to create an accurate estimate of the unknown filter.

```
mu = 0.8;
ha = adaptfilt.lms(13,mu);
```

Finally, using the adaptfilt object ha, the desired signal d, and the filter input x, run the adaptive filter to identify the unknown system and plot the results, comparing the actual coefficients from fircband to the coefficients found by adaptfilt.lms.

```
[y,e] = filter(ha,x,d);
stem([b.' ha.coefficients.'])
title('System Identification by Adaptive LMS Algorithm')
legend('Actual Filter Weights','Estimated Filter Weights',...
       'Location','NorthEast')
```

In the stem plot the actual and estimated filter weights are the same. As an experiment, try changing the step size to 0.2. Repeating the example with mu = 0.2 results in the following stem plot. The estimated weights fail to approximate the actual weights closely.

This poor result may occur because the LMS algorithm did not iterate enough times, so try using 1000 samples. With 1000 samples, the stem plot, shown in the next figure, looks much better, albeit at the expense of much more computation. Clearly you should take care to select the step size with both the computation required and the fidelity of the estimated filter in mind.
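For readers who want to experiment outside MATLAB, the LMS recursion itself is only a few lines. The following NumPy sketch (the five-tap filter b, the seed, and all parameter values are illustrative choices, not the toolbox's) identifies a known FIR system the same way:

```python
import numpy as np

def lms_identify(x, d, n_taps, mu):
    """Standard LMS: w(k+1) = w(k) + mu * e(k) * x_vec(k)."""
    w = np.zeros(n_taps)            # start from all-zero weight estimates
    buf = np.zeros(n_taps)          # most recent inputs, newest first
    e = np.empty(len(x))
    for k in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[k]
        e[k] = d[k] - w @ buf       # error against the desired signal
        w = w + mu * e[k] * buf     # LMS coefficient update
    return w, e

rng = np.random.default_rng(0)
b = np.array([0.1, 0.3, 0.4, 0.3, 0.1])   # hypothetical "unknown" FIR system
x = 0.1 * rng.standard_normal(5000)       # broadband noise input
d = np.convolve(x, b)[:len(x)]            # desired signal: x through the system
w, e = lms_identify(x, d, len(b), mu=0.8)
```

After the run, w closely matches b; rerunning with a much smaller mu and the same number of samples reproduces the slow-convergence behavior described above.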

To improve the convergence performance of the LMS algorithm, the normalized variant (NLMS) uses an adaptive step size based on the signal power. As the input signal power changes, the algorithm calculates the input power and adjusts the step size to maintain an appropriate value. Thus the step size changes with time.

As a result, the normalized algorithm converges more quickly with fewer samples in many cases. For input signals that change slowly over time, the normalized LMS can represent a more efficient LMS approach.
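The only change from the plain LMS sketch is dividing the step by the instantaneous tap-input power. A minimal NumPy illustration of that normalization (again with illustrative taps and values, not the toolbox API):

```python
import numpy as np

def nlms_identify(x, d, n_taps, mu, eps=1e-8):
    """Normalized LMS: the step is scaled by the current tap-input power."""
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)
    e = np.empty(len(x))
    for k in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[k]
        e[k] = d[k] - w @ buf
        step = mu / (eps + buf @ buf)     # time-varying, power-normalized step
        w = w + step * e[k] * buf
    return w, e

rng = np.random.default_rng(1)
b = np.array([0.1, 0.3, 0.4, 0.3, 0.1])   # same hypothetical unknown system
x = 0.1 * rng.standard_normal(500)        # far fewer samples than plain LMS needs
d = np.convolve(x, b)[:len(x)]
w, e = nlms_identify(x, d, len(b), mu=1.0)
```

Because each update is normalized, convergence no longer depends on the absolute input power, which is why NLMS typically needs fewer samples.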

In the adaptfilt.nlms example, you use the same fircband filter as before so you can compare the results, replacing adaptfilt.lms with adaptfilt.nlms to use the normalized LMS algorithm variation. You should see better convergence with similar fidelity.

First, generate the input signal and the unknown filter.

```
x = 0.1*randn(1,500);
[b,err,res] = fircband(12,[0 0.4 0.5 1],[1 1 0 0],[1 0.2],{'w' 'c'});
d = filter(b,1,x);
```

Again, d represents the desired signal d(k) as you defined it earlier, and b contains the filter coefficients for your unknown filter.

```
mu = 0.8;
ha = adaptfilt.nlms(13,mu);
```

You use the preceding code to initialize the normalized LMS algorithm. For more information about the optional input arguments, refer to adaptfilt.nlms in the reference section of this User's Guide.

Running the system identification process is a matter of using adaptfilt.nlms with the desired signal, the input signal, and the initial filter coefficients and conditions specified when you construct ha as input arguments. Then plot the results to compare the adapted filter to the actual filter.

```
[y,e] = filter(ha,x,d);
stem([b.' ha.coefficients.'])
title('System Identification by Normalized LMS Algorithm')
legend('Actual Filter Weights','Estimated Filter Weights',...
       'Location','NorthEast')
```

As shown in the following stem plot (a convenient way to compare the estimated and actual filter coefficients), the two are nearly identical.

If you compare the convergence performance of the regular LMS algorithm to the normalized LMS variant, you see the normalized version adapts in far fewer iterations to a result almost as good as the nonnormalized version.

```
plot(e);
title('Comparing the LMS and NLMS Convergence Performance');
legend('NLMS Derived Filter Weights',...
       'LMS Derived Filter Weights','Location','NorthEast');
```

When the amount of computation required to derive an adaptive filter drives your development process, the sign-data variant of the LMS (SDLMS) algorithm may be a very good choice as demonstrated in this example.

Fortunately, the current state of digital signal processor (DSP) design has relaxed the need to minimize the operations count by making DSPs whose multiply and shift operations are as fast as add operations. Thus some of the impetus for the sign-data algorithm (and the sign-error and sign-sign variations) has been lost to DSP technology improvements.

In the standard and normalized variations of the LMS adaptive filter, coefficients for the adapting filter arise from the mean square error between the desired signal and the output signal from the unknown system. Using the sign-data algorithm changes the mean square error calculation by using the sign of the input data to change the filter coefficients.

When the input is positive, the new coefficients are the previous coefficients plus the error multiplied by the step size µ. If the input is negative, the new coefficients are the previous coefficients minus the error multiplied by µ; note the sign change.

When the input is zero, the new coefficients are the same as the previous set.

In vector form, the sign-data LMS algorithm is

$w\left(k+1\right)=w\left(k\right)+\mu\, e\left(k\right)\,\mathrm{sgn}\left[x\left(k\right)\right], \qquad \mathrm{sgn}\left[x\left(k\right)\right]=\begin{cases}1, & x\left(k\right)>0\\ 0, & x\left(k\right)=0\\ -1, & x\left(k\right)<0\end{cases}$

with vector w containing the weights applied to the filter coefficients and vector x containing the input data. e(k) (equal to desired signal - filtered signal) is the error at time k and is the quantity the SDLMS algorithm seeks to minimize. µ (mu) is the step size.

As you specify mu smaller, the correction to the filter weights gets smaller for each sample and the SDLMS error falls more slowly. Larger mu changes the weights more for each step so the error falls more rapidly, but the resulting error does not approach the ideal solution as closely. To ensure good convergence rate and stability, select mu within the following practical bounds

$0<\mu<\frac{1}{N\cdot\left\{\text{input signal power}\right\}}$

where N is the number of samples in the signal. Also, define mu as a power of two for efficient computing.
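The sign-data update rule above can be sketched in a few lines of NumPy. This is an illustrative system-identification run, not the toolbox implementation; the five-tap system, the near-true initialization, and the small step size are hypothetical choices that follow the stability advice in this section:

```python
import numpy as np

def sign_data_lms(x, d, w0, mu):
    """Sign-data LMS: w(k+1) = w(k) + mu * e(k) * sgn(x_vec(k))."""
    w = w0.copy()
    buf = np.zeros(len(w))
    e = np.empty(len(x))
    for k in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[k]
        e[k] = d[k] - w @ buf
        w = w + mu * e[k] * np.sign(buf)   # only the sign of the input is kept
    return w, e

rng = np.random.default_rng(2)
b = np.array([0.1, 0.3, 0.4, 0.3, 0.1])        # hypothetical unknown system
x = 0.1 * rng.standard_normal(5000)
d = np.convolve(x, b)[:len(x)]
w, e = sign_data_lms(x, d, b - 0.01, mu=0.02)  # start near the answer, small mu
```

Because only sgn(x) enters the update, the per-sample correction costs one multiply (mu times e), which becomes a bit shift when mu is a power of two.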

 Note   How you set the initial conditions of the sign-data algorithm profoundly influences the effectiveness of the adaptation. Because the algorithm essentially quantizes the input signal, the algorithm can become unstable easily. A series of large input values, coupled with the quantization process, may result in the error growing beyond all bounds. You restrain the tendency of the sign-data algorithm to get out of control by choosing a small step size (µ << 1) and setting the initial conditions for the algorithm to nonzero positive and negative values.

In this noise cancellation example, adaptfilt.sd requires two input data sets: the noise input that the adaptive filter processes, and the desired signal, which combines the target signal with a correlated version of that noise.

For the signal, use a sine wave. Note that signal is a column vector of 1000 elements.

`signal = sin(2*pi*0.055*[0:1000-1]');`

Now, add correlated white noise to signal. To ensure that the noise is correlated, pass the noise through a lowpass FIR filter, and then add the filtered noise to the signal.

```
noise = randn(1,1000);
nfilt = fir1(11,0.4);           % Eleventh order lowpass filter
fnoise = filter(nfilt,1,noise); % Correlated noise data
d = signal.' + fnoise;
```

fnoise is the correlated noise and d is now the desired input to the sign-data algorithm.

To prepare the adaptfilt object for processing, set the input conditions coeffs and mu for the object. As noted earlier in this section, the values you set for coeffs and mu determine whether the adaptive filter can remove the noise from the signal path.

In System Identification Using adaptfilt.lms, you constructed a default filter that sets the filter coefficients to zeros. In most cases that approach does not work for the sign-data algorithm. The closer you set your initial filter coefficients to the expected values, the more likely it is that the algorithm remains well behaved and converges to a filter solution that removes the noise effectively.

For this example, start with the coefficients in the filter you used to filter the noise (nfilt), and modify them slightly so the algorithm has to adapt.

```
coeffs = nfilt.' - 0.01; % Set the filter initial conditions.
mu = 0.05;               % Set the step size for algorithm updating.
```

With the required input arguments for adaptfilt.sd prepared, construct the adaptfilt object, run the adaptation, and view the results.

```
ha = adaptfilt.sd(12,mu)
set(ha,'coefficients',coeffs);
set(ha,'persistentmemory',true); % Preserve the custom coefficients.
[y,e] = filter(ha,noise,d);
plot(0:199,signal(1:200),0:199,e(1:200));
title('Noise Cancellation by the Sign-Data Algorithm');
legend('Actual Signal','Result of Noise Cancellation',...
       'Location','NorthEast');
```

When adaptfilt.sd runs, it uses far fewer multiply operations than either of the LMS algorithms. Also, performing the sign-data adaptation requires only bit shifting multiplies when the step size is a power of two.

Although the performance of the sign-data algorithm as shown in the next figure is quite good, the sign-data algorithm is much less stable than the standard LMS variations. In this noise cancellation example, the signal after processing is a very good match to the input signal, but the algorithm could very easily grow without bound rather than achieve good performance.

Changing coeffs, mu, or even the lowpass filter you used to create the correlated noise can cause noise cancellation to fail and the algorithm to become useless.
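The same noise-cancellation experiment can be sketched outside MATLAB. In this NumPy version, a 12-tap moving average stands in for the fir1(11,0.4) lowpass filter, and the step size is deliberately smaller than the 0.05 used above, which keeps the sketch well behaved in line with the stability caveats in this section; all of these substitutions are illustrative, not the toolbox's values:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
signal = np.sin(2 * np.pi * 0.055 * np.arange(n))  # the clean sine wave
noise = rng.standard_normal(n)
h = np.ones(12) / 12                  # hypothetical 12-tap lowpass (moving average)
fnoise = np.convolve(noise, h)[:n]    # correlated noise added to the signal
d = signal + fnoise                   # desired signal seen by the adaptive filter

w = h - 0.01                          # initialize near the true filter, per the text
buf = np.zeros(12)
e = np.empty(n)
mu = 0.005                            # small step size keeps sign-data stable
for k in range(n):
    buf = np.roll(buf, 1)
    buf[0] = noise[k]
    e[k] = d[k] - w @ buf             # e approaches the clean signal
    w = w + mu * e[k] * np.sign(buf)  # sign-data update
```

After convergence the error sequence e tracks the sine wave, because the adaptive filter has learned to reproduce (and subtract) the correlated noise.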

In some cases, the sign-error variant of the LMS algorithm (SELMS) may be a very good choice for an adaptive filter application.

In the standard and normalized variations of the LMS adaptive filter, the coefficients for the adapting filter arise from calculating the mean square error between the desired signal and the output signal from the unknown system, and applying the result to the current filter coefficients. The sign-error algorithm replaces the mean square error calculation, using only the sign of the error to modify the filter coefficients.

When the error is positive, the new coefficients are the previous coefficients plus the input multiplied by the step size µ. If the error is negative, the new coefficients are the previous coefficients minus the input multiplied by µ; note the sign change. When the error is zero, the new coefficients are the same as the previous set.

In vector form, the sign-error LMS algorithm is

$w\left(k+1\right)=w\left(k\right)+\mu\,\mathrm{sgn}\left[e\left(k\right)\right]\,x\left(k\right), \qquad \mathrm{sgn}\left[e\left(k\right)\right]=\begin{cases}1, & e\left(k\right)>0\\ 0, & e\left(k\right)=0\\ -1, & e\left(k\right)<0\end{cases}$

with vector w containing the weights applied to the filter coefficients and vector x containing the input data. e(k) (equal to desired signal - filtered signal) is the error at time k and is the quantity the SELMS algorithm seeks to minimize. µ (mu) is the step size. As you specify mu smaller, the correction to the filter weights gets smaller for each sample and the SELMS error falls more slowly.

Larger mu changes the weights more for each step so the error falls more rapidly, but the resulting error does not approach the ideal solution as closely. To ensure good convergence rate and stability, select mu within the following practical bounds

$0<\mu<\frac{1}{N\cdot\left\{\text{input signal power}\right\}}$

where N is the number of samples in the signal. Also, define mu as a power of two for efficient computation.
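The sign-error update can be sketched the same way as the other variants. Again this is an illustrative NumPy run, not the toolbox implementation, with a hypothetical five-tap system, near-true initialization, and a deliberately small step size:

```python
import numpy as np

def sign_error_lms(x, d, w0, mu):
    """Sign-error LMS: w(k+1) = w(k) + mu * sgn(e(k)) * x_vec(k)."""
    w = w0.copy()
    buf = np.zeros(len(w))
    e = np.empty(len(x))
    for k in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[k]
        e[k] = d[k] - w @ buf
        w = w + mu * np.sign(e[k]) * buf   # only the sign of the error is kept
    return w, e

rng = np.random.default_rng(4)
b = np.array([0.1, 0.3, 0.4, 0.3, 0.1])         # hypothetical unknown system
x = 0.1 * rng.standard_normal(2000)
d = np.convolve(x, b)[:len(x)]
w, e = sign_error_lms(x, d, b - 0.01, mu=1e-3)  # small step, near-true start
```

Because sgn(e) is just ±1, each coefficient update is mu times the input, which again reduces to shifts when mu is a power of two.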

 Note   How you set the initial conditions of the sign-error algorithm profoundly influences the effectiveness of the adaptation. Because the algorithm essentially quantizes the error signal, the algorithm can become unstable easily. A series of large error values, coupled with the quantization process, may result in the error growing beyond all bounds. You restrain the tendency of the sign-error algorithm to get out of control by choosing a small step size (µ << 1) and setting the initial conditions for the algorithm to nonzero positive and negative values.

In this noise cancellation example, adaptfilt.se requires two input data sets: the noise input that the adaptive filter processes, and the desired signal, which combines the target signal with a correlated version of that noise.

For the signal, use a sine wave. Note that signal is a column vector of 1000 elements.

`signal = sin(2*pi*0.055*[0:1000-1]');`

Now, add correlated white noise to signal. To ensure that the noise is correlated, pass the noise through a lowpass FIR filter, then add the filtered noise to the signal.

```
noise = randn(1,1000);
nfilt = fir1(11,0.4);           % Eleventh order lowpass filter.
fnoise = filter(nfilt,1,noise); % Correlated noise data.
d = signal.' + fnoise;
```

fnoise is the correlated noise and d is now the desired input to the sign-error algorithm.

To prepare the adaptfilt object for processing, set the input conditions coeffs and mu for the object. As noted earlier in this section, the values you set for coeffs and mu determine whether the adaptive filter can remove the noise from the signal path. In System Identification Using adaptfilt.lms, you constructed a default filter that sets the filter coefficients to zeros.

Setting the coefficients to zero often does not work for the sign-error algorithm. The closer you set your initial filter coefficients to the expected values, the more likely it is that the algorithm remains well behaved and converges to a filter solution that removes the noise effectively.

For this example, you start with the coefficients in the filter you used to filter the noise (nfilt), and modify them slightly so the algorithm has to adapt.

```
coeffs = nfilt.' - 0.01; % Set the filter initial conditions.
mu = 0.05;               % Set step size for algorithm update.
```

With the required input arguments for adaptfilt.se prepared, run the adaptation and view the results.

```
ha = adaptfilt.se(12,mu)
set(ha,'coefficients',coeffs);
set(ha,'persistentmemory',true); % Prevent filter reset.
[y,e] = filter(ha,noise,d);
plot(0:199,signal(1:200),0:199,e(1:200));
title('Noise Cancellation Performance by the Sign-Error LMS Algorithm');
legend('Actual Signal','Error After Noise Reduction',...
       'Location','NorthEast')
```

Notice that you have to set the property PersistentMemory to true when you manually change the settings of object ha.

If PersistentMemory is left to false, the default, when you try to apply ha with the method filter, the filtering process starts by resetting the object properties to their initial conditions at construction. To preserve the customized coefficients in this example, you set PersistentMemory to true so the coefficients do not get reset automatically back to zero.

When adaptfilt.se runs, it uses far fewer multiply operations than either of the LMS algorithms. Also, performing the sign-error adaptation requires only bit shifting multiplies when the step size is a power of two.

Although the performance of the sign-error algorithm as shown in the next figure is quite good, the sign-error algorithm is much less stable than the standard LMS variations. In this noise cancellation example, the signal after processing is a very good match to the input signal, but the algorithm could very easily become unstable rather than achieve good performance.

Changing coeffs, mu, or even the lowpass filter you used to create the correlated noise can cause noise cancellation to fail and the algorithm to become useless.

One more example of a variation of the LMS algorithm in the toolbox is the sign-sign variant (SSLMS). The rationale for this version matches those for the sign-data and sign-error algorithms presented in preceding sections. For more details, refer to Noise Cancellation Using adaptfilt.sd.

The sign-sign algorithm (SSLMS) replaces the mean square error calculation, using the signs of both the error and the input data to change the filter coefficients. When the error and the input have the same sign, the new coefficients are the previous coefficients plus the step size µ.

When the error and the input have opposite signs, the new coefficients are the previous coefficients minus µ; note the sign change. When either the error or the input is zero, the new coefficients are the same as the previous set.

In essence, the algorithm quantizes both the error and the input by applying the sign operator to them.

In vector form, the sign-sign LMS algorithm is

$w\left(k+1\right)=w\left(k\right)+\mu\,\mathrm{sgn}\left[e\left(k\right)\right]\,\mathrm{sgn}\left[x\left(k\right)\right], \qquad \mathrm{sgn}\left[z\left(k\right)\right]=\begin{cases}1, & z\left(k\right)>0\\ 0, & z\left(k\right)=0\\ -1, & z\left(k\right)<0\end{cases}$

where

$z\left(k\right)=e\left(k\right)\,\mathrm{sgn}\left[x\left(k\right)\right]$

Vector w contains the weights applied to the filter coefficients and vector x contains the input data. e(k) (equal to desired signal - filtered signal) is the error at time k and is the quantity the SSLMS algorithm seeks to minimize. µ (mu) is the step size. As you specify mu smaller, the correction to the filter weights gets smaller for each sample and the SSLMS error falls more slowly.

Larger mu changes the weights more for each step so the error falls more rapidly, but the resulting error does not approach the ideal solution as closely. To ensure good convergence rate and stability, select mu within the following practical bounds

$0<\mu<\frac{1}{N\cdot\left\{\text{input signal power}\right\}}$

where N is the number of samples in the signal. Also, define mu as a power of two for efficient computation.
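The sign-sign update quantizes both quantities, so each coefficient simply moves up or down by µ. A hedged NumPy sketch with hypothetical values (not the toolbox implementation; note the very small step size, following the stability advice in this section):

```python
import numpy as np

def sign_sign_lms(x, d, w0, mu):
    """Sign-sign LMS: w(k+1) = w(k) + mu * sgn(e(k)) * sgn(x_vec(k))."""
    w = w0.copy()
    buf = np.zeros(len(w))
    e = np.empty(len(x))
    for k in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[k]
        e[k] = d[k] - w @ buf
        w = w + mu * np.sign(e[k]) * np.sign(buf)  # both factors quantized to +/-1
    return w, e

rng = np.random.default_rng(5)
b = np.array([0.1, 0.3, 0.4, 0.3, 0.1])        # hypothetical unknown system
x = 0.1 * rng.standard_normal(3000)
d = np.convolve(x, b)[:len(x)]
w, e = sign_sign_lms(x, d, b - 0.01, mu=1e-4)  # tiny step, near-true start
```

With both signs quantized, the update needs no multiplies at all when mu is a power of two, which is the appeal of this variant for hardware.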

 Note   How you set the initial conditions of the sign-sign algorithm profoundly influences the effectiveness of the adaptation. Because the algorithm essentially quantizes the input signal and the error signal, the algorithm can become unstable easily. A series of large error values, coupled with the quantization process, may result in the error growing beyond all bounds. You restrain the tendency of the sign-sign algorithm to get out of control by choosing a small step size (µ << 1) and setting the initial conditions for the algorithm to nonzero positive and negative values.

In this noise cancellation example, adaptfilt.ss requires two input data sets: the noise input that the adaptive filter processes, and the desired signal, which combines the target signal with a correlated version of that noise.

For the signal, use a sine wave. Note that signal is a column vector of 1000 elements.

`signal = sin(2*pi*0.055*[0:1000-1]');`

Now, add correlated white noise to signal. To ensure that the noise is correlated, pass the noise through a lowpass FIR filter, then add the filtered noise to the signal.

```
noise = randn(1,1000);
nfilt = fir1(11,0.4);           % Eleventh order lowpass filter
fnoise = filter(nfilt,1,noise); % Correlated noise data
d = signal.' + fnoise;
```

fnoise is the correlated noise and d is now the desired input to the sign-sign algorithm.

To prepare the adaptfilt object for processing, set the input conditions coeffs and mu for the object. As noted earlier in this section, the values you set for coeffs and mu determine whether the adaptive filter can remove the noise from the signal path. In System Identification Using adaptfilt.lms, you constructed a default filter that sets the filter coefficients to zeros. Usually that approach does not work for the sign-sign algorithm.

The closer you set your initial filter coefficients to the expected values, the more likely it is that the algorithm remains well behaved and converges to a filter solution that removes the noise effectively. For this example, you start with the coefficients in the filter you used to filter the noise (nfilt), and modify them slightly so the algorithm has to adapt.

```
coeffs = nfilt.' - 0.01; % Set the filter initial conditions.
mu = 0.05;               % Set the step size for algorithm updating.
```

With the required input arguments for adaptfilt.ss prepared, run the adaptation and view the results.

```
ha = adaptfilt.ss(12,mu)
set(ha,'coefficients',coeffs);
set(ha,'persistentmemory',true); % Prevent filter reset.
[y,e] = filter(ha,noise,d);
plot(0:199,signal(1:200),0:199,e(1:200));
title('Noise Cancellation Performance of the Sign-Sign LMS Algorithm');
legend('Actual Signal','Error After Noise Reduction',...
       'Location','NorthEast');
```

Notice that you have to set the property PersistentMemory to true when you manually change the settings of object ha.

If PersistentMemory is left to false, when you try to apply ha with the method filter the filtering process starts by resetting the object properties to their initial conditions at construction. To preserve the customized coefficients in this example, you set PersistentMemory to true so the coefficients do not get reset automatically back to zero.

When adaptfilt.ss runs, it uses far fewer multiply operations than either of the LMS algorithms. Also, performing the sign-sign adaptation requires only bit shifting multiplies when the step size is a power of two.

Although the performance of the sign-sign algorithm as shown in the next figure is quite good, the sign-sign algorithm is much less stable than the standard LMS variations. In this noise cancellation example, the signal after processing is a very good match to the input signal, but the algorithm could very easily become unstable rather than achieve good performance.

Changing coeffs, mu, or even the lowpass filter you used to create the correlated noise can cause noise cancellation to fail and the algorithm to become useless.

As an aside, the sign-sign LMS algorithm is part of the international CCITT standard for 32 kbit/s ADPCM telephony.