## Noise Cancellation Using Sign-Data LMS Algorithm

When the amount of computation required to derive an adaptive filter drives your development process, the sign-data variant of the LMS (SDLMS) algorithm might be a very good choice as demonstrated in this example.

In the standard and normalized variations of the LMS adaptive filter, the coefficients of the adapting filter are driven by the mean square error between the desired signal and the output signal from the unknown system. The sign-data algorithm changes this update by using only the sign of the input data to adjust the filter coefficients.

When the input is positive, the new coefficients are the previous coefficients plus the error multiplied by the step size $\mu$. When the input is negative, the new coefficients are the previous coefficients minus the error multiplied by $\mu$; note the sign change.

When the input is zero, the new coefficients are the same as the previous set.

In vector form, the sign-data LMS algorithm is:

$w(k+1)=w(k)+\mu\, e(k)\,\mathrm{sgn}\left(x(k)\right),$

where,

$\mathrm{sgn}\left(x(k)\right)=\begin{cases}1, & x(k)>0\\ 0, & x(k)=0\\ -1, & x(k)<0\end{cases}$

with vector $w$ containing the weights applied to the filter coefficients and vector $x$ containing the input data. The vector $e$ is the error between the desired signal and the filtered signal. The objective of the SDLMS algorithm is to minimize this error. Step size is represented by $\mu$.
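As a sketch, the update above can be written in plain Python; `sign_data_update` is a hypothetical helper for illustration, not the dsp.LMSFilter interface:

```python
def sgn(v):
    # sign function as defined above, with sgn(0) = 0
    return (v > 0) - (v < 0)

def sign_data_update(w, x, d, mu):
    # One sign-data LMS step.
    # w: current weights; x: most recent input samples (x[0] newest);
    # d: desired sample; mu: step size.
    y = sum(wi * xi for wi, xi in zip(w, x))   # filter output
    e = d - y                                  # error e(k)
    # w(k+1) = w(k) + mu * e(k) * sgn(x(k))
    w = [wi + mu * e * sgn(xi) for wi, xi in zip(w, x)]
    return w, y, e
```

Note that only the sign of each input sample enters the weight update; the error term itself is used in full.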

The smaller you make $\mu$, the smaller the correction to the filter weights at each sample, and the more slowly the SDLMS error falls. A larger $\mu$ changes the weights more at each step, so the error falls more rapidly, but the resulting error does not approach the ideal solution as closely. To ensure a good convergence rate and stability, select $\mu$ within the following practical bounds:

$0<\mu <\frac{1}{N\cdot \mathrm{InputSignalPower}},$

where $N$ is the number of samples in the signal. Also, define $\mu$ as a power of two for efficient computation.
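To make the bound concrete, the following sketch computes it and then picks the largest power-of-two step size below it. It assumes InputSignalPower means the mean squared value of the input, and the helper names are hypothetical:

```python
import math

def sdlms_mu_bound(x):
    # practical upper bound 1 / (N * InputSignalPower), taking the
    # input signal power to be the mean squared value of x (an assumption)
    n = len(x)
    power = sum(v * v for v in x) / n
    return 1.0 / (n * power)

def power_of_two_mu(bound):
    # largest mu = 2**-k strictly below the bound, for shift-friendly updates
    k = math.ceil(-math.log2(bound))
    mu = 2.0 ** -k
    return mu / 2.0 if mu >= bound else mu
```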

Note: How you set the initial conditions of the sign-data algorithm profoundly influences the effectiveness of the adaptation. Because the algorithm essentially quantizes the input signal, the algorithm can become unstable easily.

A series of large input values, coupled with the quantization process, may cause the error to grow beyond all bounds. Restrain this tendency of the sign-data algorithm by choosing a small step size $\left(\mu \ll 1\right)$ and setting the initial conditions for the algorithm to nonzero positive and negative values.

In this noise cancellation example, set the Method property of dsp.LMSFilter to 'Sign-Data LMS'. This example requires two input data sets:

For the signal, use a sine wave. Note that signal is a column vector of 1000 elements.

```matlab
signal = sin(2*pi*0.055*(0:1000-1)');
```

Now, add correlated white noise to signal. To ensure that the noise is correlated, pass the noise through a lowpass FIR filter, and then add the filtered noise to the signal.

```matlab
noise = randn(1000,1);
filt = dsp.FIRFilter;
filt.Numerator = fir1(11,0.4);
fnoise = filt(noise);
d = signal + fnoise;
```

fnoise is the correlated noise and d is now the desired input to the sign-data algorithm.

To prepare the dsp.LMSFilter object for processing, set the weight initial conditions (InitialConditions) and mu (StepSize) for the object. As noted earlier in this section, the values you set for coeffs and mu determine whether the adaptive filter can remove the noise from the signal path.

In System Identification of FIR Filter Using LMS Algorithm, you constructed a default filter that sets the filter coefficients to zeros. In most cases that approach does not work for the sign-data algorithm. The closer you set your initial filter coefficients to the expected values, the more likely it is that the algorithm remains well behaved and converges to a filter solution that removes the noise effectively.

For this example, start with the coefficients in the filter you used to filter the noise (filt.Numerator), and modify them slightly so the algorithm has to adapt.

```matlab
coeffs = (filt.Numerator).' - 0.01; % Set the filter initial conditions.
mu = 0.05;                          % Set the step size for algorithm updating.
```

With the required input arguments for dsp.LMSFilter prepared, construct the LMS filter object, run the adaptation, and view the results.

```matlab
lms = dsp.LMSFilter(12,'Method','Sign-Data LMS', ...
    'StepSize',mu,'InitialConditions',coeffs);
[~,e] = lms(noise,d);
L = 200;
plot(0:L-1,signal(1:L),0:L-1,e(1:L));
title('Noise Cancellation by the Sign-Data Algorithm');
legend('Actual Signal','Result of Noise Cancellation', ...
    'Location','NorthEast');
```

When dsp.LMSFilter runs, it uses far fewer multiplication operations than either of the other LMS variants. Also, when the step size is a power of two, the sign-data adaptation can replace those multiplications with bit shifts.
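For instance, in a fixed-point implementation with $\mu = 2^{-4}$, the multiplication by $\mu$ reduces to an arithmetic right shift. This is a sketch of the idea, not generated code:

```python
MU_SHIFT = 4  # step size mu = 2**-4

def mu_times(e_fixed):
    # fixed-point sketch: multiplying an integer error by mu = 2**-4
    # is just an arithmetic right shift by 4 bits
    return e_fixed >> MU_SHIFT
```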

Although the performance of the sign-data algorithm as shown in the next figure is quite good, the sign-data algorithm is much less stable than the standard LMS variations. In this noise cancellation example, the signal after processing is a very good match to the input signal, but the algorithm could very easily grow without bound rather than achieve good performance.

Changing the weight initial conditions (InitialConditions) and mu (StepSize), or even the lowpass filter you used to create the correlated noise can cause noise cancellation to fail and the algorithm to become useless.
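The complete noise-cancellation loop can also be mirrored in plain Python as a rough sketch. It is not the MATLAB example itself: the 12-tap moving-average noise filter stands in for fir1(11,0.4), the seed is arbitrary, and the step size is a power of two as suggested earlier:

```python
import math
import random

random.seed(1)
n, taps = 1000, 12
mu = 2 ** -8                    # power-of-two step size

def sgn(v):
    return (v > 0) - (v < 0)

signal = [math.sin(2 * math.pi * 0.055 * k) for k in range(n)]

# Correlated noise: white noise through a 12-tap moving average, a simple
# stand-in for the fir1(11,0.4) lowpass filter in the MATLAB code.
white = [random.gauss(0.0, 1.0) for _ in range(n)]
h = [1.0 / taps] * taps
fnoise = [sum(h[j] * white[k - j] for j in range(taps) if k >= j)
          for k in range(n)]
d = [s + fn for s, fn in zip(signal, fnoise)]

# Sign-data LMS: reference input is the raw noise, desired input is d.
w = [hj - 0.01 for hj in h]     # near-correct initial conditions
e = []
for k in range(n):
    x = [white[k - j] if k >= j else 0.0 for j in range(taps)]
    y = sum(wi * xi for wi, xi in zip(w, x))
    ek = d[k] - y
    w = [wi + mu * ek * sgn(xi) for wi, xi in zip(w, x)]
    e.append(ek)
# After adaptation, e should track the original sine wave.
```

With near-correct initial weights and a small power-of-two step size, the error signal e tracks the sine wave; perturbing the initial conditions or enlarging mu shows how easily the adaptation degrades.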

