Denoising and Nonparametric Function Estimation

The Wavelet Toolbox™ provides a number of functions for the estimation of an unknown function (signal or image) in noise.

The most general 1-D model for this is

s(n) = f(n) + σe(n)

where n = 0, 1, 2, ..., N-1. The e(n) are Gaussian random variables distributed as N(0,1). The variance of σe(n) is σ².

In practice, s(n) is often a discrete-time signal with equal time steps corrupted by additive noise and you are attempting to recover that signal.

More generally, you can view s(n) as an N-dimensional random vector

(f(0) + σe(0), f(1) + σe(1), f(2) + σe(2), ..., f(N-1) + σe(N-1)) = (f(0), f(1), f(2), ..., f(N-1)) + σ(e(0), e(1), e(2), ..., e(N-1))

In this general context, the relationship between denoising and regression is clear.

You can replace the N-by-1 random vector by N-by-M random matrices to obtain the problem of recovering an image corrupted by additive noise.

You can obtain a 1-D example of this model with the following code.

load cuspamax;
y = cuspamax+0.5*randn(size(cuspamax));
plot(y); hold on;
plot(cuspamax,'r','linewidth',2);
axis tight;
legend('f(n)+\sigma e(n)','f(n)', 'Location', 'NorthWest');

For a broad class of functions (signals, images) that possess certain smoothness properties, wavelet techniques are optimal or near optimal for function recovery.

Specifically, the method is efficient for families of functions f that have only a few nonzero wavelet coefficients; such functions have a sparse wavelet representation. For example, a function that is smooth almost everywhere, with only a few abrupt changes, has this property.
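As an informal illustration of this sparsity, you can check how much of the cuspamax test signal's energy is carried by a small fraction of its wavelet coefficients. The wavelet and level below are arbitrary choices for this check, not part of any denoising procedure.

% Decompose the noise-free test signal and sort the coefficient magnitudes.
load cuspamax;
[c,l] = wavedec(cuspamax,5,'sym4');
cs = sort(abs(c),'descend');

% Fraction of the total energy carried by the largest 5% of coefficients.
k = ceil(0.05*numel(c));
energy_frac = sum(cs(1:k).^2)/sum(cs.^2)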

The general wavelet-based method for denoising and nonparametric function estimation is to transform the data into the wavelet domain, threshold the wavelet coefficients, and invert the transform. A code sketch of these steps follows the list below.

You can summarize these steps as:

  1. Decompose

    Choose a wavelet and a level N. Compute the wavelet decomposition of the signal s down to level N.

  2. Threshold detail coefficients

    For each level from 1 to N, threshold the detail coefficients.

  3. Reconstruct

    Compute wavelet reconstruction using the original approximation coefficients of level N and the modified detail coefficients of levels from 1 to N.
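A minimal sketch of these three steps, applied to the noisy cuspamax signal y generated above. The wavelet, level, and threshold rule are illustrative choices here; the wden function described later packages the same steps.

% 1. Decompose the noisy signal down to level 3.
[c,l] = wavedec(y,3,'sym4');

% 2. Threshold the detail coefficients (they occupy c(l(1)+1:end)).
thr = thselect(y,'sqtwolog');
cden = c;
cden(l(1)+1:end) = wthresh(c(l(1)+1:end),'s',thr);

% 3. Reconstruct from the original approximation and modified details.
yden = waverec(cden,l,'sym4');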

Threshold Selection Rules

The Wavelet Toolbox supports a number of threshold selection rules. Four of them are implemented in the thselect function. Each rule corresponds to a tptr option in the command

thr = thselect(y,tptr)

which returns the threshold value.

Option        Threshold Selection Rule
'rigrsure'    Selection using the principle of Stein's Unbiased Risk Estimate (SURE)
'sqtwolog'    Fixed form (universal) threshold equal to sqrt(2 ln(N)), with N the length of the signal
'heursure'    Selection using a mixture of the first two options
'minimaxi'    Selection using the minimax principle

  • Option 'rigrsure' uses, for the soft threshold estimator, a threshold selection rule based on Stein's Unbiased Estimate of Risk (quadratic loss function). You get an estimate of the risk for each threshold value t; minimizing the risk over t gives the selected threshold.

  • Option 'sqtwolog' uses a fixed form (universal) threshold yielding minimax performance multiplied by a small factor proportional to log(length(s)). See the quick check following this list.

  • Option 'heursure' is a mixture of the two previous options. When the signal-to-noise ratio is very small, the SURE estimate is very noisy; if such a situation is detected, the fixed form threshold is used instead.

  • Option 'minimaxi' uses a fixed threshold chosen to yield minimax performance for mean square error against an ideal procedure. The minimax principle is used in statistics to design estimators. Since the denoised signal can be viewed as an estimator of the unknown regression function, the minimax estimator is the one that realizes the minimum, over a given set of functions, of the maximum mean square error.
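For instance, the fixed form threshold in the table depends only on the signal length and can be reproduced directly from its formula (a quick consistency check with the table above):

% For a signal of length N, thselect(x,'sqtwolog') returns sqrt(2*log(N)).
N = 1000;
thr_formula = sqrt(2*log(N))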

The following example shows the thresholds selected by each rule for a 1000-by-1 N(0,1) vector. The signal here is

s(n) = f(n) + e(n),   e(n) ~ N(0,1)

with f(n) = 0.

rng default;
sig = randn(1e3,1);
thr_rigrsure = thselect(sig,'rigrsure')
thr_univthresh = thselect(sig,'sqtwolog')
thr_heursure = thselect(sig,'heursure')
thr_minimaxi = thselect(sig,'minimaxi')
hist(sig);
h = findobj(gca,'Type','patch');
set(h,'FaceColor',[0.7 0.7 0.7],'EdgeColor','w');
hold on;
plot([thr_rigrsure thr_rigrsure], [0 300],'linewidth',2);
plot([thr_univthresh thr_univthresh], [0 300],'r','linewidth',2);
plot([thr_minimaxi thr_minimaxi], [0 300],'k','linewidth',2);
plot([-thr_rigrsure -thr_rigrsure], [0 300],'linewidth',2);
plot([-thr_univthresh -thr_univthresh], [0 300],'r','linewidth',2);
plot([-thr_minimaxi -thr_minimaxi], [0 300],'k','linewidth',2);

For Stein's Unbiased Risk Estimate (SURE) and minimax thresholds, approximately 3% of coefficients are retained. In the case of the universal threshold, all values are rejected.
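To see where these percentages come from, count the samples of sig whose magnitude exceeds each threshold (an informal check on the example above):

% Percentage of the 1000 N(0,1) samples exceeding each threshold in magnitude.
pct_rigrsure = 100*mean(abs(sig) > thr_rigrsure)
pct_minimaxi = 100*mean(abs(sig) > thr_minimaxi)
pct_sqtwolog = 100*mean(abs(sig) > thr_univthresh)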

Recall that the detail coefficients vector is the superposition of the coefficients of f and the coefficients of e, and that the decomposition of e leads to detail coefficients that are standard Gaussian white noise.

After you use thselect to determine a threshold, you can threshold each level of a wavelet decomposition. This second step can be done using wthcoef, which directly handles the wavelet decomposition structure of the original signal s.
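For example, assuming a noisy signal s in the workspace, one possible call pattern is the following sketch (the wavelet, level, and rule are arbitrary here):

% Decompose s and select a threshold.
[c,l] = wavedec(s,3,'sym8');
thr = thselect(s,'minimaxi');

% Soft-threshold the detail coefficients at levels 1 to 3.
cden = wthcoef('t',c,l,1:3,thr*ones(1,3),'s');

% Reconstruct the denoised signal.
sden = waverec(cden,l,'sym8');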

Soft or Hard Thresholding

Hard and soft thresholding are examples of shrinkage rules. After you have determined your threshold, you have to decide how to apply that threshold to your data.

The simplest scheme is hard thresholding. Let T denote the threshold and x your data. The hard thresholding is

η(x) = x if |x| ≥ T, and η(x) = 0 if |x| < T

The soft thresholding is

η(x) = x − T if x > T, η(x) = 0 if |x| ≤ T, and η(x) = x + T if x < −T
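Written directly in MATLAB, the two rules amount to the following (a plain restatement of the formulas above, equivalent to what wthresh does below):

% Hard: keep values with magnitude at least T, zero out the rest.
hard_thr = @(x,T) x.*(abs(x) >= T);

% Soft: zero out small values and shrink the surviving ones toward zero by T.
soft_thr = @(x,T) sign(x).*max(abs(x)-T,0);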

You can apply your threshold using the hard or soft rule with wthresh.

y = linspace(-1,1,100);
thr = 0.4;
ythard = wthresh(y,'h',thr);
ytsoft = wthresh(y,'s',thr);
subplot(131);
plot(y); title('Original Data');
subplot(132);
plot(ythard,'*'); title('Hard Thresholding');
subplot(133);
plot(ytsoft,'*'); title('Soft Thresholding');

Dealing with Unscaled Noise and Nonwhite Noise

Usually in practice the basic model cannot be used directly. Here we examine the options available in the main denoising function, wden, for dealing with deviations from that model.

The simplest use of wden is

sd = wden(s,tptr,sorh,scal,n,wav)

which returns a denoised version sd of the original signal s, obtained using the tptr threshold selection rule. The other parameters are sorh, scal, n, and wav: sorh specifies soft or hard thresholding of the detail coefficients of the level-n decomposition of s with the wavelet wav, and scal specifies the threshold rescaling method.

Option    Corresponding Model
'one'     Basic model
'sln'     Basic model with unscaled noise
'mln'     Basic model with nonwhite noise

  • Option scal = 'one' corresponds to the basic model.

  • In general, the noise level is unknown and must be estimated. The detail coefficients cD1 (the finest scale) are essentially noise coefficients with standard deviation equal to σ. The median absolute deviation of these coefficients is a robust estimate of σ. Using a robust estimate is crucial for two reasons: first, if the level 1 coefficients contain details of f, these details are concentrated in a few coefficients when the function f is sufficiently regular; second, it avoids signal end effects, which are pure artifacts due to computations on the edges.

    Option scal = 'sln' handles threshold rescaling using a single estimation of level noise based on the first-level coefficients.

  • When you suspect nonwhite noise e, thresholds must be rescaled by a level-dependent estimate of the noise level. The same strategy as in the previous option is used, estimating the noise standard deviation σ level by level.

    This estimation is implemented in the wnoisest function, which directly handles the wavelet decomposition structure of the original signal s (see the sketch after this list).

    Option scal = 'mln' handles threshold rescaling using a level-dependent estimation of the level noise, as shown in the sketch below.
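As an illustration, assuming a noisy signal x in the workspace, the level-dependent noise estimates can be inspected directly with wnoisest (a sketch; the wavelet and level are arbitrary):

% Decompose x and estimate the noise standard deviation level by level.
[c,l] = wavedec(x,3,'db3');
sigma_lev = wnoisest(c,l,1:3)

% This level-by-level rescaling is what wden applies with scal = 'mln'.
xd = wden(x,'sqtwolog','s','mln',3,'db3');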

For a more general procedure, the wdencmp function performs wavelet coefficient thresholding for both denoising and compression purposes, while directly handling one-dimensional and two-dimensional data. It allows you to define your own thresholding strategy (a short example follows the parameter list below) by selecting the options in

 xd = wdencmp(opt,x,wav,n,thr,sorh,keepapp);

where

  • opt = 'gbl' and thr is a positive real number, for a uniform (global) threshold.

  • opt = 'lvd' and thr is a vector, for level-dependent thresholds.

  • keepapp = 1 to keep the approximation coefficients (as previously), and keepapp = 0 to allow thresholding of the approximation coefficients.

  • x is the signal to be denoised, and wav, n, and sorh are the same as above.
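For instance, a level-dependent call might look like the following sketch; the signal segment, thresholds, and wavelet here are arbitrary illustrative choices.

% Load a test signal and define one threshold per detail level.
load leleccum; x = leleccum(1:2048);
thr = [2 3 4];

% Level-dependent soft thresholding of the details at level 3 with db3.
xd = wdencmp('lvd',x,'db3',3,thr,'s');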

Denoising in Action

We begin with examples of one-dimensional denoising methods, the first of which is credited to Donoho and Johnstone. You can use the following code to generate the first test function with wnoise and denoise it.

% Set signal to noise ratio and set rand seed. 
sqrt_snr = 4; init = 2055615866;

% Generate original signal xref and a noisy version x adding 
% a standard Gaussian white noise. 
[xref,x] = wnoise(1,11,sqrt_snr,init);

% Denoise the noisy signal using soft heuristic SURE thresholding 
% and the scaling option 'one' (basic model), on detail coefficients 
% obtained from the decomposition of x at level 3 by the sym8 wavelet. 
xd = wden(x,'heursure','s','one',3,'sym8');
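To compare the results visually, you can append a plotting snippet such as the following (purely illustrative):

% Plot the original, noisy, and denoised versions of the blocks signal.
subplot(311), plot(xref), axis tight, title('Original Signal')
subplot(312), plot(x),    axis tight, title('Noisy Signal')
subplot(313), plot(xd),   axis tight, title('Denoised Signal')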

Blocks Signal Denoising

Since only a small number of large coefficients characterize the original signal, the method performs very well.

As a second example, let us try the method on the highly perturbed part of the electrical signal studied above.

According to this previous analysis, let us use the db3 wavelet and decompose at level 3.

To deal with the composite noise nature, let us try a level-dependent noise size estimation.

% Load electrical signal and select part of it. 
load leleccum; indx = 2000:3450; 
x = leleccum(indx);

% Find first value in order to avoid edge effects. 
deb = x(1);

% De-noise signal using soft fixed form thresholding 
% and unknown noise option. 
xd = wden(x-deb,'sqtwolog','s','mln',3,'db3')+deb;
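Again, a short plotting snippet can be used to compare the noisy and denoised segments (purely illustrative):

% Plot the selected segment and its denoised version.
subplot(211), plot(indx,x),  axis tight, title('Original Signal')
subplot(212), plot(indx,xd), axis tight, title('Denoised Signal')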

Electrical Signal De-Noising

The result is quite good in spite of the time-varying nature of the noise before and after the beginning of the sensor failure around time 2450.

Extension to Image De-Noising

The denoising method described for the one-dimensional case also applies to images, and it works well for geometrical images. A direct translation of the one-dimensional model is

s(i,j) = f(i,j) + σe(i,j)

where e is a white Gaussian noise with unit variance.

The two-dimensional denoising procedure has the same three steps and uses two-dimensional wavelet tools instead of one-dimensional ones. For the threshold selection, prod(size(s)) (the total number of pixels) is used instead of length(s) when the fixed form threshold is used.
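In other words, for an M-by-N image the fixed form threshold becomes the following (sigma and s are assumed here: the noise standard deviation, known or estimated, and the image):

% Fixed form (universal) threshold for an image s with M*N pixels;
% prod(size(s)) simply replaces length(s) in the 1-D formula.
thr = sigma*sqrt(2*log(prod(size(s))));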

Note that, except for the "automatic" one-dimensional denoising case, denoising and compression are performed using wdencmp. As an example, the following code illustrates the denoising of a real image.

% Load original image. 
load woman

% Generate noisy image.
x = X + 15*randn(size(X));

% Find default values. In this case fixed form threshold
% is used with estimation of level noise, thresholding
% mode is soft and the approximation coefficients are 
% kept.
[thr,sorh,keepapp] = ddencmp('den','wv',x);

% thr is equal to estimated_sigma*sqrt(log(prod(size(X))))

% De-noise image using global thresholding option.
xd = wdencmp('gbl',x,'sym4',2,thr,sorh,keepapp);
% Plots.
colormap(pink(255)), sm = size(map,1);
subplot(221), image(wcodemat(X,sm)), title('Original Image')
subplot(222), image(wcodemat(x,sm)), title('Noisy Image')
subplot(223), image(wcodemat(xd,sm)), title('Denoised Image')

The result shown below is acceptable.

Image De-Noising

One-Dimensional Wavelet Variance Adaptive Thresholding

The idea is to define thresholds that are both level-dependent and time-dependent, thereby increasing the capability of the denoising strategies to handle noise models with nonstationary variance.

More precisely, the model assumes (as previously) that the observation is equal to the interesting signal superimposed on a noise (see Denoising and Nonparametric Function Estimation).

s(n) = f(n) + σe(n)

But here the noise variance can vary with time: there are several different variance values on several time intervals, and both the values and the intervals are unknown.

Let us focus on the problem of estimating the change points or equivalently the intervals. The algorithm used is based on an original work of Marc Lavielle about detection of change points using dynamic programming (see [Lav99] in References).

Let us generate a signal from a fixed-design regression model with two noise variance change points located at positions 200 and 600.

% Generate blocks test signal.
x = wnoise(1,10);       

% Generate noisy blocks with change points.
bb = randn(1,length(x));
cp1 = 200; cp2 = 600;
x = x + [bb(1:cp1),bb(cp1+1:cp2)/4,bb(cp2+1:end)];

The aim of this example is to recover the two change points from the signal x. In addition, this example illustrates how the GUI tools locate the change points for interval dependent thresholding.

Step 1. Recover a noisy signal by suppressing an approximation.

% Perform a single-level wavelet decomposition 
% of the signal using db3.
wname = 'db3'; lev = 1;
[c,l] = wavedec(x,lev,wname);

% Reconstruct detail at level 1.
det = wrcoef('d',c,l,wname,1);

The reconstructed detail at level 1, recovered at this stage, is almost signal free. It captures the main features of the noise from a change-point detection viewpoint, provided the interesting part of the signal has a sparse wavelet representation. To remove almost all of the signal, the next step replaces the biggest values by the mean.

Step 2. To remove almost all the signal, replace the 2% biggest values by the mean.

x = sort(abs(det));
v2p100 = x(fix(length(x)*0.98));
ind = find(abs(det)>v2p100);
det(ind) = mean(det);

Step 3. Use the wvarchg function to estimate the change points with the following parameters:

  • The minimum delay between two change points is d = 10.

  • The maximum number of change points is 5.

    [cp_est,kopt,t_est] = wvarchg(det,5)
    

Two change points and three intervals are proposed. Since the three noise variances on the three intervals are very different, the optimization program easily detects the correct structure.

The estimated change points are close to the true change points: 200 and 600.

Step 4. (Optional) Replace the estimated change points.

For 2 ≤ i ≤ 6, t_est(i,1:i-1) contains the i-1 instants of the variance change points. Since kopt is the proposed number of change points, the estimated change points are given by

cp_est = t_est(kopt+1,1:kopt);

You can replace the estimated change points by computing

% cp_New = t_est(knew+1,1:knew); % where 1 ≤ knew ≤ 5