The quantization in the section Quantizing a Signal requires no a priori knowledge about the transmitted signal. In practice, you can often make educated guesses about the present signal based on past signal transmissions. Using such educated guesses to help quantize a signal is known as predictive quantization. The most common predictive quantization method is differential pulse code modulation (DPCM).
The functions dpcmenco, dpcmdeco, and dpcmopt can help you implement a DPCM predictive quantizer with a linear predictor. To determine an encoder for such a quantizer, you must supply not only a partition and codebook as described in Represent Partitions and Represent Codebooks, but also a predictor. The predictor is a function that the DPCM encoder uses to produce the educated guess at each step. A linear predictor has the form
y(k) = p(1)x(k-1) + p(2)x(k-2) + ... + p(m-1)x(k-m+1) + p(m)x(k-m)
where x is the original signal, y(k) attempts to predict the value of x(k), and p is an m-tuple of real numbers. Instead of quantizing x itself, the DPCM encoder quantizes the predictive error, x-y. The integer m above is called the predictive order. The special case when m = 1 is called delta modulation.
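To make the idea concrete, the prediction and predictive error can be computed directly. The sketch below is in Python rather than MATLAB, and the helper predict is hypothetical, not a toolbox function; it applies an order-m linear predictor and forms the error e = x - y that a DPCM encoder would quantize.

```python
import numpy as np

def predict(x, p):
    """Return the linear prediction y for signal x using coefficients p.

    y[k] = p[0]*x[k-1] + p[1]*x[k-2] + ... + p[m-1]*x[k-m],
    with x treated as zero before the first sample.
    """
    m = len(p)
    y = np.zeros(len(x))
    for k in range(len(x)):
        for i in range(m):
            if k - 1 - i >= 0:
                y[k] += p[i] * x[k - 1 - i]
    return y

x = np.array([1.0, 2.0, 3.0, 4.0])
y = predict(x, [1.0])   # order 1 (delta modulation): y[k] = x[k-1]
e = x - y               # predictive error that a DPCM encoder quantizes
```

For this ramp signal, the order-1 prediction simply lags the signal by one sample, so the error is the constant step size, which a quantizer can represent with far fewer levels than the signal itself.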
If the guess for the kth value of the signal x, based on earlier values of x, is
y(k) = p(1)x(k-1) + p(2)x(k-2) +...+ p(m-1)x(k-m+1) + p(m)x(k-m)
then the corresponding predictor vector for toolbox functions is
predictor = [0, p(1), p(2), p(3),..., p(m-1), p(m)]
The initial zero in the predictor vector makes sense if you view the vector as the polynomial transfer function of a finite impulse response (FIR) filter.
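Under that FIR view, filtering x with the predictor vector produces the prediction y, and the leading zero guarantees that the prediction never uses the current sample x(k). A minimal Python illustration (variable names are mine, not the toolbox's):

```python
import numpy as np

# FIR-filter view of the predictor vector [0, p(1), ..., p(m)]:
# convolving x with it yields y(k) = p(1)x(k-1) + ... + p(m)x(k-m).
predictor = [0, 1]                        # y(k) = x(k-1), delta modulation
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.convolve(x, predictor)[:len(x)]    # FIR filtering, truncated to len(x)
```

The leading zero shifts every tap by one sample, which is exactly why the filtered output at index k depends only on samples before k.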
A simple special case of DPCM quantizes the difference between the signal's
current value and its value at the previous step. Thus the predictor is just
y(k) = x(k-1). The code below
implements this scheme. It encodes a sawtooth signal, decodes it, and plots both the
original and decoded signals. The solid line is the original signal, while the
dashed line is the recovered signal. The example also computes the mean square
error between the original and decoded signals.
predictor = [0 1]; % y(k)=x(k-1)
partition = [-1:.1:.9];
codebook = [-1:.1:1];
t = [0:pi/50:2*pi];
x = sawtooth(3*t); % Original signal

% Quantize x using DPCM.
encodedx = dpcmenco(x,codebook,partition,predictor);

% Try to recover x from the modulated signal.
decodedx = dpcmdeco(encodedx,codebook,predictor);

plot(t,x,t,decodedx,'--')
legend('Original signal','Decoded signal','Location','NorthOutside');
distor = sum((x-decodedx).^2)/length(x) % Mean square error
The output is
distor =
    0.0327
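Because dpcmenco and dpcmdeco are toolbox-specific, it may help to see the same logic spelled out. The Python sketch below uses stand-in names (dpcm_encode, dpcm_decode, and a hand-rolled sawtooth), not toolbox APIs: the encoder predicts each sample from previously reconstructed samples, quantizes the prediction error against the partition, and emits the codebook index; the decoder repeats the same prediction to rebuild the signal.

```python
import numpy as np

def sawtooth(t):
    # Rising sawtooth with period 2*pi and range [-1, 1]
    # (assumed to mimic MATLAB's sawtooth for this sketch).
    return 2 * ((t / (2 * np.pi)) % 1.0) - 1

def dpcm_encode(x, codebook, partition, predictor):
    p = predictor[1:]                     # drop the leading zero
    xhat = np.zeros(len(x))               # reconstructed samples for prediction
    idx = np.zeros(len(x), dtype=int)
    for k in range(len(x)):
        y = sum(p[i] * xhat[k - 1 - i]
                for i in range(len(p)) if k - 1 - i >= 0)
        e = x[k] - y                      # prediction error to quantize
        idx[k] = np.searchsorted(partition, e)
        xhat[k] = y + codebook[idx[k]]    # track what the decoder will see
    return idx

def dpcm_decode(idx, codebook, predictor):
    p = predictor[1:]
    xhat = np.zeros(len(idx))
    for k in range(len(idx)):
        y = sum(p[i] * xhat[k - 1 - i]
                for i in range(len(p)) if k - 1 - i >= 0)
        xhat[k] = y + codebook[idx[k]]
    return xhat

predictor = [0, 1]                        # y(k) = x(k-1)
partition = np.linspace(-1, 0.9, 20)      # -1, -0.9, ..., 0.9
codebook = np.linspace(-1, 1, 21)         # -1, -0.9, ..., 1
t = np.linspace(0, 2 * np.pi, 101)
x = sawtooth(3 * t)                       # original signal

encoded = dpcm_encode(x, codebook, partition, predictor)
decoded = dpcm_decode(encoded, codebook, predictor)
distor = np.mean((x - decoded) ** 2)      # mean square error
```

Predicting from the reconstructed samples xhat, rather than from the original x, matters: it keeps the encoder and decoder in lockstep, so quantization errors do not accumulate as drift.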
The section Optimize Quantization Parameters describes how to use training
data with the
lloyds function to help find quantization
parameters that will minimize signal distortion.
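For orientation, the core iteration behind Lloyd-style optimization can be sketched as follows. This is a simplified one-dimensional version with a hypothetical name, not the toolbox's lloyds implementation: it alternates between assigning each training sample to its nearest codeword and moving each codeword to the mean of its assigned samples, then places partition boundaries at the midpoints.

```python
import numpy as np

def lloyd(training, codebook_init, iters=50):
    codebook = np.sort(np.asarray(codebook_init, dtype=float))
    for _ in range(iters):
        # Assign each training sample to its nearest codeword.
        idx = np.argmin(np.abs(training[:, None] - codebook[None, :]), axis=1)
        # Move each codeword to the centroid of its assigned samples.
        for j in range(len(codebook)):
            members = training[idx == j]
            if len(members) > 0:
                codebook[j] = members.mean()
        codebook = np.sort(codebook)
    partition = (codebook[:-1] + codebook[1:]) / 2  # midpoints as boundaries
    return partition, codebook

rng = np.random.default_rng(0)
training = rng.uniform(0, 1, 10000)
partition, codebook = lloyd(training, [0.1, 0.6])
```

For training data uniform on [0, 1] with two levels, the iteration settles near the known optimum: codewords around 0.25 and 0.75 with the boundary near 0.5.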
This section describes similar procedures for using the dpcmopt function in conjunction with the two functions dpcmenco and dpcmdeco, which first appear in the previous section.
The training data you use with dpcmopt should be typical of the kinds of signals you will actually be quantizing with dpcmenco.
This example is similar to the one in the last section. However, where the last example created the codebook in a straightforward but haphazard way, this example uses the same codebook (now called initcodebook) only as an initial guess for a new optimized codebook parameter.
This example also uses the predictive order, 1, as the desired order of the new
optimized predictor. The
dpcmopt function creates these
optimized parameters, using the sawtooth signal
x as training
data. The example goes on to quantize the training data itself; in theory, the
optimized parameters are suitable for quantizing other data that is similar to
x. Notice that the mean square distortion here is much
less than the distortion in the previous example.
t = [0:pi/50:2*pi];
x = sawtooth(3*t); % Original signal
initcodebook = [-1:.1:1]; % Initial guess at codebook

% Optimize parameters, using initial codebook and order 1.
[predictor,codebook,partition] = dpcmopt(x,1,initcodebook);

% Quantize x using DPCM.
encodedx = dpcmenco(x,codebook,partition,predictor);

% Try to recover x from the modulated signal.
decodedx = dpcmdeco(encodedx,codebook,predictor);

distor = sum((x-decodedx).^2)/length(x) % Mean square error
The output is
distor =
    0.0063
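The internals of dpcmopt are not shown here, but the flavor of the optimization can be sketched: fit the predictor coefficient to the training data by least squares, then run Lloyd-style iterations on the resulting prediction errors to refine the codebook. The function below is an illustrative order-1 stand-in with a made-up name, not the toolbox algorithm.

```python
import numpy as np

def dpcm_opt_order1(x, initcodebook, iters=30):
    # Least-squares fit of y(k) = p*x(k-1) to the training signal.
    p = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
    e = x[1:] - p * x[:-1]            # open-loop prediction errors
    codebook = np.sort(np.asarray(initcodebook, dtype=float))
    for _ in range(iters):            # Lloyd-style codebook refinement
        idx = np.argmin(np.abs(e[:, None] - codebook[None, :]), axis=1)
        for j in range(len(codebook)):
            members = e[idx == j]
            if len(members) > 0:
                codebook[j] = members.mean()
        codebook = np.sort(codebook)
    partition = (codebook[:-1] + codebook[1:]) / 2
    return np.array([0.0, p]), codebook, partition

t = np.linspace(0, 2 * np.pi, 101)
x = 2 * ((3 * t / (2 * np.pi)) % 1.0) - 1   # rising sawtooth in [-1, 1]
predictor, codebook, partition = dpcm_opt_order1(x, np.linspace(-1, 1, 21))
```

Tuning the codebook to the actual error distribution, rather than spacing levels uniformly, is what drives the distortion down relative to the previous example.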