
Convolutionally encode binary data


code = convenc(msg,trellis)
code = convenc(msg,trellis,puncpat)
code = convenc(msg,trellis,...,init_state)
[code,final_state] = convenc(...)


code = convenc(msg,trellis) encodes the binary vector msg using the convolutional encoder whose MATLAB trellis structure is trellis. For details about MATLAB trellis structures, see Trellis Description of a Convolutional Code. Each symbol in msg consists of log2(trellis.numInputSymbols) bits. The vector msg contains one or more symbols. The output vector code contains one or more symbols, each of which consists of log2(trellis.numOutputSymbols) bits.
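
For example, a minimal sketch, assuming the same rate 1/2, constraint length 7 code used in the examples below: eight one-bit input symbols produce sixteen coded bits.

trellis = poly2trellis(7,[171 133]);  % rate 1/2 feedforward code
msg = randi([0 1],8,1);               % eight one-bit input symbols
code = convenc(msg,trellis);          % sixteen coded bits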

code = convenc(msg,trellis,puncpat) is the same as the syntax above, except that it specifies a puncture pattern, puncpat, to allow higher rate encoding. puncpat must be a vector of 1s and 0s, where the 0s indicate the punctured bits. puncpat must have a length of at least log2(trellis.numOutputSymbols) bits.
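
As an illustrative sketch only, the puncture pattern below is an assumed example rather than a prescribed standard; it keeps three of every four coded bits of the rate 1/2 code above, raising the overall rate to 2/3.

trellis = poly2trellis(7,[171 133]);
msg = randi([0 1],100,1);         % 100 message bits -> 200 unpunctured coded bits
puncpat = [1;1;0;1];              % keep 3 of every 4 coded bits (assumed example pattern)
punccode = convenc(msg,trellis,puncpat);
length(punccode)                  % 150 bits, so the overall rate is 100/150 = 2/3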

code = convenc(msg,trellis,...,init_state) allows the encoder registers to start at a state specified by init_state. init_state is an integer between 0 and trellis.numStates-1 and must be the last input parameter.

[code,final_state] = convenc(...) encodes the input message and also returns the encoder's state in final_state. final_state has the same format as init_state.



Encode five two-bit symbols using a rate 2/3 convolutional code.

data = randi([0 1],10,1);
trellis1 = poly2trellis([5 4],[23 35 0; 0 5 13]);
code1 = convenc(data,trellis1);

Verify that the encoded output is 15 bits, which is 3/2 times the length of the input sequence, data.

length(code1)

ans =

    15

Define the encoder's trellis structure explicitly and then use convenc to encode 10 one-bit symbols.

trellis2 = struct('numInputSymbols',2,'numOutputSymbols',4,...
    'numStates',4,'nextStates',[0 2;0 2;1 3;1 3],...
    'outputs',[0 3;1 2;3 0;2 1]);
code2 = convenc(randi([0 1],10,1),trellis2);
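
As an optional check, the istrellis function confirms that a hand-built structure has valid trellis fields; for the structure above it returns true.

istrellis(trellis2)   % returns logical 1 (true) when the structure is a valid trellis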

Use the final state and initial state arguments when invoking convenc. Encode part of data, recording the final state for later use.

[code3,fstate] = convenc(data(1:6),trellis1);

Encode the rest of data, using fstate as an input argument.

code4 = convenc(data(7:10),trellis1,fstate);

Verify that [code3; code4] matches code1.

isequal(code1,[code3; code4])
ans =

  logical

   1


Create a trellis structure for a rate 1/2 feedforward convolutional code and use it to encode and decode a random bit stream.

Create a trellis in which the constraint length is 7 and the code generator is specified as a cell array of polynomial character vectors.

trellis = poly2trellis(7,{'1 + x^3 + x^4 + x^5 + x^6', ...
    '1 + x + x^3 + x^4 + x^6'})
trellis = 

  struct with fields:

     numInputSymbols: 2
    numOutputSymbols: 4
           numStates: 64
          nextStates: [64×2 double]
             outputs: [64×2 double]

Generate random binary data, convolutionally encode the data, and decode the data using the Viterbi algorithm.

data = randi([0 1],70,1);
codedData = convenc(data,trellis);
decodedData = vitdec(codedData,trellis,34,'trunc','hard');

Verify that there are no bit errors in the decoded data.

biterr(data,decodedData)

ans =

     0

Estimate bit error rate (BER) performance for hard-decision and soft-decision Viterbi decoders in AWGN. Compare the performance to that of an uncoded 64-QAM link.

Set the simulation parameters.

clear; close all
rng default
M = 64;                 % Modulation order
k = log2(M);            % Bits per symbol
EbNoVec = (4:10)';       % Eb/No values (dB)
numSymPerFrame = 1000;   % Number of QAM symbols per frame

Initialize the BER results vectors.

berEstSoft = zeros(size(EbNoVec));
berEstHard = zeros(size(EbNoVec));

Set the trellis structure and traceback length for a rate 1/2, constraint length 7, convolutional code.

trellis = poly2trellis(7,[171 133]);
tbl = 32;
rate = 1/2;

The main processing loop performs these steps:

  • Generate binary data.

  • Convolutionally encode the data.

  • Apply QAM modulation to the data symbols.

  • Pass the modulated signal through an AWGN channel.

  • Demodulate the received signal using hard decision and approximate LLR methods.

  • Viterbi decode the signals using hard and unquantized methods.

  • Calculate the number of bit errors.

The while loop continues to process data until either 100 errors are encountered or 1e7 bits are transmitted.

for n = 1:length(EbNoVec)
    % Convert Eb/No to SNR
    snrdB = EbNoVec(n) + 10*log10(k*rate);
    % Reset the error and bit counters
    [numErrsSoft,numErrsHard,numBits] = deal(0);

    while numErrsSoft < 100 && numBits < 1e7
        % Generate binary data and convert to symbols
        dataIn = randi([0 1],numSymPerFrame*k,1);

        % Convolutionally encode the data
        dataEnc = convenc(dataIn,trellis);

        % QAM modulate
        txSig = qammod(dataEnc,M,'InputType','bit');

        % Pass through AWGN channel
        rxSig = awgn(txSig,snrdB,'measured');

        % Demodulate the noisy signal using hard decision (bit) and
        % approximate LLR approaches
        rxDataHard = qamdemod(rxSig,M,'OutputType','bit');
        rxDataSoft = qamdemod(rxSig,M,'OutputType','approxllr', ...
            'NoiseVariance',10.^(snrdB/10));

        % Viterbi decode the demodulated data
        dataHard = vitdec(rxDataHard,trellis,tbl,'cont','hard');
        dataSoft = vitdec(rxDataSoft,trellis,tbl,'cont','unquant');

        % Calculate the number of bit errors in the frame. Adjust for the
        % decoding delay, which is equal to the traceback depth.
        numErrsInFrameHard = biterr(dataIn(1:end-tbl),dataHard(tbl+1:end));
        numErrsInFrameSoft = biterr(dataIn(1:end-tbl),dataSoft(tbl+1:end));

        % Increment the error and bit counters
        numErrsHard = numErrsHard + numErrsInFrameHard;
        numErrsSoft = numErrsSoft + numErrsInFrameSoft;
        numBits = numBits + numSymPerFrame*k;
    end

    % Estimate the BER for both methods
    berEstSoft(n) = numErrsSoft/numBits;
    berEstHard(n) = numErrsHard/numBits;
end

Plot the estimated hard and soft BER data. Plot the theoretical performance for an uncoded 64-QAM channel.

semilogy(EbNoVec,[berEstSoft berEstHard],'-*')
hold on
semilogy(EbNoVec,berawgn(EbNoVec,'qam',M))
legend('Estimated BER - Soft','Estimated BER - Hard','Theoretical Uncoded BER','location','best')
grid on
xlabel('Eb/No (dB)')
ylabel('Bit Error Rate')

As expected, soft-decision decoding produces the best results.


For some commonly used puncture patterns for specific rates and polynomials, see the last three references.


[1] Clark, G. C., Jr., and J. Bibb Cain, Error-Correction Coding for Digital Communications, New York, Plenum Press, 1981.

[2] Gitlin, Richard D., Jeremiah F. Hayes, and Stephen B. Weinstein, Data Communications Principles, New York, Plenum, 1992.

[3] Yasuda, Y., et al., "High-rate punctured convolutional codes for soft decision Viterbi decoding," IEEE Transactions on Communications, vol. COM-32, no. 3, pp. 315–319, Mar. 1984.

[4] Haccoun, D., and G. Begin, "High-rate punctured convolutional codes for Viterbi and sequential decoding," IEEE Transactions on Communications, vol. 37, no. 11, pp. 1113–1125, Nov. 1989.

[5] Begin, G., "Further results on high-rate punctured convolutional codes for Viterbi and sequential decoding," IEEE Transactions on Communications, vol. 38, no. 11, pp. 1922–1928, Nov. 1990.

Introduced before R2006a
