Iterative Decoding of a Serially Concatenated Convolutional Code

This model shows how to use an iterative process to decode a serially concatenated convolutional code (SCCC).

    Note:   This example presents technology covered under U.S. Patent Number 6,023,783, "Hybrid concatenated codes and iterative decoding," assigned to the California Institute of Technology. The end user of this product is hereby granted a limited license to use this example solely for the purpose of assessing possible commercial and educational applications of the technology. Any other use or modification of this example may constitute a violation of this and/or other patents.

Exploring the Example

The simulation generates information bits, encodes them using a serially concatenated convolutional code, and transmits the coded information along a noisy channel. The simulation then decodes the received coded information, using an iterative decoding process, and computes error statistics based on different numbers of iterations. Throughout the simulation, the error rates appear in a Display block.

Open the model, doc_iterative_decoding_sccc, by entering the following at the MATLAB® command line.

doc_iterative_decoding_sccc

Variables in the Example

The Model Parameters block lets you vary the values of some quantities that the model uses. The table below indicates their names and meanings.

Name                    Meaning
Eb/No                   Eb/N0 in channel noise, measured in dB; used to compute the variance of the channel noise
Block size              The number of bits in each frame of uncoded data
Number of iterations    The number of iterations to use when decoding
Seed                    The initial seed in the Random Interleaver and Random Deinterleaver blocks
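
For reference, the following MATLAB commands show one common way to convert an Eb/N0 value in dB into a channel noise variance for this rate-1/3 code, assuming unit-energy BPSK symbols and real-valued noise. The variable names are illustrative, and the model itself may organize this computation differently.

EbNo     = 1;                              % Eb/N0 in dB (for example)
codeRate = 1/3;                            % overall rate of the concatenated code
EsNo     = EbNo + 10*log10(codeRate);      % Es/N0 in dB for one coded bit per BPSK symbol
noiseVar = 1/(2*10^(EsNo/10));             % noise variance per real dimension (N0/2 with Es = 1)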

Creating a Serially Concatenated Code

The encoding portion of the example uses a Convolutional Encoder block to encode a data frame, a Random Interleaver block to shuffle the bits in the codewords, and another Convolutional Encoder block to encode the interleaved bits. Because these blocks are connected in series with each other, the resulting code is called a serially concatenated code.

Together, these blocks encode the 1024-bit data frame into a 3072-bit frame representing a concatenated code. These sizes depend on the model's Block size parameter (see the Model Parameters block). The code rate of the concatenated code is 1/3.

In general, the purpose of interleaving is to protect codewords from burst errors in a noisy channel. A burst error that corrupts interleaved data actually has a small effect on each of several codewords, rather than a large effect on any one codeword. The smaller the error in an individual codeword, the greater the chance that the decoder can recover the information correctly.
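
The following MATLAB sketch mirrors this encoding chain outside Simulink, assuming the Communications Toolbox functions poly2trellis and convenc and using a randperm permutation (with an arbitrary seed) as a stand-in for the Random Interleaver block. The trellis definitions anticipate the polynomials listed in the next section.

blockSize    = 1024;                                        % bits per uncoded frame
outerTrellis = poly2trellis(3, [7 5], 7);                   % rate-1/2 recursive encoder
innerTrellis = poly2trellis([3 3], [7 0 5; 0 7 6], [7 7]);  % rate-2/3 recursive encoder

msg       = randi([0 1], blockSize, 1);                     % random information bits
outerCode = convenc(msg, outerTrellis);                     % 2048 coded bits
rng(67);                                                    % illustrative interleaver seed
perm      = randperm(numel(outerCode)).';                   % stand-in for the Random Interleaver
codeword  = convenc(outerCode(perm), innerTrellis);         % 3072 coded bits

After the outer encoder the frame contains 2048 bits, and after the inner encoder it contains 3072 bits, which matches the overall code rate of 1/3 described above.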

Convolutional Encoding Details

The two instances of the Convolutional Encoder block use their Trellis structure parameters to specify the convolutional codes. The table below lists the polynomials that define each of the two convolutional codes. The second encoder has two inputs and uses two rows of memory registers.

                        Outer Convolutional Code    Inner Convolutional Code
Generator Polynomials   1+D+D^2 and 1+D^2           First row: 1+D+D^2, 0, and 1+D^2
                                                    Second row: 0, 1+D+D^2, and 1+D
Feedback Polynomials    1+D+D^2                     1+D+D^2 for each row
Constraint Lengths      3                           3 for each row
Code rate               1/2                         2/3
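
In MATLAB, such polynomials are typically expressed in octal form for poly2trellis: 1+D+D^2 is 7, 1+D^2 is 5, and 1+D is 6. A minimal sketch, assuming the feedback syntax of poly2trellis, builds both trellises and confirms the code rates in the table above.

outerTrellis = poly2trellis(3, [7 5], 7);                   % generators [7 5], feedback 7
innerTrellis = poly2trellis([3 3], [7 0 5; 0 7 6], [7 7]);  % one row of generators per input
kOuter = log2(outerTrellis.numInputSymbols)                 % 1 input bit
nOuter = log2(outerTrellis.numOutputSymbols)                % 2 output bits, so rate 1/2
kInner = log2(innerTrellis.numInputSymbols)                 % 2 input bits
nInner = log2(innerTrellis.numOutputSymbols)                % 3 output bits, so rate 2/3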

Decoding Using an Iterative Process

The decoding portion of this example consists of two APP Decoder blocks, a Random Deinterleaver block, and several other blocks. Together, these blocks form a loop and operate at a rate six times that of the encoding portion of the example. The loop structure and higher rate combine to make the decoding portion an iterative process. Using multiple iterations improves the decoding performance. You can control the number of iterations by setting the Number of iterations parameter in the model's Model Parameters block. The default number of iterations is six.

Computations in Each Iteration

In each iteration, the decoding portion of the example decodes the inner convolutional code, deinterleaves the result, and decodes the outer convolutional code. The outer decoder's L(u) output signal represents the updated likelihoods of original message bits (that is, input bits to the outer encoder).

The looping strategy in this example enables the inner decoder to benefit in the next iteration from the outer decoder's work. To understand how the loop works, first recall the meanings of these signals:

  • The outer decoder's L(c) output signal represents the updated likelihoods of code bits from the outer encoder.

  • The inner decoder's L(u) input represents the likelihoods of input bits to the inner encoder.

The feedback loop recognizes that the primary difference between these two signals is the interleaving operation that occurs between the outer and inner encoders. Therefore, the loop interleaves the L(c) output of the outer decoder to replicate that interleaving operation, delays the interleaved data so that the inner decoder's two input ports represent data from the same time steps, and resets the L(u) input to the inner decoder to zero after each full set of iterations (six by default), so that each new frame starts with no prior information.
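
The following MATLAB sketch outlines this loop for a single frame, using two comm.APPDecoder System objects in place of the APP Decoder blocks and index vectors in place of the Random Interleaver and Random Deinterleaver blocks. It assumes the trellises from the earlier sketches and a BPSK mapping in which bit 1 corresponds to +1, so a positive log-likelihood favors a 1; it is meant to illustrate the data flow rather than reproduce the Simulink model exactly.

blockSize    = 1024;                                        % bits per uncoded frame
numIter      = 6;                                           % decoding iterations
EbNo         = 1;                                           % Eb/N0 in dB
outerTrellis = poly2trellis(3, [7 5], 7);                   % rate-1/2 recursive encoder
innerTrellis = poly2trellis([3 3], [7 0 5; 0 7 6], [7 7]);  % rate-2/3 recursive encoder

% Encode one frame: outer code -> random interleaver -> inner code
msg       = randi([0 1], blockSize, 1);
outerCode = convenc(msg, outerTrellis);
rng(67);                                                    % illustrative interleaver seed
perm      = randperm(numel(outerCode)).';
codeword  = convenc(outerCode(perm), innerTrellis);

% BPSK (0 -> -1, 1 -> +1) over real AWGN; positive channel LLRs favor a 1
noiseVar   = 1/(2*(1/3)*10^(EbNo/10));
rx         = (2*codeword - 1) + sqrt(noiseVar)*randn(size(codeword));
Lc_channel = 2*rx/noiseVar;

% APP decoders for the inner and outer codes
innerDec = comm.APPDecoder('TrellisStructure', innerTrellis, ...
                           'Algorithm', 'True APP', 'CodedBitLLROutputPort', false);
outerDec = comm.APPDecoder('TrellisStructure', outerTrellis, 'Algorithm', 'True APP');

LU_inner = zeros(2*blockSize, 1);    % prior on inner-encoder input bits, reset for each frame
for iter = 1:numIter
    LuInner           = innerDec(LU_inner, Lc_channel);         % decode the inner code
    LcOuter           = zeros(size(LuInner));
    LcOuter(perm)     = LuInner;                                 % deinterleave to outer-code order
    [Lmsg, LcUpdated] = outerDec(zeros(blockSize, 1), LcOuter);  % decode the outer code
    LU_inner          = LcUpdated(perm);                         % re-interleave and feed back
    fprintf('Iteration %d: BER = %g\n', iter, mean((Lmsg >= 0) ~= msg));
end

Because Lc_channel stays fixed while the LU_inner prior is refined on each pass, later iterations give the inner decoder more reliable information about its input bits, which is what the per-iteration error rates in the model's Display block reflect.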

Results of the Iterative Loop

The result of decoding is a 1024-element frame whose elements indicate the likelihood that each of the 1024 message bits was a 0 or a 1. A nonnegative element indicates that the message bit was probably a 1, and a negative element indicates that the message bit was probably a 0. The Hard Decision block converts nonnegative and negative values to 1's and 0's, respectively, so that the results have the same form as the original uncoded binary data.
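
In MATLAB terms, this hard decision is a sign test on the log-likelihood values; for example, with an illustrative vector of soft outputs:

llr         = [-1.3; 0; 2.7];        % example log-likelihood values (illustrative)
decodedBits = double(llr >= 0)       % yields [0; 1; 1]: nonnegative -> 1, negative -> 0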

Results and Displays

The example includes a large Display block that shows the error rates obtained by comparing the decoded data with the original uncoded data. The display contains one error rate for each iteration of the decoding process: the first error rate reflects the performance of decoding with one iteration, the second reflects the performance of decoding with two iterations, and so on. The series of error rates shows that the error rate generally decreases as the number of iterations increases.

Change the Eb/No parameter to 1 dB and run the simulation. Observe that the bit error rates decrease with each iteration.

References

[1] Benedetto, S., D. Divsalar, G. Montorsi, and F. Pollara, "Serial Concatenation of Interleaved Codes: Performance Analysis, Design, and Iterative Decoding," JPL TDA Progress Report, Vol. 42-126, August 1996.

[2] Divsalar, Dariush, and Fabrizio Pollara, Hybrid Concatenated Codes and Iterative Decoding, U.S. Patent No. 6,023,783, Feb. 8, 2000.

[3] Heegard, Chris, and Stephen B. Wicker, Turbo Coding, Boston: Kluwer Academic Publishers, 1999.
