Implementing Turbo Codes in Python
Overview
Turbo codes are a class of error correction codes built so that the receiver can
attempt to reconstruct the original message if it was corrupted over a noisy
channel. As an example, they are used in 3GPP LTE (4G) for cellular
communications.
Turbo codes have become essential to certain wireless communication systems because of their performance: with sufficiently long codes it is possible to operate near the channel capacity while maintaining reasonable decoding complexity.
I do not delve into puncturing, which is optional from the decoder’s point of view and helps increase the rate of the code (e.g. from R=1/3 to R=1/2).
Turbo Encoders
In this example, a rate 1/3 turbo code encoder consists of two recursive
systematic convolutional (RSC) encoders, and an interleaver before the second
encoder. The purpose of the interleaver is to ensure the parity bits are mostly
independent. Figure 1 provides a simple picture of the encoder setup.
Figure 1: Turbo Encoder Diagram
In the trellis diagram of each RSC encoder (figure 2), the leftmost numbers represent the state of the encoder, while the binary pair on each transition represents the input bit / output bit.
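To make the trellis concrete in code, here is a minimal sketch that tabulates its transitions. It assumes a 4-state RSC with the common textbook generator polynomials (7, 5) in octal (feedback 1 + D + D^2, feedforward 1 + D^2); the actual generators behind the figures may differ, and the helper name is my own.

```python
# Minimal trellis sketch: for every (state, input bit) pair, record the next
# state and the (systematic, parity) output labels, i.e. the
# "input bit / output bit" pairs on the transitions.
# Assumes a 4-state RSC with octal generators (7, 5).
def build_trellis(n_states=4):
    next_state = {}   # (state, input_bit) -> next state
    output = {}       # (state, input_bit) -> (systematic bit, parity bit)
    for state in range(n_states):
        s1, s2 = (state >> 1) & 1, state & 1   # shift-register contents
        for b in (0, 1):
            d = b ^ s1 ^ s2                    # feedback 1 + D + D^2
            p = d ^ s2                         # feedforward 1 + D^2
            next_state[(state, b)] = (d << 1) | s1
            output[(state, b)] = (b, p)
    return next_state, output
```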
It is extremely important that the first component RSC encoder terminate at its
original state (usually zero), otherwise the turbo decoder will introduce bit
errors near the end of the input message and mess up any BER curves.
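A sketch of the corresponding encoder is below. It uses the same assumed (7, 5) RSC as above; rsc_encode drives the first encoder back to the all-zero state with two tail steps, and turbo_encode wires in the interleaver for the second encoder. The function names are my own, not from any particular library.

```python
import numpy as np

def rsc_encode(bits, terminate=True):
    """Rate-1/2 RSC encoding; returns (systematic, parity) bit arrays."""
    s1 = s2 = 0                              # shift register (encoder state)
    systematic, parity = [], []
    for b in bits:
        d = b ^ s1 ^ s2                      # feedback 1 + D + D^2
        systematic.append(int(b))
        parity.append(d ^ s2)                # feedforward 1 + D^2
        s1, s2 = d, s1
    if terminate:
        # Two tail steps: choose the input that cancels the feedback so the
        # encoder returns to the all-zero state.
        for _ in range(2):
            b = s1 ^ s2
            d = b ^ s1 ^ s2                  # equals 0 by construction
            systematic.append(b)
            parity.append(d ^ s2)
            s1, s2 = d, s1
    return np.array(systematic), np.array(parity)

def turbo_encode(bits, interleaver):
    """Rate-1/3 turbo encoding: systematic stream plus two parity streams."""
    sys1, par1 = rsc_encode(bits, terminate=True)
    # The second encoder sees the interleaved message and is left
    # unterminated in this simplified sketch; note that sys1/par1 are two
    # bits longer than par2 because of the tail.
    _, par2 = rsc_encode(np.asarray(bits)[interleaver], terminate=False)
    return sys1, par1, par2
```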
Turbo Decoders
Each SISO decoder first computes a branch metric for every transition on the trellis:

$$\gamma_k(s_m, s_n) = c_k^0 L_a(c_k^0) + c_k^0 L_c r_k^0 + c_k^1 L_c r_k^1$$

$s_m$ and $s_n$ correspond to the state transition that occurred on the trellis. $c_k^0$ and $c_k^1$ correspond to the input / output bits of the trellis shown in figure 2, $r_k^0$ and $r_k^1$ are the received systematic and parity values, and $L_c$ is the channel reliability. $L_a$ corresponds to the prior (extrinsic) values, which are initialized to zero when the first decoder executes on the first iteration.
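In code the branch metric is essentially a one-liner. The sketch below assumes the trellis labels are mapped to $\pm 1$ (0 → −1, 1 → +1) and that $L_c$ is the channel reliability, e.g. $4 E_s/N_0$ for BPSK over an AWGN channel.

```python
# Branch metric sketch. c0/c1 are the systematic and parity labels of the
# transition mapped to +/-1, r0/r1 the received soft values at time k,
# La the a-priori LLR (0.0 on the first pass of the first decoder) and
# Lc the channel reliability.
def branch_metric(c0, c1, r0, r1, La, Lc):
    return c0 * La + Lc * (c0 * r0 + c1 * r1)
```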
Compute the forward path metrics recursively along the trellis:

$$\alpha_k(s_n) = \max_{s_m} \left( \alpha_{k-1}(s_m) + \gamma_k(s_m, s_n) \right)$$

$$\alpha_0(s_n) = \begin{cases} 0, & s_n = 0 \\ -\infty, & s_n \neq 0 \end{cases}$$
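Here is a minimal sketch of this recursion, assuming the branch metrics have already been collected into an array of shape (K, n_states, n_states), with −∞ marking transitions that do not exist in the trellis.

```python
import numpy as np

def forward_metrics(gamma):
    """Max-log-MAP forward recursion over a (K, S, S) branch-metric array."""
    K, n_states, _ = gamma.shape
    alpha = np.full((K + 1, n_states), -np.inf)
    alpha[0, 0] = 0.0                        # the encoder starts in state 0
    for k in range(1, K + 1):
        for sn in range(n_states):
            # max over all predecessor states s_m
            alpha[k, sn] = np.max(alpha[k - 1, :] + gamma[k - 1, :, sn])
    return alpha
```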
Compute the backward path metrics in the opposite direction on the trellis:
$$\beta_k(s_m) = \max_{s_n} \left( \beta_{k+1}(s_n) + \gamma_{k+1}(s_m, s_n) \right)$$

$$\beta_K(s_m) = \begin{cases} 0, & s_m = 0 \\ -\infty, & s_m \neq 0 \end{cases}$$
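A matching sketch of the backward recursion, under the same assumptions as forward_metrics() above:

```python
import numpy as np

def backward_metrics(gamma):
    """Max-log-MAP backward recursion; mirrors forward_metrics()."""
    K, n_states, _ = gamma.shape
    beta = np.full((K + 1, n_states), -np.inf)
    # The first encoder is terminated in the zero state; an unterminated
    # trellis would initialise all end states to 0 instead.
    beta[K, 0] = 0.0
    for k in range(K - 1, -1, -1):
        for sm in range(n_states):
            beta[k, sm] = np.max(beta[k + 1, :] + gamma[k, sm, :])
    return beta
```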
Finally, combine the path and branch metrics into a log-likelihood ratio (LLR) for each systematic bit, comparing the best path through transitions where $x_k = 1$ against the best path where $x_k = 0$:

$$L(x_k) = \max_{(s_m, s_n):\, x_k = 1} \left( \alpha_{k-1}(s_m) + \gamma_k(s_m, s_n) + \beta_k(s_n) \right) - \max_{(s_m, s_n):\, x_k = 0} \left( \alpha_{k-1}(s_m) + \gamma_k(s_m, s_n) + \beta_k(s_n) \right)$$
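The corresponding sketch uses 0-based time indices, so the LLR of bit k combines alpha[k], gamma[k] and beta[k + 1]; sys_label[sm, sn] is assumed to hold the systematic bit labelling the transition from s_m to s_n, or −1 if that transition does not exist.

```python
import numpy as np

def llr(alpha, beta, gamma, sys_label):
    """Max-log-MAP LLRs: best x_k = 1 path minus best x_k = 0 path."""
    K, n_states, _ = gamma.shape
    L = np.zeros(K)
    for k in range(K):
        best = {0: -np.inf, 1: -np.inf}
        for sm in range(n_states):
            for sn in range(n_states):
                bit = sys_label[sm, sn]
                if bit < 0:
                    continue                 # no transition sm -> sn
                metric = alpha[k, sm] + gamma[k, sm, sn] + beta[k + 1, sn]
                best[bit] = max(best[bit], metric)
        L[k] = best[1] - best[0]
    return L
```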
The input vector can be interpreted as a list of tuples, with the first value of each tuple corresponding to the systematic (original) bit and the remaining values corresponding to the parity bits of the codeword. In the MAP algorithm, we are trying to
determine the likelihood of receiving a particular bit at time k by using the
branch and path metrics to score transitions on the trellis. Only the branch
metric calculation requires the input vector, after which the path metrics are
calculated recursively using the branch metrics as input.
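Here is a rough sketch of how those steps can be strung together into a single SISO decoding pass. It reuses build_trellis(), branch_metric(), forward_metrics(), backward_metrics() and llr() from the snippets above, and keeps the same assumed ±1 mapping of the trellis labels.

```python
import numpy as np

def siso_decode(r_sys, r_par, La, Lc, n_states=4):
    """One max-log-MAP pass over the received systematic/parity soft values."""
    K = len(r_sys)
    next_state, output = build_trellis(n_states)
    gamma = np.full((K, n_states, n_states), -np.inf)
    sys_label = np.full((n_states, n_states), -1, dtype=int)
    for (sm, b), sn in next_state.items():
        c0, c1 = output[(sm, b)]
        sys_label[sm, sn] = c0
        for k in range(K):
            gamma[k, sm, sn] = branch_metric(2 * c0 - 1, 2 * c1 - 1,
                                             r_sys[k], r_par[k], La[k], Lc)
    alpha = forward_metrics(gamma)
    beta = backward_metrics(gamma)
    return llr(alpha, beta, gamma, sys_label)
```

Precomputing gamma as a dense (K, S, S) array keeps the recursions simple at the cost of memory; a real implementation would typically store only the valid transitions.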
To further show how a turbo decoder works, figure 4 displays the soft bit outputs
after each iteration. One can see how the turbo decoder gains confidence after
each computation of the log-likelihood ratios (LLRs). The distance between the
soft bit values and the origin grows with every iteration, making the hard decision ($x_k > 0 \to 1$, $x_k < 0 \to 0$) easier.
Figure 4: Soft bit outputs (LLRs) after each decoder iteration
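Below is a rough sketch of that iteration loop, reusing siso_decode() from above: each half-iteration subtracts the systematic and prior contributions from the SISO output to obtain the extrinsic information, permutes it through the interleaver, and hands it to the other decoder as its new prior; it also includes the early-exit check discussed below. Because the branch metric as written above omits the usual 1/2 factor, the raw SISO output is twice the LLR, hence the 0.5 scaling. Tail bits are ignored here for simplicity.

```python
import numpy as np

def turbo_decode(r_sys, r_par1, r_par2, interleaver, Lc, n_iter=8):
    """Iterative decoding sketch, reusing siso_decode() from above."""
    K = len(r_sys)
    deinterleaver = np.argsort(interleaver)
    La1 = np.zeros(K)                        # priors start at zero
    for _ in range(n_iter):
        L1 = siso_decode(r_sys, r_par1, La1, Lc)
        Le1 = 0.5 * L1 - Lc * r_sys - La1    # extrinsic part of decoder 1
        La2 = Le1[interleaver]
        L2 = siso_decode(r_sys[interleaver], r_par2, La2, Lc)
        Le2 = 0.5 * L2 - Lc * r_sys[interleaver] - La2
        La1 = Le2[deinterleaver]
        # Early exit once the hard decisions of the two SISO decoders agree.
        if np.array_equal(L1[interleaver] > 0, L2 > 0):
            break
    L = L2[deinterleaver]
    return (L > 0).astype(int), L            # hard decisions and soft LLRs
```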
In order to test the turbo decoder over a wide range of signal-to-noise ratios (SNRs), I produced a bit error rate (BER) curve comparing coded BPSK performance against an uncoded BPSK signal. One can measure a very large coding gain if a required BER is specified, as shown in figure 5. The number of iterations depends on whether the outputs of the SISO decoders agree with each other; this is called the early-exit criterion.
Figure 5: BER curves for turbo-coded and uncoded BPSK
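A rough sketch of such a BER sweep is shown below, reusing turbo_encode() and turbo_decode() from above. The block length, Eb/N0 grid, and stopping rule are illustrative choices, not the parameters behind figure 5.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 256                                      # message length (illustrative)
R = 1.0 / 3.0                                # nominal code rate, tail ignored
interleaver = rng.permutation(K)

for ebn0_db in np.arange(0.0, 2.5, 0.5):
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = np.sqrt(1.0 / (2.0 * R * ebn0))  # AWGN std for unit-energy BPSK
    Lc = 4.0 * R * ebn0                      # channel reliability 4 * Es/N0
    errors = bits_sent = 0
    while errors < 100:                      # crude confidence target
        msg = rng.integers(0, 2, K)
        sys_b, par1_b, par2_b = turbo_encode(msg, interleaver)
        # BPSK map 0 -> -1, 1 -> +1 and add noise; tail bits are dropped so
        # all three streams have the same length in this sketch.
        tx = lambda b: 2.0 * b[:K] - 1.0 + sigma * rng.standard_normal(K)
        decoded, _ = turbo_decode(tx(sys_b), tx(par1_b), tx(par2_b),
                                  interleaver, Lc)
        errors += int(np.count_nonzero(decoded != msg))
        bits_sent += K
    print(f"Eb/N0 = {ebn0_db:.1f} dB  coded BER = {errors / bits_sent:.2e}")
```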