
Turbo Code

Turbo codes, introduced in 1993 by Claude Berrou, Alain Glavieux, and Punya
Thitimajshima, revolutionized error-correcting codes by achieving near-Shannon-limit
performance with practical complexity. Turbo codes are based on parallel
concatenation of two or more convolutional codes, combined with an interleaver and
decoded via iterative decoding.

Shannon's Limit
According to Claude Shannon's channel capacity theorem, the maximum rate at which
information can be transmitted with arbitrarily low error over a noisy channel is

C = B · log2(1 + S/N)

where C is the channel capacity in bits per second, B is the channel bandwidth in Hz,
and S/N is the signal-to-noise ratio. Turbo codes were among the first practical codes
to perform within 0.5 dB of this theoretical limit.

Turbo Codes demonstrated that practical coding could approach the theoretical
maximum capacity of a channel, as established by Shannon, while maintaining low
error rates — revolutionizing modern digital communications.
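
To make the capacity formula concrete, here is a minimal Python sketch; the
bandwidth and SNR values are arbitrary illustrations, not taken from any standard:

import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Maximum error-free data rate of an AWGN channel: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Example: a 1 MHz channel at 10 dB SNR (illustrative values)
snr_db = 10.0
snr_linear = 10 ** (snr_db / 10.0)
print(shannon_capacity(1e6, snr_linear))  # ~3.46 Mbit/s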

Before Turbo Codes
▪ Block Codes (e.g., Hamming, BCH):
These codes added structured redundancy to detect and correct errors.
However, they had limited error-correcting capabilities and did not approach
Shannon’s theoretical capacity limit.

▪ Convolutional Codes + Viterbi Decoding:
Widely adopted in telecommunication standards like GSM and satellite
communication. These codes offered better performance than block codes and
worked well for real-time decoding, but they remained 1–2 dB away from
Shannon's limit.

Turbo Codes Revolutionized Error Correction


Turbo codes, introduced in the 1990s, were the first practical codes that came within
0.5 dB of the Shannon limit.

They brought in three major innovations:

▪ Iterative Decoding:
Unlike Viterbi decoding (single pass), Turbo decoding loops through multiple
iterations, refining the decision with each pass.

▪ Use of Soft Information (LLR - Log-Likelihood Ratios):
Turbo decoders use probabilistic inputs (not hard 0/1 decisions), improving
accuracy in error correction.

▪ Random Interleaving:
Data bits are randomly shuffled before encoding, which spreads burst errors
across time, making them easier to correct during decoding.

Basic Principles of Channel Coding


Channel coding is a technique used in digital communication to add redundancy to
the transmitted message. This redundancy enables the receiver to detect and correct
errors caused by noise and interference in the communication channel.

Purpose of Channel Coding:

▪ Correct errors caused by noise, fading, or interference.

▪ Improve the reliability of data transmission without increasing transmit power.

▪ Maintain data integrity over unreliable or noisy channels.

Types of Channel Coding:


1. Block Coding:

▪ Divides input data into fixed-size blocks.

▪ Adds redundant bits based on mathematical rules.

▪ Example: Hamming Code, Reed-Solomon.

2. Convolutional Coding:

▪ Adds redundancy based on current and previous input bits (uses memory).

▪ Common in real-time systems like mobile and satellite communication.

▪ Often decoded using the Viterbi Algorithm.

3. Turbo Coding:

▪ Combines two or more convolutional codes with interleaving.

▪ Offers near-Shannon limit performance.

▪ Used in 3G/4G systems.

4. LDPC (Low-Density Parity-Check) Coding:

▪ Uses sparse parity-check matrices.

▪ Achieves very high performance close to the Shannon limit.

▪ Widely used in 5G and Wi-Fi 6.
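
To make the block coding described in item 1 above concrete, here is a minimal
Python sketch of a Hamming(7,4) encoder; the generator matrix is one common
systematic choice and is shown purely as an illustration:

import numpy as np

# Hamming(7,4): 4 data bits -> 7 coded bits (code rate 4/7).
# Systematic generator matrix [I | P] with a standard parity pattern.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def hamming74_encode(data_bits):
    """Encode a length-4 bit vector into a length-7 codeword (mod-2 arithmetic)."""
    return np.mod(np.dot(data_bits, G), 2)

print(hamming74_encode(np.array([1, 0, 1, 1])))  # [1 0 1 1 0 1 0]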

Code Rate (R):

▪ The ratio of information bits (k) to total transmitted bits (n): R = k/n.

▪ A lower code rate (e.g., 1/2) means more redundancy; a higher code rate
(e.g., 3/4) means less redundancy.

BER (Bit Error Rate):

▪ The fraction of bits received in error over total bits transmitted.

▪ A lower BER indicates better performance.

SNR (Signal-to-Noise Ratio):

• Ratio of signal power to noise power, usually expressed in dB (decibels).

• Higher SNR usually leads to lower BER.
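
These three quantities can be related in a few lines of Python; the numbers used
below are illustrative only:

def code_rate(k_info_bits, n_coded_bits):
    """Code rate R = k/n; e.g. R = 1/3 means two redundant bits per information bit."""
    return k_info_bits / n_coded_bits

def bit_error_rate(n_bit_errors, n_bits_sent):
    """BER = bits received in error / total bits transmitted."""
    return n_bit_errors / n_bits_sent

def snr_db_to_linear(snr_db):
    """Convert an SNR expressed in dB to a linear power ratio."""
    return 10 ** (snr_db / 10.0)

print(code_rate(1, 3))                 # 0.333..., the typical mother code rate of a turbo code
print(bit_error_rate(12, 1_000_000))   # 1.2e-05
print(snr_db_to_linear(3.0))           # ~2.0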

Why Turbo Codes?

Feature              | Convolutional Code | Turbo Code
Near Shannon?        | ❌                 | ✅
Iterative decoding   | ❌                 | ✅
Complexity           | Moderate           | Higher
Latency              | Low                | Higher (due to iterations)

Turbo codes outperform traditional schemes such as convolutional codes with Viterbi
decoding in low-SNR regimes, providing substantial coding gain.

Architecture of Turbo Codes


The architecture of Turbo Codes is built on the principle of Parallel Concatenated
Convolutional Codes (PCCC). This structure uses two identical Recursive Systematic
Convolutional (RSC) encoders connected via an interleaver. The input data stream is
processed simultaneously by both encoders, one directly and one after interleaving,
to generate high-performance error-correcting codes.

The diagram illustrates the encoder structure of Turbo Codes, which are powerful
error-correcting codes used in mobile communication systems such as 3G and 4G.
Turbo codes achieve performance close to Shannon's limit by combining two
Recursive Systematic Convolutional (RSC) encoders with an interleaver.

Reference: https://en.wikipedia.org/

Turbo encoders form the foundation of turbo codes. They are designed using two
Recursive Systematic Convolutional (RSC) encoders connected in parallel with an
interleaver between them. The encoder outputs consist of:

▪ The original (systematic) bits,

▪ Parity bits from the first RSC encoder,

▪ Parity bits from the second RSC encoder (after interleaving).

This turbo encoder divides the transmitted data into three parts:

1. Payload bits – This is the original message, consisting of m bits.

2. First set of parity bits – Generated by passing the payload through the first
Recursive Systematic Convolutional (RSC) encoder.

3. Second set of parity bits – Generated by passing a permuted version of the
same payload through a second RSC encoder.

So, while the payload remains unchanged, two different sets of parity bits are
produced using different inputs: one original, one interleaved. Together, the final
transmission consists of m + n bits, where n is the total number of parity bits,
giving the encoder a code rate of R = m / (m + n).

The interleaver plays a crucial role here—it shuffles the input bits before they go into
the second encoder, ensuring that errors are less likely to affect both parity streams
in the same way.

▪ The architecture features two identical RSC encoders, labelled C1 and C2.
▪ These are arranged in parallel concatenation, meaning they operate on the
same input stream but one uses a reordered version.
▪ The "M" blocks in the diagram represent memory registers, used for delay and
state storage.
▪ The delay line and interleaver make sure the bits dk are processed in different
sequences by C1 and C2.
▪ At the output, the encoder emits the systematic bits xk, and two sets of parity
bits: y1k and y2k.

If the encoders C1 and C2 are used in n1 and n2 iterations respectively, their rates can
be calculated as:

R1 = (n1 + n2) / (2·n1 + n2)
R2 = (n1 + n2) / (n1 + 2·n2)

These formulas show how the number of iterations and parity outputs influence the
code rate, which determines how much redundancy is added.

Recursive Systematic Convolutional (RSC) Encoder
A Recursive Systematic Convolutional (RSC) encoder outputs:

▪ The input bit itself (systematic output),

▪ A parity bit generated by passing the input through a shift-register-based
generator polynomial.

Encoder Parameters:

• Constraint length K

• Generator polynomials G1,G2

• Rate: 1/2 for each encoder

A typical RSC encoder with constraint length 3:

Reference: https://en.wikipedia.org/
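
A minimal Python sketch of an RSC encoder and its use in a parallel concatenation
is given below. The generator pair (feedback 1 + D + D^2, feedforward 1 + D^2, i.e.
the classic (7, 5) octal example) and the pseudo-random interleaver are illustrative
assumptions, not the polynomials or interleaver defined by any particular standard:

import numpy as np

def rsc_encode(bits, feedback=(1, 1, 1), feedforward=(1, 0, 1)):
    """Recursive Systematic Convolutional encoder, constraint length 3.
    The systematic output is the input itself; only the parity stream is returned.
    feedback/feedforward are tap vectors for 1 + D + D^2 and 1 + D^2."""
    state = [0, 0]                        # two memory elements (the "M" blocks)
    parity = []
    for b in bits:
        # recursion: current bit XOR the feedback taps applied to the state
        a = b ^ (feedback[1] & state[0]) ^ (feedback[2] & state[1])
        # parity: feedforward taps applied to (a, state)
        p = (feedforward[0] & a) ^ (feedforward[1] & state[0]) ^ (feedforward[2] & state[1])
        parity.append(p)
        state = [a, state[0]]             # shift-register update
    return parity

def turbo_encode(bits, interleaver):
    """Parallel concatenation: systematic bits xk, parity y1k, interleaved parity y2k."""
    y1 = rsc_encode(bits)
    y2 = rsc_encode([bits[i] for i in interleaver])
    return list(bits), y1, y2             # overall code rate 1/3 before any puncturing

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=8).tolist()
pi = rng.permutation(len(data)).tolist()  # illustrative pseudo-random interleaver
xk, y1k, y2k = turbo_encode(data, pi)
print(xk, y1k, y2k)

In a real system the interleaver is a fixed permutation defined by the standard (for
example, the QPP interleaver specified for LTE in 3GPP TS 36.212) and known to both
transmitter and receiver.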

Interleaver Design and Its Role


The interleaver is a fundamental component in the turbo coding system. It rearranges
the order of the input bits before they are fed into the second Recursive Systematic
Convolutional (RSC) encoder. This rearrangement breaks up patterns in the input
data, ensuring that burst errors or correlated errors do not degrade the decoding
performance.

Purpose of Interleaving

▪ Error Spread: Interleaving ensures that any localized error burst is spread
across a wider range, improving error correction.

▪ Randomization: It reduces dependencies between the bits being processed by
the two encoders.

▪ Diversity: Helps introduce time and statistical diversity during decoding.

Types of Interleavers

1. Random Interleaver

▪ Bits are reordered based on a pseudo-random sequence.

▪ Most commonly used in Turbo Codes due to their effectiveness.

2. Block Interleaver

▪ Data is written in rows and read column-wise.

▪ Simple and deterministic, but less effective than random ones.

3. S-Random Interleaver

▪ A constrained random interleaver that ensures a minimum spacing (S)
between consecutive bits.

▪ Provides a balance between performance and implementation complexity.
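
The first two interleaver types can be sketched in a few lines of Python (the sizes and
the seed are illustrative); the inverse permutation at the end is the de-interleaver
needed on the decoder side:

import numpy as np

def random_interleaver(n, seed=42):
    """Pseudo-random interleaver: a fixed permutation of the indices 0..n-1."""
    return np.random.default_rng(seed).permutation(n)

def block_interleaver(n_rows, n_cols):
    """Block interleaver: data written row-wise is read out column-wise."""
    return np.arange(n_rows * n_cols).reshape(n_rows, n_cols).T.flatten()

def deinterleaver(pi):
    """Inverse permutation that restores the original bit order at the decoder."""
    inv = np.empty_like(pi)
    inv[pi] = np.arange(len(pi))
    return inv

pi = random_interleaver(8)
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
shuffled = bits[pi]
print(np.array_equal(bits, shuffled[deinterleaver(pi)]))  # True: de-interleaving undoes the shuffle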

Design Considerations

▪ Interleaver Size: Larger interleavers offer better performance but increase
latency.

▪ Collision Avoidance: Good interleavers avoid mapping two closely spaced bits
to closely spaced outputs.

▪ Hardware Realization: Should be simple enough to implement in ASIC/FPGA.

Role in Decoding
▪ The decoder uses the same interleaver pattern in reverse (de-interleaver) to
match parity and systematic bits during each iteration.

▪ Correct alignment of bits after interleaving is crucial for accurate log-likelihood
ratio (LLR) computation.

Turbo Decoder: Iterative Decoding


The turbo decoder employs iterative decoding, a key innovation that allows Turbo
Codes to approach the Shannon limit. It uses two soft-input soft-output (SISO)
decoders corresponding to the two RSC encoders in the encoder structure.

▪ The decoder operates iteratively by exchanging soft information (log-likelihood
ratios) between the two component decoders.

▪ With each iteration, the estimation of the transmitted data improves.

Reference: https://www.intel.com/

Inputs:

▪ r(Xk) = Received systematic bits

▪ r(Zk) = Received parity bits from Encoder 1

▪ r(Z'k) = Received parity bits from Encoder 2

Step 1: Interleaving

▪ The systematic bits r(Xk) are passed to both the upper and lower decoders.

▪ An interleaver scrambles the sequence for the lower decoder, mirroring the
interleaving performed at the encoder.

Step 2: Decoding (Iterative Process)

▪ Upper Decoder processes r(Xk) and r(Zk) and generates extrinsic information.

▪ This extrinsic info is interleaved and passed to the Lower Decoder.

▪ Lower Decoder uses r(Z'k) and the interleaved extrinsic input to decode.

▪ It then generates updated extrinsic information, which is deinterleaved and
passed back to the Upper Decoder.

Step 3: Output

▪ After several iterations, the refined information is passed to the final output as
the decoded bit stream.
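
The exchange of extrinsic information described in Steps 1–3 can be outlined as a
structural Python sketch. Here siso_decode stands in for a full soft-input soft-output
component decoder (e.g. a BCJR/MAP decoder, not implemented here), the interleaver
permutation pi is assumed known to the receiver, and the interface is an illustrative
assumption rather than a standard API:

import numpy as np

def turbo_decode(r_x, r_z1, r_z2, pi, siso_decode, n_iters=8):
    """Structural sketch of iterative turbo decoding.
    r_x, r_z1, r_z2 : received channel LLRs for systematic bits and the two parity streams
    pi              : interleaver permutation used at the encoder
    siso_decode     : assumed soft-input soft-output component decoder returning extrinsic LLRs"""
    pi = np.asarray(pi)
    inv = np.empty_like(pi)
    inv[pi] = np.arange(len(pi))              # de-interleaver
    extrinsic_2 = np.zeros_like(r_x)          # no a-priori knowledge before the first iteration
    for _ in range(n_iters):
        # Upper decoder: systematic LLRs + parity 1 + a-priori info from the lower decoder
        extrinsic_1 = siso_decode(r_x, r_z1, extrinsic_2)
        # Lower decoder works on the interleaved sequence with parity 2, then is de-interleaved
        extrinsic_2 = siso_decode(r_x[pi], r_z2, extrinsic_1[pi])[inv]
        # (an early-stopping check on the LLRs or a CRC could terminate the loop here)
    total_llr = r_x + extrinsic_1 + extrinsic_2
    return (total_llr < 0).astype(int)        # hard decision: negative LLR -> bit 1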

Key Features

▪ Based on soft-input soft-output (SISO) decoders.


▪ Uses extrinsic information exchange between decoders.
▪ Employs interleaving/deinterleaving for iterative improvement.
▪ Provides performance close to Shannon’s limit.

Termination Criteria

▪ Fixed Iterations: Typically, 5 to 15 iterations.

▪ Early Stopping: If LLR values or CRC indicate convergence, iterations stop.

Challenges

▪ Complexity increases with iterations.

▪ High latency in real-time systems.

▪ Memory requirements for LLR values.

Soft-Input Soft-Output (SISO) Decoding

The Soft-Input Soft-Output (SISO) decoder is the core functional unit used in turbo
decoding. Unlike traditional decoders that provide hard binary decisions, SISO
decoders accept and produce probabilistic information — typically represented as
log-likelihood ratios (LLRs). This enables iterative improvement of the decoding
accuracy.
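
For BPSK over an AWGN channel (bit 0 mapped to +1, bit 1 to -1, noise variance
sigma^2), the channel LLR fed to a SISO decoder takes the well-known form
L(r) = 2r / sigma^2. A small Python illustration, with arbitrary received samples and
noise level:

import numpy as np

def channel_llr(received, noise_variance):
    """LLR = ln[P(bit = 0 | r) / P(bit = 1 | r)] = 2r / sigma^2 for BPSK over AWGN.
    Positive values favour bit 0, negative values favour bit 1; the magnitude
    expresses confidence, which is the soft information a SISO decoder consumes."""
    return 2.0 * np.asarray(received) / noise_variance

r = np.array([+0.9, -1.1, +0.2, -0.05])    # noisy received samples (illustrative)
print(channel_llr(r, noise_variance=0.5))  # [ 3.6 -4.4  0.8 -0.2]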

BER Performance
This plot illustrates the Bit Error Rate (BER) performance of Turbo Codes used in LTE,
showing how it varies with Eb/N0 (energy per bit to noise power spectral density ratio)
under different conditions.

Reference: https://www.mathworks.com/

We can observe:

1. Increasing Iterations Improves Performance:

▪ For both block sizes, increasing the number of decoder iterations from
3 to 6 significantly improves BER.

▪ Example: For N=512, the BER curve drops faster with 6 iterations than
with 3.

2. Larger Block Size Gives Better Performance:

▪ Longer block size (N = 2048) shows much better performance than the
shorter one (N = 512).

▪ Example: At the same Eb/N0, the BER is significantly lower for N=2048.

3. Best Case:

▪ The black diamond curve (N = 2048, 6 iterations) shows the lowest BER
across all Eb/N0 values. This is the most efficient configuration shown.

4. Error Floor Reduction:

▪ Larger block sizes and more iterations push the BER curve down,
achieving BERs as low as 10⁻⁷ or lower.

Points to understand

▪ Turbo codes become more powerful with more iterations, allowing the
decoder to refine its estimates.
▪ Longer block lengths provide more room for redundancy and interleaving,
improving error correction.
▪ LTE systems balance decoding complexity (number of iterations) and latency
(block size) against error performance.

Turbo Codes in LTE and 5G


Turbo codes have been a key component in mobile wireless standards.

LTE

▪ Used for channel coding in control and data channels.

▪ Configurable for different code rates through puncturing.

▪ Implemented with standard interleavers defined in 3GPP TS 36.212.

Role in 5G

▪ Turbo codes are not used in 5G NR data channels.

▪ Replaced by LDPC (data) and Polar Codes (control).

▪ Still used in fallback systems and legacy interoperability.

Advantages in LTE

▪ Excellent performance under fading.

▪ Flexible code rates.

▪ Mature hardware implementations.

Turbo codes represent a landmark in error correction coding, providing near-capacity
performance with practical implementations. Though now superseded in 5G by
LDPC and Polar Codes, turbo codes remain essential in legacy, satellite, and
aerospace systems.

Future Work

▪ Turbo-style decoding for new code families.

▪ Efficient hardware for IoT and embedded systems.

▪ Deep learning-enhanced turbo decoding.

Turbo codes continue to inspire advances in both theoretical research and practical
communication system design.

References
1. Berrou, C., Glavieux, A., & Thitimajshima, P. (1993). "Near Shannon limit error-
correcting coding and decoding: Turbo-codes". IEEE International Conference on
Communications, 1064–1070.

2. Benedetto, S., Montorsi, G. (1996). "Unveiling Turbo Codes: Some Results on Parallel
Concatenated Coding Schemes". IEEE Transactions on Information Theory, 42(2),
409–428.

3. 3GPP TS 36.212 V15.0.0. (2017). "Evolved Universal Terrestrial Radio Access (E-
UTRA); Multiplexing and channel coding".

4. Hagenauer, J. (1995). "The Turbo Principle: Tutorial Introduction and State of the Art".
International Symposium on Turbo Codes, Brest.

5. Costello, D. J., & Lin, S. (2004). Error Control Coding, 2nd Edition. Pearson Prentice
Hall.

6. Richardson, T. J., & Urbanke, R. L. (2008). Modern Coding Theory. Cambridge
University Press.

7. Wang, Z., & Poor, H. V. (2004). "Iterative (Turbo) Soft Interference Cancellation and
Decoding for Coded CDMA". IEEE Transactions on Communications, 47(7), 1046–
1061.

8. Moon, T. K. (2005). Error Correction Coding: Mathematical Methods and Algorithms.
Wiley-Interscience.

9. MATLAB Documentation. "lteTurboEncode" and "lteTurboDecode". MathWorks.

10. CommPy Library Documentation. Python-based communication system simulation
tools.

11. CCSDS 131.0-B-3 (2017). "TM Synchronization and Channel Coding". Consultative
Committee for Space Data Systems (CCSDS).

12. Qualcomm White Paper. (2013). "The Evolution of LTE and the Role of Turbo Codes".

13. Sklar, B. (2001). Digital Communications: Fundamentals and Applications. Prentice
Hall.

14. MacKay, D. J. C. (1999). "Good Error-Correcting Codes Based on Very Sparse
Matrices". IEEE Transactions on Information Theory, 45(2), 399–431.

15. Andrews, J. G., Buzzi, S., Choi, W., Hanly, S. V., Lozano, A., Soong, A. C., & Zhang, J.
C. (2014). "What Will 5G Be?" IEEE Journal on Selected Areas in Communications,
32(6), 1065–1082.

