Turbo Codes Principles and Applications
Branka Vucetic
The University of Sydney
Sydney, Australia
Jinhong Yuan
The University of New South Wales
Sydney, Australia
"
~.
Vucetic, Branka.
Turbo codes : principles and applications / Branka Vucetic, Jinhong Yuan.
p. cm. -- (The Kluwer international series in engineering and computer science
; SECS 559.)
Includes bibliographical references and index.
ISBN 978-1-4613-7013-0 ISBN 978-1-4615-4469-2 (eBook)
DOI 10.1007/978-1-4615-4469-2
1. Coding theory. 2. Signal theory (Telecommunication) I. Yuan, Jinhong, 1969- II.
Title. III. Series.
List of Acronyms xi
Preface xxv
1 Introduction 1
1.1 Digital Communication System Structure. 2
1.2 Fundamental Limits . . . . . . . . . . . . 5
2 Block Codes 13
2.1 Block Codes 13
2.2 Linear Systematic Block Codes 15
2.3 Parity Check Matrix . . . . . . 16
2.4 The Minimum Distance of a Block Code 17
2.5 Maximum Likelihood Decoding of Block Codes for a BSC Channel 18
2.6 Maximum Likelihood Decoding of Block Codes for a Gaussian Channel 19
2.7 Weight Distribution of Block Codes . . . . . . 20
2.8 Performance Upper Bounds . . . . . . . . . . 23
2.8.1 Word Error Probability Upper Bounds 23
2.8.2 Bit Error Probability Upper Bounds 26
2.9 Coding Gain . . . . . . . . . . . . . . . . . . 28
2.10 Soft Decision Decoding of Block Codes . . . . 30
2.11 Trellis Structure of Linear Binary Block Codes. 30
3 Convolutional Codes 37
3.1 Introduction . . . . .............. 37
3.2 The Structure of (n,1) Convolutional Codes 38
3.3 The Structure of (n, k) Convolutional Codes 43
3.4 Systematic Form .. 45
3.5 Parity Check Matrix 50
3.6 Catastrophic Codes . 51
3.7 Systematic Encoders 53
3.8 State Diagram. . . . 58
3.9 Trellis Diagram . . . 60
3.10 Distance Properties of Convolutional Codes 62
3.11 Weight Distribution of Convolutional Codes 63
3.12 Punctured Convolutional Codes . . . . . . . 66
7 Interleavers 193
7.1 Interleaving . . . . . . . . . . . . . . . . 193
7.2 Interleaving with Error Control Coding. 195
7.3 Interleaving in Turbo Coding . . . . . . 196
7.3.1 The Effect of Interleaver Size on Code Performance 197
7.3.2 The Effect of Interleaver Structure on Code Performance 198
7.3.3 Interleaving Techniques. 200
7.4 Block Type Interleavers 200
7.4.1 Block Interleavers . . . . 200
7.4.2 Odd-Even Block Interleavers . 202
7.4.3 Block Helical Simile Interleavers . 204
7.5 Convolutional Type Interleavers . 206
7.5.1 Convolutional Interleavers 206
7.5.2 Cyclic Shift Interleavers 208
7.6 Random Type Interleavers . . . 209
7.6.1 Random Interleavers . . 209
7.6.2 Non-uniform Interleavers 210
7.6.3 S-random Interleavers . 211
7.7 Code Matched Interleavers . . . 213
7.8 Design of Code Matched Interleavers 214
7.9 Performance of Turbo Codes with Code Matched Interleavers 220
7.10 Performance of Turbo Codes with Cyclic Shift Interleavers 222
Index 307
List of Acronyms
ML maximum likelihood
VA Viterbi algorithm
WEF weight enumerating function
4.1 Best rate 1/3 turbo codes at high SNR's [14] 103
4.2 Rate 1/3 ODS turbo codes at low SNR's 104
This book grew out of our research, industry consulting and continuing education courses.
Turbo coding initially seemed to belong to a restricted research area, but has now become part of mainstream telecommunication theory and practice. The turbo decoding principles have found widespread applications not only in error control, but also in detection, interference suppression and equalization.
Intended for use by advanced students and professional engineers involved in coding and telecommunication research, the book includes both basic and advanced material. The chapters are sequenced so that the knowledge is acquired in a logical and progressive way. The algorithm descriptions and analysis are supported by examples throughout the book. Performance evaluations of the presented algorithms are carried out both analytically and by simulations.
Basic material included in the book has been taught to students
and practicing professionals over the last four years in the form of
senior undergraduate or graduate courses, lecture series and short
continuing education courses.
Most of the presented material is a compilation of various publications from the well-established literature. There are, however, original contributions, related to decoding algorithms, interleaver design, turbo coded modulation design for fading channels and performance of turbo codes on fading channels. The bidirectional SOVA decoding algorithm, presented in the book, had been developed for soft output detection and originally applied to cellular mobile receivers, but was subsequently modified for decoding of turbo codes. We have published various versions of the algorithm
Special Thanks
We would like to thank everyone who has been involved in the process of writing, proofreading and publishing this book. In particular we would like to thank Dr Lei Wei, Dr Steven Pietrobon, Dr Adrian Barbulescu, Dr Miroslav Despotovic, Prof Shu Lin, and Prof Dusan Drajic for reading the manuscript and providing valuable feedback. We would also like to thank Dr Akihisa Ushirokawa
for constructive discussions and Enrico Vassallo for providing the
details on the CCSDS standard.
We are pleased to acknowledge the students' contribution to
advancing the understanding of turbo coding. In particular, we
thank Wen Feng for her work reported in Chapters 6 and 7, Jade
Kim for her work reported in Chapter 6, Mark Tan for his work
reported in Chapter 7 and Lei Wan for her comments on Chapters
5 and 6.
We express our appreciation to Wen Feng for providing simula-
tion results as well as to Maree Belleli and Zhuo Chen for typing
the manuscript and preparing illustrations for the book.
We owe special thanks to the Australian Research Council, NEC,
DSTO, Motorola and other companies, whose support enables grad-
uate students and the staff of Sydney University to pursue contin-
uing research in this important field.
Alex Greene, senior editor, of Kluwer, helped and motivated us
during all phases of the preparation of the book.
Finally, we would like to thank our families for providing the
most meaningful content in our lives.
Chapter 1
Introduction
Fig. 1.1: Model of a digital communication system (information source → source encoder → channel encoder → modulator → channel → demodulator → channel decoder → source decoder → data sink)
η = r_b / B  bits/sec/Hz   (1.2)
It can be expressed as

η = r_s l R / B   (1.3)

where r_s is the symbol rate. As the minimum required bandwidth for a modulated signal is r_s Hz, the maximum spectral efficiency, denoted by η_max, is given by

η_max = l R   (1.4)
Another important parameter used to measure the reliability of information transmission in digital communication systems is the bit error probability. Power efficiency is captured by the bit energy to one-sided noise power spectral density ratio, E_b/N_0, required to achieve a specified bit error probability. The signal-to-noise ratio (SNR), denoted by S/N, is related to E_b/N_0 as

S/N = (E_b/N_0)(r_b/B) = η E_b/N_0   (1.5)
For a given channel, there is an upper limit on the data rate related to the signal-to-noise ratio and the system bandwidth. Shannon introduced the concept of channel capacity, C, as the maximum rate at which information can be transmitted over a noisy channel. For an additive white Gaussian noise (AWGN) channel it is given by the Shannon-Hartley formula

C = B log_2(1 + S/N)  bits/sec   (1.6)

Assuming that the data rate takes its maximum possible value for error-free transmission, equal to the channel capacity C, the maximum spectral efficiency, η_max = C/B, can be expressed as

η_max = log_2(1 + η_max E_b/N_0)   (1.7)
lim_{η_max → 0} E_b/N_0 = ln 2 = −1.59 dB   (1.10)
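The limit in (1.10) is easy to check numerically: solving (1.7) for E_b/N_0 gives (2^η_max − 1)/η_max, which tends to ln 2 as η_max → 0. A small sketch (Python; function name is ours):

```python
import math

def ebno_limit(eta):
    """Minimum Eb/N0 (linear) for error-free transmission at spectral
    efficiency eta, obtained by solving eta = log2(1 + eta * Eb/N0)."""
    return (2.0 ** eta - 1.0) / eta

# As eta -> 0 the required Eb/N0 approaches ln 2, i.e. -1.59 dB.
for eta in (1.0, 0.1, 0.001):
    db = 10.0 * math.log10(ebno_limit(eta))
    print(f"eta = {eta:6.3f}:  Eb/N0 >= {db:6.2f} dB")
print(f"limit: 10*log10(ln 2) = {10.0 * math.log10(math.log(2)):.2f} dB")
```

Running the loop shows the required E_b/N_0 decreasing monotonically toward the −1.59 dB Shannon limit.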
[Figure: bit error rate versus E_b/N_0 for uncoded BPSK and QPSK and for 64-state and 256-state 8-PSK TCM schemes]
than the reference uncoded system with the same spectral efficiency. The asymptotic coding gains for two-dimensional TCM schemes vary from 3 to 6 dB.
For example, a 64-state 8-PSK TCM has a bit error rate of 10^−5 at an E_b/N_0 ratio of 6.05 dB with a spectral efficiency of 2 bits/sec/Hz, gaining 3.5 dB relative to the reference uncoded QPSK, while a 256-state 8-PSK TCM gains 4 dB.
Turbo codes with iterative decoding [4] have almost closed the gap between the capacity limit and real code performance. They achieve a bit error rate of 10^−5 at an E_b/N_0 of 0.7 dB with a spectral efficiency of 0.5 bits/sec/Hz.
Bibliography
[13] L. Swanson, "A New Code for Galileo", Abstracts 1988, International Symposium on Information Theory, p. 94.
[14] J. P. Odenwalder, "Optimal Decoding of Convolutional
Codes", PhD Thesis, Systems Science Department, University
of California, Los Angeles, 1970.
The k vectors generating the code, g0, g1, ..., g_{k−1}, can be arranged as rows of a k × n matrix as follows

G = [ g0 ]   [ 0 1 1 1 0 0 ]
    [ g1 ] = [ 1 0 1 0 1 0 ]
    [ g2 ]   [ 1 1 0 0 0 1 ]
The message c = (0 1 1) is encoded as follows:

v = c · G
  = 0 · (011100) + 1 · (101010) + 1 · (110001)
  = (000000) + (101010) + (110001)
  = (011011)

Table 2.1 presents the list of messages and codewords for the (6,3) linear block code.
Messages Codewords
(co, CI, C2) (VO,VI,V2,V3,V4,V5)
(0 0 0) (0 0 0 0 0 0)
(1 0 0) (0 1 1 1 0 0)
(0 1 0) (1 0 1 0 1 0)
(1 1 0) (1 1 0 1 1 0)
(0 0 1) (1 1 0 0 0 1)
(1 0 1) (1 0 1 1 0 1)
(0 1 1) (0 1 1 0 1 1)
(1 1 1) (0 0 0 1 1 1)
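The encoding v = c · G used to build Table 2.1 takes only a few lines of code; a sketch in Python (function names are ours):

```python
# Generator matrix of the (6,3) code, rows g0, g1, g2 from the example above.
G = [[0, 1, 1, 1, 0, 0],
     [1, 0, 1, 0, 1, 0],
     [1, 1, 0, 0, 0, 1]]

def encode(c, G):
    """Encode message c as v = c . G with all arithmetic modulo 2."""
    n = len(G[0])
    return [sum(c[i] * G[i][j] for i in range(len(c))) % 2 for j in range(n)]

# Reproduce the worked example: c = (0 1 1) encodes to (0 1 1 0 1 1).
print(encode([0, 1, 1], G))  # [0, 1, 1, 0, 1, 1]
```

Enumerating all eight messages with this function reproduces Table 2.1 exactly.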
(c_0, c_1, ..., c_{k−1}) → (v_0, v_1, ..., v_{n−k−1}, c_0, c_1, ..., c_{k−1})

where the first n − k symbols are parity checks and the last k symbols are the message.
The generator matrix for a systematic block code has the following
form
G = [P IkJ
Ik is the k x k identity matrix and P is a k x (n - k) matrix of the
form
P = [ p_00       p_01       ...  p_0,n−k−1   ]
    [ p_10       p_11       ...  p_1,n−k−1   ]
    [ ...                                    ]
    [ p_k−1,0    p_k−1,1    ...  p_k−1,n−k−1 ]
where Pij = 0 or 1.
G = [ g0 ]   [ 0 1 1 1 0 0 ]
    [ g1 ] = [ 1 0 1 0 1 0 ] = [P I_3]
    [ g2 ]   [ 1 1 0 0 0 1 ]
v · H^T = 0
On the other hand, the minimum distance of the code is the smallest number of nonzero components in a codeword. Therefore, the minimum distance of a linear block code is equal to the smallest number of columns of the matrix H that sum to 0. The parity check matrix H for the (6,3) code is given by

H = [ 1 0 0 0 1 1 ]
    [ 0 1 0 1 0 1 ]
    [ 0 0 1 1 1 0 ]

All columns in H are nonzero and no two of them are the same. Hence, no two or fewer columns sum to 0, and the minimum distance of the code is 3.
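Both facts — that every codeword satisfies v · H^T = 0 and that the minimum distance is 3 — can be confirmed by exhaustive enumeration, which is feasible for such a small code. A sketch (Python; helper names are ours):

```python
from itertools import product

# Generator and parity check matrices of the (6,3) code given above.
G = [[0, 1, 1, 1, 0, 0],
     [1, 0, 1, 0, 1, 0],
     [1, 1, 0, 0, 0, 1]]
H = [[1, 0, 0, 0, 1, 1],
     [0, 1, 0, 1, 0, 1],
     [0, 0, 1, 1, 1, 0]]

def syndrome(v, H):
    """Compute v . H^T over GF(2); it is all-zero for every codeword."""
    return [sum(v[j] * row[j] for j in range(len(v))) % 2 for row in H]

# Enumerate all 8 codewords; for a linear code the minimum distance
# equals the smallest weight of a nonzero codeword.
codewords = []
for c in product([0, 1], repeat=3):
    v = [sum(c[i] * G[i][j] for i in range(3)) % 2 for j in range(6)]
    assert syndrome(v, H) == [0, 0, 0]
    codewords.append(v)
d_min = min(sum(v) for v in codewords if any(v))
print(d_min)  # 3
```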
P(r | v) > P(r | w)
if and only if
d(r, v) < d(r, w)
Hence the MLD rule for a BSC can be obtained by comparing the received vector r with all codewords and selecting the codeword with the smallest Hamming distance from r as the estimate of the actually transmitted codeword. This decoding rule is known as minimum distance decoding. The MLD rule results in the minimum block error probability.
x_i = +1 if v_i = 1, and x_i = −1 if v_i = 0
for each codeword v and selects the codeword with the largest conditional probability as the estimate of the actually transmitted codeword.
For a channel in which each sample is affected by an independent Gaussian noise variable with zero mean and variance σ², the conditional probability P(r | v) is given by
P(r | v) = ∏_{i=0}^{n−1} (1/√(2πσ²)) e^{−(r_i − x_i)²/(2σ²)} = (1/√(2πσ²))^n e^{−∑_{i=0}^{n−1} (r_i − x_i)²/(2σ²)}
where x_i is the ith component of the sequence x obtained by modulating codeword v. The above expression is a monotonically decreasing function of ∑_{i=0}^{n−1} (r_i − x_i)², so it is maximized when the squared Euclidean distance

d_E² = ∑_{i=0}^{n−1} (r_i − x_i)²   (2.11)

is minimized.
G = [ 1 0 0 0 1 1 0 ]
    [ 0 1 0 0 1 0 1 ]
    [ 0 0 1 0 0 1 1 ]   (2.12)
    [ 0 0 0 1 1 1 1 ]
The WEF of the code is
A_0(Z) = 1
A_1(Z) = 3Z² + Z³
A_2(Z) = 3Z + 3Z²
A_3(Z) = 1 + 3Z
A_4(Z) = Z³   (2.16)
where n_i is a noise sample with zero mean and variance σ². The noise sequence is n = (n_0, n_1, ..., n_{n−1}). The decoder performs maximum likelihood sequence decoding, which maximizes the probability P(r | v).
[Figure: coded transmission over an AWGN channel — message c → channel encoder → v → modulator → x, noise n added, received r → demodulator → channel decoder]
(2.17)

A = ∑_{i=0}^{n−1} n_i (x_i − x̂_i)   (2.18)
expressed as [3]

P_2(d) = Q(√(2dR E_b/N_0))   (2.19)

where R = k/n is the code rate, E_b is the signal energy per bit, N_0 is the single-sided power spectral density of the Gaussian noise and Q(x) is the complementary error function defined by

Q(x) = (1/√(2π)) ∫_x^∞ e^{−t²/2} dt   (2.20)
(2.21)

(2.22)

P_w ≤ (1/2) ∑_{d=d_min}^{n} A_d e^{−dR E_b/N_0} = (1/2) [A(X) − 1] |_{X = e^{−R E_b/N_0}}   (2.23)
(2.24)

P_w ≤ Q(√(2 d_min R E_b/N_0)) e^{d_min R E_b/N_0} ∑_{d=d_min}^{n} A_d e^{−dR E_b/N_0}   (2.26)
where B_d is the error coefficient, i.e., the average number of bit errors caused by transitions between the all-zero codeword and codewords of weight d (d ≥ d_min). The error coefficient B_d can be obtained from the code IRWEF. It is given by
(2.27)
(2.28)
P_b ≤ (1/2) ∑_{d=d_min}^{n} B_d e^{−dR E_b/N_0} = (1/(2k)) ∑_{w} w W^w A_w(Z) |_{W=Z=e^{−R E_b/N_0}} = (W/(2k)) ∂A(W, Z)/∂W |_{W=Z=e^{−R E_b/N_0}}   (2.29)
In addition, a tighter bit error probability upper bound can be expressed as

P_b ≤ Q(√(2 d_min R E_b/N_0)) e^{d_min R E_b/N_0} ∑_{d=d_min}^{n} B_d e^{−dR E_b/N_0}   (2.30)
P_w ≤ (7/2) e^{−3R E_b/N_0} + (7/2) e^{−4R E_b/N_0} + (1/2) e^{−7R E_b/N_0}   (2.31)
By referring to (2.27) and (2.14), the code error coefficients B_d can be computed as

d = 3:  B_d = (1/4) × 3 + (2/4) × 3 + (3/4) × 1 = 3
d = 4:  B_d = (1/4) × 1 + (2/4) × 3 + (3/4) × 3 = 4
d = 7:  B_d = (4/4) × 1 = 1   (2.32)
"
L.,; -1 B d e -dR.§z.
NO
d=dmin
2
3
-e
-3R.§z.
NO + 2e-4R.§z. 1 -7R.§z.
No + -e No (2.33)
2 2
The word error probability and bit error probability upper bounds
are shown in Fig. 2.2.
Fig. 2.2: Performance upper bounds for the (7,4) Hamming code
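Curves like those in Fig. 2.2 can be regenerated directly from the exponential bounds (2.31) and (2.33); a sketch (Python; function names are ours):

```python
import math

# (7,4) Hamming code: weight distribution A_d and the bit-error
# coefficients B_d computed in (2.32); code rate R = 4/7.
A = {3: 7, 4: 7, 7: 1}
B = {3: 3, 4: 4, 7: 1}
R = 4.0 / 7.0

def word_bound(ebno_db):
    """Word error probability upper bound, exponential form of (2.31)."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return 0.5 * sum(Ad * math.exp(-d * R * ebno) for d, Ad in A.items())

def bit_bound(ebno_db):
    """Bit error probability upper bound, exponential form of (2.33)."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return 0.5 * sum(Bd * math.exp(-d * R * ebno) for d, Bd in B.items())

for db in (3.0, 6.0, 9.0):
    print(f"{db:4.1f} dB: Pw <= {word_bound(db):.3e}, Pb <= {bit_bound(db):.3e}")
```

Evaluating over a grid of E_b/N_0 values gives the two monotonically decreasing curves of the figure.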
G = 10 log_10 (d²_{E,min} / d²_{E,u}) dB   (2.34)
(2.35)

where σ_c² and σ_u² are the variances of the Gaussian noise for the coded and uncoded systems, respectively.
G = 10 log_10 (12/7) = 2.34 dB
(2.36)
such that
(2.38)
(2.39)
for all i ∈ I_l, m ∈ I_{l+1}, and j ∈ {0, 1}, where a_i^j are binary inputs

a_i^j = 1 if j = 1, and a_i^j = 0 if j = 0   (2.40)
∑_{i=0} a_i h_i = 0   (2.42)
(2.43)
[Trellis diagram with states (00), (01), (10), (11)]
[11] R. J. McEliece, "On the BCJR Trellis for Linear Block Codes", IEEE Trans. Inform. Theory, vol. 42, no. 4, July 1996.
[12] H. Imai, "Essentials of Error-Control Coding Techniques",
Academic Press, 1990.
Convolutional Codes
3.1 Introduction
Convolutional codes have been widely used in applications such as space and satellite communications, cellular mobile telephony and digital video broadcasting. Their popularity stems from their simple structure and the availability of easily implementable maximum likelihood soft decision decoding methods.
Convolutional codes were first introduced by Elias [5]. The groundwork on the algebraic theory of convolutional codes was performed by Forney [6]. In this chapter we present the main results
on the algebraic structure of convolutional codes needed in the de-
sign of turbo codes. Of particular importance are code structure,
encoder realization and trellis representation. We first discuss the
structure of convolutional codes and then show how to implement
feedforward and feedback convolutional encoders.
The equivalence of encoders is discussed and it is shown how to
get equivalent systematic encoders with rational generator matrices
from nonsystematic encoders with polynomial generator matrices.
Next, we present the finite state machine description of convolutional codes and derive the state and trellis diagrams.
In the interest of brevity many details of the theory had to be
omitted. The chapter is concluded by the discussion of punctured
convolutional codes.
(3.5)
where g^(1) represents the upper and g^(2) the lower connections, with the leftmost entry being the connection to the leftmost stage. The term convolutional codes comes from the observation that the ith output sequence, i = 1, 2, given by Eq. (3.2), represents the convolution of the input sequence and the ith generator sequence.
(3.6)
c = (1011100···) (3.7)
v = (11, 01, 00, 10, 01, 10, 11, ···)   (3.10)
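The code sequence above can be reproduced by direct convolution over GF(2). The sketch below (Python; names are ours) uses the generator sequences g^(1) = (1 0 1) and g^(2) = (1 1 1), i.e. g^(1)(D) = 1 + D² and g^(2)(D) = 1 + D + D², consistent with the encoding equations (3.17) and the products in (3.20):

```python
# Feedforward convolutional encoding as polynomial convolution over GF(2).
def conv_gf2(c, g):
    """Convolve input sequence c with generator sequence g, modulo 2."""
    out = [0] * (len(c) + len(g) - 1)
    for i, ci in enumerate(c):
        for j, gj in enumerate(g):
            out[i + j] ^= ci & gj
    return out

def encode(c, generators):
    streams = [conv_gf2(c, g) for g in generators]
    # Interleave the output streams: v = (v1_0 v2_0, v1_1 v2_1, ...).
    return [s[t] for t in range(len(streams[0])) for s in streams]

c = [1, 0, 1, 1, 1, 0, 0]
v = encode(c, [[1, 0, 1], [1, 1, 1]])
print(v[:14])  # 11 01 00 10 01 10 11, matching the output sequence above
```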
If the generator sequences g^(1) and g^(2) are interleaved and arranged in the matrix

G = [ g_0^(1) g_0^(2)   g_1^(1) g_1^(2)   g_2^(1) g_2^(2)   0 0   0 0   ···
      0 0   g_0^(1) g_0^(2)   g_1^(1) g_1^(2)   g_2^(1) g_2^(2)   0 0   ···
      0 0   0 0   g_0^(1) g_0^(2)   g_1^(1) g_1^(2)   g_2^(1) g_2^(2)   ··· ]
the encoding operation can be represented in a matrix form as
v = c · G   (3.11)
where all operations are modulo 2.
Observe that each row of G is obtained by shifting the preceding
row by n = 2 places to the right. The generator matrix is semi-
infinite corresponding to the fact that the input sequence may be
infinite.
Similarly, we can generate an (n, 1) convolutional code by a feedforward linear shift register, shown in Fig. 3.2, which has one input and n output sequences. The code rate is 1/n.
The encoder is specified by a set of n generator sequences of length (ν + 1), where ν is the encoder memory, given by

g^(1) = (g_0^(1), g_1^(1), ..., g_ν^(1))
g^(2) = (g_0^(2), g_1^(2), ..., g_ν^(2))
...
g^(n) = (g_0^(n), g_1^(n), ..., g_ν^(n))   (3.12)

with coefficients in the binary field GF(2). The ith output sequence is obtained by convolving the input message sequence c and the ith generator sequence g^(i)

v^(i) = c * g^(i),   i = 1, 2, ..., n   (3.13)
At time l, the ith output symbol is

v_l^(i) = ∑_{j=0}^{ν} c_{l−j} g_j^(i)   (3.14)
[Fig. 3.2: Feedforward shift register encoder for an (n,1) convolutional code, with input c, register contents c_l, c_{l−1}, ..., c_{l−ν}, and n output adders]
(3.15)

(3.16)

The encoding equations for the encoder in Fig. 3.1 can now be written as

v^(1)(D) = c(D) g^(1)(D)
v^(2)(D) = c(D) g^(2)(D)   (3.17)
(3.18)
(3.19)
v^(1)(D) = (1 + D² + D³ + D⁴)(1 + D²) = 1 + D³ + D⁵ + D⁶
v^(2)(D) = (1 + D² + D³ + D⁴)(1 + D + D²) = 1 + D + D⁴ + D⁶   (3.20)
(3.24)
(3.26)
(3.30)
and the overall encoder memory as the sum of the memories of the encoder inputs

ν = ∑_{i=1}^{k} ν_i   (3.32)
(3.33)
(3.34)
v(D) = c(D) G(D) = [1 + D³   1 + D³   D² + D³]   (3.35)
The overall memory for the encoder in Fig. 3.3 is 2 and the memory for each encoder input is 1. The number of memory elements required for this type of implementation is equal to the overall memory.
v(D) = c(D)(1 + D²) [1   (1 + D + D²)/(1 + D²)]   (3.39)
     = c′(D) [1   (1 + D + D²)/(1 + D²)]
     = c′(D) G_1(D)
where
c' (D) = c(D)T(D) (3.40)
and

T(D) = 1 + D²   (3.41)
Clearly, multiplication of c(D) by T(D) generates a scrambled ver-
sion of the input sequence. Thus, the set of scrambled input se-
quences, c' (D), multiplied by the generator matrix GI(D) produces
the same set of output sequences as the original generator matrix
G(D), where
G(D) = T(D)G I (D) (3.42)
We say that these two matrices are equivalent. The set of sequences
c(D) is identical to the set of sequences c' (D) if and only if T(D)
is invertible. The inverse of T(D) in this example is given by
T^{−1}(D) = 1/(1 + D²)   (3.43)
The ratio

T(D) = a(D)/q(D)   (3.46)
represents a rational transfer function. In general, the generator
matrices of convolutional codes have as entries rational transfer
functions. The outputs are obtained by multiplying the input se-
quence with the generator matrix containing rational transfer func-
tions. Given a rational transfer function and the input sequence the
multiplication can be performed by linear circuits in many different
ways. Fig. 3.4 shows the controller canonical form of the rational
function in Eq. (3.46).
Another so-called observer canonical form of the rational trans-
fer function in Eq. (3.46) is illustrated in Fig. 3.5.
As the circuit in Fig. 3.5 is linear, we have

v(D) = c(D)(a_0 + a_1 D + ··· + a_ν D^ν) + v(D)(q_1 D + ··· + q_ν D^ν)

which is the same as (3.44). In this implementation the delay elements, in general, do not form a shift register, as they are separated by adders.
Systematic encoders in the controller and observer canonical
form, based on the generator matrix G1(D), are shown in Figs. 3.6
and 3.7, respectively.
Fig. 3.6: The controller canonical form of the systematic (2,1) encoder with the generator matrix G_1(D)
Fig. 3.7: The observer canonical form of the systematic (2,1) encoder with the generator matrix G_1(D)
(3.47)
(3.48)
given by
H(D) = [(1 + D + D²)/(1 + D²)   1]   (3.50)
(3.54)
(3.55)
If the generator matrix inverse G^{−1}(D) does not exist, the code is called catastrophic. For catastrophic codes a finite number of channel errors causes an infinite number of decoding errors.
(3.56)
(3.58)
c(D) = 1/(1 + D) = 1 + D + D² + ···   (3.59)
The weight of the code sequence is 3, while the weight of the input sequence is infinite. If this code sequence is transmitted over a binary symmetric channel (BSC) in which three errors occur, changing the three nonzero symbols to zero, the received sequence will contain all zeros. Since this is a valid code sequence, the decoder will deliver it to the user. Thus, the decoded sequence will have an infinite number of errors, though there were only three errors in the channel.
Catastrophic codes should be avoided for obvious reasons.
(3.61)
(3.63)
G(D) = [1 OlD
D 11 (3.64)
T(D) = [~ ~ 1 (3.65)
det(T(D)) = 1
T^{−1}(D) · G(D) = [ 1  0  1 + D²
                     0  1  D      ]   (3.67)
T^{−1}(D) = 1/(1 + D + D³) · [ 1     D
                               D²    1 + D ]   (3.71)
(3.72)
[ 0  0  ···  1   a^(k+1)(D)/q(D)  ···  a^(n)(D)/q(D) ]
The encoder circuit in its canonical observer form for this system-
atic generator matrix is depicted in Fig. 3.10.
An (n, n−1) systematic rational generator matrix is given by

G(D) = [ 1  0  ···  0   a_1(D)/q(D)
         0  1  ···  0   a_2(D)/q(D)
         ⋮              ⋮                   (3.74)
         0  0  ···  1   a_{n−1}(D)/q(D) ]
where

a_i^(n)(D) = a_{i,0}^(n) + a_{i,1}^(n) D + ··· + a_{i,ν}^(n) D^ν   (3.75)

for 1 ≤ i ≤ n − 1. The parity check matrix for this encoder is given by
Fig. 3.12: State diagram for the (2,1) nonsystematic convolutional encoder from Fig. 3.1
Fig. 3.13: State diagram for the (2,1) systematic encoder in Fig. 3.7
Fig. 3.14: Trellis diagram for the (2,1) nonsystematic encoder in Fig.
3.1
the encoder can be in one of four possible states: (00), (01), (10) and (11). The number of states doubles with every new shift. The encoder reaches the maximum possible number of states, 2^K, after K time units. For t > K, the structure of the trellis becomes repetitive. In the example, the encoder reaches the maximum number of states after two time units.
There are two branches emanating from each state, corresponding to the two different input symbols. There are also two branches merging into each state. The transitions caused by an input symbol 0 are indicated by lower branches, while the transitions caused by an input symbol 1 are indicated by upper branches. Each branch is labelled by the output block.
The output code sequence can be obtained by tracing a path in
the trellis specified by the input information sequence. For example,
if the input sequence is c = (11001), the output code sequence can
be read out from the trellis
v = (11, 10, 10, 11, 11)   (3.76)
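Tracing a path in the trellis is the same as running the encoder as a finite state machine whose state is the shift register contents. A sketch for the (2,1) encoder with g^(1)(D) = 1 + D² and g^(2)(D) = 1 + D + D² (Python; our implementation):

```python
# Trellis encoding of the (2,1) nonsystematic encoder: the state is the
# pair of most recent input bits (c_{l-1}, c_{l-2}).
def trellis_encode(c):
    s1 = s2 = 0          # shift register contents
    out = []
    for bit in c:
        v1 = bit ^ s2          # g1(D) = 1 + D^2
        v2 = bit ^ s1 ^ s2     # g2(D) = 1 + D + D^2
        out += [v1, v2]
        s1, s2 = bit, s1       # shift the register
    return out

# Reproduce the example: c = (11001) gives (11, 10, 10, 11, 11).
print(trellis_encode([1, 1, 0, 0, 1]))  # [1,1, 1,0, 1,0, 1,1, 1,1]
```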
Fig. 3.15: Augmented state diagram of Fig. 3.12
S_3 = X Y Z S_2 / (1 − X Y Z)   (3.81)

S_out = X³ Z² S_2 / (1 − X Y Z)   (3.84)
Now, substituting (3.83) into (3.84) we get for the generating function T(X, Y, Z)

T(X, Y, Z) = S_out / S_in = X⁵ Y Z³ / (1 − X Y Z (1 + Z))   (3.85)
we can write
Note that the minimum free distance of the code is 5 and that the number of code sequences at this distance is A_5 = 1. The information sequence generating this code sequence has Hamming weight 1 and the code sequence contains three branches. Another information sequence of weight 2 produces a code sequence of weight 6 with four branches, and so on.
If we ignore the branch lengths, by setting Z to 1, the generating function becomes

T(X, Y) = X⁵ Y / (1 − 2 X Y)   (3.86)
That is, on the path with weight 5 there is one information bit; on the paths with weight 6 there are 4 information bits, etc.
Fig. 3.16: Trellis diagram of a rate 2/3 punctured code produced by
periodically deleting symbols from a rate 1/2 code
generator matrix

G(D) = [ 1 + D   1 + D   1
         0       D       1 + D ]   (3.89)
The encoder and the trellis diagram for this code are given in Figs. 3.17 and 3.18, respectively. The trellis is more complex than the trellis for the punctured rate 2/3 code shown in Fig. 3.16, since there are four paths entering each state rather than two. This leads to more complex encoding and decoding operations. Thus, punctured codes have an advantage, particularly for high rate applications.
P = [ 1  0
      1  1 ]   (3.90)

A zero in the puncturing table means that the code symbol is not transmitted. In the above example the first symbol in the second branch is not transmitted.
In general, a rate p/q punctured convolutional code can be constructed from an (n, 1) convolutional code by deleting np − q code symbols from every np code symbols.
Fig. 3.18: Trellis diagram of a rate 2/3 code
(3.91)
where

g^(i)(D) = g_0^(i) + g_1^(i) D + ··· + g_ν^(i) D^ν   (3.92)

where 1 ≤ i ≤ n and g_l^(i) ∈ {0, 1}, l = 0, ···, ν.
The code rate of the punctured code, R_p, is defined by

R_p = p / (np − (np − q)) = p/q
[11] S. Lin, and D.J. Costello, Jr., "Error Control Coding: Funda-
mentals and Applications", Prentice-Hall, 1983.
4.1 Introduction
It is well known that a good trade-off between coding gain and complexity can be achieved by the serial concatenated codes proposed by Forney [1]. A serial concatenated code is one that applies two levels of coding, an inner and an outer code linked by an interleaver. This approach has been used in space communications, with convolutional codes as the inner code and low redundancy Reed-Solomon codes as the outer code. The primary reason for using a concatenated code is to achieve a low error rate with an overall decoding complexity lower than that required for a single code of the corresponding performance. The low complexity is attained by decoding each component code separately. As the inner decoder generates burst errors, an interleaver is typically incorporated between the two codes to decorrelate the received symbols affected by burst errors. Another application of concatenation is using a bandwidth efficient trellis code as an inner code [2] or concatenating two convolutional codes [3]. In decoding these concatenated codes, the inner decoder may use a soft-input/soft-output decoding algorithm to produce soft decisions for the outer decoder.
Turbo codes exploit a similar idea of connecting two codes and
separating them by an interleaver [4]. The difference between
[Fig.: rate 1/3 turbo encoder — the information sequence c enters RSC Encoder 1 directly and RSC Encoder 2 through an interleaver, producing outputs v0, v1, v2]
be represented as

G(D) = [1   g_1(D)/g_0(D)]   (4.1)

codes with code rate 1/2 and memory order ν = 4 (the number of states is M_s = 16). The generator matrix of the RSC code is given by

G(D) = [1   (1 + D⁴)/(1 + D + D² + D³ + D⁴)]   (4.2)
c = (1011001) (4.3)
Then the two output sequences of the first component encoder are

v0 = (1011001)
v1 = (1110001)   (4.4)
We assume that the interleaver permutes the information sequence
to
c = (1101010) (4.5)
The parity check sequence of the second component encoder is
v2 = (1000000)   (4.6)
The turbo code sequence is given by

v = (111, 010, 110, 100, 000, 000, 110)   (4.7)
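The parity sequence v1 of the first component encoder can be reproduced by the feedback recursion v(D)·q(D) = c(D)·a(D) of an RSC encoder. The sketch below assumes the feedback polynomial is q(D) = 1 + D + D² + D³ + D⁴ and the feedforward polynomial is a(D) = 1 + D⁴ — our reading of (4.2), i.e. the octal (37, 21) code (Python; names are ours):

```python
# RSC parity generation by the recursion
#   v_l = sum_j a_j c_{l-j}  +  sum_{j>=1} q_j v_{l-j}   (mod 2),
# which follows from v(D) q(D) = c(D) a(D) with q_0 = 1.
def rsc_parity(c, a=(1, 0, 0, 0, 1), q=(1, 1, 1, 1, 1)):
    v = []
    for l in range(len(c)):
        bit = 0
        for j in range(len(a)):
            if l - j >= 0:
                bit ^= (a[j] & c[l - j]) ^ (q[j] & (v[l - j] if j > 0 else 0))
        v.append(bit)
    return v

c = [1, 0, 1, 1, 0, 0, 1]
print(rsc_parity(c))  # [1, 1, 1, 0, 0, 0, 1], the sequence v1 in (4.4)
```

With this polynomial pairing the recursion reproduces the book's example exactly.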
4.2.2 Interleaving
The interleaver in turbo coding is a pseudo-random block scrambler
defined by a permutation of N elements with no repetitions.
The first role of the interleaver is to generate a long block code
from small memory convolutional codes. Secondly, it decorrelates
the inputs to the two decoders so that an iterative suboptimum
decoding algorithm based on information exchange between the two
component decoders can be applied. If the input sequences to the
two component decoders are decorrelated there is a high probability
that after correction of some of the errors in one decoder some of the
remaining errors should become correctable in the second decoder.
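An interleaver of the kind described above is just a fixed permutation of N positions together with its inverse for the decoder side; a minimal sketch (Python; names are ours):

```python
import random

# A pseudo-random block interleaver: a fixed permutation of N positions
# with no repetitions, plus the inverse mapping used by the decoder.
def make_interleaver(N, seed=42):
    perm = list(range(N))
    random.Random(seed).shuffle(perm)
    return perm

def interleave(seq, perm):
    return [seq[p] for p in perm]

def deinterleave(seq, perm):
    out = [0] * len(perm)
    for i, p in enumerate(perm):
        out[p] = seq[i]
    return out

perm = make_interleaver(8)
data = [1, 0, 1, 1, 0, 0, 1, 0]
assert deinterleave(interleave(data, perm), perm) == data
```

Fixing the seed makes the permutation reproducible, which matters in practice: encoder and decoder must use the identical interleaver.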
[Figure: rate 1/3 turbo encoder with outputs v0, v1, v2 and a puncturing device]
Various overall code rates, such as 1/2, 2/3, 3/4, 5/6 and so on, can be obtained by puncturing the rate 1/3 turbo encoder shown in Fig. 4.1.
When puncturing is considered, some output bits of v0, v1 and v2 are deleted according to a chosen pattern defined by a puncturing matrix P. For instance, a rate 1/2 turbo code can be obtained by puncturing a rate 1/3 turbo code. The commonly used puncturing matrix is given by

P = [ 1  1
      1  0
      0  1 ]   (4.9)

(4.10)
The component codes are two identical (3,2,4) RSC codes with code rate 2/3 and memory order ν = 4. The information sequences c0 and c1 are encoded by the first encoder to generate the first parity check sequence v2. The interleaved sequences c0 and c1 are encoded by the second encoder to produce the second parity check sequence v3. Then the information sequences v0 and v1, and the parity check sequences of the two component encoders, v2 and v3, are multiplexed to generate the code sequence for a rate 1/2 turbo encoder.
[Figure: rate 1/2 turbo encoder with two rate 2/3 RSC component codes]
A turbo codeword consists of the input information sequence
and the parity check sequences from the first and second component
encoders. Let w be the weight of an input information sequence,
and z1 and z2 be the weights of the first and second parity check sequences, respectively. The weight of the corresponding codeword will be d = w + z1 + z2. If the conditional WEF's of the component code are known, they can be used to determine the weight of the first parity check sequence. However, the weight of the second parity check sequence will not only depend on the weight of
the input information sequence, but also on how the information
bits have been permuted by the interleaver. The turbo code con-
ditional WEF's cannot be uniquely determined because of various
interleaver structures.
A simple solution to this problem is obtained by using a uni-
form interleaver, which is based on a probabilistic analysis of the
ensemble of all interleavers [13].
A uniform interleaver is a probabilistic device which maps a given input sequence of length N and weight w into all of its C(N, w) distinct permutations, each with equal probability 1/C(N, w), where C(N, w) denotes the binomial coefficient.
For a uniform interleaver of size N, there are N! possible per-
mutations of binary sequences with length N, each of which has a
probability 1/ N!. If the weight of an input sequence is w, there
will be w!(N - w)! permutations which generate the same output
sequence. Thus the probability for each distinct permutation is
w!(N − w)! / N! = 1 / C(N, w)   (4.11)
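The counting argument behind (4.11) can be checked exhaustively for a small N: among the N! permutations of a length-N, weight-w sequence, each distinct output pattern arises exactly w!(N − w)! times. A sketch (Python; names are ours):

```python
from itertools import permutations
from math import comb, factorial

# Exhaustive check of (4.11) for N = 6, w = 2.
N, w = 6, 2
seq = [1] * w + [0] * (N - w)
counts = {}
for p in permutations(range(N)):
    pattern = tuple(seq[i] for i in p)
    counts[pattern] = counts.get(pattern, 0) + 1

# C(N, w) distinct patterns, each produced w!(N-w)! times,
# so each pattern has probability 1 / C(N, w).
assert len(counts) == comb(N, w)
assert all(c == factorial(w) * factorial(N - w) for c in counts.values())
print(len(counts), factorial(w) * factorial(N - w) / factorial(N))  # 15 patterns, prob 1/15
```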
A_w(Z) = A_w^{C1}(Z) · A_w^{C2}(Z) / C(N, w)   (4.12)

where A_w^{C1}(Z) and A_w^{C2}(Z) are the conditional WEF's of the equivalent block codes of the first and second component codes, respectively. Since the conditional WEF's of the turbo code are averaged over the ensemble of interleavers of length N, they are also called the conditional WEF's of the "average" turbo code. If two identical component encoders are employed in a turbo encoder, it is appropriate to let
(4.15)

(4.19)

Let

A(w, Z, n) = ∑_z T_{w,z,n} Z^z   (4.21)

A_w^C(Z) ≈ ∑_{n=1}^{n_max} C(N, n) A(w, Z, n)   (4.22)

A_w(Z) = A_w^{C1}(Z) · A_w^{C2}(Z) / C(N, w)
where

(4.23)

C(N, n) ≈ N^n / n!   (4.24)

P_b ≈ ∑_{w=w_min}^{N} (w · w!) / (2 (n_max!)²) · N^{2 n_max − w − 1} W^w [A(w, Z, n_max)]² |_{W=Z=e^{−R E_b/N_0}}   (4.27)
and the tighter union upper bound
The average upper bound on the bit error probability of the turbo
code with interleaver size 500 is evaluated. The result is shown in
Fig. 4.6.
[Plot: bit error probability versus E_b/N_0 from 1.2 to 3 dB, showing the Duman & Salehi bound and the union upper bound]
Fig. 4.6: Bit error probability upper bounds for a turbo code with in-
terleaver size 500
E_b/N_0 > −(1/R) ln(2^{1−R} − 1) = 2.03 dB   (4.30)
T(X, Y) |_{Y=1} = ∑_{d=d_free}^{∞} A_d X^d

∂T(X, Y)/∂Y |_{Y=1} = ∑_{d=d_free}^{∞} W_d X^d   (4.31)
where d_free is the code free distance, A_d is the number of code sequences with weight d and W_d is the total weight of the information sequences which generate code sequences of weight d. The two sets of error coefficients A_d and W_d can be used to derive the codeword error probability and bit error probability upper bounds, respectively.
In [22], various RSC and NRC codes have been investigated and
their error coefficients compared. It is shown that the set of error
coefficients Ad is the same for RSC and NRC codes, since both RSC
and NRC encoders generate the same set of code sequences. This
results in the same word error probability for both codes. However,
the set of Wd for RSC codes is different from that of NRC codes.
This is due to a different input-output weight correspondence be-
tween RSC and NRC encoders, which leads to different bit error
probability performance. In general, for code rates R \le 2/3, the first two coefficients w_{d_free} and w_{d_free+1} of RSC codes are larger than those of NRC codes. Therefore, at high SNR's, the bit error probability performance of NRC codes is a little better than that of RSC
codes. On the other hand, with increasing weight d, the value of Wd
for RSC codes grows more slowly than that for NRC codes. Thus,
at low SNR's, the bit error probability performance of RSC codes
is superior to that of NRC codes.
The difference in the input-output weight correspondence be-
tween RSC and NRC encoders can also be explained by the differ-
ence in the minimum weight of information sequences generating a
finite weight code sequence.
For NRC codes any information sequence of weight one will generate a finite weight code sequence. On the other hand, RSC encoders require information sequences of weight at least two to generate a finite weight code sequence.
Performance Upper Bounds of Turbo Codes 89
(4.33)
The above upper bound shows that, for NRC component codes,
error paths with information weight w = 1 and their compound er-
ror paths have a dominant effect on the turbo code bit error prob-
ability. At high SNR's the single error paths with w = 1 have the
dominant contributions to the error performance. In this case, the
bit error probability is independent of N, as the factor N^{w-1} = 1.
Thus no performance gain can be achieved by interleaving for these
codes. The same conclusion applies to turbo codes with component
nonsystematic block codes [13].
90 Turbo Coding Performance Analysis and Code Design

In an RSC encoder, the minimum information weight for error paths is w_min = 2. The largest number of single error paths in a compound error path with an information weight w is n_max = \lfloor w/2 \rfloor, where \lfloor x \rfloor denotes the integer part of x. We now separate the analysis of terms in the sum in (4.27) into odd and even weight w. For odd values of w, i.e., w = 2i + 1, a term in the sum in (4.27) can be expressed as
(2i+1) \frac{(2i+1)!}{(i!)^2}\, N^{-2}\, W^{2i+1} \left[A(2i+1, Z, i)\right]^2 \qquad (4.34)

For even values of w, i.e., w = 2i, a term in the sum can be expressed as

2i \binom{2i}{i}\, N^{-1}\, W^{2i} \left[A(2i, Z, i)\right]^2 \qquad (4.35)
Comparing (4.34) and (4.35), it is clear that, for large N, the terms with odd w are negligible, since they depend on N^{-2} while the terms with even w depend on N^{-1}. Hence, for turbo codes with
RSC component codes, the bit error probability is upper-bounded
by
[Figure: bit error probability versus E_b/N_0 (dB); curves for N = 128, 256, 512]

Fig. 4.7: Bit error probability upper bounds for a turbo code with various interleaver sizes
the turbo code bit error probability performance. Let z_min denote
the lowest weight of the parity check sequence in error paths of
RSC component encoders generated by an information sequence
with weight 2. The weight enumerating function of single error
paths with information weight 2 for RSC component codes is given
by [13]
A(2, Z, 1) = Z^{z_{min}} + Z^{2z_{min}-2} + Z^{3z_{min}-4} + \cdots = \frac{Z^{z_{min}}}{1 - Z^{z_{min}-2}} \qquad (4.37)
P_b(e) \lesssim \sum_{i} 2i \binom{2i}{i} N^{-1} \frac{\left(H^{2+2z_{min}}\right)^i}{\left(1 - H^{z_{min}-2}\right)^{2i}} \Bigg|_{H=e^{-R\frac{E_b}{N_0}}} \qquad (4.38)
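The closed form in (4.37) is a geometric series in Z^{z_min-2}; a minimal check that successive exponents follow the stated pattern (z_min chosen arbitrarily for illustration):

```python
def series_terms(z_min, k):
    """First k exponents of A(2, Z, 1) = Z^z_min + Z^(2 z_min - 2) + Z^(3 z_min - 4) + ..."""
    return [z_min + j * (z_min - 2) for j in range(k)]

# Closed form Z^z_min / (1 - Z^(z_min-2)) expands to
# Z^z_min * (1 + Z^(z_min-2) + Z^(2(z_min-2)) + ...), i.e. the same exponents.
print(series_terms(4, 4))  # -> [4, 6, 8, 10]
```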
The above bound clearly shows that the most important parameter which has a significant influence upon turbo code performance is the effective free distance of the code, determined by z_min. More generally, the bit error probability can be upper-bounded through the code distance spectrum as

P_b(e) \le \sum_{d=d_{min}}^{\infty} B_d\, Q\!\left(\sqrt{2dR\frac{E_b}{N_0}}\right) \qquad (4.40)

where the set of all pairs (d, B_d) is the code distance spectrum. For turbo codes, the error coefficients B_d can be represented by

B_d = \sum_{\substack{w, z \\ d = w + z}} \frac{w}{N} A_{w,z} \qquad (4.41)
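Evaluating a union bound of the form (4.40) from a distance spectrum is mechanical; the sketch below uses a few hypothetical (d, B_d) pairs chosen only for illustration:

```python
from math import sqrt, erfc

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / sqrt(2))

def union_bound_ber(spectrum, R, ebno_db):
    """P_b <= sum_d B_d * Q(sqrt(2 d R Eb/N0)) over the distance spectrum."""
    ebno = 10 ** (ebno_db / 10)
    return sum(Bd * qfunc(sqrt(2 * d * R * ebno)) for d, Bd in spectrum.items())

# Hypothetical spectrum of a rate-1/3 code; only the first few lines matter.
spectrum = {7: 0.04, 8: 0.01, 9: 0.22, 10: 0.26}
print(union_bound_ber(spectrum, 1/3, 2.0))
```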
Consider two rate 1/3 turbo codes, TC1 and TC2, with component code generator matrices

G_1(D) = \left[1, \; \frac{1+D^2}{1+D+D^2}\right]

G_2(D) = \left[1, \; \frac{1+D+D^2}{1+D^2}\right] \qquad (4.42)
The turbo encoders are shown in Figs. 4.8 and 4.9, respectively.
In the performance evaluation, we consider interleavers with
small sizes. This is due to the performance upper bound diverging
severely from the actual bit error probability obtained by simulation
in the region of small SNR's for turbo codes with large interleaver
sizes. Using the transfer function method, the distance spectra of
the component code of TC1 and the turbo code TC1 are calculated.
The results for interleaver sizes of 20 and 50 are shown in Fig.
4.10. The bit error probability upper bounds obtained from (4.40)
are plotted in Fig. 4.11. It is apparent from Fig. 4.10 that the
shape of the distance spectrum of the turbo code is very similar to that of the component code. However, for small distance values, the
Fig. 4.8: Encoder for turbo code TC1

Fig. 4.9: Encoder for turbo code TC2
[Figure: error coefficient versus Hamming distance d; curves: component code N=50, component code N=20, turbo code N=50, turbo code N=20]
Fig. 4.10: Distance spectra for component code of TCI and turbo code
TCI with interleaver sizes of 20 and 50
[Figure: bit error probability versus E_b/N_0 (dB); curves: component code N=50, component code N=20, turbo code N=50, turbo code N=20]
Fig. 4.11: Bit error probability upper bounds for component code of
TCl and turbo code TCl with interleaver sizes of 20 and 50
error coefficients for the code with N = 50 are lower than those for the code with N = 20. However, for large
distances, the error coefficients for the code with N = 50 are higher.
In spite of this, the code with N = 50 has better performance than
the code with N = 20 in the medium to high SNR region as shown
in Fig. 4.11. This property remains invariant for various codes.
Consequently, we can conclude that only error coefficients at low to
medium distances are significant for error performance. We call the
part of the spectrum that gives a considerable contribution to the
error probability the significant spectral lines. Significant spectral
lines are shown by bold thick lines in Fig. 4.10.
[Figure: relative contribution of individual spectral lines to the bit error probability versus E_b/N_0 (dB); spectral lines d=7 (B_d=0.04), d=8 (B_d=0.01), d=9 (B_d=0.22), d=10 (B_d=0.26), d=16 (B_d=8.30), d=17 (B_d=12.35), d=18 (B_d=25.04), d=19 (B_d=37.02)]
[Figure: relative contribution of individual spectral lines to the bit error probability versus E_b/N_0 (dB); spectral lines d=13 (B_d=0.27), d=14 (B_d=0.39), d=28 (B_d=3.18e+2), d=29 (B_d=6.11e+2), d=30 (B_d=1.05e+3), d=31 (B_d=1.72e+3), d=32 (B_d=2.93e+3)]
[Figure: error coefficient versus Hamming distance d; curves: TC1 N=20, TC2 N=20, TC1 N=50, TC2 N=50]
Fig. 4.14: Distance spectra for turbo codes TC1 and TC2 with inter-
leaver sizes of 20 and 50
[Figure: bit error probability versus E_b/N_0 (dB); curves: TC1 N=20, TC2 N=20, TC1 N=50, TC2 N=50]
Fig. 4.15: Bit error probability upper bounds for turbo codes TC1 and
TC2 with interleaver sizes of 20 and 50
(4.45)
Turbo Code Design 101
z_{min} \le (n-1)\left(2^{v-1} + 2\right) \qquad (4.46)
Since g_1(D) and g_0(D) are relatively prime, the input sequence 1 + D^l must be a multiple of g_0(D) and periodic with a period of l. Increasing the period l will increase the length of the shortest code sequence with input weight 2. Intuitively, this will result in increasing weight of the code sequence. For a polynomial g_0(D) with degree v, any polynomial divisible by g_0(D) is periodic with period l \le 2^v - 1. The maximum period is 2^v - 1, which is obtained when g_0(D) is a primitive polynomial. The corresponding encoder is generated by a maximal length linear feedback shift register with degree v [24]. In this case, the parity check sequence weight depends only on the primitive feedback polynomial and is independent of the polynomial g_1(D).
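The period of g_0(D) can be checked by polynomial arithmetic over GF(2), searching for the smallest l such that g_0(D) divides 1 + D^l. A sketch with polynomials encoded as integer bit masks (bit i holding the coefficient of D^i):

```python
def poly_mod(a, m):
    """Remainder of a divided by m over GF(2); polynomials as bit masks."""
    dm = m.bit_length() - 1
    while a.bit_length() - 1 >= dm and a:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

def period(g0):
    """Smallest l >= 1 such that g0(D) divides 1 + D^l."""
    l = 1
    while poly_mod(1 | (1 << l), g0) != 0:
        l += 1
    return l

# g0(D) = 1 + D + D^4 is primitive with degree 4: period 2^4 - 1 = 15.
print(period(0b10011))  # -> 15
# g0(D) = 1 + D + D^2 is primitive with degree 2: period 2^2 - 1 = 3.
print(period(0b111))  # -> 3
```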
According to this design criterion, Benedetto and Montorsi found
the best RSC component codes with rate 1/2 and memory order
from one to five by computer search. The code search procedure
can be summarized as follows [14]
4. From all of the candidate codes, choose the one with the low-
est bit error probability in the desired range of SNR's.
In the design procedure, Steps 1 and 2 make sure the candidate code has the maximum z_min = 2^{v-1} + 2, and thus the maximum d_free,eff = 2^v + 6. Then the best code is chosen from all the candidate codes.
(4.49)
where B_free,eff is the error coefficient related to the code effective free distance. For a fixed effective free distance, optimizing the bit error probability implies minimization of the error coefficient. Thus, Steps 3 and 4 in the procedure can be replaced by

4. From all the candidate codes, choose the one with the minimum B_free,eff.
The above procedure was applied to find good rate 1/3 turbo
codes using rate 1/2 RSC component codes [14]. A uniform inter-
leaver with a size of 100 is employed in performance evaluation.
The results are reported in Table 4.1, where the generator polynomials g_0(D) and g_1(D), the effective free distance d_free,eff, the free distance d_free and its corresponding input weight w_free are shown. In the table, the generator polynomials are given in octal form. For example, the generator polynomials of the RSC code with memory v = 4, g_0(D) = 1 + D + D^4 and g_1(D) = 1 + D + D^3 + D^4, are represented by g_0(D) = (31) and g_1(D) = (33), respectively.
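This octal convention (the coefficient of D^0 written first) is easy to mechanize; a small sketch:

```python
def poly_to_octal(coeffs):
    """coeffs[i] is the coefficient of D^i; returns the octal string, D^0 first."""
    bits = "".join(str(c) for c in coeffs)
    return oct(int(bits, 2))[2:]

print(poly_to_octal([1, 1, 0, 0, 1]))  # g0 = 1 + D + D^4       -> 31
print(poly_to_octal([1, 1, 0, 1, 1]))  # g1 = 1 + D + D^3 + D^4 -> 33
```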
Table 4.1: Best rate 1/3 turbo codes at high SNR's [14]
v go(D) gl(D) dfree,eff dfree Wfree
1 3 2 4 4 2
2 7 5 10 7 3
3 15 17 14 8 4
4 31 33 22 9 5
31 27 22 9 5
5 51 77 38 10 6
51 67 38 12 4
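The relation d_free,eff = 2 + 2 z_min behind Table 4.1 can be reproduced by direct search: every weight-2 input 1 + D^l divisible by g_0(D) is fed through the rate 1/2 RSC encoder and the minimum parity weight is recorded. A sketch for the memory order 2 code g_0 = (7), g_1 = (5), with polynomials as bit masks (bit i = coefficient of D^i):

```python
def poly_mul(a, b):
    """Polynomial multiplication over GF(2)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def poly_divmod(a, m):
    """Quotient and remainder of a / m over GF(2)."""
    dm = m.bit_length() - 1
    q = 0
    while a and a.bit_length() - 1 >= dm:
        shift = a.bit_length() - 1 - dm
        q |= 1 << shift
        a ^= m << shift
    return q, a

def min_parity_weight_weight2(g0, g1, max_l=60):
    """z_min: minimum parity weight over weight-2 inputs 1 + D^l divisible by g0."""
    best = None
    for l in range(1, max_l):
        u = 1 | (1 << l)
        q, r = poly_divmod(poly_mul(u, g1), g0)
        if r == 0:  # u is a multiple of g0, so the parity u*g1/g0 terminates
            w = bin(q).count("1")
            best = w if best is None else min(best, w)
    return best

# g0 = 1 + D + D^2 (octal 7), g1 = 1 + D^2 (octal 5):
# z_min = 4, hence d_free,eff = 2 + 2*4 = 10, matching the v = 2 row of Table 4.1.
z_min = min_parity_weight_weight2(0b111, 0b101)
print(z_min, 2 + 2 * z_min)  # -> 4 10
```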
Turbo codes using the best rate 1/2 RSC component codes can
achieve near optimum bit error probability performance at high
SNR's. These codes have applications in many real systems oper-
ating at relatively low BER's, such as data transmission in satellite
and space communications.
3. From all of the candidate codes, choose the one with the small-
est error coefficients for low to medium Hamming distances.
Using the above method good component RSC codes with rate
1/2 and memory order v from two to five were found [25]. A uniform
interleaver with size 40 is used to evaluate the distance spectrum. In
Table 4.2, the code parameters of the ODS turbo codes are shown.
[Figure: error coefficient versus Hamming distance d for the ODS turbo codes]
Fig. 4.16: Distance spectra for ODS turbo codes with interleaver size 40
[Figure: bit error probability versus E_b/N_0 (dB); curves: v=2, v=3, v=4, v=5, all with N=40]
Fig. 4.17: Bit error probability upper bounds for ODS turbo codes with
interleaver size 40
Fig. 4.18: Performance of ODS and BM turbo codes with rate 1/3 and
memory order 4 on AWGN channels
where R is the overall turbo code rate, Eb is the received bit en-
ergy, No is the one sided Gaussian noise spectral density, d is the
Hamming distance between the codewords, N is the interleaver size,
and d_min is the concatenated code minimum distance. A_{w,d} is the average number of codewords of the equivalent block code with Hamming weight d produced by a weight w input sequence. Using the uniform interleaver, the coefficients A_{w,d} can
be expressed as
A_{w,d} = \sum_{l=0}^{N} \frac{A_{w,l}^{C_o} \cdot A_{l,d}^{C_i}}{\binom{N}{l}} \qquad (4.51)
where A_{w,l}^{C_o} and A_{l,d}^{C_i} are the corresponding input-output weight distribution coefficients for the equivalent block codes for the outer and
Serial Concatenated Convolutional Codes 109
A_{w,d}^{C} \approx \sum_{n=1}^{n_M} \binom{N/p}{n} A_{w,d,n} \qquad (4.53)
(4.54)
(4.55)
(4.56)
where

\alpha(d) = \max_{w,l}\left\{ n^o(d) + n^i(d) - l - 1 \right\} \qquad (4.57)
\alpha_M = n_M^o - 1 \ge 0 \qquad (4.59)
Serial Concatenated Convolutional Codes 111
This result indicates that for serial concatenated codes with block
or nonrecursive convolutional inner codes there are always some
terms of Hamming distance d, whose error coefficients in (4.55)
increase with the interleaver size N. That means no interleaver gain
is achieved for these terms.
For recursive convolutional inner codes the largest exponent of N is given by [26]

\alpha_M = -\left\lfloor \frac{d_f^o + 1}{2} \right\rfloor < 0 \qquad (4.60)

and the weight associated with the highest power of N is

h(\alpha_M) = \begin{cases} \dfrac{d_f^o}{2}\, d_{f,eff}^i, & \text{for } d_f^o \text{ even} \\ \dfrac{d_f^o - 3}{2}\, d_{f,eff}^i + d_m^{(3)}, & \text{for } d_f^o \text{ odd} \end{cases} \qquad (4.61)
where d_{f,eff}^i is the effective free distance of the inner code. This is the minimum weight of sequences of the inner code generated by a weight-2 input sequence. Also, d_m^{(3)} is the minimum weight of sequences of the inner code generated by a weight-3 input sequence.
The asymptotic expressions, for N very large, for the bit error probability are given by

P_b(e) \approx C_{even}\, N^{-d_f^o/2}\, Q\!\left(\sqrt{\frac{d_f^o\, d_{f,eff}^i}{2} \cdot \frac{2RE_b}{N_0}}\right) \qquad (4.62)

for even values of d_f^o, and

P_b(e) \approx C_{odd}\, N^{-(d_f^o+1)/2}\, Q\!\left(\sqrt{\left(\frac{(d_f^o - 3)\, d_{f,eff}^i}{2} + d_m^{(3)}\right) \frac{2RE_b}{N_0}}\right) \qquad (4.63)

for odd values of d_f^o, where C_{even} and C_{odd} are the coefficients for d_f^o even and odd, respectively, which do not depend on N.
Equations (4.62) and (4.63) show that the bit error probability
for large interleaver size N is dominated by the inner code effective
free distance and the outer code free distance. On the basis of these
results it is possible to formulate serial concatenated code design
rules. They are summarized as follows.
Design Rules for Serial Concatenated Codes
1. The inner code should be chosen to be a recursive convolu-
tional code. The outer code could be either nonrecursive or
recursive.
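The interleaver-gain exponent in (4.60) is a one-liner; for example, an outer code with d_f^o = 5 gives \alpha_M = -3, i.e. the dominant BER terms decay as N^{-3}:

```python
def alpha_max(d_free_outer):
    """Largest exponent of N for a recursive inner code: -floor((d_f^o + 1)/2)."""
    return -((d_free_outer + 1) // 2)

print([alpha_max(d) for d in (3, 4, 5)])  # -> [-2, -2, -3]
```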
5 .1 Introduction
In Chapters 2 and 3 we presented methods of representing linear
block and convolutional codes by trellises. In this chapter we first
examine various trellis based strategies for decoding of linear codes.
Each of these can be used as a basic building block for decoding of
concatenated and turbo codes.
Trellis based decoding algorithms are recursive methods for es-
timation of the state sequence of a discrete-time finite-state Markov
process observed in memoryless noise. The Viterbi algorithm (VA)
minimizes the sequence error probability. Its output is in the form
of hard-quantized estimation of symbols in the most likely transmit-
ted code sequence. In concatenated systems, with multiple signal
processing stages, the overall receiver performance is improved if the
inner stages produce soft output estimation. We show how the VA
can be modified to generate soft-output information. The related
algorithm is known as the soft output Viterbi algorithm (SOVA).
The SOVA produces, in addition to the maximum likelihood hard estimates of a code sequence, a reliability measure for each received symbol. However, these reliability estimates are suboptimum.
In another class of decoders, the decoding criterion is minimiza-
tion of the symbol or bit error probability. The decoder generates
Fig. 5.1: System model (encoder, modulator, memoryless noisy channel, decoder; sequences c, v, x, r)
S_{t+1} = f(S_t, c_{t+1}) \qquad (5.1)

v_{t+1} = g(S_t, c_{t+1}) \qquad (5.2)

The functions f(\cdot) and g(\cdot) are generally time varying.
The state sequence from time 0 to t is denoted by S_0^t and is written as

S_0^t = (S_0, S_1, \ldots, S_t) \qquad (5.3)
The state sequence is a Markov process, so that the probability P(S_{t+1} | S_0, S_1, \ldots, S_t) of being in state S_{t+1}, at time (t+1), given all states up to time t, depends only on the state S_t, at time t,

P(S_{t+1} \mid S_0, S_1, \ldots, S_t) = P(S_{t+1} \mid S_t) \qquad (5.4)
The encoder output sequence from time 1 to t is represented as

v_1^t = (v_1, v_2, \ldots, v_t) \qquad (5.5)

where

v_t = (v_{t,0}, v_{t,1}, \ldots, v_{t,n-1}) \qquad (5.6)

is the code block of length n.
The code sequence v_1^t is modulated by a BPSK modulator. The modulated sequence is denoted by x_1^t and is given by

x_1^t = (x_1, x_2, \ldots, x_t) \qquad (5.7)

where

x_t = (x_{t,0}, x_{t,1}, \ldots, x_{t,n-1}) \qquad (5.8)

and

x_{t,i} = 2v_{t,i} - 1, \quad i = 0, 1, \ldots, n-1 \qquad (5.9)
As there is one-to-one correspondence between the code and
modulated sequence, the encoder/modulator pair can be repre-
sented by a discrete-time finite-state Markov process and can be
graphically described by state or trellis diagrams.
120 Trellis Based Decoding of Linear Codes
v = (v_{1,1}, \ldots, v_{1,\log_2 M}, v_{2,1}, \ldots, v_{2,\log_2 M}, \ldots, v_{N',1}, \ldots, v_{N',\log_2 M})

x = (x_1, x_2, \ldots, x_{N'}) \qquad (5.14)
An optimum receiver is designed to minimize one of the follow-
ing error probabilities:
(5.16)
P_w = 1 - \int_r \Pr(c \mid r) \Pr(r)\, dr \qquad (5.17)

By Bayes' rule,

\Pr(c \mid r) = \frac{\Pr(c) \cdot \Pr(r \mid c)}{\Pr(r)} \qquad (5.18)
Assuming that the signals are equally likely, it suffices for the receiver to maximize the likelihood function Pr(r | c). A decoder that selects its estimate by maximizing Pr(r | c) is called a maximum likelihood (ML) decoder. The above expression shows that if the sequences are equally likely, MAP and ML decoders are equivalent in terms of word error probability.
The probability Pr(r | c) for the received sequence of length T can be expressed as

\Pr(r_1^T \mid c_1^T) = \Pr(r_1^T \mid x_1^T) = \prod_{t=1}^{T} \prod_{i=0}^{n-1} \frac{1}{\sqrt{2\pi}\sigma}\, e^{-\frac{(r_{t,i} - x_{t,i})^2}{2\sigma^2}} \qquad (5.19)

In order to simplify the operations we introduce the log function log Pr(r | c).
Summary of the VA
Example 5.1 Consider a rate 1/2 RSC code with the encoder shown in Fig. 5.2 (a). Its state and trellis diagrams are shown in Figs. 5.2 (b) and (c), respectively. For the received sequence, the branch metrics are shown in Fig. 5.3. As there are four message symbols, the transmitted sequence is extended by adding one input symbol equal to the feedback symbol, to terminate the trellis in the zero state.
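For reference, the VA itself can be sketched compactly. The fragment below decodes a rate 1/2 feedforward convolutional code with generators (7, 5) in octal under a soft-decision squared Euclidean metric; the code, trellis and received values are illustrative and are not those of Example 5.1:

```python
G = (0b111, 0b101)           # generators g0 = 7, g1 = 5 (octal), memory 2
N_STATES = 4

def step(state, u):
    """One encoder transition: returns (next_state, (v0, v1))."""
    reg = (u << 2) | state                       # register bits: u, s1, s2
    v = tuple(bin(reg & g).count("1") % 2 for g in G)
    return (reg >> 1) & 0b11, v

def viterbi(received, info_len):
    """Hard ML decoding by the Viterbi algorithm; received = [(r0, r1), ...]."""
    INF = float("inf")
    metrics = [0.0] + [INF] * (N_STATES - 1)     # start in state 0
    paths = [[] for _ in range(N_STATES)]
    for r in received:
        new_metrics = [INF] * N_STATES
        new_paths = [None] * N_STATES
        for s in range(N_STATES):
            if metrics[s] == INF:
                continue
            for u in (0, 1):
                ns, (v0, v1) = step(s, u)
                x0, x1 = 2 * v0 - 1, 2 * v1 - 1  # BPSK mapping
                m = metrics[s] + (r[0] - x0) ** 2 + (r[1] - x1) ** 2
                if m < new_metrics[ns]:          # keep the survivor per state
                    new_metrics[ns] = m
                    new_paths[ns] = paths[s] + [u]
        metrics, paths = new_metrics, new_paths
    return paths[0][:info_len]                   # survivor ending in state 0

# Encode 1 0 1 1 plus two zero tail bits (which drive a feedforward encoder
# back to state 0), then decode the noiseless BPSK stream.
info = [1, 0, 1, 1]
state, tx = 0, []
for u in info + [0, 0]:
    state, (v0, v1) = step(state, u)
    tx.append((2 * v0 - 1, 2 * v1 - 1))
print(viterbi(tx, len(info)))  # -> [1, 0, 1, 1]
```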
Fig. 5.2: (a) Encoder, (b) state diagram and (c) trellis diagram for the rate 1/2 RSC code (solid lines: input bit c_t = 1; dashed lines: input bit c_t = 0)
The SOVA estimates the soft output information for each transmitted binary symbol in the form of the log-likelihood function
[Fig. 5.3: branch metrics over times 0 to 5 (solid lines: input bit c_t = 1; dashed lines: input bit c_t = 0)]
Fig. 5.4: The survivors and their path metrics in Example 5.1
\Lambda(c_t), as follows
The Bidirectional Soft Output Viterbi Algorithm 129
\hat{c}_t = \begin{cases} 1, & \text{if } \Lambda(c_t) \ge 0 \\ 0, & \text{otherwise} \end{cases} \qquad (5.25)
The decoder selects the path x with the minimum path metric \mu_{T,min} as the maximum likelihood (ML) path in the same way as the standard VA. The probability of selecting this path, from Eqs. (5.18), (5.19), (5.21) and (5.23), is proportional to

\Pr(c_t = 1 \mid r_1^T) \sim e^{-\mu_{T,min}} \qquad (5.26)
Let us denote by \mu_t^1 the minimum path metric over all paths for which c_t is 1 and by \mu_t^0 the minimum path metric over all paths for which c_t is 0. If the ML estimate at time t is 1, its complementary symbol at time t is 0. Then \mu_t^1 = \mu_{T,min} and \mu_t^0 = \mu_{t,c}, and the log-likelihood in Eq. (5.28) becomes

\log \frac{\Pr\{c_t = 1 \mid r_1^T\}}{\Pr\{c_t = 0 \mid r_1^T\}} \sim \mu_t^0 - \mu_t^1 \qquad (5.29)
Similarly, if the ML estimate at time t is 0, then \mu_t^0 = \mu_{T,min} and \mu_t^1 = \mu_{t,c}, so that

\log \frac{\Pr\{c_t = 1 \mid r_1^T\}}{\Pr\{c_t = 0 \mid r_1^T\}} \sim \mu_{T,min} - \mu_{t,c} = \mu_t^0 - \mu_t^1 \qquad (5.30)
As Eqs. (5.29) and (5.30) indicate, regardless of the value of the ML hard estimate, the log-likelihood ratio can be expressed as

\log \frac{\Pr\{c_t = 1 \mid r_1^T\}}{\Pr\{c_t = 0 \mid r_1^T\}} \sim \mu_t^0 - \mu_t^1 \qquad (5.31)
That is, the soft output of the decoder can be obtained as the difference between the minimum path metric among all the paths with symbol 0 at time t and the minimum path metric among all the paths with symbol 1 at time t. The sign of \Lambda(c_t) determines the hard estimate at time t and its absolute value represents the soft output information that can be used for decoding in the next stage.
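The rule \Lambda(c_t) = \mu_t^0 - \mu_t^1 can be illustrated by brute force: enumerate every information sequence, compute each path metric, and take the two constrained minima at each position. The encoder (a feedforward (7, 5) code) and the received values below are hypothetical:

```python
import itertools

def encode(bits, G=(0b111, 0b101)):
    """Rate-1/2 feedforward encoder, memory 2; returns BPSK symbols per step."""
    state, out = 0, []
    for u in bits:
        reg = (u << 2) | state
        v = [bin(reg & g).count("1") % 2 for g in G]
        out.append([2 * b - 1 for b in v])
        state = (reg >> 1) & 0b11
    return out

def path_metric(bits, received):
    """Squared Euclidean distance between the path's BPSK output and received."""
    x = encode(bits)
    return sum((r - xi) ** 2 for rv, xv in zip(received, x)
               for r, xi in zip(rv, xv))

def sova_like_llrs(received, n):
    """Lambda(c_t) = mu_t^0 - mu_t^1 via exhaustive search over all 2^n inputs."""
    llrs = []
    for t in range(n):
        mu = {0: float("inf"), 1: float("inf")}
        for bits in itertools.product((0, 1), repeat=n):
            mu[bits[t]] = min(mu[bits[t]], path_metric(list(bits), received))
        llrs.append(mu[0] - mu[1])
    return llrs

# Noisy observation of the input 1 0 1 (hypothetical values): the LLR signs
# reproduce the transmitted bits (positive -> 1, negative -> 0).
rx = [[0.9, 1.1], [1.2, -0.8], [-1.0, -0.9]]
print(sova_like_llrs(rx, 3))
```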
If the decision is made on a finite length block, as in block codes, turbo codes or convolutional codes in TDMA systems, the SOVA can be implemented as a bidirectional recursive method with forward and backward recursions.
The SOVA can be summarized as follows.

II Backward recursion

1. Set t = 0

2. Increase time t by 1

- At time t, identify the maximum likelihood estimate \hat{c}_t = i, i = 0, 1

- Determine \mu_t^i as \mu_t^i = \mu_{T,min}

- Determine the path metric of the complementary symbol \bar{i} as

\mu_t^{\bar{i}} = \min_{l',l}\left\{ \mu_{t-1}^f(l') + \nu_t^{\bar{i}}(l', l) + \mu_t^b(l) \right\} \qquad (5.32)
(5.33)
and the encoder starts and ends at state 0. The branch metrics for
this received sequence are shown in Fig. 5.5. They are computed
as in the standard VA, Eq. (5.22). The result of applying the VA
in the forward recursion is shown in Fig. 5.6. The survivor path
metrics are shown above each node and the ML path is represented
by the thick line. The final survivor is the ML path with the metric
/l-4,min = 0.04.
The results of the backward recursion are shown in Fig. 5.7.
The first number above each node shows the forward survivor path
metric, the second number shows the backward survivor path met-
ric.
At time t = 1, the ML hard estimate is \hat{c}_1 = 1 and thus \mu_1^1 = \mu_{4,min} = 0.04. \mu_1^0 is the minimum path metric of the paths that have 0 at time 1. There is only one path with zero at time 1, and
[Fig. 5.5: branch metrics over times 0 to 4 (solid lines: input bit c_t = 1; dashed lines: input bit c_t = 0)]
Fig. 5.6: The forward recursion in Example 5.2, the ML path is shown
by the thick line
Fig. 5.7: The backward recursion in Example 5.2, the ML path is shown
by the thick line
its path metric is calculated from Fig. 5.7 as the sum of the forward
and backward survivor path metrics at node 0,
where \mu_1^f(l') is the forward survivor path metric at time 1 and node l', l' = 1, 0, \nu_2^1(l', l) is the branch metric at time 2 for the input symbol 1 and \mu_2^b(l) is the backward survivor path metric at time 2 and node l, l = 1, 0.

\mu_2^1 = \min\{(0 + 8 + 3.24), (8 + 4 + 0.04)\} = 11.24 \qquad (5.38)
The log-likelihood ratio at time 2 is given by

\Lambda(c_2) = \mu_2^0 - \mu_2^1 = 0.04 - 11.24 = -11.2 \qquad (5.39)
By applying this procedure to other time instants we obtain the other soft outputs

\Lambda(c_3) = 11.2 \qquad (5.40)
The computational complexity of the forward recursion is equiv-
alent to that of the Viterbi algorithm. The computational com-
plexity of the backward recursion is usually less than that of the
Viterbi algorithm, since there is no need to store backward sur-
vivors. Therefore, the computational complexity of the SOVA is
upper-bounded by two times that of the VA. With binary inputs, the computational complexity is about 1.5 times that of the VA. The algorithm can be directly extended to the case k > 1, by considering all 2^k - 1 complement symbols to the ML symbol for each node in (5.32).
It can be similarly generalized to handle non-binary modulation
schemes.
Sliding Window SOVA 135
[Figure: forward and backward processing schedules for sub-blocks 1, 2 and 3 over the intervals [0, D], [D, 2D] and [2D, 3D]]

Fig. 5.8: Forward and backward processing for the simplified SOVA
\Lambda(c_t) = \log \frac{\Pr\{c_t = 1 \mid r\}}{\Pr\{c_t = 0 \mid r\}} \qquad (5.41)

for 1 \le t \le \tau, where \tau is the received sequence length, and compares this value to a zero threshold to determine the hard estimate \hat{c}_t as

\hat{c}_t = \begin{cases} 1, & \text{if } \Lambda(c_t) \ge 0 \\ 0, & \text{otherwise} \end{cases} \qquad (5.42)
The value A(ct) represents the soft information associated with the
hard estimate Ct. It might be used in a next decoding stage.
Consider again the system model shown in Fig. 5.1. For simplicity, we assume that a binary sequence c of length N is encoded by a systematic convolutional code of rate 1/n. The encoding process is modelled by a discrete-time finite-state Markov process described by a state and a trellis diagram with the number of states M_s. We assume that the initial state S_0 = 0 and the final state S_\tau = 0. The received sequence r is corrupted by a zero-mean Gaussian noise with variance \sigma^2.
As an example a rate 1/2 memory order 2 RSC encoder is shown
in Fig. 5.9, and its state and trellis diagrams are illustrated in Figs.
5.10 and 5.11, respectively.
The content of the shift register in the encoder at time t represents S_t and it transits into S_{t+1} in response to the input c_{t+1}, giving as output the coded block v_{t+1}. The state transition of the encoder is shown in the state diagram.
The state transitions of the encoder are governed by the transition probabilities

p_t(l \mid l') = \Pr\{S_t = l \mid S_{t-1} = l'\}; \quad 0 \le l, l' \le M_s - 1

The encoder output is determined by the probabilities

q_t(x_t \mid l', l) = \Pr\{x_t \mid S_{t-1} = l', S_t = l\}; \quad 0 \le l, l' \le M_s - 1
The MAP Algorithm 139
For the encoder in Fig. 5.9, p_t(l \mid l') is either 0.5, when there is a connection from S_{t-1} = l' to S_t = l, or 0 when there is no connection. q_t(x \mid l', l) is either 1 or 0. For example, from Figs. 5.10 and 5.11 we have

p_t(2 \mid 0) = p_t(1) = 0.5; \quad p_t(1 \mid 2) = p_t(1) = 0.5

p_t(3 \mid 0) = 0; \quad p_t(1 \mid 3) = p_t(0) = 0.5 \qquad (5.43)
and the encoding process starts at the initial state S_0 = 0 and produces an output sequence x_1^\tau ending in the terminal state S_\tau = 0, where \tau = N + v. The input to the channel is x_1^\tau and the output is r_1^\tau = (r_1, r_2, \ldots, r_\tau).
Fig. 5.10: State transition diagram for the (2,1,2) RSC code
where

R(r_j \mid x_j) = \prod_{i=0}^{n-1} \Pr(r_{j,i} \mid x_{j,i}) \qquad (5.46)

and

\Pr\{r_{j,i} \mid x_{j,i} = -1\} = \frac{1}{\sqrt{2\pi}\sigma}\, e^{-\frac{(r_{j,i}+1)^2}{2\sigma^2}} \qquad (5.47)

\Pr\{r_{j,i} \mid x_{j,i} = +1\} = \frac{1}{\sqrt{2\pi}\sigma}\, e^{-\frac{(r_{j,i}-1)^2}{2\sigma^2}} \qquad (5.48)
[Figure: trellis diagram with states S = 01, 10, 11 (and 00) over times t = 0 to t = 4]
the input to the Markov source, by examining r_1^\tau. The MAP algorithm provides the log likelihood ratio, denoted by \Lambda(c_t), given the received sequence r_1^\tau, as indicated in Eq. (5.41). The decoder makes a decision by comparing \Lambda(c_t) to a threshold equal to zero.
We can compute the APP's in (5.41) as

\Pr\{c_t = 0 \mid r_1^\tau\} = \sum_{(l',l) \in B_t^0} \Pr\{S_{t-1} = l', S_t = l \mid r_1^\tau\} = \sum_{(l',l) \in B_t^0} \frac{\Pr\{S_{t-1} = l', S_t = l, r_1^\tau\}}{\Pr\{r_1^\tau\}} \qquad (5.49)

where B_t^0 is the set of transitions S_{t-1} = l' \to S_t = l that are caused by the input bit c_t = 0. For example, the transitions in B_t^0 for the diagram in Fig. 5.11 are (3,1), (0,0), (1,2) and (2,3).
Also

\Pr\{c_t = 1 \mid r_1^\tau\} = \sum_{(l',l) \in B_t^1} \Pr\{S_{t-1} = l', S_t = l \mid r_1^\tau\}

Denoting \sigma_t(l', l) = \Pr\{S_{t-1} = l', S_t = l, r_1^\tau\}, we can write

\Pr\{c_t = 0 \mid r_1^\tau\} = \sum_{(l',l) \in B_t^0} \frac{\sigma_t(l', l)}{\Pr\{r_1^\tau\}} \qquad (5.51)
(5.53)
\Lambda(c_t) = \log \frac{\sum_{(l',l) \in B_t^1} \alpha_{t-1}(l')\, \gamma_t^1(l', l)\, \beta_t(l)}{\sum_{(l',l) \in B_t^0} \alpha_{t-1}(l')\, \gamma_t^0(l', l)\, \beta_t(l)} \qquad (5.58)
Derivation of \alpha

We can obtain \alpha defined in (5.54) as

\alpha_t(l) = \Pr\{S_t = l, r_1^t\} = \sum_{l'=0}^{M_s-1} \Pr\{S_{t-1} = l', S_t = l, r_1^t\} = \sum_{l'=0}^{M_s-1} \Pr\{S_{t-1} = l', S_t = l, r_1^{t-1}, r_t\}
Derivation of \beta

We can express \beta_t(l) defined in (5.55) as

Derivation of \gamma

We can write for \gamma_t^i(l', l) defined in (5.56)
"n-l( i _ • (1))2)
. P (i)exp ( - for (1, 1') E Bti
1'; (1', 1) = { t
L.Jj=O Tt,J Xt,j
20- 2
o otherwise
(5.61)
where pt(i) is the a priori probability of Ct = i and xtj(l) is the en-
coder output associated with the transition St-l = [' to St = [ and
input Ct = i. Note that the expression for R (rtIXt) is normalized
by multiplying (5.46) (y"Fia) n.
2. Backward recursion
[16].
It is also worth mentioning that the branch metric is dependent on the noise variance \sigma^2, while the branch metric for the VA and SOVA is not. However, it has been shown that, at high SNR's, the accurate estimation of the noise variance is not very important [17]. If the final state of the trellis is not known, the probability \beta_\tau(l) can be initialized as

\beta_\tau(l) = \frac{1}{M_s}, \quad l = 0, 1, \ldots, M_s - 1 \qquad (5.66)
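The forward recursion, backward recursion and the combination step (5.58) can be collected into one compact sketch. The code below runs the MAP algorithm on a hypothetical 2-state accumulator-style trellis (systematic bit plus a running-XOR parity bit), with uniform a priori probabilities and the final state treated as unknown:

```python
import math

SIGMA = 1.0  # noise standard deviation (assumed known)

def transitions():
    """2-state accumulator code: v0 = u (systematic), v1 = next_state = s XOR u."""
    trans = []  # (from_state, to_state, input_bit, (x0, x1)) with BPSK outputs
    for s in (0, 1):
        for u in (0, 1):
            ns = s ^ u
            trans.append((s, ns, u, (2 * u - 1, 2 * ns - 1)))
    return trans

def gamma(r, x, p=0.5):
    """Branch metric, eq. (5.61) with uniform a priori probability p."""
    d2 = sum((ri - xi) ** 2 for ri, xi in zip(r, x))
    return p * math.exp(-d2 / (2 * SIGMA**2))

def map_llrs(received):
    T, trans = len(received), transitions()
    alpha = [[1.0, 0.0]] + [[0.0, 0.0] for _ in range(T)]  # start in state 0
    beta = [[0.0, 0.0] for _ in range(T)] + [[0.5, 0.5]]   # final state unknown
    for t in range(T):                                     # forward recursion
        for s, ns, u, x in trans:
            alpha[t + 1][ns] += alpha[t][s] * gamma(received[t], x)
    for t in range(T - 1, -1, -1):                         # backward recursion
        for s, ns, u, x in trans:
            beta[t][s] += beta[t + 1][ns] * gamma(received[t], x)
    llrs = []
    for t in range(T):                                     # combine as in (5.58)
        num = sum(alpha[t][s] * gamma(received[t], x) * beta[t + 1][ns]
                  for s, ns, u, x in trans if u == 1)
        den = sum(alpha[t][s] * gamma(received[t], x) * beta[t + 1][ns]
                  for s, ns, u, x in trans if u == 0)
        llrs.append(math.log(num / den))
    return llrs

# Noisy observation of the input 1 0 1 (hypothetical values); the signs of
# the soft outputs give the hard decisions.
rx = [(0.9, 1.1), (-0.8, 0.7), (1.2, -0.9)]
print([round(l, 2) for l in map_llrs(rx)])
```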
Solution

The trellis diagram for this code is shown in Fig. 5.14.

The branch metrics \gamma_t^{(i)}(l, l'), l, l' = 0, 1, t = 1, 2, \ldots, 6, are computed by Eq. (5.61).

Starting from the initial condition \alpha_0(0) = 1, \alpha_0(1) = 0 and using the forward recursion formula in Eq. (5.59), \alpha_t(l), l = 0, 1, t = 1, \ldots, 6, are computed.

Similarly, using the initial conditions \beta_6(0) = 1 and \beta_6(1) = 0, \beta_t(l), l = 0, 1, t = 1, 2, \ldots, 6, are computed by the backward recursion formula in Eq. (5.60).

The soft outputs are calculated by Eq. (5.65).

The hard estimates \hat{c}_t are obtained by comparing the soft output \Lambda_t(c_t) to the threshold of 0.

The values of c_t, \gamma_t^{(i)}(l, l'), \alpha_t(l), \beta_t(l), \Lambda_t(c_t) and \hat{c}_t are shown in Table 5.1.
The Max-Log-MAP Algorithm 149
Table 5.1

t                  1          2          3          4          5          6
c_t                1          1          0          0          1
r_t^1              0.030041   -0.570849  -0.38405   -0.744790  0.525812   0.507154
r_t^2              0.726249   -0.753015  -1.107597  -0.495092  1.904994   -1.591323
\gamma_t^0(0,0)    0.034678   0.425261   0.386226   0.404741   0.000408   0.088558
\gamma_t^0(1,1)    0.236152   0.058185   0.020712   0.109450   0.062573   0.001323
\gamma_t^1(0,1)    0.255655   0.012881   0.007510   0.015309   0.250953   0.005052
\gamma_t^1(1,0)    0.037542   0.094143   0.140044   0.056593   0.001638   0.338085
\alpha_t(0)        0.034678   0.038815   0.017137   0.006971   0.000003   0.000599
\alpha_t(1)        0.255655   0.015322   0.000609   0.000329   0.001770   0.000002
\beta_t(0)         0.005783   0.013448   0.034680   0.089880   0.088558   1
\beta_t(1)         0.001557   0.005005   0.007135   0.021300   0.338085   0
\Lambda_t(c_t)     0.685647   0.177998   -1.920772  -4.239018  4.407100   7.598074
\hat{c}_t          1          1          0          0          1
Fig. 5.14: Trellis diagram for the encoder in Example 5.3
\tilde{\beta}_t(l) = \log \beta_t(l) = \log \sum_{l'=0}^{M_s-1} \sum_{i \in (0,1)} e^{\tilde{\beta}_{t+1}(l') + \tilde{\gamma}_{t+1}(l, l')} \qquad (5.71)

(5.72)
(5.76)

can be implemented by a look-up table. It has been shown that it is enough to store 8 values of |\delta_2 - \delta_1|, ranging between 0 and 5
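The correction term \ln(1 + e^{-|\delta_2 - \delta_1|}) and its 8-entry look-up table over [0, 5) can be sketched as follows; the table resolution here is an assumption for illustration:

```python
import math

# 8-entry look-up table for f(x) = ln(1 + e^(-x)), x in [0, 5)
STEP = 5.0 / 8
LUT = [math.log(1 + math.exp(-(i + 0.5) * STEP)) for i in range(8)]

def max_star_exact(d1, d2):
    """Exact Jacobian logarithm: ln(e^d1 + e^d2) = max + correction."""
    return max(d1, d2) + math.log(1 + math.exp(-abs(d2 - d1)))

def max_star_lut(d1, d2):
    """Table-based approximation; beyond 5 the correction is ~0."""
    x = abs(d2 - d1)
    corr = LUT[int(x / STEP)] if x < 5 else 0.0
    return max(d1, d2) + corr

for a, b in [(0.0, 0.3), (1.0, 2.5), (-2.0, 4.0)]:
    print(round(max_star_exact(a, b), 3), round(max_star_lut(a, b), 3))
```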
Comparison of Decoding Algorithms 153
[Figure: BER versus E_b/N_0 (dB), comparing the MAP algorithm and the SOVA]
Iterative Decoding
(6.5)
Iterative Decoding of Turbo Codes Based on the MAP Algorithm 159
If the random variables r' and r" are uncorrelated we can assume
(6.6)
[Figure: MAP Decoder 1 and MAP Decoder 2 connected through interleavers and deinterleavers, exchanging the extrinsic information \Lambda_{1e} and \Lambda_{2e}; inputs r_0, r_1, r_2]
Fig. 6.2: An iterative turbo code decoder based on the MAP algorithm
(6.8)
where we introduce the notation p~(l) and pHD) for the a priori
probabilities for 1 and 0 at the input of the first decoder, respec-
tively, since they will differ from the corresponding a priori prob-
abilities at the input of the second decoder denoted by p;(l) and
p;(O).
In the initial decoding operation in the first decoder we assume
that
p~(l) =p~(O) = 1/2 (6.9)
\Lambda_1(c_t) = \log \frac{p_t^1(1)}{p_t^1(0)} + \log \frac{\sum_{l'=0}^{M_s-1} \alpha_{t-1}(l') \exp\left(-\frac{(r_{t,0} - x_{t,0}^1)^2 + \sum_{j=1}^{n-1}(r_{t,j} - x_{t,j}^1(l))^2}{2\sigma^2}\right) \beta_t(l)}{\sum_{l'=0}^{M_s-1} \alpha_{t-1}(l') \exp\left(-\frac{(r_{t,0} - x_{t,0}^0)^2 + \sum_{j=1}^{n-1}(r_{t,j} - x_{t,j}^0(l))^2}{2\sigma^2}\right) \beta_t(l)} \qquad (6.10)
162 Iterative Decoding
where

\Lambda_{1e}(c_t) = \log \frac{\sum_{l'=0}^{M_s-1} \alpha_{t-1}(l') \exp\left(-\frac{\sum_{j=1}^{n-1}(r_{t,j} - x_{t,j}^1(l))^2}{2\sigma^2}\right) \beta_t(l)}{\sum_{l'=0}^{M_s-1} \alpha_{t-1}(l') \exp\left(-\frac{\sum_{j=1}^{n-1}(r_{t,j} - x_{t,j}^0(l))^2}{2\sigma^2}\right) \beta_t(l)} \qquad (6.11)
\Lambda_{1e}(c_t) is called the extrinsic information. It is a function of the redundant information introduced by the encoder. It does not contain the information decoder input r_{t,0}. This quantity may be used to improve the a priori probability estimate for the next decoding stage.
Let us observe the input to the second MAP decoder. Since the input to the second decoder includes the interleaved version of r_0, the received information signal correlates with the interleaved soft output from the first decoder, \tilde{\Lambda}_1(c_t). Therefore, the contribution due to r_{t,0} must be taken out from \Lambda_1(c_t), to eliminate this correlation.
However, \Lambda_{1e}(c_t) does not contain r_{t,0} and it can be used as the a priori probability for decoding in the second stage. That is, the interleaved extrinsic information of the first decoder, \tilde{\Lambda}_{1e}, is the a priori probability estimate for the second decoder
\tilde{\Lambda}_{1e}(c_t) = \log \frac{p_t^2(1)}{p_t^2(0)} \qquad (6.12)
and
(6.15)
(6.16)
(6.17)
\Lambda_{2e}(c_t) is the extrinsic information for the second decoder, which depends on the redundant information supplied by the second encoder, as in Eq. (6.11). The second decoder extrinsic information can be used as the estimate of the a priori probabilities for the first decoder, as in (6.12). The log-likelihood ratio for the first decoder can be written as

(6.18)
\Lambda_{1e}^{(r)}(c_t) = \Lambda_1^{(r)}(c_t) - \frac{2}{\sigma^2} r_{t,0} - \tilde{\Lambda}_{2e}^{(r-1)}(c_t) \qquad (6.19)

\Lambda_{2e}^{(r)}(c_t) = \Lambda_2^{(r)}(c_t) - \frac{2}{\sigma^2} \tilde{r}_{t,0} - \tilde{\Lambda}_{1e}^{(r)}(c_t) \qquad (6.20)
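The bookkeeping in (6.19) and (6.20) — subtracting the channel term and the incoming a priori term from the full LLR so that only new, extrinsic information is passed on — can be sketched independently of the MAP decoders themselves. The "decoder" below is a stand-in, not a MAP implementation, and all values are hypothetical:

```python
def extrinsic(total_llr, channel_llr, apriori_llr):
    """Eq. (6.19)/(6.20): Le = L - (2/sigma^2) r - La, elementwise."""
    return [L - Lc - La for L, Lc, La in zip(total_llr, channel_llr, apriori_llr)]

# Stand-in decoder: models the full LLR as channel + a priori + new information.
def fake_decoder(channel_llr, apriori_llr, new_info):
    return [Lc + La + Ln for Lc, La, Ln in zip(channel_llr, apriori_llr, new_info)]

channel = [1.2, -0.4, 0.9]   # (2/sigma^2) * r_{t,0}, hypothetical values
apriori = [0.0, 0.0, 0.0]    # first iteration: bits equally likely
new = [0.5, -1.1, 0.3]       # information contributed by the parity bits
total = fake_decoder(channel, apriori, new)
# Subtracting channel and a priori terms recovers exactly the new information,
# which is what gets interleaved and handed to the other decoder.
print([round(x, 6) for x in extrinsic(total, channel, apriori)])  # -> [0.5, -1.1, 0.3]
```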
[Figure: BER versus E_b/N_0 (dB); curves for I = 1, 3, 5, 8, 12, 15, 18 iterations]
Fig. 6.3: BER performance of a 16 state, rate 1/3 turbo code with
MAP algorithm on an AWGN channel, interleaver size 4096 bits, variable
number of iterations.
[Figure: BER versus E_b/N_0 (dB) over 0 to 0.3 dB; curves for I = 1, 3, 5, 8, 12, 15, 18 iterations]
Fig. 6.4: BER performance of a 16 state, rate 1/3 turbo code with MAP
algorithm on an AWGN channel, interleaver size 16384 bits, variable
number of iterations
[Figure: BER versus E_b/N_0 (dB); curves: N=420 I=8, N=1024 I=8, N=2048 I=8, N=4096 I=18, N=8192 I=18, N=16384 I=18]
Fig. 6.5: BER performance of a 16 state, rate 1/3 turbo code with
MAP algorithm on an AWGN channel, interleaver size N, the number
of iterations 18
P = [ 1 0
      0 1 ]
The code is obtained from the rate 1/3 turbo code with the generator
polynomials g_0 = (37) and g_1 = (21). The performance loss relative
to the rate 1/3 code for interleaver size 4096 is about 0.7 dB at a
bit error rate of 10^-4.
Fig. 6.6: BER performance of a 16 state, rate 1/2 turbo code with the
MAP algorithm on an AWGN channel, interleaver size N = 4096, 8192 and
16384, the number of iterations 18.
The simulation results for a rate 2/3, 16-state turbo code, obtained
by puncturing the rate 1/2 turbo code in the previous example, are
shown in Fig. 6.7. The puncturing matrix is

P = [ 1 1 1 0
      1 0 1 1 ]
Fig. 6.7: BER performance of a 16 state, rate 2/3 turbo code with the
MAP algorithm on an AWGN channel, interleaver size N = 4096, 8192 and
16384, the number of iterations 18.
Fig. 6.8: Simulation result of a 16 state, rate 1/3 turbo code with
MAP decoding, interleaver size 1024 bits, variable number of iterations
(I = 1, 3, 5, 8, 10), and the theoretical average bound on an AWGN
channel.
Approaching the optimum performance requires a higher number of
iterations at low SNR's. For example, 5 iterations are needed at a BER
of 10^-5, while at a BER of 10^-3 at least 8 iterations are required.
It is worth noting that, for particular interleaver designs, it is
possible to achieve better results than for random and uniform in-
terleavers, as discussed in Chapter 7.
(6.21)
Fig. 6.9: Simulation result of a 16 state, rate 1/3 turbo code with
MAP decoding, interleaver size 1024 bits, the number of iterations 10,
and the theoretical lower bound on an AWGN channel.
(6.22)
or its logarithm
−nT log σ − Σ_{t=1}^{T} Σ_{i=0}^{n−1} (r_{t,i} − x_{t,i})² / (2σ²)   (6.24)
The SOVA decoder selects the path with the minimum path metric. A
SOVA decoder in the iterative scheme shown in Fig. 6.10 computes the
soft output as

Λ_1(c_t) = μ_t^0 − μ_t^1   (6.28)

where μ_t^0 and μ_t^1 are the minimum path metrics with symbol 0 and
symbol 1, respectively, at time t, μ_{T,min} is the maximum likelihood
path metric and μ_{t,c} is the best competitor path metric, as
described in Section 5.5. They can be computed by the following
procedure.
At time t, the maximum likelihood estimate ĉ_t is obtained from the
maximum likelihood path, with metric μ_{T,min} computed by using the
Viterbi algorithm.
Fig. 6.10: An iterative turbo code decoder based on the SOVA algorithm
where

μ_{t,c} = min_{l',l} [ μ_{t−1}^{f}(l') + v_t^{c}(l', l) + μ_t^{b}(l) ]

where l', l = 0, 1, …, M_s − 1, μ_{t−1}^{f}(l') is the path metric of
the forward survivor at time t − 1 and node l', μ_t^{b}(l) is the path
metric of the backward survivor at time t, M_s is the number of nodes
in the trellis and v_t^{c}(l', l) is the branch metric of the
complement branch from node l' to node l. μ_{t,c} can be represented as
μ_{t,c} = μ_t″ + v_t^{c}   (6.34)
where μ_t″ is a positive number, similar to (6.31), and v_t^{c} is
given by

v_t^{c} = Σ_{i=0}^{n−1} (r_{t,i} − x_{t,i}^{c})² − log p_t(c)   (6.35)

where x_{t,i}^{c} is the ith modulated coded signal in the trellis,
generated by input c.
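The branch metric of Eq. (6.35) combines a squared Euclidean distance with the log of the input bit's a priori probability. A small sketch under those assumptions (function name and call signature are illustrative, not from the book):

```python
import math

def branch_metric(r, x, p_c):
    """Branch metric of Eq. (6.35): squared Euclidean distance between the
    received samples r and the candidate modulated code symbols x, minus
    the log a priori probability p_c of the input bit generating x."""
    return sum((ri - xi) ** 2 for ri, xi in zip(r, x)) - math.log(p_c)
```

With equiprobable inputs the a priori term is a constant and the metric reduces to the familiar Euclidean distance.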
By substituting the path metrics from Eqs. (6.30) and (6.34)
into Eq. (6.28), we can compute the soft output for the first decoder,
Λ_1(c_t), as

(6.36)

where μ′_{t,1}, μ″_{t,1}, v^{c}_{t,1} and v″_{t,1} refer to the
quantities μ′_t, μ″_t, v^{c}_t and v″_t for the first decoder, and
where Λ_{1e}(c_t) is the extrinsic information for the first decoder
given by
and p_t(1) and p_t(0) are the a priori probabilities for binary 1 and
0, respectively, at the input of the first decoder.
The interleaved extrinsic information of the first decoder improves
the a priori probabilities at the input of the second SOVA decoder, as
in the MAP iterative decoder

(6.39)

where p̃_t(1) and p̃_t(0) are the a priori probabilities for binary 1
and 0, respectively, at the input of the second decoder at time t,
which can be estimated by Eqs. (6.14) and (6.15).
The second decoder computes the soft output Λ_2(c_t), which can be
written as

(6.40)

where μ^0_{t,2} and μ^1_{t,2} refer to the quantities μ^0_t and μ^1_t
for the second decoder, respectively.
Substituting the extrinsic information from the first decoder,
Λ̃_{1e}(c_t), from Eq. (6.39) into Eq. (6.40), we get for Λ_2(c_t)
(6.42)
The interleaved extrinsic information from the second decoder,
A2e (Ct), is used as the estimate of the logarithm of the encoder input
probabilities in the next iteration
(6.43)
Λ_{1e}^{(r)}(c_t) = Λ_1^{(r)}(c_t) − 4 r_{t,0} − Λ̃_{2e}^{(r−1)}(c_t)   (6.45)

Λ_{2e}^{(r)}(c_t) = Λ_2^{(r)}(c_t) − 4 r_{t,0} − Λ̃_{1e}^{(r)}(c_t)   (6.46)
Fig. 6.11: BER performance of a 16 state, rate 1/3 turbo code with the
MAP, Log-MAP and SOVA algorithms on an AWGN channel, interleaver size
4096 bits, the number of iterations 18
(Block diagram: an iterative decoder for a serial concatenated code,
in which the inner MAP decoder and the outer MAP decoder exchange
extrinsic information Λ_{ie} and Λ_{oe} through a deinterleaver and an
interleaver.)
output of the outer decoder, so the soft output does not contain
the contribution of the information bit.
The extrinsic information Λ_{oe}(c_{t,j}) is interleaved and passed to
the inner decoder as the a priori probability for the next iteration.
Λ_{ie}^{(r)}(c_{t,j}) = Λ_i^{(r)}(c_{t,j}) − (2/σ²) r_{t,j} − Λ̃_{oe}^{(r−1)}(c_{t,j})   (6.49)

Λ_{oe}^{(r)}(c_{t,j}) = Λ_o^{(r)}(c_{t,j}) − Λ̃_{ie}^{(r)}(c_{t,j})   (6.50)

where j = 0, 1, …, n_o − 1.
(Block diagram: an iterative decoder for a serial concatenated code
based on the SOVA algorithm, with inner and outer SOVA decoders
connected through a deinterleaver and an interleaver.)
– Compute Λ_{oe}^{(r)}(c_{t,j}) as

(6.55)
The generator matrix for the inner code, denoted by G_i(D), is given by

G_i(D) = [ 1  0  (1 + D²)/(1 + D + D²)
           0  1  (1 + D)/(1 + D + D²) ]   (6.56)
As the simulation results indicate, the bit error probability can
be reduced by increasing the interleaver size.
A comparison of turbo and serial concatenated codes is illustrated
in Fig. 6.15. The figure shows that the turbo code outperforms the
corresponding serial concatenated code for BER's above 10^-5.
At lower BER's, the serial concatenated code is superior, since the
BER of the turbo code levels out in this region due to its small free
distance, while the BER of the serial concatenated code shows no error
floor even at a BER of 10^-10.
The effect of the variable number of iterations on the BER per-
formance on AWGN channels is shown in Fig. 6.16. As with the
turbo codes, increasing the number of iterations reduces the BER,
up to a certain threshold, which increases with the interleaver size.
(Figure: BER performance of the serial concatenated code for
interleaver sizes N = 512, 1024, 2048, 4096, 8192 and 16384.)
Fig. 6.15: Comparison of a rate 1/3, memory order 2 turbo code with
interleaver size 4096 bits and a rate 1/3 serial concatenated code with
memory order 2 outer code, interleaver size 4096 bits, on an AWGN
channel, SOVA decoding algorithm, the number of iterations 18.
Fig. 6.16: BER performance of a rate 1/3 serial concatenated code with
a rate 1/2, 4 state outer code and a rate 2/3, 4 state inner code with
the SOVA algorithm on an AWGN channel, interleaver size 4096 bits,
variable number of iterations (I = 1, 3, 5, 8, 12, 15, 18)
Fig. 6.17: Comparison of a rate 1/3 turbo code for different memory
orders with the SOVA algorithm on an AWGN channel, interleaver size
1024 bits, the number of iterations 12
Fig. 6.18: Comparison of a rate 1/3 turbo code for different memory
orders with the SOVA algorithm on an AWGN channel, interleaver size
4096 bits, the number of iterations 18
Fig. 6.19: Comparison of a rate 1/3 serial concatenated code for
different outer code memory orders with the SOVA algorithm on an AWGN
channel, interleaver size 1024 bits, the number of iterations 12.
Fig. 6.20: Comparison of a rate 1/3 serial concatenated code for
different outer code memory orders with the SOVA algorithm on an AWGN
channel, interleaver size 4096 bits, the number of iterations 18
- - "MAP, 1=12
- - "SOVA, 1=12
II:
~
10-3
Fig. 6.21: Performance comparison of MAP and SOVA for a rate 1/3
serial concatenated convolutional code
7 Interleavers

7.1 Interleaving
Interleaving is the process of rearranging the ordering of a data
sequence in a one-to-one deterministic manner. The inverse of this
process is deinterleaving, which restores the received sequence to its
original order.
(7.1)
where c_i ∈ {0, 1}, 1 ≤ i ≤ N. The interleaver permutes the
sequence c to a binary sequence
(c_1, c_2, c_3, c_4, c_5, c_6, c_7, c_8) → (c_2, c_4, c_1, c_6, c_3, c_8, c_5, c_7)
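A one-to-one deterministic interleaver and its inverse can be sketched as a stored permutation; the example below reproduces the eight-bit mapping above with 0-based indices (the function names are illustrative, not from the book):

```python
def interleave(c, pi):
    """Permute sequence c: the k-th output symbol is c[pi[k]] (0-based)."""
    return [c[p] for p in pi]

def deinterleave(v, pi):
    """Invert the permutation, restoring the original order."""
    c = [None] * len(v)
    for k, p in enumerate(pi):
        c[p] = v[k]
    return c

pi = [1, 3, 0, 5, 2, 7, 4, 6]   # 0-based version of the mapping above
c = [1, 2, 3, 4, 5, 6, 7, 8]    # stands for (c1, ..., c8)
v = interleave(c, pi)           # [2, 4, 1, 6, 3, 8, 5, 7]
assert deinterleave(v, pi) == c
```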
(7.6)
where Ed is the error coefficient and the set of all pairs of (d, Ed)
represents the code distance spectrum.
The distance spectra of the turbo code with various interleaver
sizes are calculated and substituted in (7.6) to calculate the bit
error probability. The results are shown in Figs. 7.3 and 7.4
for the interleaver sizes of 128, 256 and 512.
It is apparent from Fig. 7.3 that the shapes of the distance
spectra for the turbo code with various interleaver sizes are quite
similar. However, for the significant spectral lines which dominate the
turbo code performance, as discussed in Section 4.4, the error
coefficients decrease with increasing interleaver size. The spectral
thinning manifests itself in significant performance improvements
proportional to the interleaver gain. As Fig. 7.4 shows, the
interleaver gain grows linearly with the interleaver size.
Fig. 7.3: Distance spectra for a turbo code with various interleaver
sizes (N = 128, 256, 512)
Fig. 7.4: Bit error probability upper bounds for a turbo code with
various interleaver sizes (N = 128, 256, 512)
The previous analysis and discussion clearly show that the
interleaver size and structure affect the turbo code error performance
considerably. At low SNR's, the interleaver size is the only important
factor, as the code performance is dominated by the interleaver gain.
The effects induced by changing the interleaver structure in the low
SNR region are not significant. However, both the interleaver size and
structure affect the turbo code minimum free distance and the first
several distance spectral lines. They play an important role in
determining the code performance at high SNR's and, consequently, the
asymptotic performance of the turbo code. It is possible to design
particular interleavers which result in good code performance at high
SNR's. This is achieved by breaking several low weight input patterns.
write →
    1            2           …    n
    n+1          n+2         …    2n
    ⋮                             ⋮
    (m−1)n+1     (m−1)n+2    …    mn
read ↓
(m rows, n columns; the data are written row by row and read out
column by column)
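The write-row/read-column rule can be stated in a few lines; this sketch (function name illustrative) returns the read-out order of an m × n block interleaver:

```python
def block_interleave(data, m, n):
    """Write the data row by row into an m-by-n array, then read it out
    column by column."""
    assert len(data) == m * n
    return [data[row * n + col] for col in range(n) for row in range(m)]

# For m = 2 rows and n = 3 columns, write order 1 2 3 / 4 5 6
# gives read order 1 4 2 5 3 6.
```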
v_{1,1} v_{5,2} v_{3,1} v_{8,2} v_{5,1} v_{12,2} v_{7,1} v_{6,2} v_{9,1} v_{15,2} v_{11,1} v_{4,2} v_{13,1} v_{10,2} v_{15,1}
It is obvious from Table 7.1 that each odd information bit has a
coded parity digit associated with it. However, the even information
bits have no parity digits associated with them.
Now each odd and even information bit has a coded digit associated
with it. The coded sequence is obtained by multiplexing the coded
sequences from Tables 7.1 and 7.4, as shown in Table 7.5. With this
sequence the error protection is uniformly distributed, resulting in a
better decoder performance compared to the scheme with a non odd-even
interleaver.
v_{1,1} v_{6,2} v_{3,1} v_{2,2} v_{5,1} v_{12,2} v_{7,1} v_{8,2} v_{9,1} v_{4,2} v_{11,1} v_{14,2} v_{13,1} v_{10,2} v_{15,1}
c_1   c_2   c_3
c_4   c_5   c_6
c_7   c_8   c_9          (7.11)
c_10  c_11  c_12
c_13  c_14  c_15
(Diagram: (a) a convolutional interleaver and (b) the corresponding
deinterleaver, implemented with banks of shift registers with delays
0, B, 2B, …, (L−1)B.)
where the zeros are produced by the initial zero states of the shift
registers.
Ramsey's Type III (n_1, n_2) interleaver can be generated by using
the same function as for the convolutional interleaver, given by
(7.13), with n_1 = LB − 1 and n_2 = L. The interleaver has the
following property [14]:

(7.14)
19  16  13  10   7   4   1
20  17  14  11   8   5   2
21  18  15  12   9   6   3
whenever

|π(i) − π(j)| ≤ n_2 − 1,   i, j ∈ A   (7.15)
If the parameters B and L of the convolutional interleaver are
chosen properly, it can break some of the rectangular low weight input
patterns which appear in block interleavers, and it can give very good
performance [16]. However, from Eq. (7.13), we can see that for any
value of i, the corresponding value of π(i) is always larger than or
equal to i. That is, the output sequence is expanded by (L − 1)LB
symbols relative to the input sequence. One way to overcome the
sequence expansion is not to include the zeros in the interleaved
sequence. However, this changes the interleaving function and worsens
the interleaver performance.
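A convolutional interleaver can be sketched as L delay lines used cyclically, with branch k holding k·B symbols; the zeros below model the initial shift register states, so the output is expanded relative to the input, as noted above. The function name and the simple flushing policy are illustrative assumptions:

```python
from collections import deque

def convolutional_interleave(data, L, B):
    """Feed symbols cyclically into L branches; branch k is a FIFO
    preloaded with k*B zeros, so its symbols emerge delayed.  Extra
    zeros are appended to flush the longest branch."""
    lines = [deque([0] * (k * B)) for k in range(L)]
    out = []
    padded = list(data) + [0] * ((L - 1) * B * L)  # flushing zeros
    for t, sym in enumerate(padded):
        k = t % L
        lines[k].append(sym)
        out.append(lines[k].popleft())
    return out

# With L = 2, B = 1 and input [1, 2, 3, 4] the output is
# [1, 0, 3, 2, 0, 4]: the even branch passes symbols straight
# through, while the odd branch delays its symbols.
```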
c̃ = (c_1, c_17, c_12, c_4, c_20, c_15, c_7, c_2, c_18, c_10, c_5, c_21, …
Step 1. Choose randomly an integer i_1 from the set A = {1, 2, …, N},
according to a uniform distribution between 1 and N, with
probability p(i_1) = 1/N. The chosen integer i_1 is set to
be π(1).
but the data are read out diagonally, with certain row and column
jumps between each reading.
Let i and j be the row and column addresses for writing, and i_r
and j_r the row and column addresses for reading. For N = M × M,
where M is a power of 2, the non-uniform interleaving may be described
by [3]
i_r = (M/2 + 1) · (i + j) mod M
k = (i + j) mod L
j_r = {[P(k) · (j + 1)] − 1} mod M   (7.20)
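The read-address computation of Eq. (7.20) is easy to express directly. In the sketch below, P is a table of L values indexed by k; the concrete table used in the test is a placeholder, not the one from [3]:

```python
def nonuniform_read_address(i, j, M, L, P):
    """Read address (ir, jr) for the non-uniform interleaver of Eq. (7.20).
    (i, j) is the write address in an M-by-M array (M a power of 2) and
    P is a table of L integers (illustrative values only)."""
    ir = (M // 2 + 1) * (i + j) % M
    k = (i + j) % L
    jr = (P[k] * (j + 1) - 1) % M
    return ir, jr
```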
|π(i) − π(j)| > S_1 whenever |i − j| ≤ S_2   (7.21)
where S1 and S2 are two integers smaller than N. In a turbo en-
coder, these two parameters should, in general, be chosen to cor-
respond to the maximum input pattern lengths to be broken by
the interleaver. Thus, they should be chosen as large as possible.
The length of an input pattern is defined as the length of the input
sequence, starting from the first binary "I" to the last binary "1".
However, as the search time for this algorithm becomes
prohibitively large for large values of S_1 and S_2, a good trade-off
between interleaver performance and search time is obtained for
S_1, S_2 < √(N/2).
When two identical component encoders are employed in a turbo
encoder, it is appropriate to set S1 = S2. In the following analysis,
we assume the two component codes are identical and S1 = S2 = S.
Note that, for S = 1, the S-random interleaver becomes a random
interleaver.
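The S-random construction can be sketched as a greedy search with restarts: each candidate index must differ by more than S from the indices chosen in the previous S output positions. This is a plain illustration of the procedure, not an optimized search, and the names are illustrative:

```python
import random

def s_random_interleaver(N, S, max_attempts=1000):
    """Generate an S-random permutation of {0, ..., N-1}: each new value
    must differ by more than S from the values placed in the previous S
    positions.  Restarts with a fresh shuffle if the search dead-ends."""
    for _ in range(max_attempts):
        available = list(range(N))
        random.shuffle(available)
        pi = []
        ok = True
        for _ in range(N):
            for idx, cand in enumerate(available):
                if all(abs(cand - prev) > S for prev in pi[-S:]):
                    pi.append(available.pop(idx))
                    break
            else:
                ok = False
                break
        if ok:
            return pi
    raise RuntimeError("no S-random interleaver found; reduce S")
```

The search time grows quickly with S, which is why S is usually kept below √(N/2), as noted above.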
For a turbo encoder, an S-random interleaver can break the
input patterns with lengths up to S + 1 and generate high weight
parity check sequences, as explained below.
Let {c} be the set of all input patterns generating an error
event of the component code. The length of an input pattern c is
denoted by l(c), and the weight of the input pattern is denoted by
w(c). If the length of an input pattern is small, it will likely
produce a low weight codeword. Therefore, the interleaver should break
this kind of input pattern. With an S-random interleaver, the input
pattern will be mapped to another sequence c̃. If c̃ is not an error
pattern, we say that the input pattern for the component encoder is
broken. The second encoder will then produce a parity check sequence
of infinite weight (if no trellis termination is performed). Otherwise, if
l(c) ≤ S + 1, then because of the S-constraint, l(c̃) > (w(c) − 1)(S + 1).
As the path length increases, c̃ will likely produce a high weight
parity check sequence. Thus, in both cases, the overall codeword
weight will be high.
Based on the previous discussion, we can conclude that an S-random
interleaver can either break the input patterns with length up to
S + 1 or expand these input patterns into longer error patterns with
length more than (w − 1)(S + 1), where w is the input sequence weight,
no matter what the component code is. Thus S-random interleavers can
achieve better performance than pseudo-random interleavers.
It is worth noting that the S-random interleaver conditions (7.20)
and (7.21) agree with the property of the convolutional interleaver
shown in (7.14) and (7.15) if the parameters n_1 and n_2 of the
convolutional interleaver are chosen in a particular way, such as
n_1 = S_2, n_2 = S_1 + 1. That is to say, these two types of
interleavers are equivalent in the sense of breaking low weight input
patterns.
In addition, dithered golden interleavers have been reported in
[23]. It is shown that they perform quite similarly to S-random
interleavers for low rate turbo codes, but better for high rate
punctured turbo codes.
These input patterns make significant contributions to the error
probability for the overall code at high SNR's. In the interleaver
design, we make sure that these significant input patterns are broken
so that they do not appear after interleaving. Such an interleaver
eliminates the first several spectral lines of the original distance
spectrum and increases the overall turbo code Hamming distance.
Consequently, the code performance at high SNR's is improved and the
error floor is lowered.
(7.22)
Note that, for recursive systematic convolutional component codes,
the minimum input weight resulting in a finite weight error pattern
is two. Throughout the section we use an example of a turbo code
with generator matrix (1, 21/37)(oct) for both component codes.
v = (00···0011001100···00) (7.24)
(7.26)

|i − j| mod 5 = 0

|π(i) − π(j)| mod 5 = 0   (7.28)

(7.29)
(Diagram: a weight-2 input pattern c_2 = ···00100···00100··· with
spacing 5t_1 between positions i and j is mapped by the interleaver to
a weight-2 pattern with spacing 5t_2 between positions π(i) and π(j).)
whenever

|i − j| mod 5 = 0   (7.31)

In fact, it is not possible to break all of these patterns. For an
unbroken input pattern, in order to maximize the overall codeword
weight, we require maximizing the value of

(7.33)

where τ_1 and τ_2 are integers with τ_2 > τ_1 + 5t_1.
c_4 = ··· 0 0 1 0 ··· 0 1 0 0 ··· 0 0 1 0 ··· 0 1 0 0 ···
the input sequence will generate a low weight codeword. The overall
codeword weight is
(7.35)
where

t_2 = |i_2 − j_2| / 5
t_3 = |π(i_1) − π(i_2)| / 5
t_4 = |π(j_1) − π(j_2)| / 5

A code matched interleaver should avoid the crossing mapping of two
single weight-2 input patterns illustrated in Fig. 7.12, or maximize
the value of min(t_1 + t_2 + t_3 + t_4) over the unbroken input
patterns.
From the above discussion we can conclude that the most significant
low weight input patterns are those of weight 2 and combinations of
weight-2 patterns. In the interleaver design, we only need to break
these particular input patterns, which makes the interleaver design
computationally efficient.
The code matched interleaver proposed in [9] is a modified
S-random interleaver. The modification consists in breaking the most
significant low weight error patterns. The weight of the input
sequence to be broken is denoted by w. The design procedure is
summarized as follows:
1. Select an integer randomly from the set A = {1, 2, 3, ... , N}.
The selected integer represents the position of the interleaver
output.
4. If no integer from the set A can satisfy both Step 2 and Step
3 simultaneously, for a specified number of iterations, then
reduce S by 1 and start the search again.
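Steps 2 and 3 of the published procedure are not reproduced above; the sketch below condenses them into a single acceptance test that enforces the S-constraint and rejects mappings that preserve short weight-2 input patterns whose separations are multiples of the component code period (period 5 corresponds to the feedback polynomial 37, as in the example of this section). The function names, the pattern-length cut-off of two periods, and the restart policy are illustrative assumptions:

```python
import random

def code_matched_interleaver(N, S, period=5, max_attempts=200):
    """Modified S-random search sketch: accept a candidate value only if
    it keeps the S-constraint and breaks short weight-2 patterns."""
    def acceptable(cand, pi):
        pos = len(pi)
        for p, q in enumerate(pi):
            if pos - p <= S and abs(cand - q) <= S:
                return False            # S-constraint violated
            if 0 < pos - p <= 2 * period and (pos - p) % period == 0 \
                    and abs(cand - q) % period == 0:
                return False            # weight-2 pattern not broken
        return True

    for _ in range(max_attempts):
        available = list(range(N))
        random.shuffle(available)
        pi = []
        while available:
            for idx, cand in enumerate(available):
                if acceptable(cand, pi):
                    pi.append(available.pop(idx))
                    break
            else:
                break                   # dead end; restart
        if len(pi) == N:
            return pi
    raise RuntimeError("search failed; reduce S as in Step 4")
```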
Fig. 7.13: BER performance of the 4-state, rate 1/3, (1, 5/7) turbo
code with random, S-random and code matched interleavers on an AWGN
channel
Fig. 7.14: BER performance of the 8-state, rate 1/3, (1, 17/15) turbo
code with random, S-random and code matched interleavers on an AWGN
channel
(7.36)
Fig. 7.15: BER performance of the 16-state, rate 1/3, (1, 33/31) turbo
code with random, S-random and code matched interleavers on an AWGN
channel
Fig. 7.16: BER performance of the 16-state, rate 1/3, (1, 33/31) turbo
code with S-random (S = 15) and cyclic shift (m = 16, B = 1 and B = 2)
interleavers on an AWGN channel
The interleaving vector of an S-random interleaver must be stored in
memory for both the turbo encoder and decoder. However, for a cyclic
shift interleaver, the interleaved or deinterleaved sequence can be
generated from the interleaving matrix based on cyclic shifts, so
there is no need to store the interleaving vector. Therefore, cyclic
shift interleavers reduce the memory requirement and are easy to
implement.
It is worth noting that the cyclic shift interleavers examined in
Fig. 7.16 have not been designed for a particular turbo code. For a
given turbo code, we could design a code matched interleaver based
on the cyclic shifts to further improve the turbo code performance
at high SNR's.
Bibliography
[18] M. Oberg and P. H. Siegel, "Lowering the error floor for turbo
codes," in Proc. Int. Symposium on Turbo Codes and Related
Topics, Brest, France, Sep. 1997, pp. 204-207.
8 Turbo Coding for Fading Channels

8.1 Introduction
In the previous chapters we have discussed the design, performance
analysis and iterative decoding of turbo codes for Gaussian chan-
nels. It is observed that turbo codes can achieve a remarkably low
bit error rate with iterative decoding at a signal-to-noise ratio close
to the Shannon capacity limit on AWGN channels.
However, on many real links, such as radio, satellite and mobile
channels, transmission errors are mainly caused by variations in
received signal strength, referred to as fading. Fading severely
degrades the digital transmission performance, and powerful error
control coding techniques are needed to reduce the penalty in
signal-to-noise ratio.
In this chapter, we consider turbo and serial concatenated code
performance on fading channels. Analytical average upper bounds
for turbo and serial concatenated code performance based on the
union bound and code weight distributions are derived for indepen-
dent fading channels. For decoding, we consider the iterative MAP
and SOVA decoding methods with channel state information. The
turbo and serial concatenated code performance over independent
and correlated fading channels is discussed.
where v is the vehicle speed and c is the speed of light. The
Doppler shift effect in a multipath propagation environment spreads
the bandwidth of the multipath waves into the range f_c ± f_{d,max},
where f_{d,max} is the maximum Doppler shift, given by

f_{d,max} = v f_c / c   (8.2)
(8.4),  a ≥ 0

(8.5),  a < 0
(Figure: the Rayleigh probability density function p(a) for 0 ≤ a ≤ 3.)
p(a) = (a/σ²) exp(−(a² + D²)/(2σ²)) I_0(aD/σ²),  a ≥ 0   (8.9)
p(a) = 0,  a < 0
236 Turbo Coding for Fading Channels
where D² is the direct signal power and I_0(·) is the zero-order
modified Bessel function of the first kind.
By assuming that the total average signal power is normalized
to unity, the pdf in (8.9) becomes

p(a) = 2a(1 + K) exp(−K − (1 + K)a²) I_0(2a√(K(1 + K))),  a ≥ 0   (8.11)

where K is the ratio of the direct to the diffuse signal power. Small
values of K indicate a severely faded channel. For K = 0, there is no
direct signal component and the Rician pdf becomes a Rayleigh pdf. On
the other hand, large values of K indicate a slightly faded channel.
For K approaching infinity there is no fading at all, resulting in an
AWGN channel. The Rician distributions for various K are shown in
Fig. 8.2.
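For simulation, a normalized Rician amplitude consistent with the unit average power assumption can be drawn as the envelope of a complex Gaussian with a direct component of power K/(K+1) and diffuse power 1/(K+1), so that E[a²] = 1. A sketch (function name illustrative):

```python
import math, random

def rician_sample(K):
    """Draw a fading amplitude from a normalized Rician distribution:
    a fixed direct component plus a circular Gaussian diffuse component,
    scaled so that E[a^2] = 1."""
    s = math.sqrt(K / (K + 1.0))               # direct (specular) part
    sigma = math.sqrt(1.0 / (2.0 * (K + 1.0))) # per-dimension diffuse part
    x = s + random.gauss(0.0, sigma)
    y = random.gauss(0.0, sigma)
    return math.hypot(x, y)
```

K = 0 gives Rayleigh samples; large K concentrates the amplitude near 1, approaching the AWGN case.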
(Fig. 8.2: The Rician probability density functions for K = 0, 2, 5
and 10.)
where

Φ(r) = [ ∫_a p(a) p(r | x = −1, a) da ] / [ ∫_a p(a) p(r | x = +1, a) da ]   (8.16)
The channel capacity of (8.13) and (8.15) can be calculated by
numerical integration. The results from [6] are plotted in Fig. 8.3.
From the figure we can see that the penalty in E_s/N_0 is in excess of
0.8 ~ 1 dB when no CSI is utilized, relative to the case with perfect
CSI, on Rayleigh fading channels for codes of rate 1/4 to 1/2.
(Fig. 8.3: Channel capacity of Rayleigh fading channels with BPSK
signalling, with ideal CSI and with no CSI, as a function of E_s/N_0.)
R (1 − H_b(P_b(e))) ≤ C   (8.17)

where R is the code rate and H_b(P_b(e)) is the entropy of a binary
source with probability P_b(e), given by

H_b(P_b(e)) = −P_b(e) log_2 P_b(e) − (1 − P_b(e)) log_2 (1 − P_b(e))   (8.18)
Eqs. (8.17), (8.13) and (8.15) determine the channel capacity bounds
for fading channels. Due to the complexity of the channel capacity
expressions in (8.13) and (8.15), it is not possible to compute
analytically the pairs (P_b(e), E_b/N_0) satisfying (8.17) with
equality. However, by letting the code rate equal the channel
capacity, we can obtain the smallest E_b/N_0 such that an arbitrarily
small error probability is achievable. For codes with rates 1/2 and
1/3, the capacity limits for BPSK signalling on independent Rayleigh
fading channels are given in Table 8.1. These values can be used
to indicate the channel capacity boundary of the Rayleigh fading
channels.
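Given a code rate R and a channel capacity value C (obtained, for example, by the numerical integration mentioned above), the smallest bit error probability consistent with (8.17) can be found by inverting the binary entropy of (8.18) numerically. A sketch with illustrative names; the capacity value itself is assumed given:

```python
import math

def binary_entropy(p):
    """H_b(p) of Eq. (8.18), in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def min_ber_at_capacity(R, C):
    """Smallest bit error probability satisfying R(1 - H_b(p)) <= C
    (Eq. (8.17)), found by bisection on the increasing entropy
    function over [0, 1/2]."""
    if C >= R:
        return 0.0                      # operating at or below capacity
    target = 1.0 - C / R                # required entropy
    lo, hi = 0.0, 0.5
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if binary_entropy(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```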
(Diagram: system model — the transmitted BPSK signal is multiplied by
the fading amplitude a, corrupted by additive noise n, and the
received signal r is BPSK demodulated and passed to the channel
decoder and the data sink.)
or equivalently

Σ_{i=1}^{n} |r_i − a_i x_i|² ≤ Σ_{i=1}^{n} |r_i − a_i x̂_i|²   (8.21)
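The decision metric of (8.21) weights each candidate symbol by the fading amplitude before measuring the distance to the received sample, which is where ideal channel state information enters. A minimal sketch (name illustrative):

```python
def ml_metric(r, x, a):
    """Decision metric of Eq. (8.21): with ideal CSI, scale each candidate
    symbol x_i by the fading amplitude a_i before taking the squared
    Euclidean distance to the received sample r_i."""
    return sum((ri - ai * xi) ** 2 for ri, xi, ai in zip(r, x, a))
```

The ML decision selects the candidate codeword with the smallest metric.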
(8.25)
(8.26)
(8.27)
in (8.25) we obtain
(8.28)
E[exp(−w a²)] = (1 + K)/(1 + K + w) · exp(−wK/(1 + K + w)),  w > 0   (8.30)
P(x_n, x̂_n) ≤ (1/2) [ (1 + K)/(1 + K + R E_b/N_0) ]^d exp( − (K d R E_b/N_0)/(1 + K + R E_b/N_0) )   (8.31)
(8.34)

and is given by

P(x_n, x̂_n) ≤ E[P(x_n, x̂_n | a_n)]   (8.37)
As the fading amplitudes a = (a_1, a_2, …, a_n) are independent
identically distributed random variables, for large Hamming distance d
(larger than 5), according to the central limit theorem, the random
variable A is approximately Gaussian with mean

m_A = d m_a   (8.38)

(8.40)

For turbo codes d is typically very large, so the above bound holds in
most cases.
B_d = Σ_{w+z=d} (w/k) A_{w,z}   (8.42)
The error coefficient B_d is the average number of bit errors
caused by the transitions between the all-zero codeword and codewords
of weight d (d ≥ d_min), where d_min is the code minimum distance.
The bit error probability of the code decoded by a maximum-likelihood
algorithm over a fading channel can be upper-bounded by

P_b(e) ≤ Σ_{d ≥ d_min} B_d P_d(x_n, x̂_n)   (8.43)

where P_d(x_n, x̂_n) is the pairwise error probability when the
Hamming distance between the two sequences x_n and x̂_n is d.
Therefore, the bit error probability upper-bound on an indepen-
dent Rician fading channel is given by
(8.44)
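Combining the error coefficients B_d of (8.42)-(8.43) with a Chernoff-type pairwise bound of the form in (8.31) gives a computable bound. The sketch below assumes that form of the pairwise bound, and the distance spectrum passed in is an illustrative placeholder, not the book's exact evaluation:

```python
import math

def pairwise_bound(d, K, R, ebno):
    """Chernoff-type bound on the pairwise error probability at Hamming
    distance d on an independent Rician channel with ideal CSI,
    following the form of Eq. (8.31); ebno is E_b/N_0 in linear units."""
    g = R * ebno
    return 0.5 * ((1 + K) / (1 + K + g)) ** d * math.exp(-K * d * g / (1 + K + g))

def ber_union_bound(spectrum, K, R, ebno_db):
    """Union bound of Eq. (8.43): sum of error coefficients B_d times the
    pairwise bounds.  `spectrum` is a dict {d: B_d}; real coefficients
    would come from the code's weight distribution."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return sum(Bd * pairwise_bound(d, K, R, ebno) for d, Bd in spectrum.items())
```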
where A^o_{w,l} and A^i_{l,d} are the corresponding input-output
weight enumerating coefficients for the equivalent block codes of the
outer and inner codes, respectively, w is the input information weight
of the outer code, l is the weight of the output sequence of the outer
code and d is the weight of the output sequence of the inner code.
In the evaluation, we consider a turbo and a serial code with
the same code rate R = 1/3 and the memory order v = 2. For fair
comparison of the turbo and the serial code, the interleaver size
should be chosen in such a way that the decoding delay due to the
interleaver is the same. In other words, the input information block
sizes for both codes should be the same. We chose an information
size of N = 100 in the performance evaluation. For the turbo code,
the generator matrix of the component codes is
G(D) = [ 1   (1 + D²)/(1 + D + D²) ]
The bit error probability upper-bounds for the turbo code with
variable K are illustrated in Fig. 8.5 for the case of ideal channel
state information and in Fig. 8.6 for the case of no channel state
information. For the serial code, the generator matrix of the outer
code is
The bit error probability upper-bounds for the serial code with
variable K are illustrated in Fig. 8.7 for the case of ideal channel
state information and in Fig. 8.8 for the case of no channel state
information. From the figures, we can see that for the Rayleigh
fading channel (K = 0), the error performance can be improved by
approximately 1.5 dB at a bit error rate of 10^-5 when ideal channel
state information is utilized, relative to the case with no channel
state information. This improvement decreases as K increases. When
K → ∞, the gain from channel state information disappears.
In order to compare the performance of the turbo and serial
codes on Rayleigh fading channels, we show the distance spectra
and the bit error probability upper bounds for both codes in Figs.
8.9 and 8.10, respectively.
From Fig. 8.9 we can see that the serial code has smaller error
coefficients than the turbo code at low to medium Hamming dis-
tances. Since these distance spectral lines dominate the code error
performance in the region of high SNR's [9], the serial code can
achieve better performance than the turbo code on fading channels
at high SNR's as illustrated in Fig. 8.10. It may also be observed
from Fig. 8.9 that for medium distance values, the error coefficients
of both codes are almost the same, which will result in almost the
same error performance at low SNR's, as the code performance in
this region is determined by the medium distance spectral lines [9].
Fig. 8.5: Bit error probability upper bound for the 4 state, rate 1/3
turbo code with interleaver size 100 on independent Rician fading
channels with ideal channel state information. The curves are for
Rician channels with K = 0, 2, 5, 50, starting from the top, with the
bottom one referring to an AWGN channel.
Fig. 8.6: Bit error probability upper bound for the 4 state, rate 1/3
turbo code with interleaver size 100 on independent Rician fading
channels without channel state information. The curves are for Rician
channels with K = 0, 2, 5, 50, starting from the top, with the bottom
one referring to an AWGN channel.
(x_{1,0}, x_{1,1}, \ldots, x_{1,n-1}, x_{2,0}, x_{2,1}, \ldots, x_{N,0}, x_{N,1}, \ldots, x_{N,n-1})
Fig. 8.8: Bit error probability upper bound for the 4 state, rate 1/3 serial
code with information size 100 on independent Rician fading channels
without channel state information. The curves are for Rician channels
with K =0, 2, 5, 50, starting from the top, with the bottom one referring
to an AWGN channel.
Fig. 8.9: Distance spectrum comparison of the 4 state, rate 1/3 turbo
and serial concatenated codes with information size 100
\gamma_t^{(i)}(\mathbf{r}_t) = p_t(i)\,\exp\left(-\frac{\sum_{j=0}^{n-1}\left(r_{t,j} - a_{t,j}\, x_{t,j}^i\right)^2}{2\sigma^2}\right), \quad i = 0, 1 \qquad (8.47)
where the first term is the a priori information obtained from the other decoder, the second term is the systematic information generated by the code information bit, and \Lambda_e(c_t) is the extrinsic information.
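As a rough numerical sketch of the branch transition metric of Eq. (8.47) (all values below are made up for illustration), with channel state information each reference symbol is scaled by the known fading amplitude, while without CSI the amplitudes are unknown and replaced here by their assumed unit mean:

```python
import math

def gamma_metric(r, x, p_i, sigma, a=None):
    """Branch transition metric p_t(i) * exp(-sum_j (r_j - a_j*x_j)^2 / (2*sigma^2)).
    With CSI, a holds the fading amplitudes a_{t,j}; without CSI we
    substitute unit amplitudes (a simplifying assumption)."""
    if a is None:
        a = [1.0] * len(r)
    se = sum((rj - aj * xj) ** 2 for rj, aj, xj in zip(r, a, x))
    return p_i * math.exp(-se / (2.0 * sigma ** 2))

r = [0.9, -0.4, 1.1]   # received samples for one trellis branch
x = [1.0, -1.0, 1.0]   # hypothesised BPSK code symbols
a = [0.8, 0.5, 1.2]    # fading amplitudes, known only with CSI

# the faded hypothesis matches the channel better than the unfaded one
print(gamma_metric(r, x, 0.5, 1.0, a) > gamma_metric(r, x, 0.5, 1.0))  # True
```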
Fig. 8.10: Bit error probability upper bound comparison of the 4 state,
rate 1/3 turbo and serial concatenated codes with information size 100
on independent Rayleigh fading channels
\Lambda(c_t) = \log\left[\frac{P\{c_t = 1 \mid \mathbf{r}\}}{P\{c_t = 0 \mid \mathbf{r}\}}\right] = \log\left[\frac{P\{x_{t,0} = +1 \mid \mathbf{r}\}}{P\{x_{t,0} = -1 \mid \mathbf{r}\}}\right] = \mu_t^{-1} - \mu_t^{1} \qquad (8.49)
where \mu_t^{-1} is the minimum path metric corresponding to x_{t,0} = -1 and \mu_t^{1} is the minimum path metric corresponding to x_{t,0} = 1. The path metric and the SOVA output have the additive property. Following the derivation for the MAP algorithm, we can split the soft output into two parts, the intrinsic information \Lambda_i(c_t) and the extrinsic information \Lambda_e(c_t), as follows
\Lambda(c_t) = \Lambda_i(c_t) + \Lambda_e(c_t) \qquad (8.50)
The intrinsic information, \Lambda_i(c_t), is given by
(8.52)
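The metric-difference form of Eq. (8.49) can be sketched as follows (the path metrics are made-up numbers; a smaller metric means a more likely path):

```python
def sova_llr(metrics_minus, metrics_plus):
    """Soft output of Eq. (8.49): minimum path metric over paths with
    x_{t,0} = -1 minus the minimum over paths with x_{t,0} = +1."""
    return min(metrics_minus) - min(metrics_plus)

# hypothetical accumulated path metrics at time t
llr = sova_llr([4.2, 5.0], [1.3, 2.8])
print(llr > 0)  # True: positive LLR, so the bit is decided as 1
```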
Fig. 8.11: Performance comparison of MAP and SOVA, with and without CSI, for the 16 state, rate 1/3 turbo code on an independent Rayleigh fading channel, information size 1024, the number of iterations 8
(Curves: turbo N = 1024, serial N = 1024, turbo N = 4096, serial N = 4096)
Fig. 8.12: Performance comparison for the 4 state, rate 1/3 turbo and
serial codes on an independent Rayleigh fading channel
Fig. 8.13: Performance comparison of MAP and SOVA, with and without CSI, for the 16 state, rate 1/3 turbo code on a correlated Rayleigh fading channel, the fading rate normalized by the symbol rate is 10^{-2}, information size 1024, the number of iterations 8
Fig. 8.14: Performance comparison for the turbo and serial codes on a correlated Rayleigh fading channel, the fading rate normalized by the symbol rate is 10^{-2}, information size N, the number of iterations I
We can see from the figure that the serial concatenated code
consistently outperforms the turbo code on correlated fading chan-
nels. At a BER of 10-5 , the corresponding improvement achieved
by the serial concatenated code relative to the turbo code is 1.25 dB
and 1.2 dB for the information size of 1024 and 8192, respectively.
9.1 Introduction
The effectiveness of trellis coded modulation (TCM) techniques,
proposed by Ungerboeck in 1982 [1], for communication over band-
width limited channels is well established. TCM schemes have been
applied to telephone, satellite and microwave digital radio channels,
where coding gains of the order of 3-6dB are obtained with no loss
of bandwidth or data rate.
Turbo codes can achieve a remarkable error performance at low signal-to-noise ratios, close to the Shannon capacity limit. However, these powerful binary coding schemes are not by themselves suitable for bandwidth limited communication systems.
In order to achieve simultaneously large coding gains and high bandwidth efficiency, a general method is to combine turbo codes with trellis coded modulation. In this chapter we discuss several
bandwidth efficient turbo coding schemes, their encoding/decoding
algorithms and performance on Gaussian and flat fading channels.
For Gaussian channels, turbo coded modulation techniques can
be broadly classified into binary schemes and turbo trellis coded
modulation. The first group can further be divided into "pragmatic" schemes with a single component binary turbo code and multilevel binary turbo codes. Turbo trellis coded modulation can be based on either parity symbol puncturing or systematic symbol puncturing, both of which are discussed later in this chapter.
(Block diagrams: at the transmitter, a binary turbo encoder is followed by a bit interleaver and a signal mapper producing the I and Q components; at the receiver, a demodulator and LLR computation feed a bit deinterleaver and the binary turbo decoder.)
P = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \qquad (9.1)
This leads to a rate 1/2 turbo code. Let us denote the four encoded bits at time t by v_{t,i}, i = 1, 2, 3, 4. They are mapped to a 16-QAM signal point with Gray mapping, as shown in Fig. 9.3.
Fig. 9.3: 16-QAM with Gray mapping
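A Gray-mapped 16-QAM mapper in the spirit of Fig. 9.3 can be sketched as below; the exact assignment of bit pairs to amplitude levels is an assumption, what matters is that adjacent levels differ in a single bit:

```python
# Gray code on two bits -> amplitude level; adjacent levels differ in one bit
GRAY2 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def map_16qam(v1, v2, v3, v4):
    """Map four coded bits to a 16-QAM point: (v1, v2) select the
    in-phase level and (v3, v4) the quadrature level."""
    return complex(GRAY2[(v1, v2)], GRAY2[(v3, v4)])

print(map_16qam(0, 0, 1, 1))  # (-3+1j)
```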
\Lambda(v_{t,i}) = \log \frac{\sum_{x \in S_i^{(1)}} \exp\left(-\frac{(r_t - x)^2}{2\sigma^2}\right)}{\sum_{x \in S_i^{(0)}} \exp\left(-\frac{(r_t - x)^2}{2\sigma^2}\right)}, \qquad r_t = \begin{cases} r_{t,I}, & i = 1, 2 \\ r_{t,Q}, & i = 3, 4 \end{cases} \qquad (9.6)
where S_i^{(b)} denotes the set of signal levels whose ith label bit equals b.
(Block diagram: the information sequence c enters a serial/parallel converter that splits it into the subsequences c^1, c^2, ..., fed to separate binary turbo encoders; their outputs v^1, v^2, ... are combined by a signal mapper into the transmitted signal x.)
The sequence is split into l blocks. The ith block sequence is given by
(9.7)
where k_i is the length of the ith block and
(9.8)
(9.9)
(Block diagrams: the corresponding multilevel decoder with component turbo decoders D1 and D2, and the turbo trellis coded modulation encoder, in which two rate k/(k+1) component encoders with signal mappers are connected through a symbol interleaver and deinterleaver, and a selector chooses the symbol transmitted over the channel.)
x_t = x_{t,I} + j\, x_{t,Q}
As an example, let the information sequence be
c = (00, 01, 11, 10, 00, 11)
It is encoded by the upper encoder into the modulated sequence
x^u = (0, 2, 7, 5, 1, 6)
The information sequence is interleaved by a symbol interleaver and encoded into the modulated sequence by the lower encoder, denoted by x^l,
x^l = (6, 7, 0, 3, 0, 4)
After deinterleaving this sequence becomes
x^l = (0, 3, 6, 4, 0, 7)
The selector alternately connects the upper and lower encoder to
the channel, transmitting the first symbol from the upper encoder,
Fig. 9.7: Example of a turbo trellis coded 8-PSK with parity symbol
puncturing
the second symbol from the lower encoder, the third symbol from
the upper encoder, the fourth symbol from the lower encoder, etc.
The transmitted sequence for this example is
x = (0,3,7,4,1,7)
Thus the parity symbol is alternately chosen from the upper and
lower encoder. Each information group appears in the transmitted
sequence only once.
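The alternating selector can be reproduced directly from the example sequences above:

```python
def select_alternately(upper, lower):
    """Parity-symbol puncturing: transmit the first, third, fifth, ...
    symbols from the upper encoder and the second, fourth, sixth, ...
    from the deinterleaved lower encoder."""
    return [u if t % 2 == 0 else l
            for t, (u, l) in enumerate(zip(upper, lower))]

x_upper = [0, 2, 7, 5, 1, 6]   # upper-encoder 8-PSK symbols (book example)
x_lower = [0, 3, 6, 4, 0, 7]   # lower-encoder symbols after deinterleaving

print(select_alternately(x_upper, x_lower))  # [0, 3, 7, 4, 1, 7]
```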
For this coded modulation approach, the component code should
be chosen with no parallel transitions so that each information bit
\Lambda(c_t = i) = \log \frac{\Pr\{c_t = i \mid \mathbf{r}\}}{\Pr\{c_t = 0 \mid \mathbf{r}\}} = \log \frac{\sum_{(l',l) \in B_t^i} \alpha_{t-1}(l')\, \gamma_t^i(l', l)\, \beta_t(l)}{\sum_{(l',l) \in B_t^0} \alpha_{t-1}(l')\, \gamma_t^0(l', l)\, \beta_t(l)} \qquad (9.12)
where i denotes an information group from the set {0, 1, 2, ..., 2^k − 1}, B_t^i is the set of state transitions (l', l) labeled by the input group i, and the probabilities α_t(l), β_t(l) and γ_t^i(l', l) can be computed recursively [3]. The symbol i with the largest log-likelihood ratio
in Eq. (9.12), i ∈ {0, 1, 2, ..., 2^k − 1}, is chosen as the hard decision output.
The MAP algorithm requires exponentiation and multiplication operations to compute the log-likelihood ratio \Lambda(c_t). One way of simplifying it is to work with the logarithms of the probabilities \alpha_t(l), \beta_t(l) and \gamma_t(l', l), denoted by \bar{\alpha}_t(l), \bar{\beta}_t(l) and \bar{\gamma}_t(l', l), respectively.
The forward recursive variables can be computed as follows
\bar{\alpha}_t(l) = \log \sum_{l'=0}^{M_s - 1} e^{\bar{\alpha}_{t-1}(l') + \bar{\gamma}_t(l', l)} \qquad (9.13)
with the initial conditions
\bar{\alpha}_0(0) = 0, \qquad \bar{\alpha}_0(l) = -\infty, \quad l \neq 0
and the backward recursive variables can be computed as
\bar{\beta}_t(l) = \log \sum_{l'=0}^{M_s - 1} e^{\bar{\beta}_{t+1}(l') + \bar{\gamma}_{t+1}(l, l')} \qquad (9.14)
with the boundary condition \bar{\beta}_\tau(0) = 0.
where r_{t,I} and r_{t,Q} are the channel output in-phase and quadrature components at time t, respectively, x_{t,I}^i(l) and x_{t,Q}^i(l) are the modulated in-phase and quadrature components at time t, associated with the transition S_{t-1} = l' to S_t = l and input c_t = i, respectively, and p_t(i) is the a priori probability of c_t = i.
The \bar{\alpha}_t(l) and \bar{\beta}_t(l) in Eqs. (9.13) and (9.14) can be calculated using the Jacobian algorithm.
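The Jacobian logarithm evaluates log(e^a + e^b) without explicitly summing large exponentials; iterating it pairwise gives the log-sums needed by Eqs. (9.13) and (9.14). A minimal sketch:

```python
import math

def max_star(a, b):
    """Jacobian logarithm: log(e^a + e^b) = max(a, b) + log(1 + e^(-|a-b|))."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def log_sum(values):
    """log of a sum of exponentials, applied pairwise via max_star."""
    acc = values[0]
    for v in values[1:]:
        acc = max_star(acc, v)
    return acc

exact = math.log(sum(math.exp(v) for v in [1.0, 2.0, 3.0]))
print(abs(log_sum([1.0, 2.0, 3.0]) - exact) < 1e-12)  # True
```

Dropping the correction term log(1 + e^(-|a-b|)) turns this into the Max-Log approximation.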
can be split into three terms. They are the a priori information gen-
erated by the other decoder, the systematic information generated
by the code information bit and the extrinsic information generated
by the code parity check digit. The extrinsic information is inde-
pendent of the a priori and systematic information. The extrinsic
information is exchanged between the two component decoders. In
contrast to binary turbo codes, in turbo TCM we cannot separate
the influence of the information and the parity-check components
within one symbol. The systematic information and the extrinsic
information are not independent. Thus both systematic and ex-
trinsic information will be exchanged between the two component
decoders. The joint extrinsic and systematic information of the first
Log-MAP decoder, denoted by \Lambda_{1,es}(c_t = i), can be obtained as
However, for every even received signal, the decoder receives the
punctured symbol in which the parity digit is generated by the
other encoder. The decoder in this case ignores this symbol by set-
ting the branch transition metric to zero. The only input at this
step in the trellis is the a priori component obtained from the other
decoder. This component contains the systematic information.
where \mu^0 is the minimum path metric for c_t = 0, and \mu^i is the minimum path metric for c_t = i.
We define the branch metric, assigned to a trellis branch at time t, as
\nu_t = \frac{(r_{t,I} - x_{t,I})^2 + (r_{t,Q} - x_{t,Q})^2}{2\sigma^2} - \log p_t(i) \qquad (9.20)
where r_{t,I} and r_{t,Q} are the channel outputs of the in-phase and quadrature components at time t, respectively, x_{t,I} and x_{t,Q} are the modulated in-phase and quadrature components of the branch, generated by input c_t = i, respectively, and p_t(i) is the a priori probability of c_t = i. The path metric, for a path x in the trellis, can be computed as
\mu_x = \sum_{t=1}^{T} \nu_t^x \qquad (9.21)
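A sketch of the branch and path metrics, under the assumption (consistent with a minimum-metric SOVA) that the branch metric is the squared Euclidean distance scaled by 2σ² minus the log a priori term:

```python
import math

def branch_metric(r_i, r_q, x_i, x_q, p_i, sigma):
    """Assumed branch metric: squared distance between the received and
    modulated I/Q components over 2*sigma^2, minus log p_t(i).
    A smaller metric means a more likely branch."""
    dist2 = (r_i - x_i) ** 2 + (r_q - x_q) ** 2
    return dist2 / (2.0 * sigma ** 2) - math.log(p_i)

def path_metric(branch_metrics):
    """Path metric: sum of branch metrics along the path."""
    return sum(branch_metrics)

# a perfectly matching branch with a certain prior has zero metric
print(branch_metric(1.0, 0.0, 1.0, 0.0, 1.0, 1.0))  # 0.0
```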
\Lambda_{1,es}(c_t = i) = \Lambda_1(c_t = i) - \log \frac{p_t(i)}{p_t(0)} \qquad (9.22)
where \Lambda_1(c_t = i) is the soft output of the first decoder and \log \frac{p_t(i)}{p_t(0)} is the a priori information for c_t = i. The joint extrinsic and systematic information is used as the a priori probability at the next decoding stage as
(9.23)
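Eq. (9.22) can be sketched as a per-symbol subtraction of the a priori term from the soft output; the numbers below are illustrative only:

```python
import math

def joint_extrinsic_systematic(soft_out, priors):
    """Eq. (9.22): subtract the a priori information log(p_t(i)/p_t(0))
    from the decoder soft output, for every symbol i."""
    return {i: soft_out[i] - math.log(priors[i] / priors[0])
            for i in soft_out}

soft = {0: 0.0, 1: 1.8, 2: -0.5, 3: 0.7}   # decoder soft outputs
p = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}   # uniform a priori
print(joint_extrinsic_systematic(soft, p) == soft)  # True: nothing to remove
```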
(Curves: N = 1024, I = 8; N = 4096, I = 18)
Fig. 9.9: Performance of the rate 2/3, 4-state turbo trellis coded 8-PSK with various interleaver sizes on an AWGN channel, SOVA decoding algorithm, the number of iterations I, bandwidth efficiency 2 bits/s/Hz
(Curves: N = 1024, I = 8; N = 4096, I = 18)
Fig. 9.10: Performance of the rate 2/3, 8-state turbo trellis coded 8-PSK with various interleaver sizes on an AWGN channel, SOVA decoding algorithm, the number of iterations I, bandwidth efficiency 2 bits/s/Hz
for a low number of states and increases with the growing number of states, reaching a value of about 0.3 dB at the BER of 10^{-5} for the 16-state code, as shown in Fig. 9.15.
(Curves: N = 1024, I = 8; N = 4096, I = 18)
Fig. 9.11: Performance of the rate 2/3, 16-state turbo trellis coded 8-PSK with various interleaver sizes on an AWGN channel, SOVA decoding algorithm, the number of iterations I, bandwidth efficiency 2 bits/s/Hz
Fig. 9.12: Performance comparison of the Log-MAP and SOVA for the rate 2/3, 8-state turbo trellis coded 8-PSK with interleaver size 1024 on an AWGN channel, bandwidth efficiency 2 bits/s/Hz
Fig. 9.13: Performance comparison of the Log-MAP and SOVA for the rate 3/4, 4-state turbo trellis coded 16-QAM with various interleaver sizes on an AWGN channel, bandwidth efficiency 3 bits/s/Hz
Fig. 9.14: Performance comparison of the Log-MAP and SOVA for the rate 3/4, 8-state turbo trellis coded 16-QAM with various interleaver sizes on an AWGN channel, bandwidth efficiency 3 bits/s/Hz
Fig. 9.15: Performance comparison of the Log-MAP and SOVA for the rate 3/4, 16-state turbo trellis coded 16-QAM with various interleaver sizes on an AWGN channel, bandwidth efficiency 3 bits/s/Hz
Fig. 9.16: Turbo trellis coded 16-QAM with systematic symbol puncturing
(Figure: BER versus Eb/No comparison of the turbo TCM scheme, N = 2×16384, and the pragmatic turbo code, N = 3276.)
Fig. 9.18: Turbo trellis coded 8-PSK with systematic symbol puncturing
Fig. 9.19: Performance of the turbo trellis coded 8-PSK with systematic
symbol puncturing with bandwidth efficiency 2 bits/s/Hz and interleaver
size 16384 on an AWGN channel, the number of iterations I
\mathbf{c} = (c_1, c_2, \ldots, c_N) \qquad (9.27)
(9.28)
(Curves: N = 1024, I = 8; N = 4096, I = 8)
Fig. 9.21: Performance of the I-Q turbo coded 16-QAM with bandwidth
efficiency 2 bits/s/Hz and various interleaver sizes on a Rayleigh fading
channel
(Figure: BER comparison of the I-Q turbo code, N = 4096, and the pragmatic turbo code.)
Applications of Turbo Codes
G(D) = \left[1 \quad \frac{1 + D + D^2 + D^3 + D^4}{1 + D^3 + D^4}\right]
The rate 1/2 RSC code is the same as the component code in the
rate 1/3 turbo encoder.
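A minimal encoder sketch for the 16-state rate 1/2 RSC component code with feedback polynomial 1 + D^3 + D^4 and feedforward polynomial 1 + D + D^2 + D^3 + D^4:

```python
def rsc_encode(bits):
    """Systematic recursive encoder for G(D) = [1, g1(D)/g0(D)] with
    g0(D) = 1 + D^3 + D^4 and g1(D) = 1 + D + D^2 + D^3 + D^4."""
    g_fb = (0, 0, 1, 1)   # coefficients of D^1..D^4 in g0 (D^0 tap implicit)
    g_ff = (1, 1, 1, 1)   # coefficients of D^1..D^4 in g1 (D^0 tap implicit)
    state = [0, 0, 0, 0]  # shift register contents D^1..D^4
    out = []
    for u in bits:
        # feedback bit: input XOR the g0 taps of the register (mod 2)
        fb = u ^ (sum(c & s for c, s in zip(g_fb, state)) & 1)
        # parity bit: feedback bit XOR the g1 taps of the register (mod 2)
        parity = fb ^ (sum(c & s for c, s in zip(g_ff, state)) & 1)
        out.append((u, parity))  # (systematic bit, parity bit)
        state = [fb] + state[:-1]
    return out

print(rsc_encode([1, 0, 0, 0]))  # [(1, 1), (0, 1), (0, 1), (0, 0)]
```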
The rate 1/6 turbo code is obtained by parallel concatenation
of a 16-state rate 1/4 RSC encoder and a 16-state rate 1/3 RSC
encoder whose systematic information bit is eliminated. The generator matrices of the rate 1/4 and 1/3 RSC codes are given by
G(D) = \left[1 \quad \frac{1 + D + D^3 + D^4}{1 + D^3 + D^4} \quad \frac{1 + D + D^2 + D^3 + D^4}{1 + D^3 + D^4}\right]
and
G(D) = \left[\frac{1 + D + D^3 + D^4}{1 + D^3 + D^4} \quad \frac{1 + D + D^2 + D^3 + D^4}{1 + D^3 + D^4}\right]
respectively.
G(D) = \left[1 \quad \frac{1 + D + D^2 + D^3}{1 + D + D^3}\right]
For the rate 1/4 turbo code, the parity digits v_1 from the first component encoder and v_1' from the second component encoder are alternately punctured. For the rate 1/3 turbo code, the parity digits v_2 and v_2' of both encoders are punctured. The rate 1/2 turbo code is generated in a similar way as the punctured rate 1/2 code for the reverse link.
These codes have been adopted for the third generation mobile communication standards developed by the Third Generation Partnership Project (3GPP) [7]. They include an 8-state rate 1/3 and 1/2 turbo code and a 4-state rate 1/3 serial concatenated convolutional code.
The 8-state turbo code is generated by parallel concatenation
of two identical rate 1/2 RSC encoders with generator matrix
G(D) = \left[1 \quad \frac{1 + D + D^3}{1 + D^2 + D^3}\right]
The output of the turbo encoders is punctured to produce coded
digits corresponding to the desired code rate of 1/3 or 1/2. For
rate 1/3, the systematic information bit of the second component
encoder is punctured. For rate 1/2, the parity digits produced
by the two component encoders are alternately punctured. The
encoder structure of the 8-state rate 1/3 turbo code is shown in
Fig. 10.3.
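The rate 1/2 puncturing described above (keep every systematic bit, take the parity bit alternately from the two component encoders) can be sketched as:

```python
def puncture_rate_half(systematic, parity1, parity2):
    """Rate 1/2 output: for each time t emit the systematic bit and one
    parity bit, alternating between the two component encoders."""
    out = []
    for t, s in enumerate(systematic):
        out.append(s)
        out.append(parity1[t] if t % 2 == 0 else parity2[t])
    return out

print(puncture_rate_half([1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 0]))
# [1, 0, 0, 1, 1, 1, 1, 0]
```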
P = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}
Note that trellis termination is performed in both the turbo and
serial concatenated encoders. In the serial encoder, the tailing bits
of the outer encoder are included in the interleaver.