
Turbo codes.

Detailed solutions to problems

Concatenated codes:

7.1) The output codeword of a block code Cb(6,3) generated by the generator matrix G is then input to a convolutional encoder like that seen in the following figure, operating in pseudo-block form. This means that after inputting the 6 bits of the codeword of the block code, two additional zeros (tailing bits) are also input to clear the registers of the convolutional encoder.

    [1 1 0 1 0 0]
G = [0 1 1 0 1 0]
    [1 0 1 0 0 1]

[Figure: rate-1/2 convolutional encoder with two memory registers and outputs c(1) and c(2)]

The transpose of the parity check matrix of the block code is obtained as follows:

G = [P  Ik],   H = [In-k  P^T],   H^T = [In-k]
                                        [ P  ]

    [1 1 0]          [1 0 0]
P = [0 1 1]          [0 1 0]
    [1 0 1]    H^T = [0 0 1]
                     [1 1 0]
                     [0 1 1]
                     [1 0 1]

The syndrome-error pattern table is:

e S
1 0 0 0 0 0 1 0 0
0 1 0 0 0 0 0 1 0
0 0 1 0 0 0 0 0 1
0 0 0 1 0 0 1 1 0
0 0 0 0 1 0 0 1 1
0 0 0 0 0 1 1 0 1
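The parity-check relations and the single-error syndrome table above can be reproduced in a few lines. This is a sketch in plain Python, with the matrix P read off the generator matrix of the problem:

```python
# Syndrome computation sketch for the systematic block code Cb(6,3),
# with G = [P | I3] and H^T = [I3 ; P] as derived above (arithmetic mod 2).
P = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
HT = I3 + P                      # rows of H^T: identity block, then P

def syndrome(r):
    """s = r . H^T over GF(2)."""
    return tuple(sum(ri * hij for ri, hij in zip(r, col)) % 2
                 for col in zip(*HT))

# single-error patterns reproduce the syndrome-error pattern table above
for i in range(6):
    e = [0] * 6
    e[i] = 1
    print(e, '->', syndrome(e))
```

Any codeword of the block code, such as c = (101001), gives the all-zero syndrome.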

The trellis of the convolutional encoder seen in the above figure is:
[Figure: trellis of the convolutional encoder over time instants t1 to t4, with states Sa = 00, Sb = 10, Sc = 01, Sd = 11; branch labels input/output are 0/00 and 1/01 from Sa, 0/11 and 1/10 from Sb, 0/11 and 1/10 from Sc, and 0/00 and 1/01 from Sd]

And the minimum Hamming free distance of the convolutional code is equal to d f = 4 :

[Figure: trellis paths over t1 to t5 with branch Hamming weights; the minimum-weight path Sa, Sb, Sd, Sc, Sa accumulates weight 1 + 1 + 0 + 2 = 4, so df = 4]

The following is the code table of the concatenated code:

m      c        w(c)   concatenated codeword      w
000 000000 0 00 00 00 00 00 00 00 00 0
001 101001 3 01 11 10 11 11 01 11 11 13
010 011010 3 00 01 10 00 10 11 11 00 7
011 110011 4 01 10 00 11 01 10 00 11 8
100 110100 3 01 10 00 10 11 11 00 00 7
101 011101 4 00 01 10 01 00 10 11 11 8
110 101110 4 01 11 10 10 01 00 11 00 8
111 000111 3 00 00 00 01 10 01 00 11 5

The minimum Hamming distance of the concatenated code is d min = 5 .
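The code table can be regenerated programmatically. This sketch assumes, from the trellis branch labels above, the convolutional output equations c1 = s1 XOR s2 and c2 = u XOR s1 XOR s2, with the state (s1, s2) holding the last two inputs; the encoder figure itself is not reproduced here, so these equations are an inference from the trellis:

```python
# Encoding sketch for the concatenated scheme: Cb(6,3) followed by the
# rate-1/2 convolutional encoder with two tailing zeros.
from itertools import product

G = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

def block_encode(m):
    # codeword c = m . G over GF(2)
    return [sum(mi * g for mi, g in zip(m, col)) % 2 for col in zip(*G)]

def conv_encode(bits):
    s1 = s2 = 0
    out = []
    for u in bits + [0, 0]:            # two tailing zeros clear the registers
        out += [s1 ^ s2, u ^ s1 ^ s2]  # output pair (c1, c2), read off trellis
        s1, s2 = u, s1
    return out

table = {m: conv_encode(block_encode(list(m)))
         for m in product((0, 1), repeat=3)}
d_min = min(sum(c) for m, c in table.items() if any(m))
print(d_min)   # -> 5, the minimum Hamming distance of the concatenated code
```

Spot-checking `table[(1, 0, 0)]` reproduces the row for m = 100 in the code table above.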

The received sequence r = (01 10 10 00 11 11 00 00 ) is first decoded using the


convolutional code with the classic Viterbi algorithm. Minimum accumulated Hamming
distance values at each state are inside the squares.
[Figure: Viterbi decoding of r = (01 10 10 00 11 11 00 00) over the trellis, t1 to t9; the minimum accumulated Hamming distance at each state is shown inside a square, and the surviving path reaches Sa at t9 with accumulated distance 2]

Even before the last two steps in the Viterbi decoding, the decoded sequence is seen
to be d = (01 10 00 10 11 11 00 00 ) , which corresponds to the message sequence
m´ = (110100 ) . The tailing zeros are discarded.
This decoded sequence is a codeword of the block code, so that the syndrome
calculation in this second step of the concatenated code is equal to zero, and the
decoded message is directly determined by truncating the decoded vector. Thus,
m = (100 ) .

Note that the minimum distance of the block code on its own is 3. This would also be the
minimum distance of the concatenated code without the extra tailing process, which adds
at least 2 to the weights of the codewords of the block code. This indicates the importance
of properly tailing off when using a convolutional code.
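The Viterbi decoding above can be sketched compactly. As before, this assumes the output equations c1 = s1 XOR s2, c2 = u XOR s1 XOR s2 inferred from the trellis:

```python
# Minimal hard-decision Viterbi sketch over the 4-state trellis of this
# problem, decoding the received pairs r = 01 10 10 00 11 11 00 00.
def viterbi(pairs):
    INF = 10**9
    metric = {(0, 0): 0, (1, 0): INF, (0, 1): INF, (1, 1): INF}
    paths = {s: [] for s in metric}
    for r1, r2 in pairs:
        new_m = {s: INF for s in metric}
        new_p = {s: [] for s in metric}
        for (s1, s2), m in metric.items():
            for u in (0, 1):
                c1, c2 = s1 ^ s2, u ^ s1 ^ s2      # branch output
                d = m + (c1 != r1) + (c2 != r2)    # Hamming branch metric
                ns = (u, s1)
                if d < new_m[ns]:
                    new_m[ns], new_p[ns] = d, paths[(s1, s2)] + [u]
        metric, paths = new_m, new_p
    return paths[(0, 0)]      # survivor ending in the all-zero state

r = [(0, 1), (1, 0), (1, 0), (0, 0), (1, 1), (1, 1), (0, 0), (0, 0)]
print(viterbi(r))   # -> [1, 1, 0, 1, 0, 0, 0, 0]
```

Discarding the two tail bits leaves the block codeword (110100), as found above.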

7.2)
The cyclic code C cyc ( 3 ,1 ) generated by the polynomial g ( X ) = 1 + X + X 2 is a
repetition code. Its code table is the following:

m c(X) c w
1 1+X+X2 111 3
0 0 000 -

Its minimum Hamming distance is d min_( 3 ,1 ) = 3 .


The cyclic code C cyc ( 7 ,3 ) , generated by the polynomial g ( X ) = 1 + X + X 2 + X 4 has
the following code table:

m c w
000 0000000 -
001 1101001 4
010 0111010 4
011 1010011 4
100 1110100 4
101 0011101 4
110 1001110 4
111 0100111 4

Its minimum Hamming distance is d min_( 7 ,3 ) = 4 .

Since g(X) = 1 + X + X^2 + X^4 and n − k = 7 − 3 = 4, X^(n-k) = X^4. As an example, we calculate one of the code polynomials as follows:

m(X) = X^2, so that X^(n-k) m(X) = X^6. Dividing X^6 by g(X):

  X^6 + X^4 + X^3 + X^2 = X^2 · g(X)      (quotient term X^2)
  remainder: X^4 + X^3 + X^2
  X^4 + X^2 + X + 1 = 1 · g(X)            (quotient term 1)
  remainder: X^3 + X + 1 = p(X)

Then:

c(X) = X^4 m(X) + p(X) = X^6 + X^3 + X + 1  ⇒  c = (1101001)

The remaining code polynomials can be determined in the same way.
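The systematic cyclic encoding rule just used, c(X) = X^(n-k) m(X) + p(X) with p(X) the remainder modulo g(X), can be sketched as follows (bits listed lowest degree first):

```python
# Systematic cyclic encoding sketch for Ccyc(7,3), g(X) = 1 + X + X^2 + X^4.
def poly_mod(num, g):
    """Remainder of num(X) / g(X) over GF(2); polynomials as coefficient lists."""
    num = num[:]
    for i in range(len(num) - 1, len(g) - 2, -1):
        if num[i]:                        # cancel the X^i term with X^(i-4) g(X)
            for j, gj in enumerate(g):
                num[i - len(g) + 1 + j] ^= gj
    return num[:len(g) - 1]

g = [1, 1, 1, 0, 1]                  # 1 + X + X^2 + X^4
m = [0, 0, 1]                        # m(X) = X^2
shifted = [0, 0, 0, 0] + m           # X^4 m(X) = X^6
p = poly_mod(shifted, g)             # remainder p(X) = 1 + X + X^3
c = [a ^ b for a, b in zip(p + [0, 0, 0], shifted)]
print(p, c)     # -> [1, 1, 0, 1] [1, 1, 0, 1, 0, 0, 1]
```

Running the same routine over all eight messages regenerates the code table above.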

The array code is constructed as indicated in the following figure:

[Figure: array code layout; a k2 × k1 block of message bits is extended to n1 columns by the row code and to n2 rows by the column code]

where k1 = 1, n1 = 3, k2 = 3 and n2 = 7.

The non-zero codewords of this concatenated code are now determined. Each column of 3-bit groups in the following array is one codeword (bits listed highest degree first); its first three rows carry the message bits:

1 1 1   0 0 0   1 1 1   0 0 0   1 1 1   0 0 0   1 1 1
0 0 0   1 1 1   1 1 1   0 0 0   0 0 0   1 1 1   1 1 1
0 0 0   0 0 0   0 0 0   1 1 1   1 1 1   1 1 1   1 1 1
1 1 1   1 1 1   0 0 0   0 0 0   1 1 1   1 1 1   0 0 0
0 0 0   1 1 1   1 1 1   1 1 1   1 1 1   0 0 0   0 0 0
1 1 1   1 1 1   0 0 0   1 1 1   0 0 0   0 0 0   1 1 1
1 1 1   0 0 0   1 1 1   1 1 1   0 0 0   1 1 1   0 0 0

The minimum weight in this code table is w min = d min = 12 , so the array code can
correct up to 5 errors and detect up to 6 errors. This minimum Hamming distance is
such that:

d_conc_min = d_(3,1)_min × d_(7,3)_min = 3 × 4 = 12

which confirms the product code nature of this type of array code.

Since the 3 message bits generate 21 coded bits, then the rate of the concatenated
code is Rc = 1 / 7 , which can also be determined by multiplying the rates of the two
component codes: (1/3) × (3/7) = 3/21 = 1/7.
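The product structure can be checked directly: every bit of a Ccyc(7,3) codeword (table above) is repeated three times by the repetition code Ccyc(3,1):

```python
# Array (product) code sketch: Ccyc(3,1) applied to each bit of Ccyc(7,3).
CODEWORDS_73 = ["0000000", "1101001", "0111010", "1010011",
                "1110100", "0011101", "1001110", "0100111"]

array_code = ["".join(3 * bit for bit in c) for c in CODEWORDS_73]
d_min = min(w.count("1") for w in array_code if "1" in w)
print(d_min, len(array_code[0]))    # -> 12 21
```

Every non-zero array codeword has weight exactly 3 × 4 = 12 here, since all non-zero Ccyc(7,3) codewords have weight 4.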

If we concatenate these two codes by applying first the cyclic code C cyc ( 7 ,3 ) and then
the cyclic code C cyc ( 3 ,1 ) , then in the above figure k1 = 3, n1 = 7, k2 = 1 and n2 = 3. The
resulting array code is described in the following table, where m is the message bits,
followed by the row constituent codeword and then the column constituent codewords
corresponding to each of the row bits, and the last column in the table is the weight of
each array codeword:

m      Ccyc(7,3) codeword      Ccyc(3,1) codewords      w
000 0000000 000 000 000 000 000 000 000 12
001 1101001 111 111 000 111 000 000 111 12
010 0111010 000 111 111 111 000 111 000 12
011 1010011 111 000 111 000 000 111 111 12
100 1110100 111 111 111 000 111 000 000 12
101 0011101 000 000 111 111 111 000 111 12
110 1001110 111 000 000 111 111 111 000 12
111 0100111 000 111 000 000 111 111 111 12

The minimum Hamming distance of the concatenated code is again 12, and as before it is the product of the minimum Hamming distances of the constituent codes: d_conc_min = d_(3,1)_min × d_(7,3)_min = 3 × 4 = 12. Also the rate is again 1/7, the product of the constituent code rates. This confirms that the order of concatenation is unimportant in the case of this type of array or product code.

Turbo codes:

7.3)

A simple binary array code (or punctured product code) has codewords with block
length n = 8 and k = 4 information bits, in the format seen in the following figure:

1 2 5

3 4 6

7 8

There are 2^4 = 16 codewords in this code.

The 15 non-zero codewords are determined as follows. In each sub-table, the rows are (b1 b2 b5), (b3 b4 b6) and (b7 b8), so the message bits occupy the first two positions of the first two rows:

0 0 0   0 0 0   0 0 0   0 1 1   0 1 1   0 1 1
0 1 1   1 0 1   1 1 0   0 0 0   0 1 1   1 0 1
0 1     1 0     1 1     0 1     0 0     1 1

0 1 1   1 0 1   1 0 1   1 0 1   1 0 1   1 1 0
1 1 0   0 0 0   0 1 1   1 0 1   1 1 0   0 0 0
1 0     1 0     1 1     0 0     0 1     1 1

1 1 0   1 1 0   1 1 0
0 1 1   1 0 1   1 1 0
1 0     0 1     0 0

The code has rate Rc = 4 / 8 = 1 / 2 .


The minimum Hamming distance is the minimum value of the Hamming weight
calculated over all the non-zero codewords, so this array code has a minimum
Hamming distance d min = 3 . Note that the 3-bit row and column single-parity-check
sub-codes generate independent parity checks. This property can be used to simplify
the encoding and decoding trellises of the array code, as indicated below.

If the missing 9th bit (the check on checks) had been present, then the Hamming distance of the code would have been 2 × 2 = 4, but deleting one bit from a code with even Hamming distance always reduces the minimum distance by one, in this case to 3, thus confirming the above result.

The table below lists the codewords in the code:



1 2 3 4 5 6 7 8
0 0 0 0 0 0 0 0
0 0 0 1 0 1 0 1
0 0 1 0 0 1 1 0
0 0 1 1 0 0 1 1
0 1 0 0 1 0 0 1
0 1 0 1 1 1 0 0
0 1 1 0 1 1 1 1
0 1 1 1 1 0 1 0
1 0 0 0 1 0 1 0
1 0 0 1 1 1 1 1
1 0 1 0 1 1 0 0
1 0 1 1 1 0 0 1
1 1 0 0 0 0 1 1
1 1 0 1 0 1 1 0
1 1 1 0 0 1 0 1
1 1 1 1 0 0 0 0
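The table of codewords follows from the parity equations read off the figure, with b5 and b6 the row checks and b7 and b8 the column checks:

```python
# Parity equations of the (8,4) punctured product (array) code.
from itertools import product

def encode(b1, b2, b3, b4):
    return (b1, b2, b3, b4,
            b1 ^ b2,     # b5: check on row (b1, b2)
            b3 ^ b4,     # b6: check on row (b3, b4)
            b1 ^ b3,     # b7: check on column (b1, b3)
            b2 ^ b4)     # b8: check on column (b2, b4)

codewords = [encode(*m) for m in product((0, 1), repeat=4)]
d_min = min(sum(c) for c in codewords if any(c))
print(len(codewords), d_min)    # -> 16 3
```

This confirms the minimum Hamming distance of 3 obtained above.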

By re-ordering the code bits of the row sub-codes as 125 and 346, their trellis has the following form:

[Figure: two-state trellis of the (3,2) single-parity-check sub-code, terminating in the zero state]
This trellis corresponds to the simple parity check code, whose table is seen below for
the case of the row sub-code 125:

1 2 5
0 0 0
0 1 1
1 0 1
1 1 0

This array code can be regarded as a simple form of turbo code. In terms of the turbo
code structure shown below in the figure, the parallel concatenated component
encoders calculate the row and column parity checks of the array code, and the
permuter alters the order in which the information bits enter the column encoder from
{1,2,3,4} to {1,3,2,4}. The multiplexer then collects the information and parity bits to
form a complete codeword.
[Figure: turbo encoder structure; the information bits feed the row encoder directly and the column encoder through the permuter, and a multiplexer collects the information bits, the row checks and the column checks]

A codeword from the code is modulated, transmitted over a soft-decision


discrete symmetric memoryless channel like that seen in the following figure, with the
conditional probabilities described in the following table, and received as the vector
r = (10300000 ) . Using the turbo (iterative) MAP decoding algorithm, determine the
information bits that were transmitted.

[Figure: binary-input, 4-ary-output discrete symmetric memoryless channel; output 0 is a high-reliability decision for 0, output 1 a low-reliability decision for 0, output 2 a low-reliability decision for 1, and output 3 a high-reliability decision for 1]

P(y/x):
x \ y     0      1      2      3
0        0.4    0.3    0.2    0.1
1        0.1    0.2    0.3    0.4

We assume that the transmitted code vector is c = (00000000). The received vector
is r = (10300000 ) . Note that elements of the transmitted vector (which we need to
determine) are inputs of the channel of the above figure, and elements of the received
vector are outputs of that channel. The following table shows the transition probabilities
for the input elements ‘1’ and ‘0’ for each received element:

j 1 2 3 4 5 6 7 8
(P ( y j / 0), P ( y j / 1) ) (0.3,0.2) (0.4,0.1) (0.1,0.4) (0.4,0.1) (0.4,0.1) (0.4,0.1) (0.4,0.1) (0.4,0.1)
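This table of likelihood pairs is just the channel table indexed by the received symbols, and can be sketched as:

```python
# Likelihood pairs (P(y_j|0), P(y_j|1)) for the received vector
# r = (1 0 3 0 0 0 0 0), reproducing the table above.
P = {0: (0.4, 0.3, 0.2, 0.1),    # P(y | x = 0) for y = 0, 1, 2, 3
     1: (0.1, 0.2, 0.3, 0.4)}    # P(y | x = 1)

r = [1, 0, 3, 0, 0, 0, 0, 0]
pairs = [(P[0][y], P[1][y]) for y in r]
print(pairs[:3])    # -> [(0.3, 0.2), (0.4, 0.1), (0.1, 0.4)]
```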

In the row code, bits 1, 2 and 5 are related as in the trellis seen in the figure. The same happens to bits 3, 4 and 6, which constitute a parity check code that is independent from the code of bits 1, 2 and 5. We can calculate the values γi(u',u) for each of these two codes. Coefficients for the row code of bits 1, 2 and 5 will be denoted γiRA(u',u), whereas coefficients of the row code of bits 3, 4 and 6 will be denoted γiRB(u',u). The trellis for the sub-code 125 is then:

[Figure: two-state trellis over bit 1, bit 2 and bit 5 of the single-parity-check sub-code]

i=1:
i=1:

γ1RA(0,0) = Σx P(S1=0/S0=0)·P(X1=x/{S0=0,S1=0})·P(Y1/X1=x)
= P(S1=0/S0=0)·P(X1=0/{S0=0,S1=0})·P(Y1/X1=0) + P(S1=0/S0=0)·P(X1=1/{S0=0,S1=0})·P(Y1/X1=1)
= 0.5 × 1 × 0.3 + 0.5 × 0 × 0.2 = 0.15

γ1RA(0,1) = P(S1=1/S0=0)·P(X1=0/{S0=0,S1=1})·P(Y1/X1=0) + P(S1=1/S0=0)·P(X1=1/{S0=0,S1=1})·P(Y1/X1=1)
= 0.5 × 0 × 0.3 + 0.5 × 1 × 0.2 = 0.1

i=2:

γ2RA(0,0) = Σx P(S2=0/S1=0)·P(X2=x/{S1=0,S2=0})·P(Y2/X2=x)
= P(S2=0/S1=0)·P(X2=0/{S1=0,S2=0})·P(Y2/X2=0) + P(S2=0/S1=0)·P(X2=1/{S1=0,S2=0})·P(Y2/X2=1)
= 0.5 × 1 × 0.4 + 0.5 × 0 × 0.1 = 0.2

γ2RA(0,1) = P(S2=1/S1=0)·P(X2=0/{S1=0,S2=1})·P(Y2/X2=0) + P(S2=1/S1=0)·P(X2=1/{S1=0,S2=1})·P(Y2/X2=1)
= 0.5 × 0 × 0.4 + 0.5 × 1 × 0.1 = 0.05

γ2RA(1,0) = P(S2=0/S1=1)·P(X2=0/{S1=1,S2=0})·P(Y2/X2=0) + P(S2=0/S1=1)·P(X2=1/{S1=1,S2=0})·P(Y2/X2=1)
= 0.5 × 0 × 0.4 + 0.5 × 1 × 0.1 = 0.05

γ2RA(1,1) = P(S2=1/S1=1)·P(X2=0/{S1=1,S2=1})·P(Y2/X2=0) + P(S2=1/S1=1)·P(X2=1/{S1=1,S2=1})·P(Y2/X2=1)
= 0.5 × 1 × 0.4 + 0.5 × 0 × 0.1 = 0.2

i=5:

γ5RA(0,0) = 1 × 1 × 0.4 = 0.4
γ5RA(1,0) = 1 × 1 × 0.1 = 0.1

Forward recursive calculation of the values αiRA(u) is started by setting the initial conditions α0RA(0) = 1, α0RA(m) = 0; m ≠ 0:

α1RA(0) = Σ(u'=0..1) α0RA(u')·γ1RA(u',0) = α0RA(0)·γ1RA(0,0) + α0RA(1)·γ1RA(1,0) = 1 × 0.15 + 0 × 0 = 0.15
α1RA(1) = α0RA(0)·γ1RA(0,1) = 1 × 0.1 = 0.1

α2RA(0) = α1RA(0)·γ2RA(0,0) + α1RA(1)·γ2RA(1,0) = 0.15 × 0.2 + 0.1 × 0.05 = 0.035
α2RA(1) = α1RA(0)·γ2RA(0,1) + α1RA(1)·γ2RA(1,1) = 0.15 × 0.05 + 0.1 × 0.2 = 0.0275

α5RA(0) = α2RA(0)·γ5RA(0,0) + α2RA(1)·γ5RA(1,0) = 0.035 × 0.4 + 0.0275 × 0.1 = 0.01675

Backward recursive calculation of the values βiRA(u) is done by setting the contour conditions β5RA(0) = 1, β5RA(m) = 0; m ≠ 0:

β2RA(0) = β5RA(0)·γ5RA(0,0) = 1 × 0.4 = 0.4
β2RA(1) = β5RA(0)·γ5RA(1,0) = 1 × 0.1 = 0.1

β1RA(0) = β2RA(0)·γ2RA(0,0) + β2RA(1)·γ2RA(0,1) = 0.4 × 0.2 + 0.1 × 0.05 = 0.085
β1RA(1) = β2RA(0)·γ2RA(1,0) + β2RA(1)·γ2RA(1,1) = 0.4 × 0.05 + 0.1 × 0.2 = 0.04

β0RA(0) = β1RA(0)·γ1RA(0,0) + β1RA(1)·γ1RA(0,1) = 0.085 × 0.15 + 0.04 × 0.1 = 0.01675

Once the values γiRA(u',u), αiRA(u) and βiRA(u) have been determined, the values λiRA(u) and σiRA(u',u) can be calculated as:

λ1RA(0) = α1RA(0)·β1RA(0) = 0.15 × 0.085 = 0.01275
λ1RA(1) = α1RA(1)·β1RA(1) = 0.1 × 0.04 = 0.004

The coefficients λiRA(u) determine the estimates for input symbols '1' and '0' when there is only one branch or transition of the trellis arriving at a given node, which then defines the value of that node. This happens for instance in the trellis of the figure at nodes λ1RA(0) and λ1RA(1):

λ1RA(0) / (λ1RA(0) + λ1RA(1)) = 0.01275 / (0.01275 + 0.004) = 0.76119

which is the soft decision for '0' at position 1.

λ1RA(1) / (λ1RA(0) + λ1RA(1)) = 0.004 / (0.01275 + 0.004) = 0.23881

which is the soft decision for '1' at position 1.

Coefficients σ iRA ( u' , u ) are then utilized for determining the soft decisions when there
are two or more transitions or branches arriving at a given node of the trellis, and when
these branches are assigned the different input symbols:

σ2RA(0,0) = α1RA(0)·γ2RA(0,0)·β2RA(0) = 0.15 × 0.2 × 0.4 = 0.012
σ2RA(1,0) = α1RA(1)·γ2RA(1,0)·β2RA(0) = 0.1 × 0.05 × 0.4 = 0.002
σ2RA(0,1) = α1RA(0)·γ2RA(0,1)·β2RA(1) = 0.15 × 0.05 × 0.1 = 0.00075
σ2RA(1,1) = α1RA(1)·γ2RA(1,1)·β2RA(1) = 0.1 × 0.2 × 0.1 = 0.002

σ5RA(0,0) = α2RA(0)·γ5RA(0,0)·β5RA(0) = 0.035 × 0.4 × 1 = 0.014
σ5RA(1,0) = α2RA(1)·γ5RA(1,0)·β5RA(0) = 0.0275 × 0.1 × 1 = 0.00275

These values allow us to determine soft decisions for the corresponding nodes. For
instance, for position i = 2 , the trellis transition probabilities involved in the calculation
of a soft decision for ‘0’ are:

(σ2RA(0,0) + σ2RA(1,1)) / (σ2RA(0,0) + σ2RA(1,1) + σ2RA(0,1) + σ2RA(1,0))
= (0.012 + 0.002) / (0.012 + 0.002 + 0.00075 + 0.002) = 0.8358

which is a soft decision for '0' at position 2. The soft decision for '1' at that position is then 1 − 0.8358 = 0.1642. For position i = 5, the trellis transition probabilities involved in the calculation of a soft decision for '0' are:

σ5RA(0,0) / (σ5RA(0,0) + σ5RA(1,0)) = 0.014 / (0.014 + 0.00275) = 0.8358

And the soft decision for '1' at position i = 5 is:

σ5RA(1,0) / (σ5RA(0,0) + σ5RA(1,0)) = 0.00275 / (0.014 + 0.00275) = 0.1642
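The whole forward-backward computation for one two-state parity-check sub-code can be condensed into a short sketch (equiprobable information bits, likelihood pairs taken from the channel table):

```python
# BCJR sketch for one two-state parity-check trellis (here bits 1, 2, 5),
# reproducing the soft decisions derived above.
def bcjr_parity(likes):
    (p10, p11), (p20, p21), (p50, p51) = likes
    # branch metrics gamma_i(u', u); a priori 1/2 for the information bits,
    # the parity transition is forced by the trellis
    g1 = {(0, 0): 0.5 * p10, (0, 1): 0.5 * p11}
    g2 = {(0, 0): 0.5 * p20, (0, 1): 0.5 * p21,
          (1, 0): 0.5 * p21, (1, 1): 0.5 * p20}
    g5 = {(0, 0): p50, (1, 0): p51}
    # forward (alpha) and backward (beta) recursions
    a1 = {0: g1[(0, 0)], 1: g1[(0, 1)]}
    a2 = {0: a1[0] * g2[(0, 0)] + a1[1] * g2[(1, 0)],
          1: a1[0] * g2[(0, 1)] + a1[1] * g2[(1, 1)]}
    b2 = {0: g5[(0, 0)], 1: g5[(1, 0)]}
    b1 = {0: b2[0] * g2[(0, 0)] + b2[1] * g2[(0, 1)],
          1: b2[0] * g2[(1, 0)] + b2[1] * g2[(1, 1)]}
    # soft decisions P(bit = 0 | r) at the three positions
    soft1 = a1[0] * b1[0] / (a1[0] * b1[0] + a1[1] * b1[1])
    s = {(u, v): a1[u] * g2[(u, v)] * b2[v] for u in (0, 1) for v in (0, 1)}
    soft2 = (s[(0, 0)] + s[(1, 1)]) / sum(s.values())
    soft5 = a2[0] * g5[(0, 0)] / (a2[0] * g5[(0, 0)] + a2[1] * g5[(1, 0)])
    return soft1, soft2, soft5

print([round(v, 5) for v in bcjr_parity([(0.3, 0.2), (0.4, 0.1), (0.4, 0.1)])])
# -> [0.76119, 0.83582, 0.83582]
```

Calling the same routine with the likelihoods of bits 3, 4 and 6, `[(0.1, 0.4), (0.4, 0.1), (0.4, 0.1)]`, reproduces the soft decisions 0.34694 and 0.65306 of the second sub-code below.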

For the row code of bits 3, 4 and 6, whose trellis is seen in the following figure:

[Figure: two-state trellis over bit 3, bit 4 and bit 6 of the single-parity-check sub-code]

i=3:

γ3RB(0,0) = 0.5 × 1 × 0.1 = 0.05
γ3RB(0,1) = 0.5 × 1 × 0.4 = 0.2

i=4:

γ4RB(0,0) = 0.5 × 1 × 0.4 = 0.2
γ4RB(0,1) = 0.5 × 1 × 0.1 = 0.05
γ4RB(1,0) = 0.5 × 1 × 0.1 = 0.05
γ4RB(1,1) = 0.5 × 1 × 0.4 = 0.2

i=6:

γ6RB(0,0) = 1 × 1 × 0.4 = 0.4
γ6RB(1,0) = 1 × 1 × 0.1 = 0.1

Forward recursive calculation of the values αiRB(u) is started by setting the initial conditions α2RB(0) = 1, α2RB(m) = 0; m ≠ 0:

α3RB(0) = α2RB(0)·γ3RB(0,0) + α2RB(1)·γ3RB(1,0) = 1 × 0.05 + 0 × 0 = 0.05
α3RB(1) = α2RB(0)·γ3RB(0,1) = 1 × 0.2 = 0.2

α4RB(0) = α3RB(0)·γ4RB(0,0) + α3RB(1)·γ4RB(1,0) = 0.05 × 0.2 + 0.2 × 0.05 = 0.02
α4RB(1) = α3RB(0)·γ4RB(0,1) + α3RB(1)·γ4RB(1,1) = 0.05 × 0.05 + 0.2 × 0.2 = 0.0425

α6RB(0) = α4RB(0)·γ6RB(0,0) + α4RB(1)·γ6RB(1,0) = 0.02 × 0.4 + 0.0425 × 0.1 = 0.01225

Backward recursive calculation of the values βiRB(u) is done by setting the contour conditions β6RB(0) = 1, β6RB(m) = 0; m ≠ 0:

β4RB(0) = β6RB(0)·γ6RB(0,0) = 1 × 0.4 = 0.4
β4RB(1) = β6RB(0)·γ6RB(1,0) = 1 × 0.1 = 0.1

β3RB(0) = β4RB(0)·γ4RB(0,0) + β4RB(1)·γ4RB(0,1) = 0.4 × 0.2 + 0.1 × 0.05 = 0.085
β3RB(1) = β4RB(0)·γ4RB(1,0) + β4RB(1)·γ4RB(1,1) = 0.4 × 0.05 + 0.1 × 0.2 = 0.04

Once the values γiRB(u',u), αiRB(u) and βiRB(u) have been determined, the values λiRB(u) and σiRB(u',u) can be calculated as:

λ3RB(0) = α3RB(0)·β3RB(0) = 0.05 × 0.085 = 0.00425
λ3RB(1) = α3RB(1)·β3RB(1) = 0.2 × 0.04 = 0.008

The coefficients λiRB(u) determine the estimates for input symbols '1' and '0' when there is only one branch or transition of the trellis arriving at a given node, which then defines the value of that node. This happens for instance in the trellis of the figure at nodes λ3RB(0) and λ3RB(1):

λ3RB(0) / (λ3RB(0) + λ3RB(1)) = 0.00425 / (0.00425 + 0.008) = 0.34694

which is the soft decision for '0' at position 3.

λ3RB(1) / (λ3RB(0) + λ3RB(1)) = 0.008 / (0.00425 + 0.008) = 0.65306

which is the soft decision for '1' at position 3.

Coefficients σ iRB ( u' , u ) are then utilized for determining the soft decisions when there
are two or more transitions or branches arriving at a given node of the trellis, and when
these branches are assigned the different input symbols:

σ4RB(0,0) = α3RB(0)·γ4RB(0,0)·β4RB(0) = 0.05 × 0.2 × 0.4 = 0.004
σ4RB(1,0) = α3RB(1)·γ4RB(1,0)·β4RB(0) = 0.2 × 0.05 × 0.4 = 0.004
σ4RB(0,1) = α3RB(0)·γ4RB(0,1)·β4RB(1) = 0.05 × 0.05 × 0.1 = 0.00025
σ4RB(1,1) = α3RB(1)·γ4RB(1,1)·β4RB(1) = 0.2 × 0.2 × 0.1 = 0.004

σ6RB(0,0) = α4RB(0)·γ6RB(0,0)·β6RB(0) = 0.02 × 0.4 × 1 = 0.008
σ6RB(1,0) = α4RB(1)·γ6RB(1,0)·β6RB(0) = 0.0425 × 0.1 × 1 = 0.00425

These values allow us to determine soft decisions for the corresponding nodes. For
instance, for position i = 4 , the trellis transition probabilities involved in the calculation
of a soft decision for ‘0’ are:

(σ4RB(0,0) + σ4RB(1,1)) / (σ4RB(0,0) + σ4RB(1,1) + σ4RB(0,1) + σ4RB(1,0))
= (0.004 + 0.004) / (0.004 + 0.004 + 0.00025 + 0.004) = 0.65306

which is a soft decision for '0' at position 4. The soft decision for '1' at that position is then 1 − 0.65306 = 0.34694. For position i = 6, the trellis transition probabilities involved in the calculation of a soft decision for '0' are:

σ6RB(0,0) / (σ6RB(0,0) + σ6RB(1,0)) = 0.008 / (0.008 + 0.00425) = 0.65306

And the soft decision for '1' at position i = 6 is:

σ6RB(1,0) / (σ6RB(0,0) + σ6RB(1,0)) = 0.00425 / (0.008 + 0.00425) = 0.34694

A decision taken at this point generates the decoded vector d = (0 0 1 0 0 0 ) .


The decoding has to continue in order to determine the whole decoded vector. The
second decoder can use the a priori information provided as extrinsic information by
the first decoder.
The following calculations are done in logarithmic form in order to determine the
extrinsic information that the first decoder sends to the second decoder. Estimates in
logarithmic form are equal to:

LLRR1(1) = ln(0.23881 / 0.76119) = −1.1592
LLRR2(1) = ln(0.1642 / 0.8358) = −1.6275
LLRR3(1) = ln(0.6531 / 0.3469) = +0.6325
LLRR4(1) = ln(0.3469 / 0.6531) = −0.6325
LLRR5(1) = ln(0.1642 / 0.8358) = −1.6275
LLRR6(1) = ln(0.3469 / 0.6531) = −0.6325

LLRR denotes the logarithmic likelihood ratio (LLR) for the row code.


The information to be subtracted from the logarithmic estimates is evaluated as:

LCR1(1) = ln(0.2 / 0.3) = −0.4055
LCR2(1) = ln(0.1 / 0.4) = −1.3863
LCR3(1) = ln(0.4 / 0.1) = +1.3863
LCR4(1) = ln(0.1 / 0.4) = −1.3863
LCR5(1) = ln(0.1 / 0.4) = −1.3863
LCR6(1) = ln(0.1 / 0.4) = −1.3863

The extrinsic information that is going to be passed as a priori information of the


second decoder is determined by doing:

LER1(1) = LLRR1(1) − LCR1(1) = −1.1592 + 0.4055 = −0.7538
LER2(1) = LLRR2(1) − LCR2(1) = −1.6275 + 1.3863 = −0.2412
LER3(1) = LLRR3(1) − LCR3(1) = +0.6325 − 1.3863 = −0.7538
LER4(1) = LLRR4(1) − LCR4(1) = −0.6325 + 1.3863 = +0.7538
LER5(1) = LLRR5(1) − LCR5(1) = −1.6275 + 1.3863 = −0.2412
LER6(1) = LLRR6(1) − LCR6(1) = −0.6325 + 1.3863 = +0.7538
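This step, extrinsic information as the decoder LLR minus the channel LLR, can be sketched from the values already listed:

```python
# Extrinsic information sketch for the row decoder's first pass: LE = LLR - LC,
# using the soft decisions for '0' and channel likelihood pairs listed above.
import math

soft_zero = {1: 0.76119, 2: 0.8358, 3: 0.3469, 4: 0.6531}
chan = {1: (0.3, 0.2), 2: (0.4, 0.1), 3: (0.1, 0.4), 4: (0.4, 0.1)}

extrinsic = {}
for j in soft_zero:
    llr = math.log((1 - soft_zero[j]) / soft_zero[j])   # decoder estimate
    lc = math.log(chan[j][1] / chan[j][0])              # channel information
    extrinsic[j] = round(llr - lc, 4)
print(extrinsic)
```

The results agree with the LER values above to within the rounding of the tabulated inputs.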

These estimates are converted into the a priori information LbCi(1) for the second (column) decoder. The two independent column codes have the same type of trellis, which is also the same as for the two independent row codes. We now take into account the permutation rule by properly assigning the a priori information.

LbC1(1) = LER1(1) = −0.7538
LbC2(1) = LER2(1) = −0.2412
LbC3(1) = LER3(1) = −0.7538
LbC4(1) = LER4(1) = +0.7538

These a priori estimates can be converted into probabilities by using the expression:

PbCi(1)(bi = ±1) = [e^(−LbCi(1)/2) / (1 + e^(−LbCi(1)))] · e^(bi·LbCi(1)/2)

These a priori probabilities are the updated information for the second decoder. They can be calculated using the above expression, and they are equal to:

PbC1(1)(b1 = −1) = 0.6800,  PbC1(1)(b1 = +1) = 0.3200
PbC2(1)(b2 = −1) = 0.5600,  PbC2(1)(b2 = +1) = 0.4400
PbC3(1)(b3 = −1) = 0.6800,  PbC3(1)(b3 = +1) = 0.3200
PbC4(1)(b4 = −1) = 0.3200,  PbC4(1)(b4 = +1) = 0.6800

Since the parity bits of the first decoder are different from those of the second decoder, no a priori information is available for the redundancy bits of the second decoder, and the corresponding factors are set to:

PbC7(1)(b7 = −1) = 1.0000 or PbC7(1)(b7 = +1) = 1.0000
PbC8(1)(b8 = −1) = 1.0000 or PbC8(1)(b8 = +1) = 1.0000
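The LLR-to-probability conversion used here can be sketched as follows, with L = ln(P(b = +1)/P(b = −1)):

```python
# Converting an a priori LLR into the probability pair used by the decoders,
# reproducing the values listed above.
import math

def prob(L, b):
    """P(b_i = b) for b in {-1, +1}, given the LLR L of the bit."""
    return math.exp(b * L / 2) / (math.exp(L / 2) + math.exp(-L / 2))

for L in (-0.7538, -0.2412, +0.7538):
    print(round(prob(L, -1), 4), round(prob(L, +1), 4))
# prints 0.68 0.32, then 0.56 0.44, then 0.32 0.68
```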

In the column code, bits 1, 3 and 7 are related as in the trellis seen in the figure below. The same happens to bits 2, 4 and 8, which constitute a parity check code that is independent from the code of bits 1, 3 and 7. For these two codes we can calculate the values γi(u',u). Coefficients for the column code of bits 1, 3 and 7 will be denoted γiCA(u',u), whereas coefficients of the column code of bits 2, 4 and 8 will be denoted γiCB(u',u).

The trellis for the column code A (sub-code 137) is then:

[Figure: two-state trellis over bit 1, bit 3 and bit 7 of the single-parity-check sub-code]

By taking into account that the trellis involves bits 1, 3 and 7, we are also directly taking into account the permutation rule of the whole turbo code.

With these updated probabilities, we can now calculate the coefficients γiCA(u',u) for the column code. Thus, for instance, for i = 1:

i=1:

γ1CA(0,0) = 0.68 × 1 × 0.3 = 0.2040
γ1CA(0,1) = 0.32 × 1 × 0.2 = 0.0640

i=3:

γ3CA(0,0) = 0.68 × 1 × 0.1 = 0.0680
γ3CA(0,1) = 0.32 × 1 × 0.4 = 0.1280
γ3CA(1,0) = 0.32 × 1 × 0.4 = 0.1280
γ3CA(1,1) = 0.68 × 1 × 0.1 = 0.0680

i=7:

γ7CA(0,0) = 1 × 1 × 0.4 = 0.4000
γ7CA(1,0) = 1 × 1 × 0.1 = 0.1000

Forward recursive calculation of the values αiCA(u) is started by setting the initial conditions α0CA(0) = 1, α0CA(m) = 0; m ≠ 0, thus:

α1CA(0) = α0CA(0)·γ1CA(0,0) = 0.2040
α1CA(1) = α0CA(0)·γ1CA(0,1) = 0.0640

α3CA(0) = α1CA(0)·γ3CA(0,0) + α1CA(1)·γ3CA(1,0) = 0.204 × 0.068 + 0.064 × 0.128 = 0.022064
α3CA(1) = α1CA(0)·γ3CA(0,1) + α1CA(1)·γ3CA(1,1) = 0.204 × 0.128 + 0.064 × 0.068 = 0.030464

α7CA(0) = α3CA(0)·γ7CA(0,0) + α3CA(1)·γ7CA(1,0) = 0.022064 × 0.4 + 0.030464 × 0.1 = 0.011872

Backward recursive calculation of the values βiCA ( u ) is done by setting the contour
conditions β7CA ( 0 ) = 1 , β7CA ( m ) = 0 ; m ≠ 0 :

β3CA ( 0 ) = 0.4000
β3CA (1 ) = 0.1000

β1CA ( 0 ) = 0.0400
β1CA (1 ) = 0.0580

Once the values γiCA(u',u), αiCA(u) and βiCA(u) have been determined, the values λiCA(u) and σiCA(u',u) can be calculated as:

λ1CA(0) = α1CA(0)·β1CA(0) = 0.204 × 0.04 = 0.00816
λ1CA(1) = α1CA(1)·β1CA(1) = 0.064 × 0.058 = 0.003712

Coefficients σ iCA ( u' , u ) are then utilized for determining the soft decisions when there
are two or more transitions or branches arriving at a given node of the trellis, and when
these branches are assigned the different input symbols:

σ 3CA ( 0 ,0 ) = 0.0055488
σ 3CA (1,0 ) = 0.0032768

σ 3CA ( 0 ,1 ) = 0.0026112
σ 3CA (1,1 ) = 0.0004352

σ7CA ( 0 ,0 ) = 0.0088256
σ7CA (1,0 ) = 0.0030464

The trellis for the column code B (sub-code 248) is seen below:

[Figure: two-state trellis over bit 2, bit 4 and bit 8 of the single-parity-check sub-code]

By taking into account that the trellis involves bits 2, 4 and 8, we are also directly taking into account the permutation rule of the whole turbo code.

With these updated probabilities, we can now calculate coefficients γiCB ( u' , u ) for the
column code.

i=2:

γ2CB ( 0 ,0 ) = 0.2240
γ2CB ( 0 ,1 ) = 0.0440

i=4:

γ4CB ( 0 ,0 ) = 0.1280
γ4CB ( 0 ,1 ) = 0.0680
γ4CB (1,0 ) = 0.0680
γ4CB (1,1 ) = 0.1280

i=8:

γ8CB ( 0 ,0 ) = 0.4000
γ8CB (1,0 ) = 0.1000

Forward recursive calculation of the values αiCB(u) is started by setting the initial conditions α1CB(0) = 1, α1CB(m) = 0; m ≠ 0:

α2CB ( 0 ) = 0.2240
α2CB (1 ) = 0.0440

α4CB ( 0 ) = 0.031664
α4CB (1 ) = 0.020864

α8CB ( 0 ) = 0.014752

Backward recursive calculation of the values βiCB ( u ) is done by setting the contour
conditions β8CB ( 0 ) = 1 , β8CB ( m ) = 0 ; m ≠ 0 :

β4CB ( 0 ) = 0.4000
β4CB (1 ) = 0.1000

β2CB ( 0 ) = 0.0580
β2CB (1 ) = 0.0400

Once the values γiCB(u',u), αiCB(u) and βiCB(u) have been determined, the values λiCB(u) and σiCB(u',u) can be calculated as:

λ2CB(0) = α2CB(0)·β2CB(0) = 0.224 × 0.058 = 0.012992
λ2CB(1) = α2CB(1)·β2CB(1) = 0.044 × 0.04 = 0.001760

Coefficients σ iCB ( u' , u ) are then utilized for determining the soft decisions when there
are two or more transitions or branches arriving at a given node of the trellis, and when
these branches are assigned the different input symbols:


σ 4CB ( 0 ,0 ) = 0.0114688
σ 4CB (1,0 ) = 0.0011968

σ 4CB ( 0 ,1 ) = 0.0015232
σ 4CB (1,1 ) = 0.0005632

σ 8CB ( 0 ,0 ) = 0.0126656
σ 8CB (1,0 ) = 0.0020864

With all the values already calculated, an estimate or soft decision can be made for
each step i of the decoded sequence.
We will calculate directly the LLRs of the decoding of the second decoder:

LLRC1(1) = −0.7877
LLRC2(1) = −1.9990
LLRC3(1) = −0.0162
LLRC4(1) = −1.4869
LLRC7(1) = −1.0637
LLRC8(1) = −1.8034

These estimates can be converted into probabilities:

PbDC1(1)(b1 = −1) = 0.6873,  PbDC1(1)(b1 = +1) = 0.3127
PbDC2(1)(b2 = −1) = 0.8807,  PbDC2(1)(b2 = +1) = 0.1193
PbDC3(1)(b3 = −1) = 0.5040,  PbDC3(1)(b3 = +1) = 0.4960
PbDC4(1)(b4 = −1) = 0.8156,  PbDC4(1)(b4 = +1) = 0.1844
PbDC7(1)(b7 = −1) = 0.7434,  PbDC7(1)(b7 = +1) = 0.2566
PbDC8(1)(b8 = −1) = 0.8586,  PbDC8(1)(b8 = +1) = 0.1414

A decision taken at this point by the column decoder generates the decoded vector
d = (0 0 0 0 0 0 ) .

The second decoder makes use of a priori information that should be subtracted from
these estimations to calculate the extrinsic information that the second decoder passes
to the first one.
The information to be subtracted from the logarithmic estimates of the second decoder
in the first iteration is evaluated as:

LCC1(1) = ln(0.2 / 0.3) = −0.4055
LCC2(1) = ln(0.1 / 0.4) = −1.3863
LCC3(1) = ln(0.4 / 0.1) = +1.3863
LCC4(1) = ln(0.1 / 0.4) = −1.3863

and:

LbC1(1) = ln(0.32 / 0.68) = −0.7538
LbC2(1) = ln(0.44 / 0.56) = −0.2412
LbC3(1) = ln(0.32 / 0.68) = −0.7538
LbC4(1) = ln(0.68 / 0.32) = +0.7538

The extrinsic information that is going to be passed from the second decoder as a priori information of the first decoder is determined by subtracting the channel and a priori terms; note that the permuter, which exchanges positions 2 and 3, is applied when passing the estimates back:

LEC1(1) = LLRC1(1) − LCC1(1) − LbC1(1) = +0.3716
LEC2(1) = LLRC3(1) − LCC3(1) − LbC3(1) = −0.6487
LEC3(1) = LLRC2(1) − LCC2(1) − LbC2(1) = −0.3716
LEC4(1) = LLRC4(1) − LCC4(1) − LbC4(1) = −0.8544

These estimates are converted into the a priori information LbRi(2) for the second iteration of the first decoder. Thus,

LbR1(2) = LEC1(1) = +0.3716
LbR2(2) = LEC2(1) = −0.6487
LbR3(2) = LEC3(1) = −0.3716
LbR4(2) = LEC4(1) = −0.8544

These a priori estimates can be converted into probabilities by using the expression:

PbRi(2)(bi = ±1) = [e^(−LbRi(2)/2) / (1 + e^(−LbRi(2)))] · e^(bi·LbRi(2)/2)

These a priori probabilities are the updated information for the first decoder. They can be calculated using the above expression, and they are equal to:

PbR1(2)(b1 = −1) = 0.4082,  PbR1(2)(b1 = +1) = 0.5918
PbR2(2)(b2 = −1) = 0.6567,  PbR2(2)(b2 = +1) = 0.3433
PbR3(2)(b3 = −1) = 0.5918,  PbR3(2)(b3 = +1) = 0.4082
PbR4(2)(b4 = −1) = 0.7015,  PbR4(2)(b4 = +1) = 0.2985

Since the parity bits of the second decoder are different from those of the first decoder, no a priori information is available for the redundancy bits of the first decoder, and the corresponding factors are set to:

PbR5(2)(b5 = −1) = 1.0000 or PbR5(2)(b5 = +1) = 1.0000
PbR6(2)(b6 = −1) = 1.0000 or PbR6(2)(b6 = +1) = 1.0000

The decoding needs to continue with the following iteration. The iterative decoding performs as detailed above, and we summarize the resulting estimates for each iteration as follows. The first decoder performs the second iteration, generating the estimates:

LLRR1(2) = −0.9379
LLRR2(2) = −1.7782
LLRR3(2) = −0.3204
LLRR4(2) = −1.8107
LLRR5(2) = −1.4102
LLRR6(2) = −0.7999

so that decoded probabilities are:

Pb DR1( 2 ) ( b1 = −1 ) = 0.7187 , Pb DR1( 2 ) ( b1 = +1 ) = 0.2813


Pb DR2( 2 ) ( b2 = −1 ) = 0.8555 , Pb DR2( 2 ) ( b2 = +1 ) = 0.1445
Pb DR3( 2 ) ( b3 = −1 ) = 0.5794 , Pb DR3( 2 ) ( b3 = +1 ) = 0.4206
Pb DR4( 2 ) ( b4 = −1 ) = 0.8594 , Pb DR4( 2 ) ( b4 = +1 ) = 0.1406
Pb DR5( 2 ) ( b5 = −1 ) = 0.8038 , Pb DR5( 2 ) ( b5 = +1 ) = 0.1962
Pb DR6( 2 ) ( b6 = −1 ) = 0.6900 , Pb DR6( 2 ) ( b6 = +1 ) = 0.3100

The decoded vector is now a code vector for the row code. The second decoder
performs its second iteration and the resulting estimates are:

LLRC1( 2 ) = −1.1136
LLRC 2( 2 ) = −0.3910
LLRC3( 2 ) = −1.9536
LLRC4( 2 ) = −1.7190
LLRC5( 2 ) = −1.1987
LLRC6( 2 ) = −1.9394

The decoded message is the same for both decoders at the second iteration, so that
the whole decoded vector is the code vector d = c = (0 0 0 0 0 0 0 0 ) , the
decoded information bits are 0000, and the decoding process was successfully
completed.
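The final hard decisions follow directly from the signs of these LLRs. A minimal check, with the second-iteration estimates of both decoders copied from the lists above and −1 mapped to binary 0:

```python
llr_row = [-0.9379, -1.7782, -0.3204, -1.8107, -1.4102, -0.7999]  # LLRA(2), first decoder
llr_col = [-1.1136, -0.3910, -1.9536, -1.7190, -1.1987, -1.9394]  # LLRC(2), second decoder

def decide(llrs):
    # negative LLR -> polar bit -1 -> binary 0
    return [0 if L < 0 else 1 for L in llrs]

print(decide(llr_row))                     # [0, 0, 0, 0, 0, 0]
print(decide(llr_row) == decide(llr_col))  # True: both decoders agree on the all-zero vector
```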

A less efficient and unnecessarily complex way of performing this decoding is to
construct a trellis for the whole row code, as shown in the following figure.

[Figure: trellis of the whole row code, with branches labelled 0 and 1]

This trellis can also be used for the column code, provided that bits 2 and 3 are
permuted. It can be verified that decoding over this trellis results in the same
estimates, but requires a much more complex set of calculations than the method
described above.

7.4)
The rate-1/3 turbo encoder shown in the following figure has two constituent
encoders of rate 1/2.

[Figure: rate-1/3 turbo encoder; the message m is output directly as c(1), encoded by the first constituent encoder into c(2), and passed through a 3-bit pseudo-random (PR) interleaver to the second constituent encoder, which produces c(3)]

Each constituent encoder is of the form:

[Figure: constituent encoder with input m, systematic output c(1) and parity output c(2)]

The following figures show the trellis of each of the constituent encoders and the way
its minimum Hamming free distance is calculated:

[Figure: two-state trellis of the constituent encoder over three stages; branches leaving the upper (0) state are labelled 00 and 11, and branches leaving the lower (1) state are labelled 10 and 01]

The minimum Hamming free distance of each constituent code is then d free = 3 .

The minimum Hamming free distance of the whole code is determined in a simplified
way. We look for the minimum weight sequence that starts at, and returns to the all-
zero state, considering input sequences of three bits. For this, both encoders have to
end at the all-zero state.

Let us see as an example the output sequence of the turbo code for the input
m = (001) . The first bit to be input is the rightmost bit. The permutation rule

1 2 3 
 
3 1 2 

is applied as seen in the following table.

m1 m2

1 0
0 1
0 0

Thus, with the bits written in the order in which they are input, the message is m = (100 ) :

m1 State 1 m2 State 2 c(1) c(2) c(3)


1 1 0 0 1 0 0
0 1 1 1 0 1 0
0 1 0 1 0 1 1

This case does not end at the all-zero state.


Consider now the output sequence of the turbo code for the input m = (010 ) ;
from here on, the input vectors are written in the order in which their bits are input.

m1 State 1 m2 State 2 c(1) c(2) c(3)


0 0 0 0 0 0 0
1 1 0 0 1 0 0
0 1 1 1 0 1 0

With m = (001) :

m1 State 1 m2 State 2 c(1) c(2) c(3)


0 0 1 1 0 0 0
0 0 0 1 0 0 1
1 1 0 1 1 0 1

With m = (110 ) :

m1 State 1 m2 State 2 c(1) c(2) c(3)


1 1 0 0 1 0 0
1 0 1 1 1 1 0
0 0 1 0 0 0 1

With m = (101) :

m1 State 1 m2 State 2 c(1) c(2) c(3)


1 1 1 1 1 0 0
0 1 1 0 0 1 1
1 0 0 0 1 1 0

With m = (011) :

m1 State 1 m2 State 2 c(1) c(2) c(3)


0 0 1 1 0 0 0
1 1 0 1 1 0 1
1 0 1 0 1 1 1

With m = (111) :

m1 State 1 m2 State 2 c(1) c(2) c(3)


1 1 1 1 1 0 0
1 0 1 0 1 1 1
1 1 1 1 1 0 0

Although some cases do not end at the all-zero state, this very simple method
indicates that the minimum Hamming free distance is 4, determined by those inputs for
which the sequence of the whole code starts and ends at the all-zero state. This is of
course not rigorous, because there could be longer sequences of smaller weight, so
that the code could have an even smaller minimum Hamming free distance. As a hand
calculation, however, the solution is considered proper enough. The case m = (111)
also accumulates a weight of 4 by the point at which the machine returns to the
all-zero state, that is, after the second bit is input, in the middle of the input
sequence. Examining length-6 input sequences for the two cases in which both states
are non-zero and the accumulated distance is less than 4 after the first three bits
confirms that the free distance is in fact 4.
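The table-by-table search can be automated. The sketch below assumes the one-register recursive encoder implied by the tables (the parity output equals the current state, and the next state is the state XOR the input bit) and the permutation rule (1 2 3) → (3 1 2); it enumerates all non-zero three-bit messages and keeps those for which both encoders end at the all-zero state:

```python
from itertools import product

def rsc_encode(bits):
    # One-register recursive encoder as read from the tables:
    # parity output = current state, then next state = state XOR input.
    state, parity = 0, []
    for b in bits:
        parity.append(state)
        state ^= b
    return parity, state

PERM = (3, 1, 2)  # time slot t of the second encoder's input takes bit PERM[t] (1-indexed)

weights = {}
for m in product((0, 1), repeat=3):
    if m == (0, 0, 0):
        continue
    m2 = tuple(m[p - 1] for p in PERM)   # interleaved message
    c2, end1 = rsc_encode(m)             # parity and final state of encoder 1
    c3, end2 = rsc_encode(m2)            # parity and final state of encoder 2
    if end1 == 0 and end2 == 0:          # keep only sequences ending at the all-zero state
        weights[m] = sum(m) + sum(c2) + sum(c3)

print(weights)                           # {(0, 1, 1): 5, (1, 0, 1): 5, (1, 1, 0): 4}
print("minimum weight:", min(weights.values()))
```

The minimum weight over terminated sequences is 4, in agreement with the tables above.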

We apologise for the mistake seen in the Answers to Problems for this item, which
wrongly indicates that the minimum Hamming free distance of the whole code is 5
instead of 4.

7.5)
Unfortunately there is a text error in the description of this problem. Since the
terminated code case was presented as an example in Chapter 7 of the book, we
wanted to propose here a turbo code whose first encoder is not terminated. The text
should say:
“ the input or message vector is:
m = (− 1 − 1 − 1 + 1 − 1 + 1 − 1 − 1 + 1 + 1 − 1 + 1 + 1 − 1 + 1 − 1) . This
input vector makes the first encoder sequence non-terminated.”

We describe the encoder of each constituent code, and the corresponding trellis in the
following two figures:
[Figure: constituent encoder; the message m, converted to polar format, drives a two-register (x1, x2) recursive systematic encoder with outputs c(1) (systematic) and c(2) (parity)]

[Figure: four-state trellis with transitions S_(i-1) → S_i labelled input/output (0/00, 1/11, 1/10, 0/01); the states 00, 10, 01 and 11 are numbered 0, 2, 1 and 3, respectively]

The decimal number at the left of each state is its decimal representation. The
following lists describe the calculated values of the different parameters of the
turbo decoding.
The input of the second decoder is determined by a proper permutation (as in Example
7.3 on page 243) of the input of the first encoder. We describe these input vectors
below:

m1rst = (− 1 − 1 − 1 + 1 − 1 + 1 − 1 − 1 + 1 + 1 − 1 + 1 + 1 − 1 + 1 − 1)

m 2nd = (− 1 − 1 + 1 + 1 − 1 + 1 + 1 − 1 − 1 − 1 − 1 + 1 + 1 − 1 + 1 − 1)

After being punctured and transmitted, and corrupted by AWGN, the received
sequence, as tabulated in the following table, is then applied to the decoder:

Input sequence Received sequence


-1 -0.5290 -0.3144
-1 -0.01479 -0.1210
-1 -0.1959 +0.03498
+1 +1.6356 -2.0913
-1 -0.9556 +1.2332
+1 +1.7448 -0.7383
-1 -0.3742 -0.1085
-1 -1.2812 -1.8162
+1 +0.5848 +0.1905
+1 +0.6745 -1.1447
-1 -2.6226 -0.5711
+1 +0.7426 +1.0968
+1 +1.1303 -1.6990
-1 -0.6537 -1.6155
+1 +2.5879 -0.5120
-1 -1.3861 -2.0449

These received values correspond to a noise dispersion of σ = 1.1 . Bits were
transmitted in normalized polar format ±1 . In the received-sequence column of the
above table, the first (left-hand) value is that of the received information
(systematic) bit, and the second (right-hand) that of the alternately transmitted
parity bit.

The decoding of this received vector is performed below. The received vector for the
first decoder is:

Message bit Redundancy bit


-0.5290 -0.3144
-0.01479 0.0000
-0.1959 +0.03498
+1.6356 0.0000
-0.9556 +1.2332
+1.7448 0.0000
-0.3742 -0.1085
-1.2812 0.0000
+0.5848 +0.1905
+0.6745 0.0000
-2.6226 -0.5711
+0.7426 0.0000
+1.1303 -1.6990
-0.6537 0.0000
+2.5879 -0.5120
-1.3861 0.0000
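The depuncturing that produces this table can be sketched as follows. The alternation order (the first parity sample going to the first decoder) is read from the table itself; the variable names are illustrative:

```python
# (systematic, parity) samples copied from the received-sequence table
received = [
    (-0.5290, -0.3144), (-0.01479, -0.1210), (-0.1959, +0.03498),
    (+1.6356, -2.0913), (-0.9556, +1.2332), (+1.7448, -0.7383),
    (-0.3742, -0.1085), (-1.2812, -1.8162), (+0.5848, +0.1905),
    (+0.6745, -1.1447), (-2.6226, -0.5711), (+0.7426, +1.0968),
    (+1.1303, -1.6990), (-0.6537, -1.6155), (+2.5879, -0.5120),
    (-1.3861, -2.0449),
]

# parity samples at even 0-based positions belong to the first encoder; punctured
# positions carry no channel information and are set to 0.0
dec1_input = [(s, p if i % 2 == 0 else 0.0) for i, (s, p) in enumerate(received)]
dec2_parity = [p if i % 2 == 1 else 0.0 for _, ((_, p), i) in enumerate(zip(received, range(16)))]

for s, p in dec1_input:
    print(f"{s:+.4f}  {p:+.4f}")
```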

The values γ i ( u' ,u ) are calculated first, and then α i ( u ) and β i ( u ) can also be
determined. The values of γ i ( u' ,u ) for the first decoder in the first iteration are
described in the following list:

i=1

2.0078 0 0.4980 0
0.4980 0 2.0078 0
0 0.8375 0 1.1940
0 1.1940 0 0.8375

For instance, 2.0078 is the value of γ1 ( 0 ,0 ) , and 0.4980 is the value of γ1 ( 0 ,2 ) . The
remaining values of the list do not need to be calculated for i = 1.
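These branch metrics can be reproduced from the first received pair (y1, y2) = (−0.5290, −0.3144) and σ = 1.1, assuming the simplified metric γ = exp( (c1·y1 + c2·y2)/σ² ) with the branch output bits c1, c2 in polar format:

```python
import math

sigma = 1.1
y1, y2 = -0.5290, -0.3144   # systematic and parity samples at i = 1

def gamma(c1, c2):
    # branch metric for a transition whose output bits are (c1, c2), polar +/-1
    return math.exp((c1 * y1 + c2 * y2) / sigma**2)

print(gamma(-1, -1))  # gamma_1(0,0): output 00, agrees with the listed 2.0078
print(gamma(+1, +1))  # gamma_1(0,2): output 11, agrees with the listed 0.4980
```

The tiny differences in the fourth decimal come from the rounding of the tabulated received values.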

i=2

1.0123 0 0.9878 0
0.9878 0 1.0123 0
0 0.9878 0 1.0123
0 1.0123 0 0.9878

i=3

1.1423 0 0.8754 0
0.8754 0 1.1423 0
0 0.8263 0 1.2103
0 1.2103 0 0.8263

i=4

0.2588 0 3.8643 0
3.8643 0 0.2588 0
0 3.8643 0 0.2588
0 0.2588 0 3.8643

i=5

0.7950 0 1.2579 0
1.2579 0 0.7950 0
0 0.1638 0 6.1044
0 6.1044 0 0.1638

i=6

0.2365 0 4.2291 0
4.2291 0 0.2365 0
0 4.2291 0 0.2365
0 0.2365 0 4.2291

i=7

1.4903 0 0.6710 0
0.6710 0 1.4903 0
0 0.8029 0 1.2455
0 1.2455 0 0.8029

i=8

2.8831 0 0.3469 0
0.3469 0 2.8831 0
0 0.3469 0 2.8831
0 2.8831 0 0.3469

i=9

0.5269 0 1.8979 0
1.8979 0 0.5269 0
0 1.3852 0 0.7219
0 0.7219 0 1.3852

i = 10

0.5727 0 1.7462 0
1.7462 0 0.5727 0
0 1.7462 0 0.5727
0 0.5727 0 1.7462

i = 11

14.0060 0 0.0714 0
0.0714 0 14.0060 0
0 0.1835 0 5.4492
0 5.4492 0 0.1835

i = 12

0.5413 0 1.8473 0
1.8473 0 0.5413 0
0 1.8473 0 0.5413
0 0.5413 0 1.8473

i = 13

1.6000 0 0.6250 0
0.6250 0 1.6000 0
0 10.3632 0 0.0965
0 0.0965 0 10.3632

i = 14

1.7164 0 0.5826 0
0.5826 0 1.7164 0
0 0.5826 0 1.7164
0 1.7164 0 0.5826

i = 15

0.1799 0 5.5600 0
5.5600 0 0.1799 0
0 12.9598 0 0.0772
0 0.0772 0 12.9598

i = 16

3.1441 0 0.3181 0
0.3181 0 3.1441 0
0 0.3181 0 3.1441
0 3.1441 0 0.3181

Forward recursive calculation of the values α i ( u ) is started by setting the initial
conditions α 0 ( 0 ) = 1 , α 0 ( m ) = 0 for m ≠ 0 . The following list describes the
calculated values of α i ( u ) for the first decoder, in the first iteration:

1.0000000e+000 0.0000000e+000 0.0000000e+000 0.0000000e+000


2.0078386e+000 0.0000000e+000 4.9804801e-001 0.0000000e+000
2.0325442e+000 4.9199423e-001 1.9834333e+000 5.0417627e-001
2.7524567e+000 2.2490203e+000 2.3413637e+000 2.8170801e+000
9.4030924e+000 9.7766548e+000 1.1218224e+001 1.1491842e+001
1.9773114e+001 7.1988040e+001 1.9600210e+001 7.0362600e+001
3.0911792e+002 9.9528490e+001 1.0064406e+002 3.0220294e+002
5.2745834e+002 4.5720729e+002 3.5574788e+002 3.6798509e+002
1.6792767e+003 1.1843146e+003 1.5011058e+003 1.1532786e+003
3.1325225e+003 2.9118911e+003 3.8111199e+003 2.6811907e+003
6.8787051e+003 8.1905026e+003 7.1376304e+003 6.8644573e+003
9.6928058e+004 3.8715628e+004 1.1520746e+005 4.0154071e+004
1.2398932e+005 2.3455750e+005 2.0001161e+005 1.3654182e+005
3.4498211e+005 2.0859446e+006 4.5278975e+005 1.4343163e+006
1.8074125e+006 2.7257222e+006 3.7813988e+006 1.6128206e+006
1.5480141e+007 4.9130581e+007 1.0539485e+007 2.1193595e+007

Values are given in exponential form (for example,
1.5480141e+007 = 1.5480141×10^7 ). Again, the above list is read as a matrix with
entries i (rows, or bit positions) and j (columns, or state values in the trellis).
1.0000000e+000 is the value of α 0 ( 0 ) , 2.0078386e+000 is the value of α1 ( 0 ) , and
4.9199423e-001 is the value of α 2 (1 ) , for example.
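The first two steps of the forward recursion can be checked against this list; a sketch, with the γ matrices for i = 1 and i = 2 copied from above and indexed as gamma[u'][u]:

```python
g1 = [[2.0078, 0, 0.4980, 0],
      [0.4980, 0, 2.0078, 0],
      [0, 0.8375, 0, 1.1940],
      [0, 1.1940, 0, 0.8375]]
g2 = [[1.0123, 0, 0.9878, 0],
      [0.9878, 0, 1.0123, 0],
      [0, 0.9878, 0, 1.0123],
      [0, 1.0123, 0, 0.9878]]

def forward_step(alpha, gamma):
    # alpha_i(u) = sum over u' of alpha_{i-1}(u') * gamma_i(u', u)
    return [sum(alpha[up] * gamma[up][u] for up in range(4)) for u in range(4)]

alpha0 = [1.0, 0.0, 0.0, 0.0]        # initial conditions
alpha1 = forward_step(alpha0, g1)
alpha2 = forward_step(alpha1, g2)
print(alpha1)  # ~ [2.0078, 0, 0.4980, 0]
print(alpha2)  # ~ [2.0325, 0.4920, 1.9833, 0.5041], matching the second row above
```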

Backward recursive calculation of the values βi ( u ) is done by setting the boundary
conditions β16 ( m ) = 1 for all m (the first code is not terminated):

0.0000000e+000 0.0000000e+000 0.0000000e+000 0.0000000e+000


1.3671631e+008 1.3656783e+008 1.1856153e+008 1.1873363e+008
7.1350992e+007 6.2837297e+007 6.5280862e+007 5.5801411e+007
4.9204714e+007 1.7385051e+007 1.7300082e+007 4.2070032e+007
3.6626355e+006 3.7647496e+006 1.2487991e+007 1.0634829e+007
1.9193438e+006 1.6884863e+006 1.6987243e+006 2.0004382e+006
3.7505399e+005 3.7640694e+005 4.3287517e+005 4.5197500e+005


1.7301921e+005 2.3755943e+005 1.7467061e+005 1.9441164e+005
5.0835024e+004 6.1026893e+004 7.6282601e+004 5.3243200e+004
2.6783118e+004 4.8103672e+004 1.9349272e+004 1.3367137e+004
2.5230783e+004 9.6030247e+003 7.0634442e+003 4.5056208e+003
1.7979759e+003 7.8407703e+002 6.7647010e+002 1.2698304e+003
1.5230542e+002 1.8023503e+002 9.2867662e+002 6.3458787e+002
6.0405314e+001 8.9050093e+001 8.9050093e+001 6.0405314e+001
1.9872124e+001 1.9872124e+001 4.5135464e+001 4.5135464e+001
3.4621180e+000 3.4621180e+000 3.4621180e+000 3.4621180e+000
1.0000000e+000 1.0000000e+000 1.0000000e+000 1.0000000e+000
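The last backward steps can be checked in the same way, using the standard recursion β_(i-1)(u') = Σ_u γ_i(u',u) β_i(u) and the γ matrices for i = 16 and i = 15 copied from above:

```python
g16 = [[3.1441, 0, 0.3181, 0],
       [0.3181, 0, 3.1441, 0],
       [0, 0.3181, 0, 3.1441],
       [0, 3.1441, 0, 0.3181]]
g15 = [[0.1799, 0, 5.5600, 0],
       [5.5600, 0, 0.1799, 0],
       [0, 12.9598, 0, 0.0772],
       [0, 0.0772, 0, 12.9598]]

def backward_step(beta_next, gamma):
    # beta_{i-1}(u') = sum over u of gamma_i(u', u) * beta_i(u)
    return [sum(gamma[up][u] * beta_next[u] for u in range(4)) for up in range(4)]

beta16 = [1.0, 1.0, 1.0, 1.0]        # non-terminated trellis: all states equally likely
beta15 = backward_step(beta16, g16)
beta14 = backward_step(beta15, g15)
print(beta15)  # ~ [3.4622, 3.4622, 3.4622, 3.4622]
print(beta14)  # ~ [19.87, 19.87, 45.14, 45.14], matching the two rows above
```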

Once the values γ i ( u' ,u ) , α i (u ) and β i (u ) have been determined, then the values
of the LLR estimates can be calculated. The following list describes these values:

Position of bi L1( 1 ) ( bi / Y ) Estimated bits Input or message bits

1 -1.5365936 -1 -1

2 -0.076558632 -1 -1

3 -0.87707531 -1 -1

4 +2.8030878 +1 +1

5 -1.7221617 -1 -1

6 +2.8949539 +1 +1

7 -0.65338000 -1 -1

8 -2.1014182 -1 -1

9 +0.99084643 +1 +1

10 +1.1271298 +1 +1

11 -4.4088385 -1 -1

12 +1.3086825 +1 +1

13 +1.7894979 +1 +1

14 -1.2174239 -1 -1

15 +4.3467953 +1 +1

16 -2.2910284 -1 -1
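As a check of the LLR computation, the estimate for the last position can be reproduced from α15 (the last row of the forward list), γ16 and β16. The assignment of trellis transitions to input bits used below is an assumption read off the trellis structure (input 1 drives 0→2, 1→0, 2→1 and 3→3):

```python
import math

alpha15 = [1.5480141e7, 4.9130581e7, 1.0539485e7, 2.1193595e7]  # last row of the alpha list
beta16 = [1.0, 1.0, 1.0, 1.0]                                   # non-terminated trellis
g16 = {(0, 0): 3.1441, (0, 2): 0.3181,
       (1, 0): 0.3181, (1, 2): 3.1441,
       (2, 1): 0.3181, (2, 3): 3.1441,
       (3, 1): 3.1441, (3, 3): 0.3181}

# transitions (u', u) produced by input bit 1 (assumed from the trellis)
input_one = {(0, 2), (1, 0), (2, 1), (3, 3)}

num = sum(alpha15[up] * g * beta16[u]
          for (up, u), g in g16.items() if (up, u) in input_one)
den = sum(alpha15[up] * g * beta16[u]
          for (up, u), g in g16.items() if (up, u) not in input_one)
L16 = math.log(num / den)
print(L16)  # close to the tabulated -2.2910284; the small difference comes from the rounded gammas
```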

In this case, the error event was such that the first decoder could successfully decode
the received vector. We could stop decoding here. In practice, however, we do not
know when the number of iterations is enough to arrive at the correct decision, and we
usually set a given number of iterations to be performed.
