Detailed Solutions to Problems, Chapter 7
7.1)
Concatenated codes. The generator matrix of the outer block code is:

        | 1 1 0 1 0 0 |
    G = | 0 1 1 0 1 0 |
        | 1 0 1 0 0 1 |

The transpose of the parity check matrix of the block code is obtained as follows:

    G = [P I_k],  H = [I_{n-k} P^T],  H^T = [I_{n-k}; P]  (I_{n-k} stacked on top of P)

        | 1 1 0 |          | 1 0 0 |
    P = | 0 1 1 | ,  H^T = | 0 1 0 |
        | 1 0 1 |          | 0 0 1 |
                           | 1 1 0 |
                           | 0 1 1 |
                           | 1 0 1 |
The syndrome table for the single-error patterns is:

    e                  S
    1 0 0 0 0 0        1 0 0
    0 1 0 0 0 0        0 1 0
    0 0 1 0 0 0        0 0 1
    0 0 0 1 0 0        1 1 0
    0 0 0 0 1 0        0 1 1
    0 0 0 0 0 1        1 0 1
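The matrices above can be checked with a short script (a sketch; `numpy` and the variable names are our own choice):

```python
import numpy as np

# Generator matrix G = [P I3] and H^T = [I3; P] of the (6, 3) block code above
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
G = np.hstack([P, np.eye(3, dtype=int)])
HT = np.vstack([np.eye(3, dtype=int), P])

# every row of G (hence every codeword) has zero syndrome: G . H^T = 0 (mod 2)
assert not ((G @ HT) % 2).any()

# the syndrome of a single-error pattern e is the corresponding row of H^T
syndromes = [list((np.eye(6, dtype=int)[j] @ HT) % 2) for j in range(6)]
```

Each entry of `syndromes` reproduces one row of the syndrome table above.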
The trellis of the convolutional encoder seen in the above figure is:
Turbo codes. Detailed solutions to problems
[Trellis diagram of the convolutional encoder over stages t1 to t4, with states Sa = 00, Sb = 10, Sc = 01 and Sd = 11; branches are labelled input/output (0/00, 1/01, 0/11, 1/10, etc.).]
The minimum Hamming free distance of the convolutional code is equal to d_f = 4:
[Trellis with branch weights over stages t1 to t5, showing that the lowest-weight path diverging from and remerging with the all-zero state Sa = 00 has weight d_f = 4.]
    m      c         w    convolutional code sequence    w
    000    000000    0    00 00 00 00 00 00 00 00        0
    001    101001    3    01 11 10 11 11 01 11 11        13
    010    011010    3    00 01 10 00 10 11 11 00        7
    011    110011    4    01 10 00 11 01 10 00 11        8
    100    110100    3    01 10 00 10 11 11 00 00        7
    101    011101    4    00 01 10 01 00 10 11 11        8
    110    101110    4    01 11 10 10 01 00 11 00        8
    111    000111    3    00 00 00 01 10 01 00 11        5
The received sequence is r = (01 10 10 00 11 11 00 00).

[Viterbi decoding trellis over stages t1 to t9, showing the branch metrics and the accumulated path metrics at each state.]
Even before the last two steps in the Viterbi decoding, the decoded sequence is seen
to be d = (01 10 00 10 11 11 00 00 ) , which corresponds to the message sequence
m´ = (110100 ) . The tailing zeros are discarded.
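The Viterbi decoding above can be reproduced with a short sketch. The encoder taps are inferred from the table of coded sequences (first output s1 XOR s2, second output u XOR s1 XOR s2, with the leftmost code bit fed first); the function names are our own:

```python
# Hard-decision Viterbi decoding of the inner convolutional code.
# Encoder inferred from the coded sequences above: out1 = s1 ^ s2, out2 = u ^ s1 ^ s2.
def conv_encode(bits):
    s1 = s2 = 0
    out = []
    for u in bits:
        out.append((s1 ^ s2, u ^ s1 ^ s2))
        s1, s2 = u, s1
    return out

def viterbi(pairs):
    # survivor path metric and decoded input per state (s1, s2)
    metric = {(0, 0): 0}
    path = {(0, 0): []}
    for y in pairs:
        new_metric, new_path = {}, {}
        for (s1, s2), m in metric.items():
            for u in (0, 1):
                out = (s1 ^ s2, u ^ s1 ^ s2)
                ns = (u, s1)
                d = m + (out[0] ^ y[0]) + (out[1] ^ y[1])   # Hamming branch metric
                if ns not in new_metric or d < new_metric[ns]:
                    new_metric[ns], new_path[ns] = d, path[(s1, s2)] + [u]
        metric, path = new_metric, new_path
    return path[(0, 0)]          # terminated code: take the survivor ending in state 00

r = [(0, 1), (1, 0), (1, 0), (0, 0), (1, 1), (1, 1), (0, 0), (0, 0)]
decoded = viterbi(r)             # -> [1, 1, 0, 1, 0, 0, 0, 0], i.e. 110100 plus the tail
```

Re-encoding the decoded input gives d = (01 10 00 10 11 11 00 00), at Hamming distance 2 from r, as in the text.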
This decoded sequence is a codeword of the block code, so that the syndrome
calculation in this second step of the concatenated code is equal to zero, and the
decoded message is directly determined by truncating the decoded vector. Thus,
m = (100 ) .
Note that the minimum distance of the block code on its own is 3. This would also be the
minimum distance of the concatenated code without the extra tailing process, which adds
at least 2 to the weights of the codewords of the block code. This indicates the importance
of properly tailing off when using a convolutional code.
7.2)
The cyclic code C_cyc(3,1) generated by the polynomial g(X) = 1 + X + X^2 is a repetition code. Its code table is the following:
    m    c(X)            c      w
    1    1 + X + X^2     111    3
    0    0               000    0
The code table of the cyclic code C_cyc(7,3) generated by g(X) = X^4 + X^2 + X + 1 is:

    m      c          w
    000    0000000    0
    001    1101001    4
    010    0111010    4
    011    1010011    4
    100    1110100    4
    101    0011101    4
    110    1001110    4
    111    0100111    4
As an example, for m(X) = X^2, the shifted message X^4 m(X) = X^6 is divided by g(X) = X^4 + X^2 + X + 1:

    X^6                        | X^4 + X^2 + X + 1
    X^6 + X^4 + X^3 + X^2      | X^2 + 1
    ---------------------
          X^4 + X^3 + X^2
          X^4 + X^2 + X + 1
          -----------------
                X^3 + X + 1 = p(X)

Then:

    c(X) = X^4 m(X) + p(X) = X^6 + X^3 + X + 1  =>  c = (1101001)
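The division can be checked in a few lines, with polynomials stored as integers (bit i holding the coefficient of X^i); the helper name is our own:

```python
def gf2_mod(dividend, divisor):
    # remainder of a polynomial division over GF(2); bit i = coefficient of X^i
    while dividend.bit_length() >= divisor.bit_length():
        dividend ^= divisor << (dividend.bit_length() - divisor.bit_length())
    return dividend

g = 0b10111                  # g(X) = X^4 + X^2 + X + 1
shifted = 0b100 << 4         # X^4 m(X) = X^6 for m(X) = X^2
p = gf2_mod(shifted, g)      # p(X) = X^3 + X + 1
c = shifted ^ p              # c(X) = X^6 + X^3 + X + 1, i.e. c = (1101001)
```

Reading the bits of `c` from the X^0 coefficient upwards gives the codeword (1101001) of the table above.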
[Array code figure: a k2 × k1 message block encoded into an n2 × n1 coded array]
where k1 = 1, n1 = 3, k2 = 3 and n2 = 7.
The non-zero codewords of this concatenated code are now determined. Message bits
are in bold:
1 1 1 0 0 0 1 1 1 0 0 0 1 1 1 0 0 0 1 1 1
0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1
0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0
0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0
1 1 1 1 1 1 0 0 0 1 1 1 0 0 0 0 0 0 1 1 1
1 1 1 0 0 0 1 1 1 1 1 1 0 0 0 1 1 1 0 0 0
The minimum weight in this code table is w_min = d_min = 12, so the array code can correct up to 5 errors and detect up to 6 errors. This minimum Hamming distance is such that:

    d_min = d_(3,1)_min × d_(7,3)_min = 3 × 4 = 12

which confirms the product code nature of this type of array code.
Since the 3 message bits generate 21 coded bits, the rate of the concatenated code is Rc = 1/7, which can also be determined by multiplying the rates of the two component codes: (1/3) × (3/7) = 3/21 = 1/7.
If we concatenate these two codes by applying first the cyclic code C cyc ( 7 ,3 ) and then
the cyclic code C cyc ( 3 ,1 ) , then in the above figure k1 = 3, n1 = 7, k2 = 1 and n2 = 3. The
resulting array code is described in the following table, where m is the message bits,
followed by the row constituent codeword and then the column constituent codewords
corresponding to each of the row bits, and the last column in the table is the weight of
each array codeword:
    m      C_cyc(7,3)    C_cyc(3,1) codewords               w
000 0000000 000 000 000 000 000 000 000 12
001 1101001 111 111 000 111 000 000 111 12
010 0111010 000 111 111 111 000 111 000 12
011 1010011 111 000 111 000 000 111 111 12
100 1110100 111 111 111 000 111 000 000 12
101 0011101 000 000 111 111 111 000 111 12
110 1001110 111 000 000 111 111 111 000 12
111 0100111 000 111 000 000 111 111 111 12
The minimum Hamming distance of the concatenated code is again 12, and as before is the product of the minimum Hamming distances of the constituent codes: d_conc_min = d_(3,1)_min × d_(7,3)_min = 3 × 4 = 12. Also the rate is again 1/7, the product of the
constituent code rates. This confirms that the order of concatenation is unimportant in
the case of this type of array or product code.
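A quick sketch (names our own) confirms both claims by generating the eight array codewords from the systematic (7,3) encoder used above and the (3,1) repetition code:

```python
from itertools import product

def gf2_mod(dividend, divisor):
    # remainder of a polynomial division over GF(2); bit i = coefficient of X^i
    while dividend.bit_length() >= divisor.bit_length():
        dividend ^= divisor << (dividend.bit_length() - divisor.bit_length())
    return dividend

def cyc73(m):                              # m = (m0, m1, m2), coefficients of X^i
    poly = sum(b << (4 + i) for i, b in enumerate(m))
    c = poly ^ gf2_mod(poly, 0b10111)      # c(X) = X^4 m(X) + p(X)
    return [(c >> i) & 1 for i in range(7)]

weights = []
for m in product((0, 1), repeat=3):
    word = [b for bit in cyc73(m) for b in (bit, bit, bit)]   # (3,1) repetition
    if any(word):
        weights.append(sum(word))

assert min(weights) == 12      # d_min = 3 x 4 = 12
assert len(word) == 21         # rate 3/21 = 1/7
```

Every one of the seven non-zero array codewords has weight exactly 12, in agreement with the tables above.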
Turbo codes:
7.3)
A simple binary array code (or punctured product code) has codewords with block
length n = 8 and k = 4 information bits, in the format seen in the following figure:
1 2 5
3 4 6
7 8
The 15 non-zero codewords are determined as follows. In each sub-table, the message
bits are in bold:
[Fifteen sub-tables, one per non-zero codeword, each showing the four message bits together with their row and column parity checks; the complete set of codewords is listed in the table below.]
If the missing 9th bit (the check on checks) had been present, the minimum Hamming distance of the code would have been 2 × 2 = 4; deleting one bit from a code with even minimum Hamming distance reduces that distance by one, in this case to 3, confirming the above result.
1 2 3 4 5 6 7 8
0 0 0 0 0 0 0 0
0 0 0 1 0 1 0 1
0 0 1 0 0 1 1 0
0 0 1 1 0 0 1 1
0 1 0 0 1 0 0 1
0 1 0 1 1 1 0 0
0 1 1 0 1 1 1 1
0 1 1 1 1 0 1 0
1 0 0 0 1 0 1 0
1 0 0 1 1 1 1 1
1 0 1 0 1 1 0 0
1 0 1 1 1 0 0 1
1 1 0 0 0 0 1 1
1 1 0 1 0 1 1 0
1 1 1 0 0 1 0 1
1 1 1 1 0 0 0 0
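The table can be regenerated from the parity relations of the array (a sketch; the function name is our own):

```python
from itertools import product

# bits 1..4 = information, bit 5 = parity of row (1, 2), bit 6 = parity of row (3, 4),
# bit 7 = parity of column (1, 3), bit 8 = parity of column (2, 4)
def encode(b1, b2, b3, b4):
    return [b1, b2, b3, b4, b1 ^ b2, b3 ^ b4, b1 ^ b3, b2 ^ b4]

codewords = [encode(*m) for m in product((0, 1), repeat=4)]
d_min = min(sum(c) for c in codewords if any(c))
assert d_min == 3   # one less than the 2 x 2 = 4 of the unpunctured product code
```

The sixteen generated codewords match the rows of the table above, and the minimum non-zero weight confirms d_min = 3.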
By re-ordering the code bits of the row sub-codes as 125, 346, then their trellis has the
following form:
[Two-state trellis of the (3, 2) single-parity-check sub-code; branches are labelled with the code bit values 0 and 1.]
This trellis corresponds to the simple parity check code, whose table is seen below for
the case of the row sub-code 125:
1 2 5
0 0 0
0 1 1
1 0 1
1 1 0
This array code can be regarded as a simple form of turbo code. In terms of the turbo
code structure shown below in the figure, the parallel concatenated component
encoders calculate the row and column parity checks of the array code, and the
permuter alters the order in which the information bits enter the column encoder from
{1,2,3,4} to {1,3,2,4}. The multiplexer then collects the information and parity bits to
form a complete codeword.
[Turbo encoder figure: the information bits enter the row encoder directly and the column encoder through the permuter; the multiplexer collects the information bits, the row checks and the column checks.]
The four-level discrete channel is such that output 0 is a high-reliability decision for '0', and output 3 is a high-reliability decision for '1'. Its transition probabilities P(y/x) are:

    x \ y    0      1      2      3
    0        0.4    0.3    0.2    0.1
    1        0.1    0.2    0.3    0.4
We assume that the transmitted code vector is c = (00000000). The received vector
is r = (10300000 ) . Note that elements of the transmitted vector (which we need to
determine) are inputs of the channel of the above figure, and elements of the received
vector are outputs of that channel. The following table shows the transition probabilities
for the input elements ‘1’ and ‘0’ for each received element:
j 1 2 3 4 5 6 7 8
(P ( y j / 0), P ( y j / 1) ) (0.3,0.2) (0.4,0.1) (0.1,0.4) (0.4,0.1) (0.4,0.1) (0.4,0.1) (0.4,0.1) (0.4,0.1)
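The pairs in the table come directly from the channel matrix; a minimal check (names our own):

```python
# channel transition probabilities P(y|x) for the four output levels y = 0..3
P = {0: (0.4, 0.3, 0.2, 0.1),
     1: (0.1, 0.2, 0.3, 0.4)}
r = (1, 0, 3, 0, 0, 0, 0, 0)                 # received vector
pairs = [(P[0][y], P[1][y]) for y in r]      # (P(y_j|0), P(y_j|1)) for each position j
```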
In the row code, bits 1, 2 and 5 are related as in the trellis seen in the figure. The same happens to bits 3, 4 and 6, which constitute a parity check code that is independent from the code of bits 1, 2 and 5. We can calculate the values γ_i(u',u) for each of these two codes. Coefficients for the row code of bits 1, 2 and 5 will be denoted with a superscript, γ_i^RA(u',u), whereas coefficients of the row code of bits 3, 4 and 6 will be denoted as γ_i^RB(u',u). The trellis for the sub-code 125 is then:
[Two-state trellis of sub-code 125; branches labelled 0 and 1.]
i = 1: γ_1^RA(0,0) = 0.5 × 0.3 = 0.15,  γ_1^RA(0,1) = 0.5 × 0.2 = 0.10
i = 2: γ_2^RA(0,0) = γ_2^RA(1,1) = 0.5 × 0.4 = 0.20,  γ_2^RA(0,1) = γ_2^RA(1,0) = 0.5 × 0.1 = 0.05
i = 5: γ_5^RA(0,0) = 0.40,  γ_5^RA(1,0) = 0.10
Forward recursive calculation of the values αiRA ( u ) is started by setting the initial
conditions α0RA ( 0 ) = 1 , α0RA ( m ) = 0 ; m ≠ 0 :
α_1^RA(0) = Σ_{u'=0}^{1} α_0^RA(u')·γ_1^RA(u',0) = α_0^RA(0)·γ_1^RA(0,0) + α_0^RA(1)·γ_1^RA(1,0) = 1 × 0.15 + 0 × 0 = 0.15
α_1^RA(1) = α_0^RA(0)·γ_1^RA(0,1) = 1 × 0.10 = 0.10
α_2^RA(0) = α_1^RA(0)·γ_2^RA(0,0) + α_1^RA(1)·γ_2^RA(1,0) = 0.15 × 0.2 + 0.1 × 0.05 = 0.035
α_2^RA(1) = α_1^RA(0)·γ_2^RA(0,1) + α_1^RA(1)·γ_2^RA(1,1) = 0.15 × 0.05 + 0.1 × 0.2 = 0.0275
α_5^RA(0) = α_2^RA(0)·γ_5^RA(0,0) + α_2^RA(1)·γ_5^RA(1,0) = 0.035 × 0.4 + 0.0275 × 0.1 = 0.01675
Backward recursive calculation of the values β_i^RA(u) is done by setting the contour conditions β_5^RA(0) = 1, β_5^RA(m) = 0; m ≠ 0:

β_2^RA(0) = γ_5^RA(0,0)·β_5^RA(0) = 0.4,  β_2^RA(1) = γ_5^RA(1,0)·β_5^RA(0) = 0.1
β_1^RA(0) = γ_2^RA(0,0)·β_2^RA(0) + γ_2^RA(0,1)·β_2^RA(1) = 0.2 × 0.4 + 0.05 × 0.1 = 0.085
β_1^RA(1) = γ_2^RA(1,0)·β_2^RA(0) + γ_2^RA(1,1)·β_2^RA(1) = 0.05 × 0.4 + 0.2 × 0.1 = 0.04
Once the values γ_i^RA(u',u), α_i^RA(u) and β_i^RA(u) have been determined, the values λ_i^RA(u) and σ_i^RA(u',u) can be calculated as:

λ_i^RA(u) = α_i^RA(u)·β_i^RA(u),  σ_i^RA(u',u) = α_{i-1}^RA(u')·γ_i^RA(u',u)·β_i^RA(u)
The coefficients λ_i^RA(u) determine the estimates for input symbols '1' and '0' when there is only one branch or transition of the trellis arriving at a given node, which then defines the value of that node. This happens for instance in the trellis of the figure at nodes λ_1^RA(0) and λ_1^RA(1):

λ_1^RA(0) / (λ_1^RA(0) + λ_1^RA(1)) = 0.01275 / (0.01275 + 0.004) = 0.76119
λ_1^RA(1) / (λ_1^RA(0) + λ_1^RA(1)) = 0.004 / (0.01275 + 0.004) = 0.23881
Coefficients σ_i^RA(u',u) are then utilized for determining the soft decisions when there are two or more transitions or branches arriving at a given node of the trellis, and when these branches are assigned the different input symbols:
These values allow us to determine soft decisions for the corresponding nodes. For instance, for position i = 2, the trellis transition probabilities involved in the calculation of a soft decision for '0' are those of the input-'0' branches (0,0) and (1,1):

(σ_2^RA(0,0) + σ_2^RA(1,1)) / (σ_2^RA(0,0) + σ_2^RA(1,1) + σ_2^RA(0,1) + σ_2^RA(1,0)) = (0.012 + 0.002) / 0.01675 = 0.8358

which is a soft decision for '0' at position 2. The soft decision for '1' at that position is then 1 − 0.8358 = 0.1642. For position i = 5, the trellis transition probabilities involved in the calculation of a soft decision for '0' are:
σ_5^RA(0,0) / (σ_5^RA(0,0) + σ_5^RA(1,0)) = 0.014 / (0.014 + 0.00275) = 0.8358
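The whole forward-backward pass for sub-code 125 fits in a short sketch (names our own; a priori probabilities of 1/2 for the two information bits, as in this first iteration):

```python
# BCJR pass over the two-state trellis of sub-code 125 (state = running parity).
py = {1: (0.3, 0.2), 2: (0.4, 0.1), 5: (0.4, 0.1)}   # (P(y|0), P(y|1)) at positions 1, 2, 5

# gamma[i][(u', u)]: information bits keep (bit 0) or flip (bit 1) the parity state
g = {i: {(0, 0): 0.5 * py[i][0], (1, 1): 0.5 * py[i][0],
         (0, 1): 0.5 * py[i][1], (1, 0): 0.5 * py[i][1]} for i in (1, 2)}
g[5] = {(0, 0): py[5][0], (1, 0): py[5][1]}          # parity bit closes the trellis in state 0

steps = (1, 2, 5)
alpha = [[1.0, 0.0]]                                  # alpha_0
for i in steps:
    prev = alpha[-1]
    alpha.append([sum(prev[s] * g[i].get((s, t), 0.0) for s in (0, 1)) for t in (0, 1)])
beta = [[1.0, 0.0]]                                   # beta_5
for i in reversed(steps):
    nxt = beta[0]
    beta.insert(0, [sum(g[i].get((s, t), 0.0) * nxt[t] for t in (0, 1)) for s in (0, 1)])

a1, a2 = alpha[1], alpha[2]
b1, b2 = beta[1], beta[2]
soft0_1 = a1[0] * b1[0] / (a1[0] * b1[0] + a1[1] * b1[1])          # lambda_1 ratio
num = a1[0] * g[2][(0, 0)] * b2[0] + a1[1] * g[2][(1, 1)] * b2[1]  # input-0 sigmas at i = 2
den = num + a1[0] * g[2][(0, 1)] * b2[1] + a1[1] * g[2][(1, 0)] * b2[0]
soft0_2 = num / den
soft0_5 = a2[0] * g[5][(0, 0)] / (a2[0] * g[5][(0, 0)] + a2[1] * g[5][(1, 0)])
```

The three soft decisions reproduce the values 0.76119, 0.8358 and 0.8358 computed above; changing `py` to the pairs of bits 3, 4 and 6 reproduces the RB figures in the same way.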
For the row code of bits 3, 4 and 6, whose trellis is seen in the following figure:
[Two-state trellis of sub-code 346; branches labelled 0 and 1.]
i = 3: γ_3^RB(0,0) = 0.5 × 0.1 = 0.05,  γ_3^RB(0,1) = 0.5 × 0.4 = 0.20
i = 4: γ_4^RB(0,0) = γ_4^RB(1,1) = 0.5 × 0.4 = 0.20,  γ_4^RB(0,1) = γ_4^RB(1,0) = 0.5 × 0.1 = 0.05
i = 6: γ_6^RB(0,0) = 0.40,  γ_6^RB(1,0) = 0.10
Forward recursive calculation of the values α_i^RB(u) is started by setting the initial conditions α_2^RB(0) = 1, α_2^RB(m) = 0; m ≠ 0:

α_3^RB(0) = α_2^RB(0)·γ_3^RB(0,0) = 0.05,  α_3^RB(1) = α_2^RB(0)·γ_3^RB(0,1) = 0.20
α_4^RB(0) = α_3^RB(0)·γ_4^RB(0,0) + α_3^RB(1)·γ_4^RB(1,0) = 0.05 × 0.2 + 0.2 × 0.05 = 0.02
α_4^RB(1) = α_3^RB(0)·γ_4^RB(0,1) + α_3^RB(1)·γ_4^RB(1,1) = 0.05 × 0.05 + 0.2 × 0.2 = 0.0425
α_6^RB(0) = α_4^RB(0)·γ_6^RB(0,0) + α_4^RB(1)·γ_6^RB(1,0) = 0.02 × 0.4 + 0.0425 × 0.1 = 0.01225
Backward recursive calculation of the values β_i^RB(u) is done by setting the contour conditions β_6^RB(0) = 1, β_6^RB(m) = 0; m ≠ 0:

β_4^RB(0) = γ_6^RB(0,0) = 0.4,  β_4^RB(1) = γ_6^RB(1,0) = 0.1
β_3^RB(0) = γ_4^RB(0,0)·β_4^RB(0) + γ_4^RB(0,1)·β_4^RB(1) = 0.2 × 0.4 + 0.05 × 0.1 = 0.085
β_3^RB(1) = γ_4^RB(1,0)·β_4^RB(0) + γ_4^RB(1,1)·β_4^RB(1) = 0.05 × 0.4 + 0.2 × 0.1 = 0.04
Once the values γ_i^RB(u',u), α_i^RB(u) and β_i^RB(u) have been determined, the values λ_i^RB(u) and σ_i^RB(u',u) can be calculated as:

λ_3^RB(0) = α_3^RB(0)·β_3^RB(0) = 0.05 × 0.085 = 0.00425
λ_3^RB(1) = α_3^RB(1)·β_3^RB(1) = 0.2 × 0.04 = 0.008
The coefficients λ_i^RB(u) determine the estimates for input symbols '1' and '0' when there is only one branch or transition of the trellis arriving at a given node, which then defines the value of that node. This happens for instance in the trellis of the figure at nodes λ_3^RB(0) and λ_3^RB(1):

λ_3^RB(0) / (λ_3^RB(0) + λ_3^RB(1)) = 0.00425 / (0.00425 + 0.008) = 0.34694
λ_3^RB(1) / (λ_3^RB(0) + λ_3^RB(1)) = 0.008 / (0.00425 + 0.008) = 0.65306
Coefficients σ_i^RB(u',u) are then utilized for determining the soft decisions when there are two or more transitions or branches arriving at a given node of the trellis, and when these branches are assigned the different input symbols:
These values allow us to determine soft decisions for the corresponding nodes. For instance, for position i = 4, the trellis transition probabilities involved in the calculation of a soft decision for '0' are those of the input-'0' branches (0,0) and (1,1):

(σ_4^RB(0,0) + σ_4^RB(1,1)) / (σ_4^RB(0,0) + σ_4^RB(1,1) + σ_4^RB(0,1) + σ_4^RB(1,0)) = (0.004 + 0.004) / 0.01225 = 0.65306

which is a soft decision for '0' at position 4. The soft decision for '1' at that position is then 1 − 0.65306 = 0.34694. For position i = 6, the trellis transition probabilities involved in the calculation of a soft decision for '0' are:
σ_6^RB(0,0) / (σ_6^RB(0,0) + σ_6^RB(1,0)) = 0.008 / (0.008 + 0.00425) = 0.65306
These estimates are converted into the a priori information L_bCi^(1) for the second (column) decoder. The two independent column codes have the same type of trellis, which is also the same as for the two independent row codes. We now take into account the permutation rule by properly assigning the a priori information.
P_bCi^(1)(b_i = ±1) = [e^(−L_bCi^(1)/2) / (1 + e^(−L_bCi^(1)))] · e^(b_i·L_bCi^(1)/2)
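The expression above can be checked numerically; a minimal sketch (the LLR value is an arbitrary example, and the function name is our own):

```python
from math import exp

def prob(b, L):
    # P(b_i = b) from the a priori LLR L, following the expression above
    return exp(-L / 2) / (1 + exp(-L)) * exp(b * L / 2)

L = 1.3
p_plus, p_minus = prob(+1, L), prob(-1, L)
assert abs(p_plus + p_minus - 1.0) < 1e-12           # a valid probability pair
assert abs(p_plus - 1 / (1 + exp(-L))) < 1e-12       # the usual logistic form
```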
These a priori probabilities are the updated information for the second decoder. They can be calculated using the above expression, and they are equal to:

Since parity bits for the first decoder are different from those of the second decoder, probabilities for the redundancy bits of the second decoder are:
In the column code, bits 1, 3 and 7 are related as in the trellis seen in the figure below. The same happens to bits 2, 4 and 8, which constitute a parity check code that is independent from the code of bits 1, 3 and 7. For these two codes we can calculate the values γ_i(u',u). Coefficients for the column code of bits 1, 3 and 7 will be denoted with a superscript, γ_i^CA(u',u), whereas coefficients of the column code of bits 2, 4 and 8 will be denoted as γ_i^CB(u',u).
[Two-state trellis of column sub-code 137; branches labelled 0 and 1.]
By taking into account that the trellis involves bits 1, 3 and 7, we are also directly taking
into account the permutation rule of the whole turbo code.
With these updated probabilities, we can now calculate coefficients γiCA ( u' , u ) for the
column code. Thus, for instance, and for i = 1:
i=1:
i=3:
γ_3^CA(0,0) = 0.0680
γ_3^CA(0,1) = 0.1280
γ_3^CA(1,0) = 0.1280
γ_3^CA(1,1) = 0.0680
i=7:
γ7CA ( 0 ,0 ) = 0.4000
γ7CA (1,0 ) = 0.1000
Forward recursive calculation of the values α_i^CA(u) is started by setting the initial conditions α_0^CA(0) = 1, α_0^CA(m) = 0; m ≠ 0, thus:

α_3^CA(0) = 0.022064
α_3^CA(1) = 0.030464
α_7^CA(0) = 0.011872
Backward recursive calculation of the values βiCA ( u ) is done by setting the contour
conditions β7CA ( 0 ) = 1 , β7CA ( m ) = 0 ; m ≠ 0 :
β3CA ( 0 ) = 0.4000
β3CA (1 ) = 0.1000
β1CA ( 0 ) = 0.0400
β1CA (1 ) = 0.0580
Once the values γ_i^CA(u',u), α_i^CA(u) and β_i^CA(u) have been determined, the values λ_i^CA(u) and σ_i^CA(u',u) can be calculated as:

λ_1^CA(0) = 0.00816
λ_1^CA(1) = 0.003712
Coefficients σ_i^CA(u',u) are then utilized for determining the soft decisions when there are two or more transitions or branches arriving at a given node of the trellis, and when these branches are assigned the different input symbols:
σ 3CA ( 0 ,0 ) = 0.0055488
σ 3CA (1,0 ) = 0.0032768
σ 3CA ( 0 ,1 ) = 0.0026112
σ 3CA (1,1 ) = 0.0004352
σ7CA ( 0 ,0 ) = 0.0088256
σ7CA (1,0 ) = 0.0030464
[Two-state trellis of column sub-code 248; branches labelled 0 and 1.]
By taking into account that the trellis involves bits 2, 4 and 8, we are also directly taking
into account the permutation rule of the whole turbo code.
With these updated probabilities, we can now calculate coefficients γiCB ( u' , u ) for the
column code.
i=2:
γ2CB ( 0 ,0 ) = 0.2240
γ2CB ( 0 ,1 ) = 0.0440
i=4:
γ4CB ( 0 ,0 ) = 0.1280
γ4CB ( 0 ,1 ) = 0.0680
γ4CB (1,0 ) = 0.0680
γ4CB (1,1 ) = 0.1280
i=8:
γ_8^CB(0,0) = 0.4000
γ_8^CB(1,0) = 0.1000
Forward recursive calculation of the values αiCB ( u ) is started by properly setting the
initial conditions:
α2CB ( 0 ) = 0.2240
α2CB (1 ) = 0.0440
α4CB ( 0 ) = 0.031664
α4CB (1 ) = 0.020864
α8CB ( 0 ) = 0.014752
Backward recursive calculation of the values βiCB ( u ) is done by setting the contour
conditions β8CB ( 0 ) = 1 , β8CB ( m ) = 0 ; m ≠ 0 :
β4CB ( 0 ) = 0.4000
β4CB (1 ) = 0.1000
β2CB ( 0 ) = 0.0580
β2CB (1 ) = 0.0400
Once the values γ_i^CB(u',u), α_i^CB(u) and β_i^CB(u) have been determined, the values λ_i^CB(u) and σ_i^CB(u',u) can be calculated as:

λ_2^CB(0) = 0.012992
λ_2^CB(1) = 0.001760
Coefficients σ_i^CB(u',u) are then utilized for determining the soft decisions when there are two or more transitions or branches arriving at a given node of the trellis, and when these branches are assigned the different input symbols:

σ_4^CB(0,0) = 0.0114688
σ_4^CB(1,0) = 0.0011968
σ_4^CB(0,1) = 0.0015232
σ_4^CB(1,1) = 0.0005632
σ_8^CB(0,0) = 0.0126656
σ_8^CB(1,0) = 0.0020864
With all the values already calculated, an estimate or soft decision can be made for
each step i of the decoded sequence.
We will calculate directly the LLRs of the decoding of the second decoder:
LLRC1(1) = −0.7877
LLRC2(1) = −1.9990
LLRC3(1) = −0.0162
LLRC4(1) = −1.4869
LLRC7(1) = −1.0637
LLRC8(1) = −1.8034
A decision taken at this point by the column decoder generates the decoded vector
d = (0 0 0 0 0 0 ) .
The second decoder makes use of a priori information that should be subtracted from these estimates to calculate the extrinsic information that the second decoder passes to the first one.
The information to be subtracted from the logarithmic estimates of the second decoder
in the first iteration is evaluated as:
and:
The extrinsic information that is going to be passed from the second decoder as a priori
information of the first decoder is determined by doing:
These estimates are converted into the a priori information L_bRi^(2) for the second iteration of the first decoder. Thus,
P_bRi^(2)(b_i = ±1) = [e^(−L_bRi^(2)/2) / (1 + e^(−L_bRi^(2)))] · e^(b_i·L_bRi^(2)/2)
These a priori probabilities are the updated information for the first decoder. They can be calculated using the above expression, and they are equal to:

Since parity bits for the second decoder are different from those of the first decoder, probabilities for the redundancy bits of the second decoder are:
The decoding needs to continue with the following iteration. The iterative decoding
performs as detailed above, and we summarize the resulting estimates for each
iteration as follows. The first decoder performs the second iteration generating the
estimates:
LLRA1( 2 ) = −0.9379
LLRA2( 2 ) = −1.7782
LLRA3( 2 ) = −0.3204
LLRA4( 2 ) = −1.8107
LLRA5( 2 ) = −1.4102
LLRA6( 2 ) = −0.7999
The decoded vector is now a code vector for the row code. The second decoder
performs its second iteration and the resulting estimates are:
LLRC1( 2 ) = −1.1136
LLRC 2( 2 ) = −0.3910
LLRC3( 2 ) = −1.9536
LLRC4( 2 ) = −1.7190
LLRC5( 2 ) = −1.1987
LLRC6( 2 ) = −1.9394
The decoded message is the same for both decoders at the second iteration, so that
the whole decoded vector is the code vector d = c = (0 0 0 0 0 0 0 0 ) , the
decoded information bits are 0000, and the decoding process was successfully
completed.
A rather inefficient and unnecessarily complex way of solving this decoding is to construct a trellis for the whole row code, as seen in the following figure.
[Trellis of the complete row code (bits 1 to 6), with branches labelled 0 and 1.]
This trellis can also be used for the column code, provided that bits 2 and 3 are permuted. It can be verified that decoding with this trellis results in the same estimates, but requires a much more complex set of calculations than the method described above.
7.4)
The rate-1/3 turbo encoder shown in the following figure has two constituent encoders of rate 1/2.
[Turbo encoder figure: the message m feeds the first rate-1/2 encoder directly, producing the outputs c(1) and c(2), and feeds the second encoder through a 3-bit pseudo-random (PR) interleaver, which produces the output c(3).]
The following figures show the trellis of each of the constituent encoders and the way
its minimum Hamming free distance is calculated:
[Trellis of each constituent encoder, with branch outputs 00, 11, 10 and 01; the lowest-weight path that leaves and returns to the all-zero state has weight 3.]
The minimum Hamming free distance of each constituent code is then d free = 3 .
The minimum Hamming free distance of the whole code is determined in a simplified
way. We look for the minimum weight sequence that starts at, and returns to the all-
zero state, considering input sequences of three bits. For this, both encoders have to
end at the all-zero state.
Let us see as an example the output sequence of the turbo code for the input m = (001). The first bit to be input is the rightmost bit. The permutation rule is:

    1 2 3
    3 1 2
With: m = (001)
With m = (110 ) :
With m = (101) :
With: m = (011)
With m = (111) :
Although some cases do not end at the all-zero state, this very simple method indicates that the minimum Hamming free distance is 4, which is determined by those inputs for which the sequence of the whole code starts and ends at the all-zero state. This is of course not rigorous, because longer sequences could turn out to have smaller weights, so that the code could have an even smaller minimum Hamming free distance. As a hand calculation, however, the solution is considered accurate enough. The case m = (111) is also one in which the minimum Hamming free distance is 4, but here this happens in the middle of the input sequence; that is, the machine reaches the all-zero state after the second bit is input. Examining length-6 input sequences in the two cases where both states are non-zero and the distance is less than 4 after inputting the first three bits confirms that the free distance is in fact 4.
We apologise for the mistake seen in the Answers to Problems for this item, which
wrongly indicates that the minimum Hamming free distance of the whole code is 5
instead of 4.
7.5)
Unfortunately there is a text error in the description of this problem. Since the
terminated code case was presented as an example in Chapter 7 of the book, we
wanted to propose here a turbo code whose first encoder is not terminated. The text
should say:
“ the input or message vector is:
m = (− 1 − 1 − 1 + 1 − 1 + 1 − 1 − 1 + 1 + 1 − 1 + 1 + 1 − 1 + 1 − 1) . This
input vector makes the first encoder sequence be non-terminated. “
We describe the encoder of each constituent code, and the corresponding trellis in the
following two figures:
[Constituent encoder figure: the message m is converted to polar format (x1, x2) and encoded, producing the systematic output c(1) and the parity output c(2).]
[Trellis section S_{i-1} → S_i of each constituent encoder, with states 0 = 00, 1 = 01, 2 = 10, 3 = 11, and branches labelled input/output: 0/00, 1/11, 1/10, 0/01.]
The decimal number at the left of each state is its decimal representation. The following lists describe the calculated values of the different parameters of the turbo decoding.
The input of the second decoder is determined by a proper permutation (as in Example
7.3 on page 243) of the input of the first encoder. We describe these input vectors
below:
m1rst = (− 1 − 1 − 1 + 1 − 1 + 1 − 1 − 1 + 1 + 1 − 1 + 1 + 1 − 1 + 1 − 1)
m 2nd = (− 1 − 1 + 1 + 1 − 1 + 1 + 1 − 1 − 1 − 1 − 1 + 1 + 1 − 1 + 1 − 1)
After being punctured and transmitted, and corrupted by AWGN, the received
sequence, as tabulated in the following table, is then applied to the decoder:
The decoding of this received vector is performed below. The received vector for the
first decoder is:
The values γ i ( u' ,u ) are first calculated and then α i ( u ) and β i ( u ) can be also
determined.
Values of γ i ( u' ,u ) for the first decoder in the first iteration, are described in the
following list:
i=1
2.0078 0 0.4980 0
0.4980 0 2.0078 0
0 0.8375 0 1.1940
0 1.1940 0 0.8375
Here 2.0078 is, for instance, the value of γ1(0,0), and 0.4980 is the value of γ1(0,2). The other values in the list do not need to be calculated for i = 1.
i=2
1.0123 0 0.9878 0
0.9878 0 1.0123 0
0 0.9878 0 1.0123
0 1.0123 0 0.9878
i=3
1.1423 0 0.8754 0
0.8754 0 1.1423 0
0 0.8263 0 1.2103
0 1.2103 0 0.8263
i=4
0.2588 0 3.8643 0
3.8643 0 0.2588 0
0 3.8643 0 0.2588
0 0.2588 0 3.8643
i=5
0.7950 0 1.2579 0
1.2579 0 0.7950 0
0 0.1638 0 6.1044
0 6.1044 0 0.1638
i=6
0.2365 0 4.2291 0
4.2291 0 0.2365 0
0 4.2291 0 0.2365
0 0.2365 0 4.2291
i=7
1.4903 0 0.6710 0
0.6710 0 1.4903 0
0 0.8029 0 1.2455
0 1.2455 0 0.8029
i=8
2.8831 0 0.3469 0
0.3469 0 2.8831 0
0 0.3469 0 2.8831
0 2.8831 0 0.3469
i=9
0.5269 0 1.8979 0
1.8979 0 0.5269 0
0 1.3852 0 0.7219
0 0.7219 0 1.3852
i = 10
0.5727 0 1.7462 0
1.7462 0 0.5727 0
0 1.7462 0 0.5727
0 0.5727 0 1.7462
i = 11
14.0060 0 0.0714 0
0.0714 0 14.0060 0
0 0.1835 0 5.4492
0 5.4492 0 0.1835
i = 12
0.5413 0 1.8473 0
1.8473 0 0.5413 0
0 1.8473 0 0.5413
0 0.5413 0 1.8473
i = 13
1.6000 0 0.6250 0
0.6250 0 1.6000 0
0 10.3632 0 0.0965
0 0.0965 0 10.3632
i = 14
1.7164 0 0.5826 0
0.5826 0 1.7164 0
0 0.5826 0 1.7164
0 1.7164 0 0.5826
i = 15
0.1799 0 5.5600 0
5.5600 0 0.1799 0
0 12.9598 0 0.0772
0 0.0772 0 12.9598
i = 16
3.1441 0 0.3181 0
0.3181 0 3.1441 0
0 0.3181 0 3.1441
0 3.1441 0 0.3181
The following list describes the calculated values of α i ( u ) for the first decoder, in the
first iteration:
Once the values γ_i(u',u), α_i(u) and β_i(u) have been determined, the LLR estimates can be calculated. The following list gives, for each position i, the LLR value, the corresponding hard decision, and the transmitted symbol:
1 -1.5365936 -1 -1
2 -0.076558632 -1 -1
3 -0.87707531 -1 -1
4 +2.8030878 +1 +1
5 -1.7221617 -1 -1
6 +2.8949539 +1 +1
7 -0.65338000 -1 -1
8 -2.1014182 -1 -1
9 +0.99084643 +1 +1
10 +1.1271298 +1 +1
11 -4.4088385 -1 -1
12 +1.3086825 +1 +1
13 +1.7894979 +1 +1
14 -1.2174239 -1 -1
15 +4.3467953 +1 +1
16 -2.2910284 -1 -1
In this case, the error event was such that the first decoder could already decode the received vector successfully, and we could stop decoding here. In practice, however, we do not know when the number of iterations is enough to arrive at the correct decision, and we usually set a given number of iterations to be performed.