
Turbo Codes

Applications of Turbo Codes


Worldwide Applications & Standards
⚫ Data Storage Systems

⚫ DSL Modem / Optical Communications

⚫ 3G (Third Generation) Mobile Communications

⚫ Digital Video Broadcast (DVB)

⚫ Satellite Communications

⚫ IEEE 802.16 WiMAX

⚫ …
Basic Concept of Turbo Codes
Invented in 1993 by Alain Glavieux, Claude Berrou
and Punya Thitimajshima

Basic Structure:
a) Recursive Systematic Convolutional (RSC) Codes
b) Parallel concatenation and Interleaving
c) Iterative decoding
Recursive Systematic Convolutional Codes

[Figure: two shift-register encoder diagrams, each with outputs c1 and c2.
Left: non-systematic convolutional encoder with generators g1 = [1 1 1],
g2 = [1 0 1] and G = [g1 g2]. Right: recursive systematic convolutional
encoder with the same generators and G = [1 g2/g1].]


Non-Recursive Encoder Trellis Diagram
[Trellis diagram of the non-recursive encoder: states s0 = (0 0), s1 = (1 0),
s2 = (0 1), s3 = (1 1) across six stages, with each branch labeled by the
output pair c1c2 (00, 11, 10 or 01).]
Systematic Recursive Encoder Trellis Diagram
[Trellis diagram of the recursive systematic encoder: identical state and
branch-label structure as the non-recursive trellis above; only the input
bit associated with each branch differs.]
Non-Recursive & Recursive Encoders
⚫ Non Recursive and
recursive encoders
both have the same
trellis diagram
structure

⚫ Generally Recursive
encoders provide
better weight
distribution for the
code

⚫ The difference
between them is in
the mapping of
information bits to
codewords
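A minimal Python sketch of the RSC encoder in the figure above, assuming
g1 = [1 1 1] is used as the feedback polynomial and g2 = [1 0 1] as the
parity (feedforward) polynomial over a two-bit shift register; function
and variable names are illustrative, not from the slides:

```python
def rsc_encode(bits, state=(0, 0)):
    """Rate-1/2 RSC encoder, G = [1 g2/g1] with g1 = [1 1 1], g2 = [1 0 1]."""
    s1, s2 = state
    systematic, parity = [], []
    for u in bits:
        a = u ^ s1 ^ s2       # feedback sum: taps of g1 = 1 + D + D^2
        p = a ^ s2            # parity output: taps of g2 = 1 + D^2
        systematic.append(u)  # systematic output is the input bit itself
        parity.append(p)
        s1, s2 = a, s1        # shift the register
    return systematic, parity

# Example: encode four information bits starting from the all-zero state
sys_out, par_out = rsc_encode([1, 0, 1, 1])
```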
Parallel Concatenation with Interleaving

[Figure: rate-1/3 turbo encoder. The input xk passes directly to the output
as the systematic stream and into RSC Encoder 1, which produces yk(1); an
interleaver permutes xk before RSC Encoder 2, which produces yk(2).]

⚫ Two component RSC encoders in parallel, separated by an interleaver

⚫ The job of the interleaver is to de-correlate the encoding processes of
the two encoders

⚫ Usually the two component encoders are identical

xk : systematic information bit
yk(1) : parity bit from the first RSC encoder
yk(2) : parity bit from the second RSC encoder
Interleaving
Permutation of the data to de-correlate the encoding processes of the two
encoders. If encoder 1 generates a low-weight codeword, the interleaved
input will hopefully generate a higher-weight codeword at encoder 2. By
avoiding low-weight codewords, the BER performance of turbo codes can
improve significantly; a minimal interleaver sketch follows.
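A small Python sketch of a random interleaver and its inverse. Practical
standards typically use structured interleavers (e.g., QPP in LTE); the
random permutation and the names here are only for illustration:

```python
import random

def make_interleaver(n, seed=0):
    """Build a length-n random permutation pi and its inverse."""
    rng = random.Random(seed)
    pi = list(range(n))
    rng.shuffle(pi)
    inv = [0] * n
    for i, j in enumerate(pi):
        inv[j] = i            # inv undoes pi: inv[pi[i]] = i
    return pi, inv

def interleave(x, pi):
    """Output position i carries input symbol x[pi[i]]."""
    return [x[j] for j in pi]
```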
Turbo Decoder

[Figure: turbo decoder. The received sequence rk feeds RSC Decoder 1, which
exchanges information with RSC Decoder 2 through the interleaver π and
de-interleaver π-1; the final output is the decision xk*.]

⚫ Information is iteratively exchanged between decoder 1 and decoder 2
before making a final decision on the transmitted bits
Turbo Codes Performance

[Figure: BER performance curves. Source: Principles of Turbo Codes,
Keattisak Sripiman ➔ http://www.kmitl.ac.th/dslabs/TurboCodes]


Turbo Decoding
⚫ Turbo decoders rely on probabilistic decoding by the
component RSC decoders

⚫ Soft-output information is iteratively exchanged between decoder 1
and decoder 2 before a final decision is made on the transmitted bits

⚫ As the number of iterations grows, the decoding
performance improves; the loop below sketches this exchange
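A high-level Python sketch of the iterative exchange, assuming a
hypothetical siso(L_sys, L_par, L_apriori) component decoder that returns
extrinsic LLRs (e.g., the MAP/BCJR quantities derived later in these
notes); the function names and iteration count are illustrative:

```python
def turbo_decode(L_sys, L_par1, L_par2, pi, inv, siso, n_iter=8):
    """Iterate two soft-in/soft-out decoders, exchanging extrinsic LLRs."""
    n = len(L_sys)
    ext2 = [0.0] * n                        # no a-priori info at the start
    for _ in range(n_iter):
        # Decoder 1: channel LLRs plus de-interleaved extrinsic info from 2
        ext1 = siso(L_sys, L_par1, ext2)
        # Decoder 2 works on the interleaved systematic stream
        ext2_i = siso([L_sys[j] for j in pi], L_par2, [ext1[j] for j in pi])
        ext2 = [ext2_i[inv[j]] for j in range(n)]    # de-interleave
    # Final decision combines channel and both extrinsic contributions
    return [1 if L_sys[k] + ext1[k] + ext2[k] >= 0 else -1 for k in range(n)]
```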
Turbo Coded System Model

[Figure: the information vector u enters RSC Encoder 1 directly and RSC
Encoder 2 through an N-bit interleaver π; the systematic and parity streams
are multiplexed into x, transmitted over the channel, received as y,
demultiplexed, and turbo-decoded into u*.]

u = [u1, …, uk, …, uN] : vector of N information bits
xs = [x1s, …, xks, …, xNs] : vector of N systematic bits after turbo-coding of u
x1p = [x11p, …, xk1p, …, xN1p] : vector of N parity bits from the first encoder after turbo-coding of u
x2p = [x12p, …, xk2p, …, xN2p] : vector of N parity bits from the second encoder after turbo-coding of u
x = [x1s, x11p, x12p, …, xNs, xN1p, xN2p] : vector of length 3N of turbo-coded bits for u

y = [y1s, y11p, y12p, …, yNs, yN1p, yN2p] : vector of 3N received symbols corresponding to the turbo-coded bits of u
ys = [y1s, …, yks, …, yNs] : vector of N received symbols corresponding to the systematic bits in xs
y1p = [y11p, …, yk1p, …, yN1p] : vector of N received symbols corresponding to the first encoder parity bits in x1p
y2p = [y12p, …, yk2p, …, yN2p] : vector of N received symbols corresponding to the second encoder parity bits in x2p
u* = [u1*, …, uk*, …, uN*] : vector of N turbo decoder decision bits corresponding to u

$$u_k^* = \begin{cases} +1 & \Pr[u_k = +1 \mid \mathbf{y}] \ge \Pr[u_k = -1 \mid \mathbf{y}] \\ -1 & \text{otherwise} \end{cases}$$

Pr[uk | y] is known as the a posteriori probability of the kth information bit.
Log-Likelihood Ratio (LLR)
The LLR of an information bit uk is given by:

$$L(u_k) = \ln\frac{\Pr[u_k = +1]}{\Pr[u_k = -1]}$$

Given that Pr[uk = +1] + Pr[uk = -1] = 1:

$$\frac{\Pr[u_k = +1]}{1 - \Pr[u_k = +1]} = e^{L(u_k)}$$

$$\Pr[u_k = +1] = e^{L(u_k)}\,\big(1 - \Pr[u_k = +1]\big)$$

$$\Pr[u_k = +1] = \frac{e^{L(u_k)}}{1 + e^{L(u_k)}} = \frac{1}{1 + e^{-L(u_k)}}$$

$$\Rightarrow \Pr[u_k = -1] = \frac{e^{-L(u_k)}}{1 + e^{-L(u_k)}}$$
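These two mappings are easy to check numerically; a minimal Python
version with illustrative names:

```python
import math

def prob_from_llr(llr):
    """(Pr[u = +1], Pr[u = -1]) from L(u) = ln(Pr[+1] / Pr[-1])."""
    p_plus = 1.0 / (1.0 + math.exp(-llr))
    return p_plus, 1.0 - p_plus

def llr_from_prob(p_plus):
    """Inverse mapping: the LLR from Pr[u = +1]."""
    return math.log(p_plus / (1.0 - p_plus))
```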
Maximum A Posteriori Algorithm

[Figure: same turbo-coded system model as above.]

$$L(u_k \mid \mathbf{y}) = \ln\frac{\Pr[u_k = +1 \mid \mathbf{y}]}{\Pr[u_k = -1 \mid \mathbf{y}]}$$

$$u_k^* = \operatorname{sign}\big(L(u_k \mid \mathbf{y})\big) = \begin{cases} +1 & L(u_k \mid \mathbf{y}) \ge 0 \\ -1 & \text{otherwise} \end{cases}$$
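The hard decision is just the sign of the a posteriori LLR; a one-line
Python sketch:

```python
def map_decision(L_posterior):
    """u_k* = sign(L(u_k | y)): +1 when the LLR is non-negative, else -1."""
    return [1 if L >= 0 else -1 for L in L_posterior]
```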
Some Definitions & Conventions

[Figure: same turbo-coded system model as above, and one trellis section
from Sk-1 to Sk over states s0..s3, with each branch labeled by the input
uk = -1 or uk = +1 and its parity output ykp.]

yab = [yas, ya1p, ya2p, …, ybs, yb1p, yb2p] : received symbols from time a
to time b; in particular, the full received sequence is y = y1N

(s', s) denotes the state pair Sk-1 = s', Sk = s

$$\Pr[S_k = s \mid S_{k-1} = s'] = \Pr[u_k(s', s)]$$

where uk(s', s) is the input bit that drives the transition from s' to s.
Derivation of LLR

[Figure: one trellis section from Sk-1 to Sk, as above.]

Define S(i) as the set of pairs of states (s', s) such that the transition
from Sk-1 = s' to Sk = s is caused by the input uk = i, where i = 1
corresponds to uk = +1 and i = 0 to uk = -1; for this trellis:

S(0) = {(s0, s0), (s1, s3), (s2, s1), (s3, s2)}

S(1) = {(s0, s1), (s1, s2), (s2, s0), (s3, s3)}

$$\Pr[u_k = i \mid \mathbf{y}_1^N] = \sum_{S^{(i)}} \Pr[S_{k-1} = s', S_k = s \mid \mathbf{y}_1^N]$$

Remember Bayes' rule: Pr[B | A] = Pr[A, B] / Pr[A], so

$$\Pr[u_k = i \mid \mathbf{y}_1^N] = \frac{\sum_{S^{(i)}} \Pr[S_{k-1} = s', S_k = s, \mathbf{y}_1^N]}{\Pr[\mathbf{y}_1^N]}$$
Derivation of LLR

$$\Pr[u_k = i \mid \mathbf{y}_1^N] = \frac{\sum_{S^{(i)}} \Pr[S_{k-1} = s', S_k = s, \mathbf{y}_1^N]}{\Pr[\mathbf{y}_1^N]}$$

Define

$$\sigma_k(s', s) = \Pr[S_{k-1} = s', S_k = s, \mathbf{y}_1^N]$$

$$L(u_k \mid \mathbf{y}_1^N) = \ln\frac{\Pr[u_k = +1 \mid \mathbf{y}_1^N]}{\Pr[u_k = -1 \mid \mathbf{y}_1^N]}$$

$$L(u_k \mid \mathbf{y}_1^N) = \ln\frac{\sum_{S^{(1)}} \sigma_k(s', s) \big/ \Pr[\mathbf{y}_1^N]}{\sum_{S^{(0)}} \sigma_k(s', s) \big/ \Pr[\mathbf{y}_1^N]} = \ln\frac{\sum_{S^{(1)}} \sigma_k(s', s)}{\sum_{S^{(0)}} \sigma_k(s', s)}$$
Derivation of LLR

$$\sigma_k(s', s) = \Pr[S_{k-1} = s', S_k = s, \mathbf{y}_1^N] = \Pr[S_{k-1} = s', S_k = s, \mathbf{y}_1^{k-1}, y_k, \mathbf{y}_{k+1}^N]$$

$$\sigma_k(s', s) = \Pr[\mathbf{y}_{k+1}^N \mid S_{k-1} = s', S_k = s, \mathbf{y}_1^{k-1}, y_k] \cdot \Pr[S_{k-1} = s', S_k = s, \mathbf{y}_1^{k-1}, y_k]$$

y_{k+1}^N depends only on Sk and is independent of Sk-1, yk and y_1^{k-1}:

$$\sigma_k(s', s) = \Pr[S_{k-1} = s', S_k = s, \mathbf{y}_1^{k-1}, y_k] \cdot \Pr[\mathbf{y}_{k+1}^N \mid S_k = s]$$

$$\sigma_k(s', s) = \Pr[S_{k-1} = s', \mathbf{y}_1^{k-1}] \cdot \Pr[S_k = s, y_k \mid S_{k-1} = s', \mathbf{y}_1^{k-1}] \cdot \Pr[\mathbf{y}_{k+1}^N \mid S_k = s]$$

(Sk, yk) depends only on Sk-1 and is independent of y_1^{k-1}:

$$\sigma_k(s', s) = \Pr[S_{k-1} = s', \mathbf{y}_1^{k-1}] \cdot \Pr[S_k = s, y_k \mid S_{k-1} = s'] \cdot \Pr[\mathbf{y}_{k+1}^N \mid S_k = s]$$
Derivation of LLR

$$\sigma_k(s', s) = \Pr[S_{k-1} = s', \mathbf{y}_1^{k-1}] \cdot \Pr[S_k = s, y_k \mid S_{k-1} = s'] \cdot \Pr[\mathbf{y}_{k+1}^N \mid S_k = s]$$

Define

$$\alpha_k(s) = \Pr[S_k = s, \mathbf{y}_1^k]$$

$$\beta_k(s) = \Pr[\mathbf{y}_{k+1}^N \mid S_k = s]$$

$$\gamma_k(s', s) = \Pr[S_k = s, y_k \mid S_{k-1} = s']$$

$$\sigma_k(s', s) = \alpha_{k-1}(s')\,\gamma_k(s', s)\,\beta_k(s)$$

$$L(u_k \mid \mathbf{y}_1^N) = \ln\frac{\sum_{S^{(1)}} \alpha_{k-1}(s')\,\gamma_k(s', s)\,\beta_k(s)}{\sum_{S^{(0)}} \alpha_{k-1}(s')\,\gamma_k(s', s)\,\beta_k(s)}$$
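Once α, β and γ are available (their recursions are derived on the next
slides), the LLR is a ratio of two sums over the branch sets. A small
Python sketch, assuming zero-indexed arrays where alpha[k] and beta[k]
hold α_k and β_k and gamma[k-1] holds the step-k branch metrics; all
names are illustrative:

```python
import math

def llr_from_trellis(alpha, beta, gamma, S1, S0, k):
    """L(u_k | y): S1 / S0 are the (s', s) branch pairs driven by
    u_k = +1 / u_k = -1 respectively (k is the 1-based step index)."""
    num = sum(alpha[k - 1][sp] * gamma[k - 1][sp][s] * beta[k][s]
              for sp, s in S1)
    den = sum(alpha[k - 1][sp] * gamma[k - 1][sp][s] * beta[k][s]
              for sp, s in S0)
    return math.log(num / den)
```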
Derivation of LLR

[Figure: the full trellis from S0 to SN over states s0..s3. For the
transition at step k, α_{k-1}(s') summarizes the past observations
y1..y_{k-1}, γ_k(s', s) accounts for yk, and β_k(s) summarizes the future
observations y_{k+1}..yN.]
Computation of αk(s)

$$\alpha_k(s) = \Pr[S_k = s, \mathbf{y}_1^k] = \sum_{\text{all } s'} \Pr[S_{k-1} = s', S_k = s, \mathbf{y}_1^k]$$

Example: on this trellis only s0 and s2 have transitions into s0, so

$$\alpha_k(s_0) = \Pr[S_k = s_0, \mathbf{y}_1^k] = \Pr[S_{k-1} = s_0, S_k = s_0, \mathbf{y}_1^k] + \Pr[S_{k-1} = s_2, S_k = s_0, \mathbf{y}_1^k]$$

In general:

$$\alpha_k(s) = \sum_{\text{all } s'} \Pr[S_{k-1} = s', \mathbf{y}_1^{k-1}] \cdot \Pr[S_k = s, y_k \mid S_{k-1} = s', \mathbf{y}_1^{k-1}]$$

$$\alpha_k(s) = \sum_{\text{all } s'} \Pr[S_{k-1} = s', \mathbf{y}_1^{k-1}] \cdot \Pr[S_k = s, y_k \mid S_{k-1} = s']$$

$$\alpha_k(s) = \sum_{\text{all } s'} \alpha_{k-1}(s')\,\gamma_k(s', s)$$
Computation of αk(s)

$$\alpha_k(s) = \sum_{\text{all } s'} \alpha_{k-1}(s')\,\gamma_k(s', s) \qquad \text{(forward recursive equation)}$$

Given the values of γk(s', s) for all indices k, the probabilities αk(s)
can be computed by forward recursion. The initial condition α0(s) depends
on the initial state of the convolutional encoder:

$$\alpha_0(s) = \Pr[S_0 = s, \mathbf{y}_1^0] = \Pr[S_0 = s]$$

The encoder usually starts at state 0:

$$\alpha_0(s_0) = 1, \qquad \alpha_0(s) = 0 \;\; \forall s \ne s_0$$
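A NumPy sketch of the forward recursion, assuming the branch metrics are
stored in an array gamma of shape (N, S, S) with gamma[k, s_prev, s] being
the step-(k+1) metric γ(s', s) and zeros for invalid transitions; the
per-step normalization is a standard numerical safeguard, not part of the
derivation above:

```python
import numpy as np

def forward_alphas(gamma, start_state=0):
    """alpha_k(s) = sum over s' of alpha_{k-1}(s') * gamma_k(s', s)."""
    N, S, _ = gamma.shape
    alpha = np.zeros((N + 1, S))
    alpha[0, start_state] = 1.0                  # encoder starts in state s0
    for k in range(1, N + 1):
        alpha[k] = alpha[k - 1] @ gamma[k - 1]   # sum over previous states
        alpha[k] /= alpha[k].sum()               # scaling to avoid underflow
    return alpha
```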
Computation of βk(s)

$$\beta_k(s) = \Pr[\mathbf{y}_{k+1}^N \mid S_k = s]$$

$$\beta_k(s) = \sum_{s''} \Pr[S_{k+1} = s'', y_{k+1} \mid S_k = s] \cdot \Pr[\mathbf{y}_{k+2}^N \mid S_k = s, S_{k+1} = s'', y_{k+1}]$$

$$\beta_k(s) = \sum_{s''} \Pr[\mathbf{y}_{k+2}^N \mid S_{k+1} = s''] \cdot \Pr[S_{k+1} = s'', y_{k+1} \mid S_k = s]$$

$$\beta_k(s) = \sum_{s''} \beta_{k+1}(s'')\,\gamma_{k+1}(s, s'') \qquad \text{(backward recursive equation)}$$

Given the values of γk(s', s) for all indices k, the probabilities βk(s)
can be computed by backward recursion. The initial condition βN(s) depends
on the final state of the trellis:

The first encoder usually terminates at state s0 ⟹ βN(s0) = 1, βN(s) = 0 ∀s ≠ s0

The second encoder usually has an open trellis ⟹ βN(s) = 1 ∀s
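The matching backward pass, with the same assumed gamma layout as the
forward sketch (gamma[k] holds the metrics of the transition from step k
to step k+1):

```python
import numpy as np

def backward_betas(gamma, terminated=True, end_state=0):
    """beta_k(s) = sum over s'' of beta_{k+1}(s'') * gamma_{k+1}(s, s'')."""
    N, S, _ = gamma.shape
    beta = np.zeros((N + 1, S))
    if terminated:
        beta[N, end_state] = 1.0           # first encoder: trellis forced to s0
    else:
        beta[N, :] = 1.0                   # second encoder: open trellis
    for k in range(N - 1, -1, -1):
        beta[k] = gamma[k] @ beta[k + 1]   # sum over next states
        beta[k] /= beta[k].sum()           # same scaling as the forward pass
    return beta
```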
Computation of γk(s', s)

$$\gamma_k(s', s) = \Pr[S_k = s, y_k \mid S_{k-1} = s']$$

$$\gamma_k(s', s) = \Pr[y_k \mid S_{k-1} = s', S_k = s] \cdot \Pr[S_k = s \mid S_{k-1} = s']$$

$$\gamma_k(s', s) = \Pr[y_k \mid S_{k-1} = s', S_k = s] \cdot \Pr_a[u_k]$$

$$\gamma_k(s', s) = \Pr[y_k \mid x_k] \cdot \Pr_a[u_k]$$

Note that yk = [yks, ykp] and xk = [xks, xkp], so

$$\Pr[y_k \mid x_k] = \Pr[y_k^s \mid x_k^s] \cdot \Pr[y_k^p \mid x_k^p]$$
Computation of γk(s’,s)
(y ) (y )
2 2
k
s
− xk s k
p
− xk p
1 − 1 −
Pr  yk xk  = e 2σ 2
 e 2σ 2

2πσ 2 2πσ 2
(y ) (y )
2 2
k
s
− xk s k
p
− xk p
1 − −
Pr  yk xk  = e 2σ 2
e 2σ 2
2πσ 2
 Pr uk = +1 
L uk  = ln 
 Pr u = −1 
a

 k 
e  k
−La u
1
Pr a uk = +1 = −L uk 
a ,Pr a uk = −1 = −La uk 
1+ e 1+ e
 −L uk  2 a
e
 Pr uk  = 
uk La uk  2
a
e
 1 + e−La uk  
 
 ( yk s − xk s ) ( yk p − xk p )   −La uk  2 
2 2

γk ( s',s ) =  1 −


 e uk La uk  2
  1 + e−La uk  
e 2σ 2
e 2σ 2
e
 2πσ 2
  
 
26
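Putting the final formula into code, a per-branch Python sketch for the
Gaussian channel model above (all names are illustrative; x_s and x_p are
the ±1 symbols labeling the branch, and u is the input bit driving it):

```python
import math

def gamma_branch(y_s, y_p, x_s, x_p, La, u, sigma2):
    """gamma_k(s', s) for one branch: Pr[y_k | x_k] * Pr_a[u_k]."""
    channel = math.exp(-((y_s - x_s) ** 2 + (y_p - x_p) ** 2) / (2 * sigma2))
    channel /= 2 * math.pi * sigma2        # 1/(2*pi*sigma^2) normalization
    apriori = math.exp(-La / 2) / (1 + math.exp(-La)) * math.exp(u * La / 2)
    return channel * apriori
```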
