Meixia Tao
Shanghai Jiao Tong University
Chapter 10: Channel Coding
Selected from Sections 13.1–13.3 of Fundamentals of Communication Systems,
Pearson Prentice Hall, 2005, by Proakis & Salehi
Topics to be Covered
[Figure: block diagram of a digital communication system — Source → A/D converter (absent if the source is digital) → Source encoder → Channel encoder → Modulator → Channel (+ noise) → Detector → Channel decoder → Source decoder → D/A converter → User]
Example
We want to transmit data over a telephone link using a modem under the following conditions:
Link bandwidth = 3 kHz
The modem can operate at speeds up to 3600 bits/sec at a given error probability
When the required error probability is not met by modulation alone, channel coding is needed
Channel Coding
Coding techniques are classified as either block codes or
convolutional codes, depending on the presence or
absence of memory
A block code has no memory:
The information sequence is broken into blocks of length k
Each block of k information bits is encoded into a block of n coded bits
There is no memory from one block to the next
Block Codes
An (n, k) block code is a collection of M = 2^k codewords of length n
Each codeword has a block of k information bits followed by a
group of r = n-k check bits that are derived from the k information
bits in the block preceding the check bits
[Figure: the channel encoder maps each k-bit message block into an n-bit codeword]
Generator Matrix G
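A minimal sketch of the standard definition, assuming the systematic form used on the following slides (notation is standard, not quoted from the lecture):

\[
\mathbf{c} = \mathbf{m}\,G \pmod{2}, \qquad G = [\, I_k \;|\; P \,], \qquad \mathbf{c} = [\, \mathbf{m} \;|\; \mathbf{m}P \,]
\]

Here m is the 1×k message block, G is the k×n generator matrix, and the r = n−k check bits are mP.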
Systematic Codes
The form of G implies that the 1st k components of any
codeword are precisely the information symbols
This form of linear encoding is called systematic
encoding
Systematic-form codes allow easy implementation and
quick look-up features for decoding
Any linear code is equivalent to a code in systematic form with the same performance. Thus we can restrict our study to systematic codes
Solution
Let the generator matrix (in systematic form G = [ I4 | P ]) be

G = [ 1 0 0 0 1 1 0
      0 1 0 0 0 1 1
      0 0 1 0 1 1 1
      0 0 0 1 1 0 1 ]

The 2^4 = 16 message blocks and their codewords c = mG are:
Message   Codeword
0000      0000000
0001      0001101
0010      0010111
0011      0011010
0100      0100011
0101      0101110
0110      0110100
0111      0111001
1000      1000110
1001      1001011
1010      1010001
1011      1011100
1100      1100101
1101      1101000
1110      1110010
1111      1111111
Error Syndrome
Received vector: r = c + e
where e = error vector or error pattern: it has a 1 in every position where the received word is in error
The error syndrome is s = rH^T
But cH^T = 0, so s = (c + e)H^T = eH^T
Hence the syndrome depends only on the error pattern, not on the transmitted codeword
If s = 0, r is accepted as a valid codeword; if s ≠ 0, errors have been detected
Cyclic Codes
A code is cyclic if every cyclic shift of a codeword is also a codeword
Example with the (7,4) code above: 0001101 is a codeword; one cyclic shift gives 1000110 (the codeword of message 1000), and a further shift gives 0100011 (the codeword of message 0100) — both are again codewords
Important Parameters
Hamming distance between codewords c_i and c_j:
d(c_i, c_j) = number of components at which the two codewords differ
e.g., for the (7,4) code above, d(0001101, 0010111) = 3
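As an illustrative sketch (not part of the original slides), the following Python snippet computes Hamming distances and the minimum distance of the (7,4) code above; the parity equations are read off the generator matrix given earlier.

from itertools import product

def hamming(a, b):
    # number of positions in which two equal-length words differ
    return sum(x != y for x, y in zip(a, b))

# enumerate all 16 codewords using p1 = m1+m3+m4, p2 = m1+m2+m3, p3 = m2+m3+m4 (mod 2)
codewords = []
for m in product([0, 1], repeat=4):
    p = ((m[0] + m[2] + m[3]) % 2, (m[0] + m[1] + m[2]) % 2, (m[1] + m[2] + m[3]) % 2)
    codewords.append(m + p)

dmin = min(hamming(a, b) for a in codewords for b in codewords if a != b)
print(dmin)   # -> 3 for this code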
Hard-Decision Decoding
Minimum Hamming Distance Decoding
Given the received word r, choose the codeword c which is closest to r in terms of Hamming distance
To do so, one can do an exhaustive search over all 2^k codewords — too much if k is large
Syndrome Decoding
Syndrome testing: compute s = rH^T; with r = c + e, the syndrome s = eH^T depends only on the error pattern
Standard Array
Let the codewords be denoted as {c1, c2, ..., cM}, with c1 being the all-zero codeword. The standard array has 2^k columns (one per codeword) and 2^(n-k) rows (one per coset, each led by a correctable error pattern):

Coset leader        Coset                                            Syndrome s
c1 = 0              c2          ...   cM                             0
e1                  e1 + c2     ...   e1 + cM                        s = e1 H^T
e2                  e2 + c2     ...   e2 + cM                        s = e2 H^T
...                 ...               ...                            ...
e_(2^(n-k)-1)       e_(2^(n-k)-1) + c2  ...  e_(2^(n-k)-1) + cM      s = e_(2^(n-k)-1) H^T

Each row is a coset; the error patterns e1, e2, ... are the coset leaders.
Hard-Decoding Procedure
Compute the syndrome of r using s = rH^T
Find the coset corresponding to s by using the standard array
Find the coset leader ê of that coset and decode as ĉ = r + ê (the first k bits of ĉ give the message)
Exercise: try (7,4) Hamming code
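As a starting point for the exercise, here is a small Python sketch of this procedure for the (7,4) code of the earlier example; the matrices follow from G = [I4 | P] above, the coset leaders are taken to be the seven single-bit error patterns, and the function names are illustrative only.

import numpy as np

# (7,4) Hamming code of the earlier example, in systematic form G = [I4 | P]
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])
G = np.hstack([np.eye(4, dtype=int), P])      # generator matrix (4 x 7)
H = np.hstack([P.T, np.eye(3, dtype=int)])    # parity-check matrix (3 x 7), G H^T = 0 (mod 2)

def encode(m):
    # map a 4-bit message to its 7-bit codeword c = mG (mod 2)
    return (np.array(m) @ G) % 2

def decode(r):
    # syndrome decoding: correct at most one bit error, then strip the check bits
    r = np.array(r)
    s = (r @ H.T) % 2                         # syndrome s = r H^T
    if s.any():                               # nonzero syndrome: locate the single-bit error
        for j in range(7):
            if np.array_equal(H[:, j], s):    # syndrome equals the j-th column of H
                r[j] ^= 1                     # flip the erroneous bit (coset leader = single error)
                break
    return r[:4]                              # systematic code: first k bits are the message

c = encode([1, 1, 0, 1])                      # -> [1 1 0 1 0 0 0], as in the codeword table
r = c.copy(); r[5] ^= 1                       # channel flips one bit
print(decode(r))                              # -> [1 1 0 1]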
[Figure: decoding regions around two codewords at distance dmin, shown for cases (a), (b), (c); a received word within a sphere of radius t around a codeword is decoded to that codeword, illustrating the error-correction capability t]
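For reference, the standard relations this figure illustrates (the dmin value below is read off the codeword table of the (7,4) example):

\[
\text{errors detected} \le d_{\min} - 1, \qquad \text{errors corrected: } t = \left\lfloor \frac{d_{\min} - 1}{2} \right\rfloor
\]

For the (7,4) code above, the smallest nonzero codeword weight is 3, so dmin = 3 and t = 1.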
Convolutional Codes
A convolutional code has memory
It is described by 3 integers: n, k, and L
Maps k information bits into n coded bits, using also the previous (L-1)k input bits
The n bits emitted by the encoder are not only a function of
the current input k bits, but also a function of the previous
(L-1)k bits
k/n = Code Rate (information bits/coded bit)
L is the constraint length and is a measure of the code
memory
n does not define a block or codeword length
Convolutional Encoding
A rate k/n convolutional encoder with constraint length L
consists of a kL-stage shift register and n modulo-2 adders
Encoder Structure
(rate k/n, constraint length L)
[Figure: the input bit sequence m is shifted into a kL-stage register; n modulo-2 adders tap the register stages to produce the encoded sequence U]
Typically, k=1 for binary codes. Hence, consider rate 1/n codes
Trellis diagram
Connection Representation
Specify n connection vectors, one for each of the n modulo-2 adders
Each connection vector has kL entries; a 1 in a given position means that register stage is connected to the adder
For the rate-1/2, L = 3 encoder used in the examples below, g1 = 111 and g2 = 101
Equivalently, the connections can be written as generator polynomials: g1(X) = 1 + X + X^2, g2(X) = 1 + X^2
Example: the input sequence 100 produces the output 11 10 11
State Diagram (rate-1/2, L = 3 encoder, g1 = 111, g2 = 101)

Current state   Input   Output   Next state
a = 00          0       00       a = 00
a = 00          1       11       b = 10
b = 10          0       10       c = 01
b = 10          1       01       d = 11
c = 01          0       11       a = 00
c = 01          1       00       b = 10
d = 11          0       01       c = 01
d = 11          1       10       d = 11

(Each branch is labeled input/output, e.g. 1/11 for the transition a → b.)
Example
Assume that m =11011 is the input followed by L-1 = 2 zeros to
flush the register. Also assume that the initial register contents
are all zero. Find the output sequence U
Input bit m_i   Register contents   State at time t_i   State at time t_(i+1)   Output branch word
-               000                 00                  00                      -
1               100                 00                  10                      11
1               110                 10                  11                      01
0               011                 11                  01                      01
1               101                 01                  10                      00
1               110                 10                  11                      01
0               011                 11                  01                      01
0               001                 01                  00                      11
Output sequence: U = 11 01 01 00 01 01 11
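A small Python sketch of this encoder (generators g1 = 111 and g2 = 101, i.e. 7 and 5 in octal; the function name is illustrative) that reproduces the output sequence above:

def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    # rate-1/2, constraint-length-3 convolutional encoder; the register is
    # flushed with L-1 = 2 zeros so the encoder ends in the all-zero state
    state = [0, 0]                              # the two previous input bits
    out = []
    for u in list(bits) + [0, 0]:               # append the flush bits
        window = [u] + state                    # current bit plus register contents
        v1 = sum(b * g for b, g in zip(window, g1)) % 2
        v2 = sum(b * g for b, g in zip(window, g2)) % 2
        out.append((v1, v2))
        state = [u, state[0]]                   # shift the register
    return out

print(conv_encode([1, 1, 0, 1, 1]))
# -> [(1,1), (0,1), (0,1), (0,0), (0,1), (0,1), (1,1)], i.e. U = 11 01 01 00 01 01 11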
Trellis Diagram
The trellis diagram is similar to the state diagram,
except that it adds the dimension of time
The code is represented by a trellis where each
trellis branch describes an output word
Trellis Diagram
[Figure: trellis diagram of the rate-1/2, L = 3 encoder with states a = 00, b = 10, c = 01, d = 11; each branch is labeled with its output word (00, 11, 10, or 01), and the two branch types distinguish input 0 from input 1]
[Figure: the same trellis drawn over the first few branch intervals, again with each branch labeled by its output word]
Update
Polynomial representation
State diagram representation
Trellis diagram representation
Maximum-Likelihood Decoding
Let U denote a code sequence (a path through the trellis) and Z the received sequence, with Z_i the i-th branch of Z
The ML decoder chooses the path U such that P(Z|U) is maximized; for a memoryless channel the likelihood factors over branches
Hard decisions (BSC with crossover probability p): log P(Z|U) = d(Z,U) log p + (N - d(Z,U)) log(1-p), where d(Z,U) is the Hamming distance and N the number of coded bits
Hence maximizing P(Z|U) is equivalent to minimizing the Hamming distance d(Z,U) (since p < 0.5)
Soft decisions (AWGN with a given noise variance): maximizing P(Z|U) is equivalent to minimizing the Euclidean distance between Z and the modulated code sequence
Thus the problem is to find the path through the trellis that is closest to Z
Andrew Viterbi (1935- )
BS & MS from MIT
PhD from the University of Southern California
Invented the Viterbi algorithm in 1967
Co-founder of Qualcomm Inc. (1985)
Coded sequence U:    11 01 01 00 01
Received sequence Z: 11 01 01 10 01
Branch metric
[Figure: trellis with states a = 00, b = 10, c = 01, d = 11; each branch is labeled with its Hamming branch metric, i.e. the distance between the branch output word and the corresponding received pair of Z]
Viterbi Decoder
Basic idea:
If any 2 paths in the trellis merge to a single state, one of
them can always be eliminated in the search
[Figure: two paths merging at a common state, with path metrics 4 and 1; the path with the larger metric (4) is eliminated and the survivor with metric 1 is kept]
Viterbi Decoding
At time t_i there are 2^((L-1)k) nodes (states) in the trellis — here 4
Each node keeps only the surviving path (the entering path with the best metric) and its path metric
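A compact Python sketch of hard-decision Viterbi decoding for the rate-1/2, K = 3 code used above (a minimal illustration, not an optimized implementation; names are illustrative):

def viterbi_decode(pairs, g1=(1, 1, 1), g2=(1, 0, 1)):
    # hard-decision Viterbi decoding; pairs = received (v1, v2) bit pairs,
    # returns the decoded input bits (including the flush bits)
    def branch(u, state):                                  # encoder output for input u from a state
        w = (u,) + state
        return (sum(b * g for b, g in zip(w, g1)) % 2,
                sum(b * g for b, g in zip(w, g2)) % 2)

    states = [(0, 0), (1, 0), (0, 1), (1, 1)]
    metric = {s: (0 if s == (0, 0) else float('inf')) for s in states}   # start in state 00
    path = {s: [] for s in states}

    for r in pairs:
        new_metric, new_path = {}, {}
        for s in states:                                   # candidate next state
            best_m, best_p, best_u = float('inf'), None, None
            for p in states:                               # possible previous state
                for u in (0, 1):
                    if (u, p[0]) != s:                     # is p --u--> s a valid transition?
                        continue
                    v = branch(u, p)
                    d = (v[0] != r[0]) + (v[1] != r[1])    # Hamming branch metric
                    if metric[p] + d < best_m:             # keep only the survivor into s
                        best_m, best_p, best_u = metric[p] + d, p, u
            new_metric[s] = best_m
            new_path[s] = (path[best_p] + [best_u]) if best_p is not None else []
        metric, path = new_metric, new_path

    best_end = min(states, key=lambda s: metric[s])        # state 00 after flushing
    return path[best_end]

# Received sequence from the slides (error in the 4th pair), extended with the
# two error-free flush branches 01 11 of the encoded sequence U:
Z = [(1, 1), (0, 1), (0, 1), (1, 0), (0, 1), (0, 1), (1, 1)]
print(viterbi_decode(Z))   # -> [1, 1, 0, 1, 1, 0, 0]: message 11011 plus the two flush zeros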
Example
[Figures: step-by-step Viterbi decoding of the received sequence Z on the trellis, discarding the non-surviving path at each merge]
Distance Properties
dfree = minimum free distance = the minimum Hamming distance between any pair of arbitrarily long code paths that diverge and remerge
A code can correct roughly any t channel errors within a few constraint lengths, where t = floor((dfree - 1)/2) (this is an approximation)
[Figure: trellis showing the all-zero path and the minimum-weight path that diverges from it and remerges; for this rate-1/2, K = 3 code the minimum-weight remerging path has weight dfree = 5]
Transfer Function
The distance properties and the error rate performance of
a convolutional code can be obtained from its transfer
function
Since a convolutional code is linear, the set of Hamming
distances of the code sequences generated up to some
stages in the trellis, from the all-zero code sequence, is the
same as the set of distances of the code sequences with
respect to any other code sequence
Thus, we assume that the all-zero path is the input to the
encoder
[Figure: state diagram split open at the all-zero state, with a = 00 as the input node and e = 00 as the output node; each branch is labeled with its gain D^i N^j L, where the exponent of D counts output 1s, N marks input 1s, and L counts branches — used to compute the transfer function]
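As a point of reference (a standard result for this rate-1/2, K = 3 code, quoted here rather than derived step by step), solving the state equations of the split diagram gives:

\[
T(D) = \frac{D^5}{1 - 2D} = D^5 + 2D^6 + 4D^7 + \cdots
\]

so the minimum free distance is dfree = 5, and there is exactly one path of weight 5.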
Maximum free distance rate-1/2 convolutional codes

Constraint length K   Generator polynomials (octal)   dfree
3                     (5, 7)                          5
4                     (15, 17)                        6
5                     (23, 35)                        7
6                     (53, 75)                        8
7                     (133, 171)                      10
8                     (247, 371)                      10
9                     (561, 753)                      12
10                    (1167, 1545)                    12
Maximum free distance rate-1/3 convolutional codes

Constraint length K   Generator polynomials (octal)   dfree
3                     (5, 7, 7)                       8
4                     (13, 15, 17)                    10
5                     (25, 33, 37)                    12
6                     (47, 53, 75)                    13
7                     (133, 145, 175)                 15
8                     (225, 331, 367)                 16
9                     (557, 663, 711)                 18
10                    (1117, 1365, 1633)              20
[Figure: concatenated channel coding chain combining outer coding (RS block code) with outer interleaving and inner coding (convolutional) with inner interleaving, plus service-specific coding, to reach a target BER = 10^-6]
[Figure: transmitter chain of an OFDM system: input bits → convolutional encoder (r = 1/2, K = 7) → puncturing → baseband modulator → OFDM]
Exercise
Find out the coding techniques adopted in LTE