FEC Basics (Forward Error Correction)
Yuh-Ming Huang
Computer Science and Information Engineering
National Chi Nan University
[email protected]
http://www.csie.ncnu.edu.tw/~ymhuang
Outline
Introduction
Block codes
Convolutional codes
Turbo coding, LDPC
Joint source-channel coding
Conclusions
References
Introduction
General Model of a Communication System
[Figure: Transmitter part — source x(t) → prefilter → sampler x[n] → source encoder (data compression) → channel encoder (error control coding) → modulator → physical channel; the receiver part performs the corresponding inverse operations.]
A coded system on an additive white Gaussian noise channel
[Figure: bit-error performance of the coded system plotted against Eb/N0 (dB)]
Hard-decision decoding
The output of the matched filter for each signaling interval is quantized to two levels, denoted 0 and 1.
e.g., algebraic decoding, which exploits the algebraic structure of the code.
A hard decision on a received signal results in a loss of information, which degrades performance.
Soft-decision decoding
If the outputs of the matched filter are unquantized or quantized to more than two levels, we say that the demodulator makes soft decisions. A sequence of soft-decision outputs of the matched filter is referred to as a soft-decision received sequence. Decoding by processing this soft-decision received sequence is called soft-decision decoding.
Because the decoder uses the additional information contained in the unquantized (or multilevel-quantized) received samples to recover the transmitted codewords, soft-decision decoding provides better error performance than hard-decision decoding.
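As a rough illustration of the difference, here is a minimal Python sketch (not from the slides) that generates matched-filter outputs for BPSK over an AWGN channel and computes both a hard-decision (Hamming) metric and a soft-decision (correlation) metric. The 0 → +1, 1 → −1 mapping, the noise level, and the function names are illustrative assumptions.

```python
# Illustrative sketch (not from the slides): hard vs. soft decisions for
# BPSK over an AWGN channel. Assumes the mapping 0 -> +1, 1 -> -1.
import random

def transmit(bits, noise_std=0.8):
    """Map bits to +/-1 and add Gaussian noise (matched-filter outputs)."""
    return [(1.0 if b == 0 else -1.0) + random.gauss(0.0, noise_std) for b in bits]

def hard_decisions(samples):
    """Two-level quantization: keep only the sign of each sample."""
    return [0 if s >= 0.0 else 1 for s in samples]

def hard_metric(samples, codeword):
    """Hard-decision metric: Hamming distance to the quantized sequence."""
    return sum(h != c for h, c in zip(hard_decisions(samples), codeword))

def soft_metric(samples, codeword):
    """Soft-decision metric: correlation with the hypothesized codeword
    (larger is better); uses the unquantized reliability of each sample."""
    return sum(s * (1.0 if c == 0 else -1.0) for s, c in zip(samples, codeword))

if __name__ == "__main__":
    random.seed(1)
    codeword = [0, 1, 1, 0, 1, 0]
    r = transmit(codeword)
    print("received samples:", [round(s, 2) for s in r])
    print("hard decisions  :", hard_decisions(r))
    print("Hamming distance to codeword:", hard_metric(r, codeword))
    print("correlation with codeword   :", round(soft_metric(r, codeword), 2))
```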
In general, soft-decision maximum likelihood decoding (MLD) of a code provides about 3 dB of coding gain over algebraic decoding of the code; however, soft-decision decoding is much harder to implement than algebraic decoding and has higher computational complexity.
Soft-decision decoding algorithms can be classified into two major categories:
– Reliability-based (or probabilistic)
– Code structure-based
Code rate R=k/n
Coding gain: the reduction in Eb/N0 required to achieve a specific bit-error rate (BER) for a coded communication system compared with an uncoded system.
Coding threshold: there always exists an Eb/N0 below which the code loses its effectiveness and actually makes the situation worse.
Eb : the signal energy per information bit
Eb = E / R
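A quick numeric illustration (added here; the numbers follow from the definitions above and the (23,12) Golay code of the next slide): R = 12/23 ≈ 0.52, so Eb = E/R = (23/12)·E ≈ 1.92·E, i.e. the energy of each information bit is spread over 1/R ≈ 1.92 transmitted code bits.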
[Figure: Bit-error performance of a coded communication system with the (23,12) Golay code]
Error Control Strategies
Forward Error Correction (FEC)
– one-way system
o e.g. magnetic tape storage systems
o deep-space communication systems
Automatic Repeat Request (ARQ)
– error detection and retransmission
– two-way system
o e.g. telephone channel
– types
o stop-and-wait ARQ
o continuous ARQ
e.g. go-back-N ARQ; selective-repeat ARQ
Block codes
Definition: A linear block code C = (n,k) is a k-dimensional subspace of GF(q)^n, i.e.
(1) for all x, y ∈ C, x + y ∈ C;
(2) for all a ∈ GF(q) and x ∈ C, a·x ∈ C.
The Hamming distance d(x,y) is the number of places in which x and y differ.
The minimum distance of C is the Hamming distance of the pair of codewords with the smallest Hamming distance, i.e. d* = min d(xi,xj), xi,xj ∈ C, i ≠ j.
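A standard consequence worth noting here (added for reference): a code with minimum distance d* can correct all patterns of t = ⌊(d* − 1)/2⌋ or fewer errors; e.g. a code with d* = 3, such as the (7,4) Hamming code, corrects t = 1 error.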
Binary Block Code (n,k)
– divide the information sequence into message blocks of k
information bits each.
o i.e. u=(u1,u2,…,uk) message
o v=(v1,v2,…,vn) codeword
– Memoryless
– combinational logic circuit
[Figure: Block encoder — a block of k message bits enters the encoder and an n-bit codeword leaves it. In the systematic (7,4) example, the input u = (u0 u1 u2 u3) = (1011) feeds XOR gates that form the parity-check digits v0 v1 v2 = 100, giving the codeword v = 100 1011, which is sent to the channel.]
Standard Array for an (n,k) linear code

code words:  v1 = 0        v2             ...  vi             ...  v_{2^k}
one coset:   e2            e2 + v2        ...  e2 + vi        ...  e2 + v_{2^k}
             e3            e3 + v2        ...  e3 + vi        ...  e3 + v_{2^k}
             ...
             el            el + v2        ...  el + vi        ...  el + v_{2^k}
             el+1          el+1 + v2      ...  el+1 + vi      ...  el+1 + v_{2^k}
             ...
             e_{2^(n-k)}   e_{2^(n-k)}+v2 ...  e_{2^(n-k)}+vi ...  e_{2^(n-k)}+v_{2^k}

The first column holds the coset leaders; the array has 2^k columns and 2^(n-k) rows (cosets). Coset leaders with t or fewer errors lie inside the decoding spheres of the codewords; leaders with more than t errors lie outside.
e.g. C = (6,3) with

        | 0 1 1 1 0 0 |          | 1 0 0 0 1 1 |
    G = | 1 0 1 0 1 0 |      H = | 0 1 0 1 0 1 |      s = r·H^T
        | 1 1 0 0 0 1 |          | 0 0 1 1 1 0 |

Syndrome s   Coset leader el   Coset (el + nonzero codewords)
(0,0,0)      000000            011100 101010 110001 110110 101101 011011 000111
(1,0,0)      100000            111100 001010 010001 010110 001101 111011 100111
(0,1,0)      010000            001100 111010 100001 100110 111101 001011 010111
(0,0,1)      001000            010100 100010 111001 111110 100101 010011 001111
(0,1,1)      000100            011000 101110 110101 110010 101001 011111 000011
(1,0,1)      000010            011110 101000 110011 110100 101111 011001 000101
(1,1,0)      000001            011101 101011 110000 110111 101100 011010 000110
(1,1,1)      100100            111000 001110 010101 010010 001001 111111 100011

The first seven coset leaders are the zero vector and the six single-error patterns; the last coset requires a double-error leader.
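A minimal Python sketch (not part of the slides) of table-lookup syndrome decoding for this (6,3) code, using H as given above and the single-error coset leaders; numpy and the helper names are illustrative choices.

```python
# Illustrative sketch (not part of the slides): syndrome decoding for the
# (6,3) code above, with H as given and the coset leaders of the table.
import numpy as np

H = np.array([[1, 0, 0, 0, 1, 1],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 1, 1, 0]])          # parity-check matrix from the slide

# Syndrome -> coset leader table (zero vector plus the six single-error patterns).
coset_leader = {(0, 0, 0): np.zeros(6, dtype=int)}
for i in range(6):
    e = np.zeros(6, dtype=int)
    e[i] = 1
    s = tuple(e.dot(H.T) % 2)               # syndrome of a single error in position i
    coset_leader[s] = e
# The remaining coset (syndrome (1,1,1)) needs a double-error leader, e.g. 100100.
coset_leader[(1, 1, 1)] = np.array([1, 0, 0, 1, 0, 0])

def decode(r):
    """Compute s = r H^T, look up the coset leader, and correct r."""
    s = tuple(r.dot(H.T) % 2)
    return (r + coset_leader[s]) % 2

if __name__ == "__main__":
    v = np.array([1, 0, 1, 0, 1, 0])        # a codeword from the table
    r = v.copy(); r[1] ^= 1                 # introduce a single error
    print("received:", r, "-> decoded:", decode(r))   # recovers 101010
```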
A (7,4) CYCLIC CODE GENERATED BY g(x) = 1 + x + x^3  (systematic encoding)

Message   Code vector   Code polynomial
(0000)    0000000       0 = 0·g(x)
(1000)    1101000       1 + x + x^3 = 1·g(x)
(0100)    0110100       x + x^2 + x^4 = x·g(x)
(1100)    1011100       1 + x^2 + x^3 + x^4 = (1 + x)·g(x)
(0010)    1110010       1 + x + x^2 + x^5 = (1 + x^2)·g(x)
(1010)    0011010       x^2 + x^3 + x^5 = x^2·g(x)
(0110)    1000110       1 + x^4 + x^5 = (1 + x + x^2)·g(x)
(1110)    0101110       x + x^3 + x^4 + x^5 = (x + x^2)·g(x)
(0001)    1010001       1 + x^2 + x^6 = (1 + x + x^3)·g(x)
Consider the (7, 4) cyclic code generated by g(X) = 1 + X + X^3. Suppose that the message u = (1 0 1 1) is to be encoded. As the message digits are shifted into the register, the contents of the register are as follows:

Input   Register contents
-       0 0 0   (initial state)
1       1 1 0   (first shift)
1       1 0 1   (second shift)
0       1 0 0   (third shift)
1       1 0 0   (fourth shift = k shifts)

After four shifts, the contents of the register are (1 0 0). Thus, the complete code vector is (1 0 0 1 0 1 1) and the code polynomial is 1 + X^3 + X^5 + X^6.
[Figure: Encoder for the (7, 4) cyclic code generated by g(X) = 1 + X + X^3 — a feedback shift register with a gate computes the parity digits from X^(n-k)·u(X); the message (1 0 1 1) and the parity digits together form the code word.]
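The same computation can be expressed as polynomial division over GF(2). Below is a minimal Python sketch (not part of the slides) that reproduces the example: parity(X) = X^(n−k)·u(X) mod g(X), codeword = (parity | message). The function names are illustrative.

```python
# Illustrative sketch (not part of the slides): systematic encoding of the
# (7,4) cyclic code with g(X) = 1 + X + X^3 by polynomial division over GF(2).
# Polynomials are lists of coefficients, lowest degree first.

def poly_mod(dividend, divisor):
    """Remainder of dividend(X) / divisor(X) over GF(2)."""
    r = dividend[:]                          # work on a copy
    for i in range(len(r) - 1, len(divisor) - 2, -1):
        if r[i]:                             # cancel the leading term
            for j, d in enumerate(divisor):
                r[i - len(divisor) + 1 + j] ^= d
    return r[:len(divisor) - 1]              # remainder has degree < deg(g)

def encode_cyclic(u, g, n):
    """Systematic encoding: parity = X^(n-k) u(X) mod g(X), codeword = (parity | u)."""
    k = len(u)
    shifted = [0] * (n - k) + list(u)        # X^(n-k) * u(X)
    return poly_mod(shifted, g) + list(u)

if __name__ == "__main__":
    g = [1, 1, 0, 1]                         # 1 + X + X^3
    u = [1, 0, 1, 1]                         # message (1 0 1 1) from the slide
    print(encode_cyclic(u, g, 7))            # -> [1, 0, 0, 1, 0, 1, 1]
```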
[Figure: Decoding circuit for a cyclic code — the received word r(X) is shifted into a buffer register through a gate; further gates and a multiplexer deliver the corrected output r'(X).]
Convolutional Code (n,k,m)
– u, v : two sequences of blocks
– memory order m
– sequential logic circuit
– R = k/n code rate
– n and k are usually small (e.g. k = 1, n = 2), unlike block codes such as the (255,223) Reed-Solomon code with 8-bit symbols
[Figure: Convolutional encoder — each block of k input bits produces n output bits that depend on the current block and the previous m blocks.]
* (2,1,2) Convolutional code
let u = (1 1 0 1 0 0 0 …)
v = (11, 10, 00, 10, 01, 01, 00, …)
[Figure: the encoder output obtained by sliding the time-reversed input …0001011 across the encoder taps — a discrete convolution.]
u = (…, m-1, m0, m1, …)
v = (…, C-1(1), C-1(2), C0(1), C0(2), C1(1), C1(2), …)
Linear Time-Invariant System

δ(n) = 1 for n = 0, and 0 for n ≠ 0   (impulse sequence)
δ(n − k) = 1 for n = k, and 0 for n ≠ k
L : δ(n) → g(n)   (impulse response sequence)

u(n) = Σ_k u(k) δ(n − k)

v(n) = L{u(n)} = L{Σ_k u(k) δ(n − k)}
     = Σ_k u(k) L{δ(n − k)}              (linearity)
     = Σ_k u(k) g(n − k) = u(n) * g(n)   (time invariance)
Ci(1) = Σ_k gk(1)·mi−k = mi                    C(1) = m * g(1)
Ci(2) = Σ_k gk(2)·mi−k = mi ⊕ mi−1 ⊕ mi−2      C(2) = m * g(2)
(linear convolution)
where the impulse responses are
g0(1) = 1, all other gk(1) = 0                 i.e. g(1) = (1 0 0)
g0(2) = g1(2) = g2(2) = 1, all other gk(2) = 0 i.e. g(2) = (1 1 1)
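A minimal Python sketch (not part of the slides) of this encoder as two GF(2) convolutions with the impulse responses g(1) = (1 0 0) and g(2) = (1 1 1); it reproduces v = (11, 10, 00, 10, 01, 01, 00, …) for u = (1 1 0 1 0 0 0). The function names are illustrative.

```python
# Illustrative sketch (not part of the slides): encoding the (2,1,2) example
# above as two discrete convolutions over GF(2).

def conv_gf2(u, g):
    """Linear convolution of binary sequences u and g over GF(2)."""
    out = [0] * (len(u) + len(g) - 1)
    for i, ui in enumerate(u):
        for j, gj in enumerate(g):
            out[i + j] ^= ui & gj
    return out

def encode_212(u):
    """Interleave C(1) and C(2) to get the transmitted sequence v."""
    c1 = conv_gf2(u, [1, 0, 0])      # C(1)_i = m_i
    c2 = conv_gf2(u, [1, 1, 1])      # C(2)_i = m_i + m_(i-1) + m_(i-2)
    return list(zip(c1, c2))

if __name__ == "__main__":
    u = [1, 1, 0, 1, 0, 0, 0]
    # -> (1,1), (1,0), (0,0), (1,0), (0,1), (0,1), (0,0), ...
    print(encode_212(u))
```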
[Figure: State diagram for a (2,1,2) encoder with g(1) = (111), g(2) = (101)]
[Figure: Trellis diagram for a (2,1,2) encoder with input (11101)]
Maximum Likelihood Decoding

Given that r is received, the conditional error probability of the decoder is defined as
P(E | r) = P(v̂ ≠ v | r).
The error probability of the decoder is given by
P(E) = Σ_r P(E | r) P(r), where P(r) is the probability of the received sequence r.
To minimize P(E), we must minimize P(E | r) for all r, i.e. maximize
P(v | r) = P(r | v) P(v) / P(r).   (1)
That is, v̂ is chosen as the most likely codeword given that r is received.
Suppose all information sequences, and hence all codewords, are equally likely, i.e. P(v) is the same for all v. Maximizing (1) is then equivalent to maximizing P(r | v). For a DMC with four-level quantized output rl ∈ {0_1, 0_2, 1_2, 1_1}:

log p(rl | vl)    rl = 0_1   0_2     1_2     1_1
  vl = 0               -0.4   -0.52   -0.7    -1.0
  vl = 1               -1.0   -0.7    -0.52   -0.4

Integer metrics c2·[log p(rl | vl) + c1] with c1 = 1 and c2 = 17.3:
                  rl = 0_1   0_2     1_2     1_1
  vl = 0               10     8       5       0
  vl = 1               0      5       8       10
[Figure: Hard-decision Viterbi algorithm for a (3,1,2) convolutional code with g(1) = (110), g(2) = (101), g(3) = (111)]
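For reference, here is a minimal Python sketch (not from the slides) of hard-decision Viterbi decoding for this (3,1,2) code, assuming the generator sequences are tap vectors g(j) = (g0, g1, g2) acting on (u_i, u_{i−1}, u_{i−2}) and that the trellis is terminated with two zero tail bits; the branch metric is Hamming distance. Names and the test message are illustrative.

```python
# Illustrative sketch (not from the slides): hard-decision Viterbi decoding for
# the (3,1,2) code with g(1) = (1 1 0), g(2) = (1 0 1), g(3) = (1 1 1).
# State = (u_{i-1}, u_{i-2}); branch metric = Hamming distance per 3-bit block.

G = [(1, 1, 0), (1, 0, 1), (1, 1, 1)]        # generator tap vectors

def branch_output(bit, state):
    """Encoder output (3 bits) for one input bit from a given state."""
    window = (bit,) + state                  # (u_i, u_{i-1}, u_{i-2})
    return tuple(sum(g[j] & window[j] for j in range(3)) % 2 for g in G)

def next_state(bit, state):
    return (bit, state[0])

def viterbi_hard(received):
    """received: list of 3-bit tuples. Returns (info sequence, path metric)."""
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    metric = {s: 0 if s == (0, 0) else float("inf") for s in states}
    paths = {s: [] for s in states}
    for r in received:
        new_metric = {s: float("inf") for s in states}
        new_paths = {s: [] for s in states}
        for s in states:
            for bit in (0, 1):
                m = metric[s] + sum(a != b for a, b in zip(branch_output(bit, s), r))
                ns = next_state(bit, s)
                if m < new_metric[ns]:       # keep the best survivor per state
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [bit]
        metric, paths = new_metric, new_paths
    return paths[(0, 0)], metric[(0, 0)]     # terminated trellis ends in state (0,0)

if __name__ == "__main__":
    u = [1, 0, 1, 1] + [0, 0]                # message plus two zero tail bits
    state, sent = (0, 0), []
    for bit in u:
        sent.append(branch_output(bit, state))
        state = next_state(bit, state)
    sent[2] = tuple(b ^ f for b, f in zip(sent[2], (1, 0, 0)))   # flip one channel bit
    decoded, dist = viterbi_hard(sent)
    print("decoded:", decoded, "metric:", dist)   # recovers u with metric 1
```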
[Figures: Soft-decision Viterbi algorithm for a (3,1,2) convolutional code with g(1) = (110), g(2) = (101), g(3) = (111), shown over two slides]
Turbo coding

The basic turbo encoding structure
[Figure: the basic turbo encoding structure, with input u, outputs v(0) and v(1), and interleaved input sequence u']
Joint source-channel coding

When using entropy coding over a noisy channel, it is customary to protect the highly vulnerable bitstream with an error-correcting code. However, a technique that utilizes the residual redundancy at the output of the source coder to provide error protection for entropy-coded systems is feasible. Real-world source coding algorithms usually leave a certain amount of redundancy within the coded bit stream. Shannon [1948] already mentioned that this redundancy can be exploited at the receiver side to achieve higher robustness against channel errors.

symbol   codeword
A        000
B        0110
C        1011
Example
transmitted codeword sequence: 000 0110 (i.e. symbols A B)
received vector y = 000 1110
n : total number of bits (n = 7)
d(a,b) : Hamming distance between a and b
Mi : the path metric of the surviving path at state Si (i received bits accounted for)
mj : the j-th branch metric

From S0, with y = 000 1110:
  codeword 000 (A):  m1 = d(000, 000) = 0  →  S3, M3 = 0
  codeword 1011 (C): m3 = d(1011, 0001) = 2
From S3 (M3 = 0), the remaining received bits are 1110:
  codeword 000 (A):  m1 = d(000, 111) = 3  →  S6, M6 = 3
  codeword 0110 (B): m2 = d(0110, 1110) = 1  →  S7, M7 = 1
The surviving path reaching S7 has M7 = 1,
so the sequence is decoded as S0 S3 S7 (i.e. 000 0110), which corresponds to A B.
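A minimal Python sketch (not part of the slides) of this decoding procedure: states correspond to the number of received bits accounted for, one survivor is kept per state, and the branch metric is the Hamming distance between a codeword and the corresponding received bits. It reproduces the example above (decoded A B with path metric 1). Names are illustrative.

```python
# Illustrative sketch (not part of the slides): decoding the variable-length
# error-correcting code {A: 000, B: 0110, C: 1011} on a bit-count trellis.

CODE = {"A": "000", "B": "0110", "C": "1011"}   # VLEC code from the slide

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode_vlec(y):
    """State Si means i received bits have been accounted for; keep one
    survivor (symbol sequence, path metric) per state, as in the example."""
    n = len(y)
    best = {0: ("", 0)}
    for pos in range(n):
        if pos not in best:
            continue
        symbols, metric = best[pos]
        for sym, cw in CODE.items():
            end = pos + len(cw)
            if end > n:
                continue
            m = metric + hamming(cw, y[pos:end])   # branch metric
            if end not in best or m < best[end][1]:
                best[end] = (symbols + sym, m)
    return best.get(n)

if __name__ == "__main__":
    print(decode_vlec("0001110"))               # -> ('AB', 1), as in the example
```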
Code rate
(5/4.2054) × (4/7) ≈ 0.6794
(7,4)
References
[1] Shu Lin and Daniel J. Costello, Error Control Coding: Fundamentals and Applications, 2nd ed., 2004.
[2] Todd K. Moon, Error Correction Coding: Mathematical Methods and Algorithms, 2005.
[3] C. Guillemot and P. Christ, "Joint source-channel coding as an element of a QoS framework for '4G' wireless multimedia," Computer Communications, vol. 27, pp. 762-779, 2004.
[4] V. Buttigieg and P. G. Farrell, "Variable-length error-correcting codes," IEE Proc.-Commun., vol. 147, pp. 211-215, Aug. 2000.
[5] V. Buttigieg and P. G. Farrell, "A Maximum A-Posteriori (MAP) decoding algorithm for variable-length error-correcting codes," Codes and Ciphers: Cryptography and Coding IV, pp. 103-119, 1995.