
FEC Basic

(Forward Error Correction)

Yuh-Ming Huang
Computer Science and Information Engineering
National Chi Nan University
[email protected]
http://www.csie.ncnu.edu.tw/~ymhuang

1
Outline

 Introduction
 Block codes
 Convolutional codes
 Turbo coding, LDPC
 Joint source-channel coding
 Conclusions
 References

2
Introduction

3
General Model of a Communication System

[Figure] Transmitter part: Source x(t) → Prefilter → Sampler x[n] → Source Encoder → Channel Encoder → Modulator → Physical Channel. Source encoding is data compression (digital data processing); channel encoding is error control coding (digital signal processing).
[Figure] Receiver part: Physical Channel → Demodulator → Channel Decoder → Source Decoder y[n] → Reconstruction Filter y(t) → Sink.

Introduction 4
[Figure] A coded system on an additive white Gaussian noise channel; the modulator, AWGN channel, and demodulator together form a Discrete Memoryless Channel (DMC).

Introduction 5


AWGN (Additive White Gaussian Noise) Channel
 A 0 is transmitted as +√E and a 1 is transmitted as −√E, where E is the signal energy per channel bit.
 ri = (−1)^si·√E + ni, where si is the transmitted bit, ri is the received value, and ni is a noise sample of a Gaussian process with single-sided noise power per hertz N0.
 The variance of ni is N0/2, and the signal-to-noise ratio (SNR) for the channel is E/N0.

  Pr(ri | si) = (1/√(πN0)) · exp( −(ri − (−1)^si·√E)² / N0 ),  where si = 0 or 1 and ri is a real number.

  [This is the Gaussian density n(x; μ, σ) = (1/(√(2π)·σ)) · exp( −(x − μ)² / (2σ²) ) with σ² = N0/2.]

Introduction 6
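A minimal simulation sketch of this channel model (not from the slides; the function name and parameter values are illustrative): a 0 is mapped to +√E, a 1 to −√E, and Gaussian noise of variance N0/2 is added to each sample.

```python
import numpy as np

def bpsk_awgn(bits, E=1.0, N0=0.5, rng=np.random.default_rng(0)):
    s = np.sqrt(E) * (1 - 2 * np.asarray(bits))         # 0 -> +sqrt(E), 1 -> -sqrt(E)
    n = rng.normal(0.0, np.sqrt(N0 / 2), size=s.shape)  # noise variance N0/2
    return s + n                                         # r_i = (-1)^{s_i} sqrt(E) + n_i

r = bpsk_awgn([0, 1, 1, 0, 1])
hard_bits = (r < 0).astype(int)   # hard decision; keeping r itself gives soft decisions
```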
Introduction 7
Binary Symmetric Channel
A BSC is characterized by a probability p of bit error such that the probability of a transmitted 0 being received as a 1 is the same as that of a transmitted 1 being received as a 0.
When BPSK modulation is used on an AWGN channel with optimum coherent detection and binary output quantization,

  p = ∫_0^∞ pr(ri | 1) dri = ∫_−∞^0 pr(ri | 0) dri
    = ∫_−∞^0 (1/√(πN0)) · exp( −(ri − √E)² / N0 ) dri
    = ∫_√(2E/N0)^∞ (1/√(2π)) · exp( −y²/2 ) dy

Introduction 8
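The last integral is the Gaussian tail Q(√(2E/N0)). A small sketch of evaluating it (illustrative; it uses the identity Q(x) = ½·erfc(x/√2)):

```python
import math

def bsc_crossover(E_over_N0_dB):
    snr = 10 ** (E_over_N0_dB / 10)          # E/N0 as a linear ratio
    return 0.5 * math.erfc(math.sqrt(snr))   # Q(sqrt(2*E/N0))

print(bsc_crossover(6.0))   # about 2.4e-3 at E/N0 = 6 dB
```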
[Figure] Probability of bit error (p) versus E/N0 (dB) for BPSK signaling.

Introduction 9
Hard-decision decoding
The output of the matched filter for each signaling interval is quantized to two levels, denoted 0 and 1.
e.g. algebraic decoding: decoding that uses the algebraic structure of the code
A hard decision on a received signal discards information, which degrades performance.

Introduction 10
Soft-decision decoding
 If the outputs of the matched filter are unquantized or quantized to more than two levels, we say that the demodulator makes soft decisions. A sequence of soft-decision outputs of the matched filter is referred to as a soft-decision received sequence, and decoding that processes this sequence is called soft-decision decoding.
 The decoder uses the additional information contained in the unquantized (or multilevel-quantized) received samples to recover the transmitted codewords; as a result, soft-decision decoding provides better error performance than hard-decision decoding.

Introduction 11
In general, soft-decision maximum likelihood decoding (MLD) of a code achieves about 3 dB of coding gain over algebraic decoding of the same code; however, soft-decision decoding is much harder to implement than algebraic decoding and has higher computational complexity.
These decoding algorithms can be classified into two major categories:
– Reliability-based (or probabilistic)
– Code-structure-based

Introduction 12
Code rate R = k/n
Coding gain: the reduction in the Eb/N0 required to achieve a specific bit-error rate (BER) in a coded communication system compared with an uncoded system.
Coding threshold: there always exists an Eb/N0 below which the code loses its effectiveness and actually makes the situation worse.
Eb: the signal energy per information bit, Eb = E / R
13
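Since Eb = E/R, the per-information-bit and per-channel-bit SNRs differ by a factor of 1/R. A small sketch of the dB conversion (illustrative function name, not from the slides):

```python
import math

def eb_over_n0_dB(E_over_N0_dB, R):
    """Convert E/N0 (per channel bit) to Eb/N0 (per information bit), using Eb = E / R."""
    return E_over_N0_dB - 10 * math.log10(R)

print(eb_over_n0_dB(3.0, 12 / 23))   # e.g. R = 12/23 for the (23,12) Golay code: about 5.8 dB
```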
Bit-error performance of a coded communication
system with the (23,12) Golay code

14
Error Control Strategies
 Forward Error Correction (FEC)
– one-way system
o e.g. magnetic tape storage system
o deep-space communication system
 Automatic Repeat Request (ARQ)
– error detection and retransmission
– two-way system
o e.g. telephone channel
– types
o stop-and-wait ARQ
o continuous ARQ
 e.g. go-back-N ARQ; selective-repeat ARQ
Introduction 15
Block codes

16
 Definition: A linear block code C = (n, k) is a k-dimensional subspace of GF(q)^n, i.e.
  (1) x, y ∈ C ⇒ x + y ∈ C,
  (2) a ∈ GF(q), x ∈ C ⇒ a·x ∈ C.
 The Hamming distance d(x, y) is the number of places in which x and y differ.
 The minimum distance of C is the Hamming distance of the pair of codewords with the smallest Hamming distance, i.e. d* = min d(xi, xj), xi, xj ∈ C, i ≠ j.
 Binary Block Code (n, k)
– divide the information sequence into message blocks of k information bits each
o i.e. u = (u1, u2, …, uk) message
o v = (v1, v2, …, vn) codeword
– memoryless
– combinational logic circuit

Block codes 17
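A brute-force sketch of these two definitions (illustrative helper names, not from the slides): the Hamming distance between two words, and the minimum distance d* taken over all pairs of codewords.

```python
from itertools import combinations

def hamming_distance(x, y):
    return sum(a != b for a, b in zip(x, y))

def minimum_distance(code):
    # d* = min d(xi, xj) over all pairs of distinct codewords
    return min(hamming_distance(x, y) for x, y in combinations(code, 2))
```

For a linear code, d* also equals the smallest weight of a nonzero codeword, so the pairwise search can be replaced by a single pass over the codewords.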
[Figure] Block encoder: k bits in → Encoder → n bits out.
[Figure] Encoder circuit for the (7,4) code: the input u = (1011) is held in registers u0 u1 u2 u3, and three XOR gates ⊕ form the parity bits sent to the channel: v0 = u0 ⊕ u2 ⊕ u3 = 1, v1 = u0 ⊕ u1 ⊕ u2 = 0, v2 = u1 ⊕ u2 ⊕ u3 = 0, giving the codeword v = (100 1011).

Block codes 18
Hamming Code with k = 4, n = 7, d* = 3

      | 1 1 0 1 0 0 0 |  g0
  G = | 0 1 1 0 1 0 0 |  g1      (4 x 7)
      | 1 1 1 0 0 1 0 |  g2
      | 1 0 1 0 0 0 1 |  g3

      | 1 0 0 1 0 1 1 |
  H = | 0 1 0 1 1 1 0 |          (3 x 7)
      | 0 0 1 0 1 1 1 |

  G·H^T = 0

Messages (u)   Code words (v)
(0000)         (0000000)
(1000)         (1101000)
(0100)         (0110100)
(1100)         (1011100)
(0010)         (1110010)
(1010)         (0011010)
(0110)         (1000110)
(1110)         (0101110)
(0001)         (1010001)
(1001)         (0111001)
(0101)         (1100101)
(1101)         (0001101)
(0011)         (0100011)
(1011)         (1001011)
(0111)         (0010111)
(1111)         (1111111)

e.g. for u = (1011): v = uG = 1·g0 + 0·g1 + 1·g2 + 1·g3 = (1001011); the first n − k bits are the parity-check bits and the last k bits are the information bits.

Block codes 19
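A sketch of encoding with this generator matrix over GF(2) (numpy-based, illustrative):

```python
import numpy as np

G = np.array([[1, 1, 0, 1, 0, 0, 0],
              [0, 1, 1, 0, 1, 0, 0],
              [1, 1, 1, 0, 0, 1, 0],
              [1, 0, 1, 0, 0, 0, 1]])

def encode(u):
    return np.asarray(u) @ G % 2   # v = uG over GF(2)

print(encode([1, 0, 1, 1]))   # -> [1 0 0 1 0 1 1], the table entry for u = (1011)
```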
Standard Array for an (n, k) linear code

  v1 = 0      v2          …   vi          …   v2^k          ← code words (first row)
  e2          e2 + v2     …   e2 + vi     …   e2 + v2^k     ← one coset
  e3          e3 + v2     …   e3 + vi     …   e3 + v2^k
  ⋮                                                            coset leaders with ≤ t errors
  el          el + v2     …   el + vi     …   el + v2^k       (inside a decoding sphere)
  el+1        el+1 + v2   …   el+1 + vi   …   el+1 + v2^k
  ⋮                                                            coset leaders with > t errors
  e2^(n−k)    e2^(n−k)+v2 …   e2^(n−k)+vi …   e2^(n−k)+v2^k

The first column holds the coset leaders; the array has 2^(n−k) rows (cosets) and 2^k columns.

Block codes 20
e.g. C = (6,3)

      | 0 1 1 1 0 0 |          | 1 0 0 0 1 1 |
  G = | 1 0 1 0 1 0 |      H = | 0 1 0 1 0 1 |      s = rH^T
      | 1 1 0 0 0 1 |          | 0 0 1 1 1 0 |

Syndrome s   Coset leader   Rest of coset
(0,0,0)      000000         011100 101010 110001 110110 101101 011011 000111   ← code words
(1,0,0)      100000         111100 001010 010001 010110 001101 111011 100111
(0,1,0)      010000         001100 111010 100001 100110 111101 001011 010111
(0,0,1)      001000         010100 100010 111001 111110 100101 010011 001111
(0,1,1)      000100         011000 101110 110101 110010 101001 011111 000011
(1,0,1)      000010         011110 101000 110011 110100 101111 011001 000101
(1,1,0)      000001         011101 101011 110000 110111 101100 011010 000110
(1,1,1)      100100         111000 001110 010101 010010 001001 111111 100011

The coset leaders (the single-error patterns plus one double-error pattern, 100100) are the correctable error patterns; all other error patterns are not correctable. If the noise is x = el + vi with vi ≠ 0, then r = vj + x = el + (vi + vj) = el + vs, and the decoder makes an erroneous decoding.

Block codes 21
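A sketch of syndrome decoding for this (6,3) code (illustrative): the syndrome s = rH^T indexes the table of coset leaders above, and the chosen error pattern is added to r over GF(2).

```python
import numpy as np

H = np.array([[1, 0, 0, 0, 1, 1],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 1, 1, 0]])

coset_leaders = {
    (0, 0, 0): (0, 0, 0, 0, 0, 0), (1, 0, 0): (1, 0, 0, 0, 0, 0),
    (0, 1, 0): (0, 1, 0, 0, 0, 0), (0, 0, 1): (0, 0, 1, 0, 0, 0),
    (0, 1, 1): (0, 0, 0, 1, 0, 0), (1, 0, 1): (0, 0, 0, 0, 1, 0),
    (1, 1, 0): (0, 0, 0, 0, 0, 1), (1, 1, 1): (1, 0, 0, 1, 0, 0),
}

def decode(r):
    s = tuple(np.asarray(r) @ H.T % 2)     # syndrome s = rH^T
    e = np.array(coset_leaders[s])         # most likely (correctable) error pattern
    return (np.asarray(r) + e) % 2         # estimated codeword

print(decode([1, 1, 1, 0, 1, 0]))   # codeword 101010 with one bit error -> [1 0 1 0 1 0]
```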
22
A (7, 4) CYCLIC CODE GENERATED BY g(x) = 1 + x + x^3
Non-systematic encoding: v(x) = u(x)·g(x)

Messages   Code Vectors   Code polynomials
(0000)     0000000        0 = 0·g(x)
(1000)     1101000        1 + x + x^3 = 1·g(x)
(0100)     0110100        x + x^2 + x^4 = x·g(x)
(1100)     1011100        1 + x^2 + x^3 + x^4 = (1 + x)·g(x)
(0010)     0011010        x^2 + x^3 + x^5 = x^2·g(x)
(1010)     1110010        1 + x + x^2 + x^5 = (1 + x^2)·g(x)
(0110)     0101110        x + x^3 + x^4 + x^5 = (x + x^2)·g(x)
(1110)     1000110        1 + x^4 + x^5 = (1 + x + x^2)·g(x)
(0001)     0001101        x^3 + x^4 + x^6 = x^3·g(x)
(1001)     1100101        1 + x + x^4 + x^6 = (1 + x^3)·g(x)
(0101)     0111001        x + x^2 + x^3 + x^6 = (x + x^3)·g(x)
(1101)     1010001        1 + x^2 + x^6 = (1 + x + x^3)·g(x)
(0011)     0010111        x^2 + x^4 + x^5 + x^6 = (x^2 + x^3)·g(x)
(1011)     1111111        1 + x + x^2 + x^3 + x^4 + x^5 + x^6 = (1 + x^2 + x^3)·g(x)
(0111)     0100011        x + x^5 + x^6 = (x + x^2 + x^3)·g(x)
(1111)     1001011        1 + x^3 + x^5 + x^6 = (1 + x + x^2 + x^3)·g(x)

23
A (7, 4) CYCLIC CODE GENERATED BY g(x) = 1 + x + x^3
Systematic encoding: the n − k = 3 parity bits are x^3·u(x) mod g(x), followed by the k message bits.
e.g. for u = (1011), u(x) = 1 + x^2 + x^3: x^3·(x^3 + x^2 + 1) modulo (x^3 + x + 1) = 1, so the parity bits are (100) and v = (100 1011).

Messages   Code Vectors   Code polynomials
(0000)     0000000        0 = 0·g(x)
(1000)     1101000        1 + x + x^3 = 1·g(x)
(0100)     0110100        x + x^2 + x^4 = x·g(x)
(1100)     1011100        1 + x^2 + x^3 + x^4 = (1 + x)·g(x)
(0010)     1110010        1 + x + x^2 + x^5 = (1 + x^2)·g(x)
(1010)     0011010        x^2 + x^3 + x^5 = x^2·g(x)
(0110)     1000110        1 + x^4 + x^5 = (1 + x + x^2)·g(x)
(1110)     0101110        x + x^3 + x^4 + x^5 = (x + x^2)·g(x)
(0001)     1010001        1 + x^2 + x^6 = (1 + x + x^3)·g(x)
(1001)     0111001        x + x^2 + x^3 + x^6 = (x + x^3)·g(x)
(0101)     1100101        1 + x + x^4 + x^6 = (1 + x^3)·g(x)
(1101)     0001101        x^3 + x^4 + x^6 = x^3·g(x)
(0011)     0100011        x + x^5 + x^6 = (x + x^2 + x^3)·g(x)
(1011)     1001011        1 + x^3 + x^5 + x^6 = (1 + x + x^2 + x^3)·g(x)
(0111)     0010111        x^2 + x^4 + x^5 + x^6 = (x^2 + x^3)·g(x)
(1111)     1111111        1 + x + x^2 + x^3 + x^4 + x^5 + x^6 = (1 + x^2 + x^3)·g(x)

Block codes 24
Consider the (7, 4) cyclic code generated by g(X) = 1 + X + X^3. Suppose that the message u = (1 0 1 1) is to be encoded. As the message digits are shifted into the register (k = 4 shifts in total), the contents of the register are as follows:

Input   Register contents
        0 0 0 (initial state)
1       1 1 0 (first shift)
1       1 0 1 (second shift)
0       1 0 0 (third shift)
1       1 0 0 (fourth shift)

After four shifts, the contents of the register are (1 0 0). Thus, the complete code vector is (1 0 0 1 0 1 1) and the code polynomial is 1 + X^3 + X^5 + X^6.

[Figure] Encoder for the (7, 4) cyclic code generated by g(X) = 1 + X + X^3: the message X^(n−k)·u(X), here (1 0 1 1), is shifted in through a gate; the register accumulates the parity digits, which are then switched onto the output to complete the code word.

Block codes 25
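A sketch of the shift-register computation traced above (illustrative; the feedback taps follow the coefficients of g(X) = 1 + X + X^3):

```python
def cyclic_encode_74(u):
    """Systematic (7,4) encoder: parity = X^3*u(X) mod g(X), with g(X) = 1 + X + X^3."""
    b = [0, 0, 0]                      # parity register (b0, b1, b2), initially 0 0 0
    for x in reversed(u):              # message digits shifted in, high-order bit first
        f = x ^ b[2]                   # feedback = input + last register stage
        b[2] = b[1]                    # no tap here (coefficient of X^2 in g is 0)
        b[1] = b[0] ^ f                # tap for the X term of g
        b[0] = f                       # tap for the constant term of g
    return tuple(b) + tuple(u)         # code word = (parity digits, message digits)

print(cyclic_encode_74((1, 0, 1, 1)))  # -> (1, 0, 0, 1, 0, 1, 1), as in the trace above
```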
[Figure] Decoding circuit for the (7, 4) cyclic code generated by g(X) = 1 + X + X^3: the received word r(X) is read into a buffer register through a gate; further gates and a multiplexer produce the corrected output r'(X).

Block codes 26
Convolutional Codes

27
 Convolutional Code (n, k, m)
– u, v : two sequences of blocks
– memory order m
– sequential logic circuit
– n and k are usually small (e.g. k = 1, n = 2), in contrast with block codes such as the (255,223) Reed-Solomon code with 8-bit symbols
 R = k/n : code rate

[Figure] k bits → Encoder → n bits; the encoder output also depends on the m previous message blocks.

Convolutional codes 28
* (2,1,2) Convolutional code

 let u = (1 1 0 1 0 0 0 …)
   v = (11, 10, 00, 10, 01, 01, 00, …)

[Figure] The outputs are obtained by convolving the input sequence …0001011 with the generator sequences: shifted copies of the input are added ⊕ to form the second output stream.

 u = (…, m−1, m0, m1, …)
 v = (…, C−1(1), C−1(2), C0(1), C0(2), C1(1), C1(2), …)

Convolutional codes 29
Linear Time-Invariant System

  δ(n) = 1 if n = 0, 0 if n ≠ 0        δ(n − k) = 1 if n = k, 0 if n ≠ k
  L : δ(n) → g(n)
  where δ(n) is the impulse sequence and g(n) is the impulse response sequence

  u(n) = Σ_k u(k)·δ(n − k)

  v(n) = L{u(n)} = L{ Σ_k u(k)·δ(n − k) }
       = Σ_k u(k)·L{δ(n − k)}              (linearity)
       = Σ_k u(k)·g(n − k) = u(n) * g(n)   (time invariance)

Convolutional codes 30
  Ci(1) = Σ_k gk(1)·mi−k = mi                          C(1) = m * g(1)
  Ci(2) = Σ_k gk(2)·mi−k = mi ⊕ mi−1 ⊕ mi−2            C(2) = m * g(2)
  (linear convolution)

  where g0(1) = 1, all other gk(1) = 0
        g0(2) = g1(2) = g2(2) = 1, all other gk(2) = 0
  (the generator sequences g(1), g(2) are the impulse responses of the encoder)

Convolutional codes 31
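A sketch of this encoder as two discrete convolutions (illustrative, not from the slides): each output stream is the message convolved with the corresponding generator sequence, and it reproduces v = (11, 10, 00, 10, 01, 01, 00, …) for u = (1 1 0 1 0 0 0 …).

```python
def conv_encode(u, gens=((1, 0, 0), (1, 1, 1))):
    """Rate-1/2 encoder: output block i is (sum_k g_k * u_{i-k} mod 2) for each generator."""
    out = []
    for i in range(len(u)):
        block = tuple(
            sum(g[k] & u[i - k] for k in range(len(g)) if i - k >= 0) % 2
            for g in gens
        )
        out.append(block)
    return out

print(conv_encode([1, 1, 0, 1, 0, 0, 0]))
# -> [(1, 1), (1, 0), (0, 0), (1, 0), (0, 1), (0, 1), (0, 0)]
```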
State diagram for a (2,1,2) encoder with g(1)=(111) g(2)=(101)

Convolutional codes 32
Trellis diagram for a (2,1,2)
encoder with input (11101)

Convolutional codes 33
Maximum Likelihood Decoding
Given that r is received, the conditional error probability of the decoder is defined as
  P(E | r) ≜ P(v̂ ≠ v | r).
The error probability of the decoder is given by
  P(E) = Σ_r P(E | r)·P(r), where P(r) is the probability of the received sequence r.
To minimize P(E), we must minimize P(E | r) for all r,
  ⇒ maximize P(v | r) = P(r | v)·P(v) / P(r),   (1)
i.e. v̂ is chosen as the most likely codeword given that r is received.

Convolutional codes 34
 Suppose all information sequences, and hence all codewords, are equally likely, i.e. P(v) is the same for all v.
 Maximizing (1) is then equivalent to maximizing P(r | v). For a DMC,
  P(r | v) = Π_i P(ri | vi)  ⇒  log P(r | v) = Σ_i log P(ri | vi)
  ⇒ maximize Σ_i c2·[log P(ri | vi) + c1],
  where c1 is any real number and c2 is any positive real number.
 For a BSC, let P(ri | vi) = p when ri ≠ vi and 1 − p when ri = vi, and let d(r, v) be the Hamming distance between r and v.
  Then log P(r | v) = d(r, v)·log p + [n − d(r, v)]·log(1 − p)
                    = d(r, v)·log( p / (1 − p) ) + n·log(1 − p)
  ⇒ maximizing log P(r | v) ⇔ minimizing d(r, v), since log( p / (1 − p) ) < 0 for p < 1/2.
Convolutional codes 35
metric tables

  log p(rl | vl):
    rl:        01     02     12     11
    vl = 0:  −0.4  −0.52   −0.7   −1.0
    vl = 1:  −1.0   −0.7  −0.52   −0.4

  integer metrics c2·[log p(rl | vl) + c1] with c1 = 1 and c2 = 17.3:
    rl:        01     02     12     11
    vl = 0:    10      8      5      0
    vl = 1:     0      5      8     10

Convolutional codes 36
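The integer table is just an affine rescaling of the log-likelihoods, rounded to integers; a small sketch (the log values happen to match transition probabilities 0.4, 0.3, 0.2, 0.1, an inference rather than something stated on the slide):

```python
# log10 p(r_l | v_l) for the four quantizer outputs 0_1, 0_2, 1_2, 1_1
log_p = {0: [-0.4, -0.52, -0.7, -1.0],   # transmitted bit v_l = 0
         1: [-1.0, -0.7, -0.52, -0.4]}   # transmitted bit v_l = 1
c1, c2 = 1.0, 17.3
int_metric = {v: [round(c2 * (lp + c1)) for lp in row] for v, row in log_p.items()}
print(int_metric)   # {0: [10, 8, 5, 0], 1: [0, 5, 8, 10]}, matching the table
```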
Hard-decision decoding Viterbi algorithm for a (3,1,2)
convolutional code with g(1)=(110) g(2)=(101) g(3)=(111) 37
Convolutional codes
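A minimal hard-decision Viterbi sketch for a rate-1/n code such as this (3,1,2) example (illustrative; it assumes the encoder starts in the all-zero state and uses the Hamming distance as the branch metric):

```python
from itertools import product

GENS = ((1, 1, 0), (1, 0, 1), (1, 1, 1))   # g(1)=110, g(2)=101, g(3)=111

def branch(bit, state, gens=GENS):
    window = (bit,) + state                          # (u_i, u_{i-1}, u_{i-2})
    out = tuple(sum(g[k] & window[k] for k in range(len(g))) % 2 for g in gens)
    return out, (bit,) + state[:-1]                  # output block, next state

def viterbi_hard(received, m=2):
    states = list(product((0, 1), repeat=m))
    metric = {s: 0 if s == (0,) * m else float("inf") for s in states}
    path = {s: [] for s in states}
    for r in received:                               # one n-bit block per step
        new_metric = {s: float("inf") for s in states}
        new_path = {s: [] for s in states}
        for s in states:
            if metric[s] == float("inf"):
                continue
            for bit in (0, 1):
                out, ns = branch(bit, s)
                bm = sum(a ^ b for a, b in zip(out, r))   # Hamming branch metric
                if metric[s] + bm < new_metric[ns]:
                    new_metric[ns], new_path[ns] = metric[s] + bm, path[s] + [bit]
        metric, path = new_metric, new_path
    best = min(states, key=metric.get)
    return path[best], metric[best]

# e.g. the blocks for u = 1 0 0 are 111 101 011; with one received bit flipped,
# the decoder still recovers u with total metric 1:
print(viterbi_hard([(0, 1, 1), (1, 0, 1), (0, 1, 1)]))   # -> ([1, 0, 0], 1)
```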
Soft-decision decoding Viterbi algorithm for a (3,1,2)
convolutional code with g(1)=(110) g(2)=(101) g(3)=(111) 38
Convolutional codes
Soft-decision decoding Viterbi algorithm for a (3,1,2)
convolutional code with g(1)=(110) g(2)=(101) g(3)=(111) 39
Convolutional codes
Turbo coding

40
The basic turbo encoding structure

[Figure] The information sequence u is transmitted directly as v(0); one constituent encoder produces v(1) from u, and a second constituent encoder produces v(2) from an interleaved copy u' of the information sequence.

Turbo coding 41


 Of all practical error correction methods known to date, turbo codes and low-density parity-check codes come closest to approaching the Shannon limit, the theoretical limit of the maximum information transfer rate over a noisy channel.
 Turbo codes make it possible to increase the data rate without increasing the power of a transmission, or they can be used to decrease the amount of power needed to transmit at a certain data rate. Their main drawbacks are a relatively high decoding complexity and a relatively high latency, which make them unsuitable for some applications. For satellite use this is not of great concern, since the transmission distance itself introduces latency due to the limited speed of light.
 Prior to turbo codes, because practical implementations of LDPC codes had not been developed, the most widespread technique that approached the Shannon limit combined Reed-Solomon error correction with Viterbi-decoded short-constraint-length convolutional codes, also known as RSV codes.
* Extracted from Wikipedia, the free encyclopedia
Turbo coding 42
Joint source-channel coding

43
When using entropy coding over a noisy channel, it is customary to protect the highly vulnerable bitstream with an error-correcting code. However, a technique that uses the residual redundancy at the output of the source coder to provide error protection for entropy-coded systems is feasible. Real-world source coding algorithms usually leave a certain amount of redundancy in the coded bit stream. Shannon [1948] already mentioned that this redundancy can be exploited at the receiver side to achieve higher robustness against channel errors.

C. Guillemot and P. Christ, "Joint source-channel coding as an element of a QoS framework for '4G' wireless multimedia," Computer Communications, vol. 27, pp. 762-779, 2004.

44
Trellis diagram for the Variable-Length Error-Correcting (VLEC) code C1

[Figure] The trellis diagram for the VLEC code C1; branches are labelled with the codewords 000, 0110, and 1011.

symbol   codeword
A        000
B        0110
C        1011

Joint source/channel coding 45
Example
transmitted codeword sequence: 000 0110
received vector y = 000 1110
n : total number of bits (n = 7)
d(a, b) : Hamming distance between a and b
Mi : the path metric of the surviving path at state si
mj : the j-th branch metric

Joint source/channel coding 46


Modified Viterbi decoding algorithm (cont.)
 Current state s0, M0 = 0, received y = 0001110
– branch 000 (s0 → s3): m1 = d(000, 000) = 0  ⇒ M3 = 0
– branch 0110 (s0 → s4): m2 = d(0110, 0001) = 3
– branch 1011 (s0 → s4): m3 = d(1011, 0001) = 2  ⇒ M4 = min(3, 2) = 2

Joint source/channel coding 47


Modified Viterbi decoding algorithm (cont.)
 Current state s3, M3 = 0, remaining bits of y = 1110
– branch 000 (s3 → s6): m1 = d(000, 111) = 3  ⇒ M6 = 3
– branch 0110 (s3 → s7): m2 = d(0110, 1110) = 1  ⇒ M7 = 1
– branch 1011 (s3 → s7): m3 = d(1011, 1110) = 2
(M4 = 2 from the previous step is unchanged)

Joint source/channel coding 48


Modified Viterbi decoding algorithm (cont.)
 Current state s4, M4 = 2, remaining bits of y = 110
– branch 000 (s4 → s7): m1 = d(000, 110) = 2, so the candidate metric M4 + 2 = 4 exceeds M7 = 1; the survivor at s7 (the path through s3, labelled 0110) is kept.

Joint source/channel coding 49


Modified Viterbi decoding algorithm (cont.)
 Current state s6
 Current state s7: stop (the number of transmitted bits, n = 7, is known),
so the sequence is decoded as s0 → s3 → s7 (i.e. 000 0110), giving the symbol sequence A B.

Joint source/channel coding 50


Code rate: 5 / 7.71 ≈ 0.6485

Average codeword length = 7.71        Average codeword length = 4.2054        51

Error measure: Levenshtein distance (minimum number of symbol insertions, deletions, and substitutions) / number of symbols transmitted

Code rate: (5 / 4.2054) × (4/7) ≈ 0.6794   (with the (7,4) code protecting the entropy-coded bitstream)

* Here we do not assume that all source symbols (i.e. all codewords) are equally likely.

52
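A minimal dynamic-programming sketch of the Levenshtein distance used as the error measure above (illustrative):

```python
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution (free if symbols match)
        prev = cur
    return prev[-1]

print(levenshtein("ABCA", "ACA"))   # 1: one symbol deletion
```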
Conclusions

 Joint source-channel coding as an element of a QoS framework for '4G' wireless multimedia

53
References
[1] Shu Lin and Daniel J. Costello, Error Control Coding: Fundamentals and Applications, 2nd ed., 2004.
[2] Todd K. Moon, Error Correction Coding: Mathematical Methods and Algorithms, 2005.
[3] C. Guillemot and P. Christ, "Joint source-channel coding as an element of a QoS framework for '4G' wireless multimedia," Computer Communications, vol. 27, pp. 762-779, 2004.
[4] V. Buttigieg and P. G. Farrell, "Variable-length error-correcting codes," IEE Proc.-Commun., vol. 147, pp. 211-215, Aug. 2000.
[5] V. Buttigieg and P. G. Farrell, "A Maximum A-Posteriori (MAP) Decoding Algorithm for Variable-Length Error-Correcting Codes," Codes and Cyphers: Cryptography and Coding IV, pp. 103-119, 1995.

54
