The document discusses various error control coding techniques including block codes, convolutional codes, turbo codes, and LDPC codes. It provides information on key concepts such as error models, channel capacity, Hamming codes, convolutional codes, turbo code structure and decoding, LDPC code structure and decoding, and factors that influence code performance such as minimum distance and block size.

17th May, 2014

Asst. Prof. Anindita Paul

Mintu Kumar Dutta (Roll No.- L135)
Sudip Giri (Roll No.- L141)
Saptarshi Ghosh (Roll No.- R38)
Tanaka Sengupta (Roll No.- R41)
Srijeeta Roy (Roll No.- R77)
Utsabdeep Ray (Roll No.- R102)
Error Control Coding
Error Models
Noisy Channel Coding Theorem
Shannon Limit
Block Codes
BER
Parity Check Code
Hamming Code
Channel capacity
Convolutional codes
Turbo codes
LDPC codes
EXIT chart analysis of turbo codes
Error Control Coding (ECC)
Redundancy is added to the data at the transmitter to
permit error detection or correction at the receiver.
This is done to prevent the output of erroneous bits despite
noise and other imperfections in the channel
The positions of the error control coding and decoding
are shown in the transmission model
Binary Symmetric Memoryless Channel
Assumes transmitted symbols are binary
Errors affect 0s and 1s with equal
probability (i.e., symmetric)
Errors occur randomly and are
independent from bit to bit (memoryless)
[Channel transition diagram: input 0 to output 0 and input 1 to output 1 with probability 1-p; crossovers 0 to 1 and 1 to 0 with probability p]
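This channel model can be sketched in a few lines of Python (a minimal simulation; the crossover probability p = 0.1 and the seed are illustrative choices, not from the slides):

```python
# Simulate a binary symmetric memoryless channel: each bit is flipped
# independently with probability p (symmetric: 0s and 1s equally affected).
import random

def bsc(bits, p, rng=random.Random(0)):
    """Flip each bit with probability p (symmetric, memoryless)."""
    return [b ^ (rng.random() < p) for b in bits]

tx = [0, 1] * 5000
rx = bsc(tx, p=0.1)
errors = sum(a != b for a, b in zip(tx, rx))
print(errors / len(tx))  # empirical crossover probability, close to 0.1
```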
Claude Shannon, "A mathematical theory of
communication," Bell System Technical Journal,
1948.
Every channel has associated with it a capacity C.
Measured in bits per channel use (modulated symbol).
The channel capacity is an upper bound on
information rate r.
There exists a code of rate r < C that achieves reliable
communications.
Reliable means an arbitrarily small error probability.



We will consider only binary data
Data is grouped into blocks of length k bits (dataword)
Each dataword is coded into blocks of length n bits
(codeword), where in general n>k
This is known as an (n,k) block code
A vector notation is used for the datawords and codewords:
Dataword d = (d_1 d_2 ... d_k)
Codeword c = (c_1 c_2 ... c_n)
The redundancy introduced by the code is quantified by the code rate,
Code rate = k/n
i.e., the higher the redundancy, the lower the code rate
BER (bit error rate) is defined as the ratio of the number
of erroneous bits to the total number of transmitted bits.
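The definition translates directly to code (a trivial sketch; the sample bit vectors are made up for illustration):

```python
def ber(tx, rx):
    """Bit error rate: number of wrong bits divided by total bits."""
    assert len(tx) == len(rx)
    return sum(a != b for a, b in zip(tx, rx)) / len(tx)

print(ber([0, 1, 1, 0, 1], [0, 1, 0, 0, 0]))  # 2 errors in 5 bits -> 0.4
```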

[Figure: Spectral efficiency (code rate r) vs. Eb/No in dB at P_b = 10^-5, showing uncoded BPSK, convolutional codes, the turbo code (1993), and LDPC codes approaching the Shannon limit for arbitrarily low BER]
A simple parity-check code is a single-bit error-detecting
code in which n = k + 1 with d_min = 2.
Even parity ensures that a codeword has an even
number of 1s; odd parity ensures that there
are an odd number of 1s in the codeword
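A sketch of even parity with n = k + 1 (the 4-bit dataword is an illustrative example):

```python
def add_even_parity(dataword):
    """Append one parity bit so the codeword has an even number of 1s (n = k + 1)."""
    return dataword + [sum(dataword) % 2]

def check_even_parity(codeword):
    """Valid iff the total number of 1s is even; detects any single-bit error."""
    return sum(codeword) % 2 == 0

c = add_even_parity([1, 0, 1, 1])   # -> [1, 0, 1, 1, 1]
print(check_even_parity(c))          # True
c[2] ^= 1                            # inject a single-bit error
print(check_even_parity(c))          # False: error detected (but not locatable)
```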

Hamming Code is a type of Error Correcting Code
(ECC)
Provides an error detection and correction mechanism
Adopts the parity concept, but with more than one parity
bit
In general, a Hamming code is a codeword of n bits
with m data bits and r parity (or check) bits,
i.e. n = m + r
Can detect d_min - 1 errors
Can correct floor((d_min - 1)/2) errors
Hence, to correct k errors, we need d_min = 2k + 1
We need at least a distance of 3 to correct a single-bit
error
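As an illustration, a sketch of the classic Hamming(7,4) code (d_min = 3, so any single-bit error can be corrected; the systematic G and H below are one standard choice, not taken from the slides):

```python
# Hamming(7,4): 4 data bits, 3 parity bits, d_min = 3.
G = [  # generator matrix in systematic form [I | P]; codeword c = uG (mod 2)
    [1,0,0,0, 1,1,0],
    [0,1,0,0, 1,0,1],
    [0,0,1,0, 0,1,1],
    [0,0,0,1, 1,1,1],
]
H = [  # parity-check matrix [P^T | I]; a valid codeword satisfies cH^T = 0
    [1,1,0,1, 1,0,0],
    [1,0,1,1, 0,1,0],
    [0,1,1,1, 0,0,1],
]

def encode(u):
    return [sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

def correct(c):
    """Compute the syndrome; it equals the H-column of the errored bit."""
    s = [sum(c[j] * H[i][j] for j in range(7)) % 2 for i in range(3)]
    if any(s):
        cols = [[H[i][j] for i in range(3)] for j in range(7)]
        c = c[:]
        c[cols.index(s)] ^= 1  # flip the bit the syndrome points to
    return c

c = encode([1, 0, 1, 1])
c[5] ^= 1                                   # inject a single-bit error
print(correct(c) == encode([1, 0, 1, 1]))   # True: error corrected
```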
Convolutional codes are error-correcting codes used to
reliably transmit digital data over unreliable
communication channels subject to channel noise.


We generate a convolutional code by putting a source stream through a
linear filter. This filter makes use of a shift register, linear output
functions and possibly linear feedback.

In a shift register, the information bits roll from right to left.

For each input bit the filter produces two output bits. Because each
source bit yields two transmitted bits, the code has rate 1/2.
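A minimal rate-1/2 encoder sketch; the constraint length K = 3 and generator polynomials (7, 5) in octal are an assumed, commonly used example (the slides do not fix a specific code):

```python
# Rate-1/2 convolutional encoder, K = 3, generators (7, 5) octal.
# Two output bits per input bit, computed from the input and a
# 2-stage shift register, so the code rate is 1/2.
def conv_encode(bits):
    s1 = s2 = 0  # shift register state (most recent bit first)
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)  # g0 = 111 (octal 7)
        out.append(b ^ s2)       # g1 = 101 (octal 5)
        s1, s2 = b, s1           # shift the new bit in
    return out

print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 1, 0, 0, 0, 0, 1]
```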


First consider a K = 12 convolutional code:
d_min = 18
c_d = 187 (output weight of all d_min paths)
Now consider the original turbo code:
C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit
error-correcting coding and decoding: Turbo-codes," in Proc. IEEE
Int. Conf. on Commun., Geneva, Switzerland, May 1993, pp. 1064-
1070.
Same complexity as the K = 12 convolutional code
Constraint length 5 RSC encoders
k = 65,536 bit interleaver
Minimum distance d_min = 6
a_d = 3 minimum-distance codewords
Minimum-distance codewords have an average information weight of
only w_min = 2
[Figure: BER vs. Eb/No in dB (0.5 to 4 dB, BER 10^0 down to 10^-8) for the convolutional code and the turbo code, each with its free-distance asymptote]
Convolutional code:
P_b ~ c_d Q( sqrt(2 r d_min Eb/No) ) = 187 Q( sqrt(2 (18) r Eb/No) ),
with d_min = 18 and c_d = 187

Turbo code:
P_b ~ (a_d w_min / k) Q( sqrt(2 r d_min Eb/No) )
    ~ 9.2 x 10^-5 Q( sqrt(2 (6) r Eb/No) ),
with d_min = 6, a_d = 3, w_min = 2, and k = 65536
(so a_d w_min / k = 3 * 2 / 65536 ~ 9.2 x 10^-5)
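The two free-distance asymptotes, of the form P_b ~ c Q(sqrt(2 r d_min Eb/No)), can be evaluated numerically. A sketch; the code rate r = 1/2 passed in below is an assumption for illustration:

```python
# Evaluate free-distance BER asymptotes P_b ~ c_eff * Q(sqrt(2*r*d_min*Eb/No)).
import math

def Q(x):
    """Gaussian tail function Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def asymptote(c_eff, d_min, r, ebno_db):
    ebno = 10 ** (ebno_db / 10)  # convert dB to linear
    return c_eff * Q(math.sqrt(2 * r * d_min * ebno))

ebno_db = 2.0
print(asymptote(187, 18, 0.5, ebno_db))           # convolutional: d_min = 18, c = 187
print(asymptote(3 * 2 / 65536, 6, 0.5, ebno_db))  # turbo: d_min = 6, c ~ 9.2e-5
```

Despite its much smaller d_min, the turbo code's asymptote is far lower at moderate SNR because its effective multiplicity (3 * 2 / 65536) is tiny.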
Turbo codes get their name because the
decoder uses feedback, like a turbo engine.
[Figure: BER vs. Eb/No in dB for 1, 2, 3, 6, 10, and 18 decoder iterations]
K = 5 constraint length
r = 1/2 code rate
L = 65,536 interleaver size (number of data bits)
Log-MAP algorithm

Latency vs. performance
Frame (interleaver) size L
Complexity vs. performance
Decoding algorithm
Number of iterations
Encoder constraint length K
Spectral efficiency vs. performance
Overall code rate r
Other factors
Interleaver design
Puncture pattern
Trellis termination
[Figure: BER vs. Eb/No in dB; K = 5, rate r = 1/2, 18 decoder iterations, AWGN channel]
[Figure: BER vs. Eb/No in dB for interleaver sizes K = 1024, 4096, 16384, and 65536]
Turbo codes have extraordinary performance at low
SNR.
Very close to the Shannon limit.
Due to a low multiplicity of low weight code words.
However, turbo codes have a BER floor.
This is due to their low minimum distance.
Performance improves for larger block sizes.
Larger block sizes mean more latency (delay).
However, larger block sizes are not more complex to
decode.
The BER floor is lower for larger frame/interleaver sizes
The complexity of a constraint length K_TC turbo code is
the same as that of a K = K_CC convolutional code, where:
K_CC ~ 2 + K_TC + log2(number of decoder iterations)
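Plugging in the earlier example (a K = 5 turbo code run for 18 iterations) gives a feel for the rule of thumb:

```python
# Rule of thumb from the slide: K_CC ~ 2 + K_TC + log2(iterations).
import math

def equivalent_cc_constraint_length(k_tc, iterations):
    return 2 + k_tc + math.log2(iterations)

# A K = 5 turbo code with 18 decoder iterations is roughly as complex
# as a convolutional code with constraint length:
print(round(equivalent_cc_constraint_length(5, 18), 1))  # ~ 11.2
```

This is consistent with the earlier slide comparing the original turbo code to a K = 12 convolutional code.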
[Figure: BER of a 640-bit turbo code in AWGN vs. Eb/No in dB; L = 640 bits, curves for 1, 2, 3, and 10 decoder iterations]

[Figure: BER of a 640-bit turbo code vs. Eb/No in dB, comparing max-log-MAP, constant-log-MAP, and log-MAP decoding over AWGN and fading channels; 10 decoder iterations]
V_n = n-dimensional vector space over {0,1}

An (n, k) linear block code with dataword length k and codeword
length n is a k-dimensional vector subspace of V_n

A codeword c is generated by the matrix multiplication c = uG,
where u is the k-bit message and G is a k-by-n generator matrix

The parity-check matrix H is an (n-k)-by-n matrix of ones and
zeros, such that if c is a valid codeword then cH^T = 0

Each row of H specifies a parity-check equation. The code
bits in positions where the row is one must sum (modulo 2)
to zero
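These relations can be checked on a toy code (the (6,3) matrices below are illustrative, not from the slides):

```python
# Toy (6,3) linear block code demonstrating c = uG and cH^T = 0.
P = [[1,1,0],
     [0,1,1],
     [1,0,1]]
G = [[1,0,0] + P[0], [0,1,0] + P[1], [0,0,1] + P[2]]    # G = [I | P]
H = [[P[0][i], P[1][i], P[2][i]] + [1 if j == i else 0 for j in range(3)]
     for i in range(3)]                                  # H = [P^T | I]

def encode(u):
    return [sum(u[i] * G[i][j] for i in range(3)) % 2 for j in range(6)]

def syndrome(c):
    # each row of H is one parity-check equation; all must sum to 0 (mod 2)
    return [sum(c[j] * H[i][j] for j in range(6)) % 2 for i in range(3)]

c = encode([1, 1, 0])
print(syndrome(c))  # [0, 0, 0]: c is a valid codeword
```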


Low-Density Parity-Check (LDPC) codes are a class of linear
block codes characterized by sparse parity-check matrices H
(H has a low density of 1s)

LDPC codes were originally invented by Robert Gallager in
the early 1960s but were largely ignored until they were
rediscovered in the mid-1990s by MacKay

Sparseness of H can yield a large minimum distance d_min and
reduces decoding complexity

Can perform within 0.0045 dB of the Shannon limit




Like Turbo codes, LDPC can be decoded iteratively
Instead of a trellis, the decoding takes place on a Tanner graph
Messages are exchanged between the v-nodes and c-nodes
Edges of the graph act as information pathways
Hard decision decoding
Bit-flipping algorithm
Soft decision decoding
Sum-product algorithm
Also known as message passing/ belief propagation algorithm
Min-sum algorithm
Reduced complexity approximation to the sum-product algorithm
In general, the per-iteration complexity of LDPC codes is less than that of
turbo codes
However, many more iterations may be required (max ~ 100; avg ~ 30)
Thus, overall complexity can be higher than for turbo codes
LDPC codes, especially irregular codes, exhibit error floors at high
SNRs
The error floor is influenced by d_min
Directly designing codes for large d_min is not computationally
feasible
Removing short cycles indirectly increases d_min (girth conditioning)
Not all short cycles cause error floors
Trapping sets and stopping sets have a more direct influence on the
error floor
Error floors can be mitigated by increasing the size of minimum
stopping sets
Tian et al., "Construction of irregular LDPC codes with low error
floors," in Proc. ICC, 2003
Trapping sets can be mitigated using averaged belief propagation
decoding
Milenkovic, "Algorithmic and combinatorial analysis of trapping sets
in structured LDPC codes," in Proc. Intl. Conf. on Wireless Ntw.,
Communications and Mobile Computing, 2005
LDPC codes based on projective geometry are reported to have very low
error floors
Kou, "Low-density parity-check codes based on finite geometries: a
rediscovery and new results," IEEE Trans. Inf. Theory, Nov. 1998
A linear block code is encoded by performing the
matrix multiplication c = uG
A common method for finding G from H is to first
make the code systematic by adding rows and
exchanging columns to get the H matrix in the form
H = [P^T I]
Then G = [I P]
However, the result of the row reduction is a non-sparse P
matrix
The multiplication c = [u uP] is therefore very complex
This is especially problematic since we are interested
in large n (> 10^5)
An often used approach is to use the all-zero
codeword in simulations
Richardson and Urbanke show that even for large n,
the encoding complexity can be (almost) linear
function of n
Efficient encoding of low-density parity-check codes, IEEE
Trans. Inf. Theory, Feb., 2001
Using only row and column permutations, H is
converted to an approximately lower-triangular matrix
Since only permutations are used, H is still sparse
The resulting encoding complexity is almost linear as a
function of n
An alternative involving a sparse-matrix multiply
followed by differential encoding has been proposed
by Ryan, Yang, & Li.
Lowering the error-rate floors of moderate-length high-rate
irregular LDPC codes, ISIT, 2003

We now compare the performance of the maximum-length
UMTS turbo code against four LDPC code designs.
Code parameters
All codes are rate 1/3
The LDPC codes are length (n,k) = (15000, 5000)
Up to 100 iterations of log-domain sum-product decoding
Code parameters are given on next slide
The turbo code has length (n,k) = (15354,5114)
Up to 16 iterations of log-MAP decoding
BPSK modulation
AWGN and fully-interleaved Rayleigh fading
Enough trials run to log 40 frame errors
Sometimes fewer trials were run for the last point (highest SNR).
Code #1: Mackay 2A
Code #2: R&U
Code #3: JWT
Code #4: IRA
Turbo code
BPSK/AWGN capacity: -0.50 dB for r = 1/3
[Figure: BER vs. Eb/No in dB (0 to 1.2 dB) for the four LDPC codes and the turbo code]

[Figure: Capacity of 2-D modulation in AWGN: capacity (bits per symbol) vs. Eb/No in dB for BPSK, QPSK, 8PSK, 16PSK, 16QAM, 64QAM, and 256QAM]
[Figure: EXIT chart for 16-QAM in AWGN at 6.8 dB: mutual information I_z vs. I_v for gray, SP, MSP, MSEW, and antigray labelings, with a K = 3 convolutional code curve; adding the curve for a FEC code makes this an extrinsic information transfer (EXIT) chart]
PCCC (turbo) codes
can be analyzed with an
EXIT chart by plotting
the mutual information
transfer characteristics
of the two decoders.
Figure is from:
S. ten Brink, "Convergence Behavior
of Iteratively Decoded
Parallel Concatenated
Codes," IEEE Trans.
Commun., Oct. 2001.
It is now possible to closely approach the
Shannon limit by using turbo and LDPC codes.
Binary capacity approaching codes can be
combined with higher order modulation using the
BICM principle.
These codes are making their way into standards
Binary turbo: UMTS, cdma2000
Duobinary turbo: DVB-RCS, 802.16
LDPC: DVB-S2 standard.
Software for simulating turbo and LDPC codes
can be found at www.iterativesolutions.com
