Classnote PRM ChannelCoding31!10!2024


Channel Coding

Basic digital communication transformations fall into two study areas: waveform coding and structured sequences.

Waveform coding deals with transforming waveforms into "better waveforms," to make the detection process less subject to errors.
Structured sequences deal with transforming data sequences into "better sequences," having structured redundancy (redundant bits). The redundant bits can then be used for the detection and correction of errors.
Text Books/References
• Text Books:
1) Principles of Communication, 4th ed., Simon Haykin
2) Digital Communications: Fundamentals and Applications, 2nd ed., B. Sklar
• Reference Books:
1) Principles of Digital Communication and Coding, Viterbi and Omura
2) Information Theory and Reliable Communication, Gallager
Antipodal and Orthogonal Signals

• Antipodal signals: cross-correlation = −1.
• Orthogonal signals: cross-correlation = 0; the inner or dot product of two different vectors in the orthogonal set must equal zero.
• The cross-correlation between two signals is a measure of the distance between the signal vectors.
Parity-Check Code
Motivation for Channel Coding

B → Physical Channel → B*  (binary symmetric channel)
  0 → 0 with probability (1−p),  0 → 1 with probability p
  1 → 1 with probability (1−p),  1 → 0 with probability p

• Pr{B* ≠ B} = p
• For a relatively noisy channel, p (i.e., the probability of error) may have a value of 10^−2.
• For many applications, this is not acceptable.
  – Examples:
    • Speech requirement: Pr{B* ≠ B} < 10^−3
    • Data requirement: Pr{B* ≠ B} < 10^−6
• Channel coding can help to achieve such a high level of performance.
Channel Coding
B1B2.. Bk W1W2.. Wn 0 (1-p) 0 W*1W*2.. W*n B*1B*2.. B*k
p
Channel Channel
Encoder p Decoder
1 (1-p) 1

Physical Channel
Physical
Channel

• Channel Encoder: Mapping of k information bits in to an n-bit code word by


introducing redundancy.
• Channel Decoder: Inverse mapping of n received code bits back to k
information bits
• Code Rate r=k/n
• r<1

7
Channel Coding

• Use the majority decision rule.
• The process of introducing systematic redundancy to enable detection and correction of errors over the channel is termed channel coding. The code is known as an error control code.

Courtesy: Prof. Aditya K. Jagannatham
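As a numerical illustration of the majority decision rule (a sketch of my own, not from the notes): with a rate-1/3 repetition code, the majority-vote decoder errs only when two or more of the three transmitted copies are flipped by the BSC.

```python
# Post-decoding bit error probability of a 3-repetition code over a BSC
# with crossover probability p: majority vote fails when 2 or 3 of the
# three transmitted copies are flipped.
def repetition3_error_prob(p):
    return 3 * p**2 * (1 - p) + p**3

p = 1e-2                                # noisy-channel value from the slides
print(repetition3_error_prob(p))        # ≈ 2.98e-4, meets the speech target
```

At the cost of a code rate of 1/3, the raw error probability of 10^−2 drops by more than an order of magnitude.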
Channel Coding
• Purpose
– Deliberately add redundancy to the transmitted information, so that if an error occurs, the receiver can either detect or correct it.
• Source-channel separation theorem
– If delay is not an issue, the source coder and channel coder can be designed separately, i.e., the source coder packs the information as compactly as possible and the channel coder protects the packed information.
• Popular codes
– Linear block codes
– Cyclic codes (CRC)
– Convolutional codes (Viterbi, Qualcomm)
– LDPC codes, Turbo codes
Linear Block Codes

2^k k-bit messages → Binary Block Encoder → 2^k distinct n-bit codewords

• Information is divided into blocks of length k.
• r parity bits (check bits) are added to each block (total length n = k + r).
• Code rate R = k/n.
• An (n, k) block code is said to be linear if the vector sum of two codewords is a codeword.
• Tradeoffs between
– Efficiency
– Reliability
– Encoding/decoding complexity
• All arithmetic is performed using modulo-2 addition (XOR):

a:     0 0 1 1
b:     0 1 0 1
a ⊕ b: 0 1 1 0
Linear Block Codes
• Let the uncoded k data bits be represented by the vector
m = (m1, m2, …, mk)
and the corresponding codeword by the n-bit vector
c = (c1, c2, …, ck, c(k+1), …, c(n−1), cn)
• Each parity bit is a weighted modulo-2 sum of the data bits, with the symbol ⊕ denoting exclusive OR (modulo-2 addition).
Hamming Distance
Why Linear?
• Linear block codes
– Store k linearly independent codewords.
– The encoding process is a linear combination of the codewords g0, g1, …, g(k−1), weighted by the input message u = [u0, u1, …, u(k−1)].

Generator matrix:

G = [ g0     ]   [ g00       g01       …  g0,n−1     ]
    [ g1     ] = [ g10       g11       …  g1,n−1     ]
    [ ⋮      ]   [ ⋮         ⋮            ⋮          ]
    [ g(k−1) ]   [ g(k−1),0  g(k−1),1  …  g(k−1),n−1 ]
Systematic Codes
• For a systematic block code the dataword appears unaltered in the codeword, usually at the start.
• The generator matrix has the structure (k identity columns followed by R = n − k parity columns):

G = [ 1 0 … 0 | p11 p12 … p1R ]
    [ 0 1 … 0 | p21 p22 … p2R ] = [I | P]
    [ ⋮       | ⋮              ]
    [ 0 0 … 1 | pk1 pk2 … pkR ]

• P is often referred to as the parity matrix.

Linear Block Codes
▪ The codeword c of a linear block code is obtained as
c = mG
where m is the k-bit information (message) block and G is the generator matrix,
G = [Ik | P](k×n)
where Pi = remainder of [x^(n−k+i−1)/g(x)] for i = 1, 2, …, k, I is the unit (identity) matrix, and g(x) is the code generator polynomial.
▪ At the receiving end, the parity check matrix is given as:
H = [P^T | I(n−k)], where P^T is the transpose of the matrix P.
Linear Block Codes: Example
Example: Find the linear block code generator G if the code generator polynomial is g(x) = 1 + x + x^3 for a (7, 4) code; n = total number of bits = 7, k = number of information bits = 4, r = number of parity bits = n − k = 3.

pi = remainder of [x^(n−k+i−1)/g(x)] for i = 1, 2, …, k:

p1 = Rem[ x^3 / (1 + x + x^3) ] = 1 + x       → 110
p2 = Rem[ x^4 / (1 + x + x^3) ] = x + x^2     → 011
p3 = Rem[ x^5 / (1 + x + x^3) ] = 1 + x + x^2 → 111
p4 = Rem[ x^6 / (1 + x + x^3) ] = 1 + x^2     → 101

G = [ 1000 | 110 ]
    [ 0100 | 011 ] = [I | P]
    [ 0010 | 111 ]
    [ 0001 | 101 ]

I is the identity matrix; P is the parity matrix.
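The remainder computations above can be checked with a short sketch (my own illustration, not code from the notes); coefficient lists are written lowest degree first, so 1 + x is [1, 1, 0].

```python
# Rows of the parity matrix P from the generator polynomial g(x) = 1 + x + x^3:
# p_i = remainder of x^(n-k+i-1) divided by g(x), all arithmetic mod 2.
def x_power_mod(power, g):
    r = [0] * power + [1]                 # polynomial x^power
    while len(r) >= len(g):
        if r[-1]:                         # leading coefficient is nonzero
            shift = len(r) - len(g)
            for i, gbit in enumerate(g):
                r[shift + i] ^= gbit      # subtract (= XOR) a shifted g(x)
        r.pop()                           # drop the (now zero) leading term
    return r + [0] * (len(g) - 1 - len(r))

g = [1, 1, 0, 1]                          # g(x) = 1 + x + x^3
P = [x_power_mod(3 + i, g) for i in range(4)]   # remainders of x^3, x^4, x^5, x^6
print(P)   # [[1, 1, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]]
```

The four rows reproduce 110, 011, 111, 101 from the slide.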
Linear Block Codes: Example
◼ The generator polynomial can be used to determine the generator matrix G, which yields the parity bits for a given data word m by multiplying as follows:

c = m·G = [1 0 1 1] [ 1000110 ]
                    [ 0100011 ] = [1 0 1 1 | 1 0 0]
                    [ 0010111 ]      data    parity
                    [ 0001101 ]

◼ Other combinations of m can be used to determine all other possible code words.
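The multiplication c = m·G over GF(2) can be sketched in a few lines (an illustration of mine, not code from the notes): the codeword is the XOR of the rows of G selected by the 1-bits of m.

```python
# Mod-2 encoding c = m.G for the (7, 4) code of the example.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 0, 1],
]

def encode(m, G):
    """Codeword as the XOR (mod-2 sum) of the rows of G selected by m."""
    c = [0] * len(G[0])
    for bit, row in zip(m, G):
        if bit:
            c = [a ^ b for a, b in zip(c, row)]
    return c

print(encode([1, 0, 1, 1], G))   # [1, 0, 1, 1, 1, 0, 0], i.e. [1011|100]
```

Looping over all 16 messages enumerates the full set of codewords.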
Block Coding, Decoding

Transmitter: message vector m × generator matrix G → code vector c
Receiver: code vector c × parity check matrix H^T → null vector 0

Operations of the generator matrix and the parity check matrix
◼ Consider a (7, 4) linear block code, given by G as

G = [ 1000 | 110 ]        H = [P^T | I(n−k)] = [ 1011 | 100 ]
    [ 0100 | 011 ]                             [ 1110 | 010 ]
    [ 0010 | 111 ]                             [ 0111 | 001 ]
    [ 0001 | 101 ]

For convenience, the code vector is expressed as c = [m | cp], where cp = mP is the (n−k)-bit parity check vector.
Block Codes: Check Error

Define the matrix H^T as

H^T = [ P       ]
      [ I(n−k)  ]

The received code vector is x = c ⊕ e, where e is an error vector. The matrix H^T has the property

cH^T = [m | cp] [ P      ] = mP ⊕ cp = cp ⊕ cp = 0
                [ I(n−k) ]
Block Codes: Linear Block Codes
✓ The transpose of the matrix H^T is H = [P^T | I(n−k)]
➢ where I(n−k) is an (n−k)-by-(n−k) unit matrix and P^T is the transpose of the parity matrix P.
✓ H is called the parity check matrix.
✓ Compute the syndrome as s = xH^T = (c ⊕ e)H^T = cH^T ⊕ eH^T = eH^T
Linear Block Codes
➢ If s is 0 then the message is correct; otherwise there are errors in it, and from the commonly known error patterns the correct message can be decoded.
• For the (7, 4) linear block code, given by G as

G = [ 1000 | 111 ]        H = [ 1110 | 100 ]
    [ 0100 | 110 ]            [ 1101 | 010 ]
    [ 0010 | 101 ]            [ 1011 | 001 ]
    [ 0001 | 011 ]

➢ For m = [1 0 1 1], c = mG = [1 0 1 1 | 0 0 1]. If there is no error, the received vector x = c, and s = cH^T = [0 0 0].
Error Correction: Example
• Let c suffer an error such that the received vector is
x = c ⊕ e = [1 0 1 1 0 0 1] ⊕ [0 0 1 0 0 0 0] = [1 0 0 1 0 0 1]

Then the syndrome is

s = xH^T = [1 0 0 1 0 0 1] [ 1 1 1 ]
                           [ 1 1 0 ]
                           [ 1 0 1 ]
                           [ 0 1 1 ] = [1 0 1] = eH^T
                           [ 1 0 0 ]
                           [ 0 1 0 ]
                           [ 0 0 1 ]

➢ The syndrome [1 0 1] equals the third row of H^T, indicating the error position (bit 3) and giving the corrected vector [1 0 1 1 0 0 1].
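The syndrome decoding above can be sketched as follows (an illustrative snippet of mine, not from the notes), using the H^T of this example: a nonzero syndrome that matches row i of H^T indicates a single error in bit i.

```python
# Syndrome decoding for the (7, 4) code: s = x.H^T (mod 2); a nonzero
# syndrome equal to row i of H^T flags a single error in bit i.
H_T = [
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
]

def syndrome(x):
    s = [0, 0, 0]
    for bit, row in zip(x, H_T):
        if bit:
            s = [a ^ b for a, b in zip(s, row)]
    return s

def correct(x):
    s = syndrome(x)
    if s == [0, 0, 0]:
        return x                       # no detected error
    i = H_T.index(s)                   # matching row = error position
    y = list(x)
    y[i] ^= 1                          # flip the erroneous bit
    return y

x = [1, 0, 0, 1, 0, 0, 1]              # received vector from the slide
print(syndrome(x))                     # [1, 0, 1]
print(correct(x))                      # [1, 0, 1, 1, 0, 0, 1]
```

This reproduces the slide's result: syndrome [1 0 1], corrected word [1 0 1 1 0 0 1].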
Error Correction: Example-2
• Consider one of the valid code words of the (7, 4) block code, v = 1 1 0 0 1 0 1, and the transpose of the (7, 4) parity check matrix:

H^T = [ 1 0 0 ]
      [ 0 1 0 ]
      [ 0 0 1 ]
      [ 1 1 0 ]
      [ 0 1 1 ]
      [ 1 1 1 ]
      [ 1 0 1 ]

Computing s = vH^T component by component (modulo-2 addition: 0⊕0 = 0, 1⊕0 = 1, 0⊕1 = 1, 1⊕1 = 0):

s0 = 1⊕0⊕0⊕0⊕0⊕0⊕1 = 0
s1 = 0⊕1⊕0⊕0⊕1⊕0⊕0 = 0
s2 = 0⊕0⊕0⊕0⊕1⊕0⊕1 = 0

Syndrome: vH^T = [0 0 0]. The null vector indicates no error.
◼ Suppose the valid code word v = 1100101 gets corrupted to 1000101. Computing the syndrome of the received word [1 0 0 0 1 0 1] with the same H^T:

s0 = 1⊕0⊕0⊕0⊕0⊕0⊕1 = 0
s1 = 0⊕0⊕0⊕0⊕1⊕0⊕0 = 1
s2 = 0⊕0⊕0⊕0⊕1⊕0⊕1 = 0

Syndrome: [0 1 0]. Matching the syndrome with the rows of the transposed parity check matrix: it equals the second row, which corresponds to the second bit from the left.
Note:
◼ Each row in the transposed parity check matrix corresponds to a bit in the code word.
– By matching the non-zero syndrome with the contents of the rows of the transposed parity check matrix, the corresponding corrupted bit can be detected and subsequently corrected.
– The process converts the erroneous code 1000101 to the correct code 1100101.
1 1 0 1 0 0 0
Encoding Circuit 0
G=
1 1 0 1 0 0

1 1 1 0 0 1 0
 
1 0 1 0 0 0 1

Input u Message Register


u0 u1 u2 u3
To channel
Output v
+ + +

v0 v1 v2
Parity Register

[u0 u1 u2 u3] Encoder [v0 v1 v2 u0 u1 u2 u3]


33 Circuit
Syndrome Circuit

The received bits r0 r1 … r6 feed three XOR (+) gates that compute the syndrome bits s0 s1 s2:

s0 = r0 + r3 + r5 + r6
s1 = r1 + r3 + r4 + r5
s2 = r2 + r4 + r5 + r6

H^T = [ 1 0 0 ]
      [ 0 1 0 ]
      [ 0 0 1 ]
      [ 1 1 0 ]
      [ 0 1 1 ]
      [ 1 1 1 ]
      [ 1 0 1 ]
Exercise-1
Advantages and disadvantages of linear codes

Advantages
1. The minimum distance h(C) is easy to compute if C is a linear code.
2. Linear codes have simple specifications.
• To specify a linear [n, k]-code, it is enough to list k (basis) codewords.
3. There are simple encoding/decoding procedures for linear codes.

Disadvantages
1. The transmission bandwidth requirement is higher.
2. The extra bits reduce the transmitter's bit rate and also its power per message bit.
Hamming Code

Non-systematic form of Hamming Code

Question: If the received Hamming code is 0001110 with even parity, then detect and correct the error.
Answer: ???
Example: Hamming Code

Encoding Process

Decoding Process

Syndrome = p4p2p1
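The question above can be checked with a short sketch (my own illustration, not from the notes; it assumes the standard non-systematic layout with parity bits at positions 1, 2 and 4, consistent with the syndrome p4p2p1 given above).

```python
# Even-parity (7, 4) Hamming check, parity bits at positions 1, 2, 4
# (assumed standard layout). The syndrome p4 p2 p1, read as a binary
# number, gives the position of a single erroneous bit (0 = no error).
def hamming_correct(bits):                 # bits[0] is position 1
    def parity(positions):
        return sum(bits[p - 1] for p in positions) % 2
    p1 = parity([1, 3, 5, 7])              # checks positions 1, 3, 5, 7
    p2 = parity([2, 3, 6, 7])              # checks positions 2, 3, 6, 7
    p4 = parity([4, 5, 6, 7])              # checks positions 4, 5, 6, 7
    pos = 4 * p4 + 2 * p2 + p1
    if pos:
        bits = bits[:]
        bits[pos - 1] ^= 1                 # flip the bit the syndrome points at
    return pos, bits

pos, fixed = hamming_correct([0, 0, 0, 1, 1, 1, 0])
print(pos, fixed)
```

Running this on the received word 0001110 locates and flips the erroneous bit, which answers the exercise.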
Overcome limitations of LBC
• The implementation of linear block codes requires circuits capable of performing matrix multiplication and of comparing the results with various binary numbers.
– Although integrated circuits have been developed to implement the most common codes, the circuitry can become quite complex for very long blocks of code.
• A special case of the block code, the cyclic code, can be electronically implemented relatively easily.
Cyclic Codes

Definition of cyclic codes:
An (n, k) linear code C is called a cyclic code if every cyclic shift of a codeword in C is also a codeword in C.

Ref: Simon Haykin, "Digital Communication", John Wiley and Sons, 2013.
Cyclic Code: Encoder
Cyclic Code: Example
• Example: Find the codeword c(x) if m(x) = 1 + x + x^2 and g(x) = 1 + x + x^3, for a (7, 4) cyclic code.
• We have n = total number of bits = 7, k = number of information bits = 4, r = number of parity bits = n − k = 3.

cp(x) = rem[ m(x)·x^(n−k) / g(x) ] = rem[ (x^5 + x^4 + x^3) / (x^3 + x + 1) ] = x

Then,
c(x) = m(x)·x^(n−k) + cp(x) = x + x^3 + x^4 + x^5
     = 0111010  (coefficients listed from x^6 down to x^0)
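The encoding steps above can be reproduced with a small polynomial-division sketch (illustrative, not code from the notes); polynomials are bit lists with index = power of x, lowest degree first.

```python
# Systematic cyclic encoding for the (7, 4) example:
# cp(x) = remainder of m(x).x^(n-k) divided by g(x), arithmetic mod 2.
def poly_mod(num, den):
    num = num[:]
    while len(num) >= len(den):
        if num[-1]:                        # leading coefficient is nonzero
            shift = len(num) - len(den)
            for i, d in enumerate(den):
                num[shift + i] ^= d        # subtract (= XOR) a shifted den
        num.pop()                          # drop the (now zero) leading term
    return num

m = [1, 1, 1, 0]                           # m(x) = 1 + x + x^2  (k = 4 bits)
g = [1, 1, 0, 1]                           # g(x) = 1 + x + x^3
shifted = [0, 0, 0] + m                    # m(x).x^(n-k) with n-k = 3
cp = poly_mod(shifted, g)
c = cp + m                                 # systematic codeword coefficients
print(cp)  # [0, 1, 0]             -> cp(x) = x
print(c)   # [0, 1, 0, 1, 1, 1, 0] -> c(x) = x + x^3 + x^4 + x^5
```

Reading c from highest to lowest degree gives 0111010, matching the slide.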
Systematic: Cyclic Code

Here
p=x
q = n-k
Systematic: Cyclic Code
Non-systematic: Cyclic Code
Cyclic Code: Pros & Cons
Advantages of cyclic codes
• They have an excellent mathematical structure. This makes the design of error correcting codes with multiple-error correction capability relatively easy.
• The encoding and decoding circuits for cyclic codes can be easily implemented using shift registers.
• The error correcting and decoding methods of cyclic codes are simpler and easy to implement. These methods eliminate the storage (large memories) needed for lookup table decoding, making the codes powerful and efficient.

Disadvantages of cyclic codes
• Even though error detection is simple, error correction is slightly more complicated, owing to the complexity of the combinational logic circuit used for error correction.
Cyclic Redundancy Check (CRC)

Used in Ethernet, ARQ.
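A CRC applies the same polynomial-division idea purely for error detection. A minimal sketch (my own illustration; the data word and the reuse of g(x) = x^3 + x + 1 from the earlier examples are assumptions for demonstration, real CRCs such as Ethernet's CRC-32 use much longer generators):

```python
# CRC as polynomial division: append n-k zero bits, divide by the generator,
# transmit data + remainder; the receiver recomputes and checks for zero.
def crc_remainder(bits, gen):
    """bits, gen: lists of 0/1, most significant (highest power) first."""
    work = bits + [0] * (len(gen) - 1)     # append n-k zeros
    for i in range(len(bits)):
        if work[i]:
            for j, gbit in enumerate(gen):
                work[i + j] ^= gbit        # subtract (= XOR) a shifted gen
    return work[-(len(gen) - 1):]          # remainder = the check bits

data = [1, 1, 1, 0]                        # example data word (hypothetical)
gen = [1, 0, 1, 1]                         # g(x) = x^3 + x + 1, MSB first
crc = crc_remainder(data, gen)
frame = data + crc                         # transmitted frame

# An uncorrupted frame divides evenly; a flipped bit leaves a nonzero remainder.
assert crc_remainder(frame, gen) == [0, 0, 0]
assert crc_remainder([1 - frame[0]] + frame[1:], gen) != [0, 0, 0]
```

This is the detection-only use of cyclic codes: the receiver does not try to locate the error, it simply requests retransmission (ARQ) when the check fails.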
Cyclic Redundancy Checker
Thank You!
