
8 IT 01

DIGITAL AND WIRELESS COMMUNICATION:

UNIT II
Error Controlling and Coding
by
Prof. (Dr) Prashant V Ingole
UNIT II: Error Controlling and Coding
• Methods of controlling errors,
• Linear block codes; matrix description of linear block codes,
• Error detection and error correction capabilities of linear block codes,
• Single error correcting Hamming codes,
• Cyclic codes,
• Syndrome calculation,
• Error detection,
• Introduction to Convolutional codes.
Error Controlling and Coding
• In a transmission system, noise is encountered whenever information flows from one point to another.
• This noise limits the transmission rate (the channel capacity) and introduces errors.
• The probability of error for a particular signalling scheme is a function of the signal-to-noise ratio at the receiver input and of the information rate.
• In practical systems the maximum signal power and the bandwidth of the channel are restricted to fixed values.
• The noise power spectral density is also fixed for a particular operating environment.
• Under these constraints it is often not possible to arrive at a signalling scheme that yields an acceptable probability of error for a given application.
Error Controlling and Coding
• Under these constraints it is often not possible to arrive at a signalling scheme that yields an acceptable probability of error for a given application.
• The only practical alternative for reducing the probability of error is to use error control coding, for example:
• Use of parity
• Use of CRC
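A single parity bit is the simplest of these schemes: it detects (but cannot correct or locate) any odd number of bit errors. A minimal sketch (function names are illustrative, not from the slides):

```python
# Even parity: append one check bit so every codeword has even weight.

def add_parity(bits):
    """Append an even-parity bit to a list of 0/1 message bits."""
    return bits + [sum(bits) % 2]

def parity_ok(word):
    """A received word passes the check iff its total weight is even."""
    return sum(word) % 2 == 0

sent = add_parity([1, 0, 1, 1])      # message weight 3 -> parity bit 1
corrupted = sent.copy()
corrupted[2] ^= 1                    # flip one bit: the check now fails
```

Note that flipping two bits of `sent` would pass the check again, which is why a single parity bit only detects odd numbers of errors.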
Methods for Controlling Errors
• We have a digital communication system for transmitting the binary output of a source encoder over a noisy channel at a rate of rb bits/sec.
• Due to channel noise, the bit stream recovered by the receiver occasionally differs from the transmitted sequence.
• It is desired that the probability of error P(E) be less than some prescribed value.
• Actual data transmission over the channel is accomplished by a modem that can operate at a rate r ≥ rb.
• The probability of error for the modem depends on the data rate.
• The engineer/designer is often required to use a particular modem because of cost and other practical constraints, such as the signalling scheme.
• Its error probability may be higher than the desired error probability.
• Hence we are asked to design an error control coding scheme so that the overall probability of error is acceptable.
Methods for Controlling Errors
[Figure 1: Block diagram of the coded system]

Input message (bit rate rb)
  → Encoder (block of k message bits → n-bit codeword: k message bits + (n−k) check bits; coded bit rate rc = rb · n/k)
  → Modulator
  → Noisy channel (capacity C)
  → Demodulator
  → Decoder (n-bit codewords → blocks of k message bits)
  → Output message (bit rate rb)
Error Control Coding Example
Suppose we want to transmit data over a telephone link that has a usable bandwidth of 3000 Hz and a maximum S/N at the output of 13 dB, at a rate of 1200 bits/sec, with a probability of error less than 10^-6. We are given a DPSK modem that can operate at 1200, 2400 or 3600 bits/sec with error probabilities 2 × 10^-5, 4 × 10^-5 and 8 × 10^-5, respectively. We are asked to design an error control coding scheme that would yield an overall probability of error < 10^-6.
• In terms of the notation in Figure 1 we have C = 3000 log2(1 + S/N) = 3000 log2(1 + 20) ≈ 13,000 bits/sec, rb = 1200 bits/sec and P(E) < 10^-6, respectively.
Here rb < C, so according to Shannon's theorem we should be able to transmit data with an arbitrarily small probability of error.
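The capacity figure above follows directly from the Shannon-Hartley formula; a quick check (treating 13 dB as a power ratio of about 20):

```python
# C = B * log2(1 + S/N) for the telephone link in the example.
import math

B = 3000.0                    # usable bandwidth, Hz
snr = 10 ** (13 / 10)         # 13 dB -> linear power ratio (~20)
C = B * math.log2(1 + snr)    # Shannon capacity, ~13,000 bits/sec

# The data rate of 1200 bits/sec is far below C, so by Shannon's theorem
# an arbitrarily small probability of error is achievable in principle.
```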
Error Control Coding Example
• Consider an error control coding scheme for this problem wherein the triplets 000 and 111 are transmitted when the message bit is 0 and 1, respectively. These triplets are called codewords. The triplets of 0's and 1's are certainly redundant: two of the three bits in each triplet carry no new information.
• Data comes out of the channel encoder at a rate of 3600 bits/sec, and at this data rate the modem has an error probability of 8 × 10^-5.
• The decoder at the receiver looks at the received triplets and extracts the information-bearing digit using the majority logic decoding scheme shown below:

Received triplet:   000 001 010 100 011 101 110 111
Output message bit:  0   0   0   0   1   1   1   1
Error Control Coding Example

Received triplet:   000 001 010 100 011 101 110 111
Output message bit:  0   0   0   0   1   1   1   1

• Notice that 000 and 111 are the only valid codewords. The reader can verify that the information-bearing bit is received correctly if no more than one of the bits in the triplet is affected by the channel noise.
• Without error control coding, the lowest achievable error probability for the given modem is 2 × 10^-5. With error control coding, however, the receiver output bit differs from the input message bit only if two or more of the bits in a triplet are in error due to channel noise.
• It is thus possible to detect and correct errors by adding extra bits, called check bits, to the message stream.
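The rate-1/3 repetition scheme above can be sketched in a few lines, taking the per-bit error probability at 3600 bits/sec as p = 8 × 10^-5 (as read from the example figures):

```python
# Repetition code: each message bit is sent as 000 or 111 and the decoder
# takes a majority vote over each received triplet.

def encode(bit):
    return [bit] * 3

def decode(triplet):
    return 1 if sum(triplet) >= 2 else 0

# A single corrupted bit in a triplet is corrected:
received = [0, 1, 0]                 # 000 was sent, middle bit flipped
assert decode(received) == 0

# The decoded bit is wrong only when 2 or 3 channel errors hit one triplet:
p = 8e-5
p_word = 3 * p**2 * (1 - p) + p**3   # ~1.9e-8, well below the 2e-5 uncoded floor
```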
Methods for Controlling Errors
• The channel encoder in the earlier example corrected possible errors in the received sequence. This method of controlling errors at the receiver, by attempting to correct noise-induced errors, is called the forward-acting error correction method.
• If system considerations permit, errors can be handled in an entirely different manner:
• The decoder, upon examining the demodulator output, accepts the received sequence if it matches a valid message sequence. If not, the decoder discards the received sequence and notifies the transmitter (over a reverse channel) that errors have occurred and that the message must be retransmitted. Thus the decoder attempts to detect errors but does not attempt to correct them. This method of error control is called error detection.
• Error detection schemes yield a lower overall probability of error than error correction schemes.
Methods for Controlling Errors
• In our previous example, if the decoder is used for error detection only, then it will accept 000 and 111, reject all other triplets, and request retransmission.
• Now an information bit will be incorrectly decoded at the receiver only when all three bits in a codeword are in error.
• Thus P(error) = p^3 = (8 × 10^-5)^3 ≈ 5.1 × 10^-13, which is much lower than the probability of error for the error correction scheme (≈ 3p^2 ≈ 1.9 × 10^-8).
It is important to note that error detection and retransmission of messages require a reverse channel, which may not be available in some applications.
Furthermore, error detection schemes slow down the effective rate of data transmission, since the transmitter has to wait for an acknowledgment from the receiver before transmitting the next message.
Methods for Controlling Errors
• Forward-acting error correction and error detection are usually used separately, but occasionally it may be necessary to use both in a particular system.

Types of Errors
• Gaussian noise and random errors
• Impulse (shot) noise and burst errors

Types of Codes
• Block codes
• Convolutional codes
Classification of Error Control Codes

Error Control Codes
├── Block Codes
│   ├── Binary
│   │   ├── Systematic
│   │   └── Non-Systematic
│   └── Non-Binary
│       ├── Systematic
│       └── Non-Systematic
└── Convolutional Codes


Linear Block Codes
• A block code is a coding scheme in which each block of k message bits is encoded into a block of n > k bits by adding n−k check bits derived from the k message bits.
• If the bit-by-bit XOR of any two codewords is again a valid codeword, the code is called a linear block code.
• The n-bit block at the channel encoder output is called a codeword, and codes in which the message bits appear at the beginning of the codeword are called systematic codes.
• If each of the codewords can be expressed as a linear combination of k linearly independent code vectors, the code is a linear (n, k) block code; if in addition the message bits appear unchanged at the start of each codeword, it is a systematic linear block code.
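The XOR closure property in the definition above can be checked directly. A minimal sketch (the two small candidate codes are illustrative, not from the slides):

```python
# A block code is linear iff the XOR of any two codewords is again a codeword.

def is_linear(code):
    """code: a set of equal-length tuples of 0/1 bits."""
    return all(
        tuple(a ^ b for a, b in zip(u, v)) in code
        for u in code for v in code
    )

# The even-weight length-3 code is linear; replacing 110 with 111 breaks closure.
linear_code = {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)}
not_linear  = {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)}
```

Note that closure under XOR forces the all-zero word to be a codeword (any word XORed with itself).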
Matrix Description of Linear Block Codes
• The encoding operation in a linear block coding scheme consists of the following two steps:
1. The information sequence is segmented into message blocks, each containing k successive information bits.
2. The encoder transforms each message block into a larger block of n bits according to a set of rules.
• The n−k additional bits are generated from linear combinations of the message bits, so we can describe the encoding operation using matrices.
• Let us denote a message block as a k-tuple vector D = (d1, d2, …, dk), where each message bit is '0' or '1'. Thus we have 2^k distinct message blocks; for example, for k = 2 there are four combinations D = (00, 01, 10, 11), and for k = 3, D = (000, 001, 010, 011, 100, 101, …, 111).
• Each message block is transformed by the encoder into a codeword C of length n bits, C = (c1, c2, …, cn), and there are 2^k distinct codewords, one unique codeword for each message block.
Matrix Description of Linear Block Codes
• This set of 2^k codewords is called an (n, k) block code. The codewords are also called code vectors. The rate efficiency of this code is k/n.
• In a systematic linear block code the first k bits of the codeword are the message bits, that is, ci = di, i = 1, 2, 3, …, k.
• The last n−k bits in the codeword are check bits generated from the k message bits according to some predetermined rule:
• c(k+1) = p11 d1 + p21 d2 + … + pk1 dk
• c(k+2) = p12 d1 + p22 d2 + … + pk2 dk
• :
• cn = p1(n−k) d1 + p2(n−k) d2 + … + pk(n−k) dk
• The coefficients pij in the above set of equations are 1 or 0, and the additions are modulo-2 operations.
• So the codeword is C = [d1 d2 d3 … dk c(k+1) c(k+2) … cn]
Matrix Description of Linear Block Codes
• We can combine the above set of equations into matrix form as

[c1 c2 c3 … cn] = [d1 d2 … dk] G, where G is a k × n matrix.

In condensed form this is C = D G,
where G is called the generator matrix and is formed as

G = [Ik : P]

where Ik is the k × k identity matrix and P is an arbitrary k × (n−k) coefficient matrix.

When P is specified, it defines the (n, k) block code completely. An important step in the design of an (n, k) block code is the selection of a P matrix so that the code generated by G has certain desirable properties, such as ease of implementation, the ability to correct random and burst errors, and high rate efficiency.
Example of Matrix Description of LBC
• The generator matrix for a (6,3) linear block code is given below; find all code vectors of this code.

G = [ 1 0 0 0 1 1 ]
    [ 0 1 0 1 0 1 ]
    [ 0 0 1 1 1 0 ]

Solution: The message block size in this problem is k = 3 and the size of the code vectors is n = 6. There are eight (2^3) possible message blocks: (0,0,0), (0,0,1), (0,1,0), (0,1,1), (1,0,0), (1,0,1), (1,1,0), (1,1,1).
The code vector for each message block can be calculated from the equation C = D G. It gives the following output codes:

S. No Messages Code Vectors


1 0 0 0 0 0 0 0 0 0
2 0 0 1 0 0 1 1 1 0
3 0 1 0 0 1 0 1 0 1
4 0 1 1 0 1 1 0 1 1
5 1 0 0 1 0 0 0 1 1
6 1 0 1 1 0 1 1 0 1
7 1 1 0 1 1 0 1 1 0
8 1 1 1 1 1 1 0 0 0
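The table above can be reproduced mechanically from C = D G over GF(2), where each message bit selects a row of G to XOR in (the G rows below are the ones recovered from the codeword table):

```python
# (6,3) code: C = D * G over GF(2), i.e. XOR the rows of G selected by D.
G = [
    [1, 0, 0, 0, 1, 1],
    [0, 1, 0, 1, 0, 1],
    [0, 0, 1, 1, 1, 0],
]

def encode(d, G):
    """Return the codeword for message d as a list of bits."""
    c = [0] * len(G[0])
    for bit, row in zip(d, G):
        if bit:
            c = [x ^ y for x, y in zip(c, row)]
    return c

# All eight codewords, keyed by message block:
table = {(a, b, c): encode((a, b, c), G)
         for a in (0, 1) for b in (0, 1) for c in (0, 1)}
```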
Matrix Description of LBC and Parity Check Matrix
• Associated with each (n, k) block code is a parity check matrix H, which is defined as:

H = [P^T : I(n−k)]   (1)

H^T = [ P      ]
      [ I(n−k) ]   (2)

The parity check matrix can be used to verify that a code was generated by the generator matrix: C is a codeword in the (n, k) block code generated by G if and only if

C H^T = 0   (3)

• Generator matrix G: encoding operation
• Parity check matrix H: decoding operation
Matrix Description of LBC and Syndrome
We can express the corrupted codeword as R = C + E.
Say the sent code is [0 0 0 0 0 0 0] and the 4th-bit error pattern is E = [0 0 0 1 0 0 0].
Then the corrupted received code is
R = [0 0 0 0 0 0 0] + [0 0 0 1 0 0 0] = [0 0 0 1 0 0 0]
We can locate the exact bit where the error occurred using the formula S = R H^T.
Here we find S = [0 0 0 1 0 0 0] H^T = [0 1 1], the 4th row of H^T.

Similarly we can calculate the syndrome for errors in the other bits. So we can construct an error detection syndrome table: after the received code is checked using the matrix H, we can deduce whether the received code is error-free, and, if an error has occurred during transmission, which bit was received in error.
Matrix Description of LBC and Syndrome

S. No  Error Pattern  Syndrome  Comment
1      0000000        0 0 0     All zeros: no error (cleanly received)
2      1000000        1 1 1     Error in first bit; first row of H^T
3      0100000        1 1 0     Error in second bit; second row of H^T
4      0010000        1 0 1     Error in third bit; third row of H^T
5      0001000        0 1 1     Error in fourth bit; fourth row of H^T
6      0000100        1 0 0     Error in fifth bit; fifth row of H^T
7      0000010        0 1 0     Error in sixth bit; sixth row of H^T
8      0000001        0 0 1     Error in seventh bit; seventh row of H^T
Error Detection and Error Correction
Capabilities of LBC
• Some basic terminology used in defining the error control capabilities of a linear block code:
• Weight: the weight w of a code vector C is defined as the number of nonzero components in C.
Ex: w(1 0 1) = 2, w(0 0 0) = 0, w(1 1 1) = 3
    w(0 1 0 0) = 1, w(0 1 1 0) = 2, w(1 0 1 1) = 3
• Hamming distance: the distance d(U, V) between two code vectors U and V is defined as the number of components in which they differ.
Ex: d((1 0 1), (1 0 0)) = 1
    d((0 0 1), (1 0 0)) = 2
• Minimum distance: the minimum distance d_min of a block code is the smallest distance between any pair of distinct codewords in the code.
Matrix Description of LBC and Syndrome
Let C be the code vector that was transmitted over a noisy channel and let R be the noise-corrupted vector received at the receiver.
The vector R is the sum of the original vector C and an error vector E:
R = C + E   (4)
The receiver does not know C or E; its function is to decode C from R, and then the message block D from C.
The receiver begins the decoding operation by determining an (n−k)-tuple S defined as
S = R H^T   (5)
The vector S is called the error syndrome of R. Substituting (4) into (5), S can be rewritten as
S = (C + E) H^T = C H^T + E H^T
We know from equation (3) that C H^T = 0.
So we have S = E H^T.
Thus the syndrome of a received vector is zero if R is a valid code vector. If it is nonzero, an error has occurred and correction is needed.
Matrix Description of LBC and Syndrome
• Furthermore, S is related to the error vector E, and the decoder uses S to detect and correct errors by referring to the syndrome table.
--------------------------------------------------------------------------------------------
Ex: Consider the (7,4) block code generated by G = [I4 : P]. Find the generated codes and the syndrome matrix.
Solution: Here k = 4 and n = 7, so the parity code consists of 3 bits.

So  G = [ 1 0 0 0 : 1 1 1 ]
        [ 0 1 0 0 : 1 1 0 ]
        [ 0 0 1 0 : 1 0 1 ]
        [ 0 0 0 1 : 0 1 1 ]

Now, as k = 4, the messages are represented as D = (d1, d2, d3, d4).
Matrix Description of LBC and Syndrome
The generator matrix G is responsible for generating the transmitted code; another matrix H, responsible for checking parity, is placed at the receiver.
The k = 4 bit input messages are D = (0000), (0001), (0010), (0011), (0100), (0101), (0110), (0111), (1000), (1001), (1010), (1011), (1100), (1101), (1110), (1111).
The codewords are formed by multiplying the message vector by the generator matrix, C = D G.
For D = (0 1 0 1):
C = D G = [0 1 0 1 : 1 0 1]
Similarly, for D = (1 0 1 1):
C = D G = [1 0 1 1 : 0 0 1]
Matrix Description of LBC and Syndrome
For D = (0 0 1 1):
C = D G = [0 0 1 1 : 1 1 0]
Similarly, for D = (1 1 0 1):
C = D G = [1 1 0 1 : 0 1 0]

The complete code matrix is built in the same way for all sixteen messages.

When these codes are transmitted over a noisy channel there is a possibility of errors in the codes. When a code is received, the receiver does not know whether it encountered an error during transmission, so the receiver must accept it only after checking its validity using the parity check matrix H.
Matrix Description of LBC and Syndrome
The parity check matrix H is maintained by the receiver; it is an (n−k) × n matrix composed as H = [P^T : I3],

so  H = [ 1 1 1 0 : 1 0 0 ]
        [ 1 1 0 1 : 0 1 0 ]
        [ 1 0 1 1 : 0 0 1 ]

The error syndrome S can be calculated from S = R H^T = E H^T,
where the error vector E simply marks the place of the error in the received codeword. Thus, if due to the noisy channel an error occurs in the 1st bit, E = [1 0 0 0 0 0 0]; if it is in the 4th bit, E = [0 0 0 1 0 0 0]. If the received codeword is without error, then E = [0 0 0 0 0 0 0].
Error Detection and Error Correction
Capabilities of LBC
• Theorem 1: The minimum distance of a linear block code is equal to the minimum weight of any nonzero codeword in the code. For the (6,3) code below, the minimum nonzero weight is 3, so d_min = 3.

S. No Codewords Weight
1 0 0 0 0 0 0 0
2 0 0 1 1 1 0 3
3 0 1 0 1 0 1 3
4 0 1 1 0 1 1 4
5 1 0 0 0 1 1 3
6 1 0 1 1 0 1 4
7 1 1 0 1 1 0 4
8 1 1 1 0 0 0 3
Error Detection and Error Correction
Capabilities of LBC
• Theorem 2: A linear block code with minimum distance d_min can correct up to t = ⌊(d_min − 1)/2⌋ errors and detect up to d_min − 1 errors in each codeword, where ⌊x⌋ denotes the largest integer no greater than x.
Proof: Let C be the transmitted codeword and R be the received word. Let C' be any other codeword. Then the Hamming distances d(C, C'), d(C, R) and d(C', R) satisfy
d(C, R) + d(C', R) ≥ d(C, C')
since d(U, V) = weight of U + V, where the addition is on a bit-by-bit basis in modulo-2 arithmetic with no carry. If an error pattern of t' errors occurs, then the Hamming distance between the transmitted vector C and the received vector R is d(C, R) = t'.
Now, since the code is assumed to have minimum distance d_min,
d(C, C') ≥ d_min
Error Detection and Error Correction
Capabilities of LBC
• Theorem 2 (continued): A linear block code with minimum distance d_min can correct up to t = ⌊(d_min − 1)/2⌋ errors in each codeword, where ⌊x⌋ denotes the largest integer no greater than x.
• Since the code has minimum distance d_min, we have d(C, C') ≥ d_min and d(C, R) = t', so from the earlier inequality
d(C', R) ≥ d_min − t'
The decoder will correctly identify C as the transmitted vector if d(C, R) is less than d(C', R), where C' is any other codeword of the code. For d(C, R) < d(C', R), the number of errors t' should satisfy
t' < d_min − t', that is, t' < d_min / 2
Thus a LBC with minimum distance d_min can correct up to t errors if
t ≤ ⌊(d_min − 1)/2⌋
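Both theorems are easy to check numerically on the (6,3) code tabulated earlier: by Theorem 1, d_min is the smallest nonzero codeword weight, and by Theorem 2 the number of correctable errors follows from it.

```python
# d_min of a linear code = minimum nonzero codeword weight (Theorem 1);
# t = floor((d_min - 1) / 2) errors are correctable (Theorem 2).
codewords = [
    (0, 0, 0, 0, 0, 0), (0, 0, 1, 1, 1, 0), (0, 1, 0, 1, 0, 1),
    (0, 1, 1, 0, 1, 1), (1, 0, 0, 0, 1, 1), (1, 0, 1, 1, 0, 1),
    (1, 1, 0, 1, 1, 0), (1, 1, 1, 0, 0, 0),
]
d_min = min(sum(c) for c in codewords if any(c))   # skip the all-zero word
t = (d_min - 1) // 2                               # correctable errors per word
```

So this (6,3) code has d_min = 3 and corrects a single error per codeword, matching the single-error-correcting Hamming codes discussed next.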


Single Error Correcting Hamming Codes
• A linear block code with minimum distance d_min can correct up to ⌊(d_min − 1)/2⌋ errors. So a LBC capable of correcting single errors must have a minimum distance d_min = 3. Such codes are extremely simple to construct.
• When a single error occurs, say in the i-th bit of the codeword, the syndrome of the received vector is equal to the i-th row of H^T.
• Hence, if we choose the n rows of the n × (n−k) matrix H^T to be distinct, then the syndromes of all single errors will be distinct and we can correct single errors.
Single Error Correcting Hamming Codes
• While doing this, there are two important constraints:
1. We must not use an all-zero row, because the syndrome of all 0's corresponds to the no-error situation.
2. The last n−k rows of H^T must be chosen so that we have an identity matrix in H^T.
• Each row of H^T has n−k entries, each of which can be 0 or 1. Hence there are 2^(n−k) distinct possible rows, out of which we must select the n distinct rows of H^T.
• Since the matrix H^T has n rows, for all of them to be distinct and nonzero, the number of parity bits in this (n, k) code must satisfy the inequality
2^(n−k) ≥ n + 1
• So, given a message block size k, we can determine the minimum size n for the codewords from this inequality.
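The inequality above can be solved for n by simple search; a sketch:

```python
# Smallest n for a single-error-correcting (n, k) code: 2**(n-k) >= n + 1.

def min_n(k):
    n = k + 1                      # at least one check bit
    while 2 ** (n - k) < n + 1:
        n += 1
    return n
```

For example, k = 4 gives n = 7 (the (7,4) Hamming code) and k = 11 gives n = 15 (the (15,11) Hamming code).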
Binary Cyclic Codes
• Binary cyclic codes form a subclass of the linear block codes described in the preceding sections. Cyclic codes are attractive for two reasons. First, encoding and syndrome calculations can be easily implemented using simple shift registers with feedback connections. Second, these codes have a fair amount of mathematical structure that makes it possible to design codes with useful error correcting properties.
• We saw in the preceding section that linear block codes can be described well using the matrix representation. In this section we will develop a polynomial representation for cyclic codes and use this representation to derive procedures for encoding and syndrome calculation. We will also look at special subclasses of cyclic codes that are suitable for correcting burst-type errors in a system.
Algebraic structure of Cyclic Codes
 An (n, k) linear block code C is called a cyclic code if it has the following property:
If an n-tuple V = (v0, v1, …, v(n−1))
is a code vector of C, then the n-tuple V(1) = (v(n−1), v0, …, v(n−2)),
obtained by shifting V cyclically one place to the right, is also a code vector of C.
This property of cyclic codes allows us to treat the elements of each codeword as the coefficients of a polynomial of degree n−1.
Ex: The codeword V = (0 1 1 0 1) can be represented by the code polynomial V(x) = 0 + x + x^2 + x^4 (note the highest power is n−1).
The coefficients of the polynomial are 0's and 1's and they belong to the binary field with the following rules of addition and multiplication:
0 + 0 = 0    0 · 0 = 0
0 + 1 = 1    0 · 1 = 0
1 + 0 = 1    1 · 0 = 0
1 + 1 = 0    1 · 1 = 1
Algebraic structure of Cyclic Codes
 The variable x is an indeterminate: the powers x^0 = 1, x^1, x^2, and so on simply mark bit positions.
In this notation, for two polynomials
p(x) = a0 + a1 x + … + am x^m and q(x) = b0 + b1 x + … + bn x^n, where m > n,
the product is

p(x) q(x) = Σ(i=0..m) Σ(j=0..n) ai bj x^(i+j)

It can be shown that V(1)(x) is the remainder resulting from dividing x V(x) by x^n + 1, that is

V(1)(x) = x V(x) mod (x^n + 1)
Algebraic structure of Cyclic Codes
There are a total of 2^k such polynomials, corresponding to the 2^k data vectors, and the code vectors corresponding to these polynomials form a linear (n, k) code.
The polynomials g(x), x g(x), …, x^(k−1) g(x)
each indicate a new cyclic rotation of the code vector.
The polynomial g(x) is called the generator polynomial; every code polynomial is a multiple of g(x).
For a systematic cyclic code, the parity check polynomial r(x) is the remainder from dividing x^(n−k) d(x) by g(x), that is
r(x) = x^(n−k) d(x) mod g(x)
and the code polynomial is given by
V(x) = x^(n−k) d(x) + r(x)
Introduction to Convolutional Codes
Terminology:
k: number of message symbols (as before)
n: number of codeword symbols (as before)
r: rate = k/n
m: number of encoding cycles an input symbol is stored
K: number of input symbols used by the encoder to compute each output symbol (decoding time depends exponentially on K)
Definition:
A convolutional code may be defined by a set of n generating polynomials for each input bit.
For the circuit under consideration:
g1(D) = 1 + D + D^2
g2(D) = 1 + D^2        (for the example below: k = 15, n = 30, r = 1/2, K = 3, m = 2)
The set {gi(D)} defines the code completely. The length of the shift register is equal to the degree of the highest-degree generator polynomial.
Introduction to Convolutional Codes
In a convolutional code the message bit stream is encoded in a continuous fashion, as opposed to the block-by-block encoding of a block code. Convolutional codes are easily generated with shift registers connected in cascade in a particular manner.
The main difference between block codes and convolutional codes is that in a block code the block of n digits generated by the encoder in a particular time unit depends only on the block of k input message digits within that time unit. In a convolutional code, the block of n code digits generated by the encoder in a time unit depends not only on the block of k message digits within that time unit, but also on the preceding (N−1) blocks of message digits (N > 1); usually the values of n and k are small.
Like block codes, convolutional codes can be designed to either detect or correct errors. However, because data are usually retransmitted in blocks, block codes are better suited for error detection, and convolutional codes are mainly used for error correction.
Introduction to Convolutional Codes
[Figure: rate-1/2 convolutional encoder. The input bit feeds two flip-flops in cascade (each stores one bit); the output is the upper encoder bit followed by the lower one. k = 15, n = 30, r = 1/2, K = 3, m = 2]
Introduction to Convolutional Codes

• Both flip-flops are set to 0 initially
• Input: 010111001010001
• Output: 00 11 10 00 01 10 01 11 11 10 00 10 11 00 11
• The encoder is flushed by clocking m = 2 more times with 0 inputs
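The encoder above can be sketched directly from its generator polynomials g1(D) = 1 + D + D^2 and g2(D) = 1 + D^2, reproducing the listed output:

```python
# Rate-1/2 convolutional encoder: two flip-flops, both initially 0;
# upper output (g1) is emitted before the lower output (g2).

def conv_encode(bits, flush=True):
    s1 = s2 = 0                     # flip-flop contents: D and D^2 stages
    out = []
    if flush:
        bits = bits + [0, 0]        # clock m = 2 zeros through to flush
    for u in bits:
        out += [u ^ s1 ^ s2,        # g1(D) = 1 + D + D^2
                u ^ s2]             # g2(D) = 1 + D^2
        s2, s1 = s1, u              # shift the register
    return out

msg = [0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1]
coded = conv_encode(msg, flush=False)
# pairs: 00 11 10 00 01 10 01 11 11 10 00 10 11 00 11, as on the slide
```

Flushing appends the bits produced while the two trailing zeros clear the register, returning the encoder to the all-zero state for the next message.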
Summary of Error Control Coding
• Need for error controlling codes:
BER and errors in transmission must be kept limited to maintain transmission efficiency and bandwidth
• Linear block codes:
matrix representation and algebraic representation
• Single error correcting Hamming codes
• Cyclic codes
• Introduction to convolutional error control coding

Thank You

Dear Students
