
UNIT 5

INTRODUCTION TO ERROR CONTROL CODING

5.1 INTRODUCTION
In unit 2, we discussed various kinds of coding techniques which help us to achieve a lower value of average length, thereby increasing the coding efficiency. The disadvantage with this type of coding is that they are "variable-length" codes. Due to this, a single error which occurs due to the noise present in the channel affects more than one block-code word. Another disadvantage of variable-length codes is that the output data rates measured over short time-periods will fluctuate widely. When "fixed-length" codes are used, a single error will affect only that block, and it can be easily detected and corrected. To detect and correct errors, we go in for "error-control coding" techniques that rely on the systematic addition of "redundant" symbols.
In this chapter, let us discuss in detail the exact meaning of error control coding, the necessity of error control coding, and also the various ways of achieving it.
5.2 RATIONALE FOR CODING AND TYPES OF CODES
The two key system parameters available in designing a cost-effective and reliable digital communication system are "signal power" and "channel bandwidth". These two, along with the PSD of the noise "η", determine the bit signal energy to noise power ratio (E_b/N_0). This ratio, in turn, determines the bit error rate for various digital modulation schemes [Refer section 4.6]. Practical aspects place a limit on the value of (E_b/N_0). In practice, we find that it is impossible to provide the acceptable data quality with whatever modulation scheme we adopt. Hence, the only practical option available to improve the data quality is "error control coding".
Error control coding is nothing but the calculated use of "redundancy". The functional blocks that accomplish error control coding are the "channel encoder" at the transmitter and the "channel decoder" at the receiver. For this reason, error control coding is also termed as "channel encoding".
Error control coding improves the data quality to a great extent, and another great advantage is the reduction in (E_b/N_0) for a fixed bit error rate. This reduction in (E_b/N_0) reduces the transmitted power and hence the hardware costs.
The disadvantages of error control coding are (i) the increased bandwidth and (ii) the system becomes more complex due to the implementation of decoding operations in the receiver.
Let us now look into the significance of "redundancy". In fact, the channel encoder at the transmitter systematically adds digits to the transmitted message digits. These additional digits carry "no information", but make it possible for the channel decoder to detect and correct errors in the "information bearing digits". This reduces the overall probability of error P_e, thereby achieving the desired goal. The additional digits which carry no information are called redundant digits and the process of adding these digits is called "redundancy".
In the next section, we shall consider a simple example of error control coding and show that there is a great reduction in the probability of error P_e.
There are several error-correcting codes and these codes are classified under two basic categories namely "Block codes" and "Convolutional codes". The distinguishing feature for this classification is the absence of memory in the former case and its presence in the latter case.
Another way of classifying codes is as "linear" or "non-linear". A linear code differs from a non-linear code by the property that any two code-words added using modulo-2 arithmetic (which will be discussed later) produce a third code-word in the code. The codes used in practical applications are almost always linear codes.
5.3 EXAMPLE OF ERROR CONTROL CODING
Figure 5.1 shows the complete block diagram of a digital communication system employing error control coding. The main functional blocks are the channel encoder, the channel decoder, the modulator and demodulator, and the noisy communication channel with a capacity C bits/sec. The source generates a message block {b_i} at a rate of r_b bits/sec and feeds it to the channel encoder. The channel encoder then adds (n - k) number of redundant bits to these k-bit messages to form n-bit code-words. These (n - k) number of additional bits, also called "check bits", do not carry any information but help the channel decoder to detect and correct errors.
Fig. 5.1 : Block diagram of a communication system employing error control coding. (The figure shows the input message {b_i} at bit rate r_b bits/sec entering the channel encoder, which groups blocks of k message bits with (n - k) check bits to form n-bit code-words {d_i} at bit rate r_c = r_b (n/k) bits/sec; these pass through the modulator, the noisy communication channel and the demodulator to the channel decoder, whose output message {b̂_i} is delivered at r_b bits/sec.)

The bit rate of the coded output block {d_i} will be r_c = r_b (n/k) bits/sec. This is the rate at which the modem operates, to produce a message block {d̂_i} at the receiver. The channel decoder then decodes this message to get back the information block {b̂_i} at the receiver.
This information block {b̂_i} occasionally differs from the transmitted block {b_i}. Thus the probability of error
P_e = P{b̂_i ≠ b_i}   ... (5.1)
must be less than the prescribed value. The quantity "q_e", the probability of error of the modem, which depends on the bit rate r_c, is defined as
q_e = P{d̂_i ≠ d_i}   ... (5.2)
Without error control coding, q_e will be much higher than the desired value of probability of error. It is required to design an error control coding scheme so that the overall probability of error is less than some desired value.
Example of Error-Control Coding :
Suppose that we want to transmit data over a telephone link having bandwidth B = 3000 Hz and (S/N) of 13 dB, at a rate of 1200 bits/sec, with probability of error less than 10^-4. We are given a modem that can operate at a speed of 3600 bits/sec with error probability q_e = 8 x 10^-4. It is desired to design an error control coding scheme having overall probability of error P_e < 10^-4.
Design :
Given r_b = 1200 bits/sec, B = 3000 Hz, r_c = 3600 bits/sec, q_e = 8 x 10^-4
10 log10 (S/N) = 13 dB
∴ S/N = 10^1.3 = 19.953
From the Shannon-Hartley law, the channel capacity
C = B log2 (1 + S/N)
  = 3000 log2 (1 + 19.953)
C = 13167 bits/sec
Since r_c < C, according to Shannon's second theorem it is possible to transmit with an arbitrarily small probability of error, i.e., it is possible to reduce P_e to as small a value as possible by a suitable coding technique.
We have r_c = r_b (n/k) bits/sec   ... (5.3)
∴ n/k = 3600/1200 = 3
If k = 1, then n = 3.
∴ The number of check bits = n - k = 2 bits.
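As a quick numerical side check of the capacity calculation above, the following short Python sketch (an added illustration, not part of the original design procedure; the variable names are ours) evaluates the Shannon-Hartley capacity and the rate ratio n/k:

import math

B = 3000.0          # channel bandwidth in Hz
snr_db = 13.0       # signal-to-noise ratio in dB
r_b = 1200.0        # information bit rate in bits/sec
r_c = 3600.0        # modem (coded) bit rate in bits/sec

snr = 10 ** (snr_db / 10)            # S/N = 10^1.3, about 19.953
C = B * math.log2(1 + snr)           # Shannon-Hartley capacity, about 13167 bits/sec

print(f"S/N = {snr:.3f}")
print(f"C   = {C:.0f} bits/sec")
print(f"r_c < C ? {r_c < C}")        # True, so arbitrarily small P_e is achievable
print(f"n/k = {r_c / r_b:.0f}")      # 3, so with k = 1 we get n = 3 (2 check bits)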
triplets "000" or
Thus we have an error control coding scheme wherein the of 3 bi:sare
a
transmitted when b, = 0' andl' respectively. In these triplets, detinitely 2 out
red1ndant and hence they carry no information.
the channel may a
Let us suppose that the triplet '000 is transmitted. The noise in combinatione
any one bit, or any two bits or allthe three bits in his triplet, resulting in eight orre
which are listed in table 5.1. Let us suppose that the receiver decodes the received triplet hy
using a majority-logic decoding scheme [i.e., if the number of 0's in the received triplet is
more than the number of l's,then it is decoded as 0'. Ifit is the other way, then it is decodedtpu decod
as 1'). rCgar
000 001 010 100 011 101 110 CeIv
Received Triplets
Decoded Message 1
methc
Table 5.1 :Table showing majority logic decoding scheme
As seen from table 5.1, when the noise affects "only one" of the binary digits in "000", then the decoder recovers the data correctly. If two or more errors occur, then the data is not recovered correctly.
From equation (5.1), the overall probability of error
P_e = P{b̂_i ≠ b_i}
    = P (number of errors ≥ 2)
Let X be a random variable representing the number of errors. Then X is a binomial random variable with parameters n = 3 and p = probability that a bit is in error = q_e.
P_e = P(X ≥ 2)
    = Σ (x = 2 to 3) 3C_x p^x (1 - p)^(3-x)
    = Σ (x = 2 to 3) 3C_x (q_e)^x (1 - q_e)^(3-x)
    = 3 q_e^2 (1 - q_e) + q_e^3
    = q_e^2 (3 - 2 q_e)
    = (8 x 10^-4)^2 [3 - (2)(8 x 10^-4)]
P_e ≈ 0.02 x 10^-4, which is much less than 10^-4.
i.e., the probability of error P_e is around 2% of the desired value of 10^-4.
From the above example, we see that without error control coding, the overall probability of error will be equal to that of the modem = 8 x 10^-4. But with channel encoding it is reduced to 0.02 x 10^-4. Thus error control coding reduces the overall probability of error.
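To make this error-probability calculation concrete, here is a small Python sketch (added for illustration; the variable names are ours) that evaluates P_e = q_e^2 (3 - 2 q_e) and also checks it by brute-force enumeration of all error patterns on a transmitted triplet:

from itertools import product

q_e = 8e-4  # modem bit error probability

# Closed-form result derived above: P(X >= 2) for X ~ Binomial(3, q_e)
P_e_formula = q_e**2 * (3 - 2 * q_e)

# Brute-force check: enumerate all 8 error patterns on the triplet "000".
# Majority-logic decoding fails whenever 2 or more bits are in error.
P_e_enum = 0.0
for pattern in product([0, 1], repeat=3):        # 1 means the bit is flipped
    n_err = sum(pattern)
    prob = (q_e ** n_err) * ((1 - q_e) ** (3 - n_err))
    if n_err >= 2:                               # decoded bit is wrong
        P_e_enum += prob

print(f"P_e (formula)     = {P_e_formula:.3e}")  # about 1.9e-06 = 0.02 x 10^-4
print(f"P_e (enumeration) = {P_e_enum:.3e}")
print(f"Without coding, P_e = q_e = {q_e:.1e}")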

5.4 METHODS OF CONTROLLING ERRORS
There are two different methods available for controlling errors in a communication system.
(i) Forward-acting error correction method : The method of controlling errors at the receiver through attempts to correct noise-induced errors is called the forward-acting error correction method.
(ii) Error detection method : In this method, the decoder examines the demodulator output and accepts the received sequence if it matches a valid message sequence. If not, the decoder discards the received sequence and notifies the transmitter (through a reverse channel) regarding the error and requests retransmission of the message till the correct sequence is received. Thus the decoder attempts to detect errors but does not attempt to correct them.
The error detection method yields a lower overall probability of error than the error correction method. To illustrate this point, let us consider the previous example of transmitting the triplet "000". If the decoder uses the error detection method, then it would reject all other triplets except "000" and "111". Now, an information bit will be incorrectly decoded at the receiver only when all the three received bits are in error. Thus P_e = (q_e)^3 = (8 x 10^-4)^3 = 5.12 x 10^-10, which is much lower than the probability of error for the error correction method.
The disadvantages of the error detection method are the requirement of a reverse channel and the slowing down of the effective rate of data transmission. (This is because the transmitter has to wait for an acknowledgement from the receiver before transmitting the next message.)
5.5 TYPES OF ERRORS
In digital communication systems, errors are caused by the noise present in the communication channel. Usually, two kinds of noise are encountered in communication channels, namely "Gaussian noise" and "Impulse noise". Due to these, two types of errors occur.
(i) Random Error : The transmission errors that occur due to the presence of white Gaussian noise are referred to as "random errors". Sources of Gaussian noise include thermal and shot noise in the transmitting and receiving equipment, thermal noise in the channel and radiation picked up by the receiving antenna.
(ii) Burst Error : Impulse noise is characterized by long quiet intervals followed by high-amplitude noise bursts. Examples of impulse noise are noise that arises due to lightning, switching transients, man-made noise etc. When such noise bursts occur, they affect more than one symbol and the error caused is called a "Burst Error".
5.6 TYPES OF CODES
As already mentioned in section 5.2, error control codes are divided into two broad categories, namely "block codes" and "convolutional codes".
(i) Block Codes : A block code consists of (n - k) number of check bits (redundant bits) being added to k number of information bits to form n-bit code-words. These (n - k) check bits are "derived from the k information bits". At the receiver, these check bits are used to detect and correct errors which may occur in the entire n-bit code-word.
(ii) Convolutional Codes : In this code, the check bits are continuously interleaved with the information bits. These check bits will help to correct errors not only in that particular block but also in nearby blocks as well.
No
5.7 LINEAR BLOCK CODES
In the channel encoder, a block of 'k' message bits is encoded into a block of 'n' bits by adding (n - k) number of check bits as shown in figure 5.2. Clearly n > k, and such a code so formed is called an (n, k) block code. These (n - k) check bits are "derived" from the k message bits, as will be shown in the next section.

Fig. 5.2 : Illustrating the formation of linear block codes. (The channel encoder maps k message bits into an n-bit code-word consisting of the k message bits and (n - k) check bits, with the check bits placed either at the end or at the beginning of the code-word.)


An (n, k) block code is said to be an "(n, k) linear block code" if it satisfies the condition given below:
Let C_i and C_j be any two code-words (n bits) belonging to a set of (n, k) block code. If C_i ⊕ C_j [⊕ represents modulo-2 addition, discussed in detail in the next section] is also a code-word belonging to the same set of (n, k) block code, then such a block code is called an (n, k) linear block code.
An (n, k) linear block code is said to be "systematic" if the k message bits appear either "at the beginning" of the code-word or "at the end" of the code-word, as depicted in figure 5.2.

5.8 MATRIX DESCRIPTION OF LINEAR BLOCK CODES
Let the message block of k bits be represented as a "row-vector" or "k-tuple" called the "message-vector", given by
[D] = [d_1, d_2, ..., d_k]   ... (5.4)
where d_1, d_2, ..., d_k are either 0's or 1's. Thus, there are 2^k distinct message-vectors, and all these message-vectors put together represent k-tuples in a k-dimensional sub-space of the "vector-space" of all n-tuples over a field called the "GALOIS FIELD", denoted by GF(2).
The channel encoder systematically adds (n - k) number of check-bits to form an (n, k) linear block code. Then the 2^k code-vectors can be represented by
[C] = [c_1, c_2, ..., c_n]   ... (5.5)
Note that only 2^k of the 2^n possible n-tuples are "valid code-vectors", and the remaining (2^n - 2^k) n-tuples are "invalid code-vectors", which form the "error-vectors". Also note that the ratio (k/n) is defined as the "rate efficiency" of the (n, k) linear block code.
In a systematic linear block code, the message bits appear at the beginning of the code-vector (or at the end of the code-vector, which we shall consider later).
c_i = d_i for all i = 1, 2, ..., k   ... (5.6)
The remaining (n - k) bits are check bits. Hence, equations (5.5) and (5.6) can be combined as
[C] = [c_1, c_2, ..., c_k, c_(k+1), c_(k+2), ..., c_n]   ... (5.7)
      (the first k entries are the message bits; the last (n - k) entries are the check bits)
These (n - k) check bits c_(k+1), c_(k+2), ..., c_n are derived from the k message bits using a predetermined rule as below:
c_(k+1) = p_11 d_1 + p_21 d_2 + ... + p_k1 d_k
c_(k+2) = p_12 d_1 + p_22 d_2 + ... + p_k2 d_k
   :
c_n = p_(1,n-k) d_1 + p_(2,n-k) d_2 + ... + p_(k,n-k) d_k   ... (5.8)
where p_11, p_12, ..., p_(k,n-k) are either 0's or 1's and the addition above is performed using modulo-2 arithmetic.
[In modulo-2 arithmetic, the addition of two binary digits is actually the "EXCLUSIVE-OR" operation of those two binary digits, as given in table 5.2 (a). Table 5.2 (b) shows modulo-2 multiplication and table 5.2 (c) modulo-2 subtraction. Comparing tables 5.2 (a) and (c), one can conclude that modulo-2 subtraction is the "same" as modulo-2 addition.]
Modulo-2 addition      Modulo-2 multiplication      Modulo-2 subtraction
0 + 0 = 0              0 . 0 = 0                    0 - 0 = 0
0 + 1 = 1              0 . 1 = 0                    0 - 1 = 1 (with a borrow of 1)
1 + 0 = 1              1 . 0 = 0                    1 - 0 = 1
1 + 1 = 0              1 . 1 = 1                    1 - 1 = 0
      (a)                    (b)                          (c)
Table 5.2 : Tables showing modulo-2 arithmetic
[Note : Throughout this chapter, '+' is being used for indicating modulo-2 addition.]
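In software terms, modulo-2 addition of binary digits is simply the bitwise exclusive-OR. The following short Python sketch (an added illustration) shows this correspondence, both for single bits and for whole code-words:

# Modulo-2 addition of single bits is the XOR operation.
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} + {b} = {a ^ b} (mod 2)")   # matches table 5.2 (a)

# Modulo-2 addition of whole code-words is a bitwise XOR of the vectors.
c1 = [0, 1, 0, 0, 1, 1]
c2 = [1, 1, 0, 1, 1, 0]
print([x ^ y for x, y in zip(c1, c2)])          # [1, 0, 0, 1, 0, 1]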
It is possible to combine equations (5.6), (5.7) and (5.8) and express the result in "matrix form" as

[c_1 c_2 ... c_n] = [d_1 d_2 ... d_k] | 1 0 0 ... 0  p_11 p_12 ... p_(1,n-k) |
                                      | 0 1 0 ... 0  p_21 p_22 ... p_(2,n-k) |   ... (5.9)
                                      | :            :                       |
                                      | 0 0 0 ... 1  p_k1 p_k2 ... p_(k,n-k) |
                                       (first k columns)  (last (n - k) columns)

or [C] = [D] [G]   ... (5.10)
where [G] is called the "GENERATOR MATRIX" of order (k x n), given by
[G] = [I_k | P]_(k x n)   ... (5.11)
where I_k = unit matrix of order k,
[P] = an arbitrary matrix called the "PARITY MATRIX" of order k x (n - k),
and | denotes the demarcation between the unit matrix I_k and the parity matrix P.
When equation (5.11) is used for writing the generator matrix [G], then the systematic linear block code so obtained will have the "message bits" at the beginning of the code-vector and the "check bits" at the end.
The generator matrix [G] can also be expressed as
[G] = [P | I_k]   ... (5.12)
in which case the message-bits will be present at the end and the check-bits at the beginning of the code-vector. [Throughout this book, both methods of representing [G] have been used according to convenience.]
The parity matrix [P] is suitably selected to correct random and burst errors. Later on in this chapter, let us look into the constructive procedures for choosing [P] matrices for different applications.
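The matrix encoding rule of equations (5.10) and (5.11) is easy to express in a few lines of code. The sketch below (an added illustration using NumPy; the function names are ours) builds G = [I_k | P] and encodes a message by a modulo-2 matrix product, using the parity matrix of the (6, 3) code considered in example 5.1 below:

import numpy as np

def generator_matrix(P):
    """Build G = [I_k | P] for a systematic (n, k) linear block code."""
    P = np.asarray(P) % 2
    k = P.shape[0]
    return np.hstack([np.eye(k, dtype=int), P])

def encode(d, G):
    """Code-vector C = D G, with all arithmetic taken modulo 2."""
    return np.asarray(d).dot(G) % 2

P = [[1, 0, 1],
     [0, 1, 1],
     [1, 1, 0]]
G = generator_matrix(P)
print(encode([0, 1, 1], G))   # [0 1 1 1 0 1], as found in example 5.1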
Example 5.1 : For a systematic (6, 3) linear block code, the parity matrix P is given by

        | 1 0 1 |
[P] =   | 0 1 1 |
        | 1 1 0 |

Find all possible code-vectors.
Solution
Given n = 6 and k = 3 for the (6, 3) linear block code.
Since k = 3, there are 2^k = 2^3 = 8 message vectors, given by (000), (001), (010), (011), (100), (101), (110) and (111).
The code-vectors are found using equation (5.10), given by
[C] = [D] [G]
where [G] = [I_k | P] from equation (5.11)

        | 1 0 0 : 1 0 1 |
[G] =   | 0 1 0 : 0 1 1 |
        | 0 0 1 : 1 1 0 |

[C] = [D] [G] = [d_1 d_2 d_3] | 1 0 0 1 0 1 |
                              | 0 1 0 0 1 1 |
                              | 0 0 1 1 1 0 |
    = [d_1, d_2, d_3, (d_1 + d_3), (d_2 + d_3), (d_1 + d_2)]

For a message of (d_1 d_2 d_3) = (0 1 1), we have [C] = [0 1 1 1 0 1]
For a message of (d_1 d_2 d_3) = (1 0 1), [C] = [1 0 1 0 1 1]
In a similar way, the other code-vectors can be found, which are listed in table 5.3 below.

Code Name    Message-vector    Code-vector for the (6, 3) linear block code
C_0          000               000000
C_1          001               001110
C_2          010               010011
C_3          011               011101
C_4          100               100101
C_5          101               101011
C_6          110               110110
C_7          111               111000
Table 5.3 : Code-vector table for the (6, 3) code of example 5.1
It can be verified easily that the addition of any two code-vectors is again a code-vector belonging to the same (6, 3) code, referring to table 5.3.
For example, consider the addition of C_2 and C_6 in the above table.
C_2 + C_6 = (010011) + (110110)
          = (100101), which is the code-vector C_4.
In a similar way, the above property can be verified for other combinations to prove that the (6, 3) code of example 5.1 is indeed a linear block code.
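A brute-force check of this closure property is straightforward in code. The sketch below (an added illustration; variable names are ours) regenerates the eight code-vectors of table 5.3 and verifies that the modulo-2 sum of every pair is again one of them:

import numpy as np
from itertools import product, combinations

P = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0]])
G = np.hstack([np.eye(3, dtype=int), P])        # G = [I_3 | P]

# All 2^3 = 8 code-vectors of the (6, 3) code (table 5.3).
codewords = {tuple(np.dot(d, G) % 2) for d in product([0, 1], repeat=3)}

# Linearity: the XOR (modulo-2 sum) of any two code-vectors is a code-vector.
closed = all(tuple((np.array(c1) + np.array(c2)) % 2) in codewords
             for c1, c2 in combinations(codewords, 2))
print("closed under modulo-2 addition:", closed)   # True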
PARITY CHECK MATRIX [H] :
From equation (5.11), we have the generator matrix given by

[G] = [I_k | P] = | 1 0 0 ... 0  p_11 p_12 ... p_(1,n-k) |
                  | 0 1 0 ... 0  p_21 p_22 ... p_(2,n-k) |   ... (5.13)
                  | :            :                       |
                  | 0 0 0 ... 1  p_k1 p_k2 ... p_(k,n-k) |
                   (k columns)     ((n - k) columns)

Associated with the generator matrix [G] of equation (5.13) is another matrix called the "Parity Check Matrix H", given by
[H] = [P^T | I_(n-k)]   ... (5.14)

[H] = | p_11      p_21      ...  p_k1       1 0 0 ... 0 |   ... (5.15)
      | p_12      p_22      ...  p_k2       0 1 0 ... 0 |
      | :                        :                      |
      | p_(1,n-k) p_(2,n-k) ...  p_(k,n-k)  0 0 0 ... 1 |
        (k columns)                 ((n - k) columns)

The [H] matrix is an (n - k) x n matrix, and this matrix is used in error correction.
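The relationship between G and H can be checked numerically as well. The short sketch below (an added illustration) forms H = [P^T | I_(n-k)] for the (6, 3) code and verifies that G H^T = 0 over GF(2), which is the property proved formally in example 5.2 below:

import numpy as np

P = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0]])
k, n_minus_k = P.shape

G = np.hstack([np.eye(k, dtype=int), P])               # G = [I_k | P]
H = np.hstack([P.T, np.eye(n_minus_k, dtype=int)])     # H = [P^T | I_(n-k)]

print(G.dot(H.T) % 2)          # all-zero k x (n - k) matrix, i.e. G H^T = 0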
Example 5.2 : If C is a valid code-vector (as calculated from equation (5.10), namely C = DG), then prove that CH^T = 0, where H^T is the transpose of the parity check matrix H.
Solution
From equation (5.13), the i-th row of the [G] matrix is given by
g_i = [0 0 ... 1 ... 0  p_i1  p_i2  ...  p_ij  ...  p_(i,n-k)]   (the 1 is in the i-th position)
From equation (5.15), the j-th row of the [H] matrix is given by
h_j = [p_1j  p_2j  ...  p_ij  ...  p_kj  0 0 ... 1 ... 0]   (the 1 is in the (k + j)-th position)
Consider the product of g_i with the transpose of h_j. Modulo-2 multiplication of the row matrix g_i with the column matrix h_j^T yields
g_i h_j^T = (0)(p_1j) + (0)(p_2j) + ... + (1)(p_ij) + ... + (p_i1)(0) + (p_i2)(0) + ... + (p_ij)(1) + ... + (p_(i,n-k))(0)
          = p_ij + p_ij = p_ij (1 + 1) = p_ij . 0 = 0, from table 5.2 (a)
∴ g_i h_j^T = 0   ... (5.16)
This equation is true for every value of i and j and hence, in matrix form, we have
[G] [H]^T = 0   ... (5.17)
Pre-multiplying both sides of equation (5.17) by [D], we get
[D] [G] [H]^T = [D] [0] = 0
But from equation (5.10), [C] = [D] [G]
∴ [C] [H]^T = 0   ... (5.18)
i.e., CH^T = 0
ENCODING CIRCUIT FOR (n, k) LINEAR BLOCK CODES
Expanding the matrix equation (5.9) and equating the corresponding elements on both sides, we get
c_1 = d_1
c_2 = d_2
  :
c_k = d_k
c_(k+1) = p_11 d_1 + p_21 d_2 + ... + p_k1 d_k
c_(k+2) = p_12 d_1 + p_22 d_2 + ... + p_k2 d_k
  :
c_n = p_(1,n-k) d_1 + p_(2,n-k) d_2 + ... + p_(k,n-k) d_k   ... (5.19)
The implementation of the above equations in a circuit fashion results in the encoder for the (n, k) linear block code. Such a realization of the encoder circuit is shown in figure 5.3, consisting of a k-bit shift register, an n-segment commutator and (n - k) number of modulo-2 adders.
Fig. 5.3 : Encoding circuit for (n, k) linear block codes. (The message d_1, d_2, ..., d_k is held in a k-bit shift register, (n - k) modulo-2 adders form the check bits through the p_ij connections, and an n-segment commutator feeds the message bits and check bits to the channel.)
The entire data d_1, d_2, ..., d_k is shifted into the k-bit shift register. The small circles p_11, p_21, ..., p_k1, ..., p_(1,n-k) are either "open circuit" or "short circuit" depending on whether the corresponding p is '0' or '1'. For example, if p_11 = 0, then there is no connection from d_1 to the modulo-2 adder, and if p_11 = 1, then there is a connection. When the message is shifted into the shift register, the modulo-2 adders generate the "check-bits", which are fed into the commutator segments along with the message bits as shown in figure 5.3. When the commutator brush rotates and makes contact with the segments successively, the code-vector bits are transmitted through the channel.
Example 5.3 : For the systematic (6, 3) code of example 5.1, the code-vector C for a message input of (d_1 d_2 d_3) is given by
[C] = [d_1, d_2, d_3, (d_1 + d_3), (d_2 + d_3), (d_1 + d_2)]
Construct the corresponding encoding circuit.
Solution
Since k = 3, we require a 3-bit shift register to move the message bits into it. We have n - k = 6 - 3 = 3 and hence we require 3 modulo-2 adders and a 6-segment commutator. The entire encoding circuit is shown in figure 5.4.

Message
Input d, d, d,
3-bit
Shift Register Commutator

+ To
channel

Fig. 5.4: Encoding circuit for (6, 3) linear code of example 5.1

SYNDROME AND ERROR CORRECTION
Let us suppose that C = (c_1, c_2, ..., c_n) is a valid code-vector, belonging to an (n, k) linear block code, transmitted over a noisy communication channel. Let R = (r_1, r_2, ..., r_n) be the received vector. Due to noise in the channel, r_1, r_2, ..., r_n may be different from c_1, c_2, ..., c_n. The difference between R and C is defined as the "error-vector" or "error pattern" E:
E = R - C = R + C   ... (5.20)
since subtraction is the same as addition in modulo-2 arithmetic.
∴ The error-vector E can be represented as an n-tuple
E = (e_1, e_2, ..., e_n)   ... (5.21)
From equation (5.21), it is clear that e_i = 1 if r_i ≠ c_i and e_i = 0 if r_i = c_i. The 1's present in the error-vector E represent the errors caused by the noise in the channel.
From equation (5.20), the receiver knows only R; it does not know C and E. In order to find E, and then C, the receiver performs the decoding operation by determining an (n - k) vector S defined as
S = R H^T = (s_1, s_2, ..., s_(n-k))   ... (5.22)
The (n - k) vector S is called the "error syndrome" of R.
From equation (5.20), R = C + E   ... (5.23)
Using this in equation (5.22), we get
S = (C + E) H^T
  = C H^T + E H^T
But C H^T = 0 from equation (5.18)
∴ S = E H^T   ... (5.24)
The receiver finds E from equation (5.24), as S and H^T are both known. Then, from equation (5.23), the transmitted code-vector C can be found out. Note that the syndrome of the received vector will be zero if R is a valid code-vector. When R ≠ C, then S ≠ 0, and the receiver then detects and corrects the error.
The following example clearly illustrates the method of single error correction.
Example 5.4 : Referring to the (6, 3) code of example 5.1, the received code-vector is R = [1 1 0 0 1 0]. Detect and correct the single error that has occurred due to noise.
Solution
From example 5.1, we have

        | 1 0 1 |                 | 1 0 1 |
[P] =   | 0 1 1 |   ∴ [P]^T =     | 0 1 1 |  = [P]
        | 1 1 0 |                 | 1 1 0 |

∴ From equation (5.14),

[H] = [P^T | I_3] = | 1 0 1 1 0 0 |            | 1 0 1 |
                    | 0 1 1 0 1 0 |            | 0 1 1 |
                    | 1 1 0 0 0 1 |   [H]^T =  | 1 1 0 |
                                               | 1 0 0 |
                                               | 0 1 0 |
                                               | 0 0 1 |

From equation (5.22), the syndrome [S] is given by
[S] = [s_1 s_2 s_3] = R [H]^T = [1 1 0 0 1 0] [H]^T
By using modulo-2 multiplication and addition, the syndrome is found to be
S = [1 0 0] → since S ≠ 0, it represents an error.
[The first syndrome bit s_1 is found from
s_1 = (1.1) + (1.0) + (0.1) + (0.1) + (1.0) + (0.0)
    =  1  +  0  +  0  +  0  +  0  +  0
    = 1, since the total number of 1's present is 'odd'.
If it is 'even', then the corresponding syndrome bit will be '0'.
Consider s_2 = (1.0) + (1.1) + (0.1) + (0.0) + (1.1) + (0.0)
    =  0  +  1  +  0  +  0  +  1  +  0
    = 0, since the number of 1's is 2, which is an even number of 1's.]
This syndrome vector S = (100) is present in the 4th row of the H^T matrix, and hence the 4th bit in the received vector R, counting from the left, is in error.
∴ The corrected code-vector is 110110, which is a valid transmitted code-vector as seen from table 5.3, corresponding to the message vector 110.
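The whole single-error-correction procedure of this example can be expressed compactly in code. The following sketch (an added illustration; the function and variable names are ours) computes S = R H^T modulo 2, locates the matching row of H^T, and flips the corresponding bit:

import numpy as np

def correct_single_error(R, H):
    """Correct at most one bit error in received vector R using parity check matrix H."""
    R = np.array(R)
    S = R.dot(H.T) % 2                        # syndrome S = R H^T (mod 2)
    if not S.any():
        return R                              # S = 0 : R is already a valid code-vector
    for i, row in enumerate(H.T):             # S equals the row of H^T at the error position
        if np.array_equal(S, row):
            R[i] ^= 1                         # flip the erroneous bit
            return R
    raise ValueError("more than one error (syndrome is not a row of H^T)")

P = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0]])
H = np.hstack([P.T, np.eye(3, dtype=int)])    # H = [P^T | I_3] for the (6, 3) code

print(correct_single_error([1, 1, 0, 0, 1, 0], H))   # [1 1 0 1 1 0], as in example 5.4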
SYNDROME CALCULATION CIRCUIT
Let the received-vector be R = (r_1, r_2, ..., r_n). The syndrome vector S is then given by equation (5.22) as

[S] = [s_1 s_2 ... s_(n-k)] = R H^T

                                            | p_11      p_12      ...  p_(1,n-k) |
                                            | p_21      p_22      ...  p_(2,n-k) |
                                            | :                        :         |
[s_1 s_2 ... s_(n-k)] = [r_1 r_2 ... r_n]   | p_k1      p_k2      ...  p_(k,n-k) |
                                            | 1         0         ...  0         |
                                            | 0         1         ...  0         |
                                            | :                        :         |
                                            | 0         0         ...  1         |

Multiplying by using modulo-2 arithmetic, we get the syndrome bits as
s_1 = r_1 p_11 + r_2 p_21 + ... + r_k p_k1 + r_(k+1)
s_2 = r_1 p_12 + r_2 p_22 + ... + r_k p_k2 + r_(k+2)
  :
s_(n-k) = r_1 p_(1,n-k) + r_2 p_(2,n-k) + ... + r_k p_(k,n-k) + r_n   ... (5.25)
Equation (5.25) can be realized using the circuit shown in figure 5.5, which is called the "syndrome calculation circuit". The received-vector bits are moved into an n-bit shift register as shown. Here also, the small circles p_11, p_21, ... are either open circuit or short circuit depending on whether the corresponding p is '0' or '1'. As soon as the received vector is shifted into the shift register, the modulo-2 adders generate the syndrome bits s_1, s_2, ..., s_(n-k). Knowing the syndrome vector S, the error can be easily detected and corrected as shown previously. The following example illustrates a particular case of obtaining the syndrome calculation circuit.

Fig. 5.5 : Syndrome calculation circuit for an (n, k) linear block code. (An n-bit shift register holds the received vector, and (n - k) modulo-2 adders form s_1, s_2, ..., s_(n-k) through the p_ij connections.)
Example 5.5 : For the systematic (6, 3) code of example 5.1, the received vector is R = (r_1, r_2, r_3, r_4, r_5, r_6). Construct the corresponding syndrome calculation circuit.
Solution
For the (6, 3) code, the matrix H^T is given by (refer example 5.4)

          | 1 0 1 |
          | 0 1 1 |
[H]^T =   | 1 1 0 |
          | 1 0 0 |
          | 0 1 0 |
          | 0 0 1 |

From equation (5.22),
S = [s_1 s_2 s_3] = R H^T = [r_1 r_2 r_3 r_4 r_5 r_6] [H]^T
  = [(r_1 + r_3 + r_4), (r_2 + r_3 + r_5), (r_1 + r_2 + r_6)]
∴ The syndrome bits are
s_1 = r_1 + r_3 + r_4
s_2 = r_2 + r_3 + r_5
s_3 = r_1 + r_2 + r_6
The syndrome calculation circuit can be easily constructed as shown in figure 5.6.
Fig. 5.6 : Syndrome calculation circuit for the (6, 3) code of example 5.1. (The received bits r_1 ... r_6 are held in a 6-bit shift register, and three modulo-2 adders form s_1 = r_1 + r_3 + r_4, s_2 = r_2 + r_3 + r_5 and s_3 = r_1 + r_2 + r_6.)
Example 5.6 : For a systematic (7, 4) linear block code, the parity matrix P is given by

        | 1 1 1 |
        | 1 1 0 |
[P] =   | 1 0 1 |
        | 0 1 1 |

(i) Find all possible valid code-vectors.
(ii) Draw the corresponding encoding circuit.
(iii) A single error has occurred in each of these received vectors. Detect and correct those errors.
      (a) R_A = [0 1 1 1 1 1 0]   (b) R_B = [1 0 1 1 1 0 0]   (c) R_C = [1 0 1 0 0 0 0]
(iv) Draw the syndrome calculation circuit.
Solution
(i) The generator matrix [G] is given by equation (5.11) as
[G] = [I_k | P] = [I_4 | P]

      | 1 0 0 0 : 1 1 1 |
    = | 0 1 0 0 : 1 1 0 |
      | 0 0 1 0 : 1 0 1 |
      | 0 0 0 1 : 0 1 1 |

Using equation (5.10), the code-vectors can be found as
[C] = [D] [G]
As an example, let [D] = [1 1 0 1]

                  | 1 0 0 0 1 1 1 |
[C] = [1 1 0 1]   | 0 1 0 0 1 1 0 |  = [1 1 0 1 0 1 0]
                  | 0 0 1 0 1 0 1 |
                  | 0 0 0 1 0 1 1 |

For D = [0 1 1 1]

                  | 1 0 0 0 1 1 1 |
[C] = [0 1 1 1]   | 0 1 0 0 1 1 0 |  = [0 1 1 1 0 0 0]
                  | 0 0 1 0 1 0 1 |
                  | 0 0 0 1 0 1 1 |

In a similar way, the other valid code-vectors are found as given in table 5.4.

Message Vector (D)   Code Vector (C)      Message Vector (D)   Code Vector (C)
0000                 0000000              1000                 1000111
0001                 0001011              1001                 1001100
0010                 0010101              1010                 1010010
0011                 0011110              1011                 1011001
0100                 0100110              1100                 1100001
0101                 0101101              1101                 1101010
0110                 0110011              1110                 1110100
0111                 0111000              1111                 1111111

Table 5.4 : Code-vector table for the (7, 4) code of example 5.6
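Table 5.4 can be regenerated mechanically from [C] = [D][G]. The sketch below (added for illustration) enumerates all 2^4 = 16 message vectors and prints the corresponding (7, 4) code-vectors:

import numpy as np
from itertools import product

P = np.array([[1, 1, 1],
              [1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])     # G = [I_4 | P]

for d in product([0, 1], repeat=4):          # all 16 message vectors
    c = np.dot(d, G) % 2
    print("".join(map(str, d)), "->", "".join(map(str, c)))
# e.g. 1101 -> 1101010 and 0111 -> 0111000, as worked out above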
(ii) The encoding circuit, which requires a 4-bit shift register, 3 modulo-2 adders and a 7-segment commutator, is shown in figure 5.7.
Fig. 5.7 : Encoding circuit for the (7, 4) linear block code of example 5.6. (A 4-bit shift register holds d_1 d_2 d_3 d_4, three modulo-2 adders form the check bits, and a 7-segment commutator feeds the channel.)
The code-vector bits in terms of the message bits are found using

                                    | 1 0 0 0 1 1 1 |
[C] = [D] [G] = [d_1 d_2 d_3 d_4]   | 0 1 0 0 1 1 0 |
                                    | 0 0 1 0 1 0 1 |
                                    | 0 0 0 1 0 1 1 |

[C] = [d_1, d_2, d_3, d_4, (d_1 + d_2 + d_3), (d_1 + d_2 + d_4), (d_1 + d_3 + d_4)]
(iii) (a) Given R_A = [0 1 1 1 1 1 0]
The parity check matrix H is given by equation (5.14) as
[H] = [P^T | I_(n-k)] = [P^T | I_3]

      | 1 1 1 0 1 0 0 |
    = | 1 1 0 1 0 1 0 |
      | 1 0 1 1 0 0 1 |

∴ The syndrome S_A is given by equation (5.22) as

                                  | 1 1 1 |
                                  | 1 1 0 |
                                  | 1 0 1 |
S_A = R_A H^T = [0 1 1 1 1 1 0]   | 0 1 1 |  = [1 1 0]
                                  | 1 0 0 |
                                  | 0 1 0 |
                                  | 0 0 1 |

→ This syndrome is located in the second row of the H^T matrix. Hence the 2nd bit, counting from the left, is in error. The corresponding error-vector is then given by
E_A = [0 1 0 0 0 0 0]
∴ The corrected code-vector, which is the transmitted vector, is given by
C_A = R_A + E_A = [0 1 1 1 1 1 0] + [0 1 0 0 0 0 0]
    = [0 0 1 1 1 1 0], which is the valid code-vector corresponding to the message vector 0011, as seen from table 5.4.
(b) Given R_B = [1 0 1 1 1 0 0]
S_B = R_B H^T = [1 0 1 1 1 0 0] H^T = [1 0 1], which is located in the 3rd row of H^T.
∴ The error vector E_B = [0 0 1 0 0 0 0]
The corrected code-vector C_B = R_B + E_B
    = [1 0 1 1 1 0 0] + [0 0 1 0 0 0 0]
    = [1 0 0 1 1 0 0]
C_B is a valid code-vector corresponding to the message vector 1001, as seen from table 5.4.
(c) Given R_C = [1 0 1 0 0 0 0]
S_C = R_C H^T = [1 0 1 0 0 0 0] H^T = [0 1 0] → This is present in the 6th row of H^T.
∴ The error vector E_C = [0 0 0 0 0 1 0]
The corrected code-vector C_C = R_C + E_C
    = [1 0 1 0 0 0 0] + [0 0 0 0 0 1 0]
    = [1 0 1 0 0 1 0]
C_C is again a valid code-vector, corresponding to the message vector 1010 in table 5.4.
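The three corrections above can be reproduced with the same syndrome-decoding idea shown earlier. A small sketch (added for illustration; the names are ours), assuming the (7, 4) parity matrix P given in this example:

import numpy as np

P = np.array([[1, 1, 1],
              [1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]])
H = np.hstack([P.T, np.eye(3, dtype=int)])          # H = [P^T | I_3], a 3 x 7 matrix

for R in ([0, 1, 1, 1, 1, 1, 0],                    # R_A
          [1, 0, 1, 1, 1, 0, 0],                    # R_B
          [1, 0, 1, 0, 0, 0, 0]):                   # R_C
    R = np.array(R)
    S = R.dot(H.T) % 2                              # syndrome
    err_pos = next(i for i, row in enumerate(H.T) if np.array_equal(row, S))
    R[err_pos] ^= 1                                 # flip the bit indicated by the syndrome
    print("corrected:", "".join(map(str, R)))
# prints 0011110, 1001100 and 1010010 -- the code-vectors found above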
(iv) Syndrome Calculation Circuit :
Let the received vector be represented, in general, by R = (r_1, r_2, r_3, r_4, r_5, r_6, r_7).
The syndrome corresponding to the above received vector R is

                                            | 1 1 1 |
                                            | 1 1 0 |
                                            | 1 0 1 |
S = R H^T = [r_1 r_2 r_3 r_4 r_5 r_6 r_7]   | 0 1 1 |
                                            | 1 0 0 |
                                            | 0 1 0 |
                                            | 0 0 1 |

S = [s_1, s_2, s_3] = [(r_1 + r_2 + r_3 + r_5), (r_1 + r_2 + r_4 + r_6), (r_1 + r_3 + r_4 + r_7)]
The syndrome calculation circuit can be easily constructed as shown in figure 5.8.
Fig. 5.8 : Syndrome calculation circuit of example 5.6. (The received bits r_1 ... r_7 are held in a 7-bit shift register, and three modulo-2 adders form s_1, s_2 and s_3 from the expressions above.)


Example 5.7 : Repetition codes
An (n, 1) repetition code represents the simplest type of linear block code. The generator matrix is given by
