DC Lab Manual Simulation Experiments

Peoples Empowerment Group

ISB&M COLLEGE OF ENGINEERING


NANDE, PUNE
DEPARTMENT OF E&TC ENGINEERING
Academic Year 2022-23
Subject: Digital communication

Class: T.E. (E&TC)


Roll No.: Name of student:

Experiment No: 7
Title: Simulation study of Performance of M-ary PSK

Aim: Simulation study of Performance of M-ary PSK.

Software: Matlab software.

Theory:

By definition, in an M-ary digital modulation scheme, we send one of M possible signals s1(t), s2(t), ..., sM(t) during each signaling (symbol) interval of duration T. In almost all applications, M = 2^m, where m is an integer. Under this condition, the symbol duration is T = mTb, where Tb is the bit duration. M-ary modulation schemes are preferred over binary modulation schemes for transmitting digital data over band-pass channels when the requirement is to conserve bandwidth, at the expense of both increased power and increased system complexity. In practice, we rarely find a communication channel with exactly the bandwidth required for transmitting the output of an information-bearing source by means of a binary modulation scheme. Thus, when the bandwidth of the channel is less than the required value, we resort to an M-ary modulation scheme for maximum bandwidth conservation.

M-ARY PHASE-SHIFT KEYING

To illustrate the capability of M-ary modulation schemes for bandwidth conservation, consider first the transmission of information consisting of a binary sequence with bit duration Tb. If we were to transmit this information by means of binary PSK, we would require a channel bandwidth that is inversely proportional to the bit duration Tb. However, if we take blocks of m bits to produce a symbol and use an M-ary PSK scheme with M = 2^m and symbol duration T = mTb, then the required bandwidth is proportional to 1/(mTb). This simple argument shows that the use of M-ary PSK provides a reduction in transmission bandwidth by a factor m = log2(M) over binary PSK. In M-ary PSK, the available phase range of 2π radians is apportioned equally, in discrete steps, among the M transmitted signals, as shown by the phase-modulated signal whose phase during symbol i takes the value 2πi/M, i = 0, 1, ..., M−1.
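The equal apportionment of phase can be illustrated numerically. The sketch below is in Python/NumPy (the manual's own programs are in MATLAB) and lists the constellation points of 8-PSK as an example:

```python
import numpy as np

# Phases of M-ary PSK: the 2*pi radians are split equally among M symbols.
M = 8                                    # 8-PSK as an example
i = np.arange(M)
phases = 2 * np.pi * i / M               # symbol phases in radians
points = np.exp(1j * phases)             # unit-energy constellation points

# All points lie on the unit circle; adjacent points are 2*pi/M apart.
print(np.round(points, 3))
print(bool(np.allclose(np.abs(points), 1.0)))
```

Changing M to 2, 4, 16, or 32 reproduces the constellations compared in the program below.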
M-ary Transmitter:

The input bit stream b(t) is applied to a serial-to-parallel converter, which forms a symbol from N successive input bits; i.e., the output of the serial-to-parallel converter is an N-bit word. The DAC output m(t) is generated when the Nth bit is received and is held until the next N bits are received, i.e., for a period NTb. This analog signal m(t) is applied to the modulator, which modulates the phase of the carrier according to the value of m(t).

M-ary PSK Receiver:

The M-ary receiver is shown in the figure. For carrier extraction, the received signal is raised to the Mth power. A band-pass filter extracts the component at Mf0, and this frequency is divided by M to recover the carrier frequency. The recovered carriers are applied to multipliers whose outputs are integrated over a period Ts. The correlator outputs (multiplier + integrator) are applied to a phase discriminator (not shown in the figure) and an ADC to obtain digital data, which is then converted back to the original serial format by a parallel-to-serial converter.
Probability of Error:

A probability-of-error calculation involves analysing the received phase at the receiver (in the presence of noise) and comparing it to the actual transmitted phases. An exact solution is difficult to compute, but for Pe < 10^-3 an approximate probability of making a symbol error is

Pe ≈ erfc( sqrt(k·Eb/N0) · sin(π/M) ), where k = log2(M)

and the corresponding bit error probability is approximately Pb ≈ Pe/k, as used in the program below. Stremler provides a table of the SNR requirements of M-ary PSK for fixed error rates. The results indicate that QPSK (M = 4) has definite advantages over coherent binary PSK (M = 2): the bandwidth efficiency is doubled for only about a 0.3 dB increase in SNR. For higher-rate transmissions in bandlimited channels the choice M = 8 is often used. Values of M > 8 are seldom used due to excessive power requirements.

Advantages of M-ary PSK:

– Bandwidth reduces as the number of bits per symbol (N) increases.
– The system is immune to amplitude variations caused by channel effects.

Disadvantages of M-ary PSK:

– The probability of error increases as N increases, because the distance d between adjacent constellation points decreases as N increases.
Program:

% Simulation study of Performance of M-ary PSK
clc;
close all;
EbN0dB = -4:1:24;                      % Eb/N0 range in dB
EbN0lin = 10.^(EbN0dB/10);             % linear Eb/N0
colors = {'k-*','g-o','r-h','c-s','m-s','y-*'};
index = 1;
m = 1:1:5;
M = 2.^m;                              % M = 2, 4, 8, 16, 32
for i = M
    k = log2(i);                       % bits per symbol
    % Approximate BER for M-PSK: Pb = (1/k)*erfc(sqrt(k*Eb/N0)*sin(pi/M))
    berErr = 1/k*erfc(sqrt(EbN0lin*k)*sin(pi/i));
    plotHandle = plot(EbN0dB, log10(berErr), colors{index});
    set(plotHandle, 'LineWidth', 1.5);
    index = index + 1;
    hold on;
end
legend('BPSK','QPSK','8-PSK','16-PSK','32-PSK');
axis([-4 24 -8 0]);
set(gca,'XTick',-4:1:24);
ylabel('Probability of BER Error - log10(Pb)');
xlabel('Eb/N0 (dB)');
title('Probability of BER Error log10(Pb) Vs Eb/N0');
grid on;
Conclusion:

Experiment No: 8
Title: Simulation study of Performance of M-ary QAM

Aim: Simulation study of Performance of M-ary QAM.


Software: Matlab software.
Theory:

QAM (Quadrature Amplitude Modulation) is a combination of ASK and PSK: two different signals are sent simultaneously on the same carrier frequency, with M = 4, 16, 32, 64, 128, 256. As an example of QAM, 12 different phases are combined with two different amplitudes. Since only 4 of the phase angles use both amplitudes, there are a total of 16 combinations. With 16 signal combinations, each baud carries 4 bits of information (2^4 = 16). ASK and PSK are combined so that each signal corresponds to multiple bits, with more phases than amplitudes. The minimum bandwidth requirement of QAM is the same as that of ASK or PSK.
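As an illustration of "each baud equals 4 bits", the sketch below (Python/NumPy) builds a square 16-QAM constellation. This grid layout is an assumed alternative to the 12-phase/two-amplitude constellation described above; both yield 16 distinct symbols:

```python
import numpy as np

# Square 16-QAM: I and Q each take the amplitude levels {-3, -1, +1, +3},
# giving 4 x 4 = 16 distinct symbol points.
levels = np.array([-3, -1, 1, 3])
constellation = np.array([complex(a, b) for a in levels for b in levels])

print(len(constellation))                 # 16 points
print(int(np.log2(len(constellation))))   # 4 bits per baud
```

The same construction with levels {-7,...,+7} gives 64-QAM, matching the curves in the program below.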
QAM Modulator

A basic QAM modulator circuit consists of a mixer, a local oscillator, a 90° phase shifter, and a summer block located close to the output port (see the figure above). The signal input is fed to the I and Q arms of the circuit. The local oscillator generates a clean sinusoidal signal of fixed amplitude and frequency. Each mixer circuit multiplies its incoming signal with the oscillator signal to generate a high-frequency carrier signal. While the in-phase signal is a simple mixing of the incoming signal and the oscillator signal, the quadrature waveform is formed by shifting the oscillator signal by a phase of 90°, after which mixing with the data signal is carried out. The resulting two waveforms, in-phase and quadrature, are added at the summer circuit to create a QAM-modulated signal.

QAM Demodulator

In the QAM demodulation process, a balun is used to split the incoming modulated signal to allow extraction of its in-phase and quadrature components. The two components can be coherently extracted because they are orthogonal to each other. A low-pass filter is used to recover the in-phase and quadrature signals separately. To extract the in-phase signal, the received signal is first multiplied by a cosine signal and then passed through a low-pass filter; a sine wave is multiplied with the received waveform and then low-pass filtered to extract the quadrature component.
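The modulator and demodulator described above can be sketched end to end. The Python/NumPy snippet below uses an assumed carrier frequency and symbol values; averaging over whole carrier cycles stands in for the low-pass filter:

```python
import numpy as np

fc = 10.0                          # assumed carrier frequency, Hz
fs = 1000.0                        # assumed sample rate, Hz
t = np.arange(0, 1.0, 1/fs)        # one second = 10 whole carrier cycles

I, Q = 1.0, -0.5                   # assumed in-phase and quadrature symbol values

# Modulator: mix I with the oscillator, Q with the 90-degree shifted oscillator
s = I*np.cos(2*np.pi*fc*t) - Q*np.sin(2*np.pi*fc*t)

# Demodulator: multiply by cos/sin, then "low-pass filter" by averaging
I_hat = 2*np.mean(s*np.cos(2*np.pi*fc*t))
Q_hat = -2*np.mean(s*np.sin(2*np.pi*fc*t))

print(round(I_hat, 3), round(Q_hat, 3))   # recovers I and Q
```

The recovery works because the cosine and sine carriers are orthogonal over whole cycles, which is exactly the property the demodulator paragraph relies on.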
Probability of error:

For square M-QAM, an approximate bit error probability is

Pb ≈ (2/k)·(1 − 1/√M)·erfc( sqrt(3k·Eb/N0 / (2(M−1))) ), where k = log2(M)

as used in the program below.
Applications of M-ary QAM Schemes

M-ary QAM schemes are used in a variety of applications. In the US, digital cable TV uses 64-QAM and 256-QAM. In the UK, 64-QAM is used for digital terrestrial television while 256-QAM is used for Freeview-HD systems.

Very dense constellation schemes such as 1024-QAM and 4096-QAM are used to achieve high levels of spectral efficiency in HomePlug powerline Ethernet devices and can deliver data rates up to 500 Mbps. The 1024-QAM scheme is used particularly in high-capacity microwave backhaul systems. If other signal-processing techniques such as adaptive equalization and channel coding are used alongside 1024-QAM, gigabit-level capacity can be achieved over the given channel bandwidth.

Program:
% Simulation study of Performance of M-ary QAM
clc;
close all;
EbN0dB = -4:1:24;                   % Eb/N0 range in dB
EbN0lin = 10.^(EbN0dB/10);          % linear Eb/N0
colors = {'k-*','g-o','r-h'};
index = 1;
m = 2:2:6;
M = 2.^m;                           % M = 4, 16, 64
for i = M
    k = log2(i);                    % bits per symbol
    % Approximate BER for square M-QAM
    berErr = 2/k*(1-1/sqrt(i))*erfc(sqrt(3*EbN0lin*k/(2*(i-1))));
    plotHandle = plot(EbN0dB, log10(berErr), colors{index});
    set(plotHandle, 'LineWidth', 1.5);
    index = index + 1;
    hold on;
end
legend('4-QAM','16-QAM','64-QAM');
axis([-4 24 -8 0]);
set(gca,'XTick',-4:1:24);
ylabel('Probability of BER Error - log10(Pb)');
xlabel('Eb/N0 (dB)');
title('Probability of BER Error log10(Pb) Vs Eb/N0');
grid on;
Conclusion:

Experiment No: 10
Title: Simulation Study of performance of BPSK receiver in presence of noise

Aim: Simulation Study of performance of BPSK receiver in presence of noise.


Software: Matlab software.
Theory:

In binary phase shift keying, two output phases are possible for a single carrier frequency: one output phase represents logic 1 and the other represents logic 0. The carrier shifts between two phases, 0° and 180°. Other names for BPSK are phase-reversal keying and biphase modulation. BPSK is a form of suppressed-carrier, square-wave modulation of a continuous-wave (CW) signal.

BPSK modulation is the simplest of all the M-PSK techniques. An insight into the derivation of the error-rate performance of an optimum BPSK receiver is essential, as it serves as a stepping stone to understanding the derivations for comparatively complex techniques like QPSK, 8-PSK, etc. Understanding the Q function and the error function is a prerequisite for this section.

The ideal constellation diagram of a BPSK transmission (Figure 1) contains two constellation points located equidistant from the origin. Each constellation point is located at a distance √Es from the origin, where Es is the BPSK symbol energy. Since the number of bits in a BPSK symbol is always one, the notations symbol energy (Es) and bit energy (Eb) can be used interchangeably (Es = Eb).

Assume that the BPSK symbols are transmitted through an AWGN channel characterized by variance N0/2 watts. When 0 is transmitted, the received symbol is represented by a Gaussian random variable r with mean S0 = +√Es and variance N0/2. When 1 is transmitted, the received symbol is represented by a Gaussian random variable r with mean S1 = −√Es and variance N0/2. Hence the conditional density function of the BPSK symbol (Figure 2) is given by a pair of Gaussian PDFs centered at ±√Es.
Figure 1: BPSK – ideal constellation

Figure 2: Probability density function (PDF) for BPSK Symbols

An optimum receiver for BPSK can be implemented using a correlation receiver or a matched-filter receiver (Figure 3). Both implementations contain a decision-making block that decides which bit/symbol was transmitted, based on the observed symbols at its input.

Figure 3: Optimum Receiver for BPSK


When the BPSK symbols are transmitted over an AWGN channel, the symbols appear smeared/distorted in the constellation depending on the SNR of the channel. A matched filter or correlator projects the received symbols onto the basis function that was previously used to construct the BPSK symbols at the transmitter. This process of projection is illustrated in Figure 4. Since the assumed channel is of Gaussian nature, the density function of the projected bits will follow a Gaussian distribution. This is illustrated in Figure 5.

Figure 4: Role of correlation/Matched Filter


After the signal points are projected onto the basis-function axis, a decision maker/comparator acts on the projected bits and decides on their fate based on the threshold that is set. For a BPSK receiver, if the a priori probabilities of transmitted 0s and 1s are equal (P = 0.5), the decision boundary or threshold passes through the origin. If the a priori probabilities are not equal, the optimum threshold boundary shifts away from the origin.

Figure 5: Distribution of received symbols


Considering a binary symmetric channel, where the a priori probabilities of 0s and 1s are equal, the decision threshold can conveniently be set to T = 0. The comparator decides whether the projected symbols fall in region A or region B (see Figure 4). If a symbol falls in region A, the receiver decides that 1 was transmitted; if it falls in region B, the decision is in favor of 0.

For deriving the performance of the receiver, the decision process made by the comparator is applied to the underlying distribution model (Figure 5). The symbols projected on the axis follow a Gaussian distribution, and the decision threshold is set to T = 0. A received bit is in error if the transmitted bit is 0 and the decision output is 1, or if the transmitted bit is 1 and the decision output is 0.
This is expressed in terms of the probability of error as

P(error) = P(0 transmitted and 1 decided) + P(1 transmitted and 0 decided)

Or equivalently, by applying Bayes' theorem, in terms of conditional probabilities:

P(error) = P(0 transmitted)·P(1 decided | 0 transmitted) + P(1 transmitted)·P(0 decided | 1 transmitted)

Since the a priori probabilities are equal, P(0 transmitted) = P(1 transmitted) = 0.5, and the equation can be rewritten as

P(error) = 0.5·P(1 decided | 0 transmitted) + 0.5·P(0 decided | 1 transmitted)

Intuitively, these conditional probabilities are integrals representing the areas of the shaded curves shown in Figure 6, and the area of each shaded region is given by the Q function.

Figure 6a, 6b: Calculating Error Probability

By symmetry, P(1 decided | 0 transmitted) = P(0 decided | 1 transmitted) = Q(√(2Es/N0)), and therefore P(error) = Q(√(2Es/N0)).

For BPSK, since Es = Eb, the probability of symbol error (Ps) and the probability of bit error (Pb) are the same. Expressing Ps and Pb in terms of the Q function and also in terms of the complementary error function:

Ps = Pb = Q(√(2Eb/N0)) = (1/2)·erfc(√(Eb/N0))
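This closed-form expression can be checked numerically. The sketch below uses Python's standard math library (the manual's simulation, which follows, is in MATLAB):

```python
from math import sqrt, erfc

def bpsk_pb(EbN0_dB):
    """Theoretical BPSK bit error probability, Pb = 0.5*erfc(sqrt(Eb/N0))."""
    EbN0 = 10 ** (EbN0_dB / 10)        # convert dB to linear Eb/N0
    return 0.5 * erfc(sqrt(EbN0))

for snr in (0, 4, 8):
    print(snr, bpsk_pb(snr))
# At Eb/N0 = 0 dB, Pb is about 7.9e-2; it falls steeply as the SNR grows.
```

Plotting this theoretical curve alongside the simulated BER from the program below is a useful sanity check on the simulation.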

% Simulation Study of performance of BPSK receiver in presence of noise

% Initialization of data and variables
clc;
clear all;
close all;
nr_data_bits = 8192;
b = rand(1, nr_data_bits) > 0.5;     % equiprobable random bits (rand, not randn)
d = zeros(1, length(b));
% Generation of BPSK: bit 0 -> +1, bit 1 -> -1
for n = 1:length(b)
    if b(n) == 0
        d(n) = exp(1j*2*pi);         % +1
    else
        d(n) = exp(1j*pi);           % -1
    end
end
bpsk = d;
% Plotting of BPSK data
figure(1);
plot(d, 'o');
axis([-2 2 -2 2]);
grid on;
xlabel('real');
ylabel('imag');
title('BPSK constellation');
% Addition of noise and detection for each SNR value
BER1 = [];
SNR1 = [];
for SNR = 0:24
    sigma = sqrt(10.0^(-SNR/10.0));
    snbpsk = (real(bpsk) + sigma.*randn(size(bpsk))) + 1j.*(imag(bpsk) + sigma.*randn(size(bpsk)));
    % Plotting of BPSK data with noise
    figure(2);
    plot(snbpsk, 'o');
    axis([-2 2 -2 2]);
    grid on;
    xlabel('real');
    ylabel('imag');
    title('BPSK constellation with noise');
    % Receiver: decide bit 1 if the real part is negative
    r = snbpsk;
    bhat = real(r) < 0;
    ne = sum(b ~= bhat);             % number of bit errors
    BER1 = [BER1 ne/nr_data_bits];
    SNR1 = [SNR1 SNR];
end
% Plotting of BER graph of BPSK
figure(3);
semilogy(SNR1, BER1, '-*');
grid on;
xlabel('SNR = Eb/No (dB)');
ylabel('BER');
title('Simulation of BER for BPSK');
legend('BER-simulated');
[Figure: "BPSK constellation" and "BPSK constellation with noise" — scatter plots in the real/imag plane, both axes spanning -2 to 2]

[Figure: "Simulation of BER for BPSK" — BER-simulated curve falling from about 10^-1 to 10^-4 as SNR = Eb/No (dB) increases from 0 to 12]

Conclusion:

Experiment No: 11
Title: Simulation study of various Entropies and mutual information in a communication system.

Aim: Simulation study of various Entropies and mutual information in a communication system

Software: Matlab/C.

Theory:-

First to know about entropy and mutual information, we will go


through ‘Information Theory’ as follows.

1. INFORMATION:

Consider a memory less source of n message M1, M2,….. Mn having


Probabilities P1, P2,…..Pn. Let these messages be equiprobable. So
Probability of each message will be P1=P2= ............. =Pn=1/n.
Now, the information is the minimum no. of digits
required to encode the message. Information in minimum no. of
digits required are log2N. This is nothing but the information I for
binary system.

So, I=log2N=log2 (1/Pi) bits.

2. ENTROPY:

The probability of occurrence of Mi is Pi. Hence the mean or average information per message emitted by the source is

H(m) = Σ(i=1 to n) Pi·Ii bits/symbol

The average information per message of a source is called its entropy, denoted H(m). Hence,

H(m) = Σ(i=1 to n) Pi·log2(1/Pi) bits/msg

so

H(m) = − Σ(i=1 to n) Pi·log2(Pi) bits/msg.

3. CHANNEL MATRIX:

The matrix of conditional probabilities P(Yj/Xi) for a given channel and receiver is called the channel matrix. The entry P(Yj/Xi) represents the probability that Yj is received when Xi is transmitted; the matrix thus characterizes the channel and receiver.

4. JOINT PROBABILITY:

The joint probability of Yj and Xi is denoted P(Yj, Xi). The relation between conditional probability and joint probability is

P(Yj/Xi) = P(Yj, Xi) / P(Xi)

i.e., P(Yj, Xi) = P(Xi)·P(Yj/Xi)

Thus the joint probability is obtained by multiplying the conditional probability by the probability of the transmitted message. The matrix containing the joint probabilities P(Yj, Xi) is called the joint probability matrix (JPM); it is obtained by multiplying each row of the channel matrix by the corresponding P(Xi).

5. MUTUAL INFORMATION:

Mutual information is the actual information that the receiver obtains after the average loss of information through the channel:

mutual information = information transmitted − average information loss.

The average loss of information (equivocation) is defined by

H(X/Y) = Σi Σj P(Xi, Yj)·log2( 1/P(Xi/Yj) ) bits

so

I(X;Y) = H(X) − H(X/Y) bits/symbol

Now,

H(X) = Σ(i) P(Xi)·log2(1/P(Xi))

H(Y) = Σ(j) P(Yj)·log2(1/P(Yj))

H(X,Y) = Σi Σj P(Xi, Yj)·log2(1/P(Xi, Yj))

where P(Xi, Yj) can be obtained from the JPM, and equivalently

I(X;Y) = H(X) + H(Y) − H(X,Y).

ALGORITHM:

1. Input the number of symbols n.
2. If the symbols are equiprobable, take Pi = 1/n.
3. If the symbols are not equiprobable, accept the probabilities from the user.
4. Take the choice of units.
5. Take the symbol rate r.
6. If Pi = 0, ignore that symbol.
7. If an entered probability is negative, give an error message.
8. Check that the sum of all probabilities equals 1.
9. Calculate the self-information Ii of each symbol.
10. Calculate the entropy H.
11. Calculate the mutual information.
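The quantities above can be computed directly from a joint probability matrix. The sketch below is in Python/NumPy, using an assumed JPM for a binary symmetric channel with crossover probability 0.1 and equiprobable inputs:

```python
import numpy as np

# Assumed joint probability matrix P(X=i, Y=j): binary symmetric channel,
# crossover probability 0.1, equiprobable inputs P(X=0) = P(X=1) = 0.5.
Pxy = np.array([[0.45, 0.05],
                [0.05, 0.45]])

Px = Pxy.sum(axis=1)      # marginal P(X)
Py = Pxy.sum(axis=0)      # marginal P(Y)

def H(p):
    """Entropy in bits, H = -sum p*log2(p), ignoring zero-probability terms."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

Hx, Hy, Hxy = H(Px), H(Py), H(Pxy.ravel())
Ixy = Hx + Hy - Hxy       # mutual information I(X;Y) = H(X) + H(Y) - H(X,Y)
print(Hx, Hy, Hxy, Ixy)
```

For this channel H(X) = H(Y) = 1 bit and I(X;Y) ≈ 0.531 bits/symbol, i.e., the 0.1 crossover costs about 0.469 bits of equivocation per symbol.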

Conclusion:

Experiment No: 12
Title: Simulation Study of Linear Block codes.

Aim: Simulation Study of Linear Block codes.

Software: Matlab/C.

Theory:

A block code consists of a set of fixed-length codewords. A linear code is a code that has the following properties:

1) Any two codewords can be added in modulo-2 arithmetic to produce a third codeword in the code.
2) The all-zero word is always a codeword.
3) The minimum Hamming distance of the code is the minimum distance between any two distinct codewords.

Any code C is a subspace of the vector space GF(q)^n, and any set of linearly independent vectors can be used to generate the code space.

Consider a (7,4) (n,k) linear block code in which k bits of the n code bits are always identical to the message sequence to be transmitted. Accordingly, the remaining (n−k) bits are referred to as parity-check bits.

We can define a generator matrix whose rows, through linear combinations, generate the codewords of C. The rows are linearly independent. The generator matrix is not unique for a given linear block code.

The generator matrix is given as:

G = [I | P]

where I is the identity matrix of order k × k and P is the parity coefficient matrix of order k × (n−k).

The generator matrix converts a vector of length k into a vector of length n. If the input vector of uncoded symbols is represented by i, then the coded symbols are given as

C = iG

where C is the codeword and i is the information word.

Thus codewords are obtained by simply multiplying the input vector by the generator matrix. It is possible to detect a valid codeword using the parity-check matrix H, whose size is (n−k) × n; it simply detects whether an error has occurred or not.

H is defined as

H = [P^T | I]

If multiplication of the received vector by H^T yields a nonzero vector, then an error has occurred; for zero errors, C·H^T = 0.
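The relations C = iG and C·H^T = 0 can be verified numerically. The sketch below is in Python/NumPy with one assumed parity matrix P for a (7,4) code (the manual's program instead accepts G from the user):

```python
import numpy as np

# An assumed k x (n-k) parity matrix P for a (7,4) code, used for illustration.
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])
I4 = np.eye(4, dtype=int)
I3 = np.eye(3, dtype=int)
G = np.hstack([I4, P])          # k x n generator matrix, G = [I | P]
H = np.hstack([P.T, I3])        # (n-k) x n parity-check matrix, H = [P^T | I]

i = np.array([1, 0, 1, 1])      # information word
c = i @ G % 2                   # codeword C = iG (mod 2)
s = c @ H.T % 2                 # syndrome of a valid codeword
print(c, s)                     # s is all zeros, confirming C H^T = 0
```

Flipping any single bit of c produces a nonzero syndrome equal to the corresponding column of H, which is what syndrome decoding exploits below.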

Error detection and error correction:

For detecting S errors,

dmin >= S + 1

where dmin = d* is the minimum Hamming distance between any two distinct codewords, so

S = dmin − 1.

For correcting t errors,

dmin >= 2t + 1

t = (dmin − 1)/2, where t is the number of correctable errors.

Syndrome decoding:

Consider the parity-check matrix H of any (n,k) code. For any received vector v in GF(q)^n,

S = v·H^T

is called the syndrome of v; the syndrome gives us the symptoms of the error.

The steps for syndrome decoding are as follows:

1. Determine the syndrome S = v·H^T of the received word v.
2. Locate the syndrome in the syndrome column of the decoding table.
3. Since S = (C + e)·H^T = e·H^T, the syndrome depends only on the error pattern, where C is the original codeword and e is the error pattern.
4. Add (mod 2) the error pattern to the received word to get the original codeword.

Algorithm:

1. Accept the values of n and k from the user.
2. Accept the generator matrix G from the user.
3. Separate the coefficient matrix P from G.
4. Multiply each of the 2^k information words by G to get the 2^k codewords, C = iG.
5. Determine the parity-check matrix as H = [P^T | I].
6. Find H transpose.
7. Calculate S and t as S = dmin − 1 and t = (dmin − 1)/2.
8. Ask the user to enter the received codeword V.
9. Multiply V by H^T; this gives the syndrome vector.
10. Compare the syndrome vector with the rows of H^T. If a row matches the syndrome vector, that row index indicates the bit position in which the error is present.
11. Display this bit-error position.
12. Correct the error bit and display the corrected word.
13. The received data is corrected in the correct() function and displayed as corrected data.
14. In the same function, the message bits of the corrected data are displayed as the decoded message.
LBC Encoding and Decoding Program

clc;
clear;
close all;
disp('linear block coding and decoding');
n = input('enter the no. of code bits: ');
k = input('enter the no. of data bits: ');
m = n - k;
p = input('enter the coefficient matrix of size k by m: ');
g = [eye(k) p];            % generator matrix G = [I | P]
ht = [p; eye(m)];          % transpose of parity-check matrix, H' = [P; I]
d = input('enter the data vector of size k: ');
c = rem(d*g, 2);           % codeword C = dG (mod 2)
disp('generator matrix is');
g
disp('code vector is');
c
r = input('enter the received vector: ');
s = rem(r*ht, 2);          % syndrome S = rH' (mod 2)
disp('syndrome vector is');
s
% A nonzero syndrome matching row i of H' indicates an error in bit i
e = zeros(1, n);
for i = 1:n
    if s == ht(i,:)
        e(1,i) = 1;
    end
end
disp('the error pattern is');
e
cm = rem(r + e, 2);        % corrected received vector
disp('corrected received vector is');
cm
dm = cm(1:k);              % systematic code: first k bits are the message
disp('most likely data word transmitted is');
dm
Conclusion:

Experiment No: 13
Title: Simulation Study of cyclic codes.

Aim: Simulation Study of cyclic codes.

Software: Matlab/C.

Theory:

Encoding:

Cyclic codes form a subclass of linear block codes. Indeed, many of the important linear block codes discovered to date are either cyclic codes or closely related to cyclic codes. An advantage of cyclic codes over most other types of codes is that they are easy to encode. Furthermore, cyclic codes possess a well-defined mathematical structure. Let the n-tuple

(Cn-1, Cn-2, ..., C1, C0)

denote a codeword of an (n,k) linear block code. The codeword can be represented by a polynomial of degree less than or equal to n−1:

C(x) = Cn-1·x^(n-1) + Cn-2·x^(n-2) + ... + C1·x + C0

where x^(n-1) represents the MSB and x^0 represents the LSB. Codewords are represented by polynomials because:

1. These are algebraic codes, so algebraic operations such as addition, multiplication, division, and subtraction become very simple.
2. The positions of the bits are represented by the powers of x in the polynomial.

Generator Polynomial: The polynomial x^n + 1 and its factors play a major role in the generation of cyclic codes. Let g(x) be a polynomial of degree n−k that is a factor of x^n + 1; g(x) is the polynomial of least degree in the code and may be expanded as

g(x) = x^(n-k) + Σ(i=1 to n-k-1) gi·x^i + 1

The polynomial g(x) is called the generator polynomial of the cyclic code. A cyclic code is uniquely determined by its generator polynomial g(x), in that each code polynomial can be expressed as the polynomial product

C(x) = a(x)·g(x)

Suppose we are given the generator polynomial g(x) and wish to encode the message sequence (Mk-1, Mk-2, ..., M1, M0) into an (n,k) systematic code. With g = n−k, the code vector (systematic code in polynomial form) is

[Mk-1, Mk-2, ..., M1, M0, Cg-1, Cg-2, ..., C1, C0]

V(x) = Mk-1·x^(k+g-1) + Mk-2·x^(k+g-2) + ... + M0·x^g + Cg-1·x^(g-1) + ... + C0

i.e., V(x) = x^g·m(x) + C(x). Since numerator/denominator = quotient + remainder/denominator, the parity polynomial is obtained as the remainder

C(x) = rem[ x^g·m(x) / g(x) ]
Example: (7,4) code vector

As one example, take the information bits 1010, with polynomial representation m(x) = x^3 + x, and select G(x) = x^3 + x + 1 (n − k = 3). Then

C(x) = rem[ x^3·(x^3 + x) / (x^3 + x + 1) ] = x + 1

V(x) = x^6 + x^4 + x + 1

Binary representation = 1010011

Generator matrix:

      1 0 0 0 | 1 0 1
G =   0 1 0 0 | 1 1 1
      0 0 1 0 | 1 1 0
      0 0 0 1 | 0 1 1

Similarly, for all information words:

Information bits    Code vector
0000                0000000
0001                0001011
0010                0010110
0011                0011101
0100                0100111
0101                0101100
0110                0110001
0111                0111010
1000                1000101
1001                1001110
1010                1010011
1011                1011000
1100                1100010
1101                1101001
1110                1110100
1111                1111111
Algorithm:
1. Accept the n value and k value from the user.
2. Accept the generator polynomial from the user in binary form.
3. Display the code vectors on the screen.

How to generate a cyclic code:

G(x) = x^3 + x + 1, binary form 1011, accepted from the user.

Example: information bits 1001

  1001000
⊕ 1011
  -------
  0000110   (after successive XOR steps of the long division)

Parity bits = 1 1 0
Code vector = 1 0 0 1 1 1 0

Information bits 0 0 1 1

  0011000
⊕ 1011
  -------
  0000101   (after successive XOR steps of the long division)

Parity bits = 1 0 1
Code vector = 0 0 1 1 1 0 1
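The hand divisions above can be sketched as mod-2 long division in Python. `cyclic_parity` is a hypothetical helper name; the example reproduces the parity bits 110 for the information bits 1001:

```python
def cyclic_parity(msg, gen, n, k):
    """Parity bits via mod-2 long division of m(x)*x^(n-k) by g(x).
    msg and gen are bit lists, MSB first."""
    rem = msg + [0] * (n - k)       # m(x) shifted left by n-k positions
    for i in range(k):
        if rem[i] == 1:             # XOR the generator in at this position
            for j, gbit in enumerate(gen):
                rem[i + j] ^= gbit
    return rem[-(n - k):]           # remainder = parity bits

g = [1, 0, 1, 1]                    # g(x) = x^3 + x + 1
parity = cyclic_parity([1, 0, 0, 1], g, 7, 4)
print(parity)                       # [1, 1, 0] -> codeword 1001110
```

The same helper with message 1010 returns [0, 1, 1], matching the worked (7,4) example earlier.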

Decoding:

The decoding process of a cyclic code is the same for both systematic and non-systematic cyclic codes. Every valid codeword polynomial c(x) is a multiple of g(x). When a codeword is transmitted, errors may be introduced, so the received codeword polynomial r(x) may not be the same as c(x).

If the received codeword is the same as the transmitted codeword, then r(x) mod g(x) = 0; otherwise the remainder is a nonzero polynomial. Consider r(x)/g(x); it can be written as

r(x)/g(x) = q(x) + s(x)/g(x)

where q(x) is the quotient polynomial and s(x) is the remainder polynomial, also called the syndrome polynomial. The degree of q(x) is at most k−1 and that of s(x) is at most n−k−1.

r(x) can be written in terms of c(x) as

r(x) = c(x) + e(x)

where e(x) is an error polynomial determined by the bit-error pattern in r(x). Then

r(x)/g(x) = [c(x) + e(x)]/g(x) = c(x)/g(x) + e(x)/g(x)

rem[r(x)/g(x)] = rem[c(x)/g(x)] + rem[e(x)/g(x)]

But the remainder after dividing c(x) by g(x) is zero, so

rem[r(x)/g(x)] = rem[e(x)/g(x)]

Comparing the above equations,

s(x) = rem[e(x)/g(x)]

i.e., the syndrome polynomial of the received word is the same as that of the error polynomial e(x). If the aim is only to detect errors, then the received codeword polynomial is simply divided by g(x).

Parity-check matrix:

      1 1 1 0 | 1 0 0
H =   0 1 1 1 | 0 1 0
      1 1 0 1 | 0 0 1

If the remainder, i.e., the syndrome polynomial, is zero, there is no error; if it is nonzero, there is an error. If it is required to correct those errors, the procedure is:

1) Prepare a table of error patterns and syndromes.
2) Find the syndrome by dividing the received word polynomial r(x) by g(x).
3) Select the error pattern corresponding to the syndrome.
4) Add the error pattern to the received code vector.

Example:
For the (7,4) code with g(x) = x^3 + x + 1, the single-bit error patterns and syndromes are:

Error pattern        Syndrome value
1) 1 0 0 0 0 0 0     1 0 1
2) 0 1 0 0 0 0 0     1 1 1
3) 0 0 1 0 0 0 0     1 1 0
4) 0 0 0 1 0 0 0     0 1 1
5) 0 0 0 0 1 0 0     1 0 0
6) 0 0 0 0 0 1 0     0 1 0
7) 0 0 0 0 0 0 1     0 0 1

g(x) = x^3 + x + 1 is a factor of x^7 + 1.

One example of an error pattern:

e = 1 0 0 0 0 0 0, i.e., e(x) = x^6
s(x) = rem[ x^6 / (x^3 + x + 1) ] = x^2 + 1, binary form 1 0 1

To find the location of an error, suppose the received code vector is 1 0 0 1 1 0 1, with polynomial

r(x) = x^6 + x^3 + x^2 + 1

S(x) = rem[ r(x)/g(x) ] = x + 1

Binary form = 0 1 1

In the table above, the value 0 1 1 appears in the 4th row, so the error is in the 4th location.

Corrected code vector = 1 0 0 1 1 0 1 ⊕ 0 0 0 1 0 0 0
                      = 1 0 0 0 1 0 1
Algorithm
1. Accept the n and k values from the user.
2. Accept the generator polynomial (binary form) from the user.
3. Find the syndrome table using the generator polynomial. For example, for the (7,4) code with G(x) = 1011, the first row of the syndrome table is obtained by dividing 1000000 by 1011:

  1000000
⊕ 1011
  0011000
⊕   1011
  0001110
⊕    1011
  0000101        s(1) = 101

Using this logic, build the complete syndrome table.
4. Accept the received code vector from the user.
5. Divide the received code vector by the generator polynomial; if the syndrome is 0, there is no error.
6. For a nonzero syndrome, find the row of the syndrome table containing that value; the row number gives the location of the corrupted bit.
7. Display the corrected code vector.

Code:
clc;
close all;
clear all;
disp('cyclic coding & decoding');
n = input('enter the code length: ');
k = input('enter the no of msg bits: ');
m = n - k;
g = input('enter the generator polynomial: ');
d = input('enter the data vector: ');
% Build the systematic generator matrix row by row using polynomial division
I = eye(k);
z = zeros(k, m);
t = [I, z];
for i = 1:k
    [q, r] = deconv(t(i,:), g);
    q = rem(abs(q), 2);
    c = rem(conv(q, g), 2);        % codeword for the i-th unit message
    if i == 1
        G = c;
    else
        G = [G; c];
    end
end
disp('generator matrix');
G
c = rem(d*G, 2);
disp('code vector: ');
c
r = input('enter the received vector: ');
[q, s] = deconv(r, g);
s = rem(abs(s), 2)                 % syndrome of the received vector
% Compare the syndrome with that of each single-bit error pattern
flag = 0;                          % initialize so the check below is defined
for i = 1:n
    e = [zeros(1, i-1), 1, zeros(1, n-i)];
    [q, r2] = deconv(e, g);
    r2 = rem(abs(r2), 2);
    if s == r2
        dc = rem(r + e, 2);        % correct the single-bit error
        e
        flag = 1;
        break;
    end
end
if flag ~= 1                       % no matching error pattern: assume no error
    e = zeros(1, n);
    dc = r;
    e
end
disp('decoded code vector: ');
dc
disp('decoded data vector: ');
dc(1:k)

Conclusion:
