Digital Communications
LECTURE NOTES
B.TECH
(III YEAR – I SEM)
(2021-22)
Prepared by
Mrs. P. Swetha, Assistant Professor
Mr. K.D.K.Ajay, Assistant Professor
UNIT I
Pulse Digital Modulation: Elements of digital communication systems, Advantages of digital
communication systems, PCM generation and reconstruction, Quantization Noise, Types of
Quantization and Companding, Differential PCM (DPCM), Time Division Multiplexing &
Demultiplexing.
Delta Modulation: Delta modulation and its drawbacks, Adaptive Delta modulation, Noise in
PCM and DM systems, Illustrative Problems.
UNIT II
Digital Modulation Techniques: Introduction, ASK modulator, Coherent and Non-Coherent ASK
detector, FSK modulator, coherent & non-coherent detection of FSK, BPSK modulator &
Coherent reception of BPSK, Principles of DPSK and QPSK.
Data Transmission: Base band signal receiver, probability of error, The optimum filter, Matched
filter, probability of error using matched filter, Optimum filter using correlator, Probability of
error of ASK, FSK, BPSK and QPSK, Illustrative Problems.
UNIT III
Information Theory: Introduction to Information theory, Concept of amount of information and
its properties, Average information (Entropy) and its properties, Information rate, Mutual
information and its properties, Illustrative Problems.
Source Coding: Introduction, Advantages, Hartley Shannon’s theorem (Channel Capacity
Theorem), Bandwidth-S/N trade off, Shannon-Fano coding, Huffman coding, Illustrative
Problems.
UNIT IV
Linear Block Codes: Introduction, Matrix description of Linear Block codes, Error detection and
error correction capabilities of linear block codes, Hamming codes.
Cyclic Codes: Encoding, Syndrome Calculation, Decoding.
UNIT V
Convolution Codes: Introduction, Encoding of convolution codes - Time domain approach,
Transform domain approach. Graphical approach: State, Code Tree and Trellis diagram,
Decoding using Viterbi algorithm, Illustrative Problems.
TEXT BOOKS:
1. Digital communications - Simon Haykin, John Wiley, 2005
2. Principles of Communication Systems – H. Taub and D. Schilling, TMH, 2003
REFERENCES:
1. Digital and Analog Communication Systems – K.Sam Shanmugam, John Wiley, 2005.
2. Digital Communications – John Proakis, TMH, 1983.
3. Communication Systems Analog & Digital – Singh & Sapre, TMH, 2004.
4. Modern Analog and Digital Communication – B.P.Lathi, Oxford reprint, 3rd edition,
2004.
COURSE OUTCOMES:
At the end of the course, the student will be able to:
1. Understand the basic components and signal flow in a digital communication system.
2. Understand the concept of digital modulation techniques & analyze their error performance
in the presence of noise using probability of error.
3. Design Optimum receivers for digital modulation techniques.
4. Understand the importance of Information theory and Source Coding.
5. Implement different channel encoding and decoding techniques used for error detection &
correction in digital communications.
UNIT-1
Digital Pulse Modulation
Elements of Digital Communication Systems:
3. Channel Encoder:
The information sequence is passed through the channel encoder. The purpose
of the channel encoder is to introduce, in a controlled manner, some redundancy in the
binary information sequence that can be used at the receiver to overcome the effects
of noise and interference encountered in the transmission of the signal through the
channel.
For example, the encoder may take k bits of the information sequence and map those
k bits to a unique n-bit sequence called a code word. The amount of redundancy
introduced is measured by the ratio n/k, and the reciprocal of this ratio (k/n) is known
as the code rate.
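For instance, a (n, k) = (3, 1) repetition code has code rate k/n = 1/3 and can correct a single bit error per code word. The sketch below is our illustration of this idea, not a scheme prescribed by the notes.

```python
# A minimal sketch of a (n, k) = (3, 1) repetition code; the scheme is our
# illustration, not one prescribed by the notes.

def encode_repetition(bits, n=3):
    """Map each information bit (k = 1) to an n-bit code word."""
    return [b for b in bits for _ in range(n)]

def decode_repetition(coded, n=3):
    """Majority-vote each n-bit block, correcting any single bit error."""
    return [1 if sum(coded[i:i + n]) > n // 2 else 0
            for i in range(0, len(coded), n)]

k, n = 1, 3
print("code rate k/n =", k / n)               # 1/3
print(encode_repetition([1, 0]))              # [1, 1, 1, 0, 0, 0]
print(decode_repetition([1, 0, 1, 0, 0, 0]))  # [1, 0] despite one flipped bit
```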
4. Digital Modulator:
The binary sequence is passed to the digital modulator, which in turn converts the
sequence into electric signals so that they can be transmitted over the channel (the
channel is discussed below). The digital modulator maps the binary sequence into signal
waveforms; for example, if we represent 1 by sin x and 0 by cos x, then we transmit
sin x for a 1 and cos x for a 0 (a case similar to BPSK).
5. Channel:
The communication channel is the physical medium that is used for
transmitting signals from the transmitter to the receiver. In a wireless system, this
channel is the atmosphere; in traditional telephony, the channel is wired; there are also
optical channels, underwater acoustic channels, etc. We further classify channels on
the basis of their properties and characteristics, e.g. the AWGN channel.
6. Digital Demodulator:
The digital demodulator processes the channel corrupted transmitted
waveform and reduces the waveform to the sequence of numbers that represents
estimates of the transmitted data symbols.
7. Channel Decoder:
This sequence of numbers is then passed through the channel decoder, which
attempts to reconstruct the original information sequence from knowledge of the
code used by the channel encoder and the redundancy contained in the received data.
Note: The average probability of a bit error at the output of the decoder is a measure
of the performance of the demodulator – decoder combination.
8. Source Decoder:
At the end, if an analog signal is desired, the source decoder decodes the sequence
using knowledge of the encoding algorithm, which results in an approximate replica
of the input at the transmitter end.
9. Output Transducer:
Finally we get the desired signal in the desired format, analog or digital.
Disadvantages
Fig. Pulse modulation
Sampling
Quantization
Binary encoding
Elements of PCM System:
Sampling:
Fig. 4 Types of Sampling
A band-pass signal of bandwidth 2fm can be completely recovered from its samples.
Minimum sampling rate = 2 × Bandwidth = 2 × 2fm = 4fm
Sampling Theorem:
Fig. 7 (a) Sampled version of signal x(t)
(b) Reconstruction of x(t) from its samples
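The reconstruction of x(t) from its samples can be sketched numerically with the Whittaker-Shannon interpolation formula x(t) = Σ x[k]·sinc((t − kTs)/Ts); the 10 Hz sine and 100 Hz sampling rate below are illustrative choices, not values from the notes.

```python
import numpy as np

# Hedged sketch of ideal reconstruction from samples via the
# Whittaker-Shannon interpolation formula. A 10 Hz sine is sampled at
# fs = 100 Hz, well above the Nyquist rate of 2*fm = 20 Hz.

fm, fs = 10.0, 100.0
Ts = 1.0 / fs
k = np.arange(100)                       # one second of samples
samples = np.sin(2 * np.pi * fm * k * Ts)

def reconstruct(t, samples, Ts):
    """Interpolate the sample sequence at an arbitrary instant t."""
    n = np.arange(len(samples))
    return float(np.sum(samples * np.sinc((t - n * Ts) / Ts)))

t0 = 0.503                               # an instant between two samples
err = abs(reconstruct(t0, samples, Ts) - np.sin(2 * np.pi * fm * t0))
print(err < 0.05)                        # small residual from truncating the sum
```

At a sampling instant the sinc terms collapse to the stored sample exactly; between samples the truncated sum gives a close approximation.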
PCM Generator:
Transmission BW in PCM:
PCM Receiver:
Quantization
The quantizing of an analog signal is done by discretizing the signal with a number of
quantization levels.
Quantization is representing the sampled values of the amplitude by a finite set of
levels, which means converting a continuous-amplitude sample into a
discrete-amplitude one.
Both sampling and quantization result in the loss of information.
The quality of a Quantizer output depends upon the number of quantization levels
used.
The discrete amplitudes of the quantized output are called as representation levels
or reconstruction levels.
The spacing between the two adjacent representation levels is called a quantum or
step-size.
There are two types of Quantization
o Uniform Quantization
o Non-uniform Quantization.
The type of quantization in which the quantization levels are uniformly spaced is
termed as a Uniform Quantization.
The type of quantization in which the quantization levels are unequal and mostly the
relation between them is logarithmic, is termed as a Non-uniform Quantization.
Uniform
Quantization:
• There are two types of uniform quantization.
– Mid-Rise type
– Mid-Tread type.
• The following figures represent the two types of uniform quantization.
• The Mid-Rise type is so called because the origin lies in the middle of a rising part
of the staircase-like graph. The quantization levels in this type are even in number.
• The Mid-Tread type is so called because the origin lies in the middle of a tread of the
staircase-like graph. The quantization levels in this type are odd in number.
• Both the mid-rise and mid-tread types of uniform quantizer are symmetric about the
origin.
Quantization Noise and Signal to Noise ratio in PCM System:
Derivation of Maximum Signal to Quantization Noise Ratio for Linear Quantization:
Non-Uniform Quantization:
In non-uniform quantization, the step size is not fixed. It varies according to a certain
law or as per the input signal amplitude. The following figure shows the characteristics
of a non-uniform quantizer.
Companding PCM System:
• Non-uniform quantizers are difficult to make and expensive.
• An alternative is to first pass the speech signal through a nonlinearity before quantizing
with a uniform quantizer.
• The nonlinearity causes the signal amplitude to be compressed.
– The input to the quantizer will have a more uniform distribution.
• At the receiver, the signal is expanded by the inverse nonlinearity.
• The process of compressing and expanding is called Companding.
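As a concrete instance, the sketch below assumes the mu-law characteristic (mu = 255, as used in North American PCM) with inputs normalized to [-1, 1]; the notes do not fix a particular law here.

```python
import numpy as np

# Sketch of mu-law companding (mu = 255 assumed). The compressor is applied
# before a uniform quantizer; the expander at the receiver inverts it.

MU = 255.0

def compress(x):
    """mu-law compressor for x normalized to [-1, 1]."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def expand(y):
    """Inverse (expander): recovers x from the compressed value."""
    return np.sign(y) * ((1 + MU) ** np.abs(y) - 1) / MU

x = np.array([-0.5, 0.01, 0.5])
y = compress(x)
print(np.allclose(expand(y), x))   # True: expanding undoes compression
print(compress(0.01) > 0.1)        # True: small amplitudes are boosted
```

Boosting small amplitudes before a uniform quantizer is what gives speech a more uniform effective step size.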
Differential Pulse Code Modulation (DPCM):
Redundant Information in PCM:
Line Coding:
In telecommunication, a line code is a code chosen for use within a communications
system for transmitting a digital signal down a transmission line. Line coding is often used
for digital data transport.
The waveform pattern of voltage or current used to represent the 1s and 0s of a digital
signal on a transmission link is called line encoding. The common types of
line encoding are unipolar, polar, bipolar and Manchester encoding. Line codes are used
commonly in computer communication networks over short distances.
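The four line codes named above can be sketched as level sequences; the Manchester polarity convention used below is one common choice, not the only one.

```python
# Hedged sketch of the common line codes named above, mapping a bit stream
# to voltage levels (one entry per half-bit for Manchester, per bit otherwise).

def unipolar(bits):              # 1 -> +V, 0 -> 0
    return [1 if b else 0 for b in bits]

def polar(bits):                 # 1 -> +V, 0 -> -V
    return [1 if b else -1 for b in bits]

def bipolar_ami(bits):           # 0 -> 0, 1 -> alternating +V / -V
    out, last = [], -1
    for b in bits:
        if b:
            last = -last
            out.append(last)
        else:
            out.append(0)
    return out

def manchester(bits):            # 1 -> high-to-low, 0 -> low-to-high (one convention)
    return [lvl for b in bits for lvl in ((1, -1) if b else (-1, 1))]

bits = [1, 0, 1, 1]
print(unipolar(bits))      # [1, 0, 1, 1]
print(polar(bits))         # [1, -1, 1, 1]
print(bipolar_ami(bits))   # [1, 0, -1, 1]
print(manchester(bits))    # [1, -1, -1, 1, 1, -1, 1, -1]
```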
Time Division Multiplexing:
Introduction to Delta Modulation
Condition for Slope overload distortion occurrence:
Slope overload distortion will occur if
Expression for Signal to Quantization Noise power ratio for Delta
Modulation:
UNIT-2
Digital Modulation provides more information capacity, high data security, and quicker system
availability with high-quality communication. Hence, digital modulation techniques are in greater
demand for their capacity to convey larger amounts of data than analog techniques.
There are many types of digital modulation techniques and we can even use a combination of these
techniques as well. In this chapter, we will be discussing the most prominent digital modulation
techniques.
If the information signal is digital and the amplitude (V) of the carrier is varied proportional to
the information signal, a digitally modulated signal called amplitude shift keying (ASK) is
produced.
If the frequency (f) is varied proportional to the information signal, frequency shift keying (FSK) is
produced, and if the phase of the carrier (θ) is varied proportional to the information signal,
phase shift keying (PSK) is produced. If both the amplitude and the phase are varied proportional to
the information signal, quadrature amplitude modulation (QAM) results. ASK, FSK, PSK, and
QAM are all forms of digital modulation.
Amplitude Shift Keying (ASK) is a type of Amplitude Modulation which represents the binary
data in the form of variations in the amplitude of a signal.
Following is the diagram for ASK modulated waveform along with its input.
Any modulated signal has a high-frequency carrier. When the binary signal is ASK modulated,
the output is zero for a LOW input and is the carrier for a HIGH input.
Mathematically, amplitude-shift keying is
vask(t) = [1 + vm(t)] [A/2 cos(ωct)]    (2.12)
where vask(t) = amplitude-shift keyed wave, vm(t) = digital modulating signal (volts),
A/2 = unmodulated carrier amplitude (volts), and ωc = analog carrier radian frequency
(radians per second, 2πfct).
In Equation 2.12, the modulating signal [vm(t)] is a normalized binary waveform, where
+1 V = logic 1 and −1 V = logic 0. Therefore, for a logic 1 input, vm(t) = +1 V, and
Equation 2.12 reduces to vask(t) = A cos(ωct); for a logic 0 input, vm(t) = −1 V, and
Equation 2.12 reduces to vask(t) = 0.
Thus, the modulated wave vask(t) is either A cos(ωct) or 0. Hence, the carrier is either "on" or
"off," which is why amplitude-shift keying is sometimes referred to as on-off keying (OOK).
It can be seen that for every change in the input binary data stream, there is one change in the ASK
waveform, and the time of one bit (tb) equals the time of one analog signaling element (ts).
B = fb/1 = fb
baud = fb/1 = fb
Example:
Determine the baud and minimum bandwidth necessary to pass a 10 kbps binary signal using
amplitude shift keying.
Solution: For ASK, N = 1, and the baud and minimum bandwidth are determined from
Equations 2.11 and 2.10, respectively:
B = 10,000 / 1 = 10,000 Hz
baud = 10,000 / 1 = 10,000
The use of amplitude-modulated analog carriers to transport digital information is a relatively low-
quality, low-cost type of digital modulation and, therefore, is seldom used except for very low-
speed telemetry circuits.
ASK TRANSMITTER:
The input binary sequence is applied to the product modulator. The product modulator amplitude
modulates the sinusoidal carrier: it passes the carrier when the input bit is '1' and blocks the
carrier when the input bit is '0'.
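This pass/block behaviour can be sketched numerically; the carrier frequency, bit rate, and sampling rate below are illustrative values only.

```python
import numpy as np

# Sketch of the product-modulator view of ASK/OOK described above: the
# carrier is passed for a '1' bit and blocked for a '0'. Parameters are
# illustrative, not from the notes.

fc, fb, fs = 8.0, 1.0, 100.0         # carrier (Hz), bit rate (bps), sample rate (Hz)
bits = [1, 0, 1]
spb = int(fs / fb)                   # samples per bit

t = np.arange(len(bits) * spb) / fs
carrier = np.cos(2 * np.pi * fc * t)
nrz = np.repeat(bits, spb)           # unipolar baseband waveform
ook = nrz * carrier                  # product modulator output

print(np.max(np.abs(ook[spb:2 * spb])))   # 0.0: carrier blocked for the '0' bit
```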
FREQUENCY SHIFT KEYING
The frequency of the output signal will be either high or low, depending upon the input data
applied.
Frequency Shift Keying (FSK) is the digital modulation technique in which the frequency of the
carrier signal varies according to the discrete digital changes. FSK is a scheme of frequency
modulation.
Following is the diagram for FSK modulated waveform along with its input.
The output of an FSK modulated wave is high in frequency for a binary HIGH input and low in
frequency for a binary LOW input. The frequencies representing binary 1 and 0 are called the
mark and space frequencies.
where
From Equation 2.13, it can be seen that the peak shift in the carrier frequency (Δf) is proportional to
the amplitude of the binary input signal (vm[t]), and the direction of the shift is determined by the
polarity.
The modulating signal is a normalized binary waveform where a logic 1 = +1 V and a logic 0 = −1 V.
Thus, for a logic 1 input, vm(t) = +1, and Equation 2.13 can be rewritten as
With binary FSK, the carrier center frequency (fc) is shifted (deviated) up and down in the
frequency domain by the binary input signal as shown in Figure 2-3.
As the binary input signal changes from a logic 0 to a logic 1 and vice versa, the output frequency
shifts between two frequencies: a mark, or logic 1 frequency (fm), and a space, or logic 0 frequency
(fs). The mark and space frequencies are separated from the carrier frequency by the peak frequency
deviation (Δf) and from each other by 2Δf.
|fm – fs| = absolute difference between the mark and space frequencies (hertz)
Figure 2-4a shows in the time domain the binary input to an FSK modulator and the corresponding
FSK output.
When the binary input (fb) changes from a logic 1 to a logic 0 and vice versa, the FSK output
frequency shifts from a mark ( fm) to a space (fs) frequency and vice versa.
In Figure 2-4a, the mark frequency is the higher frequency (fc + Δf) and the space frequency is the
lower frequency (fc − Δf), although this relationship could be just the opposite.
Figure 2-4b shows the truth table for a binary FSK modulator. The truth table shows the input and
output possibilities for a given digital modulation scheme.
FSK Bit Rate, Baud, and Bandwidth
In Figure 2-4a, it can be seen that the time of one bit (tb) is the same as the time the FSK output is at
a mark or space frequency (ts). Thus, the bit time equals the time of an FSK signaling element, and
the bit rate equals the baud.
The baud for binary FSK can also be determined by substituting N = 1 in Equation 2.11:
baud = fb / 1 = fb
The minimum bandwidth for FSK is given as
B = |(fs + fb) − (fm − fb)| = |fs − fm| + 2fb
where
B = minimum Nyquist bandwidth (hertz)
Δf = frequency deviation = |fm − fs|/2 (hertz)
fb = input bit rate (bps)
Example 2-2
Determine (a) the peak frequency deviation, (b) minimum bandwidth, and (c) baud for a binary
FSK signal with a mark frequency of 49 kHz, a space frequency of 51 kHz, and an input bit rate of
2 kbps.
Solution
a. The peak frequency deviation is Δf = |49 kHz − 51 kHz| / 2 = 1 kHz
b. The minimum bandwidth is B = 2(Δf + fb) = 2(1000 + 2000) = 6000 Hz
c. For FSK, N = 1, and the baud is determined from Equation 2.11 as
baud = 2000 / 1 = 2000
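The example's numbers can be checked with a few lines, using the definitions above (Δf = |fm − fs|/2, B = 2(Δf + fb), baud = fb/N with N = 1):

```python
# Quick numerical check of Example 2-2 using the FSK formulas stated above.

f_mark, f_space, fb = 49e3, 51e3, 2e3   # mark (Hz), space (Hz), bit rate (bps)

delta_f = abs(f_mark - f_space) / 2     # peak frequency deviation
B = 2 * (delta_f + fb)                  # minimum bandwidth
baud = fb / 1                           # N = 1 for binary FSK

print(delta_f, B, baud)                 # 1000.0 6000.0 2000.0
```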
FSK TRANSMITTER:
Figure 2-6 shows a simplified binary FSK modulator, which is very similar to a conventional FM
modulator and is very often a voltage-controlled oscillator (VCO). The center frequency (fc) is
chosen such that it falls halfway between the mark and space frequencies.
A logic 1 input shifts the VCO output to the mark frequency, and a logic 0 input shifts the VCO
output to the space frequency. Consequently, as the binary input signal changes back and forth
between logic 1 and logic 0 conditions, the VCO output shifts or deviates back and forth between
the mark and space frequencies.
With the sweep mode of modulation, the frequency deviation is expressed mathematically as
Δf = vm(t)k1    (2-19)
Figure 2-8 shows the block diagram for a coherent FSK receiver. The incoming FSK signal is
multiplied by a recovered carrier signal that has the exact same frequency and phase as the
transmitter reference.
However, the two transmitted frequencies (the mark and space frequencies) are not generally
continuous; it is not practical to reproduce a local reference that is coherent with both of them.
Consequently, coherent FSK detection is seldom used.
PHASE SHIFT KEYING:
The phase of the output signal gets shifted depending upon the input. These are mainly of two
types, namely BPSK and QPSK, according to the number of phase shifts. The other one is DPSK
which changes the phase according to the previous value.
Phase Shift Keying (PSK) is the digital modulation technique in which the phase of the carrier
signal is changed by varying the sine and cosine inputs at a particular time. PSK technique is widely
used for wireless LANs, bio-metric, contactless operations, along with RFID and Bluetooth
communications.
PSK is of two types, depending upon the phases the signal gets shifted. They are −
BPSK is basically a DSB-SC (Double Sideband Suppressed Carrier) modulation scheme, for
message being the digital information.
Following is the image of BPSK Modulated output wave along with its input.
Binary Phase-Shift Keying
The simplest form of PSK is binary phase-shift keying (BPSK), where N = 1 and M = 2.
Therefore, with BPSK, two phases (2^1 = 2) are possible for the carrier. One phase represents a
logic 1, and the other phase represents a logic 0. As the input digital signal changes state (i.e., from
a 1 to a 0 or from a 0 to a 1), the phase of the output carrier shifts between two angles that are
separated by 180°.
Hence, other names for BPSK are phase reversal keying (PRK) and biphase modulation. BPSK
is a form of square-wave modulation of a continuous wave (CW) signal.
Figure 2-12 shows a simplified block diagram of a BPSK transmitter. The balanced modulator acts
as a phase reversing switch. Depending on the logic condition of the digital input, the carrier is
transferred to the output either in phase or 180° out of phase with the reference carrier oscillator.
Figure 2-13 shows the schematic diagram of a balanced ring modulator. The balanced modulator
has two inputs: a carrier that is in phase with the reference oscillator and the binary digital data. For
the balanced modulator to operate properly, the digital input voltage must be much greater than the
peak carrier voltage.
This ensures that the digital input controls the on/off state of diodes D1 to D4. If the binary input is
a logic 1(positive voltage), diodes D 1 and D2 are forward biased and on, while diodes D3 and D4
are reverse biased and off (Figure 2-13b). With the polarities shown, the carrier voltage is
developed across transformer T2 in phase with the carrier voltage across T1.
FIGURE 2-13 (a) Balanced ring modulator; (b) logic 1 input; (c) logic 0 input
FIGURE 2-14 BPSK modulator: (a) truth table; (b) phasor diagram; (c) constellation
diagram
BANDWIDTH CONSIDERATIONS OF BPSK:
In a BPSK modulator, the carrier input signal is multiplied by the binary data.
If +1 V is assigned to a logic 1 and −1 V is assigned to a logic 0, the input carrier (sin ωct) is
multiplied by either +1 or −1.
The output signal is either +1 sin ωct or −1 sin ωct; the first represents a signal that is in phase
with the reference oscillator, the latter a signal that is 180° out of phase with the reference
oscillator. Each time the input logic condition changes, the output phase changes.
Mathematically, the output of a BPSK modulator is proportional to
fa = maximum fundamental frequency of binary input (hertz)
fc = reference carrier frequency (hertz)
Solving for the trig identity for the product of two sine functions,
fc + fa fc + fa
-fc + f a
-(fc + fa) or
2fa
and because fa = fb / 2, where fb = input bit rate,
Figure 2-15 shows the output phase-versus-time relationship for a BPSK waveform. Logic 1 input
produces an analog output signal with a 0° phase angle, and a logic 0 input produces an analog
output signal with a 180° phase angle.
As the binary input shifts between a logic 1 and a logic 0 condition and vice versa, the phase of the
BPSK waveform shifts between 0° and 180°, respectively.
The time of one BPSK signaling element (ts) is equal to the time of one information bit (tb), which
indicates that the bit rate equals the baud.
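The phase-reversal view of BPSK can be verified numerically: multiplying the carrier by −1 produces exactly the 180°-shifted waveform. Frequencies below are illustrative.

```python
import numpy as np

# Sketch of the +/-1 multiplication view of BPSK: a logic 0 multiplies the
# carrier by -1, which is the same waveform as a 180-degree phase shift.

fc, fs = 4.0, 64.0
t = np.arange(int(fs)) / fs              # one second of samples
carrier = np.sin(2 * np.pi * fc * t)

bpsk_1 = +1 * carrier                    # logic 1: in phase
bpsk_0 = -1 * carrier                    # logic 0: phase reversed

# -sin(wc t) equals sin(wc t + 180 deg), confirming the phase-reversal view
shifted = np.sin(2 * np.pi * fc * t + np.pi)
print(np.allclose(bpsk_0, shifted))      # True
```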
Example:
For a BPSK modulator with a carrier frequency of 70 MHz and an input bit rate of 10 Mbps,
determine the maximum and minimum upper and lower side frequencies, draw the output spectrum,
determine the minimum Nyquist bandwidth, and calculate the baud.
Solution
fa = fb/2 = 10 Mbps/2 = 5 MHz, so the upper and lower side frequencies are 70 MHz ± 5 MHz,
i.e. 75 MHz and 65 MHz. Therefore, the output spectrum for the worst-case binary input
conditions extends from 65 MHz to 75 MHz, and the minimum Nyquist bandwidth (B) is
B = 75 MHz − 65 MHz = 10 MHz
with baud = fb = 10 megabaud.
BPSK receiver:
Figure 2-16 shows the block diagram of a BPSK receiver.
The input signal may be +sin ωct or −sin ωct. The coherent carrier recovery circuit detects and
regenerates a carrier signal that is both frequency and phase coherent with the original transmit
carrier.
The balanced modulator is a product detector; the output is the product of the two inputs (the BPSK
signal and the recovered carrier).
The low-pass filter (LPF) separates the recovered binary data from the complex demodulated signal.
The LPF has a cutoff frequency much lower than 2ωc and thus blocks the second harmonic of
the carrier and passes only the positive constant component. A positive voltage represents a
demodulated logic 1.
For a BPSK input signal of −sin ωct (logic 0), the output of the balanced modulator is
output = (−sin ωct)(sin ωct) = −sin²ωct
or
−sin²ωct = −0.5(1 − cos 2ωct) = −0.5 + 0.5 cos 2ωct
The 0.5 cos 2ωct term is filtered out, leaving
output = −0.5 V = logic 0
The output of the balanced modulator contains a negative voltage (−[1/2] V) and a cosine wave at
twice the carrier frequency (2ωct).
Again, the LPF blocks the second harmonic of the carrier and passes only the negative constant
component. A negative voltage represents a demodulated logic 0.
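The demodulation math above can be checked numerically: multiplying the received ±sin ωct by the recovered carrier and averaging (a crude stand-in for the ideal LPF) leaves the ±0.5 V DC term.

```python
import numpy as np

# Numerical check of coherent BPSK detection: the product of the received
# signal with the recovered carrier has a DC term of +0.5 (logic 1) or
# -0.5 (logic 0); averaging over whole carrier periods extracts it.

fc, fs = 5.0, 1000.0
t = np.arange(int(fs)) / fs
carrier = np.sin(2 * np.pi * fc * t)

for sign, logic in ((+1, 1), (-1, 0)):
    product = (sign * carrier) * carrier     # balanced modulator output
    dc = np.mean(product)                    # crude LPF: keep the DC component
    print(f"logic {logic}: DC = {dc:+.2f}")  # +0.50 for 1, -0.50 for 0
```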
If this kind of technique is extended further, PSK can be carried out with eight or sixteen phase
values as well, depending upon the requirement. The following figure represents the QPSK
waveform for two-bit inputs, showing the modulated result for different instances of binary inputs.
QPSK is a variation of BPSK, and it is also a DSB-SC (Double Sideband Suppressed Carrier)
modulation scheme, which sends two bits of digital information at a time, called dibits.
Instead of converting the digital bits into a serial stream one at a time, it converts them into
bit pairs. This halves the signaling rate on the channel, which frees capacity for other users.
QPSK transmitter.
A block diagram of a QPSK modulator is shown in Figure 2-17. Two bits (a dibit) are
clocked into the bit splitter. After both bits have been input serially, they are output
simultaneously in parallel.
The I bit modulates a carrier that is in phase with the reference oscillator (hence the name "I" for
"in-phase" channel), and the Q bit modulates a carrier that is 90° out of phase.
For a logic 1 = +1 V and a logic 0 = −1 V, two phases are possible at the output of the I balanced
modulator (+sin ωct and −sin ωct), and two phases are possible at the output of the Q balanced
modulator (+cos ωct and −cos ωct).
When the linear summer combines the two quadrature (90° out of phase) signals, there are four
possible resultant phasors given by these expressions: + sin ωct + cos ωct, + sin ωct - cos ωct, -sin
ωct + cos ωct, and -sin ωct - cos ωct.
Example:
For the QPSK modulator shown in Figure 2-17, construct the truthtable, phasor diagram, and
constellation diagram.
Solution
For a binary data input of Q = 0 and I = 0, the two inputs to the I balanced modulator are −1 and
sin ωct, and the two inputs to the Q balanced modulator are −1 and cos ωct.
I balanced modulator =(-1)(sin ωct) = -1 sin ωct
Q balanced modulator =(-1)(cos ωct) = -1 cos ωct and the output of the linear summer is
-1 cos ωct - 1 sin ωct = 1.414 sin(ωct - 135°)
For the remaining dibit codes (01, 10, and 11), the procedure is the same. The results are shown in
Figure 2-18a.
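The full truth table can be regenerated from the identity li·sin ωct + lq·cos ωct = √2·sin(ωct + φ) with φ = atan2(lq, li); a short sketch (our notation, matching the worked dibit above):

```python
import numpy as np

# Sketch regenerating the QPSK truth table derived above: each I/Q dibit
# maps to one of four phasors of equal amplitude sqrt(2) = 1.414.

for q, i in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    li = 1 if i else -1                   # I-channel level
    lq = 1 if q else -1                   # Q-channel level
    phi = np.degrees(np.arctan2(lq, li))  # phase of the summed phasor
    print(f"Q={q} I={i}: 1.414 sin(wct + ({phi:+.0f} deg))")
```

The Q = 0, I = 0 row reproduces 1.414 sin(ωct − 135°) found above; the other dibits give −45°, +135°, and +45°.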
FIGURE 2-18 QPSK modulator: (a) truth table; (b) phasor diagram; (c) constellation
diagram
In Figures 2-18b and c, it can be seen that with QPSK each of the four possible output phasors has
exactly the same amplitude. Therefore, the binary information must be encoded entirely in the
phase of the output signal.
From Figure 2-18b, it can be seen that the angular separation between any two adjacent phasors in
QPSK is 90°. Therefore, a QPSK signal can undergo almost a +45° or −45° shift in phase during
transmission and still retain the correct encoded information when demodulated at the receiver.
Figure 2-19 shows the output phase-versus-time relationship for a QPSK modulator.
With QPSK, because the input data are divided into two channels, the bit rate in either the I or the Q
channel is equal to one-half of the input data rate (fb/2), and the highest fundamental frequency in
either channel is one-half of that (fb/4).
QPSK RECEIVER:
The power splitter directs the input QPSK signal to the I and Q product detectors and the carrier
recovery circuit. The carrier recovery circuit reproduces the original transmit carrier oscillator
signal. The recovered carrier must be frequency and phase coherent with the transmit reference
carrier. The QPSK signal is demodulated in the I and Q product detectors, which generate the
original I and Q data bits. The outputs of the product detectors are fed to the bit combining circuit,
where they are converted from parallel I and Q data channels to a single binary output data stream.
The incoming QPSK signal may be any one of the four possible output phases shown in Figure 2-18.
To illustrate the demodulation process, let the incoming QPSK signal be −sin ωct + cos ωct.
Mathematically, the demodulation process is as follows.
FIGURE 2-21 QPSK receiver
The receive QPSK signal (-sin ωct + cos ωct) is one of the inputs to the I product detector. The
other input is the recovered carrier (sin ωct). The output of the I product detector is
Again, the receive QPSK signal (-sin ωct + cos ωct) is one of the inputs to the Q product detector.
The other input is the recovered carrier shifted 90° in phase (cos ωct). The output of the Q product
detector is
The demodulated I and Q bits (0 and 1, respectively) correspond to the constellation diagram and
truth table for the QPSK modulator shown in Figure 2-18.
It is seen from the above figure that if the data bit is LOW, i.e. 0, the phase of the signal is not
reversed but continues as it was. If the data bit is HIGH, i.e. 1, the phase of the signal is
reversed, as with NRZI (invert on 1, a form of differential encoding).
If we observe the above waveform, the HIGH state traces an 'M'-shaped segment in the
modulating signal and the LOW state a 'W'-shaped segment.
The word binary represents two bits. M simply represents a digit that corresponds to the number of
conditions, levels, or combinations possible for a given number of binary variables.
M-ary signaling is the type of digital modulation technique used for data transmission in which,
instead of one bit, two or more bits are transmitted at a time. As a single signaling element carries
multiple bits, the required channel bandwidth is reduced.
DBPSK TRANSMITTER:
Figure 2-37a shows a simplified block diagram of a differential binary phase-shift keying
(DBPSK) transmitter. An incoming information bit is XNORed with the preceding bit prior to
entering the BPSK modulator (balanced modulator).
For the first data bit, there is no preceding bit with which to compare it. Therefore, an initial
reference bit is assumed. Figure 2-37b shows the relationship between the input data, the XNOR
output data, and the phase at the output of the balanced modulator. If the initial reference bit is
assumed a logic 1, the output from the XNOR circuit is simply the complement of that shown.
In Figure 2-37b, the first data bit is XNORed with the reference bit. If they are the same, the XNOR
output is a logic 1; if they are different, the XNOR output is a logic 0. The balanced modulator
operates the same as a conventional BPSK modulator; a logic 1 produces +sin ωct at the output, and
a logic 0 produces −sin ωct at the output.
FIGURE 2-37 DBPSK modulator (a) block diagram (b) timing diagram
DBPSK RECEIVER:
Figure 2-38 shows the block diagram and timing sequence for a DBPSK receiver. The received
signal is delayed by one bit time, then compared with the next signaling element in the balanced
modulator. If they are the same, a logic 1 (+ voltage) is generated. If they are different, a logic 0
(− voltage) is generated. If the reference phase is incorrectly assumed, only the first demodulated
bit is in error. Differential encoding can be implemented with higher-than-binary digital modulation
schemes, although the differential algorithms are much more complicated than for DBPSK.
The primary advantage of DBPSK is the simplicity with which it can be implemented. With
DBPSK, no carrier recovery circuit is needed. A disadvantage of DBPSK is that it requires
between 1 dB and 3 dB more signal-to-noise ratio to achieve the same bit error rate as
absolute PSK.
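The XNOR encoder and the delay-and-compare decoder can be sketched together in a few lines; the initial reference bit (0 here) is an assumed value, as in the notes.

```python
# Sketch of DBPSK differential encoding/decoding as described above. The
# encoder XNORs each data bit with the previous encoded bit; the decoder
# XNORs each received bit with the previous one (delay and compare), so no
# carrier recovery is needed. Reference bit 0 is assumed at both ends.

def dbpsk_encode(data, ref=0):
    out, prev = [], ref
    for b in data:
        prev = 1 - (prev ^ b)        # XNOR with the previous encoded bit
        out.append(prev)
    return out

def dbpsk_decode(received, ref=0):
    out, prev = [], ref
    for r in received:
        out.append(1 - (prev ^ r))   # same as previous bit -> logic 1
        prev = r
    return out

data = [1, 0, 1, 1, 0]
print(dbpsk_encode(data))                         # [0, 1, 1, 1, 0]
print(dbpsk_decode(dbpsk_encode(data)) == data)   # True: round trip recovers data
```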
FIGURE 2-38 DBPSK demodulator: (a) block diagram; (b) timing sequence
The coherent demodulator for the coherent FSK signal falls in the general form of coherent
demodulators described in Appendix B. The demodulator can be implemented with two correlators
as shown in Figure 3.5, where the two reference signals are cos(2πf1t) and cos(2πf2t). They must
be synchronized with the received signal. The receiver is optimum in the sense that it minimizes the
error probability for equally likely binary signals. Even though the receiver is rigorously derived in
Appendix B, some heuristic explanation here may help in understanding its operation. When s1(t) is
transmitted, the upper correlator yields a signal l1 with a positive signal component and a noise
component, while the lower correlator output l2, due to the signals' orthogonality, has only a
noise component. Thus the output of the summer is most likely above zero, and the threshold
detector will most likely produce a 1. When s2(t) is transmitted, the opposite happens in the two
correlators and the threshold detector will most likely produce a 0. However, because the noise
amplitude ranges from −∞ to ∞, it may occasionally overpower the signal amplitude, and detection
errors will then occur. An alternative to Figure 3.5 is to use just one correlator with the reference
signal cos(2πf1t) − cos(2πf2t) (Figure 3.6). The correlator in Figure 3.6 can be replaced by a
matched filter that matches cos(2πf1t) − cos(2πf2t) (Figure 3.7). All implementations are
equivalent in terms of error performance (see Appendix B). Assuming an AWGN channel, the
received signal is
r(t) = si(t) + n(t), i = 1, 2
where n(t) is additive white Gaussian noise with zero mean and two-sided power spectral density
N0/2. From (B.33) the bit error probability for any equally likely binary signals is
Pb = Q( √[(E1 + E2 − 2ρ12) / (2N0)] )
where N0/2 is the two-sided power spectral density of the additive white Gaussian noise and ρ12 is
the correlation between the two signals. For Sunde's FSK signals E1 = E2 = Eb and ρ12 = 0
(orthogonal); thus the error probability is
Pb = Q( √(Eb/N0) )
where Eb = A²T/2 is the average bit energy of the FSK signal. The above Pb is plotted in Figure 3.8,
where the Pb of noncoherently demodulated FSK, whose expression will be given shortly, is also
plotted for comparison.
Figure: Pb of coherently and non-coherently demodulated FSK signal.
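The comparison in the figure can be sketched numerically. The coherent curve is Pb = Q(√(Eb/N0)) as derived above; for the noncoherent curve we assume the standard result Pb = ½·exp(−Eb/2N0), which the text says is given shortly.

```python
import math

# Sketch comparing the bit error probabilities of coherently and
# noncoherently demodulated orthogonal FSK. The noncoherent expression
# 0.5*exp(-Eb/2N0) is the standard result assumed here.

def q_func(x):
    """Gaussian Q-function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pb_coherent_fsk(eb_n0):
    return q_func(math.sqrt(eb_n0))          # Pb = Q(sqrt(Eb/N0))

def pb_noncoherent_fsk(eb_n0):
    return 0.5 * math.exp(-eb_n0 / 2)        # Pb = (1/2) exp(-Eb/2N0)

for db in (4, 8, 12):                        # Eb/N0 in dB
    eb_n0 = 10 ** (db / 10)
    print(db, pb_coherent_fsk(eb_n0), pb_noncoherent_fsk(eb_n0))
```

At every Eb/N0 the coherent receiver gives the lower error probability, which is the gap the figure illustrates.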
Coherently FSK signals can be noncoherently demodulated to avoid the carrier recovery.
Noncoherently generated FSK can only be noncoherently demodulated. We refer to both cases as
noncoherent FSK. In both cases the demodulation problem becomes a problem of detecting signals
with unknown phases. In Appendix B we have shown that the optimum receiver is a quadrature
receiver. It can be implemented using correlators or equivalently, matched filters. Here we assume
that the binary noncoherent FSK signals are equally likely and with equal energies. Under these
assumptions, the demodulator using correlators is shown in Figure 3.9. Again, like in the coherent
case, the optimality of the receiver has been rigorously proved (Appendix B). However, we can
easily understand its operation by the following heuristic argument. The received signal (ignoring
noise for the moment) with an unknown phase θ can be written as

s(t) = A cos(2πf1t + θ) = A cosθ cos2πf1t − A sinθ sin2πf1t
The signal consists of an in-phase component A cosθ cos2πf1t and a quadrature component
A sinθ sin2πf1t. Thus the signal is partially correlated with cos2πf1t and partially correlated with
sin2πf1t. Therefore we use two correlators to collect the signal energy in these two parts. The
outputs of the in-phase and quadrature correlators will be proportional to cosθ and sinθ,
respectively. Depending on the value of the unknown phase θ, these two outputs could take any
value between their negative and positive extremes. Fortunately, the squared sum of these two
outputs is independent of the unknown phase, since cos²θ + sin²θ = 1. This quantity is in fact the
mean value of the decision statistic li² when signal si(t) is transmitted and noise is taken into
consideration. When si(t) is not transmitted, the mean value of li² is 0. The comparator decides
which signal was sent by comparing the li². The matched filter equivalent of Figure 3.9 is
shown in Figure 3.10, which has the same error performance. For implementation simplicity we can
replace the matched filters by bandpass filters centered at f1 and f2, respectively (Figure 3.11).
However, if the bandpass filters are not matched to the FSK signals, degradation to
various extents will result. The bit error probability can be derived using the correlator demodulator
(Appendix B). Here we further assume that the FSK signals are orthogonal; then from Appendix B
the error probability is

Pb = (1/2) exp(−Eb / (2N0))
PART-2
DATA TRANSMISSION
Consider that a binary encoded signal consists of a time sequence of voltage levels +V or −V.
If there is a guard interval between the bits, the signal forms a sequence of positive and negative
pulses. In either case there is no particular interest in preserving the waveform of the signal after
reception. We are interested only in knowing, within each bit interval, whether the transmitted
voltage was +V or −V. With noise present, the received signal and noise together will yield sample
values generally different from ±V. In this case, what deduction shall we make from the sample
value concerning the transmitted bit?
PROBABILITY OF ERROR
Since the function of a data-transmission receiver is to distinguish the bit 1 from the bit 0
in the presence of noise, a most important characteristic is the probability that an error will be made
in such a determination. We now calculate this error probability Pe for the integrate-and-dump
receiver of Fig. 11.1-2.
The probability of error Pe, as given in Eq. (11.2-3), is plotted in Fig. 11.2-2. Note that Pe decreases
rapidly as Es/η increases. The maximum value of Pe is 1/2. Thus, even if the signal is entirely lost in
the noise so that any determination of the receiver is a sheer guess, the receiver cannot be wrong
more than half the time on the average.
In the receiver system of Fig. 11.1-2, the signal was passed through a filter (integrator), so that at the
sampling time the signal voltage might be emphasized in comparison with the noise voltage. We are
naturally led to ask whether the integrator is the optimum filter for the purpose of minimizing the
probability of error. We shall find that for the received signal contemplated in the system of
Fig. 11.1-2 the integrator is indeed the optimum filter. However, before returning specifically to the
integrator receiver, we first consider the optimum-filter problem in general.
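Eq. (11.2-3) is straightforward to evaluate; a small sketch of its limiting behaviour:

```python
from math import erfc, sqrt

def pe_integrate_and_dump(es_over_eta):
    """Pe = (1/2) erfc(sqrt(Es/eta)), Eq. (11.2-3), with Es = V^2 * T."""
    return 0.5 * erfc(sqrt(es_over_eta))

# Pe -> 1/2 as the signal vanishes: the receiver is reduced to pure guessing,
# and rapidly decreases as Es/eta grows.
print(pe_integrate_and_dump(0.0))
print(pe_integrate_and_dump(1.0))
print(pe_integrate_and_dump(4.0))
```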
We assume that the received signal is a binary waveform. One binary digit is represented by
a signal waveform S1(t) which persists for time T, while the other bit is represented by the
waveform S2(t) which also lasts for an interval T. For example, in transmission at baseband, as
shown in Fig. 11.1-2, S1(t) = +V and S2(t) = −V; for other modulation systems, different waveforms
are transmitted. For example, for PSK signaling, S1(t) = A cos ω0t and S2(t) = −A cos ω0t; while for
FSK, S1(t) = A cos(ω0 + Ω)t and S2(t) = A cos(ω0 − Ω)t.
Hence the probability of error is
In general, the impulse response of the matched filter consists of p(t) rotated about t = 0 and
then delayed long enough (i.e., a time T) to make the filter realizable. We may note in passing that
any additional delay that a filter might introduce would in no way interfere with the performance of
the filter, for both signal and noise would be delayed by the same amount, and at the sampling time
(which would need similarly to be delayed) the ratio of signal to noise would remain unaltered.
From Parseval’s theorem we have
COHERENT RECEPTION: CORRELATION:
(11.6-1)
(11.6-2)
where si(t) is either s1(t) or s2(t), and where τ is the constant of the integrator (i.e., the integrator
output is 1/τ times the integral of its input). We now compare these outputs with the matched filter
outputs.
Fig: 11.6-1 Coherent system of signal reception
If h(t) is the impulse response of the matched filter, then the output of the matched filter v0(t) can
be found using the convolution integral. We have
(11.6-3)
The limits on the integral have been changed to 0 and T since we are interested in the filter response
to a bit which extends only over that interval. Using Eq. (11.4-4), which gives h(t) for the matched
filter, we have
(11.6-4)
(11.6-5)
(11.6-7)
(11.6-8)
Thus s0(T) and n0(T), as calculated from Eqs. (11.6-1) and (11.6-2) for the correlation receiver, and
as calculated from Eqs. (11.6-7) and (11.6-8) for the matched filter receiver, are identical. Hence the
performances of the two systems are identical. The matched filter and the correlator are not simply
two distinct, independent techniques which happen to yield the same result. In fact, they are two
techniques of synthesizing the same optimum filter h(t).
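The identity can be checked numerically with a toy discrete-time example; the sampled pulse below is arbitrary and merely stands in for the bit waveform:

```python
import random

random.seed(1)
T = 8                                            # samples per bit interval
p = [1.0, 2.0, 3.0, 2.0, 1.0, 0.0, -1.0, -2.0]   # arbitrary sampled pulse
x = [pi + random.gauss(0.0, 0.5) for pi in p]    # received = pulse + noise

# Correlator: multiply the received signal by the local reference and integrate
correlator_out = sum(xi * pi for xi, pi in zip(x, p))

# Matched filter: h[n] = p[T-1-n]; sample the convolution output at n = T-1
h = p[::-1]
mf_out = sum(x[k] * h[T - 1 - k] for k in range(T))

assert abs(correlator_out - mf_out) < 1e-9       # identical, as the text states
```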
ERROR PROBABILITY OF BINARY PSK:
To realize a rule for making a decision in favor of symbol 1 or symbol 0, we partition the signal
space into two regions:
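For BPSK the two regions are the half-lines on either side of the origin of the signal space: decide symbol 1 when the received sample is positive, symbol 0 otherwise. This decision rule yields the standard result Pb = Q(√(2Eb/N0)). A small baseband-equivalent simulation sketch (all parameter choices below are illustrative):

```python
import random
from math import erfc, sqrt

def pb_bpsk_theory(eb_n0):
    """Pb = Q(sqrt(2*Eb/N0)) = 0.5*erfc(sqrt(Eb/N0)) for coherent BPSK."""
    return 0.5 * erfc(sqrt(eb_n0))

def pb_bpsk_sim(eb_n0, n_bits=200_000, seed=7):
    """Decision regions: decide symbol 1 when the sample falls above zero."""
    rng = random.Random(seed)
    sigma = sqrt((1.0 / eb_n0) / 2.0)   # take Eb = 1, noise variance N0/2
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        s = 1.0 if bit else -1.0        # transmitted point +-sqrt(Eb)
        r = s + rng.gauss(0.0, sigma)
        errors += (1 if r > 0 else 0) != bit
    return errors / n_bits

print(pb_bpsk_theory(4.0), pb_bpsk_sim(4.0))   # simulation tracks theory
```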
UNIT-III
INFORMATION THEORY & SOURCE CODING
Information Theory
There are two fundamentally different ways to transmit messages: via discrete signals
and via continuous signals. For example, the letters of the English alphabet are commonly
thought of as discrete signals.
Information sources
Definition:
The set of source symbols is called the source alphabet, and the elements of the set are
called the symbols or letters.
The number of possible answers ‘r’ should be linked to “information.”
“Information” should be additive in some sense.
We define the following measure of information:

I(x) = log_b (1 / P(x)) = −log_b P(x)

The base b of the logarithm only changes the units; it does not change the amount of
information described.
A discrete memoryless source (DMS) can be characterized by “the list of the symbols, the
probability assignment to these symbols, and the specification of the rate of generating these
symbols by the source”.
1. Information should be proportional to the uncertainty of an outcome.
2. Information contained in independent outcomes should add.
Scope of Information Theory
1. Determine the irreducible limit below which a signal cannot be compressed.
2. Deduce the ultimate transmission rate for reliable communication over a noisy channel.
3. Define Channel Capacity - the intrinsic ability of a channel to convey information.
Properties of Information
Entropy:
The entropy H(S) of a source is defined as the average information generated by a
discrete memoryless source.
Information content of a symbol:
Let us consider a discrete memoryless source (DMS) denoted by X and having the
alphabet {x1, x2, x3, …, xm}. The information content of the symbol xi, denoted by I(xi), is
defined as

I(xi) = log_b (1 / P(xi)) = −log_b P(xi)
Units of I(xi):
For two important and one unimportant special cases of b it has been agreed to use the
following names for these units:
b = 2 (log2): bit,
b = e (ln): nat,
b = 10 (log10): Hartley.
Conversion between units follows from the change-of-base rule, log2 a = ln a / ln 2.
Definition:
The information content of an individual symbol can fluctuate widely because of the
randomness involved in the selection of the symbols, so we use the average information
content per symbol, the entropy:

H(U) = E[I(U)] = Σ PU(u) · log_b (1 / PU(u))
where PU(·) denotes the probability mass function (PMF) of the RV U, and where
the support of PU is defined as

supp(PU) = {u : PU(u) > 0}.

We will usually neglect to mention the support when we sum over PU(u) · log_b PU(u), i.e., we
implicitly exclude all u with zero probability, PU(u) = 0.
It may be noted that for a binary source U which generates independent symbols 0 and 1
with equal probability, the source entropy is H(U) = 1 bit.
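The definition can be sketched directly; the equiprobable binary source mentioned above gives exactly 1 bit:

```python
from math import log2

def entropy(probs):
    """H = sum over the support of p * log2(1/p), in bits per symbol."""
    return sum(p * log2(1.0 / p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))    # binary equiprobable source: 1.0 bit
print(entropy([0.25] * 4))    # four equally likely symbols: 2.0 bits
print(entropy([1.0]))         # a certain outcome carries no information: 0.0
```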
Bounds on H(U):

0 ≤ H(U) ≤ log_b r

where r is the number of symbols in the source alphabet. To derive the upper bound we use a
trick that is quite common in information theory: we take the difference H(U) − log_b r and try
to show that it must be non-positive.
Equality in the upper bound can only be achieved if all symbols are equally likely, i.e., PU(u) = 1/r for every u.
Conditional Entropy
As with probabilities of random vectors, there is nothing really new about conditional
probabilities given that a particular event Y = y has occurred.
The conditional entropy (or conditional uncertainty) of the RV X given the event Y = y is
defined as

H(X | Y = y) = Σx PX|Y(x|y) · log_b (1 / PX|Y(x|y))

Note that the definition is identical to before, apart from everything being conditioned
on the event Y = y.
Note that the conditional entropy given the event Y = y is a function of y. Since Y is
also a RV, we can now average over all possible events Y = y according to the probabilities
of each event. This leads to the averaged conditional entropy H(X|Y).
Mutual Information
Although conditional entropy can tell us when two variables are completely
independent, it is not an adequate measure of dependence. A small value of H(Y|X) may
imply that X tells us a great deal about Y, or simply that H(Y) is small to begin with. Thus, we
measure dependence using mutual information:
I(X,Y) = H(Y) − H(Y|X)
I(X,Y) = H(X) − H(X|Y)
KL divergence measures the difference between two distributions. It is sometimes called the
relative entropy. It is always non-negative, and zero only when p = q; however, it is not a
distance because it is not symmetric.
In other words, mutual information is a measure of the difference between the joint
probability and the product of the individual probabilities. These two distributions are
equivalent only when X and Y are independent, and diverge as X and Y become more dependent.
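A small sketch of I(X;Y) computed from a joint PMF, using the equivalent form I(X;Y) = Σ p(x,y) log2[p(x,y)/(p(x)p(y))]:

```python
from math import log2

def mutual_information(joint):
    """I(X;Y) from a joint PMF given as a matrix joint[x][y]."""
    px = [sum(row) for row in joint]          # marginal of X
    py = [sum(col) for col in zip(*joint)]    # marginal of Y
    i = 0.0
    for x, row in enumerate(joint):
        for y, pxy in enumerate(row):
            if pxy > 0:
                i += pxy * log2(pxy / (px[x] * py[y]))
    return i

# Independent variables: joint equals the product of marginals -> I = 0
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))
# X = Y (noiseless binary channel) -> I(X;Y) = H(X) = 1 bit
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))
```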
Source coding
Coding theory is the study of the properties of codes and their respective fitness for
specific applications. Codes are used for data compression, cryptography, error correction,
and networking. Codes are studied by various scientific disciplines, such as
information theory, electrical engineering, mathematics, linguistics, and computer
science, for the purpose of designing efficient and reliable data transmission methods.
This typically involves the removal of redundancy and the correction or detection of
errors in the transmitted data.
The aim of source coding is to take the source data and make it smaller.
All source models in information theory may be viewed as random process or random
sequence models. Let us consider the example of a discrete memoryless source
(DMS), which is a simple random sequence model.
A DMS is a source whose output is a sequence of letters such that each letter is
independently selected from a fixed alphabet consisting of K letters, say a1, a2,
…, aK. The letters in the source output sequence are assumed to be random
and statistically independent.
Let us consider a source with four letters a1, a2, a3 and a4, with P(a1) = 0.5,
P(a2) = 0.25, P(a3) = 0.13 and P(a4) = 0.12. Let us decide to go for binary coding of these
four source letters. While this can be done in multiple ways, two encoded representations
are shown below:
Code Representation #1:
Code Representation #2:
It is easy to see that in method #1 the probability assignment of a source letter has not
been considered and all letters have been represented by two bits each. However, in
the second method only a1 has been encoded in one bit, a2 in two bits, and the
remaining two letters in three bits each. It is easy to see that the average number of bits
used per source letter is not the same for the two methods (2 bits per letter for method #1
and less than 2 bits per letter for method #2). So, if we consider the issue of encoding
a long sequence of letters, we have to transmit fewer bits when following the second
method. This is an important aspect of source coding in general. At this point, let us note:
a) We observe that assigning a small number of bits to more probable letters and
a larger number of bits to less probable letters (or symbols) may lead to an
efficient source encoding scheme.
b) However, one has to take additional care while transmitting the encoded letters. A
careful inspection of the binary representation of the symbols in method #2 reveals
that it may lead to confusion (at the decoder end) in deciding where the binary
representation of one letter ends and the next begins.
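The average-length comparison can be sketched as follows. The two code tables are not reproduced in these notes, so the variable-length assignment below is a hypothetical stand-in with the stated lengths (one, two, three and three bits); the fixed-length table is the obvious one:

```python
probs = {"a1": 0.5, "a2": 0.25, "a3": 0.13, "a4": 0.12}

# Method #1: fixed length, two bits per letter
fixed = {"a1": "00", "a2": "01", "a3": "10", "a4": "11"}
# Method #2: a hypothetical variable-length assignment (lengths 1, 2, 3, 3)
variable = {"a1": "0", "a2": "10", "a3": "110", "a4": "111"}

def avg_length(code):
    """Average bits per source letter: L = sum over letters of p(a)*len(code(a))."""
    return sum(probs[a] * len(cw) for a, cw in code.items())

print(avg_length(fixed))     # 2.0 bits per letter
print(avg_length(variable))  # 0.5*1 + 0.25*2 + 0.13*3 + 0.12*3 = 1.75 bits per letter
```

Note that this hypothetical table happens to be prefix-free; the notes' method #2 need not be, which is exactly the decoding ambiguity raised in point b) above.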
Shannon-Fano Algorithm:
A Shannon–Fano tree is built according to a specification designed to define an
effective code table. The actual algorithm is simple:
1. For a given list of symbols, develop a corresponding list of probabilities or frequency
counts so that each symbol's relative frequency of occurrence is known.
2. Sort the list of symbols according to frequency, with the most frequently occurring
symbols at the left and the least common at the right.
3. Divide the list into two parts, with the total frequency counts of the left part being as
close to the total of the right as possible.
4. The left part of the list is assigned the binary digit 0, and the right part is assigned
the digit 1. This means that the codes for the symbols in the first part will all start
with 0, and the codes in the second part will all start with 1.
5. Recursively apply steps 3 and 4 to each of the two halves, subdividing groups
and adding bits to the codes until each symbol has become a corresponding code leaf
on the tree.
Example:
The source of information A generates the symbols {A0, A1, A2, A3 and A4} with the
corresponding probabilities {0.4, 0.3, 0.15, 0.1 and 0.05}. Encoding the source symbols
using a binary encoder and a Shannon-Fano encoder gives
Shannon-Fano coding is a top-down approach. Constructing the code tree, we get
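The recursive split can be sketched as follows for the five-symbol source above (probabilities 0.4, 0.3, 0.15, 0.1, 0.05):

```python
def shannon_fano(symbols):
    """symbols: list of (name, probability), sorted in descending probability."""
    codes = {name: "" for name, _ in symbols}

    def split(group):
        if len(group) < 2:
            return
        total = sum(p for _, p in group)
        run, best_i, best_diff = 0.0, 0, float("inf")
        for i in range(len(group) - 1):          # find the most balanced split
            run += group[i][1]
            diff = abs(run - (total - run))
            if diff < best_diff:
                best_diff, best_i = diff, i
        left, right = group[:best_i + 1], group[best_i + 1:]
        for name, _ in left:
            codes[name] += "0"                   # left half gets a 0
        for name, _ in right:
            codes[name] += "1"                   # right half gets a 1
        split(left)
        split(right)

    split(symbols)
    return codes

src = [("A0", 0.4), ("A1", 0.3), ("A2", 0.15), ("A3", 0.1), ("A4", 0.05)]
print(shannon_fano(src))
```

This yields codes 0, 10, 110, 1110, 1111 with an average length of 2.05 bits/symbol, close to the source entropy of about 2.01 bits/symbol.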
Binary Huffman Coding (an optimum variable-length source coding scheme)
In binary Huffman coding each source letter is converted into a binary code
word. It is a prefix condition code ensuring minimum average length per source letter in
bits.
Let the source letters a1, a2, …, aK have probabilities P(a1), P(a2), …, P(aK),
and let us assume that P(a1) ≥ P(a2) ≥ P(a3) ≥ … ≥ P(aK).
We now consider a simple example to illustrate the steps for Huffman coding.
Example: Let us consider a discrete memoryless source with six letters.
(1) Arrange the letters in descending order of their probability (here they are
already so arranged).
(2) Consider the last two probabilities and tie them together. Assign, say, 0
to the last digit of the representation for the least probable letter (a6) and 1 to the last
digit of the representation for the second least probable letter (a5). That is, assign ‘1’
to the upper arm of the tree and ‘0’ to the lower arm.
(3) Now, add the two probabilities and imagine a new letter, say b1, substituting for a6
and a5, so P(b1) = 0.2. Check whether a4 and b1 are now the two least likely letters. If not,
reorder the letters as per step (1) and add the probabilities of the two least likely letters.
For our example, this leads to:
P(a1) = 0.3, P(a2) = 0.2, P(b1) = 0.2, P(a3) = 0.15 and P(a4) = 0.15
(4) Now go back to step (2) and start with the reduced ensemble consisting of a1, a2, b1,
a3 and a4. Continue until the first digits of the most reduced ensemble of two letters
have been assigned a ‘1’ and a ‘0’.
Going back to step (2) again: P(a1) = 0.3, P(b2) = 0.3, P(a2) = 0.2 and P(b1) = 0.2.
Now we consider the last two probabilities:
(6) Now, read the code tree inward, starting from the root, and construct the
code words. The first digit of a code word appears first while reading the code tree
inward.
Hence, the final representation is: a1=11, a2=01, a3=101, a4=100, a5=001, a6=000.
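The merge procedure can be sketched with a priority queue. The six probabilities are taken as {0.3, 0.2, 0.15, 0.15, 0.1, 0.1} (the last two inferred from P(b1) = 0.2 above); only the code-word lengths are tracked, and they match the lengths of the final representation:

```python
import heapq
from math import log2

def huffman_lengths(probs):
    """Return code-word length per letter, built with a min-heap of subtrees."""
    # Heap items: (probability, unique tiebreak, letters in this subtree)
    heap = [(p, i, [name]) for i, (name, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    lengths = {name: 0 for name in probs}
    tiebreak = len(heap)
    while len(heap) > 1:
        p1, _, g1 = heapq.heappop(heap)      # two least likely subtrees
        p2, _, g2 = heapq.heappop(heap)
        for name in g1 + g2:                 # every merge adds one code digit
            lengths[name] += 1
        heapq.heappush(heap, (p1 + p2, tiebreak, g1 + g2))
        tiebreak += 1
    return lengths

probs = {"a1": 0.3, "a2": 0.2, "a3": 0.15, "a4": 0.15, "a5": 0.1, "a6": 0.1}
lengths = huffman_lengths(probs)
avg = sum(probs[a] * lengths[a] for a in probs)
ent = sum(p * log2(1 / p) for p in probs.values())
print(lengths, avg, round(ent, 3))
```

The average length is 2.5 bits/letter, slightly above the source entropy of about 2.47 bits/symbol (quoted as 2.465 in these notes), as Shannon's theorem below requires.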
A few observations on the preceding example:
Note that the entropy of the source is H(X) = 2.465 bits/symbol. The average length
per source letter after Huffman coding is a little more than, but close to, the source
entropy. In fact, the following celebrated theorem due to C. E. Shannon sets the
limiting value of the average length of code words from a DMS.
Shannon–Hartley theorem
In information theory, the Shannon–Hartley theorem gives the maximum rate at which
information can be transmitted over a communications channel of a specified bandwidth in
the presence of noise. It is an application of the noisy-channel coding theorem to the
archetypal case of a continuous-time analog communications channel subject to Gaussian
noise. The theorem establishes Shannon's channel capacity for such a communication link, a
bound on the maximum amount of error-free information per time unit that can be transmitted
with a specified bandwidth in the presence of the noise interference, assuming that the signal
power is bounded, and that the Gaussian noise process is characterized by a known power or
power spectral density.
The law is named after Claude Shannon and Ralph Hartley.
The theory behind designing and analyzing channel codes is called Shannon's noisy
channel coding theorem. It puts an upper limit on the amount of information you can
send over a noisy channel using a perfect channel code. This is given by the following
equation:

C = B log2(1 + SNR)

where C is the upper bound on the capacity of the channel (bits/s), B is the
bandwidth of the channel (Hz) and SNR is the signal-to-noise ratio (unitless).
Bandwidth-S/N Tradeoff
The expression for the channel capacity of the Gaussian channel makes intuitive
sense: as S/N increases, one can increase the information rate while still preventing errors
due to noise.
Thus we may trade off bandwidth for SNR. For example, if S/N = 7 and B = 4 kHz,
then the channel capacity is C = 12 × 10^3 bits/s. If the SNR increases to S/N = 15 and B
is decreased to 3 kHz, the channel capacity remains the same. However, as B tends to
infinity, the channel capacity does not become infinite since, with an increase in bandwidth,
the noise power also increases. If the noise power spectral density is η/2, then the total
noise power is N = ηB, so the Shannon-Hartley law becomes

C = B log2(1 + S/(ηB)), which tends to (S/η) log2 e ≈ 1.44 S/η as B → ∞.
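The bandwidth/SNR trade-off in the example above can be checked directly:

```python
from math import log2

def capacity(bandwidth_hz, snr):
    """Shannon-Hartley: C = B * log2(1 + S/N), in bits/s."""
    return bandwidth_hz * log2(1.0 + snr)

print(capacity(4000, 7))    # S/N = 7,  B = 4 kHz
print(capacity(3000, 15))   # S/N = 15, B = 3 kHz: the same capacity
```

Both calls return 12000 bits/s: halving the SNR penalty by widening the band (or vice versa) leaves the capacity unchanged.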
UNIT IV
Linear Block Codes
Introduction
The signal to noise power ratio determines the probability of error of the modulation
scheme. Errors are introduced into the data when it passes through the channel: the channel
noise interferes with the signal and the signal power is reduced. For a given signal to noise ratio,
the error probability can be reduced further by using coding techniques. Equivalently, coding
techniques reduce the signal to noise power ratio required for a fixed probability of error.
For a block of k message bits, (n−k) parity bits or check bits are added. Hence the
total number of bits at the output of the channel encoder is n. Such codes are called (n, k) block
codes. The figure below illustrates this concept.
Figure: a block of k message bits plus (n−k) check bits forming an n-bit code word.
Types are
Systematic codes:
In a systematic block code, the message bits appear at the beginning of the code
word: the message appears first and the check bits are then transmitted in a block. This type of
code is called a systematic code.
Nonsystematic codes:
In a nonsystematic block code it is not possible to identify the message bits and
check bits: they are mixed in the block.
We consider binary codes, in which all the transmitted digits are binary.
A code is linear if the sum of any two code vectors produces another code vector.
This implies that any code vector can be expressed as a linear combination of other code
vectors. Consider a particular code vector consisting of the message bits m1, m2, m3, …, mk and
the check bits c1, c2, c3, …, cq. This code vector can be written as
X = (m1, m2, m3, …, mk, c1, c2, c3, …, cq)
Here q = n − k, so that
X = (M | C)
The main aim of a linear block code is to generate the check bits, and these check bits are
mainly used for error detection and correction.
Example :
The (7, 4) linear code has the following matrix as a generator matrix
Let u = (u0, u1, …, uk−1) be the message to be encoded. The corresponding code word
is
v = u · G
The n − k equations given by the above equation are called the parity-check equations of the
code.
Let u = (u0, u1, u2, u3) be the message to be encoded and v = (v0, v1, v2, v3, v4, v5, v6) be
the corresponding code word.
Solution :
If the generator matrix of an (n, k) linear code is in systematic form, the parity-check
matrix may take the following form
Figure: Encoding Circuit
For a block of k = 4 message bits, (n − k) parity bits or check bits are added. Hence
the total number of bits at the output of the channel encoder is n = 7. The encoding circuit for the
(7, 4) systematic code is shown in the figure.
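The matrix form of the encoding can be sketched in a few lines. The generator matrix of the example appears in a figure that is not reproduced here, so the sketch below uses one commonly quoted systematic (7, 4) Hamming generator (an assumption), with code words in (parity, message) order:

```python
# Assumed generator matrix G = [P | I4] for a systematic (7,4) Hamming code
G = [
    [1, 1, 0, 1, 0, 0, 0],
    [0, 1, 1, 0, 1, 0, 0],
    [1, 1, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 0, 1],
]

def encode(u):
    """v = u . G over GF(2): each code digit is a mod-2 sum of message digits."""
    return [sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

u = [1, 0, 1, 1]
v = encode(u)
print(v)
assert v[3:] == u   # systematic: the last four digits reproduce the message
```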
Let v = (v0, v1, …, vn−1) be a code word that was transmitted over a noisy channel, and let
r = (r0, r1, …, rn−1) be the received vector at the output of the channel.
where
e = r + v = (e0, e1, …, en−1)
is an n-tuple called the error vector (or error pattern), with
ei = 1 for ri ≠ vi
ei = 0 for ri = vi
Upon receiving r, the decoder must first determine whether r contains transmission
errors. If the presence of errors is detected, the decoder will take action to locate and correct
the errors (FEC), or request a retransmission of v (ARQ).
When r is received, the decoder computes the following (n − k)-tuple, called the syndrome:
s = r · HT
s = (s0, s1, …, sn−k−1)
The syndrome is not a function of the transmitted code word but only of the error
pattern, so we can construct a table of all correctable error patterns with their corresponding
syndromes.
s = 0 if and only if r is a code word, in which case the receiver accepts r as the
transmitted code word. s ≠ 0 if and only if r is not a code word, in which case the presence
of errors has been detected. When the error pattern e is identical to a nonzero code word (i.e.,
r contains errors but s = r · HT = 0), error patterns of this kind are called undetectable error
patterns. Since there are 2^k − 1 nonzero code words, there are 2^k − 1 undetectable error
patterns. The syndrome digits are as follows:
s0 = r0 + rn−k p00 + rn−k+1 p10 + ··· + rn−1 pk−1,0
s1 = r1 + rn−k p01 + rn−k+1 p11 + ··· + rn−1 pk−1,1
⋮
sn−k−1 = rn−k−1 + rn−k p0,n−k−1 + rn−k+1 p1,n−k−1 + ··· + rn−1 pk−1,n−k−1
The syndrome s is the vector sum of the received parity digits (r0, r1, …, rn−k−1) and the parity-check
digits recomputed from the received information digits (rn−k, rn−k+1, …, rn−1).
The below figure shows the syndrome circuit for a linear systematic (n, k) code.
Figure: Syndrome Circuit
If the minimum distance of a block code C is dmin, any two distinct code vectors of C
differ in at least dmin places. A block code with minimum distance dmin is capable of detecting
all error patterns of dmin − 1 or fewer errors.
However, it cannot detect all error patterns of dmin errors, because there exists at least
one pair of code vectors that differ in dmin places, and there is an error pattern of dmin errors
that will carry one into the other. The random-error-detecting capability of a block code with
minimum distance dmin is therefore dmin − 1.
If an error pattern is not identical to a nonzero code word, the received vector r will
not be a code word and the syndrome will not be zero.
Hamming Codes:
These codes and their variations have been widely used for error control
in digital communication and data storage systems.
For any positive integer m ≥ 3, there exists a Hamming code with the following parameters:
Code length: n = 2^m − 1
Number of information symbols: k = 2^m − m − 1
Number of parity-check symbols: n − k = m
Error-correcting capability: t = 1 (dmin = 3)
The parity-check matrix H of this code consists of all the nonzero m-tuples as its columns,
giving 2^m − 1 columns; in systematic form H = [Im Q]. The corresponding generator matrix is
G = [QT I(2^m−m−1)]
where QT is the transpose of Q and I(2^m−m−1) is a (2^m − m − 1) × (2^m − m − 1)
identity matrix.
Since the columns of H are nonzero and distinct, no two columns add to zero. Since H
consists of all the nonzero m-tuples as its columns, the vector sum of any two columns, say hi
and hj, must also be a column in H, say hl, so that hi + hj + hl = 0. The minimum distance of a
Hamming code is therefore exactly 3.
Using H' as a parity-check matrix, a shortened Hamming code can be obtained with
the following parameters :
Code length: n = 2^m − l − 1
Number of information symbols: k = 2^m − m − l − 1
Number of parity-check symbols: n − k = m
Minimum distance: dmin ≥ 3
When a single error occurs during the transmission of a code vector, the resulting
syndrome is nonzero and contains an odd number of 1's (e · H′T corresponds to a column
in H′). When double errors occur, the syndrome is nonzero but contains an even number of
1's.
Decoding can be accomplished in the following manner:
i) If the syndrome s is zero, we assume that no error occurred.
ii) If s is nonzero and contains an odd number of 1's, we assume that a single error
occurred. The error pattern of a single error that corresponds to s is added to the received
vector for error correction.
iii) If s is nonzero and contains an even number of 1's, an uncorrectable error
pattern has been detected.
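Syndrome-table decoding of a single error can be sketched as follows. The odd/even-weight test above refers to the modified matrix H′; the sketch below uses plain syndrome lookup for an assumed (7, 4) Hamming parity-check matrix H, where the syndrome of a single error equals the column of H at the error position:

```python
# Assumed parity-check matrix H = [I3 | P^T] for a (7,4) Hamming code
H = [
    [1, 0, 0, 1, 0, 1, 1],
    [0, 1, 0, 1, 1, 1, 0],
    [0, 0, 1, 0, 1, 1, 1],
]

def syndrome(r):
    """s = r . H^T over GF(2); s = 0 iff r is a code word."""
    return [sum(r[j] * H[i][j] for j in range(7)) % 2 for i in range(3)]

def correct_single_error(r):
    """If s is nonzero, flip the position whose H column equals s."""
    s = syndrome(r)
    if any(s):
        cols = [[H[i][j] for i in range(3)] for j in range(7)]
        r = r[:]
        r[cols.index(s)] ^= 1
    return r

v = [1, 0, 0, 1, 0, 1, 1]          # a valid code word of this code
r = v[:]; r[5] ^= 1                # single transmission error at position 5
assert syndrome(v) == [0, 0, 0]
assert correct_single_error(r) == v
```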
Binary Cyclic codes:
Cyclic codes are the sub class of linear block codes.
Definition:
A linear code is called a cyclic code if every cyclic shift of a code vector produces
another code vector.
Linearity: the sum of any two code words is also a valid code word, X1 + X2 = X3.
Cyclic property: every cyclic shift of a valid code vector produces another valid code vector.
Here xn−1, xn−2, …, x1, x0 represent the individual bits of the code vector X.
If this code vector is cyclically shifted to the left, one cyclic shift of X gives
X′ = {xn−2, xn−3, …, x1, x0, xn−1}
The code words can be represented by polynomials. For example, consider the n-bit code
word X = {xn−1, xn−2, …, x1, x0}.
This code word can be represented by a polynomial of degree less than or equal to (n−1),
i.e.,
X(p) = xn−1 p^(n−1) + xn−2 p^(n−2) + … + x1 p + x0
p^(n−1) – MSB
p^0 – LSB
X(p) = M(p)G(p)
For (n, k) cyclic codes, q = n − k represents the number of parity bits.
If M1, M2, M3, … are the other message vectors, then the corresponding
code vectors can be calculated as Xi(p) = Mi(p)G(p).
Generator and Parity Check Matrices of cyclic codes:
Since cyclic codes are a subclass of linear block codes, generator and parity-check matrices
can also be defined for cyclic codes.
G = [Ik : Pk×q]k×n
The t-th row of this matrix is represented in polynomial form as follows,
where t = 1, 2, 3, …, k.
Let us divide p^(n−t) by the generator polynomial G(p), and express the result of this
division in terms of a quotient and a remainder, i.e.,

p^(n−t) / G(p) = Quotient + Remainder / G(p)

Here the remainder will be a polynomial of degree less than q, since the degree of G(p) is q.
With Quotient = Qt(p) and Remainder = Rt(p),

p^(n−t) / G(p) = Qt(p) + Rt(p) / G(p)
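This division is ordinary polynomial long division with GF(2) (XOR) arithmetic. A sketch, assuming the common (7, 4) generator G(p) = p^3 + p + 1 (the notes do not fix a particular G(p) here):

```python
def mod2_poly_rem(dividend, divisor):
    """Remainder of GF(2) polynomial division; polynomials are integer bit
    masks (bit i is the coefficient of p^i)."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        shift = dividend.bit_length() - dlen
        dividend ^= divisor << shift     # subtract (XOR) the shifted divisor
    return dividend

g = 0b1011               # assumed generator G(p) = p^3 + p + 1, so q = 3
n = 7
for t in range(1, 5):    # rows t = 1..k of the generator matrix
    r = mod2_poly_rem(1 << (n - t), g)   # Rt(p) = remainder of p^(n-t) / G(p)
    print(t, bin(r))
```

Each remainder has degree below q = 3, as stated above, and supplies the parity portion Pk×q of the row.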
The feedback switch is first closed and the output switch is connected to the message input.
All the shift registers are initialized to the zero state. The k message bits are shifted to the
transmitter as well as into the registers.
After the shift of the k message bits, the registers contain the q check bits. The feedback
switch is now opened and the output switch is connected to the check-bit position. With every
shift, the check bits are then shifted to the transmitter.
The block diagram performs the division operation and generates the remainder. The
remainder is stored in the shift register after all message bits are shifted out.
During transmission of cyclic codes, too, some errors may occur, and syndrome decoding can
be used to correct them.
If E represents the error vector, then the correct code vector can be obtained as
X = Y + E, or equivalently Y = X + E (the two are the same under mod-2 addition).
Y(p) = X(p) + E(p)
X(p) = M(p)G(p)
If Y(p) = X(p), then dividing by G(p),

X(p) / G(p) = Quotient + Remainder / G(p)

leaves zero remainder, since X(p) = M(p)G(p). For the received polynomial in general,

Y(p) / G(p) = Q(p) + R(p) / G(p)
Y(p)=Q(p)G(p) + R(p)
Clearly R(p) will be a polynomial of degree less than or equal to q − 1.
M(p)G(p)+E(p)=Q(p)G(p)+R(p)
E(p)=M(p)G(p)+Q(p)G(p)+ R(p)
E(p) = [M(p) + Q(p)]G(p) + R(p)
This equation shows that for a fixed message vector and generator polynomial, an
error pattern or error vector E depends on the remainder R.
For every remainder R there is a specific error vector. Therefore we can call the
remainder vector R the syndrome vector S, i.e., R(p) = S(p). Therefore
Y(p) / G(p) = Q(p) + S(p) / G(p)
Thus the syndrome vector is obtained by dividing the received vector Y(p) by G(p), i.e.,

S(p) = rem[ Y(p) / G(p) ]
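The whole chain, encoding X(p) = M(p)G(p), corrupting it, and recovering the syndrome by division, can be sketched in GF(2) arithmetic; G(p) = p^3 + p + 1 below is an assumed example generator:

```python
def poly_rem(a, g):
    """Remainder of a(p)/g(p) over GF(2); integers are coefficient bit masks."""
    glen = g.bit_length()
    while a.bit_length() >= glen:
        a ^= g << (a.bit_length() - glen)
    return a

def poly_mul(a, b):
    """Carry-less (GF(2)) polynomial product, used for X(p) = M(p)G(p)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

g = 0b1011                      # assumed generator G(p) = p^3 + p + 1 (q = 3)
m = 0b1010                      # message polynomial M(p)
x = poly_mul(m, g)              # code word X(p) = M(p)G(p)
assert poly_rem(x, g) == 0      # valid code word -> zero syndrome
y = x ^ (1 << 4)                # single transmission error E(p) = p^4
s = poly_rem(y, g)              # S(p) = rem[Y(p)/G(p)]
assert s == poly_rem(1 << 4, g) # syndrome depends only on the error pattern
print(bin(s))
```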
A q-stage shift register is used to generate the q-bit syndrome vector. Initially all the
shift register contents are zero and the switch is closed in position 1.
The received vector Y is shifted bit by bit into the shift register. The contents of the flip-flops
keep changing according to the input bits of Y and the values of g1, g2, etc.
After all the bits of Y have been shifted in, the q flip-flops of the shift register contain the q-bit
syndrome vector. The switch is then closed to position 2 and clocks are applied to the shift register.
The output is the syndrome vector S = (Sq−1, Sq−2, …, S1, S0).
Once the syndrome is calculated, an error pattern is detected for that particular
syndrome. When the error vector is added to the received code vector Y, it gives the
corrected code vector at the output.
The switch named Sout is opened and Sin is closed. The bits of the received vector Y
are shifted into the buffer register as well as into the syndrome calculator.
When all n bits of the received vector Y have been shifted into the buffer register and the
syndrome calculator, the syndrome register holds the syndrome vector.
Syndrome vector is given to the error pattern detector. A particular syndrome detects
a specific error pattern.
Sin is then opened and Sout is closed. Shifts are applied to the flip-flops of the buffer
register, error register, and syndrome register.
The error pattern is then added bit by bit to the received vector. The output is the
corrected, error-free vector.
Unit-5
Convolution codes
Decoding methods of convolution codes:
1. Viterbi decoding
2. Sequential decoding
3. Feedback decoding
Convolutional encoding operates continuously on the input data; hence there are no code vectors and blocks.
Metric: the discrepancy between the received signal y and the decoded signal at a particular node.
Surviving path: the path of the decoded signal with minimum metric.
The metric of a particular path is obtained by adding the individual metrics of the nodes along that path.
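These definitions can be sketched with a minimal Viterbi decoder over Hamming metrics, assuming a rate-1/2, constraint-length-3 encoder with taps 111 and 101 (a common textbook example, not necessarily the encoder used in these notes):

```python
def conv_encode(bits, state=0):
    """Two output bits per input bit; state holds the two previous inputs."""
    out = []
    for b in bits:
        s = (b << 2) | state
        out += [bin(s & 0b111).count("1") % 2,    # taps 111
                bin(s & 0b101).count("1") % 2]    # taps 101
        state = s >> 1
    return out

def viterbi(received):
    """Keep, per state, the surviving path of minimum accumulated metric."""
    metrics, paths = {0: 0}, {0: []}              # encoder starts in state 0
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_m, new_p = {}, {}
        for state, m in metrics.items():
            for b in (0, 1):                      # extend every survivor
                s = (b << 2) | state
                branch = [bin(s & 0b111).count("1") % 2,
                          bin(s & 0b101).count("1") % 2]
                metric = m + sum(u != v for u, v in zip(branch, r))
                nxt = s >> 1
                if nxt not in new_m or metric < new_m[nxt]:
                    new_m[nxt], new_p[nxt] = metric, paths[state] + [b]
        metrics, paths = new_m, new_p
    return paths[min(metrics, key=metrics.get)]

msg = [1, 0, 1, 1, 0, 0]                # two tail zeros flush the encoder
code = conv_encode(msg)
noisy = code[:]; noisy[3] ^= 1          # one channel error
assert viterbi(noisy) == msg            # the surviving path corrects it
```

The per-branch Hamming distances are the node metrics; the accumulated sum along each path is the path metric, and the decoder keeps only the surviving path into each state.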
Example: