Data Communication - NPTEL

This document provides information about a video course on data communication taught by Prof. Ajit Pal at IIT Kharagpur, including the professor's contact information and an overview of topics that will be covered. The course covers fundamentals of digital communication including encoding, modulation techniques, error detection/correction codes, and data transmission basics. It will also discuss topics like impairments during signal propagation, bandwidth, bit rates, transmission media types, and the impact of noise on analog and digital signals.


Course 12.

Data Communication (Video Course)

Faculty Coordinator(s) :

1. Prof. Ajit Pal

Department of Computer Science and Engineering

Indian Institute of Technology, Kharagpur

Kharagpur, 721302, INDIA

E-Mail: [email protected]

Telephone : (91-3222) Off : 283486

Res : 283487, 278612

Detailed Syllabus :

Fundamentals of Digital Communication: Communication channel, Measure of information, Encoding of source output, Shannon's encoding algorithms, Discrete and continuous channels, Entropy, Variable length codes, Data compression, Shannon-Hartley Theorem.

Baseband data transmission: Baseband pulse shaping, Inter Symbol Interference (ISI), Duobinary baseband PAM system, M-ary signaling schemes, Equalisation, Synchronisation, Scrambler and Unscrambler.

Band-pass data transmission systems: ASK, PSK, FSK, DPSK, QPSK and MSK modulation schemes, Coherent and non-coherent detection, Probability of Error (Pe), Performance analysis and comparison.

Error detection and correction codes: Linear block codes, Algebraic codes, Cyclic codes, Convolutional codes, Burst error correcting codes, Performance of codes.

Synchronous and Asynchronous transmission, Modem, Serial interface, Circuit switching, Packet switching, Hybrid switching, Architecture of computer networks, OSI model, Data communication protocols.
Data Communications
Prof. Ajit Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture # 01
Introduction and Course Outline

Hello viewers, welcome to the video lecture series on data communication. This is the first lecture of the 40-lecture series, and in this lecture I shall give an introduction and course
outline.

(Refer Slide Time: 00:01:16)

First let me introduce myself. My name is Ajit Pal; I joined IIT Kharagpur in 1982, the day the ASIAD started in Delhi, and I am currently a professor in the Computer Science and Engineering Department of IIT Kharagpur. I received M.Tech and PhD degrees from the Institute of Radio Physics and Electronics, Calcutta University, in the years 1971 and 1976
respectively.
(Refer Slide Time: 1:20)

My research interests include embedded systems, low power VLSI circuits and computer
networks. I am a fellow of the IETE, India, and a senior member of the IEEE, USA. If you want to send email to me, here is the email address: [email protected].

Before I discuss the various topics that will be covered in this lecture, first I shall give a very oversimplified model of a data communication system, and that will put you in the proper perspective. So here is the oversimplified model of a data communication system.

You have got a source; the source is essentially where the data originates. A source can be a computer, a peripheral, or some communication equipment like cell phones, PDAs and so on; any system which can send data, process data and receive data can be a source. Then we will require a transmitter; a transmitter is the device
which converts the data sent by the source into a suitable form for transmission through
the medium.
(Refer Slide Time: 5:20)

(Refer Slide Time: 2:55)

As we shall see the source generates data and transmitter will convert it into a suitable
form which can be sent through the communication system. So the third component is the
communication system, the medium through which the signal is sent. Now the medium can be very simple, a piece of wire or a pair of wires like a shielded cable or a twisted pair of wires, or it can be optical fiber, or it can be a Local Area Network or a Wide Area Network; so by communication system we mean that it can be a very simple system like a pair of wires or it can be a very complex system like a LAN, a WAN or the internet. So we shall consider different
types of communication systems.
Then comes the receiver which receives the signal and converts it into data or message.
So here again you see you require a receiver which will receive the signal coming
through the communication system and then it will do some processing and then after
converting the signal into data it will send it to the destination. The destination again can be a computer, peripherals or communication equipment or whatever. The source and destination equipment can be of the same type. Now let us consider what we
mean by data.

The source generates some data or message, which is the information to be communicated.


What do we mean by data? Data is something which conveys some meaning to the receiver; that is what we call data. And data can be analog or digital in nature, so here d(t) means the data; it can be analog or digital, and it is sent from the
source to the transmitter.

(Refer Slide Time: 5:55)

Now the transmitter as I said will convert it into a signal so data is transformed into
signal. As such the data cannot be sent through the communication system. Data has to be
converted into some electromagnetic signal which can be transmitted through a medium.
The signal can be electrical in nature, electronic in nature or optical in nature which can
be sent through the communication system.

Before we discuss about the communication system and other things let us first consider
what we mean by data and signal.

First of all we shall discuss about what we mean by data, analog and digital data types,
analog and digital data. Then as I mentioned the data has to be converted into signal
again the signal can be analog or digital in nature it can be either of the two types.
(Refer Slide Time: 7:00)

So we shall discuss two different types of signals, analog and digital; both types will be considered. And as we shall see the signal can be periodic in nature or aperiodic in nature, and in fact a signal which is not periodic in nature can be considered as a combination of some periodic signals. So we shall first discuss the periodic signal characteristics and then we shall see how periodic signals can be used to represent non-periodic signals. And as we shall see the signal can have two different types of representation; one
is time domain representation, another is frequency domain representation. We shall
discuss about the time domain and frequency domain representations and in this context
we have to discuss the spectrum of the signal and bandwidth of the signal. We shall
discuss about both of them and also see the relationship between the two.

And obviously when a signal is generated it has to be propagated, and whenever it goes from, say, the transmitter to the receiver the signal has to go through the communication system, and depending on the distance and the medium used there will be some propagation time, and the wavelength of the signals that can be sent will depend on the medium that is being used. So we shall discuss the data and the signal in detail.
Then come the impairments that take place as the signal goes through the medium. As the
signal passes through the transmission medium it suffers some impairment and that
impairment can be in the form of attenuation.

We shall see how attenuation occurs and also the unit of attenuation, the decibel or dB, which is universally used. We shall discuss the attenuation of different types of media and the unit of attenuation. And also in this context we shall consider the bandwidth of the medium and the signals which can be sent through the medium; for different types of media the signal can be of different types. Then, as I said, the impairments will take place: one reason is attenuation and a second reason is distortion. The distortion will occur in two forms. These two forms are known as delay distortion and attenuation distortion, and obviously these two distortions are to be taken care of at the receiving end.
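As a quick worked illustration of the decibel unit (the figures here are my own, not from the lecture), attenuation in dB is 10 log10 of the power ratio, and the dB values of cascaded sections simply add. A minimal Python sketch:

    import math

    def attenuation_db(p_in_watts, p_out_watts):
        # Gain in decibels; a negative value means attenuation.
        return 10 * math.log10(p_out_watts / p_in_watts)

    # Hypothetical figures: a signal enters a cable section at 10 mW
    # and leaves at 5 mW, i.e. half the power is lost.
    print(attenuation_db(0.010, 0.005))      # about -3 dB
    # Three such sections in cascade: the dB losses add, about -9 dB.
    print(3 * attenuation_db(0.010, 0.005))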

(Refer Slide Time: 11:17)

And the rate at which data can be sent will be dependent on the medium, and as we shall see there is some limit on the data rate which can be sent through a medium, which is characterized by the Nyquist bit rate. Depending on the bandwidth of the medium the Nyquist bit rate will be decided. We shall discuss in detail this Nyquist bit rate, which is the highest data rate that can be transmitted through the medium.
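To give a feel for this limit ahead of the detailed lecture (a worked example of my own), for a noiseless channel of bandwidth B hertz carrying a signal with M discrete levels the Nyquist bit rate is C = 2 B log2(M) bits per second:

    import math

    def nyquist_bit_rate(bandwidth_hz, levels):
        # Maximum data rate of a noiseless channel: C = 2 * B * log2(M).
        return 2 * bandwidth_hz * math.log2(levels)

    # A 3000 Hz telephone-like channel with a binary (2-level) signal: 6000 bps.
    print(nyquist_bit_rate(3000, 2))
    # The same bandwidth with 8 levels per signal element: 18000 bps.
    print(nyquist_bit_rate(3000, 8))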

Another important concept is the baud rate. As data is sent it is converted into a signal
and actually the rate at which data is sent and the rate at which the signal elements are sent through the medium are different, and that leads to two different concepts, bit rate and baud rate. We shall discuss both of them in detail and the relationship between the two. And apart from attenuation and distortion there is another source of impairment, which is noise. We shall discuss various noise sources and see how they affect both the
analog and digital type of signals.

A signal can be analog in nature or it can be digital in nature so let us see how these two
different types of signals are affected because of noise. And we shall also discuss in the subsequent lectures how, in the presence of noise, the bandwidth of the signal or the channel capacity changes; that is given by the Shannon capacity of a noisy channel.
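As a preview (again a worked example of my own), the Shannon capacity of a noisy channel of bandwidth B and signal-to-noise ratio SNR is C = B log2(1 + SNR), where SNR is a plain power ratio rather than a dB value:

    import math

    def shannon_capacity(bandwidth_hz, snr_db):
        # Convert SNR from dB to a power ratio, then apply C = B * log2(1 + SNR).
        snr_ratio = 10 ** (snr_db / 10)
        return bandwidth_hz * math.log2(1 + snr_ratio)

    # A 3000 Hz channel with a 30 dB signal-to-noise ratio:
    # capacity of roughly 30,000 bits per second.
    print(round(shannon_capacity(3000, 30)))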
(Refer Slide Time: 11:43)

As I said the signal has to be sent through some transmission media. We shall discuss
about two different types of transmission media. In fact the transmission media can be
broadly divided into two types; one is guided and another is unguided. In case of guided
transmission media there are three popular types; twisted pair, coaxial cable and fiber
optic cable.

We shall discuss about the characteristics of these three types of guided transmission
media and also we shall discuss about the transmission of signal through unguided media
or through air. In that case of course as we shall see there are three mechanisms of
sending transmission in the wireless form. One is radio wave, another is microwave and
another is infrared. So there are three different forms in which the wireless transmission
occurs. Some examples are broadcast radio the AM FM radio that we hear, then the
terrestrial microwave, satellite microwave and infrared communication.
(Refer Slide Time: 12:28)

So these are the four different types of transmission media in the context of wireless
communication and we shall discuss about all of them in detail.

(Refer Slide Time: 13:05)

Then, as I said, the data has to be converted into a signal for transmission through the transmission media, and depending on the data and the signal various approaches can be used. First of all, if the data is digital in nature the signal can also be digital in nature, and in that context we call it encoding. Whenever we transform digital data to a digital signal the approach that is
followed is known as encoding. And as we shall see there are various encoding
techniques. Then if the data is analog in nature such as voice, video and converted into
digital form then also we have to do encoding. So in general whenever the signal is
digital in nature or when we do digital transmission the conversion process is known as
encoding.

On the other hand whenever the signal is analog in nature whether it is analog data or
digital data we call it modulation that means the technique is known as modulation. And
obviously the type of signals we will use will be dependent on the situation and
bandwidth and obviously of the transmission media that we are using.

(Refer Slide Time: 15:10)

Let us look at the conversion techniques, coding techniques first. Here we have
mentioned about the various techniques for digital to digital conversion. That means here
we are doing encoding. The encoding can be divided into three types. This is known as
line coding (Refer Slide Time: 15:04) so unipolar, polar and bipolar. Unipolar is not that
popular because of its various limitations. We shall discuss about the limitations of
unipolar transmission. And the polar where the signal has two different levels has got a
number of varieties such as non return to 0 NRZ, return to 0RZ, Manchester encoding,
differential Manchester encoding and so on. So these are the four popular polar
techniques for line coding and we shall discuss about each of their advantages,
disadvantages, bandwidth required etc in detail in the subsequent lectures.
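To make the line coding idea concrete, here is a small sketch of my own (not the lecturer's code) that maps a bit string to signal levels under two of the polar schemes just mentioned, NRZ-L and Manchester; the polarity conventions differ between textbooks, so treat the level choices below as assumptions:

    def nrz_l(bits):
        # NRZ-L: one constant level for the whole bit time, +1 for '1', -1 for '0'.
        return [+1 if b == '1' else -1 for b in bits]

    def manchester(bits):
        # Manchester: every bit interval has a transition in the middle.
        # Here '1' is low-to-high and '0' is high-to-low (one common convention).
        levels = []
        for b in bits:
            levels += [-1, +1] if b == '1' else [+1, -1]
        return levels

    data = "1011"
    print(nrz_l(data))       # [1, -1, 1, 1]
    print(manchester(data))  # [-1, 1, 1, -1, -1, 1, -1, 1]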
(Refer Slide Time: 15:50)

So far as the bipolar techniques are concerned they have some advantages; the varieties are AMI, Alternate Mark Inversion, then B8ZS and HDB3. These are the three popular bipolar
encoding techniques and we shall discuss about these three techniques in detail.

(Refer Slide Time: 16:23)

Coming to analog data to digital signal conversion, where the data is analog in nature such as voice, video etc, in such a case you have to convert the analog data into digital form and there are two basic approaches; one is known as PCM, Pulse Code Modulation, and the second one is known as Delta Modulation.
We shall discuss about these two techniques and obviously we shall consider the
limitations of both PCM and Delta Modulation technique and compare these two
approaches in detail in subsequent lectures.
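As a rough illustration of the PCM idea (a sketch of my own with arbitrary parameters, not the exact procedure of the later lecture), each sample of the analog signal is quantized to the nearest of 2^n uniformly spaced levels and the level index becomes the n-bit code word:

    import math

    def pcm_encode(samples, bits_per_sample, v_min=-1.0, v_max=1.0):
        # Quantize each sample to one of 2^n uniform levels and
        # return the level index as the PCM code word.
        levels = 2 ** bits_per_sample
        step = (v_max - v_min) / levels
        codes = []
        for s in samples:
            index = int((s - v_min) / step)
            codes.append(min(max(index, 0), levels - 1))  # clamp to a valid level
        return codes

    # Eight samples of one cycle of a sine wave, 3 bits per sample.
    samples = [math.sin(2 * math.pi * k / 8) for k in range(8)]
    print(pcm_encode(samples, 3))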

(Refer Slide Time: 16:50)

Coming to the modulation techniques where we are generating analog signal and if the
data is analog in nature we have three different modulation techniques which can be
broadly divided into two types; amplitude modulation and angle modulation. And again
angle modulation has got two different components frequency modulation and phase
modulation. When we discuss about data and signal we shall see that the analog signal
has got three important parameters; amplitude, frequency and phase and any one of the
three parameters can be modified or changed to embed the information, and actually this has led to three different modulation techniques like amplitude modulation, frequency
modulation and phase modulation.

We shall discuss about these three modulation techniques in detail, their advantages,
disadvantages, the bandwidth required for transmission to the media, the immunity to
noise and so on.
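For example (a small sketch of my own, with made-up carrier and message frequencies), amplitude modulation varies the envelope of the carrier with the message signal, while frequency modulation varies the carrier's instantaneous angle, two of the three parameters just mentioned:

    import math

    fc = 1000.0   # carrier frequency in Hz (arbitrary choice)
    fm = 50.0     # message frequency in Hz (arbitrary choice)

    def am_sample(t, modulation_index=0.5):
        # Amplitude modulation: the carrier amplitude follows the message.
        message = math.cos(2 * math.pi * fm * t)
        return (1 + modulation_index * message) * math.cos(2 * math.pi * fc * t)

    def fm_sample(t, freq_deviation=100.0):
        # Single-tone frequency modulation: the phase carries a sinusoidal term
        # with modulation index beta = freq_deviation / fm.
        phase = 2 * math.pi * fc * t + (freq_deviation / fm) * math.sin(2 * math.pi * fm * t)
        return math.cos(phase)

    print(am_sample(0.001), fm_sample(0.001))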
(Refer Slide Time: 18:04)

Coming to digital to analog modulation where the data is digital in nature and the signal
is again analog in form in such a case again we have got three different techniques known
as amplitude shift keying, frequency shift keying and phase shift keying, and of course amplitude and phase shift keying can be combined to form another modulation technique known as QAM, Quadrature Amplitude Modulation. So we shall discuss about these four modulation
techniques which are used for converting digital data to analog signal and these QAM
and PSK are particularly used in many applications. ASK is used in transmission of
signal through optical fiber so all these modulation techniques we shall consider in detail.
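As a rough sketch of my own (not taken from the lecture) of how a bit selects the amplitude, frequency or phase of the carrier in the three keying schemes, for one sample within a bit interval:

    import math

    def keyed_sample(bit, t, scheme, fc=1000.0, f0=500.0, f1=1500.0):
        # Return one sample of the band-pass signal for the given bit.
        if scheme == "ASK":
            # Amplitude shift keying: carrier on for '1', off for '0'.
            amplitude = 1.0 if bit == '1' else 0.0
            return amplitude * math.cos(2 * math.pi * fc * t)
        if scheme == "FSK":
            # Frequency shift keying: one of two frequencies per bit value.
            f = f1 if bit == '1' else f0
            return math.cos(2 * math.pi * f * t)
        if scheme == "PSK":
            # Binary phase shift keying: 0 or 180 degree carrier phase.
            phase = 0.0 if bit == '1' else math.pi
            return math.cos(2 * math.pi * fc * t + phase)
        raise ValueError("unknown scheme")

    for s in ("ASK", "FSK", "PSK"):
        print(s, keyed_sample('1', 0.0002, s), keyed_sample('0', 0.0002, s))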
(Refer Slide Time: 19:03)

Whenever the bandwidth of the medium is very high it is possible to send several signals simultaneously, and the technique that can be used is known as multiplexing. In this lecture series we shall discuss the basic concepts of multiplexing, and there are three different forms: frequency division multiplexing, wavelength division multiplexing (these two are essentially the same concept applied in two different frequency ranges) and Time Division Multiplexing.

Again Time Division Multiplexing has got two different forms synchronous TDM and
asynchronous TDM. We shall discuss about both of them and compare their advantages
and limitations and as we shall see nowadays another technique that is being used is
known as inverse TDM. So, when we discuss multiplexing techniques we shall discuss
all these topics in detail.
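As a toy illustration of synchronous TDM (my own sketch, not the lecturer's), the multiplexer visits every input in a fixed round-robin order and places one unit from each input into every frame, transmitting an empty slot when an input has nothing to send:

    def synchronous_tdm(inputs, frames):
        # inputs: one list of data units per source; frames: number of TDM frames.
        # Each frame carries exactly one slot per source, in a fixed order.
        out = []
        for f in range(frames):
            frame = [source[f] if f < len(source) else None for source in inputs]
            out.append(frame)   # None marks an idle (wasted) slot
        return out

    streams = [["A1", "A2", "A3"], ["B1"], ["C1", "C2"]]
    for frame in synchronous_tdm(streams, 3):
        print(frame)
    # ['A1', 'B1', 'C1'], then ['A2', None, 'C2'], then ['A3', None, None]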
(Refer Slide Time: 21:45)

Of course multiplexing has got very wide applications, and in this lecture series we shall discuss four important applications. The first important application is the telephone system, which is used in our day-to-day life, and we shall discuss how, using multiplexing techniques, two different types of services known as analog services and digital services are provided.

And nowadays it is possible to have a broadband service using the telephone system known as DSL, digital subscriber line, technology, and it has got four different variations: ADSL, SDSL, HDSL and VDSL. We shall discuss all four one after the other and we shall discuss the multiplexing application. Then we shall discuss cable
modem where the standard cable TV network can be used for transmission of data which
is possible by using a technique known as Hybrid Fiber Coaxial network or HFC network
and we shall see how multiplexing technique is used not only to send the TV signals but
also to send data which can be used for internet access.

Then so far as the optical network is concerned we shall consider another important
application of multiplexing known as SONET Synchronous Optical Network and
Synchronous Optical Network provides you very high bandwidth. And we shall see how
that bandwidth can be used and particularly the telephone system and SONET system can
be integrated so that also we shall discuss in detail. So these are the four important
applications of multiplexing, which we shall discuss in detail in subsequent lectures.
(Refer Slide Time: 22:23)

Now we have discussed the various techniques that are used for sending the signal through the communication system: the encoding techniques, modulation techniques and multiplexing techniques. And as the signal passes through the medium, because of the various impairments the signal that is sent, S(t), is not the same as the signal received by the receiver. So the signal received through the medium is different from what has been sent, but what the receiver wants is the same thing, so we have to find out what kind of problem occurs, or what the difference is between the original signal and the received signal.

(Refer Slide Time: 00:23:13)


Particularly in this context we shall discuss first about how the interfacing to the medium
can be done and for that purpose the various modes of communication we shall discuss
like parallel and serial, and simplex, full-duplex and half-duplex techniques. There are two approaches of serial communication, asynchronous and synchronous, which we shall discuss, and then the DTE-DCE interface that is used for interfacing between the source and the transmitter, and also between the receiver and the destination. The source and destination are known as the DTE, Data Terminal Equipment, and the transmitter and receiver are known as the DCE, Data Circuit-terminating Equipment. This interface, known as RS-232, will be discussed in detail, and in this context there is a concept known as the null modem; X.21 we shall also discuss in detail.

And various types of modems or DCE being used will be discussed so that you can
interface to the communication system. Then we have got another important concept. As
we receive the signal we shall see that the data that is being sent by the source is different
from the data that is being received so they are not same.

(Refer Slide Time: 26:45)

So in this context there is some problem and that problem is known as error. So, because
of various types of impairments like attenuation, distortion and noise there will be error,
error will be introduced in the signal, if it is a digital signal a 0 will become 1 and 1 will
become 0 and obviously first we shall discuss about different techniques for detection of
error and the various types of error that can occur. First of all we shall discuss about
single bit error then burst error etc. In single bit error one bit gets changed from 1 to 0 or
from 0 to 1.

On the other hand in the case of burst error a sequence of bits say 1 1 0 0 gets changed to
1 0 1 1 so this happens whenever a burst error occurs. We shall discuss about both of
them, and particularly the various techniques that are used for error detection, such as simple parity check, where only single-bit errors or an odd number of errors can be detected, then two-dimensional parity check, checksum and cyclic redundancy check techniques. So these are the four different error detection techniques which are used for detecting both single-bit errors and burst errors.
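To make the simple parity idea concrete, here is a small sketch of my own assuming even parity: the sender appends one bit so that the total number of 1s is even, and the receiver recomputes the check. A single-bit error (or any odd number of errors) is caught, while a two-bit error slips through:

    def add_even_parity(bits):
        # Append a parity bit so that the total number of 1s is even.
        return bits + str(bits.count('1') % 2)

    def parity_ok(codeword):
        # Accept only if the received word still has an even number of 1s.
        return codeword.count('1') % 2 == 0

    sent = add_even_parity("1100")   # "11000"
    print(parity_ok(sent))           # True : no error
    print(parity_ok("11001"))        # False: single-bit error detected
    print(parity_ok("10100"))        # True : two bits flipped, error missed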

We shall learn about the cyclic redundancy check, known as CRC, which is possibly the most popular and most widely used technique. And apart from error detection techniques we shall discuss error correcting codes, which can be used to correct the error in the received data. This is known as forward error correction. So we shall discuss how, particularly by using the Hamming code, the error correction can be done; we shall restrict our discussion to only single-bit error correction.
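To give a flavour of how such forward error correction works (a minimal Hamming (7,4) sketch of my own; the lecture will treat the construction properly), four data bits are protected by three parity bits, and the recomputed checks form a syndrome that points directly at the single bit in error:

    def hamming74_encode(d):
        # d: list of four data bits [d1, d2, d3, d4].
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4          # covers codeword positions 1, 3, 5, 7
        p2 = d1 ^ d3 ^ d4          # covers codeword positions 2, 3, 6, 7
        p3 = d2 ^ d3 ^ d4          # covers codeword positions 4, 5, 6, 7
        return [p1, p2, d1, p3, d2, d3, d4]   # positions 1..7

    def hamming74_correct(c):
        # Recompute the three checks; the syndrome gives the erroneous position.
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3       # 0 means no single-bit error
        if syndrome:
            c = c[:]
            c[syndrome - 1] ^= 1              # flip the offending bit back
        return c

    code = hamming74_encode([1, 0, 1, 1])
    corrupted = code[:]
    corrupted[4] ^= 1                          # introduce a single-bit error
    print(hamming74_correct(corrupted) == code)   # True: error corrected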

However, in practice something else is done which is known as error control where
actually a backward error correction technique is used instead of forward error correction.
In backward error correction what is being done is if the received signal is found to be
corrupted that means if there is error in it then the receiver sends a message to the
transmitter to retransmit the data or message once again so it is based on retransmission
and for that purpose there are several techniques. Apart from error control there is
another technique which is known as flow control. Flow control is necessary whenever
the transmitter and receiver are not of the same capability.

Suppose you have got a fast transmitter and a slow receiver, a server and a desktop
system; in such a case the transmitter can send at a very high speed but the receiver is not capable of receiving at that speed. So in such a case there will be overflow, or the buffer of the receiver will become full, so we have to overcome that problem and for that
purpose the flow control technique is used.

(Refer Slide Time: 29:45)


The first technique we shall discuss is the stop-and-wait flow control and also sliding-
window flow control. So both these techniques will be used and as we shall see the
performance of sliding-window flow control is better that's why it is widely used. And we
shall see that these flow control approaches can be extended to perform error control or
can be used for backward error correction and the technique which is used for this
purpose is known as ARQ or Automatic Repeat Request.

There are three different variations of error control techniques. First one is known as
stop-and-wait ARQ, based on stop-and-wait flow control, which we shall discuss in detail. There is another technique known as go-back-N ARQ
which is based on sliding-window flow control approach. However, here error and lost
frame is also taken into consideration. So we shall discuss about both stop-and-wait ARQ
and go-back-N ARQ in detail.

However, go-back-N ARQ has some extra overhead because of retransmission of some
frames which are not really necessary and that can be overcome by using selective repeat
ARQ and we shall discuss about this selective repeat ARQ and particularly the buffer
requirement in all the three cases and also the requirement of the number of bits that is
required for numbering the frames so frame numbers are to be given so that this ARQ
technique can work.

So number of bits required is again an overhead. That we shall discuss in the context of
all these three techniques.
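As a small worked example of this numbering overhead (my own figures, using the usual window-size bounds), with k bits for the frame sequence number stop-and-wait keeps only one frame outstanding, go-back-N can use a send window of up to 2^k - 1 frames, and selective repeat up to 2^(k-1) frames so that old and new frames cannot be confused:

    def max_window(seq_bits, scheme):
        # Maximum usable send-window size for a k-bit sequence number.
        space = 2 ** seq_bits
        if scheme == "stop-and-wait":
            return 1              # one outstanding frame at a time
        if scheme == "go-back-N":
            return space - 1      # e.g. 7 frames with 3-bit numbering
        if scheme == "selective-repeat":
            return space // 2     # e.g. 4 frames with 3-bit numbering
        raise ValueError("unknown scheme")

    for scheme in ("stop-and-wait", "go-back-N", "selective-repeat"):
        print(scheme, max_window(3, scheme))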

(Refer Slide Time: 31:00)

Now whenever we are sending a signal through a communication system it is necessary to have synchronization at three different levels: at the bit level, word level and frame level.
So whenever the data link control is performed essentially we are interested in frame
synchronization. A sequence of bits or a sequence of characters is being sent, and obviously in this context you have to identify when a particular frame is starting and when it is ending. To do that you have to use some kind of framing: a format has to be used such that a frame has a flag at the beginning, then some addresses, then the data, and at the end also a flag, and it may also carry some information for flow control, error control etc.
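As a simplified illustration of such a frame format (a sketch of my own; real HDLC uses bit stuffing, a richer header and a proper CRC, so treat the layout and the toy checksum here as assumptions), a flag byte marks both ends of the frame and the payload sits between the address/control fields and the error check:

    FLAG = 0x7E   # frame delimiter value used here for illustration

    def build_frame(address, control, payload):
        # Simplified layout: FLAG | address | control | payload | checksum | FLAG
        body = bytes([address, control]) + payload
        checksum = sum(body) % 256      # toy 8-bit checksum, not a real CRC
        return bytes([FLAG]) + body + bytes([checksum, FLAG])

    frame = build_frame(address=0x03, control=0x10, payload=b"HELLO")
    print(frame.hex(" "))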

We will also discuss how the flow control and error control techniques are being used in
data link control. And also it will be necessary to perform link management to initiate a
link to continue the communication of messages and terminate the session so this is
known as link management and in this context there is a standard which is widely used
known as High Level Data Link Control or HDLC. Not only HDLC is widely used but
also some subsets of HDLC in other forms are used. We shall discuss HDLC in detail, particularly some of its important parameters such as types of stations, data transfer modes and frame formats; all these things we shall consider in detail in the context of HDLC.

For example, in the previous cases in case of say HDLC we are assuming that there are
two stations here you have got one station and here you have got another station (Refer
Slide Time: 00:32:30) and there is a direct link between them and they are
communicating with each other that is the case of data link control. But it maybe
necessary that a large number of stations or equipments may want to communicate with
each other so in such a case this kind of simplified direct link cannot be used and in such
a case we have to go for data communication through Wide Area Network and we have
to use the switching techniques.

(Refer Slide Time: 00:33:00)


For example, there are different types of switching techniques and we shall discuss about
switching techniques like circuit switching and in one lecture we shall discuss about the
circuit switching techniques in detail after introducing the switched communication network, and we shall see that in a switched communication network you will have a number of intermediate nodes through which signals are sent. So you will have some stations and a number of intermediate nodes; these nodes are essentially equipment used for communicating data to a number of stations, and stations are connected to such nodes, and that leads to a scenario known as a Wide Area Network, because these stations may be located far away and they are connected with the help of the nodes. In that context there are several switching techniques that are used, and the most popular one
is known as circuit switching technique. We shall discuss about the circuit switching
fundamentals, its advantages and disadvantages and how these circuit switching is
implemented.

(Refer Slide Time: 34:48)

As we shall see there are different concepts like space division, switching using crossbar
switches and time division switching like say it uses TDM Time Division Multiplexing.
And we shall discuss about how space division and time division switching are combined
to form a single switching technique. Then we shall discuss about message switching and
packet switching. And in the context of message switching we shall see how a message can be sent through a switched communication network and what the limitations of the message switching technique are. Particularly, as we shall see, whenever a long message, say several gigabytes, is sent through a network it monopolizes the network and it increases the probability of error; whenever a long message is sent the probability of corruption increases. That's why messages are usually sent in terms of a number of packets: a single message is divided into a number of packets and each of them is sent separately, and that is known as packet switching. We shall see that packet switching is very efficient compared to message switching.
(Refer Slide Time: 37:25)

We shall discuss various packet switching techniques such as virtual circuit packet switching and datagram packet switching, and we shall compare virtual circuit packet switching and datagram packet switching in detail, see how they compare with each other and what their advantages and disadvantages are.

Circuit switching is essentially similar to the telephone network where you have to
establish a link then do the communication. On the other hand packet switching is very
similar to the postal system where we can send a letter and drop it in the letter box and
then it can be sent to the next post office and so on. Essentially it is based on store and
forward.

We shall discuss about the packet switching and circuit switching techniques in detail and
particularly the application of circuit switching in the Public Switched Telephone Network, the PSTN, where circuit switching has its biggest
application. Then we shall discuss about some important applications of packet switching
in different types of network such as X.25, frame relay and also in ATM.
(Refer Slide Time: 37:50)

So we shall discuss about these three different types of networks where circuit switching
and packet switching concepts are used and we shall see how data communication can be
done through these networks. Then after discussing the various concepts of Wide Area
Network there will be other techniques which I have not mentioned in the context of
Wide Area Network. Particularly we shall be having a number of techniques. As I
mentioned frame relay, X.25, ATM we shall consider and also we shall consider cellular
telephone networks and satellite communication. We shall discuss about each of these
networks in detail and also see how the various techniques have been used.

And in the context of Wide Area Networks we also have to use routing apart from the
other switching techniques because these messages or packets have to be sent through a
number of nodes so we need to know the route. We shall also discuss about the different
type of routing techniques such as fixed routing, dynamic routing, flooding and so on.
(Refer Slide Time: 40:20)

Another important concept in this context we have to discuss is known as congestion.


The Wide Area Network can be considered as a network of queues. Now, whenever a
large number of packets are sent through the network a problem known as congestion
occurs. Just like on the road, whenever a large number of cars come onto the road traffic congestion occurs; similar to that, congestion occurs in a Wide Area Network whenever a large number of nodes are sending packets. Particularly, whenever a large burst of packets is sent it leads to congestion.

We shall discuss about various techniques by which we can first of all prevent
congestion. So, congestion control can be done in two ways. First we shall consider how
congestion can be prevented and second technique is whenever congestion occurs how
we can come out of it. So we shall discuss both the techniques: congestion prevention, and also the congestion control which we have to apply whenever congestion occurs in order to come out of it.
(Refer Slide Time: 42:22)

Then we shall discuss about the various Medium Access Control techniques. Medium
Access Control techniques can be of different types. For example, it can be based on
contention, that is, contention based. That means you have got some kind of shared medium and a number of nodes are connected, and these nodes have an equal right to access the medium. Then all these nodes are contending to get access to the network. So in such a case we have to use a Medium Access Control technique based on contention, and these contention-based Medium Access Control techniques have a number of types.

For example, it starts with ALOHA, which is used in packet radio networks, so we shall discuss ALOHA; then there are other techniques which are based on CSMA, Carrier Sense Multiple Access, which overcomes some of the limitations of ALOHA; then we shall discuss CSMA/CD, Carrier Sense Multiple Access with Collision Detection, which particularly improves the efficiency over CSMA. Apart from these contention-based schemes we shall also discuss controlled access schemes based on token bus and token ring, which are again based on sending tokens. That means, when a number of nodes are there, the station which is holding the token will be able to send, and this is how the contention is overcome.
(Refer Slide Time: 43:19)

So we shall discuss these token passing techniques in the context of Medium Access Control, and these token passing techniques are popular in many applications; for example in FDDI, an important Local Area Network technology, the token passing technique is used for Medium Access Control. Apart from the token passing techniques, the controlled access techniques, there are other techniques which are based on reservation. There are many applications, particularly in satellite communication, where you will see that neither the contention-based schemes nor the token passing schemes can be used because of the long delay. So in such a case we have to use a technique known as a reservation scheme. We shall discuss the reservation techniques which are used in satellite communication. Reservation techniques are important whenever the delay is very large. We shall also discuss satellite communication and we shall see how the reservation technique is used there.
(Refer Slide Time: 44:33)

On the other hand the contention-based techniques are used in LANs, and multiple access techniques are also used in cellular telephone systems, for example CDMA, Code Division Multiple Access, or TDMA, Time Division Multiple Access. These techniques are used in cellular telephone systems, so we shall discuss how they are being used.

We shall see in cellular telephone systems not only multiplexing but also multiple access
techniques are used and we shall see how they are combined to improve the efficiency of
cellular telephone system. So, after discussing the WAN we shall discuss about the data
communication through LAN Local Area Network. And the Local Area Network can be
used whenever the geographic region is limited to a few kilometers. We shall discuss various issues involved in a Local Area Network, like what information it is sending, to whom it is sending and when it is sending. Suppose you have got a shared medium which is being accessed by a number of users; in that case the LAN technique has to decide to whom it should be sent, what will be sent (which means what the size of the packet will be, as to the minimum and maximum size) and when it can send.
(Refer Slide Time: 46:53)

Obviously there will be several techniques. We have to use several techniques like
Medium Access Control as I mentioned. Particularly in the context of LAN both
contention-based schemes and token passing schemes are used. We shall discuss how
they are used in various Local Area Networks.

So in the context of LAN we have to use some kind of addressing so that the sender and receiver details will be recorded, like who is sending the packet or frame and where it is going. That means the sender and receiver have to be identified, and for that particular
purpose addressing has to be used. So the address of the source and the address of the
destination are to be sent which is known as addressing. We have to use error detection
whenever the data is in a frame that is being sent and we shall see that there will be some
address information, source address, destination address apart from data and also there
will be some CRC check for detecting errors. Therefore error detection is used in the
context of Local Area Networks.

Apart from the conventional legacy LANs like Ethernet, token ring or token bus we shall
also discuss about high speed LANs such as FDDI, Fiber Distributed Data Interface, fast Ethernet and gigabit Ethernet, techniques that are used in high speed LANs.
(Refer Slide Time: 00:48:15)

Nowadays the wireless Local Area Network is becoming very popular so we shall discuss
about the Wireless Local Area Networks such as IEEE 802.11 based techniques and also
we shall discuss about other techniques like Bluetooth that is being used in Wireless
LAN. So we shall discuss about the legacy LANs like Ethernet and also about the high
speed LANs and Wireless LAN in detail.

Then as we shall see apart from Local Area Networks and Wide Area Networks most of
the people are communicating through internet, data communication is done through
internet, so we shall discuss about the internet. And as we shall see the internet comprises Local Area Networks and Wide Area Networks, and they are bound together with the help of suitable software and hardware. So the basic objective of the internet is to connect individual heterogeneous networks, both LANs and WANs, distributed across the world using suitable hardware and software in such a way that it gives the user the illusion of a single
network.
(Refer Slide Time: 49:50)

So the single virtual network is widely known as internet which is essentially a network
of networks. Question arises as what is the hardware and software that is needed.

As we shall see here, apart from the hosts, which are essentially the computers, you have got LANs (Local Area Networks) and Wide Area Networks, and there are other devices marked R; essentially they are routers. So you will require routers; that is the hardware you require to link the various heterogeneous networks, LAN and WAN. We shall discuss the capability of a router and how the routing is done. And also we shall discuss the software; the software that is being used in this context is known as TCP/IP. So the Transmission Control Protocol and the Internet Protocol are actually the software that acts as a glue which binds the various Local Area Networks and Wide Area Networks together, so we shall discuss TCP/IP briefly.

Then whenever the data communication is done through internet we have to discuss
about a number of techniques such as segmentation and reassembly. A particular packet may not be sent through a particular network because of the restriction on the maximum size, so it has to be segmented or divided. We have to discuss the segmentation and how the reassembly is done before delivering it to the destination. Also, we have to discuss encapsulation, how the frames are encapsulated to be sent through the internet, how the connection control is done, how ordered delivery is performed, how the addressing is done by using the internet address, the IP address, and the various types of IP addresses that are used, and how multiplexing is done through a single interface.
(Refer Slide Time: 51:29)

We can have access to multiple devices, which is known as multiplexing, and we shall see how these are all incorporated as part of TCP/IP. We shall also discuss the data compression that can be used for sending different types of signals through the transmission media for efficient communication, and also we shall discuss the data encryption techniques which are used for secrecy or for the purpose of security, and various types of transmission services such as priority, grade of service and security.

All these issues will be discussed in detail, particularly in the context of data communication through the internet. And by now you must have realized that data communication is not a very simple technique; it involves a number of very complex techniques. We have already mentioned a number of techniques, and obviously we shall see that it becomes a very complex thing.

So, in such a case whenever we have to deal with a very complex system normally we
use a layered approach. Layered approach is essentially a divide and conquer approach
where a complex problem is divided into a number of simple problems and that each of
these simple problems is solved independently and individually and that is being
precisely used in layered architecture.

So, for the purpose of data communication we have to use layered approach and in fact
the next lecture that we shall deal with is on layered architecture, where we shall discuss what the layered approach is, why the layered approach is used, what the basic principles of the layered approach are and how, for example, a system is divided into a number of layers and each of these layers is responsible for performing different
functions.
(Refer Slide Time: 53:52)

Obviously questions will arise as to how these layers interact with each other. So in that context we shall discuss layers and interfaces, and we shall see the various functionalities that are provided, which can be hardware, software or a combination of them, which is known as an entity, and the various protocols (protocols are essentially agreed-upon rules and conventions used for communication) and how these protocols are used in a layered architecture.

And in the context of layered architecture we have to discuss about services and service
access points, types of services, service primitives and particularly we shall discuss about
the ISO's OSI reference model. The International Standards Organization has proposed an Open System Interconnection reference model, which is essentially a framework of standards and is widely followed. So in the next lecture we shall discuss the ISO's OSI reference model, and the functions of the different layers of the OSI model will be discussed in detail.
(Refer Slide Time: 57:33)

So, to summarize let me give you some idea about the lecture sequence.

The first lecture, which is going on now, is essentially the introduction and course outline, which is this lecture. Then we shall discuss the layered architecture, the third will be essentially
data and signal then the fourth lecture will be on transmission impairments and channel
capacity, the fifth lecture will be on guided transmission media such as twisted pair,
coaxial cable and optical fiber, and the unguided media will be covered in the sixth lecture
and there we shall discuss about those radio and other techniques that I have mentioned
like the wireless communication techniques we shall discuss in detail.

Then transmission of digital signal will be covered in seventh and eighth lectures that is
essentially the encoding techniques that I mentioned those unipolar, polar and bipolar
techniques that are used for converting digital and analog data into a digital signal. So this will be covered in these two lectures, the seventh and eighth lectures, and the ninth and tenth lectures
will cover transmission of analog signal that means how the digital and analog data is
converted not in digital form but in analog form by using different analog modulation
techniques such as amplitude modulation, phase modulation and frequency modulation
when it converts analog data to analog signal and also the ASK Amplitude Shift Keying,
Frequency Shift Keying and Phase Shift Keying used for converting digital data to analog
signal.

We shall discuss various multiplexing techniques in lecture 11, where we shall discuss Time Division Multiplexing and Frequency Division Multiplexing, and in the
context of TDM as I mentioned we shall discuss about synchronous TDM and
asynchronous TDM and in the context of FDM we shall also discuss about the
wavelength division multiplexing.
Then lecture twelve will cover the telephone system and DSL technology which are
essentially the applications of multiplexing. Lecture 13 will cover cable modem and SONET, lecture 14 will cover interfacing to the media, then lecture fifteen will cover various error detection techniques and error correction techniques by using the Hamming
code and flow control and error control techniques will be covered in lecture number 16
where we shall discuss about stop-and-wait flow control, go-back-N ARQ techniques and
so on.

Therefore various ARQ techniques and flow control techniques will be covered in lecture
sixteen and data link control particularly that HDLC will be covered in detail in lecture
number 17, then lecture number 18 will cover the switching techniques such as circuit
switching and lecture number 19 will cover packet switching various characteristics and
features of packet switching lecture number 20 will cover routing in packet switching
networks.

(Refer Slide Time: 58:17)

As I mentioned we have to use routing techniques whenever we send packets through packet switching networks. Then the congestion that happens in packet switching networks will be discussed in lecture 21, and lecture number twenty two will cover Wide Area Networks based on X.25 and frame relay, lecture twenty three will cover ATM, the Asynchronous Transfer Mode, and in lectures 24 to 25 we shall discuss the Medium Access Control techniques such as contention based, token passing based and reservation based; all these techniques will be covered in lectures number 24 to 25.

And in the remaining lectures we shall discuss the cellular telephone network, satellite networks, various Local Area Networks including high speed Local Area Networks and Wireless Local Area Networks, and we shall discuss the internet and internetworking techniques. We shall also discuss multimedia communication, where we have to use compression and decompression techniques, and we shall discuss security in communication, where we shall discuss encryption and decryption techniques and the other techniques that are in use. So this, in a nutshell, is what will be covered in this lecture series. With this we come to the end of today's lecture, the first lecture.

(Refer Slide Time: 58:50)

Thank you.
Data Communications
Prof. Ajit Pal
Dept. of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture # 02
Layered Architecture

Hello viewers, welcome to the second lecture of the lecture series on data
communication. The topic of today's lecture is layered architecture obviously layered
architecture for data communication.

(Refer Slide Time: 00:01:11)

On completion of this lecture the students will be able to explain the concepts of layering,
explain the basic principles of the layered architecture particularly of the OSI model, they
will also be able to explain how information flows in a layered architecture and specify
the functions of the seven layers of the OSI model.
(Refer Slide Time: 1:30)

The outline of today's lecture is as listed here.

(Refer Slide Time: 00:01:55)

We shall first discuss why a layered approach, what the layered approach is, the basic principles of the layered approach, layers and interfaces, entities and protocols, services and Service Access Points, types of services, service primitives, and ISO's OSI reference model (ISO stands for International Standards Organization and OSI stands for Open System Interconnection), and then we shall discuss the functions of the different layers of the OSI model.
First let us start with the basic concepts or why do we really need layered architecture. In
the last lecture we have discussed the basic issues of data communication. There we have
seen that data communication is a very complex process, it involves many issues, many
functions, and particularly when a user is trying to communicate with another user then the user must send the data in a particular form, by following a set of rules, so that he can communicate over an unreliable, error-prone communication system.

As we know the real world is not really ideal; it has got noise, and many things can happen like attenuation, distortion and noise, so as a result the data we try to send will pass through an unreliable medium. So the user wants to perform reliable communication through the unreliable, error-prone medium, and to do that it must follow a common set of rules for generating data.

And at the other end the receiver must follow the same common set of rules for
interpreting the messages and unfortunately it has been found that the set of rules to be
followed is very complex. It is so complex that it cannot be considered as a single entity,
it is not possible to implement as a single entity in a monolithic form. Thus what is done
in such circumstances is that normally we follow a layered approach, and this layered approach provides a viable technique to deal with a complex problem.

(Refer Slide Time: 5:00)

So not only in the case of data communication but layered approach is followed in many
applications. In this particular case we shall follow the layered approach for data
communication and we shall see it will involve a large number of functions. The large
number of functions will be grouped or partitioned into a hierarchical set of layers. Essentially this layered approach is nothing but a divide-and-conquer technique. What do we do in a divide-and-conquer technique?
A particular problem or task is divided into a number of subtasks such that each subtask
can be tackled easily.

(Refer Slide Time: 8:35)

So let us see the basic issues or basic concepts of layered approach.

The first point is, as I said, that a complex problem is divided into a number of pieces of manageable and comprehensible size. Normally the data communication system is very
complex. What we want is we want to divide it into pieces of manageable and
comprehensible size such that each subtask or each piece can be understood easily.

We may consider it as a modular approach. We know that whenever we try to develop software, if the software is complex, we try to follow a modular approach or a structured modular approach; what we do in that is we try to divide it into modules and then each module is developed independently. That is precisely what we are trying to do
here. So it may be considered as a structured modular approach.

One objective of this layered approach is that each module can be developed and tested
independently. Why is this important?

Normally as we shall see a data communication system is not implemented by one person
or complex application software is not developed by a single person but it is usually
developed by a team of people. Similarly a data communication system is developed by a
team of people having expertise in different areas. So here what we are doing is we are
dividing into modules such that each module can be developed and tested independently.
Whenever the development is performed by a team of people we can assign a particular module to a particular person; then he can develop it and test it independently. That is one of the basic principles of the layered approach.
Then another basic principle is it allows easy enhancement and implementation of the
functions of a particular layer without affecting other layers. That means as we assign a
particular layer to a particular team member he will be able to enhance it, implement it
independently. That means subsequently if some modification is needed, some correction
is needed then that modification or enhancement can be performed independently without
affecting other layers.

And whenever we shall be doing layering we have to follow some principles to do the
layering, to do the partitioning. What is the first principle to be followed?

The first principle is that we have to use an optimum number of layers. The number of layers into
which the complex problem will be divided should not be too large, should not be too
small.

(Refer Slide Time: 12:05)

If it is too small suppose a complex problem is divided into two subtasks then what will
happen is each subtask will remain again of high complexity. But what we want is we
want to divide into some optimum number such that the subtask is not very difficult not
very complex it is comprehensible, manageable and understandable. Then second
principle that we have to follow is put similar functions in the same layer.

We shall see that among the data communication functions some of the functions are similar and some of the functions are dissimilar, but we want to put similar functions into a particular layer and other functions can be put in another layer. This layering can also be considered from another viewpoint, the level of abstraction. As we are forming a hierarchical set of layers we may consider that essentially we are dividing the system into different levels of abstraction. That means the topmost layer has the highest level of abstraction, and as we go down the layers it gets more and more refined, more and more defined, and then at the lowermost layer it becomes easy to implement.
That means higher layer will have higher level of abstraction and then the lower layers
will have lower level of abstraction. And as I have already mentioned the partitioning
should be done such that the changes of functions in a particular layer can be made within
a layer without affecting other layers. Then as we shall see we have to create layer
boundaries very judiciously.

Why is it necessary to create layer boundaries?

Because, as we shall see, the communication between layers takes place through the layer boundaries, and if the layer boundaries are not judiciously defined then the communication may become a problem, or communicating the necessary information across them will become a trouble. For example, we have to choose layer boundaries so as to minimize the information flow across the boundaries. This is another objective or another principle to be followed while doing the layering. Why is it necessary?

We are decomposing the data communication system into a number of layers. Now it is
somewhat similar to multitasking or parallel tasking. A particular task is divided into a
number of parallel tasks or parallel processes. What we want in that situation is minimum
inter-process communication.

Here also what we want is that the layer boundaries must be chosen in such a way that we minimize the information flow across the boundaries. That means, for the two layers which communicate with each other through a boundary, the information flow through that boundary should be minimized.

Here, as I was mentioning, we have to create layers and also the interfaces between layers. In such a case we have to see that the system interconnection rules are modularized in terms of a series of layers of functions, say n layers. That means we shall divide the modules of the task into a number of layers, n layers, and each layer will contain a group of related functions, and the layer below layer n and the layer above layer n are layer (n minus 1) and layer (n plus 1) respectively. So we have to give numbers to the layers such that the topmost layer has the highest number and the lowermost layer has the number 1.
(Refer Slide Time: 14:16)

(Refer Slide Time: 14:59)

Then between each pair of adjacent layers there is an interface, as I was telling, and the interface will define which primitive services the lower layer offers to the upper layer. As we shall see, a lower layer will provide service to the upper layers. That means the lower layer is a service provider and the upper layer is the service user, and layer n provides service to layer (n plus 1) through Service Access Points. That means each layer will have a number of Service Access Points, and through the Service Access Points layer n will provide services to layer (n plus 1).
And as we shall see each layer will add value to the services provided by lower layers.
That means from the lower layer the services provided by layer N will be used by layer (n
plus 1) to add value. Let me explain with more details with the help of a diagram

(Refer Slide Time: 19:38)

Here is layer N. So this is layer 1, and this is layer 1 of the other side; this is user A and
this is user B; this is layer 2, this is layer 2, this is layer N and here is layer N. So we
have grouped the functions into n layers, and as we can see there is a layer boundary, or interface, here.

Layer 1 has an interface with layer 2 of this system; here also you have an interface between
layer 1 and layer 2 (Refer Slide Time: 17:52); then layer N has an interface with layer (N
minus 1), layer N on this side also has an interface with layer (N minus 1), layer 2 has an interface with layer
3 and layer 2 on this side also has an interface with layer 3. So here we have got n
layers on the two sides. Then here are the interface boundaries between layers: this is the
interface boundary between layer 1 and 2, the interface boundary between layer 2 and
3, and here is the interface boundary between layer (N minus 1) and N.

And as we shall see, the communication takes place through a physical medium; here is
your physical medium through which ultimately the communication will take place, and layer 1
will interface with the physical medium, so the communication will take place in this manner.
And each layer will have some protocols; this is known as the layer N protocol.
Later we will see in more detail what we really mean by protocol, and here we have
the layer 1 protocol.

So here I have drawn the diagram of the two sides of two systems; say here is user A and here is
user B. As you can see, the data communication system on each side has got n layers, and
the two systems are communicating with each other through a
physical medium. This is the basic concept of layers and interfaces.
(Refer Slide Time: 20:05)

Data communication occurs between two entities in different systems. We have seen that
there are different layers, and within each layer there are some entities, and the entities are
responsible for communication. You may be asking: what is an entity? An entity is
essentially an active element. An entity is something which is capable of sending,
processing or receiving information. An entity can be a piece of hardware, a piece of
software, or a hardware-software combination, and these
entities will communicate: an entity of layer N will communicate with an entity of
layer N on the other side. These active elements
within a layer are the entities, and they are capable of sending, processing and receiving
information. And for communication to take place the entities should follow an agreed
upon protocol.

For example, if two persons want to communicate they should use the same language. If
one person is speaking in Bengali and another person is speaking in English they
cannot communicate with each other. They should follow some agreed upon rules which
are known as a protocol. So, for communication to take place the entities should follow an
agreed upon protocol.

Here we have the details about the entity and protocol


(Refer Slide Time: 22:00)

As I mentioned, a protocol is a set of rules that govern data communication; it defines
what, how and when. So the protocol will define what to communicate and in what form the
information is to be communicated: will it go in the form of a
byte, in the form of a one kilobyte packet, or say one page of a book? It will decide that,
and also how it will be communicated. It will also define when to communicate, and for that
purpose we have to define the syntax, semantics and timing.

Syntax refers to the structure or format of the data. The data has to be properly
formatted so that the other side can interpret it properly. Then we have to follow also the
semantics, the way the bit patterns are interpreted and the actions taken based on that
interpretation. Semantics is something like a grammar; for example, a language has
a grammar, and semantics plays that kind of role in the way the interpretation of data has
to be done. So you have to define both the syntax and the semantics.

The timing is also very important for communication. It will specify when data can be
sent and how fast it can be sent. As we go into the details we shall see that there will be
a need for synchronization. That means when data is sent the other side has to start receiving
it from that point so that it receives the correct data, and also there will be some specific rate
of communication. That means how fast data can be sent has to be decided, so the rate
of communication will be part of the timing.
(Refer Slide Time: 24:05)

Let me explain how the layers communicate with each other through Service Access
Points, and for that purpose let us consider two adjacent layers. Here you have got two
adjacent layers (Refer Slide Time: 24:27), say this is layer N and this is layer (N minus 1).
So this is layer N and this is layer (N minus 1), and this is the interface between the two
layers (Refer Slide Time: 25:00), and in this interface there are some Service
Access Points; these are the Service Access Points, SAPs.

(Refer Slide Time: 26:30)


So, through these Service Access Points the layer N entities will get service from layer (N
minus 1). Obviously the Service Access Points will have some address. Let us take an
example for this.

Suppose we are using the telephone system; the socket of the telephone system is the Service
Access Point of the telephone network. And as we know each socket corresponds to a
telephone number, and that is the address of that Service Access Point. So, if I want to use
the telephone network for data communication my handset has to be connected to the
socket, and then I can get the services of the telephone system. Here it is somewhat like
that: this is the service provider and this is the service user. And to get service from the
service provider, layer N will send an Interface Data Unit (IDU), which has got two parts: one is the
Interface Control Information (ICI) and the other is the Service Data Unit (SDU).

SDU stands for Service Data Unit, IDU for Interface Data Unit and ICI for Interface
Control Information. The SDU carries the actual data, and the whole IDU is handed over
at the Service Access Point for communication to the lower layer.

So, the Interface Control Information is not really the data, but it provides some control
information to the lower layer. For example, whenever you are writing a program in
assembly language there are assembler directives, which are directives to the
assembler; ORG, EQU, DB are such assembler directives. They are not
really instructions but directives to the assembler.

Here also it is somewhat similar to that: layer N will send this particular piece of data,
called the PDU or Protocol Data Unit of this layer, and this is communicated to the lower
layer. The lower layer will separate it out: the ICI is separated from the SDU, the Service
Data Unit. Then the SDU can be divided into several parts. For example, here there is one
part, and that SDU will be included in another kind of
Protocol Data Unit, the PDU of layer (N minus 1), to which layer (N minus 1) will add
some header. So, after receiving this data from layer N through the
Service Access Point, layer (N minus 1) separates out the Interface Control
Information, takes out the Service Data Unit and puts it into another Protocol Data Unit,
essentially the Protocol Data Unit of layer (N minus 1), adding some
header; then again it may be sent to the next layer down through Service Access Points. This is
how two adjacent layers, layer N and layer (N minus 1), communicate through
Service Access Points. The unit of communication is the Protocol Data Unit, which is
passed through the interface to the lower layer and interpreted with the help of the
Interface Control Information; the lower layer takes out the Service Data Unit, makes
another Protocol Data Unit by adding this Service Data Unit along with a header, and
that can be communicated to the next layer. So this is
how the Service Access Point works.
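To make the idea concrete, here is a minimal Python sketch of this exchange; the header strings, the SAP address and the function names are purely illustrative assumptions, not part of any standard.

```python
# Minimal sketch of layer N handing an IDU (ICI + SDU) to layer N-1 through a
# Service Access Point.  All names and header strings here are illustrative only.

def layer_n_send(user_data):
    """Layer N builds its PDU and wraps it in an IDU for the SAP."""
    n_pdu = "HDR_N|" + user_data            # layer N Protocol Data Unit
    return {"ICI": {"sap_address": 5},      # Interface Control Information
            "SDU": n_pdu}                   # Service Data Unit = layer N PDU

def layer_n_minus_1_receive(idu):
    """Layer N-1 strips the ICI, keeps the SDU and adds its own header."""
    sdu = idu["SDU"]                        # the data to be carried further down
    return "HDR_N-1|" + sdu                 # layer N-1 Protocol Data Unit

pdu = layer_n_minus_1_receive(layer_n_send("hello"))
print(pdu)                                  # HDR_N-1|HDR_N|hello
```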
(Refer Slide Time: 31:30)

As we shall see, layer N of one machine will carry on a conversation with layer N of
another machine. As I mentioned, although we are using layering, the entities of the
same layer on the two sides will communicate with each other, and the rules and
conventions used in this conversation are collectively known as the layer N protocol. The
list of protocols used by a certain system is called its protocol stack. So whenever the
communication is simple it can be handled by a single protocol, but sometimes we have to follow a set of
protocols, a list of protocols, and that entire thing is known as the protocol stack.
And the set of layers and protocols in the network is the network architecture.
(Refer Slide Time: 32:50)

Our topic of this lecture was network architecture, but I did not define architecture. So here
you see that the set of layers and protocols is the network architecture. And as I
mentioned, an architecture is a set of rules and conventions that is used to build something.

Let me define more precisely what we really mean by architecture. It is a set of rules and
conventions necessary for building something; it does not specify implementation details.
For example, when we say Roman architecture or Gothic architecture we usually refer to
some stylistic pattern of a building; it does not say how it has been implemented, those
details are not known. Essentially it is a model, and the model can be considered as a
framework of standards. Actually, an architecture will provide
a framework of standards based on which something can be implemented.

As we shall see, this network architecture will not really give you the implementation
details. It will provide a framework of standards which can be used to implement
something. Now, as I mentioned, essentially the different layers will interact with each
other through the interface boundaries, and the lower layer will provide service to its
upper layer.
(Refer Slide Time: 34:38)

Let us see the different types of services that are offered.

Services can be connection oriented or connectionless. Connection oriented service is
modeled after the telephone system, which all of us know. If we want to
talk to somebody through the telephone system, first of all we have to set up a link, set up the
connection. If we don't get a connection because of some problem then no communication
can take place. In other words, no data transfer can be performed without setting up a
connection; only after setting up a connection can data communication be done, so it is
connection oriented.

Connectionless service, on the other hand, is modeled after the postal system. In the postal system,
if we want to send a letter to somebody we don't establish a connection with the
receiver. We put the letter in an envelope, write down the address and then put that letter
in the letter box; from the letter box the postman collects it, puts it in a bag and sends it to the
distant place, where another postman will open the bag and, by looking at the address,
deliver the letter. So a service can be connection oriented or connectionless. Later on we shall have
examples of connection oriented and connectionless services used in different
applications.
(Refer Slide Time: 36:40)

Then we can have some quality of service, which can take different forms.
One is confirmed service, another is unconfirmed service. For example, when we are sending a
letter, if after sending it we want to know whether it has been delivered to the
destination properly then it has to be a confirmed service. Sometimes, for example,
we send a registered letter with acknowledgement due; that means
an acknowledgement card is used, and after receiving that acknowledgement we know that
the particular packet or letter has been delivered to the destination. In such a case we can
call it a confirmed service.

On the other hand, normally we just post a letter and we assume that it will be delivered to
the receiver. But there are situations when it is not delivered, or there is some long delay.
That can happen in an unconfirmed service. And as we shall see, a part of the quality of
service may include the order of delivery.

What do we really mean by order? Suppose we are sending three letters one after the
other, or say we are sending three volumes of a book one after the other: first we send
volume I, then volume II, then volume III. It may so happen that at the
other end they are not received in the same order; volume III may be received
first, then volume I or volume II, and in such a case there may be some problem.

So sometimes it is necessary to maintain the proper order of delivery. So the order of delivery,

the order in which things are delivered, is important. Second is whether an error has
occurred while sending the data; that means we have to find out whether any error
occurred or not. The third issue which is part of the quality of service is the
delay, how much delay occurs in delivering the thing. These are the various
aspects of quality of service which are used as part of the service.

Let us take some examples of services.


(Refer Slide Time: 40:40)

I have taken up six examples. The first one is a sequence of pages to be sent to the other side,
another example is remote login, with which we are very much familiar, and the third one is
digitized voice. The first three can be considered connection oriented and the next
three are examples of connectionless service.

Sequences of pages, remote login and digitized voice are sent through connection oriented
services. Whenever we send a sequence of pages it is usually a reliable message
stream, and whenever we are sending a reliable message stream it is a reliable service.
Remote login also uses a reliable byte stream; that is the service we use there. Digitized
voice normally uses an unreliable connection oriented service. Electronic junk mail is an
unreliable datagram service, which is connectionless; another name for connectionless
service is datagram service. Registered mail is essentially an acknowledged datagram
service, as I explained earlier in detail.

Whenever we do a database enquiry it is essentially a request-reply service. So we
have some examples of services provided in data communication systems. Whenever a
service is provided, and particularly if it is connection oriented, there are some service
primitives. One such service primitive is the connect request, with which one side of the
system requests a connection to be established. Then with the help of the connect
indication the other side is informed, that is, the called party is signalled that a connection
has been requested.

Then we have the connect response, used by the callee to accept or reject the call. That
means the other side will have some indication; on getting that indication the other side
will respond and say whether it wants to accept or reject the call. Then the connect confirm
tells the caller whether the call was accepted.
(Refer Slide Time: 43:00)

Then you have got data request, data indication, disconnect request and disconnect
indication. These are the service primitives that have to be used in a connection oriented
service. Let me explain with the help of a time sequence diagram and show how it really
occurs.

(Refer Slide Time: 44:40)


(Refer Slide Time: 00:46:08)

So this side is system A and this side is system B; here the protocol actions are performed
and obviously this is the service provider. So system A will make a request, say a connect
request, and in this direction you have got time; after some time the other side will
get some indication, so here we get the connect indication (Refer Slide Time: 44:20). Then
the system B side will send the connect response, and after some delay the other side
will receive the connect confirm.

As an example you can consider the telephone system. If system A wants to
communicate then it will dial the number and the other side will get the ringing tone; that
is the connect indication, and after hearing the ringing tone as the connect indication the user will
respond by lifting the handset, that is your connect response, and the other side will get the
information that the connection has been confirmed. So once the system B side lifts the
telephone this side will know that the telephone has been lifted, and after that data
exchange will take place in the same manner. That means a data request will come from
this side, then there will be a data indication on the other side, then the data response
will come from that side to this side, then there will be the data confirm, and after the
data communication is performed in this way there will be the disconnect request and
the disconnect confirm. So in this way, in a sequence over time, by using this protocol the
communication will take place.
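The same time sequence can be summed up in a tiny sketch; the primitive names follow the discussion above, while everything else is just an illustration.

```python
# Sketch of the connection oriented primitive sequence between system A and
# system B; purely illustrative, the ordering over time is the point.

sequence = [
    ("A", "connect request"),      # caller asks for a connection (dials)
    ("B", "connect indication"),   # callee is informed (ringing tone)
    ("B", "connect response"),     # callee accepts (lifts the handset)
    ("A", "connect confirm"),      # caller learns the call was accepted
    ("A", "data request"),
    ("B", "data indication"),
    ("A", "disconnect request"),
    ("B", "disconnect indication"),
]

for system, primitive in sequence:
    print("system", system, ":", primitive)
```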

So we have discussed the layering concept, the concept of an interface and the Service
Access Points through which the communication takes place between two layers; we
have also discussed some services and some service primitives.
(Refer Slide Time: 47:10)

Let me explain layered architecture with the help of an example.

The most popular layered architecture that is followed in data communication is the
International Standards Organization's Open Systems Interconnection architecture, known
as the OSI reference model. As you can see it has got seven layers: physical layer,
Data Link Layer, network layer, transport layer, session layer, presentation layer and
Application Layer. Out of these seven layers the
lower three layers are the responsibility of the network, and the upper layers are
essentially the responsibility of the host or the computer. So here you see that with the
help of these seven layers data communication is performed, and most data
communication systems follow this particular model.

Let me start with the functions of the lower most layer that is the physical layer.
(Refer Slide Time: 48:30)

The physical layer is concerned with the transmission of raw bits over a communication
channel. As we have seen, the physical layer is connected to the transmission medium. So
this physical layer is concerned with the transmission of raw bits over a communication
channel; it decides the mechanical interface between two
systems, that means which connector will be used, how many pins it will have, what its
length is and so on; then it covers the electrical part, the signal levels, data rate
and so on; then it has the functional and procedural part; and it will also decide whether
simultaneous transmission is possible in both directions or not, that is, simplex, half-duplex
or full-duplex, which we shall discuss in detail later.

The physical layer also handles the establishing and breaking of connections; this is one
of its functions, and it deals with the physical transmission medium.
That means it decides what particular type of medium is to be used, whether we shall be using a
guided medium like coaxial cable or optical fiber, or an unguided medium like air.
These are all decided as part of the physical layer.

With the help of a diagram let me explain how the physical layer will communicate.
(Refer Slide Time: 49:20)

Suppose this is one side of the physical layer, so here we have got one side. Suppose we
are sending 01001100; then this particular data is a raw bit stream which is sent through a
physical medium to the other side. The other side will receive the same thing, so it will
receive 01001100; this side will receive the data (Refer Slide Time: 51:01). So we can
see here that these are the physical layers, these physical layers are communicating
through a physical medium, and here is the raw bit stream. Then we have got the next layer,
that is your Data Link Layer.

(Refer Slide Time: 51:30)


The function of the Data Link Layer is to transform the raw transmission facility of the
physical layer into reliable transmission and reception of a structured bit stream. The raw
stream is transformed into reliable transmission and reception of a structured stream, and
here are the functions performed: framing, physical addressing, synchronization, error
control, flow control; it also decides whether the transmission will be character oriented
or bit oriented. And as we shall see, whenever the communication is taking place among
multiple users it will do Medium Access Control; that means a medium shared by a large
number of users has to be controlled. Then we have got the network layer; these are the
functions performed in the network layer.

(Refer Slide Time: 00:52:28)

The network layer is responsible for source to destination delivery by establishing,

maintaining and terminating connections, and it deals with logical addressing. As we
shall see, as part of the packet we have to put the source and destination addresses, so you have
to do some kind of framing and the logical address will be part of that. Then it has to do
routing, because the data may go through a number of communication systems; examples
of routing approaches are virtual circuits, datagram service etc. It will do assembly and
disassembly of messages, messages may be assigned priorities, and it may be necessary to
do internetworking, particularly in a heterogeneous network; that will be the responsibility
of the network layer.
(Refer Slide Time: 53:40)

Then comes the transport layer. The transport layer is responsible for true end to end
communication. The network layer discussed previously was not really concerned with end
to end communication, but the transport layer is. In particular, the quality of service
required by the upper layers is provided by it; it does port addressing, which we shall discuss later on in detail. It may
be necessary to do multiplexing; it multiplexes end user addresses onto the network. It may be
necessary to break down a packet into a number of smaller packets and reassemble them at the other
side; this segmentation and reassembly is done by the transport layer. Then we have
connection control, monitoring of the quality of service, end to end error detection
and recovery, multiplexing and flow control; these are the various functions performed in the
transport layer.
(Refer Slide Time: 54:50)

Then comes the next layer, which is the session layer. The session layer establishes the
connection, and when the data transfer is complete it does the termination. It performs
dialogue management, and as part of dialogue management it will decide who will
speak, when they will speak and for how long. And as I mentioned,
the communication can be simplex, half-duplex or full-duplex, which we shall discuss in
detail later on.

Whenever the communication is taking place over a long distance there may be failures.
So, to recover from a failure in an efficient manner some checkpointing may be done, which is a
part of the session layer. It will also do token management. This is necessary when
some critical operations are to be performed by only one side at a time; the side which has
the token will perform the critical operation, and that management is done in the session layer.
(Refer Slide Time: 56: 40)

Then comes the presentation layer. The presentation layer is responsible for the syntax and
semantics of the information exchanged. It deals with the data types used, for example
whether a number is in 2's complement, or whenever we are using floating point numbers
we have to use IEEE 754; so which data types are supported is decided here, and the
character code, like ASCII or something else, is also decided. It will also decide about
data compression and, at the other side, decompression; JPEG and MPEG compressions
are done as part of this presentation layer. Sometimes, for secure communication, encryption and
decryption have to be done, and this is also a function of the presentation layer. Finally comes
the last layer, that is your Application Layer.
(Refer Slide Time: 00:57:05)

The Application Layer is concerned with user applications, and there are two types of
elements: common application service elements, like login and password checks, and
specific application service elements, like file transfer, access and management, job transfer and
manipulation, electronic mail, videotex, teletex, telefax, message handling, document
transfer etc., which are within the Application Layer.

(Refer Slide Time: 59:15)

This diagram shows, in a nutshell, the complete picture of communication.
Here you see that from the Application Layer some data comes in, and the Application
Layer puts a header on it and sends it to the presentation layer; the presentation layer in
turn puts a header on it and sends it to the session layer; the session layer puts a header on
it and sends it to the transport layer; the transport layer puts a header on it and sends it to
the network layer; the network layer puts a header on it and sends it to the Data Link
Layer; the Data Link Layer sends it to the physical layer, and the physical layer performs
the communication. So here what is shown is how communication takes place. However,
communication really takes place between the two users; those are the peer protocols.
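The header-adding process on the sending side, and its reversal on the receiving side, can be sketched as a simple loop over the layers; the header strings used here are only illustrative placeholders.

```python
# Illustrative sketch of encapsulation: each layer prepends its own header on
# the way down, and the peer layer removes it on the way up.

LAYERS = ["application", "presentation", "session",
          "transport", "network", "data link"]

def encapsulate(user_data):
    pdu = user_data
    for layer in LAYERS:                         # walk down the sending stack
        pdu = "[" + layer + " hdr]" + pdu
    return pdu                                   # handed to the physical layer

def decapsulate(pdu):
    for layer in reversed(LAYERS):               # walk up the receiving stack
        header = "[" + layer + " hdr]"
        assert pdu.startswith(header)
        pdu = pdu[len(header):]
    return pdu

frame = encapsulate("email from user A")
print(frame)
print(decapsulate(frame))                        # the email reaches user B intact
```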

For example, here you have got user A and here is user B. Suppose they are exchanging
email; user A will send the email, it will go down through this stack and across the channel, and it
will reach user B. So essentially user A is communicating with user B, and these are
the layers which are used to pass the information downward and upward so that it reaches the
other side in the proper form and is interpreted properly. So before I conclude this lecture
let me give you some review questions.

(Refer Slide Time: 59:30)

1. Why is it necessary to have layering in a network?


2. How do two adjacent layers communicate in a layered network?
3. What are the key functions of Data Link Layer?

These questions will be answered in the next lecture, thank you.


Data Communications
Prof. Ajit Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture # 03
Data and Signal

(Refer Slide Time: 00:01:23)

Hello viewers, welcome to today's lecture on data and signal. This is the third lecture in
the series on data communication. In this lecture we shall cover various aspects of data
and signal. On completion of this lecture the students will be able to explain what data is,
distinguish between data and signal, distinguish between analog and digital signals,
explain the difference between the time and frequency domain representations of a signal,
specify the bandwidth of a signal, and explain the bit interval and bit rate of a digital signal.
And here is the outline of today's lecture.
(Refer Slide Time: 00:02:07)

(Refer Slide Time: 00:02:10)

First we shall consider data and data types, analog and digital data, signal and signal
types, examples of analog and digital signals, periodic signal characteristics, time and
frequency domain representation of signal, spectrum and bandwidth of a signal and
finally the propagation time and wavelength.
(Refer Slide Time: 00:04:50)

First let us consider what data is. As we have seen, we have to send data through
some communication medium, so we have to understand what we really mean by data.
Data is an entity that conveys some meaning based on some mutually agreed upon rules and
conventions between a sender and a receiver. That means for sending the data the sender
has to follow some mutually agreed upon rules and conventions, and at the receiving end
the receiver also has to follow those mutually agreed upon rules and conventions to interpret the
data.

Let us take up an example: say the sender has sent some data, say 1000001.
As such this bit sequence has no meaning and the receiver cannot
really understand what it is and what it has been sent for. But once it is told that an ASCII
character has been sent (as you know, ASCII stands for American Standard Code for
Information Interchange), then the receiver will know that an ASCII character has been sent,
which is nothing but the character code for the character A. That means A has the seven bit
code 1000001, so now it has become data; until the receiver is able to interpret it, it is not
data.

For example, we hear many noises and disturbances from the environment, but we do not
interpret everything as data. But whenever we hear music or a song we can interpret
it and understand the meaning of that song, and that can be considered as data.
That means an uncorrelated, incoherent signal, whatever we receive, is not
data; but what we can interpret is, and obviously that interpretation is based on some knowledge
of the data, that means some standard norm or some mutually agreed upon convention. So
this is the significance or meaning of data.

Now the question is what are the different types of data that are possible? Data can be
either analog or digital in nature.
(Refer Slide Time: 00:07:35)

Analog data have continuous values over time. Suppose this is the time
domain representation of some data; it has continuous values over time, so this
side is the time and this side is the amplitude. Whenever this is being sent
it has continuous values, that means it has got an infinite number of possible values within this
range, and we call it analog data. Here within this duration, say from time 0 to time t, a large
number of values, in effect an infinite number of values, can occur; the signal level
can take any value in the range from 0 to some Amax, and that is why we call it
analog data. So this is an example of analog data. For example, voice and video are
analog data that are commonly sent over a medium. First let us consider audio as an example
of analog data: a person is speaking and someone is listening
to it, so this is the mouth and this is the ear.
(Refer Slide Time: 00:09:06)

Now through the air the voice travels to the ear. That means from the mouth a vibration is
made, and that vibration leads to alternate compression and decompression of the air,
forming waves, and those waves carry the sound to the ear, where it creates a sensation in the
eardrum which makes us hear. So this is your audio or acoustic data. Now let me represent
it, not in the time domain, but in the frequency domain.

(Refer Slide Time: 00:10:07)

So here we draw it in the frequency domain, and the signal strength usually lies in the range of
maybe 0 to 3500 Hz, so the significant portion of the signal lies in this frequency range.
However, as we know, voice or music can have components anywhere between 20 Hz and 20 kHz.
Our ear is most sensitive to this range of the signal, that means roughly 0 to 3500 Hz, and
most voice communication takes place within this range. So this is the analog data,
or audio data, that is usually communicated through air.

However, we cannot send it electronically as it is; to send it electronically we have to do
something, which we shall see later. So far as video data is concerned, we are familiar
with viewing images on TV. Video data can be best explained with the help of what we
see on TV.

(Refer Slide Time: 00:12:27)

As we see on TV, a raster scan is used: an electron beam moves from one end to the other
a large number of times, and that is how a frame is created. The number of lines is
about 525, of course not all of them are visible; 483 lines are visible and the
other lines are blank, and it is done in an interlaced manner. That means each field contains
241½ of the visible lines, two such fields make up a frame, and this is repeated thirty times
a second to make the picture appear continuous on the screen.
That means, because of the persistence of vision of the eye, the repetition rate should not be less than
thirty times a second, otherwise the image will not appear steady.

Whenever we do this, the bandwidth requirement of this video data is about 4 MHz; of
course this excludes voice and color, and if we include those it will be more than this. So
this is an example of video data that we receive through a television network, and we see
that it has got much higher frequency components, with a bandwidth of about 4 MHz; this is an
example of another kind of analog data.

Now let us consider a third type of analog data, that is, physical parameters. Physical
parameters are the various quantities that we encounter in the environment. The
world that we belong to is in the analog domain; that means all the physical parameters are
analog in nature.
(Refer Slide Time: 00:14:56)

For example, temperature, pressure etc. are all analog in nature, and these analog
parameters are usually converted to electrical form with the help of transducers.
Transducers are essentially energy conversion devices: on one side we
apply a physical parameter, it can be temperature, pressure or light intensity, and on the other
side we get some electrical signal, a voltage or current; that is your
transducer. The transducer produces an electrical signal which is usually also analog in
nature; that means we get continuous values of the electrical signal
corresponding to the physical parameter for any one of the parameters I have
mentioned earlier. So these are examples of analog data.

Now let us consider digital data. Digital data take on discrete values. If we again represent
them against time, we can have 1 0 1 1, and these are the
discrete values; that means two distinct voltage levels, here 0 V and, say, 5 V, so the data is
represented by two different voltage levels. So it can be 0 1 1 1, then again 0
and 1, and as you can see it has got two different voltage levels.
(Refer Slide Time: 00:16:53)

So here it takes on some discrete values. It is not absolutely necessary that it should have
only two voltage levels; it can have more than two, but since in general we are
more concerned with binary digital communication it will essentially be two levels in most
of the cases, as we shall see. An example of digital data is text, or character strings.

As we have mentioned, ASCII characters are seven bit codes; here we see an example of
some ASCII characters. The American Standard Code for Information Interchange, ASCII,
uses seven bit codes, and some of the codes are given here.

(Refer Slide Time: 00:17:07)


For example, the character 0 is 011 0000, and the character 1 is 011 0001; that is the ASCII code for 1.
That means in the computer, whenever we press the key for the character A on the
keyboard, essentially 1000001 is sent to the computer; the keyboard generates the
character code 1000001. Thus some examples of ASCII characters are given here,
and this is an example of digital data.
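This can be checked in a couple of lines of Python (standard library only, nothing course-specific assumed):

```python
# Print the seven bit ASCII codes of a few characters.
for ch in ("0", "1", "A"):
    print(ch, format(ord(ch), "07b"))
# 0 0110000
# 1 0110001
# A 1000001
```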

Another example of digital data is data stored in memory or on a CD, which has two discrete values
that can be represented as 0 and 1. For example, inside the computer the information
is stored in the form of 0s and 1s; inside the digital computer everything is in the
digital domain and nothing exists other than 0s and 1s. So whenever some
information is stored in the memory, on a hard disk or on a CD, it is stored in the form of
0s and 1s.

Now comes the question of the signal. So far we have discussed data and explained
what data is. Whenever data has to be sent over some communication
medium it has to be converted into a signal; data cannot be sent as it is through the
transmission medium. We have to transform it into a signal before sending it over the
communication medium.

(Refer Slide Time: 00:19:37)

So what is a signal? A signal is nothing but an electric, electronic or optical representation

of data which can be sent over a communication medium. That means we are converting
the data into some form which can be electric, electronic or optical, depending on the
medium through which we are trying to send it. When we are trying to send it through a
pair of wires it has to be an electrical voltage or current.

On the other hand, whenever we are trying to send it through an optical fiber cable it has to
be converted into an optical signal, or light; that light can then be communicated through the
optical fiber medium. Sometimes we send it in the form of a magnetic or electromagnetic
field, so we can say it has to be in some form which can be electric, electronic, optical or
an electromagnetic signal.

Again, as we shall see, signals can be of two different types, analog and digital. An analog
signal has continuous values over a period of time, or you can say an infinite number of
possible values over a period of time. For example, here is an example of an analog signal.

(Refer Slide Time: 00:21:04)

As you can see, a microphone converts voice data into a voice signal. As we have seen,
whenever somebody is speaking or singing they create vibrations in the air; that
audio data can be converted into an audio signal, or electrical energy, with the help of a
microphone. The microphone is essentially a transducer which converts that vibration, or
the pressure on the microphone, into an electrical signal, generating a voice signal which can be
sent over a pair of wires.

Here is an example plotted against time t, and as we can see you have got an electrical signal with
different amplitude values as shown on this side. This has got an infinite number of
possible values, so it is analog in nature. A digital signal, in contrast, can have only a limited number
of defined values, usually the two values 0 and 1.
(Refer Slide Time: 00:23:04)

For example, this is 0 V and here it is plus 5 V. The 0 V can be considered as the 0 logic
level and plus 5 V as the 1 logic level. So here, as you can see, the signal
has got two distinct values; let the sequence be 1 1 0 1 1 1 and 1. So the digital data is
converted into a signal in terms of 1s and 0s, and in digital data we have only two different
values.

(Refer Slide Time: 00:23:17)

Now let us focus more on the analog signal. Analog signals can be classified into two
types: the first is simple, the second is composite. An example of a simple analog signal is
a sine wave. A sine wave can be considered as a simple analog signal, and as we shall see a
composite analog signal is essentially a mixture of multiple simple signals, as we shall see
very soon.

(Refer Slide Time: 00:27:31)

These simple analog signals are usually periodic in nature, for example a sine wave.
Let me draw a sine wave. This sine
wave repeats, as you can see, after this time (Refer Slide Time: 24:45); this time T is known as
the time period. That means after time T the signal is the same as in the previous portion:
this part is repeated here, so this is another T. Mathematically, s(t + T) = s(t)
for t in the range minus infinity to plus infinity; that means whatever time it started from,
it is generated continuously, and whenever this holds we call it a
periodic signal with time period T (Refer Slide Time: 25:47).

Now a periodic signal can be fully characterized by three parameters. What are the three
parameters? They are amplitude, frequency and phase. What is
amplitude? As we can see, on this side we have plotted time and on this side we plot
amplitude, and this one is known as the peak amplitude A.

So we have seen the peak amplitude A, and the frequency f is given by one by T. That
means if T is the time period then the frequency is essentially 1/T, so f = 1/T, that is your
frequency. Now there is another parameter, phase.
Phase is actually expressed between two different signals, the relative position of a
signal with respect to another signal.
For example, let us assume this is time t = 0. Here the signal value is 0, so we
can say the phase is 0. Now let us discuss amplitude, frequency and
phase in more detail.

(Refer Slide Time: 28:00)

Amplitude is the value of the signal at different instants of time, measured in volts. For
example, let me represent a sine wave in terms of amplitude, frequency and phase. If we
consider a sine wave, let me represent the signal as s(t) = A sin(2 pi f t + phi).
This is the representation of a sine wave, and as you can see here A is the peak amplitude;
obviously with time the instantaneous amplitude varies, for example, as we have seen earlier, it
varies like this. Although A is the peak amplitude, the value varies with time
because of the sine factor multiplied with it; so this is the amplitude varying with
time. The frequency is the inverse of the time period and is usually measured in
hertz.

(Refer Slide Time: 29:55)


The amplitude is measured in volts, as you can see here; of course, as we shall see, there
are other units that we also use. The frequency is the inverse of the time period, as we have
seen, f = 1/T, and it is measured in hertz. The phase, as I told you, gives a measure of the
relative position of two signals in time, expressed in degrees or radians.
Let us see examples of signals with different amplitudes, frequencies and phases.

For example, this particular signal (Refer Slide Time: 30:49), as you can see, has got peak
amplitude 1 V, so A = 1 V; this part is 1 V, and within one second we
can see that one complete cycle is completed, so it has got frequency f = 1 Hz. And
as I told you, at time 0 the phase is 0, so phi = 0. Now here is another signal.

Here the peak amplitude, as you can see, is 0.5, and with respect to the previous one we can
compare, so A = 0.5; the frequency is the same, it repeats after one
second, and here also the phase is 0. Here, on the other hand, as you can see, within one second
there are two complete cycles, so the frequency f = 2 although the amplitude A = 1
and the phase phi in this case is 0. So here we have got amplitude equal to 1,
frequency equal to 2 and phase equal to 0.

(Refer Slide Time: 00:32:29)


Now in this diagram (Refer Slide Time: 32:39) we have a phase difference, as you can see,
with respect to the previous one. Everything is the same except the phase. It is not
starting at 0, that means the 0 V point is not at time 0, and there is a phase shift of about 45
degrees; phase is expressed in terms of degrees or radians, as we shall see. So in this
particular case A = 1 V, f = 1 Hz, because in one
second again one cycle is completed, and the phase is equal to 45 degrees. So here we have taken
examples with different amplitudes, frequencies and phases.
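As a small sketch, the three parameters can be plugged straight into s(t) = A sin(2 pi f t + phi); the sample instants chosen below are arbitrary, only meant to reproduce the example values discussed above.

```python
import math

def sine(A, f, phase_deg, t):
    """Value of A*sin(2*pi*f*t + phi), with the phase given in degrees."""
    return A * math.sin(2 * math.pi * f * t + math.radians(phase_deg))

print(sine(1.0, 1, 0, 0.25))    # A = 1 V, f = 1 Hz, phase 0      -> 1.0
print(sine(0.5, 1, 0, 0.25))    # A = 0.5 V                        -> 0.5
print(sine(1.0, 2, 0, 0.125))   # f = 2 Hz                         -> 1.0
print(sine(1.0, 1, 45, 0.0))    # 45 degree phase shift at t = 0   -> about 0.707
```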

Now, for the different parameters that we have discussed, amplitude, frequency and phase, let
us consider the other units that are commonly used.

(Refer Slide Time: 00:36:57)


For example, let us consider the amplitude first. It is in volts, but sometimes the
voltage can be very small; in such a case we use the millivolt, written mV,
which is 10 to the power minus 3 volts. Sometimes we
have to represent a high voltage, and then we can express it in kV, that means 10 to the
power 3 volts.

Similarly, frequency is measured in hertz. But sometimes the frequency can be very high,
and then we say kHz, kilohertz, that means 10 to
the power 3 Hz; or MHz, megahertz, 10 to the power 6 Hz; or
GHz, gigahertz, 10 to the power 9 Hz; or THz, terahertz, 10
to the power 12 Hz, and so on, since we are dealing with very high frequencies
nowadays.

Similarly, for the time period, or simply the period, the basic unit is the second, which
we represent by s; but sometimes we have to deal with very small times.
We may have to write millisecond, which is 10 to the power minus 3 second, or
microsecond, 10 to the power minus 6 second, or nanosecond, 10 to the power minus 9
second, or sometimes picosecond, 10 to the power minus 12 second. So these are
the various units for amplitude, frequency and time period. Similarly we have units for phase.

(Refer Slide Time: 00:38:21)


We have seen that within one cycle, that is within this range, the phase can vary
over 360 degrees. So from here to here (Refer Slide Time: 37:43), this is 90
degrees, this is 180 degrees, this is 270 and this is 360 degrees; or we can express it in radians, and in
that case a full cycle is 2 pi. Suppose the phase of another signal has been shifted like this,
so that here the phase is about 45 degrees; in that case it is 45 degrees, which can be
written as (2 pi / 360) into 45 radians.

So we have seen the various units used to represent the important parameters of a signal,
namely amplitude, frequency and phase. Later on we shall see that these are the
parameters that we vary for the purpose of communication through a medium.

Now, we have discussed the time domain representation of the signal: on the x axis
we have plotted time, and we have shown the variation of the amplitude with time. This can
be visualized with the help of an oscilloscope. The oscilloscope is the equipment that is
normally used to view the time domain representation of a signal, and we are very
familiar with it.

However, whenever you vary the amplitude, phase or frequency it becomes a composite
signal, and when that composite signal is shown on the oscilloscope it does not
convey much meaning to us; in such a situation it is better to have the frequency domain
representation of the signal.

(Refer Slide Time: 00:38:57)


Whenever a signal is complex by virtue of the variation of parameters like amplitude,
phase and frequency with time, it becomes a composite signal, and according to Fourier
analysis any composite signal can be expressed as a
combination of simple sine waves with different amplitudes, frequencies and phases.
That means we can express any composite signal in terms of sine waves having
frequency f1, frequency f2, frequency f3 and so on; in general it is an infinite series of many
frequencies. Now here is an example.

(Refer Slide Time: 00:41:14)

Here we see a sine wave, sin(2 pi f1 t), having frequency f1. Here is another signal; as we
can see, within the same interval we have got three cycles, so
it has got frequency 3f1, three times the first. Now if these two are added, that means we have
s(t) = sin(2 pi f1 t) + (1/3) sin(2 pi 3f1 t), with amplitude 1/3 for the second term and
assuming the phase is 0 for both, then combining these two we get a signal like
the one shown here. That means if we combine sin(2 pi f1 t) and (1/3) sin(2 pi 3f1 t) and
look at the result on the
oscilloscope in the time domain, we shall get this kind of diagram. This is the waveform we
shall get in the time domain.

However, if we analyze it with the Fourier series then we know that it has got two frequency
components, one at f1 and another at 3f1. So here we see the frequency domain
representation: on this axis you have got frequency, and the height of each line corresponds to the amplitude
of that frequency component; there is a line of amplitude 1 at f1 and a line of amplitude 1/3 at 3f1. So
here you see that the component with one third of the amplitude has got frequency 3f1.

How can we see this? This can be visualized with the help of a special piece of equipment known as
a spectrum analyzer. A spectrum analyzer is an instrument which can give us the frequency
domain representation of the same signal. This frequency domain representation, as
you can see, gives us information about the different frequency components the signal
contains, which is sometimes very useful to us. Now let us consider several examples of
time domain and frequency domain representations.
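As a rough numerical sketch (using NumPy; the fundamental frequency and sampling rate below are arbitrary choices), the two components of the composite signal above can be recovered with a discrete Fourier transform, which is essentially what a spectrum analyzer does:

```python
import numpy as np

f1, fs = 8, 256                                  # fundamental and sampling rate (assumed)
t = np.arange(fs) / fs                           # one second of samples
s = np.sin(2*np.pi*f1*t) + (1/3)*np.sin(2*np.pi*3*f1*t)

amplitude = np.abs(np.fft.rfft(s)) / (fs / 2)    # normalised amplitude spectrum
freqs = np.fft.rfftfreq(fs, d=1/fs)
for f, a in zip(freqs, amplitude):
    if a > 0.01:
        print(f, round(a, 2))                    # lines at f1 (1.0) and 3*f1 (0.33)
```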

(Refer Slide Time: 00:44:00)

So here we see that diagram A gives a signal over one second, and within one second how
many cycles are there? There are 1 2 3 4 5 6 7 8, so there are eight cycles. So
it has got frequency f = 8 Hz and time period 1/8 second. And this corresponds to 5
units, maybe 5 V, so if we have a look at the frequency domain representation we get a
line at frequency f = 8.

On the other hand, here is another signal which has frequency 16. If you count you will
get sixteen full periods within one second, so it has got time period
1/16 second. And as you can see, the frequency domain representation is given here
(Refer Slide Time: 45:11) and we get a line at that frequency with the same amplitude of 5.

Now, whenever we consider a digital signal or a composite analog

signal, as we have seen, it comprises many frequency components, and that is expressed in
terms of the frequency spectrum. The frequency spectrum of a signal is the range of
frequencies the signal contains.

(Refer Slide Time: 00:45:50)


For example, here is a square wave with time period T and frequency f, where f = 1/T. If
we expand this square wave with the help of the Fourier series, where A is the amplitude,
it has the representation

s(t) = (A/2 pi) sin(2 pi f t) + (A/6 pi) sin(2 pi (3f) t) + (A/10 pi) sin(2 pi (5f) t) + ...

The second term, (A/6 pi) sin(2 pi 3f t), can also be written as (A/6 pi) sin(6 pi f t), which
means its frequency is three times the fundamental. In this way it has frequency components
at 3f, 5f, 7f, 9f and so on. However, as you can see, the amplitude gradually decreases as
A/6 pi, A/10 pi and so on, so it keeps on falling, but the series contains infinitely many
frequency components. So it has all the odd frequency components starting from f and going on
indefinitely; the higher frequency components, known as harmonics, have lower and lower amplitudes.

So this is the third harmonic, this is the fifth harmonic, and it will have the seventh harmonic,
the ninth harmonic and so on, of increasingly lower amplitude. Now, whenever we
represent a signal we have seen that the frequency spectrum can be very large.
However, most of the frequency components having very small amplitudes are not of
not much use; that is why there is another parameter known as bandwidth, which represents
the range of frequencies over which most of the signal energy is contained. This can be the
effective bandwidth of the signal.
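A short sketch of this idea: keeping more harmonics of the square wave series widens the bandwidth used and improves the approximation; the leading constant A/(2 pi) simply follows the series quoted above, and the values printed are only illustrative.

```python
import math

def square_wave_approx(A, f, t, n_harmonics):
    """Partial sum of the odd harmonics 1, 3, 5, ... of the square wave series."""
    total = 0.0
    for i in range(n_harmonics):
        k = 2 * i + 1                       # odd harmonic number
        total += math.sin(2 * math.pi * k * f * t) / k
    return A / (2 * math.pi) * total

# more harmonics kept -> wider effective bandwidth -> sharper edges
for n in (1, 3, 10, 100):
    print(n, round(square_wave_approx(1.0, 1.0, 0.25, n), 4))
```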

(Refer Slide Time: 00:51:09)


Note the word most in that definition: the bandwidth is the range of frequencies over which
most of the signal energy is contained.

For example, suppose you are interested in determining the bandwidth. Suppose this is the
frequency domain representation, and this side is essentially the signal strength or amplitude;
the signal has got components at infinitely many frequencies in this range, so the spectrum of the
signal spreads from the lowest to the highest frequencies. However, for all practical
purposes we can remove this portion (Refer Slide Time: 51:35), and similarly the portion on
this side; if we remove these portions of the spectrum then what remains is the bandwidth.
So this is the bandwidth of the signal: from this part to this part is the bandwidth of the signal.

(Refer Slide Time: 00:52:08)


This is where the major part of the frequency components, or the energy, lies, so we call it
the bandwidth of the signal. Now, in the case of a digital signal, as you have seen, it can be
aperiodic in nature; it may not be periodic like a simple analog signal such as a sine
wave. In such a case it can have an infinite number of frequency components. In the context of a
digital signal there are two parameters, known as bit interval and bit rate, which we have to
discuss. The bit interval is the time required to send a single bit.

(Refer Slide Time: 00:52:23)

For example, this is a digital signal; say this is a 1, and these are the intervals, say 1 0 and
then 1 1, and each such interval is the bit interval; say this time t is the bit
interval. If the bit interval is t then the number of such bits that can be sent in one second
is essentially 1/t, and this is known as the bit rate, the number of bits
that can be sent in one second, expressed in bits per second. Obviously it can be
extended to kilobits per second when we are sending 10 to the power 3 bits per
second, megabits per second when we are sending 10 to the power 6 bits per
second, or gigabits per second for 10 to the power 9 bits per second, and so on. These are
the bit rates that are commonly used.
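The relation is simply bit rate = 1 / bit interval; a quick check with an assumed bit interval:

```python
bit_interval = 1e-6               # assume 1 microsecond per bit
bit_rate = 1 / bit_interval       # bits per second
print(bit_rate)                   # 1000000.0, i.e. 1 Mbps
```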

Whenever we are sending an analog signal the bandwidth is restricted within a
certain range. On the other hand, whenever we are sending a digital signal the bandwidth is
very large. This can be represented in two forms.

(Refer Slide Time: 00:56:44)

So, for example, a digital signal can have a spectrum like this; that means this is the
bandwidth, for digital, from 0 to a very large frequency. On the other hand, for an analog
signal we may restrict it within a certain range, so the bandwidth is essentially like a band pass,
whereas here it is like a low pass, as we can see, from 0 up to a certain high frequency. Because
of these characteristics, through one medium we can send only one digital signal, while on
the other hand we can send several analog signals, one analog signal, another
analog signal and so on, in a frequency division multiplexing
manner. So this is the frequency axis and several such signals can be sent through the medium.
A digital signal, on the other hand, cannot be sent this way.

(Refer Slide Time: 00:56:57)


Now there is another important parameter, that is, propagation time. It is the time required
for the signal to travel from one point of the transmission medium to another; it is essentially
the distance divided by the propagation speed. And there is a related parameter, wavelength: it
is the distance occupied in space by a single period. Obviously there is a relationship between
the two: the wavelength is equal to the propagation speed multiplied by the period, or
equivalently the propagation speed divided by the frequency. Let me explain the relation
between speed, frequency and wavelength with the help of an example.

(Refer Slide Time: 00:59:19)

For example, the speed of an electromagnetic signal in free space is 3 into 10 to the power 8
m per second. The wavelength lambda is equal to c divided by the frequency, the speed of
light by the frequency. For example, the frequency of red light is 4 into 10
to the power 14 hertz; what is its wavelength?

The wavelength of red light is lambda = 3 into 10 to the power 8 divided by 4 into
10 to the power 14 m, which is 750 nanometers. So you see that the frequency is very
high but the wavelength is small for red light. This is how the speed, frequency and wavelength
can be correlated, and this is the speed in free space. Obviously, whenever the signal is sent
through some guided medium, as we shall see, the speed will be smaller, say about
2 into 10 to the power 8 meters per second.
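The red light example works out as follows, as a direct computation using nothing beyond the numbers quoted above:

```python
c = 3e8                  # propagation speed in free space, m/s
f = 4e14                 # frequency of red light, Hz
print(c / f)             # 7.5e-07 m, i.e. 750 nanometres
```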

So we have covered different topics related to data and signal.

Here are the review questions on this particular lecture. The first question is: distinguish
between data and signal.

(Refer Slide Time: 00:59:33)

What are the three parameters that characterize a periodic signal?

Distinguish between time domain and frequency domain representation of a signal.

What equipment is used to visualize electronic signals in the time domain and the frequency
domain?

Distinguish between spectrum and bandwidth of a signal.

These questions will be answered in the next lecture, and we end this lecture with the
answers to the questions of lecture 2.

(Refer Slide Time: 01:00:15)


(Refer Slide Time: 01:00:48)

(Refer Slide Time: 01:00:55)


First question is why it is necessary to have layering in a network.

The answer is: as we know, a computer network is a very complex system and it becomes very
difficult to implement as a single entity. The layered approach divides a very complex
task into small pieces, each of which is independent of the others, and it allows a structured
approach in implementing a network.

This is the answer to the questions of lecture number 2, and with this we come to the end of
lecture 3, thank you.
Data Communications
Prof. Ajit Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture # 04
Transmission Impairments and Channel Capacity

Hello viewers, welcome to today's lecture on transmission impairments and channel capacity.

(Refer Slide Time: 00:01:00)

This is the fourth lecture in the lecture series on data communication. On completion of
this lecture the students will be able to specify the sources of impairment as the signal
passes through a channel, explain attenuation and the unit of attenuation, the decibel, specify
possible distortions of a signal as it passes through the medium, explain data rate limits
and the Nyquist bit rate arising because of the bandwidth limitation, distinguish between bit
rate and baud rate, identify noise sources, and finally explain the Shannon
capacity of a noisy channel, that is, the maximum information that
can be passed through a noisy channel.
(Refer Slide Time: 1:05)

The topics that I shall cover in this lecture are sources of impairment, attenuation and unit
of attenuation, bandwidth of a medium, various kinds of distortions that can take place as
the signal passes through a medium, data rate limits, Nyquist bit rate, bit rate and baud
rate, noise sources and finally Shannon capacity in a noisy channel.

As we know to send data you have to convert it into a signal either analog or digital then
that signal has to be passed through a medium it can be a simple medium or a complex
communication system. Whatever it may be the transmitter and receiver will be linked by
some medium. And unfortunately the medium that we use is not ideal.

By that what do we mean?

Normally by ideal we mean that whatever is sent by the transmitter should be received by
the receiver but because of the limitation of the medium that will not happen there will be
some impairments which we shall discuss.
(Refer Slide Time: 2:10)

(Refer Slide Time: 2:40)

The imperfections of the medium cause impairments in the signal. What are the possible
impairments? First is attenuation then distortion and noise. We shall discuss each of these
impairments one after the other. First let us consider attenuation.
(Refer Slide Time: 00:05:50)

Attenuation can be considered as the loss of energy as the signal passes through a
medium. We know from our basic knowledge of physics that as a signal passes
through a medium its intensity falls as the square of the distance. We express
that with the help of a suitable unit. Normally the unit that we use is known as the decibel, dB.
Here dB is equal to 10log10 (P2/P1), where P2 is the power at the destination, or point 2,
and P1 is the power at the source, that is at the transmitting end, or point 1. That means
P2 is the received power at the destination and P1 is the transmitted power from the
source, so the ratio P2/P1 is taken as a relative measure of the loss of energy and it is
expressed as 10log10 (P2/P1); this decides how far the signal can go without
amplification. From our basic knowledge we know that whenever the signal level is low
we can increase the signal level by a technique known as amplification,
and whenever we use amplification obviously the signal level can be raised.
(Refer Slide Time: 7:00)

As shown in this diagram for example this is the point P1 from where the signal is
passing and it is going to point P2 and as you can see this was the signal sent and at point
P2 it is very much attenuated. Now we use an amplifier between point 2 and point 3 and
we get an amplified version of the signal.

So an amplifier can be used to compensate the attenuation of the medium. And as I
mentioned, the decibel is a measure of the relative strengths of two signals, that means the signal
at the destination and the signal at the transmitter. If P1 and P2 are signal strengths at two
different points then the relative strength at the second point with respect to the first point in
dB is, as I explained, dB is equal to 10log10 (P2/P1). So, as you can see, here P1 is the
power, here P2 is the power and here P3 is the power at this point.
(Refer Slide Time: 7:03)

Now let me explain this with the help of an example. Let the signal strength or power at
point 2 be 1/10th with respect to point 1. Then the attenuation in dB is 10log10 (1/10), that is
minus 10 dB. It may be noted that a loss of power is represented by a negative sign. So,
whenever there is attenuation, by looking at the decibel value we
can find out the level of attenuation, and its sign tells us that it is attenuation.

On the other hand, let the gain as the signal passes through an amplifier be a hundred times at
point 3 with respect to point 2. Then the gain in dB is 10log10 (100/1), that is 20 dB. In this
case, as we find, it is positive.
case as we find this is positive.

So whenever we amplify we get a value which is positive, and whenever attenuation
occurs we get a value which is negative. Now, as the signal passes through several points,
in other words when a channel is in cascade with an amplifier, and maybe from there
another channel, and so on, the stages can be cascaded. What we can do is add up
the dB values to find out the final attenuation or gain at point 3 with respect to point 1. For
example, in this case the signal strength at point 3 with respect to point 1 can be obtained
by adding the attenuation between points 1 and 2 and the amplification between points 2 and 3,
which is equal to minus 10 plus 20, that is 10 dB.
(Refer Slide Time: 9:10)

So here with respect to point 1 the amplification is 10 dB. We find that finally, if we consider
from the first point to the last, it is not attenuation but a gain of 10 dB. In this way
the decibel values can be added whenever we have a number of devices or channels in
cascade to find out the final value of attenuation or amplification, whatever it may be.
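The decibel arithmetic used above can be sketched in a few lines of Python; the numbers are the ones from the example, and the helper function is of course only an illustration:

import math

def db(p2, p1):
    # relative strength of point 2 with respect to point 1 in decibels
    return 10 * math.log10(p2 / p1)

attenuation = db(1, 10)      # power falls to 1/10  -> -10.0 dB
gain = db(100, 1)            # power amplified 100x -> +20.0 dB
print(attenuation + gain)    # cascaded stages simply add -> 10.0 dB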

Now, as we know we are interested in sending data through a medium and we want to
send it as fast as possible. Or in other words we want to send it at a high speed.
Obviously we want the maximum possible speed. But at what speed we can send will
depend on several parameters of the medium. What are the parameters of the medium?
Let us see.
(Refer Slide Time: 9:40)

First one is the bandwidth of the channel, number of levels used in the signal, noise level
in the channel. So these three factors we shall consider one after the other to understand
how fast data can be sent through a medium or channel and because of these parameters it
is restricted. First of all let us consider the term bandwidth.

(Refer Slide Time: 10:35)

In the last lecture we have considered the bandwidth of a signal. What is the bandwidth
of a signal?
The bandwidth of a signal is the range of frequencies where most of the signal energy lies. Here it is
somewhat different. Here bandwidth refers to the range of frequencies that a medium can
pass without a loss of more than half of the power, that is minus 3 dB, contained in the signal. So
you see that the term bandwidth is used to refer to the bandwidth of a signal, and it also refers
to the bandwidth of a channel. Whenever we refer to the bandwidth of a signal we
refer to the major frequency components. On the other hand, whenever we refer to a
medium we mean the range of frequencies that can be sent without much attenuation
through the channel.

For example, in this case this is the amplitude (Refer Slide Time: 12:03) and that essentially
reflects the attenuation or gain. Here we find that the attenuation is less in this part
and on both sides there is higher attenuation, that means the amplitude is less; the power
falls to half at frequency fl and again to half at frequency fh. So on both sides of this mid
point the power level goes down to half, and the bandwidth of the channel
can be considered as fh minus fl. This is how the bandwidth of the channel is represented.

And as you know the frequency components of a digital signal vary from zero to almost
infinity. However, at higher frequencies the signal levels become gradually lower and lower.
A digital signal, which is usually aperiodic rather than periodic in nature,
requires a bandwidth from 0 to infinity, so it needs a low pass channel; low pass means
starting from 0 to some frequency f. So we require a channel which will not attenuate frequencies
starting from 0, that means dc, up to almost infinity, a very high frequency,
if we want to send all the frequency components of a digital signal.
So the bandwidth of a medium decides the quality of the signal at the other end. If we
restrict the bandwidth somewhere here then some of the frequency components will not
reach the other end (Refer Slide Time: 14:06).

On the other hand whenever we send analog signal then we can send it through a band
pass channel f1 to f2.
(Refer Slide Time: 13:00)

(Refer Slide Time: 14:03)

So this is the frequency range of a band pass channel, and this band pass channel can be
used to send an analog signal because analog signals have bandwidth within a certain
range, say from a lower frequency to an upper frequency.

Later on we shall see that certain techniques generate signals confined to such a band. That means
they can be passed through a band pass channel, and this has other consequences, as I mentioned in
the last lecture. This helps us send several signals through one channel: you can
send one signal between f1 and f2, another signal between f3 and f4 and so on. Several
signals can be sent simultaneously, which we call frequency division multiplexing.
We shall discuss it in more detail later on.

Now let us consider distortion.

(Refer Slide Time: 00:16:42)

As we have seen, one important phenomenon that occurs is attenuation; apart from
attenuation, what else occurs? Whenever attenuation occurs it is assumed that all frequency
components are attenuated uniformly. However, that does not happen in practice. It has
been found that the attenuation of all frequency components is not the same: some frequencies
are passed without attenuation, some are weakened and some are completely blocked. So
there are three situations, and this leads to what is known as distortion. That means
what the transmitter is sending and what the receiver is getting at the other end of the medium
are not the same. In such a situation we say that the signal is distorted
or that it has suffered distortion.

For example this is an input signal (Refer Slide Time: 16:36) and this is the medium
through which the signal is being sent. And as it is sent through the medium at the other
end of the output we get the output signal which is much different from the input signal
and this will be decided by the bandwidth of the channel or medium. Let me take an
example and explain the effect of signal passing through a band limited channel.
(Refer Slide Time: 19:15)

Suppose we are sending a digital signal 0 1 0 0 0 0 1 0 0 which has a bit rate of 2000 bps;
the bit rate is two thousand bits per second (Refer Slide Time: 17:25), it is a
2000 bps signal we are sending. So this corresponds to the fundamental frequency of the digital signal.
Obviously, apart from the 2000 Hz fundamental it will have various other
harmonics like 3000 Hz, 4000 Hz, 5000 Hz; all the frequencies will be present, however
they will be of lower and lower amplitude.

Now as we pass this signal through a transmission medium having a bandwidth of exactly
2000 Hz we get a signal like this. This is completely different from what was
sent. It was a rectangular wave but what we are getting here is a sine wave. That means
only the fundamental is passing through the medium; the higher
frequency components are not passing. So here the bandwidth of the medium is two thousand hertz.

Now we increase it a little bit, say to 3600 Hz, and we find that the signal is somewhat like this.
That means another harmonic, the 3000 Hz component, has passed
through the medium. When we increase the bandwidth to 5200 Hz some more harmonics
pass. When we increase the bandwidth to 6800 Hz we find it quite close to the
original signal, but definitely not the same.

Now we increase the bandwidth to 10000 Hz. We find a signal which is very close to the
original signal. And now when the bandwidth is 16000 Hz we get a very good quality
signal that means this will pass up to eighth harmonic and as a consequence the signal we
receive at the other end after passing through the medium will be very close to the
original signal that we have sent. So with this diagram I have explained how a signal gets
distorted as it passes through a band limited channel.
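To see numerically how a wider bandwidth lets more harmonics through, here is a small Python sketch; it uses an idealized periodic square wave (a simplification, not the exact 0 1 0 0 0 0 1 0 0 pattern used in the lecture), with a 2000 Hz fundamental:

import math

def band_limited_square(t, f0, bandwidth):
    # partial Fourier sum of an ideal square wave of fundamental f0,
    # keeping only the odd harmonics that fall within the given bandwidth
    s = 0.0
    n = 1
    while n * f0 <= bandwidth:
        s += (4 / math.pi) * math.sin(2 * math.pi * n * f0 * t) / n
        n += 2
    return s

# with more bandwidth, more harmonics survive and the wave looks more rectangular
for bw in (2000, 6000, 16000):
    samples = [round(band_limited_square(k / 16000, 2000, bw), 2) for k in range(8)]
    print(bw, samples)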
Now, apart from attenuation and bandwidth limitation there are other kinds of
distortion the signal will suffer. One of them is known as attenuation distortion. Why
does attenuation distortion occur? If the attenuation of the medium varies with frequency this
leads to attenuation distortion. Let me explain with the help of a voice grade telephone
line and draw a diagram for it.

(Refer Slide Time: 23:15)

Here the voice grade telephone line has a bandwidth from 0 to maybe 4000 Hz. This is
your 2000 Hz, this is your 1000 and this is your 3000. And on this side we draw the
attenuation; this is 0, say this is minus 10 dB and this is plus 10 dB. The
bandwidth of the medium, that means the voice grade telephone line, in this case can be
represented like this. Here the attenuation is shown relative to 1000 Hz, that means with respect to
1000 Hz it is 0, so the curve will be somewhat like this.

Therefore, with respect to one thousand hertz, attenuation is increasing at lower frequencies
and attenuation is increasing at higher frequencies. So the higher frequency components
will be attenuated more than the frequencies around this region. That means the lower
frequency components as well as the higher frequency components will be attenuated more
than this central part. What is the way out of this? How can we overcome this problem?

We can overcome this problem with the help of a device known as an equalizer. We can
put an equalizer to change the characteristic to somewhat like this.

For example, this is the transmitter and this is the medium through which the signal is
sent (Refer Slide Time: 22:38) so at the end of the medium we put the hardware known
as equalizer and then the output of the equalizer is fed to a receiver. Now we see that
medium and equalizer together is giving a bandwidth like this. And obviously in this case
whenever the bandwidth is like this then the distortion is much less compared to this one.
That means this particular bandwidth we get only with equalizer.
Therefore, in the case of attenuation distortion, the problem is less severe for a
digital signal and more severe for an analog signal. The reason is that for a digital signal
most of the energy is concentrated near the fundamental
and the lower harmonics; at higher harmonics less and less energy is present. On the
other hand an analog signal can have frequency components over the entire range.

So whenever an analog signal is attenuated in some frequency range we get more distortion,
whereas in the case of a digital signal the problem is less severe. That is why this kind of
equalizer is used in the case of a voice grade telephone line, to correct the bandwidth
characteristic of the medium and get a better signal at the receiving end.

Now let us consider another type of distortion that occurs that is known as delay
distortion. And this delay distortion arises particularly in a guided media but not in air.
Later on we shall discuss about the different types of transmission media. We will be
having two types of transmission media guided and unguided. So in case whenever the
signal is passed through a guided media like twisted pair cable, coaxial cable or optical
fiber it leads to delay distortion.

What is delay distortion?

Delay distortion arises because the velocity of propagation varies with frequency. That
means the signal that we are sending will have different velocities for its
different frequency components as it passes through a guided medium, and this leads to
delay distortion. Again let us take up the example of the voice grade telephone line and
in this case consider the delay.

(Refer Slide Time: 00:27:33)


Earlier we considered the attenuation; here we consider the delay. On this side is the
delay, varying from 0 to 4000 microseconds, and on this side is the
frequency in hertz; zero to four thousand hertz is the bandwidth of the medium
(Refer Slide Time: 26:05). Now it has been found that in the middle part the delay
is less, that means the velocity is more, and on either side of the frequency
range the delay is more, or the velocity is less. So the lower and upper
frequency components will reach later than the middle frequency components. That means
the frequency components in the middle range will reach earlier than the frequency components
in the lower and upper ranges. It may so happen that the lower and upper frequency
components of the previous signal element fall on the middle frequency components of the
present signal element; this leads to distortion. Again this effect can be minimized with the
help of an equalizer, and after using an equalizer the characteristic can be somewhat
like this, so this is with the equalizer.

With the equalizer the characteristic is much better. That means the delay distortion is much less
and the delay characteristic across the band is much flatter. In this
particular case the digital signal is more affected: as we know a digital signal has
many high frequency components, so we find that the delay distortion affects a digital
signal more than an analog signal. This is opposite to the case of attenuation distortion. Now
let us consider the Nyquist bit rate.

(Refer Slide Time: 28:05)

As we have already mentioned, the quality of the signal received at the other end depends on the
bandwidth of the channel and the noise in the channel. Now let us
consider a noiseless channel; we assume that there is no noise, although that is not
true in practice. For such an ideal channel, that means one without noise, the maximum bit rate is
given by the Nyquist bit rate, C = 2 B log2 L, where C is known as the channel capacity,
B is the bandwidth of the channel and L is the number of signal levels.
Here we are taking bandwidth and number
of signal levels into consideration. Later we shall see what we really mean by number
of signal levels. So in this case
the channel capacity depends on two parameters: the bandwidth and the number of signal levels
that we are sending.

(Refer Slide Time: 31:30)

Actually, to understand the number of signal levels another parameter known as baud rate
has to be understood. Let us see what we really mean by baud rate.

The baud rate of a signal, or the signaling rate, is defined as the number of distinct
symbols transmitted per second irrespective of the form of encoding. Whenever we are
sending a digital signal we know the bit rate and the bit interval. Now, within a single signal
interval it is possible to send one of a number of distinct symbols, and how much information
that carries depends on how many distinct levels are used within the interval. If
multiple levels can be sent within an interval then we can send more information.

For example, in the case of baseband digital transmission where the number of levels is two,
within a bit interval we can send either 0V or 1V. So we have two distinct
values or two levels, a zero level and a one level, and in this case L is two. Suppose instead it is
possible to send 0V, 0.5V, 1V or 2V. Taking these four values,
the number of levels is not 2 but 4; that means within
a signal interval we can send one of the four values 0V, 0.5V, 1V or 2V, so in that case L is
equal to 4. So the amount of information that each signal element carries depends on how many
levels are present.

So, for digital transmission, the baud rate is equal to 1 divided by the signal element width in
seconds, and the maximum capacity is equal to 2B because here L is equal to 2. This is the case
for digital transmission with only two levels.
(Refer Slide Time: 32:10)

The bit rate or information rate is the actual equivalent number of bits transmitted per
second. Whenever we are able to put a larger number of levels
within a particular signal interval we can send more information for a given bandwidth.
The information rate I is equal to the baud rate multiplied by the number of bits per baud,
that is, I = baud rate into N = baud rate into log2 M, where M is the number of levels.
Baud rate means the number of signal elements per second and bit rate is the actual number
of bits we are sending.

We find that for binary encoding the bit rate and the baud rate are the same, because here
the value of M is equal to 2 and log2 2 is 1. That means the information rate or bit rate
is the same as the baud rate in the case of binary
encoding or digital encoding. But for maximal use of the channel we can use multilevel
encoding. That allows us to send more information through a communication
medium.
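A small sketch of the bit rate and baud rate relation, using assumed illustrative numbers (a 2000 baud signal with 4 levels), could look like this in Python:

import math

baud_rate = 2000                           # signal elements per second (assumed)
levels = 4                                 # distinct levels per element (assumed)
bit_rate = baud_rate * math.log2(levels)   # I = baud rate x log2(M)
print(bit_rate)                            # 4000.0 bps, i.e. 2 bits per baud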
(Refer Slide Time: 34:50)

Let me consider an example of the telephone channel having a bandwidth of 4 KHz, as we have
already discussed.

Assuming that there is no noise, determine the channel capacity for the following
encoding levels: level 2 and level 128. When the
encoding level is two we get a channel capacity equal to 2B, that is 8
Kbps. On the other hand we can use 128 different
levels for encoding; how multilevel encoding can be done we shall discuss in detail
later on.

Then we can send 2 into 4000, that is twice the bandwidth, into log2 128, which gives us
8000 into 7, or 56 Kbps. So you see that by suitable multilevel
encoding, through a channel having a bandwidth of only 4 KHz we can send a very high
data rate of 56 Kbps; that is precisely what is done in the case of a modem, which can send
up to 56 Kbps. So this is how we can send
more information.
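The Nyquist bit rate used in this example can be checked with a short Python sketch (only an illustration of the formula C = 2B log2 L):

import math

def nyquist(bandwidth_hz, levels):
    return 2 * bandwidth_hz * math.log2(levels)

print(nyquist(4000, 2))     # 8000.0 bps, i.e. 8 Kbps
print(nyquist(4000, 128))   # 56000.0 bps, i.e. 56 Kbps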
(Refer Slide Time: 37:05)

So far we have neglected the effect of noise, but in practice the channel
will always have some noise. When there is noise present in the medium the
limitations of both bandwidth and noise must be taken into consideration. So far we have
only taken into consideration the limitation due to bandwidth; now let us take up the
limitation due to noise.

A noise spike may cause a given level to be interpreted as a signal of a greater level if the
noise is positive, or of a smaller level if it is negative. That means, suppose you are
sending a voltage level of 0.5V and there is a noise of plus 0.5V; that will make the signal
level equal to 1V, so although 0.5V was sent we receive 1V at the receiving end.
There it will be interpreted as a signal of a different level and we shall get
incorrect data.

On the other hand, if the noise is negative, say the signal is plus 5V and the noise is
minus 5V, then together they make 0V, so we shall get another signal level which
is also incorrect. In this way, because of noise we may not get correct data.
Noise becomes particularly problematic as the number of levels increases. If we
have only two levels, zero and one, represented by zero volt and plus five volt,
then the problem is less; we can tolerate up to maybe plus or minus two volts of noise.

On the other hand, when we are using four levels we divide 5V into four equal parts, and
it will be more susceptible to noise. If the number of levels is 128, then
0 to 5V is divided into 128 different values and a small noise will change the
transmitted signal level, so at the receiving end we shall get an incorrect voltage level.
Thus it becomes more problematic as the number of levels increases. To quantify the noise
level a parameter known as signal to noise ratio is used.
Let S be the average signal power and N the average noise power. Then the signal to noise
ratio is equal to the average signal power divided by the average noise power, and in decibels
it is 10log10 (S/N).

(Refer Slide Time: 38:04)

(Refer Slide Time: 38:40)

Now, for a given signal to noise ratio, the Shannon capacity of a noisy channel gives
the highest data rate; it takes into consideration both the bandwidth and the signal to noise
ratio. The Shannon capacity is C = B log2 (1 + S/N), where
S/N is the signal to noise ratio. We find that in the case of an extremely noisy channel C is
equal to 0, because log2 1 is equal
to 0. That means whenever we try to send data through a very noisy channel, irrespective
of its bandwidth we may not be able to send any data. If
the noise level is very high compared to the signal level then we cannot send any data
through it, that means the channel capacity is 0.

Now we have two different limits; one is the Nyquist bit rate, which gives us a channel
capacity based on bandwidth and the number of levels used for encoding, and the other
is based on bandwidth and the signal to noise ratio. Between the Nyquist bit rate and
the Shannon limit, the result providing the smallest channel capacity is the one that
establishes the limit. That means we may compute the Nyquist bit rate and the Shannon
capacity, find out which one is lower, and the lower one has to be taken as the
channel capacity, or the maximum information rate that can be sent through the medium.

Let me take few examples here particularly in case of noisy channel.

(Refer Slide Time: 40:50)

Here the channel has a bandwidth of 4 KHz as earlier. Now determine the channel
capacity for each of the following signal to noise ratios: 20 dB, 30 dB and 40 dB. We
find that 20 dB corresponds to a signal to noise ratio of 100, which gives a Shannon capacity of
26.6 Kbps. For 30 dB the signal to noise ratio is
1000 and it gives a channel capacity of 39.8 Kbps, and for a signal to noise ratio of 40 dB
we get a channel capacity of 53.1 Kbps. As the signal to noise ratio gets higher and higher
we can achieve a higher channel capacity.
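These Shannon capacity values can be reproduced with a small Python sketch; note that it keeps the "1 +" term, so the results differ very slightly from the rounded figures quoted above:

import math

def shannon(bandwidth_hz, snr_db):
    snr = 10 ** (snr_db / 10)          # convert dB back to a power ratio
    return bandwidth_hz * math.log2(1 + snr)

for snr_db in (20, 30, 40):
    print(snr_db, round(shannon(4000, snr_db) / 1000, 1))   # about 26.6, 39.9 and 53.2 Kbps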
(Refer Slide Time: 42:10)

Now let us consider a situation where we have a channel with a bandwidth of 4 KHz
and a signal to noise ratio of 30 dB, and we have to determine the maximum information rate for
four level encoding. Here the number of levels is 4. Based on the Nyquist bit rate, with B
equal to 4 KHz and four level encoding we get 2 into 4000 into log2 4, that is 8000 into 2,
which gives 16 Kbps.

On the other hand, if we take into consideration the signal to noise ratio of 30 dB, we
get a Shannon capacity of 39.8 Kbps. So we find two values: one value is 39.8
Kbps, which is the maximum capacity that is possible; however, with four level
encoding we can achieve only 16 Kbps. The smaller of the two values has to be taken as
the information capacity. So in this particular case, with B equal to 4
KHz, a signal to noise ratio of 30 dB and four level encoding,
the information capacity will be equal to 16 Kbps. In
this way we can find out the information capacity in a particular situation. Let us
take another example.
(Refer Slide Time: 44:05)

Here a channel has a bandwidth B of 4 KHz and a signal to noise ratio of 30 dB, and we
would like to find out the maximum information rate for 128 level encoding. Based on the
Nyquist bit rate, with B equal to 4 KHz and M equal to 128 levels we get 56 Kbps. On the
other hand, because of the signal to noise ratio of 30 dB we get a Shannon capacity of
39.8 Kbps. So here we are using a large number of levels, 128, which could provide 56 Kbps,
but because of the limited signal to noise ratio the information capacity will be restricted
to 39.8 Kbps; we shall not get 56 Kbps but 39.8 Kbps. Again we have to take the
smaller of the two. Here the capacity is limited by the signal to noise ratio, whereas in the previous
case it was limited by the Nyquist bit rate.
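The rule of taking the smaller of the two limits can be written compactly; this is just an illustrative sketch using the numbers of the last two examples:

import math

def nyquist(b, levels):
    return 2 * b * math.log2(levels)

def shannon(b, snr_db):
    return b * math.log2(1 + 10 ** (snr_db / 10))

b = 4000                                       # bandwidth in Hz
print(min(nyquist(b, 4), shannon(b, 30)))      # 16000: limited by the Nyquist bit rate
print(min(nyquist(b, 128), shannon(b, 30)))    # about 39869: limited by the Shannon capacity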
(Refer Slide Time: 45:26)

Now we can consider another example. A digital transmission system is to be designed to permit 160
Kbps, that means we have to send 160 Kbps over a bandwidth of 20 KHz. Determine the
number of levels and the signal to noise ratio. Here the problem is posed a little differently:
the bandwidth of 20 KHz and the desired data rate are given, and we have to
find out the number of levels and the signal to noise ratio. So applying the Nyquist bit rate
C = 2B log2 M to determine the number of levels, we get 4
bits per baud, that means a single signal element has to represent four bits, so the
number of levels is 16.

Applying the Shannon capacity formula we get the required signal to noise ratio, 24.07 dB. So
with a signal to noise ratio of 24.07 dB and 16 levels
we can achieve 160 Kbps data transmission through a channel having a bandwidth of 20
KHz. This example tells us how we can design a transmission system, choosing a
suitable number of encoding levels and a suitable value of signal to noise ratio, so that we
can achieve a desired data rate.
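The same design calculation can be sketched in Python; the formulas are simply inverted, and the specific numbers are the ones from the example:

import math

b = 20_000          # bandwidth in Hz
target = 160_000    # desired bit rate in bps

bits_per_baud = target / (2 * b)     # from C = 2B log2(M)
levels = 2 ** bits_per_baud          # 16 levels, i.e. 4 bits per signal element

snr = 2 ** (target / b) - 1          # from C = B log2(1 + S/N)
snr_db = 10 * math.log10(snr)
print(levels, round(snr_db, 2))      # 16.0 and 24.07 dB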

I believe our discussion will not be complete if we don’t discuss about the types of noise
that is present in the medium. There are several types of noise that may corrupt a signal
and the most common noise types are discussed here.
(Refer Slide Time: 50:05)

The first one is thermal noise. What is thermal noise?

Normally, if we look at a conductor, a copper wire, apparently it is a very ordinary object;
there is no movement, nothing is taking place. However, if we look at the
molecular level we know each molecule is vibrating within a range and all the free
electrons are moving around the conductor, so there is a lot of movement. This movement
of the electrons, which are responsible for conducting
current, leads to noise known as thermal noise. The thermal noise power is N = kTB, where
k is the Boltzmann constant, T is the absolute
temperature in kelvin and B is the bandwidth of the channel, so we can see that the higher the
bandwidth the higher the noise.

Another very important point we must notice is that the thermal noise increases with
temperature. As the temperature increases more and more noise increases. That is the
reason why during summer the quality of signal we receive is poorer than in winter.
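A rough feel for the magnitude of thermal noise can be obtained from N = kTB; the temperature and bandwidth below are assumed values chosen only for illustration:

k = 1.38e-23    # Boltzmann constant, J/K
T = 300         # absolute temperature in kelvin (assumed room temperature)
B = 4000        # channel bandwidth in Hz (assumed)
N = k * T * B   # thermal noise power in watts
print(N)        # about 1.66e-17 W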

The second type of noise is known as intermodulation noise. Intermodulation occurs when
signals of different frequencies share the same medium. Suppose a medium carries a
signal of frequency f1 and also a signal of frequency f2. Because these two signals are present,
nonlinearities in the transmission system may generate a component like f1 plus f2. This
component will corrupt any legitimate signal that occupies the frequency f1 plus f2 in the
original signal. That means the noise generated because of intermodulation will be added
to the signal if it has a frequency component at f1 plus f2. So intermodulation occurs when
signals of different frequencies share the same medium,
and this is an example of how new signal components are generated from two frequency
components.
The third type of noise that we commonly encounter is known as crosstalk. Crosstalk
is due to unwanted coupling between two media. In the next class we shall
discuss the various types of transmission media. There we will see that, particularly in
telephony and many other applications, a number of cables are bunched together and sent
from one place to another. Whenever a number of cables run side by side,
the signal passing through one cable induces a signal in another cable, and this leads to
crosstalk. In our telephone network crosstalk is a daily phenomenon; we frequently
hear some unwanted talk going on in the background. This is because
of this unwanted coupling between two transmission media, that means cables.

Impulse noise: This arises due to disturbances such as lightning and electrical sparks.
It is not generated in the medium; the environment generates it. For
example, in the rainy season there is lightning, or in an industrial environment there are
electrical sparks from welding or from switches being turned on and off, and such events lead
to radiation of electromagnetic signals that may
corrupt the signal passing through the medium. This affects a digital signal more severely
than an analog signal, because a 0 may become 1 or a 1 may
become 0 in a digital signal. In the case of an analog signal it will not matter as
much: the signal level will possibly change, and if we are listening to voice we will hear
some disturbance, but that error is tolerable. In the digital domain this error leads to
corruption of data.

With this we complete our discussion on the transmission impairments and channel
capacity because of the limitation of the channel. Let us give some review questions
which will be answered in the next lecture.

(Refer Slide Time: 00:54:44)


1) Let the signal strength at point 2 be 1/50th with respect to point 1. Find out the
attenuation in dB at point 2 with respect to point 1.

2) Assuming there is no noise determine channel capacity for the encoding level 4 and
bandwidth is equal to 4 KHz.

3) A channel has a bandwidth of 10 MHz. Determine the channel capacity for a signal
to noise ratio of 60 dB.

4) A digital transmission system is to be designed to permit 56 Kbps over a bandwidth of 4 KHz.
Determine the number of levels required and the signal to noise ratio.

These are the four questions; the answers will be given in the next
lecture. And here are the answers to the questions of the previous lecture.

(Refer Slide Time: 00:55:12)

Distinguish between data and signal.

Data is an entity which conveys some meaning. On the other hand a signal is a
representation of data in some electromagnetic or optical form. So whenever data needs
to be sent it has to be converted into a signal of some form for transmission over a suitable
medium.
(Refer Slide Time: 00:55:35)

2) What are the three parameters that characterize a periodic signal?

As we know the three parameters are amplitude, frequency and phase. Amplitude is
represented by A, frequency is represented by F and phase is represented by phi.

(Refer Slide Time: 00:55:52)

3) Distinguish between time domain and frequency domain representation of a signal.

Whenever a signal is represented as a function of time it is called a time domain
representation. An electromagnetic signal can be either continuous or discrete and it is
represented by s(t). Whenever a signal is represented as a function of frequency it is
called a frequency domain representation; it is expressed in terms of different frequency
components and represented by s(f).

(Refer Slide Time: 00:56:25)

4) What equipments are used to visualize electronic signals in time domain and frequency
domain?

For time domain representation we use cathode ray oscilloscope and for frequency
domain representation we use spectrum analyzer.

5) What is the round trip propagation time between earth and a geo-stationary satellite?

As we know the distance is 36000 Km and the speed of light is 3 into 10 to the power 8
meter per second, so from that we find that the round trip propagation delay is about a
quarter of a second, a propagation time of 0.24 second, which is a long time. Later on we
shall discuss about it more. For the time being, thank you.
Data Communications
Prof. Ajit Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture # 05
Guided Transmission Media

Hello and welcome to today's lecture on guided transmission media.

(Refer Slide Time: 00:00:57)

On completion of this lecture the students will be able to classify various transmission
media, and they will be able to distinguish between guided and unguided media. They will
be able to explain the characteristics of the popular guided transmission media such as
twisted pair, coaxial cable and optical fiber. Here is the outline of the lecture. I shall give
a brief introduction.
(Refer Slide Time: 00:01:05)

In that brief introduction I shall try to explain the need for studying the guided
transmission media. Then we shall discuss about the transmission media classes, we shall
consider different classes of transmission media and we shall study in detail the
characteristics and applications of twisted pair cable, coaxial cable and optical fiber
cable. These are the three guided transmission media which will be studied in this lecture.

(Refer Slide Time: 1:25)


(Refer Slide Time: 00:02:20)

First let us look back. As you know the transmission media or communication media
actually between the transmitter and receiver provides the physical path. So here you
have got the transmitter and here you have got the receiver and in between you have got
the transmission media and the entire thing this part is the complete data communication
system. So communication media is very important for communication between source
and destination.

In one of the earlier lectures we have seen the Nyquist bit rate formula that provides the
channel capacity limit based on bandwidth. Here the parameter B is the bandwidth of the
medium and M is the number of levels with which the signal is encoded. So we see
that the capacity of the channel, or the information rate at which the data can be
communicated, depends on the bandwidth of the channel. Hence the bandwidth of the
channel plays a very important role.
(Refer Slide Time: 4:05)

Similarly, another important equation, the Shannon limit, provides the channel capacity
based on the signal to noise ratio. This gives the upper limit for information exchange,
which is given by B log2 (1 plus S/N), so the signal to noise ratio plays a very important role.
The signal to noise ratio of the medium has to be studied
and we have to understand the characteristics of different media with respect to signal to
noise ratio. This gives us the need for studying transmission media, and particularly it is
very important to study the characteristics of the popular transmission media, which we
shall do in this lecture.

First let us look at the classification. The transmission media can be broadly classified
into two types. In the first case we call it guided transmission media, where waves are
guided along a solid medium such as copper twisted pair, copper coaxial cable or optical
fiber. So we have got three examples for the guided media: one is copper twisted pair, the
second one is copper coaxial cable and the third one is optical fiber.
(Refer Slide Time: 5:02)

So far as unguided media are concerned, they provide a means for transmitting
electromagnetic signals through air but do not guide them. In such a case we are
communicating through air, and obviously whenever we communicate through air we
have to choose a proper antenna and so on; this unguided transmission is called wireless
transmission. You can see here the various classifications as I have already
explained: there are two basic categories, one is guided and the other is unguided.

(Refer Slide Time: 00:05:35)

Under guided which may be called wired we have got twisted pair cable, coaxial cable
and optical fiber. And on the other hand in case of unguided wireless communication it is
air. But it may not be necessarily air it can be water, it can be free space but primarily we
shall be concerned with air because most of our data communication will take place
through air. Now let us look at the quality of transmission.

(Refer Slide Time: 00:7:15)

As you send a signal through a transmission medium the quality of the signal will be dependent
on some parameters. What are the parameters on which the quality of transmission will
depend? Let us see how it depends in the case of guided media.

Characteristics and quality of data transmission are determined by medium and signal
characteristics. Now let us consider guided media. In case of guided media the medium is
more important in determining the limitations of transmission. That means the quality of
transmission will be primarily determined by the transmission media. As we have already
seen it is the bandwidth of the medium, the signal to noise ratio and also the attenuation
characteristics of the medium. So these parameters will be very important in case of
guided media.

On the other hand in case of unguided media the bandwidth of the signal produced by the
transmitting antenna is more important than the medium. So when we transmit by using
unguided media the frequency spectrum that is transmitted by the antenna will play more
important role than the medium itself. In other words we can use same medium to
transmit signals of different frequencies using antennas of different shapes and sizes as
we shall see. So in that case the antenna will play the key role in deciding the
transmission characteristics. These are the two broad situations. Now let us focus on the
guided media
(Refer Slide Time: 00:08:49)

First we shall consider the twisted pair cable. A twisted pair cable consists of two
insulated copper wires arranged in a regular spiral pattern. As you see in this diagram this
is one conductor and this is another conductor so you have two conductors. And this is
insulated here is your insulator (Refer Slide Time: 8:50) and then there is a plastic cover
in which both the cables are put in the twisted form. As you can see there is some kind of
helical form in which the wires are twisted.

(Refer Slide Time: 9:10)


Now typically you are not using just one pair of wire. Typically a number of pairs are
bundled together into a cable by wrapping them in a tough protective sheath. In this
diagram as you can see (Refer Slide Time: 9:23) we have got several twisted pairs, here
this is one twisted pair, this is another twisted pair, this is another twisted pair so a large
number of twisted pairs are wrapped with the help of a tough protective sheath and that is
actually used for data communication. So from one building to another building, from
one place to another place with large number of twisted pair of cables a single protective
sheath is used. That will create some problem as we shall see later.

Now you may be asking what is the need for twisting, why we are twisting the wires,
why not just put two wires as it is, what is the need for twisting. The need for twisting
arises because of a phenomenon called crosstalk.

(Refer Slide Time: 11:45)

As we have seen in the previous slide, we are taking not just one twisted pair but a large
number of twisted pairs together. Thus a signal passing in one pair of wires will induce a
signal in another pair. From our experience of the telephone system
we know that while we are talking, if we listen carefully we can hear
some conversation in the background, and that conversation we normally call crosstalk. The
crosstalk arises because adjacent pairs induce some signal in our twisted pair of wires
while the conversation is going on.

Now, how does twisting reduce the crosstalk?

As you can see here, this is one wire and this is another wire (Refer Slide Time:
11:24), so current is flowing in one direction in one wire and in the reverse
direction in the other. If they are twisted together, the field induced by
this wire will be just opposite to the field induced by the other wire, that means they will
cancel each other out. Similarly, if there is an adjacent pair here which is acting
as some kind of transmitter, then it will induce a signal on this part and also on this
part, and since they are in opposite directions they will cancel out. So what is happening in this
case is that the twisting is actually canceling out the interference between adjacent pairs of
wires. In other words it is minimizing crosstalk.

Now you may be asking whether the number of twists per inch plays any role; that
means, does the number of twists per unit length, say per inch or per foot, play an
important role? Yes, tighter twisting provides much better performance but it also
increases the cost.

(Refer Slide Time: 14:30)

For example, here we can see that we have got several categories of twisted pair of wire.
Here we see Cat 1, Cat 2, Cat 3, Cat 4, Cat 5 these are essentially categories of wires.

Cat 1 cable has got very low bandwidth and we can send data at the rate of about 100
Kbps it is commonly used for analog transmission and primarily used in telephone
systems telephone networks.

Cat 2 cable has got a lower bandwidth less than 2 MHz and you can use data rate of up to
2 Mbps. It can be used for analog transmission as well as for low bandwidth digital signal
transmission. For example in telephone lines T-1 lines it is used. Later on we shall
discuss about these T-1 lines in more detail.

Then Cat 3, which has a bandwidth of 16 MHz, allows data rates of 10 Mbps and this can
be used not only for analog data communication but is also commonly used for digital
data communication; particularly it is used in Local Area Networks as we shall see later.
Category 4 has a 20 MHz bandwidth with a data rate of 20 Mbps. It is also primarily
used in digital data transmission and commonly used in Local
Area Networks.

Then we have got Category 5 having bandwidth of 100 MHz and here actually there will
be 100 Mbps (Refer Slide Time: 14:35) and it also uses digital data transmission
particularly in Local Area Networks. Category 6 and category 7 are also available
nowadays but I have not given it here but just I have given Cat 1 to Cat 5. And the two
commonly used categories are Cat 3 and Cat 5 as we have already seen.

(Refer Slide Time: 15:15)

Category 3 has a bandwidth of 16 MHz and it is commonly used in for data transmission
at the rate of 10 Mbps and category 5 has got bandwidth of 100 MHz so you can use it for
hundred Mbps. And obviously the key difference between the two is the number of twist
in the cable per unit distance.

As we can see in case of category 3 the number of twists per feet is 3-4 that means 3-4
twists per feet. On the other hand for category five it is three to four twist per inch so we
can say the number of twists in category 5 is about twelve times that of category 3
because it is per feet and here it is per inch. Therefore by increasing the number of twists
we are able to improve the performance. However, as you increase the number of twists
the cost also increases as we have already mentioned.

Now the twisted pair wire is available in two different types. One is known as
unshielded twisted pair and the other is shielded twisted pair. Let us look at the difference.
(Refer Slide Time: 00:16:25)

As we have already seen in case of unshielded twisted pair apart from these two
conductors and insulators there is a protective plastic cover, there is no other conductor or
shielding.

(Refer Slide Time: 16:30)

Now this is sufficient for the ordinary telephone network; however, as
there is no protective shield it is subject to external electromagnetic interference. That
means whenever this twisted pair of wire is taken through some industrial environment
where lots of sparks and other disturbances are present it creates a problem. Noise
is induced; whenever lightning occurs, or there is some spark, or a car is
moving close by, a signal is induced in the unshielded twisted pair of wire. However,
this can be reduced by using shielded twisted pair where, as you can see, we have an
additional conductor in the form of a braided mesh.

(Refer Slide Time: 17:45)

It is not a solid conductor but is available in the form of a braided mesh; this metal
shield can be connected to ground, and over it there is a plastic cover. That means if
this particular shield is connected to ground then the metal shield will protect
both the wires from electromagnetic interference. So whenever we want less
electromagnetic interference we should use shielded twisted pair. However, shielded
twisted pair is definitely costlier than unshielded twisted pair. Thus shielded twisted
pairs are not that popular in day to day applications; in telephone networks
as well as in Local Area Networks usually UTP or Unshielded Twisted Pair
cables are used.
(Refer Slide Time: 19:00)

Here are the attenuation characteristics of different types of UTP cables. The inner
conductor diameter is specified here by its gauge: 18 gauge, 22 gauge, 24 gauge and 26 gauge,
and the diameters are given in inches; 18 gauge corresponds to 0.0403 inch.

On the other hand, 26 gauge has a diameter of 0.0159 inch. It is quite obvious that the larger
the diameter of the conductor the lower the attenuation. So 18 gauge wire gives you
less attenuation compared to 26 gauge, and obviously the 18 gauge wire will be costlier
than 26 gauge. These are the common wire diameters and gauges given here,
and as we can see the attenuation is given in dB per meter against the frequency. So at
higher frequencies the attenuation increases for UTP.

Apart from the increase in attenuation at higher frequencies, as the length of the wire
increases the attenuation also increases. It has been found that the signal strength falls
as the square of the distance, that is, the attenuation grows with the square of the distance.
Therefore as the distance increases the attenuation becomes more, and in
such a situation, as you know, we can use a repeater to amplify the signal, regenerate it
and resend it if necessary. UTP is the simplest and possibly the cheapest
guided medium and is used in many applications.
guided media used in many application.
(Refer Slide Time: 22:00)

One of the most common applications is the local loop in telephone lines. That means in a
telephone network the connection you get to your home from the local
exchange uses UTP wire, and it is also used in the Digital Subscriber Line,
DSL. Nowadays DSL is becoming more popular; we shall discuss it in more detail later.
In DSL also UTP is used.

Later on, when we discuss Local Area Networks, we shall see that twisted pair
wire is used there also; UTP category 5 or category 6 cables are used in different situations
for point to point communication, particularly in 10BaseT and 100BaseT Ethernet networks.
Later on we shall discuss this in more detail. Normally the connector used is the RJ45
with eight lines; this is the type of connector used with UTP cables.

Now we come to the second type of guided media that is your coaxial cable.
(Refer Slide Time: 00:22:30)

Coaxial cable consists of a hollow outer cylinder. As we can see, there is a hollow outer
cylinder, this is the hollow outer cylinder (Refer Slide Time: 22:35), that surrounds a
single inner conductor. The inner conductor is held in place either by
regularly spaced insulating rings or by a solid dielectric material inside. So
this is the cross sectional view of the coaxial cable, and here, as you can see, this is the
inner conductor, then you have the insulator, then you have the outer
conductor, and then there is an outer protective sheath.

Hence that forms the coaxial cable, and there is an outer jacket made of plastic that is used
to cover the outer conductor. So we have two conductors, separated by an insulator,
and an outer protective sheath.
(Refer Slide Time: 23:26)

So here you can see a real coaxial cable. Here is the inner conductor which is solid, then
the insulator, then the outer conductor and here is the protective sheath. These are
the four components: outer protective sheath, outer conductor, inner insulator and the
inner conductor. Because of the shielding, coaxial cables are much less susceptible to
interference or crosstalk than twisted pair. What can be done is that the outer
conductor can be grounded, and as the outer conductor is grounded the inner
conductor is shielded from interference and disturbances, and that helps to reduce the
crosstalk. The crosstalk in the case of coaxial cable is much less than for twisted pair
wire. Normally BNC connectors are used in connection with coaxial cables.

So here is the coaxial cable and here is the connector (Refer Slide Time: 24:47) then this
is the BNC T connector and other side can be connected to another cable. This is how the
connection is done.
(Refer Slide Time: 25:00)

Here is the performance of the coaxial cable. As we can see, on this side we have the
attenuation in dB per kilometer, and here is the frequency f
in KHz. The frequency ranges from .01 KHz to 100 KHz, and here you
have three different types of cables; the two numbers represent the
diameter of the inner conductor and the diameter of the outer conductor. Here
the inner conductor diameter is 0.7mm and the outer conductor diameter is 2.9mm, in this case it is a
1.2mm inner conductor and a 4.4mm outer conductor, and the other one has a
2.6mm inner conductor and a 9.5mm outer
conductor. Obviously this last one will have less attenuation than the other two.

Coaxial cable provides higher bandwidth than twisted pair cable.
This coaxial cable is used in a variety of applications, for example in television
distribution for cable TV: the television signal that is coming to your house
from cable TV uses coaxial cable.
(Refer Slide Time: 00:26:37)

And as you know a number of channels are coming through the same cable that means
bandwidth is high. That’s why it is possible to send several channels by using frequency
division multiplexing with the help of coaxial cable. So you can transmit several
frequency bands using the coaxial cable because of higher bandwidth. It is also used for
long distance telephone transmission and because of higher bandwidth it is possible to
send ten thousand voice channels per cable. In a single cable we can send ten thousand
voice channel simultaneously and that is possible because of higher bandwidth. It is also
used in Local Area Network. As we can see here three categories of coaxial cables are
given here RG-59, RG-58 and RG-11 with characteristic impedance of 75 ohm, 50 ohm
and 50 ohm.

RG-59 is commonly used in cable TV, which is very popular, and it is of low cost.
Then RG-58, with a characteristic impedance of 50 ohms, that means the cables are to be
terminated by a 50 ohm resistance, is used in thin Ethernet which gives a data rate
of 10 Mbps. RG-11, also with a characteristic impedance of 50 ohms, is used in thick Ethernet.

So RG-11, RG-58 and RG-59 are the three popular coaxial cables commonly
used in three different applications. Later on, when we discuss Local Area
Networks, particularly Ethernet, we shall discuss thin Ethernet and thick
Ethernet and the use of these two different coaxial cables in Ethernet technology. Now let us
look at the third guided medium, which is the optical fiber.
(Refer Slide Time: 00:29:10)

An optical fiber is a thin flexible medium capable of conducting an optical ray. It is made
of ultra pure fused silica glass fiber or even plastic and it has a cylindrical shape and
consists of three concentric sections the core, the cladding and the jacket. Let us see the
structure of the optical fiber cable.

(Refer Slide Time: 00:29:35)

Here, as you see, it has three different parts: the core, the cladding and the jacket. The core consists of very thin strands of fiber made of glass or plastic; this particular part is made of glass or plastic (Refer Slide Time: 30:00). The cladding is also made of glass or plastic, but these two materials, the core material and the cladding material, have different optical properties, which we shall discuss in detail. So, although they are made of the same kind of material, plastic or silica, their optical properties are different; they have different refractive indices, as we shall see later on. The jacket surrounds one or a bundle of cladded fibers.

You can have either a single optical fiber cable like this, or a large number of optical fibers bundled together with the help of a plastic sheath and taken from one place to another. Usually you have 4, 6, 8 or 16 fibers, as they are normally used in pairs for data transmission in both directions. So four, eight or sixteen pairs of fibers are bundled together and taken from one place to another. The question naturally arises as to how optical fiber works. The operation of optical fiber is explained with the help of this diagram.

(Refer Slide Time: 31:45)

As you can see here, to explain the operation of optical fiber let us consider two materials, where the lower one is of higher density than the upper one; the density of the lower material is more than the density of the upper material. Let us assume that this is water and this is air, so the lower layer has higher density than the upper layer.

Here the angle of incidence of this ray is less than the critical angle (Refer Slide Time: 32:30), so the signal travels from here and gets refracted, because the two materials have two different refractive indices. However, as the angle of incidence i increases gradually, the angle of refraction also increases, and eventually the angle of refraction becomes 90 degrees; the angle of incidence at which this happens is called the critical angle.

Now what happens if the angle of incidence increases further? Then the behaviour suddenly switches from refraction to reflection. As you can see here, the light is getting reflected instead of getting refracted. So when the angle of incidence is more than the critical angle, reflection occurs instead of refraction.

(Refer Slide Time: 33:25)

Let us see how it really happens with the help of the animation. As you can see here in
this case the angle of incidence is increasing and we have reached the critical angle, this
is the refracted ray and this is the incident ray.

(Refer Slide Time: 33:29)


(Refer Slide Time: 34:00)

Now if we increase the angle of incidence further, then as you can see the ray is now reflected, and the angle of incidence is the same as the angle of reflection. So as the angle of incidence increases beyond the critical angle, the ray gets reflected rather than refracted. So here this signal is getting reflected.
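This switch from refraction to total internal reflection follows directly from Snell's law, n1 sin(i) = n2 sin(r). The short Python sketch below is only an illustrative aid (not part of the lecture material; the water and air refractive index values are assumed for demonstration): it computes the critical angle and reports whether a given incident ray is refracted or reflected.

import math

def critical_angle_deg(n1, n2):
    # Total internal reflection is possible only when light travels from the
    # denser medium (index n1) towards the rarer medium (index n2 < n1).
    return math.degrees(math.asin(n2 / n1))

def ray_behaviour(n1, n2, incidence_deg):
    theta_c = critical_angle_deg(n1, n2)
    if incidence_deg < theta_c:
        # Snell's law: n1 sin(i) = n2 sin(r)
        r = math.degrees(math.asin(n1 * math.sin(math.radians(incidence_deg)) / n2))
        return "refracted at %.1f degrees" % r
    return "totally internally reflected"

# Assumed example: water (n1 = 1.33) to air (n2 = 1.00)
print(critical_angle_deg(1.33, 1.00))    # about 48.8 degrees
print(ray_behaviour(1.33, 1.00, 30))     # below the critical angle: refracted
print(ray_behaviour(1.33, 1.00, 60))     # above the critical angle: reflected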

(Refer Slide Time: 34:25)

We have seen how light is reflected in optical fiber. Here I have shown two media: the inner material will be the core material (Refer Slide Time: 34:50) and the outer material will be the cladding material. These form the two media, total internal reflection takes place between them, and the light propagates along the fiber using total internal reflection, as we shall see. Based on how the optical signal propagates through it, optical fiber can be classified into three different types. There are two broad categories, multimode and single mode; we shall see later why it is called multimode. Multimode has two different varieties, step index and graded index, and then we have single mode.

Let us see the three different types of optical fibers.

(Refer Slide Time: 36:48)

First of all, multimode refers to the variety of angles at which rays will reflect. In case of multimode fiber, multiple propagation paths exist, which means signal elements spread out in time and hence the data rate is limited. What happens in this case is that multiple rays are reflected at different angles, and as a result the length of the path is different for different rays. As a consequence they reach the destination at different times, and that leads to distortion. That is why we say the signal spreads out in time and the data rate is limited. This problem is overcome by using single mode fiber: when the diameter of the core is reduced, fewer angles will reflect, and by reducing the core to the order of a wavelength, say a diameter of 7 to 10 microns, we may assume that only one ray passes from source to destination. In such a case only a single angle passes through the optical fiber, and it is called monomode or single mode fiber.

Then, as I mentioned, there are two types of multimode fiber: multimode graded index and multimode step index. In case of multimode step index, as we shall see, the core and cladding have two distinct refractive indices. On the other hand, in case of multimode graded index, the refractive index of the material varies gradually from the center to the outer edge. Let us see this with the help of a diagram.
(Refer Slide Time: 38:05)

Here you have got the monomode type of step index fiber. As you can see, the diameter of the core is very small, 8 to 12 microns; this is the cladding, and the refractive index of the core material is n1 and the refractive index of the cladding material is n2. The diameter of the core is 8 to 12 microns and the diameter of the cladding is 125 microns. On the other hand, here you have got the multimode step index fiber. Here the diameter of the core varies from 50 to 200 microns and the diameter of the cladding can vary from 125 to 400 microns, and the refractive index in this case is n1 for the core and n2 for the cladding.

Now, for the third variety, the multimode graded index fiber, we can see that the refractive index of the core changes gradually. At the center of the core the refractive index is n1 and it gradually decreases towards n2 at the core-cladding boundary, where the cladding has refractive index n2. So as we go from the center to the edge the refractive index gradually changes from n1 to n2, and this leads to different characteristics. Let us see how it works with the help of the animation.
(Refer Slide Time: 39:40)

This is the case of a multimode step index fiber. As you can see, light is propagating through total internal reflection and going to the other end. Now there can be another ray with a different angle of incidence, and as you can see the number of reflections that it suffers is different. As a result, if you measure the path lengths of these two rays, they will be different; they cannot be the same, and as a consequence this leads to the spreading out of the signal.

(Refer Slide Time: 40:00)

This is the case of multimode graded index fiber.


(Refer Slide Time: 40:15)

(Refer Slide Time: 41:10)

In the previous case, as we have seen, there is an abrupt reflection and this leads to some kind of distortion. This can be minimized in multimode graded index fiber: here the refractive index gradually changes from the centre to the interface between the core and the cladding, so the light bends gradually, as you can see here. There is no sharp bending; the light gradually bends and travels in this manner. However, although the bending occurs gradually, you can still have different angles of incidence, and as a result multiple rays pass from source to destination, from one end to the other.

On the other hand this is the case for monomode step index fiber.
(Refer Slide Time: 41:15)

As you can see, in this particular case a single ray is going from source to destination. So the question of multiple rays does not arise in case of single mode fiber. What are the consequences?

The consequence is that the single mode fiber will have much less distortion; the signal will not spread, and in terms of the eye diagram measurement the eye will be quite open in case of single mode fiber compared to multimode fiber. Also, the spacing between repeaters can be much longer in case of single mode compared to multimode. That means in case of multimode fiber the repeater spacing will be closer than for single mode fiber.
(Refer Slide Time: 41:30)

Here we have some of the different optical fiber types.

(Refer Slide Time: 42:20)

Here each fiber is designated by the core diameter and the cladding diameter. So here the core diameter is 50 microns and the cladding diameter 125 microns; this is multimode graded index. Here is another example, 62.5/125, where 62.5 microns is the core diameter and 125 microns is the cladding diameter, which is also multimode graded index fiber. Then we have 100/125, where the core diameter is 100 microns and the cladding diameter is 125 microns, so this is also multimode graded index. And then 7/125: here the core diameter is only 7 microns, in case of single mode or monomode fiber, and the cladding diameter, as you can see, is the same for all four cases. Now an important parameter is the numerical aperture. The numerical aperture relates to the difference between the refractive indices of the core and cladding.

(Refer Slide Time: 43:15)

So n1 is the refractive index of the core and n2 is the refractive index of the cladding. The numerical aperture is NA = sqrt(n1^2 − n2^2), which can also be written as n1 sqrt(2 delta), where delta is the core-cladding index difference; that is, n2 = n1(1 − delta). That means the core has a higher refractive index than the cladding. Delta has a typical value of about 0.01, and as you know the material used is either plastic or silica; if we use silica then n1 = 1.48.

Now, with these parameters in mind, if we consider n1 and n2 as the refractive indices of the two media, there is a problem here: find out the critical angle for the above situation, that is, where you have core and cladding materials with a refractive index difference delta of 0.01. So you can find out the critical angle, and this is given as a problem for you.
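As a quick illustrative check (my own sketch, not from the lecture slides), the Python fragment below evaluates the numerical aperture and the critical angle for the silica values quoted above, n1 = 1.48 and delta = 0.01; the variable names are chosen only for this example.

import math

n1 = 1.48               # refractive index of the core (silica)
delta = 0.01            # core-cladding index difference
n2 = n1 * (1 - delta)   # refractive index of the cladding

# Numerical aperture: NA = sqrt(n1^2 - n2^2), approximately n1 * sqrt(2 * delta)
na_exact = math.sqrt(n1 ** 2 - n2 ** 2)
na_approx = n1 * math.sqrt(2 * delta)

# At the critical angle the angle of refraction is 90 degrees, so sin(theta_c) = n2 / n1
theta_c = math.degrees(math.asin(n2 / n1))

print(round(na_exact, 3), round(na_approx, 3))   # both about 0.21
print(round(theta_c, 1))                         # about 81.9 degrees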

What are the sources of light in case of optical fiber?

Usually there are two sources; one is the light emitting diode (LED). Light emitting diodes are of very low cost and are used in almost all equipment for display purposes, so they can be used as a source of light. An LED is very cheap, it can withstand a greater temperature range, and it has a longer life because it is a semiconductor device.
(Refer Slide Time: 46:03)

However, the power that can be coupled is very small; for example, in case of a 50 micron core diameter, the power that can be coupled is only about 25 microwatts, so it can be used only for short distance communication, and particularly it is used with multimode fiber. On the other hand, there is another alternative source of light, the injection laser diode, which is costlier. However, it is much more efficient than the light emitting diode and it allows longer distances because it can couple much higher power, about 1 mW, in case of monomode fiber. That means in case of monomode fiber the injection laser diode is commonly used, and for multimode fiber the light emitting diode is commonly used.

At one end you will be using some light source, and at the other end you have to receive and detect the light signal. The detection of light is done with the help of two different types of photodetectors or photodiodes. One is the PIN photodiode or PIN photodetector; the other is the avalanche photodiode or APD. Obviously the APD has better characteristics, a better signal to noise ratio, than the PIN photodetector. However, the avalanche photodiode is costlier than the PIN photodetector.

Normally the PIN photodetector is used with multimode fiber, where the source of light is an LED and where low cost is the primary criterion. On the other hand, in case of single mode fiber the laser source, the injection laser diode, is used as the source of light and the avalanche photodiode is used as the detector. Here is the attenuation characteristic of optical fiber. As you can see here, the loss is in dB per kilometer.
(Refer Slide Time: 00:48:05)

Here it goes down to about one dB per kilometer, so the attenuation is very low, and the fiber is used in three standard wavelength bands: the 850 nm band, the 1300 nm band and the 1500 nm band. Here, as you can see, you have 1500 nm, here is your 1300 nm and here 850 nm. These are the three wavelength windows which are commonly used.

(Refer Slide Time: 48:40)

What are the advantages of optical fiber?

Optical fiber provides higher bandwidth, leading to greater capacity. As you know, you can pass 2 Gbps over tens of kilometers. It is used in long-haul fiber transmission and is becoming increasingly popular in telephone networks; nowadays most telephone networks use optical fiber communication. You can cover about 900 miles with 20000 to 60000 voice channels because of the very high bandwidth of optical fiber. The material used is silica or plastic, which is very cheap; the fiber is small in size, very light in weight as it is not a metal, and it has very low attenuation. Moreover, another important property is that it has resistance to corrosive materials.

(Refer Slide Time: 49:35)

Metals get corroded; oxidation occurs when they come in contact with moisture. That kind of corrosion does not occur in case of optical fiber material. Another important property is that it is immune to electromagnetic interference, which means it is very suitable for outdoor applications and outdoor cabling. The outdoor cabling in Local Area Networks is commonly done using optical fiber, and it has greater repeater spacing compared to either twisted pair or coaxial cable.

The three different guided media considered can be compared in terms of cost, bandwidth, attenuation, susceptibility to external noise and security. As you can see here, UTP (Unshielded Twisted Pair) has the lowest cost, its bandwidth is much lower, its attenuation is much higher, its susceptibility to electromagnetic interference is higher, and security-wise it is very low; wire tapping can be done very easily.
(Refer Slide Time: 00:51:03)

On the other hand, coaxial cable is of moderate cost, with a bandwidth of 350 MHz giving a data rate of 500 Mbps; it has moderate attenuation, its susceptibility to electromagnetic interference is moderate, and its security is low, more or less the same as UTP. Optical fiber has a somewhat higher cost, very high bandwidth of 2 GHz so that a data rate of 2 gigabits per second is possible, low attenuation with a repeater spacing of 10 to 100 kilometers, and low EMI because electromagnetic interference cannot affect light; it also has higher security, as it is very difficult to tap data from optical fiber.

(Refer Slide Time: 52:10)


(Refer Slide Time: 00:52:17)

So here, if you compare the attenuation characteristics, you can see that the attenuation is much lower for optical fiber compared to twisted pair and coaxial cable. The main difference between twisted pair and coaxial cable is that coaxial cable has higher bandwidth compared to twisted pair.

Here are the review questions:

1) On what parameters does the quality of transmission depend in case of guided transmission media?

2) Why are wires twisted in case of the twisted pair transmission medium?

3) Give a popular example where coaxial cables are used for broadband signaling.

4) Find out the critical angle for a step index optical fiber for n1 = 1.48 and delta = 0.01.

(Refer Slide Time: 52:35)

5) In what way do multimode and single mode fibers differ?

So these are the five questions to be answered in the next lecture.

Here are the questions that I gave in the previous lecture.

Let us look at the first question; the answer is the attenuation: attenuation = 10 log10(1/50) ≈ −17 dB.

2) Assuming there is no noise in a medium of bandwidth 4 KHz, determine the channel capacity for an encoding level of four.

As you know, by Nyquist's formula, C = 2B log2 M; here C = 2 × 4000 × log2(4) = 16 Kbps. So this is the channel capacity that is possible in this case.

(Refer Slide Time: 54:15)

(Refer Slide Time: 00:57:02)

3) A channel has a bandwidth of 10 MHz; determine the channel capacity for a signal to noise ratio of 60. As we know, the formula is C = B log2(1 + S/N).
With its help we can find the capacity: C = 10 × log2(1 + 60) ≈ 59.3, and since the bandwidth is in MHz the result is in Mbps, so the maximum possible channel capacity is about 59.3 Mbps.

4) A digital signal is to be designed to permit 56 Kbps for a bandwidth of 4 KHz; determine the number of levels and the signal to noise ratio.

This can be calculated by using two different formulas. In the first case you use Nyquist's formula, C = 2B log2 M. Here M has to be found and C is given as 56 Kbps: 56000 = 2 × 4000 × log2 M. From this you get log2 M = 7, that is, M = 2^7 = 128 levels.

Similarly, you can find the signal to noise ratio from Shannon's formula, C = B log2(1 + S/N); here C and B are given and the signal to noise ratio has to be found: 56000 = 4000 × log2(1 + S/N), so log2(1 + S/N) = 14 and S/N = 2^14 − 1 ≈ 16383, which is about 42 dB. So here we have the answers to all the questions that were asked in the last lecture, thank you.
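The calculations above follow directly from the Nyquist and Shannon formulas. The short Python sketch below is only an illustrative aid (not part of the lecture) that reproduces them, including the conversion between a signal to noise ratio expressed in dB and the plain power ratio that Shannon's formula expects.

import math

def nyquist_capacity(bandwidth_hz, levels):
    # Noiseless channel: C = 2 * B * log2(M)
    return 2 * bandwidth_hz * math.log2(levels)

def shannon_capacity(bandwidth_hz, snr_ratio):
    # Noisy channel: C = B * log2(1 + S/N), with S/N as a power ratio
    return bandwidth_hz * math.log2(1 + snr_ratio)

def db_to_ratio(snr_db):
    # e.g. 60 dB corresponds to a power ratio of 10**6
    return 10 ** (snr_db / 10)

print(nyquist_capacity(4000, 4))        # 16000 bps (question 2)
print(shannon_capacity(10e6, 60))       # about 59.3 Mbps (question 3, ratio of 60)

# Question 4: levels and S/N needed for 56 Kbps over a 4 KHz channel
levels = 2 ** (56000 / (2 * 4000))      # M = 2**7 = 128
snr = 2 ** (56000 / 4000) - 1           # S/N = 2**14 - 1
print(levels, 10 * math.log10(snr))     # 128 levels, about 42 dB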
Data Communication
Prof. Ajit Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture No # 6
Unguided Media

Hello and welcome to today’s lecture on unguided media.

(Refer Slide Time: 00:55)

On completion of this lecture the students will be able to explain:

Propagation methods in wireless communication system


Broadcast radio communication
Terrestrial microwave communication
Satellite microwave communication
Infrared communication and
Compare wireless communication techniques.
(Refer Slide Time: 00:56)

Here is the outline of this lecture:

I shall give a brief introduction to this lecture, explaining why wireless communication is so important in the context of data communication systems. Then we shall discuss the spectrum of wireless communication, the various frequency bands that are used for wireless communication, and then the various configurations which are used for wireless communication.

(Refer Slide Time: 01:52)


Then we shall discuss the various propagation methods used in the context of wireless communication, followed by broadcast radio, terrestrial microwave, satellite microwave and finally infrared communication. So essentially we shall be covering these four wireless communication techniques: broadcast radio, terrestrial microwave, satellite microwave and infrared communication.

(Refer Slide Time: 02:30)

As I mentioned in the earlier lecture the communication media provides the physical path
between transmitter and receiver in the data communication system. As you know it can
be divided into two broad types; guided transmission media and unguided media. And in
the last lecture we have discussed in detail the various guided communication media such
as twisted pair of cable, coaxial cable and fiber optic cable.
(Refer Slide Time: 03:08)

Unguided media provide a means for transmitting electromagnetic signals through the air but do not guide them, so we may call this wireless communication. The question naturally arises as to why wireless communication is so important. It is because of the proliferation of portable mobile equipment such as laptops, palmtops, cell phones, PDAs and so on. People are carrying various electronic equipment with them, and as a consequence they want to communicate from any location. That cannot be supported by guided or wired communication media; anytime, anywhere communication is feasible only by wireless communication, and that is the topic of today's lecture.

Broadly, as you can see, the electromagnetic spectrum used for wireless communication is divided into three parts: radio wave and microwave (which can again be divided into radio communication and microwave communication, as we shall see), then infrared, and then light wave. So wireless communication is possible in these three ranges.
(Refer Slide Time: 04:42)

Now, in case of wireless communication, as I mentioned in the last lecture, communication is dependent on the antenna. The antenna plays a big role in wireless communication. The characteristics of the antenna and the frequency spectrum that it transmits into the air play an important role; in other words, they decide the quality of transmission, the bandwidth of the signal and various other things.

(Refer Slide Time: 06:07)

So, for transmission what we require is an antenna that radiates electromagnetic energy into the air, and for reception we also require another antenna that picks up the electromagnetic waves from the surrounding medium. It is somewhat like this (Refer Slide Time: 5:50): you have one antenna which is the transmitter and another antenna which is the receiver, so the electromagnetic signal passes from one to the other through these antennas. And in this case, as I mentioned, the antenna plays a very important role, a key role, as we shall see.

(Refer Slide Time: 07:31)

Basically there are two types of wireless communication. The first is called point to point communication; in this case the transmitting antenna puts out a focused electromagnetic beam in a particular direction. Since the antenna focuses the signal in a particular direction, the transmitter and receiver must be carefully aligned, and obviously this allows point to point communication. So in this configuration you have two antennas communicating with each other; it is a point to point communication. This is one possible configuration. In the second possible configuration the transmitted signal spreads out in all directions. In such a situation, where the signal spreads out in all possible directions, it can be picked up by many receivers with the help of their antennas. So the signal can be received by many antennas, which is known as broadcast communication. Thus we have two basic configurations, point to point and broadcast, and there are various ways of realizing them.

Now let us discuss the various propagation methods, the methods by which communication takes place through the air. The first one is known as ground propagation. Ground propagation takes place below 2 MHz, and in this case communication takes place with the help of ground waves.
(Refer Slide Time: 08:28)

Before that, let me explain the overall situation. Suppose this is the earth, then you have the air, and then there is the ionosphere. It is called the ionosphere because in that upper layer of the atmosphere the air is in a charged, that is ionized, condition; there are positive and negative particles, and that is why it is called the ionosphere. So you have the earth, then I have shown an antenna, and here is the ionosphere. The ionosphere plays an important role in some cases, as we shall see, but let us first see what happens in case of ground propagation.

In case of ground propagation, as you can see, the ionosphere does not play any role. What is happening here is that the signal propagates in all directions and hugs the earth. Since the signal propagates by hugging the earth, it can travel along the ground over long distances. One typical example is AM radio, which we listen to, like Calcutta A, Calcutta B and so on; AM radio is an example of this ground propagated signal. This is the first mode of communication.

The second mode of communication is known as sky propagation. Sky propagation is possible for the frequency range from 2 to 30 MHz; signals of 2 to 30 MHz can propagate through the air in this mode. Why is it called sky propagation? Because, as we shall see, the electromagnetic signal which comes out of the transmitter goes up to the ionosphere layer and then bounces back. Let us see how it happens.
(Refer Slide Time: 11:36)

So, as you can see, the signal propagates upwards, it is reflected back by the ionosphere, and it is received by the antenna. The signal gets reflected by the ionosphere and comes back to the receiving antenna. In this case the ionosphere plays a very important role: since the ionosphere is at a much higher layer of the atmosphere, signals can be sent over a very long distance with the help of this sky propagation. For example, we hear Voice of America and various other short wave broadcasts that come through sky propagation, and citizens band radio also lies in this range. So sky propagation is very useful for long distance communication with the help of the ionosphere.

Then the third mode of communication, as I mentioned, is line of sight communication, which is used above 30 MHz. Above 30 MHz the signal behaves somewhat like light, as the frequency is very high and the wavelength is small, so in this case it has to be point to point communication. The signal gets obstructed if there is a building or some other artificial or natural structure in the path. So in such a case you must have line of sight propagation.
(Refer Slide Time: 12:31)

Here we have two antennas communicating with each other with the help of line of sight communication. In this case the requirement is that the antennas must be relatively high so that line of sight is maintained, and as you know FM radio, television signals, cellular phone, terrestrial microwave and satellite microwave all use line of sight communication, whereas sky propagation and ground propagation are essentially broadcast communication.

(Refer Slide Time: 13:48)

On the other hand, communication which takes place at frequencies above 30 MHz is line of sight communication.
The wireless transmission techniques can be broadly divided into three types, as I mentioned earlier. The first one is radio wave, the second one is microwave and the last one is infrared. The first is in the radio frequency range, the second is in the microwave range, and the last, which essentially behaves like light, is the very high frequency range called infrared, as you have seen in the frequency spectrum. These are the broad wireless communication techniques that we shall discuss.

First let us consider broadcast radio. As I mentioned, broadcast radio operates in the range 30 MHz to 1 GHz, and in this case the communication is omnidirectional in nature. Omnidirectional in nature means it propagates in all directions from the antenna. As you can see here, the signal propagates just like a wave in all directions from the antenna, just as when you throw a stone in a pond the waves propagate in all directions with that point as the center.

(Refer Slide Time: 16:00)

Similarly, here the antenna is the center point, and with the antenna as the center point the wave propagates in all possible directions, as you can see here. So this wave propagation is omnidirectional in nature; it propagates in all directions.

But here the communication has to be line of sight. This is a very important characteristic in this case. The signal gets obstructed by other structures, so it has to be line of sight communication. As a consequence, the maximum distance that can be covered is given by the relation d = 7.14 sqrt(Kh) Km, where h is the height of the antenna in meters and K is a correction factor equal to 4/3. That means there is some bending of the electromagnetic signal as it goes from the transmitter to the receiver, and as a result it covers a little more than what pure line of sight would allow; that factor is taken care of by K. So, as you can see here, that is what is normally done: in this particular case the antenna height is kept quite high.

Usually there is a tower on which the transmitting antenna is installed, or the antenna is on a rooftop, on top of a building, or sometimes on top of a mountain. So, to cover a larger area, the antenna is usually placed at some height, and the height of the antenna is made larger so that you can reach a longer distance. In this frequency range of 30 MHz to 1 GHz the signal can penetrate walls well, which is both an advantage and a disadvantage.

What is the advantage? The advantage is that this type of signal can be received inside your room, because the antenna can be inside the house or inside the room. The signal can penetrate the wall and reach you, so in that way it is advantageous. The disadvantage is that there is no privacy: since it is a broadcast signal, it can be received by many people. So if you want to send some private information which should not be disclosed to others, you cannot do that directly. However, you can do it by using encryption, which we shall discuss later.

For these types of signals the ionosphere is transparent. What do we mean by that? For this kind of communication the signal passes through the ionosphere; it does not get reflected by the ionosphere. So the problem of multipath propagation does not exist. Multipath propagation leads to many problems; for example, it can cause fading, which you must have observed in case of short wave radio. Whenever you listen to a short wave broadcast, which comes by sky propagation and not in this range, there is fading. In this particular frequency range of 30 MHz to 1 GHz there is no problem of fading, multipath propagation does not take place, and you receive the signal directly from the transmitter, which is an advantage.

However, here the attenuation depends on two parameters and is given by the relationship: attenuation = 10 log10 (4 pi d / lambda)^2 dB. As you can see, d is the distance, and the attenuation (as a ratio) increases as the square of the distance; as the distance increases, the attenuation increases significantly, which is why the longer the distance, the weaker the signal becomes. Moreover, the attenuation is inversely proportional to the square of the wavelength; that means higher frequencies are attenuated more than lower frequencies. This is another characteristic of this frequency range, and these types of signals are less sensitive to rainfall.
(Refer Slide Time: 20:42)

And as I mentioned, the signal can travel long distances and multipath interference does not exist in this particular case. So these are several of the parameters and characteristics of broadcast radio.
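To get a feel for this attenuation relation, the small Python sketch below is an illustrative aid only (the distance and frequency values are assumed examples, not from the lecture): it evaluates the free space loss 10 log10 (4 pi d / lambda)^2 in dB and shows how it grows with both distance and frequency.

import math

C = 3e8  # speed of light in m/s

def free_space_loss_db(distance_m, frequency_hz):
    # L(dB) = 10 * log10((4 * pi * d / lambda)^2) = 20 * log10(4 * pi * d * f / c)
    wavelength = C / frequency_hz
    return 10 * math.log10((4 * math.pi * distance_m / wavelength) ** 2)

# Assumed example values: doubling the distance or the frequency adds about 6 dB
print(free_space_loss_db(10_000, 100e6))   # 10 km at 100 MHz -> about 92.4 dB
print(free_space_loss_db(20_000, 100e6))   # 20 km at 100 MHz -> about 98.5 dB
print(free_space_loss_db(10_000, 200e6))   # 10 km at 200 MHz -> about 98.5 dB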

The typical applications of broadcast radio are FM radio, television, data networking and so on. After broadcast radio communication comes the microwave range; microwave frequencies lie in the range 2 to 40 GHz. This range of 2 to 40 GHz is quite high, and in this case it is possible to have highly directional beams; that means you can send a highly directional beam from the transmitter towards the receiver and have point to point communication, which is why it is very suitable for point to point and satellite transmission. For that purpose we usually use two different types of antennas. The first one, as you can see here, is known as the horn antenna. Here the signal comes in (Refer Slide Time: 21:40), it gets reflected by the particular shape of the antenna, and all the rays travel in a particular direction, so a set of parallel beams goes out in a particular direction.

Therefore from the transmitter the signal comes to the antenna, it gets reflected in this way, and then it reaches another type of antenna, which is known as the dish type antenna. This is the dish type antenna and this is the horn type antenna. Here you can see that the receiving antenna receives the parallel signals and focuses them at a particular point; at this point a sensor is kept, and the signal is then taken to the amplifier for amplification and further processing. Both these antennas are fixed rigidly, because here the communication has to be point to point. So one antenna is placed very rigidly in one place and the other antenna is placed very rigidly in another place with the help of some structure, which enables point to point communication between these two antennas.

So a narrow beam can be focused to achieve line of sight communication, as I mentioned, with the help of these two types of antennas. Microwave communication has two varieties; the first one is terrestrial microwave. Terrestrial microwave is used for long haul communication and usually uses the 4/6 GHz band.

Obviously, since the frequency is at the lower end, it requires a bigger antenna, of 10 m or more in diameter, and it is primarily used for long haul communication. Long haul communication means we have to transmit the signal over a long distance. For example, let us assume that this is the surface of the earth; then you can erect an antenna like this (Refer Slide Time: 24:02) that transmits in this direction, then you can place another antenna here that receives the signal, and after receiving it, it does the amplification and transmits it to another antenna.
(Refer Slide Time: 25:25)

In this way a cascade of antennas can be used, one after the other, to send a signal over a long distance. Not only is this type of long haul communication possible, but you can also have point to point links between buildings. In that case you can use a higher frequency, around 20 GHz, so that the antenna is smaller in size and cheaper in cost.

This terrestrial microwave can be used as an alternative to coaxial cable and optical fiber. As you know, coaxial cable and optical fiber are also used for long haul communication in telephone networks; instead of using coaxial cable or optical fiber, one can use terrestrial microwave. But in this case, as I mentioned, the antennas have to be properly placed, the height should be proper and the distance should be adjusted so that the signal reaches each antenna, where it is amplified and then retransmitted. So essentially the intermediate antennas are acting as repeaters. And as I mentioned, in this case also the formula for the maximum distance, d = 7.14 sqrt(Kh), is applicable.
(Refer Slide Time: 26:59)

Here d is the distance in kilometers and h is the antenna height in meters. As I mentioned, the value of K is an adjustment factor, because the path is not strictly line of sight but the signal gets bent slightly; the adjustment factor is 4/3, that is about 1.33.

Let us assume that the antenna height h is equal to 100 m. If we substitute this here, taking the value of K equal to 4/3, you will find that the distance is roughly equal to 82 Km. So with an antenna of height 100 m the distance that can be covered is about 82 Km; here the height of the antenna is in meters and the distance is in kilometers.

Therefore, with an antenna height of 100 m you can cover a distance of 82 Km; that is why we find repeaters at a spacing of about 60 to 80 or 100 Km, because the antenna height is typically around 100 m.
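The 82 Km figure follows directly from the line of sight relation d = 7.14 sqrt(Kh). The short Python sketch below is only an illustrative aid (not part of the lecture); it evaluates the relation for a few assumed antenna heights.

import math

def los_distance_km(antenna_height_m, k=4.0 / 3.0):
    # d (Km) = 7.14 * sqrt(K * h), with h in meters and K ~ 4/3
    # accounting for the slight bending of the radio beam
    return 7.14 * math.sqrt(k * antenna_height_m)

for h in (50, 100, 200):   # assumed antenna heights in meters
    print(h, round(los_distance_km(h), 1))
# 100 m gives roughly 82 Km, matching the worked example above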

For example, if you come from Calcutta to Kharagpur, near Paspura, roughly midway, that is around 60 Km, you will find microwave towers which act as repeaters for the signals going from Calcutta to Kharagpur or from Kharagpur to Calcutta. This is the distance and height relationship. Then comes the transmission characteristic, that is attenuation. As I mentioned, here also the attenuation characteristic is 10 log10 (4 pi d / lambda)^2 dB, which means the attenuation is proportional to the square of the distance and inversely proportional to the square of the wavelength, that is, proportional to the square of the frequency. Higher frequency signals are attenuated more.
(Refer Slide Time: 28:52)

However, higher frequency signals provide more bandwidth. That means we prefer to use higher frequencies for communication because they give higher bandwidth, but a higher frequency also gives more attenuation. So there is a trade off, and we have to select the frequency judiciously so that we do not have too much attenuation but at the same time get reasonably good bandwidth.

For microwave signals, attenuation increases with rainfall. This is a very important factor: frequencies in this range are affected severely by rainfall. So apart from distance and frequency, we can see that rainfall plays a very important role in attenuation. That means when the weather is bad, during rainy conditions, the signal will be poor, and that is the reason why more errors occur during the rainy season; the signal to noise ratio degrades and as a result more errors occur in the signal.

We have discussed terrestrial microwave communication; now comes satellite microwave communication. We have seen in the previous slide that the distance d is proportional to 7.14 sqrt(Kh). That means if we can make the height very high we can cover a very large distance, so why not put an antenna in the sky? That is the basic idea of satellite microwave. In satellite microwave, what we are trying to do is put a relay station in the sky at a very large height; in this case, as we shall see, the antenna height is very large, for example as high as 36000 kilometers. Hence this allows a very large distance to be covered.
(Refer Slide Time: 30:45)

With the help of this microwave link you can link two or more earth stations. There can be two earth stations, and essentially the antenna in the sky, the satellite, acts as a relay station. Thus, with the help of that relay station, two or more ground stations can communicate with each other, and we can have two different modes of communication: again, it can be point to point or broadcast. Let us see the configurations.

(Refer Slide Time: 31:23)

So in this particular case, as you can see, it is point to point communication. Here we have the satellite in the sky and here you have the two ground stations: this is one ground station and this is another ground station. There is a point to point link: the signal goes from this ground station to the satellite, where it is received, amplified and returned to the other station; the signal from the other side also goes to the satellite, where it is received, amplified and returned to this ground station (Refer Slide Time: 31:55). In this way it can cover a very long distance. And this is the second alternative, where you can achieve broadcast with the help of satellite microwave.

(Refer Slide Time: 32:38)

Here you have the satellite, and it can cover quite a large area, which is called the footprint of the satellite. All the ground stations which are under the satellite, in the footprint of the satellite, can receive the signal from it. So here you have one transmitter; from this transmitter the signal goes to the satellite, and the satellite then transmits, that is broadcasts, the signal to a number of other stations. So you can see here that this is a broadcast link via satellite microwave communication.

Now the question arises as to where we put the antenna in the sky. There are two alternatives. One alternative is that the satellites can be at a relatively low height, say about 800 Km above the earth; in that case a large number of satellites have to orbit the earth, and communication can be done with their help. But it has been found that the most conventional, or the most popular, arrangement is the geostationary orbit.

Let us go back to the previous diagram (Refer Slide Time: 33:45). What we want is that the satellite should remain stationary with respect to the earth. That means if this is the earth, and here there is one ground station and here is another ground station, and they want to communicate with the help of the satellite (Refer Slide Time: 34:11), then for the communication to be reliable and steady it is essential that this satellite remains stationary with respect to the earth. How can that be achieved? It can be achieved by placing the satellite in the geostationary orbit, at a height of about 35784 Km, which is quite large. So at this height you put the antenna, and it remains stationary; because it is placed in the geostationary orbit, the relative position of the ground stations with respect to the satellite does not change.

(Refer Slide Time: 34:50)

What are the advantages of this?

One big advantage is that the satellite can cover a very large area, because the height of the antenna is very large, and it can be shown that with the help of three satellites the entire globe can be covered.

For example, here you have the earth; you place one satellite here, another satellite 120 degrees away, and a third satellite another 120 degrees away (Refer Slide Time: 35:40). With the help of these three satellites the entire globe can be covered, so to communicate with any ground station on the earth only three satellites are sufficient; this is one very big advantage. However, there is now competition to put satellites in the sky: all the countries are competing with each other to put satellites of their own in the sky, and that has created a problem, because we cannot have arbitrarily many satellites; there are restrictions.

For example, you have only 360 degrees in the geostationary orbit. A minimum spacing of 4 degrees between satellites is required if we use the 4/6 GHz band, and a minimum spacing of 3 degrees is required if we use the 12/14 GHz band; as we shall see, these are the popular frequency bands used for satellite microwave communication.
So the number of satellites is restricted because of this minimum spacing, which is needed so that there is no interference between the signals generated by two satellites using the same frequency band, the 4/6 GHz band or the 12/14 GHz band.

Another very important characteristic is the use of transponders: a single orbiting satellite operates on a number of frequency bands, called transponder channels. That means a single satellite can operate on a number of frequency bands within, say, the 4/6 GHz band.

(Refer Slide Time: 38:36)

With the help of a number of transponders, communication with many ground stations is possible. Moreover, the satellite receives a transmission on one frequency band, amplifies or repeats the signal, and transmits it on another frequency band; so you have two frequencies, as you can see. One is known as the uplink frequency and the other is the downlink frequency. Here is your satellite and here is your ground station; these two communicate with the help of the two frequencies. You have to use two separate frequencies so that there is no interference, and with this a two way communication is possible.

Here are the common frequency bands used. As you can see, the C band is the 4/6 GHz band: the downlink frequency range is 3.6 to 4.2 GHz and the uplink frequency band is 5.925 to 6.425 GHz; this is the downlink frequency band and this is the uplink frequency band. Similarly, the Ku band is the 12/14 GHz band: the downlink frequency is 11.7 to 12.2 GHz and the uplink frequency is 14.0 to 14.5 GHz.

Similarly, there is another frequency band known as the Ka band, in the range of roughly 18/28 GHz, where the downlink frequency is 17.7 to 21.0 GHz and the uplink frequency is 27.5 to 31.0 GHz.
(Refer Slide Time: 41:06)

Here one point you must notice is that the downlink frequency band is lower, while the uplink frequency is higher. What is the reason? One possible reason is that, as you know, a higher frequency is attenuated more, because the attenuation is proportional to the square of the frequency, or 1/lambda squared. Since the satellites are powered by solar cells, there is a constraint on the availability of power, so the satellite cannot transmit at high power.

On the other hand, the ground station can put much higher power into the antenna, so the uplink, which uses the higher frequency band, can be driven with higher power and can therefore sustain the higher attenuation. The downlink signals, transmitted by satellites which are powered by solar cells, cannot have very high power, so the lower frequencies are chosen for them, and the attenuation is lower for the downlink signals than for the uplink signals. The higher attenuation of the uplink signals is compensated by the higher power of the ground stations.
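As a rough illustration of this argument, using the same free space loss relation mentioned earlier (with assumed values; this sketch is not from the lecture), the Python fragment below compares the loss over the roughly 36000 Km path at a typical C band downlink frequency of 4 GHz and uplink frequency of 6 GHz; the uplink suffers a few dB more loss, which the higher powered ground station can afford.

import math

C = 3e8                   # speed of light in m/s
PATH_M = 36_000_000       # approximate ground-to-satellite distance in meters

def free_space_loss_db(distance_m, frequency_hz):
    # L(dB) = 20 * log10(4 * pi * d * f / c)
    return 20 * math.log10(4 * math.pi * distance_m * frequency_hz / C)

downlink = free_space_loss_db(PATH_M, 4e9)   # about 195.6 dB
uplink = free_space_loss_db(PATH_M, 6e9)     # about 199.1 dB
print(downlink, uplink, uplink - downlink)   # difference of about 3.5 dB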

There is another very important innovation, which is the Very Small Aperture Terminal (VSAT); it is a very low cost solution. Here, as you can see, a number of subscriber stations are equipped with low cost VSAT antennas. Whenever communication has to be established among a number of geographically dispersed stations, this is a very convenient technique for achieving it.
(Refer Slide Time: 42:03)

Here, using some protocol, these stations can share a satellite channel for transmission to a hub station; the system is hub based, there is a central station called the hub station, and the hub station can exchange messages with each of the subscribers as well as relay messages between the subscribers. So we can see that VSAT is a very low cost alternative for communication among a number of stations. Let us see how this is done.

This is the typical VSAT configuration and this is the hub. Here you have the hub and these are the ground stations. Here (Refer Slide Time: 42:45), as you can see, the size of these antennas is small, and that is why it is named so.
(Refer Slide Time: 43:18)

VSAT stands for Very Small Aperture Terminal. By Very Small Aperture Terminal we mean that the diameter of this antenna is quite small compared to the hub. At the hub the diameter can be around, say, ten meters; on the other hand, these VSAT terminals can have a much smaller diameter, around 1 m or a little more, depending on the frequency band that is being used. For a Ku band satellite it can be around 1 m or even lower.

So you can see here how communication is taking place. Each of these VSAT antennas can send a signal, as you can see here, 1, 2, 3, up to n such stations, and as we can see they go in some time division multiplexed form to the hub. Then the signal coming from the hub is broadcast by the satellite and goes to all the other stations; that means here you have a number of other stations, known as the slave stations, which will be receiving this signal. So here you can see that each station is sending at the rate of 56 Kbps; these 56 Kbps streams are all time shared and reach the hub. On the other hand, a 256 Kbps data stream comes from the hub to the satellite and is then broadcast to all the VSAT receivers.

What are the advantages of this satellite communication? One advantage is coverage of wide geographical areas, as I mentioned: you can cover a very large area, for example the entire globe with the help of three satellites. Another very important advantage is independence from the terrestrial communication infrastructure. Whenever you want a wired infrastructure you have to first lay the cable.

For example, in our country recently there was deployment of optical fiber cable by Reliance and various other groups, and you have seen that cable laying is really a complex and tedious process. With VSAT that kind of infrastructure is not required: you simply put up a VSAT antenna and do the communication, and it can be anywhere. Because of this it has very high availability; in a wired configuration there may be many problems because the signal is relayed through a number of stations, but here the path is essentially direct and the only relay is the satellite, so the availability of satellite microwave communication is very high. Another important feature is that the communication costs are independent of the transmission distance.

(Refer Slide Time: 44:52)

Whenever it is wired communication, the charge varies with distance, even for our telephone calls.

For example, if you want to make a call from Kharagpur to Calcutta, the charge is much less than if you want to make a call from Kharagpur to Bombay, because of the larger distance; that means STD rates vary with distance. But here that is not so: the communication costs are independent of the transmission distance. You can go from any point to any point in the country, or to any other part of the globe, and the cost of communication is the same. It is also a very flexible network configuration. It is flexible because it can be scaled up very easily: the number of stations that communicate with each other can be increased very easily and in a flexible manner, since you do not have to deploy anything but the antennas at the different locations. This allows rapid network deployment.

Another very important advantage is centralized control and monitoring. You have a central hub for monitoring and controlling, so you have very good control; that is why centralized control and monitoring is feasible in this particular technology. As a consequence, satellite microwave communication is becoming very popular and is finding use in various areas. I have listed only a very limited number of applications, but the application domain is very large.
For example, it is used for television distribution. We are familiar with LPTs (Low Power Transmitters) and high power transmitters; actually the Low Power Transmitters and high power transmitters receive the television signals from the studio with the help of the satellite. It is also used for long distance telephone transmission.

(Refer Slide Time: 51:09)

Long distance telephone transmission is possible with the help of this satellite communication. As we have seen, with terrestrial microwave you have to put repeaters at a distance of about 60 to 80 Km; instead of terrestrial microwave you can use a satellite and achieve the same with the help of VSAT. You can also set up a private business network using VSAT. For example, an organization can have offices in different parts of the country, and if they want to communicate with each other they can have VSAT antennas at the various places and the main hub in one place.

This will allow them to have a private business network, or they can set up an intranet, where the LANs of the organization in different places are linked with the help of VSAT antennas; we shall discuss this in more detail later.

Many other applications, like video conferencing and in-house training, can also be supported with the help of this satellite microwave communication. These are some of the important areas where it is being used. NICNET, for example, is the network of the National Informatics Centre; they have VSAT antennas at the district headquarters, with their central hub in Delhi, and they collect all the data with the help of the VSAT antennas. This is one such very important application.

Another example is the Education and Research Network (ERNET). This educational research network was also set up with the help of VSAT antennas, using a satellite microwave network. Nowadays you will find advertisements on TV for DD Direct, which means that in your home you can receive television signals with a very small antenna and a small set top box; that is also possible with the help of satellite microwave signals. And the NPTEL project, for which this lecture is being recorded, will also transmit signals to various colleges in the country with the help of a satellite microwave system.

However, this has a number of disadvantages. One very important disadvantage is the long propagation delay.

(Refer Slide Time: 52:24)

As we have seen, in case of a geostationary satellite the antenna is located about 36000 Km away from the earth; the signal has to travel 36000 Km up and another 36000 Km back down to the receiver. So this round trip delay is about a quarter of a second. That poses a problem, particularly in identifying whether two stations are sending together; that means collision detection is difficult, and this has to be taken care of.
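The quarter second figure is easy to verify; the short Python sketch below (an illustrative aid only, not from the lecture) computes the propagation delay for the roughly 36000 Km geostationary path.

# Propagation delay over the geostationary satellite path
DISTANCE_KM = 36_000      # approximate ground-to-satellite distance
SPEED_KM_S = 300_000      # speed of light, roughly 3 x 10^5 km/s

# Up to the satellite and back down to the receiving ground station
up_and_down = 2 * DISTANCE_KM / SPEED_KM_S
print(up_and_down)        # about 0.24 s, i.e. roughly a quarter of a second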

Another disadvantage is that it is inherently a broadcast facility. Since it is inherently a broadcast facility, whatever the antenna transmits is heard by all the receivers, so it is difficult to transmit private data.
(Refer Slide Time: 52:43)

However, this can be taken care of by a suitable encryption technique; new techniques are emerging with the help of which security can be achieved. We shall discuss this later on.

Finally, because the signal is passing through the open medium, there are various problems. You have to perform error control, because data can get corrupted, and sometimes the system faces the problem of mismatched sending and receiving speeds, which is handled by flow control.

(Refer Slide Time: 53:15)


So you have to perform error control and flow control for reliable communication. We shall discuss these two issues, error control and flow control, later on.

Finally comes infrared communication, in the range of 0.03 to 200 terahertz. It is very useful for point to point and multipoint applications within confined areas, such as a single room. It uses transceivers that modulate non-coherent infrared light. In this case also, since it is light, there has to be a line of sight, either directly or via reflection from a light colored surface such as the walls or ceiling of the room.

(Refer Slide Time: 54:17)

One advantage is that in this case the signal does not penetrate walls, so secrecy can be maintained very easily. Another very important advantage is that there is no frequency allocation issue, so no license is required. In this context IrDA plays a very important role; IrDA stands for Infrared Data Association. It is a consortium of about 150 companies formed to maintain and develop standards. IrDA has developed standards with the help of which communication can be performed over very short distances.
(Refer Slide Time: 56:29)

So the objective is to have seamless device to device communication over a short distance, typically 1 m. Nowadays, if you want to connect your laptop to a computer you have to use a cable, or if you want to connect a printer to a laptop you have to use a cable; that can be avoided with the help of this IrDA infrared communication. There is no need for a cable, and sometimes the cable itself poses a problem.

However, the data rates are not very high. Initially they were 2400 bits per second to 115.2 Kbps, and this has now been extended to 1.15 to 4 megabits per second. A layered protocol has also been developed: here is the physical layer, IrPHY, and this is the data link layer, with components based on HDLC, namely IrLAP and IrLMP. These protocol layers have been developed by IrDA, the Infrared Data Association.

For example, here you have got a transmitter that can transmit over an angle of 15 to 30
degrees, and here is a receiver which can receive over an angle of 15 degrees, and
communication can be done over a distance of about 1 m. This is the IRDA standard
that has been developed for infrared communication. With this we come to the conclusion
of our discussion of data communication using wireless media.

Here are the review questions.


(Refer Slide Time: 57:07)

1) On what parameters does the quality of transmission depend in case of transmission through unguided media?

2) What are the factors responsible for attenuation in case of terrestrial microwave
communication?

3) What parameters decide the spacing of repeaters in case of terrestrial microwave


communication?

4) Why are two separate frequencies used for uplink and downlink transmission in case
of satellite communication?

5) Why are uplink frequencies higher than downlink frequencies in case of satellite
communication?

These questions will be answered in the next lecture. Here are the answers to the
questions of lecture number 5.
(Refer Slide Time: 57:44)

1) On what parameters does the quality of transmission depend in case of guided transmission media?

As I have discussed it is mainly decided by the frequency of transmission and the


characteristics of the transmission medium. That happens in case of guided media.

2) Why are wires twisted in case of twisted-pair transmission media?

As I have discussed in the last lecture it minimizes electromagnetic interference between


the pairs of wires which are bundled together so that the cross talk is minimal.

3) Give a popular example where coaxial cables are used for broadband signaling.

One popular use is cable TV, with which you are familiar; that is where coaxial cable is used.

4) Find out the critical angle for a step-index optical fiber for the given value of n1 and the index parameter of 0.04.

Here n1 was given as 1.48. The calculation is shown on the slide, and as you can see the
angle comes out to be roughly 82 degrees for these parameters.
(Refer Slide Time: 58:22)

It can be very easily found because at the critical angle the angle of refraction is 90
degrees, so from that you can very easily calculate it.

(Refer Slide Time: 59:18)

And as you can see here, the angle of incidence has to be more than 82 degrees so that you
get total internal reflection; otherwise the signal will pass out through the cladding
material. So here is the answer to the last question.

In what way do multi-mode and single-mode fibers differ?


Essentially the difference lies in the core diameter, the cladding diameter and the repeater
spacing. The repeater spacing is 2 km for multi-mode fiber, whereas for single-mode fiber,
as we can see here, the core diameter is 8 to 12 micron, the cladding diameter is 125
micron and the repeater spacing is 20 km.

(Refer Slide Time: 59:37)

So with these we come to the end of this lecture thank you.


Data Communication
Prof. Ajit Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture No # 7
Transmission of Digital Signal-I

Hello and welcome to today’s lecture. In the last couple of lectures we have discussed
about various transmission media.

Now we shall focus on the signal that we transmit through the communication media.
Today we shall discuss the transmission of digital signals; this is the first lecture on the
transmission of digital signals.

(Refer Slide Time: 01:21)

So the topic is Transmission of Digital Signal - I. Here is the outline of the lecture. First I
shall give a brief introduction; then we shall discuss the important characteristics of line
coding; then the popular line coding techniques such as unipolar, polar and bipolar; then we
shall consider a very important characteristic, namely the modulation rate of the various
codes discussed in this lecture; and finally we shall compare the line coding techniques
covered in this lecture.
(Refer Slide Time: 01:58)

And obviously on completion the students will be able to explain the need for digital
transmission, why digital transmission is required. They will be able to explain the basic
concepts of line coding, explain the important characteristics of line coding, they will be
able to distinguish among various line coding techniques such as unipolar, polar and
bipolar and they will be able to distinguish between the data rate and modulation rate.

(Refer Slide Time: 02:15)

Before I discuss about the transmission of digital signal let us very quickly give an
overview of the transmission media that we discussed in the last two lectures.
(Refer Slide Time: 03:23)

As you remember, we discussed two types of transmission media: guided media and
unguided media. The first few here, for example UTP, coaxial cable and optical fiber,
belong to the guided transmission media. On the other hand radio, microwave, satellite and
infrared belong to the unguided transmission media. The first column gives you the cost
comparison: obviously UTP is of lowest cost, coaxial cable has moderate cost, optical fiber
has got high cost, radio communication has got moderate cost, microwave has got high
cost, satellite has got high cost and infrared has got low cost.

In terms of bandwidth as you know UTP has got lowest bandwidth and optical fiber has
got the highest bandwidth then so far as the unguided media is concerned microwave
gives you the highest bandwidth and of course microwave is used in two cases. The first
one essentially terrestrial microwave and second one is satellite microwave which gives
you high bandwidth.

In terms of attenuation we find optical fiber attenuation is the lowest and of course for
microwave as you can see the attenuation is variable. Depending on the atmospheric
condition it will vary.

Electromagnetic interference is high in the first two cases, UTP and coaxial cable. In case
of optical fiber it is minimal, as you can see here. For radio the electromagnetic
interference is very high, for microwave and satellite it is also high, and for infrared it is
low.

And so far as security is concerned, UTP and coaxial cable are not at all secure; they are
prone to eavesdropping. On the other hand optical fiber has got very high security, because
tapping it is very difficult. Radio, because of its broadcast nature, has got very low
security. Microwave and satellite have got moderate security, because it is not that easy to
receive the signal from a terrestrial microwave or satellite link; you have to set up an
antenna, which requires a good amount of effort. On the other hand infrared has got high
security because the signal is confined within a room.

So with this background we shall now discuss the transmission of digital signals.
But before doing that let us consider the various options available to us. As you know, you
can have digital data which is converted into a digital signal by a suitable encoding
technique; similarly you can have analog data which can also be converted into a digital
signal by suitable encoding; then you can have analog data which is converted into an
analog signal by a suitable modulation technique, which we shall discuss later; and finally
you can have digital data which is converted into an analog signal by suitable modulation
techniques. So these are the various options available to us.

(Refer Slide Time: 06:30)

The question arises as to what type of signal we should use: a digital signal or an analog
signal? As we shall see, each of these signals has its own characteristics and features, good
points and bad points. As we know, a digital signal requires a low-pass channel or
transmission medium and its bandwidth requirement is more. On the other hand an analog
signal requires a bandpass communication medium or bandpass channel, and most of the
practical channels that we shall encounter are bandpass in nature, as we shall see.

So depending on the situation and the bandwidth availability we shall choose either
digital signal or analog signal. So in some situation we shall use digital signal and in
some other situation we shall go for analog signal so we shall use both.

So in this lecture we shall start with digital data and digital signal. As you know, the
bandwidth of the medium plays a very crucial role when we send a signal through a
transmission medium. We have to match the bandwidth of the signal with the bandwidth of
the transmission medium so that the signal can pass efficiently without much attenuation or
distortion, and this requires a good encoding technique, which we shall discuss in this
lecture. So in this lecture we shall consider digital data which is converted into a digital
signal, and the technique that is used is known as line coding. The line coding technique is
used for converting digital data into a digital signal. This can be explained with the help of
a simple diagram.

(Refer Slide Time: 09:03)

Here you have got the digital data 1000001. This data may be coming from the memory of
a computer or may be stored on a digital disk. We apply it to a line coder, which converts
it into a signal as you can see here. This signal can be transmitted through the
communication media.

Line coding is the technique that we shall discuss today.

What are the important characteristics that we have to consider for line coding?

Here some of the important characteristics are given. The first one is the number of signal
levels. As we do the encoding, the number of signal levels can remain 2, because binary
data can be coded with two levels, or you can do multilevel encoding. For example, as
shown here, your data has got two levels, 0 and 1, and the signal has also got two levels:
one is 0 V and the other is 2A volts; that means the amplitude is either 2A or 0. So it has
got two different levels.
(Refer Slide Time: 10:28)

Here you have two data levels and two signal levels. This is an example.

And at the bottom, as you can see, you again have two data levels, 1 and 0, so it is binary
data, but it has got three signal levels: plus A, 0 and minus A, that is positive voltage, zero
voltage and negative voltage. So as you can see here you have got positive A, then 0, then
negative A. These are the three levels being used here for the purpose of encoding. And
whenever you do multilevel encoding the information rate can change: as we know,
I = 2B log2 M, which means the number of signal levels M being used, together with the
bandwidth B of the medium, decides the information rate. So to achieve a higher
information rate we can go for multilevel encoding if necessary.
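
As a small illustrative aside (not from the lecture itself), the relation I = 2B log2 M can be evaluated directly in a few lines of Python; the function name and the example bandwidth below are just assumptions chosen for illustration.

    from math import log2

    def max_information_rate(bandwidth_hz, levels):
        # Nyquist relation quoted above: I = 2 * B * log2(M),
        # where B is the bandwidth and M the number of signal levels.
        return 2 * bandwidth_hz * log2(levels)

    # For example, a 3000 Hz channel gives 6000 bps with 2 levels
    # and 12000 bps with 4 levels.
    print(max_information_rate(3000, 2), max_information_rate(3000, 4))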

Then comes the question of bit rate versus baud rate.

As I explained in the last lecture, the bit rate is essentially the data rate, the rate at which
the bits are transmitted. On the other hand the baud rate is essentially the number of
signaling elements per second. The number of signaling elements used can be more than
the data rate or less than the data rate. If you use multilevel encoding then the baud rate
will be less than the data rate; in some cases the baud rate can be more than the data rate.
Hence we have to decide on that, but obviously we shall prefer a lower baud rate to achieve
a higher rate of data transmission through the transmission medium.

Then comes the question of the DC component. As we know, whenever we try to send a
signal through a transmission medium it is difficult to send a DC signal because of the
presence of transformers and capacitors in the equipment through which the signal passes;
because of the transformers and capacitors the DC component will be blocked, which will
lead to attenuation.
But sometimes there will be a DC component. For example, if the encoding uses 0 V for
one level and plus K volts for the other, then, as you can see, this level is used for 1, this
for 0, this again for 1 and so on, so the average value of the signal will have a DC
component, and that DC component will not pass through the medium. That is why we
shall try to do the encoding in such a way that no DC component is present; that means we
shall transform the data into a form of signal which will not have a DC component.

Then comes the question of signal spectrum.

Signal spectrum will play an important role because the bandwidth of the signal that we
are trying to send through a medium will decide whether it will be distorted or not. That
means the bandwidth has to match with the transmission media so signal spectrum has to
be taken into consideration when you do the encoding.

Noise immunity: We have to see whether the encoding that we do has higher noise
immunity or not. As we have seen, some transmission media have got inherent noise; we
have already discussed this in the lecture on transmission impairments. There we saw that
we have to transmit the signal in the presence of noise, and obviously if the encoding is
done such that it has higher noise immunity it will be beneficial. Then we can try to do
encoding which will help us in detecting errors. Whenever the signal is sent through the
communication media, errors are introduced in the signal because of various types of
noise. Thus if, at the receiving end, you can detect errors because of the encoding used,
that will be helpful. So we shall look into it.

Then comes the question of synchronization.

Consider synchronization. A signal is sent by the transmitter, and we want the same thing
to be received at the receiving end. For that, the receiver must be able to identify exactly
when the signal goes from low to high or from high to low.
(Refer Slide Time: 16:58)

That means these transition positions have to be identified at the other end. Usually a clock
is used at the transmitting end to generate the signal, and similarly a clock is used at the
receiving end to recover it. Now these two clocks may not have exactly the same
frequency, and that may lead to loss of synchronization. Not only may they differ in
frequency, their phases may also be different, so it is necessary to synchronize them. For
the purpose of synchronization there are two alternatives. One possibility is that a separate
line or channel is used for sending the clock to the receiver, which the receiver can then
use for receiving the data; but that is not really feasible because of high cost. So commonly
it is necessary to regenerate the clock from the received data with the help of some special
hardware such as a phase-locked loop.

However, this regeneration of the clock at the receiving end is possible only when the
signal has got a sufficient number of transitions. Suppose you are sending a long sequence
of 0s or a long sequence of 1s: if the encoded signal does not have enough transitions then
the phase-locked loop will not be able to regenerate the clock, so it is necessary to have
enough transitions in the signal for the purpose of synchronization. Therefore while doing
the encoding we have to take this synchronization aspect into consideration. Finally we
have to consider the cost of implementation, because ultimately you have to encode the
signal at minimum cost. So these are the important characteristics you have to look into
for encoding.

Now let us see the various line coding schemes that are commonly used. The line coding
techniques can be broadly divided into three types: unipolar, polar and bipolar. These are
the three basic schemes that are used.
(Refer Slide Time: 17:32)

The unipolar scheme is the simplest, and here you have got two voltage levels. Uni means
one, so you are sending a voltage of only one polarity, say 2A.

(Refer Slide Time: 18:23)

And when you are not sending anything, that is considered to be 0 V. That means when you
are not sending anything the level is 0 V, and when you are sending something the level is
2A volts. So you are sending a signal of only one polarity, and when you are not sending,
that is considered as 0 V. That is why, although you are sending only one polarity, we may
consider it as two voltage levels. So don't get confused when we say unipolar and at the
same time two voltage levels: essentially we are sending a voltage of one polarity, but
whenever the signal is not present we consider it 0.

For example, for 1 we send a voltage of 2A, and when it is 0 we don't send anything.
Similarly for the next 1 we again send 2A, and for 0 we don't send anything. That means
the data is represented by the voltage level: 1 is the voltage level 2A and 0 is no voltage,
i.e. no signal is sent. That is how unipolar encoding is done.

Unipolar encoding has got the following characteristics. It uses only one polarity of
voltage level, as I mentioned, and here the baud rate is the same as the data rate; that
means the modulation rate or baud rate equals the data rate. In other words you have got
only one signaling element per bit. Unfortunately, as you have seen, we are using a signal
of only one polarity, and as a consequence the average value of the signal received at the
other end will have a DC value, so there will be a DC component present in the signal.

(Refer Slide Time: 19:14)

And as you can see from this signal whenever you have got long sequences of 0s or long
sequences of 1s only one signal level is transmitted or no signal is transmitted. In other
words there will be no transitions and that will lead to loss of synchronization for long
sequences of 0s and 1s. So this unipolar scheme is possibly the simplest but it is obsolete.
It is not used because of the limitation.

For example, the DC component is present, which is a bad feature; loss of synchronization
is a bad feature; and of course the baud rate is the same as the data rate, which we may not
consider good, but it is definitely not bad. On the other hand it is simple because it uses a
signal of only one polarity. Let us now see the polar schemes.
(Refer Slide Time: 20:39)

In case of polar encoding essentially two voltage levels are used: one is positive and the
other is negative. So we are using one positive voltage and one negative voltage, and the
polar encoding scheme has got several alternatives: NRZ (Non-Return-to-Zero), RZ
(Return-to-Zero), Manchester encoding and differential Manchester encoding. These are
the four different encoding schemes available under polar encoding.

Let us consider their features one after the other, starting with Non-Return-to-Zero, that is
NRZ. In this particular case, as you can see from this diagram, the voltage level is constant
during a bit interval. It has got two varieties, NRZ-L and NRZ-I.

(Refer Slide Time: 22:56)


First let us consider NRZ-L; in NRZ-L (Non-Return-to-Zero-Level) the L indicates that the
data is represented by the signal level, and here the binary value 1 is represented by the
low level. As you can see here, binary value 1 is represented by the low level minus A and
0 is represented by the high level plus A. So here we are using signals of two polarities,
plus A and minus A, and these two polarities are used to represent binary 0 and 1: 1 is
represented by the minus A polarity voltage and 0 is represented by plus A. That means
the state of the data is represented by the signal level, and that level remains constant
during a bit interval, as I mentioned. So you see here that during the bit interval the signal
is either plus A or minus A (Refer Slide Time: 22:40): when the bit is 1 it is minus A and
when it is 0 it is plus A.

Let us look at the other alternative NRZ I where I stands for inversion. Here in case NRZ
I what we are doing is for each one in the bit sequence the signal level is inverted.

(Refer Slide Time: 23:11)


Here, as you can see, this is your 1, for which the level was minus A. Whenever the next 1
is encountered, you can see that the signal makes a transition from minus A to plus A.
Another 1 is here, so at this boundary it again makes a transition, from plus A to minus A.
Here there are two 0s, so there is no transition. So whenever a 1 is encountered the signal
makes a transition, i.e. it gets inverted from the previous signal level.

So here it was inverted from minus A to plus A, and here it was inverted from plus A to
minus A because of the presence of this 1; again another 1 is encountered here, and here it
is inverted from minus A to plus A. This is how a transition from one voltage level to the
other takes place in this particular case. In the previous case (NRZ-L), as you can see
(Refer Slide Time: 24:11), there is a transition whenever the data changes from 1 to 0 or
from 0 to 1, whereas in this case (NRZ-I) there is a transition whenever a 1 is encountered,
including consecutive 1s.

But whenever you have a long sequence of 0s there is no transition in the signal, and that
will lead to loss of synchronization. So here are the characteristics of NRZ encoding. It has
got two voltage levels, plus A and minus A, and the baud rate is the same as the data rate
because the number of signaling elements per bit is 1. In other words, M is equal to 2.
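
To make the two NRZ variants concrete, here is a minimal Python sketch (an illustration added to this write-up, not part of the lecture); the values +1 and -1 stand for the +A and -A levels, and the initial level assumed for NRZ-I is +1.

    def nrz_l(bits):
        # NRZ-L with the convention used above: 1 -> -A, 0 -> +A,
        # one constant level per bit interval.
        return [-1 if b == 1 else +1 for b in bits]

    def nrz_i(bits):
        # NRZ-I: invert the current level for every 1, keep it for every 0.
        out, level = [], +1
        for b in bits:
            if b == 1:
                level = -level
            out.append(level)
        return out

    print(nrz_l([0, 1, 0, 0, 1, 1]))   # [1, -1, 1, 1, -1, -1]
    print(nrz_i([0, 1, 0, 0, 1, 1]))   # [1, -1, -1, -1, 1, -1]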

(Refer Slide Time: 24:53)


Loss of synchronization for long sequences of 1s and 0s, whenever you send long
sequence of one or long sequence of 0 you will find there will be loss of synchronization.
And in this case if you look at the signal spectrum you will find that most of the energy is
concentrated between 0 and half the bit rate as it is shown in this diagram.

(Refer Slide Time: 25:44)

Here the signal spectrum for NRZ-I and NRZ-L is shown. On the x axis you have got the
normalized frequency f/r, where r is the bit rate, and on the y axis the mean square voltage
per unit bandwidth. You find that most of the energy lies below half the bit rate: taking the
bit rate as 1, most of the energy is between DC and about 0.5. So the major part of the
signal energy is concentrated between 0 and half the bit rate. This is the spectral
characteristic of NRZ encoding, both NRZ-L and NRZ-I.

Synchronization has been found to be a major concern in the two schemes that we have
discussed. To overcome that we can use RZ encoding, which is known as Return-to-Zero.
Here, for each bit, the signal returns to 0: if the data bit is 1 the signal is plus A and at the
middle of the bit it returns to 0 and remains there; if the bit is 0 the signal value is minus A
and again at the middle of the bit it returns to 0.

(Refer Slide Time: 26:52)

So here we see that for each bit there are two transitions, one from low to high and another
from high to low. Thus two transitions per bit take place irrespective of whether the bit is
0 or 1.

(Refer Slide Time: 28:51)


So if you have long sequences of 0s or long sequences of 1s you will not face any
difficulty, because the number of transitions is quite large, and as a result it is very good
for synchronization. However, it uses three levels of encoding, so in principle each signal
element could carry log2 3 bits, i.e. the information content could be more; but
unfortunately you are encoding only binary data, so the data rate is not increased although
the code has the capability of a higher information rate.

However, the baud rate is double the data rate because of the number of transitions it
makes, so it is not at all an efficient encoding; the baud rate is equal to twice the data rate,
and it is definitely bad that the signal requires a higher bandwidth for a given data rate. On
the other hand it provides good synchronization, as you have seen. The increase in
bandwidth is the main limitation or concern, because the signal has got double the
bandwidth for the same data rate. So RZ encoding is good from the viewpoint of
synchronization but bad from the viewpoint of the bandwidth requirement.
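
As an illustrative sketch (not from the lecture), RZ can be written with two half-bit values per bit, which also makes the doubling of the signaling rate visible; +1, 0 and -1 stand for +A, 0 and -A.

    def rz(bits):
        # RZ as described above: a 1 is +A for the first half of the bit and
        # returns to 0 for the second half; a 0 is -A and then returns to 0.
        out = []
        for b in bits:
            out += [+1, 0] if b == 1 else [-1, 0]
        return out

    # Two signal values per data bit, hence the baud rate is twice the data rate.
    print(rz([1, 0, 1]))   # [1, 0, -1, 0, 1, 0]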

Let us see how we can go for a better scheme. Manchester encoding is also a polar
encoding: here too we are using plus A and minus A, signals of two polarities. However,
as you can see, each bit interval can be considered as divided into two phases (hence
biphase), the first half and the second half. In Manchester encoding there is a mid-bit
transition in every bit.

(Refer Slide Time: 29:42)


In the mid-bit transition, for a 1 the transition is from minus A to plus A, and for a 0 it is
from plus A to minus A. If the next bit is again a 1, it again has to go from minus A to plus
A in the middle of that bit (Refer Slide Time: 30:26), so at the beginning of the bit it first
makes a transition from high to low; only then is the low-to-high mid-bit transition
possible. That means whenever you have two consecutive 1s the transitions will be low to
high, then high to low at the bit boundary, then low to high again.

On the other hand, whenever you have got two consecutive 0s, there will again be
transitions from high to low, low to high and again high to low. So the number of
transitions is more for consecutive 1s and consecutive 0s. However, when you have got
alternating 0s and 1s the number of transitions is one per bit.

In any case you have got enough transitions. This is Manchester encoding, where the data
is represented by the direction of the transition, from high to low or from low to high, at
the mid-bit position.

(Refer Slide Time: 32:48)


Let us see the other one, differential Manchester, which is also a biphase encoding. Here
the presence of a transition at the beginning of a bit position represents a 0. This is the
beginning of one bit position, this is the beginning of another and this is the beginning of
yet another (Refer Slide Time: 31:52); if the bit is a 0 then there is always a transition at
the beginning. For example, here there is a transition, but it can be from high to low or
from low to high, there is no distinction. If the bit is a 1 there is no transition at the
beginning, so the level depends essentially on the previous bit.

Whether the bit is 1 or 0, in the middle there is always a transition, which is used for the
purpose of synchronization. That means in the middle there is always a transition, as you
can see here and here at the mid-bit positions, but at the beginning there is a transition only
if the bit is a 0.

So this is a 0 bit, so there is a transition at the beginning; this is a 0 bit, so there is a
transition at the beginning; this is another 0 bit, so there is a transition at the beginning.
Hence there is a transition here, here and here for all the 0s, and on the other hand there is
a transition in the middle of every bit for the purpose of synchronization. So you can easily
recover the data from this, and at the same time it provides very good synchronization. It
has got two voltage levels, plus A and minus A, as you have seen, and
there is no DC component.

As you can see here alternately you are making it high and low and high and low so the
average value will be 0 so there will be no DC component present. It is a very good
feature and it provides you very good synchronization because number of transitions per
bit element will be at least one. There may be more than one transition in case of
consecutive 0s or consecutive 1s but it gives you at least one transition so it gives you
very good synchronization.

(Refer Slide Time: 34:13)


However, the bandwidth is increased here: it requires a higher bandwidth due to the
doubling of the baud rate with respect to the data rate. Even when you are sending a
sequence of 0s or a sequence of 1s the baud rate is double the data rate, which is a bad
feature and definitely not advantageous. But because of the other two features, no DC
component and good synchronization, it is attractive and widely used in many practical
encoding schemes.
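
Here is a minimal Python sketch of both biphase schemes as described in this lecture (an added illustration; +1/-1 stand for +A/-A, each bit is emitted as two half-bit values, and the starting level assumed for differential Manchester is +1).

    def manchester(bits):
        # Manchester: a 1 is a low-to-high mid-bit transition, a 0 is high-to-low.
        out = []
        for b in bits:
            out += [-1, +1] if b == 1 else [+1, -1]
        return out

    def diff_manchester(bits):
        # Differential Manchester (convention used above): a transition at the
        # start of the bit means 0, no transition means 1; every bit still has
        # a mid-bit transition used for synchronization.
        out, level = [], +1
        for b in bits:
            if b == 0:
                level = -level          # extra transition at the bit boundary
            out += [level, -level]      # mid-bit transition
            level = -level              # level at the end of the bit
        return out

    print(manchester([1, 1, 0, 0]))
    print(diff_manchester([1, 0, 0, 1]))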

Let us look at the bandwidth of these two signals Manchester encoding and differential
Manchester encoding with respect to NRZ I and NRZ L.

(Refer Slide Time: 35:21)

As you can see, in case of NRZ-I and NRZ-L most of the energy was very close to 0, i.e.
near DC. On the other hand, for Manchester encoding there is no DC component, there is
no signal component close to 0 V, and most of the energy is concentrated around the bit
rate, that is around 1 on the normalized frequency axis.

However, the spectrum spreads further, so it can have higher-frequency signal components
in both cases, and it gives you a kind of bandpass characteristic. This means the signal can
be sent very efficiently through a bandpass channel, because the frequency spectrum has
its largest components around the middle and smaller components on either side. So it is a
very good frequency characteristic, except that the bandwidth requirement is higher.

(Refer Slide Time: 35:43)

Now let us focus on the bipolar encoding schemes. We have discussed unipolar and polar,
and now we are focusing on bipolar encoding. In bipolar encoding we shall be using a
technique known as AMI, Alternate Mark Inversion. The terminology 'mark' has come
from the days of telephony: in telephony signals used to be sent in the form of mark and
space, and that terminology has been borrowed here. AMI is one kind of bipolar encoding,
known as bipolar AMI. It uses three voltage levels: plus A, 0 and minus A. Unlike
Return-to-Zero, the 0 level is used here to represent the bit 0. In RZ encoding the 0 voltage
level did not represent either 0 or 1; the signal returned to 0 both for 0 and for 1.

(Refer Slide Time: 37:54)


But here it is not so: the 0 bit is represented by 0 V. On the other hand a 1 can have either a
positive or a negative voltage, and it alternates: for this 1 it is plus A, the next 1 will be
minus A, the next one plus A, the next one minus A, and so on. That is why the binary 1s
are said to have alternating positive and negative voltages, and this alternation can be
sensed to identify the 1s.

However, if there is a long sequence of 0s then obviously there will be no transition so


this will again lead to loss of synchronization as we shall see.

So here there is no DC component, because positive and negative pulses alternate.
However, loss of synchronization occurs for a long sequence of 0s. On the other hand it
requires a lesser bandwidth, which is an advantage.
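
A minimal Python sketch of bipolar AMI as described above (added for illustration; +1/-1/0 stand for the three levels, and it is assumed that the first 1 is sent as +A):

    def bipolar_ami(bits):
        # Bipolar AMI: a 0 bit is sent as the 0 level; successive 1 bits
        # alternate between +A and -A, so there is no DC component.
        out, last = [], -1
        for b in bits:
            if b == 0:
                out.append(0)
            else:
                last = -last
                out.append(last)
        return out

    print(bipolar_ami([1, 0, 1, 1, 0, 0, 1]))   # [1, 0, -1, 1, 0, 0, -1]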

Somewhat similar to bipolar AMI we have got pseudoternary. It is the same as AMI but
the alternating positive and negative pulses occur for binary 0 instead of binary 1. In the
previous case we have seen that alternating voltages are taking place for 1. In case of
pseudoternary if we replace the 1s by 0s and 0s by 1s it will be pseudoternary.

(Refer Slide Time: 38:28)


So the characteristics of pseudoternary will be the same as those of bipolar AMI. The only
difference is that here binary 1 is represented by the 0 level and binary 0 by alternating
positive and negative voltages. That is the difference between bipolar AMI and
pseudoternary, so you can use either of the two, as they have the same characteristics.
Obviously there is a very good feature, no DC component; unfortunately there is loss of
synchronization for long sequences of the bit mapped to the 0 level, which is a drawback.
However, it has a very attractive feature, which is lesser bandwidth, as we shall see in the
subsequent diagrams.

Before we compare the various techniques that we have discussed in this lecture, two
important parameters, which I have already mentioned, have to be taken into
consideration: the data rate and the modulation rate.

The modulation rate is expressed in terms of baud, and the data rate, as we know, is
expressed in terms of bits per second. Now what is the relationship? The relationship
between data rate and baud rate is D = R/b, where D is the modulation (baud) rate, R is the
data rate and b is the number of bits per signal element. That means a signal element can
represent more than one bit or fewer; depending on that the baud rate can be less than or
more than the data rate.

(Refer Slide Time: 41:52)


For example, if L is the number of different signal levels being used, then R = D log2 L,
where R is the data rate and D is the modulation rate. That means the data rate R will be
more whenever you use a larger number of signal levels, as we shall see, and the efficiency
will be more whenever your data rate is more than the baud rate.
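
As a quick numerical aside (not from the lecture), the relation D = R / log2 L can be checked directly; the function name and the numbers below are illustrative assumptions.

    from math import log2

    def modulation_rate(data_rate_bps, levels):
        # D = R / log2(L): signal elements per second needed for a data rate R
        # when each signal element can take one of L levels.
        return data_rate_bps / log2(levels)

    # With 4 levels (two bits per signal element) only half as many signal
    # elements per second are needed as with 2 levels.
    print(modulation_rate(64000, 2), modulation_rate(64000, 4))   # 64000.0 32000.0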

Let us see what the situation is for the codes that we have discussed.

(Refer Slide Time: 42:08)


This table shows the modulation rates (Refer Slide Time: 42:00). The minimum modulation
rate is 0 for NRZ-L, NRZ-I and bipolar AMI, since for some data patterns of all 0s or all
1s the signal does not change at all. For the pattern 1 0 1 0, i.e. alternating 1s and 0s, it is
1.0 for NRZ-L, 0.5 for NRZ-I, 1.0 for bipolar AMI, 1.0 for Manchester and 1.5 for
differential Manchester.

And look at the maximum, the maximum occurs in case of differential Manchester say
2.0, 2.0 both for Manchester and differential Manchester. And as you can see here from
the view point of modulation rate we find that bipolar AMI has got good characteristic on
the other hand the Manchester and differential Manchester requires higher modulation
rate compared to the other schemes and as a result they are not very attractive from this
view point.

However, we have seen that Manchester encoding gives you a very good synchronization
facility. That means even when you are sending a long sequence of 0s or a long sequence
of 1s it will not lead to synchronization failure, because at the other end the phase-locked
loop will be able to regenerate the clock without any problem.

If we compare the bandwidth of the different signals we find that the NRZ L and NRZ I
has got lower bandwidth. However, because of the presence of DC component and the
low pass nature of the signal it is not attractive.

(Refer Slide Time: 44:07)

On the other hand Manchester and differential Manchester encoding has got very good
bandwidth spectrum shape. It has got most of the frequency components in the middle
however it has got higher bandwidth. The AMI and pseudoternary has got lesser
bandwidth and it has got very good frequency spectrum and most of the energy is
centered around the middle of the frequency and there is no DC component as you can
see. Hence this is the bandwidth comparison of the different encoding techniques that we
have discussed.

Of course there are other special-purpose codes used in digital transmission. One is known
as 2B1Q, two binary one quaternary. Compared to polar or bipolar it uses four voltage
levels, in contrast to the two voltage levels commonly used in polar. Here we see that 0 0
is sent as minus 3 V, 1 1 as plus 1.5 V, 0 1 as minus 1.5 V and 1 0 as plus 3 V. So it has
got four voltage levels and it gives you a higher data rate; recalling the previous
relationship, R = D log2 L.

(Refer Slide Time: 46:57)

So here R = D log2 L and L is 4. Because it is using four levels the data rate will be two
times the baud rate, and as a result each signal element is able to represent two bits: the
signal element minus 3 represents 0 0, the signal element plus 1.5 represents 1 1, and so
on. So we see that by using this multilevel encoding we are able to send more data with a
lesser number of signal elements; in other words, with a lesser baud rate your data rate is
more. However, because of the larger number of signal levels it will be more prone to
error, so the signal-to-noise ratio of the medium has to be better; only
then can this be used.
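
The 2B1Q mapping quoted above can be written as a small lookup table; the following Python sketch is an added illustration (the voltage values are the ones stated in the lecture, and the function name is made up).

    TWO_B_ONE_Q = {
        (0, 0): -3.0,
        (0, 1): -1.5,
        (1, 0): +3.0,
        (1, 1): +1.5,
    }

    def encode_2b1q(bits):
        # Take the bits two at a time (an even number of bits is assumed) and
        # emit one of four voltage levels per pair, so the baud rate is half
        # the data rate.
        return [TWO_B_ONE_Q[(bits[i], bits[i + 1])]
                for i in range(0, len(bits), 2)]

    print(encode_2b1q([0, 0, 1, 1, 0, 1, 1, 0]))   # [-3.0, 1.5, -1.5, 3.0]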

Then there is another special-purpose code known as MLT-3, Multiline Transmission with
three levels. Here also we use three levels, and it is very similar to NRZ-I. In NRZ-I we
have seen that inversion is used; here, instead of two levels, we are using three levels.
(Refer Slide Time: 47:31)

Here we see that a signal transition occurs at the beginning of each 1 bit: whenever the bit
is a 1 the level changes, say from 0 to plus 1, then from plus 1 back to 0, then from 0 to
minus 1, and so on. In this way it uses three voltage levels, with a transition occurring at
the beginning of each 1. So it is very similar to NRZ-I but uses three different levels; that
is the basic difference, and the transition at the beginning of each 1 bit is the characteristic
of this code.
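
A minimal Python sketch of MLT-3 (added for illustration, not from the lecture): the level steps through the cycle 0, +1, 0, -1, moving one step at the beginning of every 1 bit and staying put for every 0 bit.

    def mlt3(bits):
        # MLT-3: three levels, with a transition only at the start of each 1 bit.
        cycle = [0, +1, 0, -1]
        idx, out = 0, []
        for b in bits:
            if b == 1:
                idx = (idx + 1) % 4
            out.append(cycle[idx])
        return out

    print(mlt3([1, 1, 1, 1, 0, 1]))   # [1, 0, -1, 0, 0, 1]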

Now we have seen that bipolar AMI has got very good bandwidth characteristics, but
unfortunately its synchronization behavior is poor; this can be improved by a technique
known as scrambling. Scrambling can be considered as an extension of bipolar AMI.
Particularly when we are sending a signal over a long distance, the bandwidth of the
channel is very precious.
(Refer Slide Time: 50:04)

When it is short-distance communication, as in a local area network, bandwidth is not
really very important. On the other hand, whenever we are sending over a long distance,
bandwidth has to be utilized very judiciously. In such cases we shall go for this kind of
special scheme and try to achieve better utilization of the bandwidth. Some of the desired
characteristics of these special codes are: no DC component, no long sequences of the 0
level in the line signal, and so on. That means whenever there is a long run at the 0 level
we shall forcibly introduce some transitions, with minimum increase in bandwidth, as we
shall see in the next lecture.

Then it should have some error detection capability. Examples of such scrambling
schemes are B8ZS and HDB3 which we shall discuss in the next lecture. So in today’s
lecture we have discussed about several schemes like NRZ, RZ and bipolar schemes and
here are some of the review questions.

1) Why do you need encoding of data before sending over a medium?

2) What are the four possible encoding techniques? Give examples.

3) Between RZ and NRZ encoding techniques which requires higher bandwidth and
why?
(Refer Slide Time: 50:55)

4) How does Manchester encoding help in achieving better synchronization?

Here are the answers to the questions of lecture-5.

On what parameters does the quality of transmission depend in case of transmission through unguided media?

(Refer Slide Time: 51:12)


Answer: In case of unguided media the transmission quality primarily depends on the
size and shape of the antenna and the bandwidth of the signal transmitted.

So we have already discussed in detail the unguided media. There we have seen the size
and shape of the antenna and the bandwidth of the signal that we are sending will play
important role.

2) What are the factors responsible for attenuation in case of terrestrial microwave
communication?

(Refer Slide Time: 51:46)

We have seen that the attenuation is represented by the formula 10 log10 (4 pi d / lambda)^2
dB. So it depends on the distance, and the attenuation increases as the square of the
distance. The attenuation also increases with frequency, since it is inversely proportional to
lambda squared. Moreover, attenuation is higher during rainfall; in other words, when
there is rain the attenuation is more.

3) What parameters decide the spacing of repeaters in case of terrestrial microwave


communication?

The parameters are the height of the antenna h and the adjustment factor K, based on the
relationship d = 7.14 sqrt(Kh), where d is the distance in kilometers between the two
antennas and h is the antenna height in meters. Based on the height of the antenna used,
one can calculate the distance; usually the value K = 4/3 is used in computing it.

4) Why are two separate frequencies used for uplink and downlink transmission in case of
satellite communication?
As I mentioned in the lecture, two separate frequencies are used so that one cannot
interfere with the other and full-duplex communication is possible. If we used the same
frequency there would be interference and we could not have full-duplex communication.

(Refer Slide Time: 52:53)

So to achieve full duplex communication two separate frequencies are used.

5) Why are uplink frequencies higher than the downlink frequencies in case of satellite
communication?

(Refer Slide Time: 53:44)


The reason I mentioned was that the satellite gets its power from solar cells, so the
transmitter on board cannot be of very high power. On the other hand the ground station
can have much higher power. Since we want less attenuation and a better signal-to-noise
ratio on the weaker link, lower frequencies are more suitable for the downlink, and higher
frequencies are commonly used for the uplink. So this is the answer to question 5, which
was the last question. Thank you.
Data Communication
Prof. Ajit Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture No # 8
Transmission of Digital Signal - II

Hello viewers welcome to today’s lecture on transmission of digital signal.

(Refer Slide Time: 01:01)

This is the second lecture on this topic. In this lecture we shall cover the following topics.
(Refer Slide Time: 01:25)

First I shall give a brief introduction; this will be followed by scrambling coding schemes;
then we shall discuss the basic concepts of block coding and the block coding steps; then
the conversion of analog data to digital signal, which is a different type of conversion; then
the two basic approaches for converting analog data to digital signal, namely Pulse Code
Modulation and Delta Modulation; and finally we shall discuss the limitations of Pulse
Code Modulation and Delta Modulation and compare these two approaches.

On completion of this lecture the students will be able to explain scrambling coding
schemes, explain the need for block coding, explain the operation of block coding, explain
the coding techniques used for conversion of analog data to digital signal, distinguish
between the coding techniques such as PCM and DM which are used for converting analog
data to digital signal, and compare the advantages and limitations of Pulse Code
Modulation and Delta Modulation.
(Refer Slide Time: 01:59)

Here is the summary of what we discussed in the last lecture. In the last lecture we started
our discussion of the transmission of digital signals; essentially we discussed various
schemes for conversion of digital data to digital signal. As you know, the conversion
technique is known as line coding. These techniques can be classified into three broad
categories: unipolar, polar and bipolar. Unipolar, as you know, is not really used, but the
polar scheme has a number of different types such as NRZ, RZ, Manchester and
differential Manchester. All of these coding techniques we have discussed in detail.

(Refer Slide Time: 03:07)


We have also discussed one particular technique of bipolar encoding, namely AMI,
Alternate Mark Inversion.

(Refer Slide Time: 03:30)

Let us pick up where we left off in the last lecture. I mentioned that although AMI is a
good scheme it has a limitation: it does not provide good synchronization, although it is
very good from the viewpoint of bandwidth, since its bandwidth requirement is not high,
which is why it is attractive from the bandwidth point of view. So how can this problem be
overcome while retaining the good bandwidth feature? That is what is attempted in B8ZS,
which stands for Bipolar with 8-Zero Substitution.

(Refer Slide Time: 04:32)


As I mentioned, the limitation of bipolar AMI is overcome in B8ZS, which is used
particularly in North America. As we have seen, the main limitation of Alternate Mark
Inversion was that whenever you have a long sequence of zeros there is a lack of
synchronization; synchronization fails because there is no signal transition during a long
sequence of zeros. Today we shall discuss how that can be overcome.

What is done in this particular case is that whenever you have got 8 zeros, they are
replaced by the following encoding. If the previous pulse was positive, the sequence of 8
zeros is replaced by 0 0 0 plus minus 0 minus plus. On the other hand, if the previous pulse
was negative, the sequence of 8 zeros is replaced by 0 0 0 minus plus 0 plus minus. So,
where normally there would be 8 zeros, some of the zeros are replaced by positive pulses,
negative pulses and 0s. This substitution is done at the transmitting end and reversed at the
receiving end; only then is communication possible. So it is a kind of protocol followed by
both sides for data communication.

So, as you see here, as long as the number of consecutive zeros is less than eight there is
no replacement. Whenever eight consecutive zeros occur, they are replaced by this code.
Let me illustrate it with the help of an example.
(Refer Slide Time: 06:48)

Suppose the bit sequence is 0 0 0 1 1 followed by 8 zeros (1 2 3 4 5 6 7 8), then a 1, then 4
zeros, and after that 1 1 0 0. If we use bipolar AMI, there is a pulse for the first 1 here and
another for the next 1, with opposite polarity. As we know, bipolar AMI uses three levels:
the 0 level, positive B and negative B, and for each occurrence of a 1 the polarity of the
pulse is inverted. That means for this 1 it is plus B and for the next 1 it is minus B, and as
long as there are only zeros there is no pulse and hence no transition. Only when a 1
appears again at this point is there another pulse of opposite polarity: here it was a negative
pulse and here it is a positive pulse.

Now these 8 zeros are replaced by the pattern shown, where V stands for bipolar violation.
Why are we introducing violations?

The reason is that the other side should be able to recognize that this is not a proper AMI
code but has been forcibly introduced for the purpose of synchronization, so that at the
receiver it is replaced back by all zeros. That means, as you can see, these two bits remain
as they are; then after 0 0 0 there is a violation V, then a bipolar pulse B, then a 0, then
again a violation and a bipolar pulse (Refer Slide Time: 8:36); so there are two violations
in the substituted pattern. Since the previous pulse was negative, the 8 zeros are replaced
by 0 0 0 negative positive 0 positive negative, which is precisely what is done in this
particular case. And as you can see, now we have enough transitions, so that at the other
end the receiver will be able to regenerate the clock and synchronization will be achieved.

However, in this particular case, as you can see, there are four zeros 0 0 0 0 which are not
replaced, because, as I said, as long as the number of consecutive zeros is less than 8 no
replacement is done. So this is how the B8ZS code is formed and sent over the line. At the
other end it will be received, it will help you achieve synchronization, and you will be able
to get back the original data, 0 1 1 0 0 0 or whatever it was. From the violations the
receiver will know that these pulses have been introduced only so that enough transitions
are there; they are not really transitions belonging to the bipolar AMI code. This is how
B8ZS works.
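
Here is a minimal Python sketch of the B8ZS substitution (an added illustration, not the exact procedure shown on the slide): the data is AMI-encoded, and every run of eight 0s is replaced by 000VB0VB, i.e. 0 0 0 + - 0 - + if the preceding pulse was positive and 0 0 0 - + 0 + - if it was negative.

    def b8zs(bits):
        # +1/-1/0 stand for the three line levels; the first 1 is assumed +A.
        out, last, zeros = [], -1, 0
        for b in bits:
            if b == 1:
                last = -last              # normal AMI alternation
                out.append(last)
                zeros = 0
            else:
                out.append(0)
                zeros += 1
                if zeros == 8:            # substitute the last eight zeros
                    p = last              # polarity of the preceding pulse
                    out[-8:] = [0, 0, 0, p, -p, 0, -p, p]
                    last = p              # the final substituted pulse has polarity p
                    zeros = 0
        return out

    print(b8zs([1, 0, 0, 0, 0, 0, 0, 0, 0, 1]))
    # [1, 0, 0, 0, 1, -1, 0, -1, 1, -1]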

Now another alternative code is High Density Bipolar 3 zeros (HDB3). This particular
alternative is used in Europe and Japan, whereas the previous one is used in North
America.

(Refer Slide Time: 10:46)

It replaces a sequence of 4 zeros by a code as per the rules given in the table below. As
you can see, the substitution depends on the polarity of the previous pulse and on the
number of bipolar pulses since the last substitution. If the preceding pulse was negative
and the number of bipolar pulses since the last substitution is odd, the four zeros are
replaced by 0 0 0 negative; if that number is even, they are replaced by positive 0 0
positive. You may be asking why this is done: it is done so that the average value remains
0.

Similarly, if the polarity of the preceding pulse was positive and the number of bipolar
pulses since the last substitution is odd, the four zeros are replaced by 0 0 0 positive; if it is
even, they are replaced by negative 0 0 negative. Let me illustrate this with the help of an
example. The same example is shown here.
(Refer Slide Time: 11:54)

Here, as you can see, the number of bipolar pulses since the last substitution was two,
which is even, and the previous pulse was negative, so as per the table the four zeros are
replaced by positive 0 0 positive. So you see that here it is positive 0 0 positive.

Now here again another 4 zeros are there (Refer Slide Time: 12:28). In this case, since the
previous pulse is positive, the rule gives negative 0 0 negative, and that is what you see
here. This is how the 8 zeros are handled: the first 4 zeros are replaced by one code and the
next 4 zeros by the other, and as you see there are two positive and two negative pulses, so
the average value will be 0.

Similarly, here again there are 4 zeros, 0 0 0 0. So far one, two, three 1s have occurred, so
the number of pulses is odd, and the previous pulse is positive; in this case the rule for odd
and positive applies, so the zeros are replaced by 0 0 0 positive. Therefore here we get 0 0
0 positive. So you see that here also violations are introduced: as per the bipolar AMI rule
the next pulse should have alternated in polarity, but it does not. Violations are forcibly
introduced so that they can be identified as pulses inserted for synchronization, so that the
proper data is retrieved from the received signal and at the same time synchronization is
achieved.
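
Along the same lines, here is a minimal Python sketch of the HDB3 substitution rules from the table above (an added illustration; +1/-1/0 stand for the three levels, the first 1 is assumed to be sent as +A, and the pulse count is reset after each substitution).

    def hdb3(bits):
        out, last = [], -1     # polarity of the most recent pulse
        pulses = 0             # bipolar pulses sent since the last substitution
        zeros = 0              # length of the current run of zeros
        for b in bits:
            if b == 1:
                last = -last
                out.append(last)
                pulses += 1
                zeros = 0
            else:
                out.append(0)
                zeros += 1
                if zeros == 4:            # substitute the last four zeros
                    if pulses % 2 == 1:   # odd:  0 0 0 V
                        out[-4:] = [0, 0, 0, last]
                    else:                 # even: B 0 0 V
                        last = -last
                        out[-4:] = [last, 0, 0, last]
                    pulses = 0
                    zeros = 0
        return out

    print(hdb3([1, 0, 0, 0, 0, 1]))   # odd case:  [1, 0, 0, 0, 1, -1]
    print(hdb3([1, 1, 0, 0, 0, 0]))   # even case: [1, -1, 1, 0, 0, 1]
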
(Refer Slide Time: 13:53)

So here is the summary of the characteristics of B8ZS and HDB3. As we have seen it
uses three levels positive, 0 and negative and there is no DC component which is a very
good parameter or very good feature and it allows good synchronization for both the
codes and the advantage is most of the energy is concentrated around a frequency which
is equal to half the data rate. So the bandwidth is less and it is well suited for high data
rate transmission over long distances.

(Refer Slide Time: 15:07)


Let us look at the frequency spectrum. Here, as you can see, compared to the other codes
we have already discussed, B8ZS and HDB3 have a nice spectrum, with most of the energy
centered around the middle of the bandwidth.

Here 1 corresponds to the data rate, and most of the energy is concentrated in the middle of
the band, which is definitely a very good feature. It will allow transmission through a
medium with less distortion and attenuation, because it matches well with the bandpass
nature of most transmission media.

So we have completed all the codes. Now there is another type of coding known as block
coding. This block coding was introduced to improve the performance of line coding. So
far we have discussed various types of line coding which is used for conversion of digital
data to digital signal for sending through the transmission media.

(Refer Slide Time: 16:09)

But before doing the line coding, a technique known as block coding can be applied so that
the performance of line coding is improved. Performance in terms of what? Performance in
terms of synchronization, bandwidth and error detection; these are the three important
parameters considered. As I mentioned, the basic concept used here is redundancy.

It introduces redundancy to achieve synchronization, and because of the redundancy
introduced it also allows error detection to some extent.

Let us see the basic scheme used. The basic scheme has got three distinct steps. The first is
division: here is your input data, and the division step splits this sequence into blocks of
m bits each.
(Refer Slide Time: 17:17)

For example, if we are using 4B/5B encoding then m will be 4. If the data is 0 1 1 0 0 1 0 0
1, then it is divided into 0 1 1 0 as one block, 0 1 0 0 as another block (Refer Slide Time:
17:22), and so on; this is how it is divided into m-bit blocks. Then each m-bit block is
applied to an mB-to-nB substitution, so each m-bit block is replaced by n bits; in this case
each group of four bits is replaced by five bits. Finally these n bits are line coded, and that
output signal is transmitted through the transmission media. We have already learnt about
line coding. This is the basic scheme.

Let us see the example of 4B/5B encoding. Here, as you will see, the encoding is done a
little cleverly.

(Refer Slide Time: 18:10)


The five-bit codes are chosen in such a way that each has no more than one leading 0 and
no more than two trailing 0s. This ensures that transitions are there and that more than
three consecutive 0s do not occur. These are the two things that are done, and finally the
line coding is done using the NRZ-I code. What you are essentially doing is this: the 4B
codes have 2 to the power 4 = 16 symbols, while the 5B codes have 2 to the power 5 = 32
symbols, so the sixteen 4-bit codes are mapped to a subset of the thirty-two 5-bit codes
chosen such that these properties are satisfied; that is, each five-bit code has no more than
one leading 0 and no more than two trailing 0s, and more than three consecutive 0s do not
occur.

You may be asking how error detection is done. You see, only a subset of the five-bit
codes is used as valid codes (Refer Slide Time: 19:35). Now suppose at the receiving end a
received five-bit group is not among the valid codes; it becomes a non-code. When a
non-code is received, we know that some error has occurred during transmission from the
transmitter side to the receiver side. That is how error detection is possible in this case.

Let us see an example of how the encoding is done. Here there were 4 zeros, and the 4
zeros have been replaced by 1 1 1 1 0; this is five bits and that was four bits. Similarly
0 0 0 1 is replaced by 0 1 0 0 1. You can see here that when long sequences such as 4
zeros or 4 ones are present, a mixture of 0s and 1s is introduced, and, as I mentioned, the
requirement of no more than one leading 0 and no more than two trailing 0s is satisfied
here.
(Refer Slide Time: 20:07)

Here you find no more than one leading 0 and no more than two trailing zeros; here there
are two trailing zeros, but you will not find three trailing zeros anywhere. So this is how
the encoding is done. There are also some other codes which are used for other purposes,
like quiet, idle and halt, essentially control codes. But so far as the data codes are
concerned, the scheme satisfies the synchronization property and helps you to achieve
error detection.
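
A small Python sketch of the substitution step (added for illustration): only the two data entries quoted above are filled in here; the real table has all sixteen data codes plus the control codes mentioned (quiet, idle, halt and so on).

    FOUR_B_FIVE_B = {
        "0000": "11110",   # the block of four 0s quoted above
        "0001": "01001",
        # ... the remaining fourteen data codes, chosen so that each 5-bit code
        # has at most one leading 0 and at most two trailing 0s.
    }

    def encode_4b5b(data):
        # Divide the bit string into 4-bit blocks and substitute each block by
        # its 5-bit code; the result would then be line coded (e.g. with NRZ-I).
        # Blocks not present in this partial table would raise a KeyError.
        return "".join(FOUR_B_FIVE_B[data[i:i + 4]]
                       for i in range(0, len(data), 4))

    print(encode_4b5b("00000001"))   # 1111001001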

Another block coding example is 8B/10B: here you have got 8-bit data blocks which are
substituted by 10-bit codes, so each 8-bit block is encoded as 10 bits. Obviously it allows
more redundancy.

More redundancy provides more error detection capability, but unfortunately it leads to an
increase in bandwidth. Whenever you have got more redundancy there is more error
detection capability; that is advantageous, but the increase in bandwidth is not acceptable
(Refer Slide Time: 22:10). How do you overcome this problem? This problem can be
overcome by using a suitable line coding.
(Refer Slide Time: 23:33)

For example, let's assume that 6B/8B block encoding is done; then the line coding can be done using 8B/6T, where each 8-bit group is sent as six ternary symbols. Suppose after encoding the 8-bit data is 3F in hexadecimal, that is 0 0 1 1 followed by 1 1 1 1. While doing the line coding you are not generating 8 binary signal elements but 6 ternary values for transmission, say negative, 0, positive, negative, 0 and then positive, so you are generating six symbols during transmission. Obviously the bandwidth of this will be less compared to sending 8 binary signal elements. That is the advantage of this line coding: it reduces the bandwidth while at the same time providing higher redundancy, higher error detection capability and also synchronization.

We now change gear from digital data to digital signal conversion to analog data to digital signal conversion. There are many situations where we have to send analog data but we want to send it in the form of a digital signal, because sending a signal in digital form has many advantages from the viewpoint of signal to noise ratio and other parameters. For example, analog data such as voice, video and music are to be converted into digital signals for communication through the transmission media. One example is voice being sent through a telephone line. Here we are trying to send analog data, but as you know, in modern telephony it is not sent in analog form; it is converted into digital form and a digital signal is transmitted. Similarly, video and music can be sent. For this purpose there are two basic approaches.
(Refer Slide Time: 24:57)

The two basic approaches are; first one is known as Pulse Code Modulation (PCM) and
second one is known as Delta Modulation (DM).

So in this lecture today we shall discus about these two schemes; Pulse Code Modulation
and Delta Modulation one after the other.

Let us first concentrate on Pulse Code Modulation. The Pulse Code Modulation has got
three basic steps; sampling, quantization and line coding.

Suppose this is the analog signal; it has to be converted into digital form for sending through a transmission medium. The first step is known as sampling.

The sampling process converts the analog data, or analog signal you can say, into a form known as Pulse Amplitude Modulation; PAM stands for Pulse Amplitude Modulation. How is it done? The analog signal is sampled at regular intervals (Refer Slide Time: 26:32); obviously the sampling instants are equispaced. When you do the sampling you have to follow the Nyquist criterion.

The Nyquist criterion says that if the signal has a bandwidth B, then the sampling rate has to be at least 2B. Suppose you are sending a voice signal; the voice signal has a bandwidth of 4 KHz, so your sampling rate has to be 8 KHz. The sampling is done at these regularly spaced points (Refer Slide Time: 27:20), and each sampled value is held until the next sample is taken, then that value is held until the following sample, and so on. In this way the signal is converted into a staircase-like waveform, and this waveform is known as the PAM signal. The sampling and holding is done with the help of a hardware block known as a sample and hold circuit; the signal generated at the output of the sample and hold circuit is the PAM signal.

(Refer Slide Time: 29:29)

Now this PAM signal can take arbitrary amplitude values, so you have to do quantization. Quantization is done with the help of hardware known as an analog to digital converter, or A-D converter.

The A-D converter does the quantization, and it has a limited number of bits, say 8 or 6. If it is 8-bit then it has 256 different quantization levels. The analog value can take any possible voltage level, but it is now converted into one of a set of discrete values depending on the number of bits used in the analog to digital converter, and this leads to what is known as quantization error. After we have got the digital data it can be converted into a digital signal using suitable line coding, which we have already discussed.
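The three steps (sample, quantize, encode) can be put together in a minimal numerical sketch. The 1 KHz test tone, the ±1 V range and the uniform 8-bit quantizer below are illustrative assumptions only, not part of any standard.

```python
import math

fs = 8000          # sampling rate in Hz (at least twice the 4 kHz voice bandwidth)
n_bits = 8         # quantizer resolution
levels = 2 ** n_bits

def sample(f_signal, n_samples):
    """Sampled (PAM) values of a 1 V sinusoid of frequency f_signal."""
    return [math.sin(2 * math.pi * f_signal * n / fs) for n in range(n_samples)]

def quantize(x):
    """Map an amplitude in [-1, 1] onto one of 256 discrete levels (the PCM code)."""
    code = int((x + 1.0) / 2.0 * (levels - 1) + 0.5)
    return min(max(code, 0), levels - 1)

pam = sample(1000, 8)                       # one millisecond of a 1 kHz tone
pcm = [quantize(v) for v in pam]
print(pcm)                                  # 8-bit codes, ready for line coding
print("data rate =", fs * n_bits, "bps")    # 8000 * 8 = 64000 bps, as in the lecture
```

Each printed code would then be line coded, for example with one of the line codes discussed earlier, before transmission.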
(Refer Slide Time: 29:46)

This is the process: the analog input signal, which in the case of voice is obtained with the help of a microphone, is converted into a PAM signal using sample and hold; the A-D converter has two parts, a quantizer and an encoder, and it generates the digital output as shown here.

(Refer Slide Time: 30:03)

So this is the analog signal, these are the sampling pulses and this is the PAM signal (Refer Slide Time: 30:12); after quantization and encoding you get the binary output, the digital data, which can be line encoded for sending through the transmission media. This is the transmitter part, which generates the digital output signal from the analog input signal. This digital signal can be transmitted through the transmission media after line coding, received at the receiver, and then you have to perform the reverse process.

(Refer Slide Time: 30:54)

The reverse process is done first with the help of a digital to analog converter, or D-A converter, which generates an analog signal that has to be filtered. The quantized, staircase-like values have to be smoothed with the help of the filter, and then the analog signal is recovered from the system. So this side is the transmitter and this side is the receiver, as I have shown.

As an example let us consider an input signal which is voice, with a bandwidth of 4 KHz. As I said, the sampling frequency has to be at least twice the highest frequency in the input signal; otherwise a problem known as aliasing error will occur. You may be asking why the sampling rate has to be at least twice the maximum signal frequency: if this condition is not satisfied the original signal cannot be reconstructed correctly, and the resulting distortion is the aliasing error.
(Refer Slide Time: 33:12)

Sometimes the signal is band limited before sampling to avoid aliasing error. So the 4 KHz bandwidth signal is sampled at the rate of 8 KHz, and using an 8-bit A-D converter each sample value is converted into 8 bits. In one second you have 8 k sample values, and multiplying by 8 bits gives a data rate of 64 Kbps. So you see that a voice signal of 4 KHz bandwidth has a data rate of 64 Kbps when converted into digital data. This has to be properly encoded by line coding and then sent over the transmission media.

As I mentioned, whenever you use an analog to digital converter with a given number of bits, quantization error is introduced; essentially it depends on the step size. Suppose the range is from 0V to 5V and the quantizer uses 8 bits, that is 256 levels; then the step size is essentially 5V divided by 256. Any intermediate value within a step of 5V/256 is translated into one of the discrete values, and that leads to error. Moreover, whether the input signal is of low amplitude or high amplitude, we encode using the same step size; the conversion is not done any differently. With the constraint of a fixed number of levels, the situation can be improved by using a variable step size.
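As a quick check of these numbers, the step size and the worst-case error for the 0–5V, 8-bit case just mentioned can be computed as follows (a sketch using the lecture's figures, not a general rule for every converter).

```python
v_max = 5.0               # full-scale range: 0 V to 5 V
n_bits = 8
levels = 2 ** n_bits      # 256 quantization levels

step = v_max / levels     # ~19.5 mV per level
max_error = step / 2      # an input is at most half a step from its nearest level

print(f"step size       = {step * 1000:.2f} mV")
print(f"max quant error = {max_error * 1000:.2f} mV")
# With a fixed step, a 20 mV input and a 5 V input suffer the same absolute
# error, so the relative error (and hence the SNR) is far worse for small signals.
```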
(Refer Slide Time: 33:46)

Because of the fixed step size the signal to noise ratio is better for high-level signals; unfortunately, for small-amplitude signals the signal to noise ratio is poorer. How can that be overcome? It can be improved by using a variable step size, through a technique known as companding. It is essentially a technique that introduces non-linear encoding: whenever the signal amplitude is low we use a smaller step size so that we get better accuracy, and when the amplitude is high the step size can be larger. That is precisely what is being done in this case. For this purpose we can use a non-linear circuit to convert the signal into this form.

(Refer Slide Time: 35:30)


So here, as you see (Refer Slide Time: 35:41), when you are not using compression this particular curve is linear; the slope of the curve is the same for smaller and for higher values.

But when you are using compression, the slope is greater for smaller values than for higher values. After passing through this non-linear circuit you can apply the signal to the analog to digital converter, and this avoids the problem that we have already discussed.

So what we are doing is the steps are close together at low signal amplitude and further
apart at high signal amplitude. This improves the signal to noise ratio particularly when
the signal level is low. So, companding is the technique which can be used to improve the
signal to noise ratio.
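One widely used non-linear characteristic is the μ-law curve of telephony (A-law is the European counterpart). The sketch below applies the μ-law compression formula ahead of a uniform 8-bit quantizer on a ±1 V range; the value μ = 255 is the conventional choice, and the code is only meant to show the effect on a small-amplitude sample, not to be a standards-compliant codec.

```python
import math

MU = 255.0   # conventional mu-law parameter

def mu_law_compress(x):
    """Map x in [-1, 1] through the mu-law curve: small inputs are expanded."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def uniform_quantize(x, n_bits=8):
    step = 2.0 / (2 ** n_bits)               # uniform quantizer over [-1, 1]
    return round(x / step) * step

small = 0.01                                      # a low-amplitude sample
print(uniform_quantize(small))                    # plain PCM: rounded to the nearest 7.8 mV step
print(uniform_quantize(mu_law_compress(small)))   # companded value: relative error is much smaller
```

At the receiver the inverse (expanding) curve is applied, which is why the overall technique is called companding (compressing plus expanding).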

Now let us look at the limitations of Pulse Code Modulation PCM technique that we have
already discussed.

(Refer Slide Time: 37:08)

A pulse code modulated signal has a high bandwidth, as you have seen: a 4 KHz analog voice signal ends up requiring a data rate of 64 Kbps.

For a voice signal you require a data rate of 56 Kbps if you are using 7-bit encoding, or 64 Kbps if you are using 8-bit encoding. To overcome this problem a technique known as differential PCM can be used. It is based on the observation that a voice signal changes quite slowly from one sample to the next. Since the signal changes slowly, why not send the change rather than the absolute values? That is the basic idea of differential PCM.

Instead of sending the absolute values, the difference between two consecutive sample values may be sent; that is the basic idea of differential PCM. Since the signal is changing slowly, the differences have small values, so the number of bits required is smaller and the bandwidth is less. You can use an analog technique, a mixture of analog and digital techniques, or a purely digital technique; the differential PCM signal can be generated in any of these three ways, and a small sketch of the purely digital form is given below.
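A minimal, purely digital sketch of the idea: transmit the clamped difference from the previously reconstructed sample instead of the sample itself. The 4-bit difference word is an illustrative assumption, not a standard value.

```python
def dpcm_encode(samples, diff_bits=4):
    """Send the clamped difference between consecutive samples (digital DPCM sketch)."""
    lo, hi = -(2 ** (diff_bits - 1)), 2 ** (diff_bits - 1) - 1
    prev, diffs = 0, []
    for s in samples:
        d = max(lo, min(hi, s - prev))   # clamp the difference to the word size
        diffs.append(d)
        prev += d                        # track what the receiver will reconstruct
    return diffs

def dpcm_decode(diffs):
    out, prev = [], 0
    for d in diffs:
        prev += d
        out.append(prev)
    return out

slow = [0, 2, 5, 7, 8, 8, 7, 5]          # a slowly changing sample sequence
d = dpcm_encode(slow)
print(d)                                  # small differences: [0, 2, 3, 2, 1, 0, -1, -2]
print(dpcm_decode(d))                     # reconstructs the original sequence
```

The differences fit comfortably in 4 bits here, whereas the absolute samples would each have needed the full word.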

(Refer Slide Time: 38:50)

However, a very special case of differential PCM is DM, which stands for Delta Modulation. In what way is it a special case of differential PCM? In differential PCM we send the difference between two consecutive sample values. In Delta Modulation we reduce that difference to a single bit: we send either 0 or 1. If the difference is positive, that is, if the present sample value is more than the previous one, we send a one; and if the previous sample value is more than the present one, we send a 0. So instead of sending 8 bits per sample we now send only one bit per sample. That is the basic idea of Delta Modulation. Depending on whether the signal has gone higher or lower we output a one or a 0, one bit at a time, and as a consequence the bandwidth is much less, as you can see here.
(Refer Slide Time: 40:16)

This is an example of Delta Modulation. Here the analog signal is changing very slowly, as you can see, and this change is being monitored: if it is increasing, that is, the present sample value is greater than the previous one, we send ones, and whenever it is less we send zeros. Therefore we send one bit at a time; the bit stream to be sent is 1 0 1 0 1 0 1 0 1 1 1 and so on. You can see that per sample we send only one bit, so the bandwidth is less in the case of Delta Modulation.
You can use a circuit like this (Refer Slide Time: 41:00): an analog comparator takes the analog input, and with the help of a clock we sample whether the comparator output is 0 or 1. Depending on that output, a block that is essentially an integrator or adder regenerates the reconstructed signal; that is, the analog waveform is reconstructed locally, and through a one-bit unit delay it is fed back so that the previous reconstructed value is compared with the present input value (Refer Slide Time: 41:45) with the help of the comparator. With such a simple circuit you can generate the delta modulated signal: you get the zeros and ones, the binary output, which can be sent through the transmission media after suitable line coding.

(Refer Slide Time: 41:23)

As you can see here, the receiver is also very simple: a simple adder adds the previous value, after a one-bit delay, to the binary input, and the reconstructed waveform is obtained here. So we can see that Delta Modulation does not require complex hardware. Here are the advantages of Delta Modulation.

Simplicity of implementation: each sample is represented by a single binary digit, as you have seen, which makes it more efficient than the Pulse Code Modulation technique. A small software model of the encoder and decoder is given below.
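The encode/decode loop just described can be modelled in a few lines. The fixed step size delta = 1.0 and the toy input values are assumptions for illustration; a real delta modulator would also low-pass filter the reconstructed staircase.

```python
def dm_encode(samples, delta=1.0):
    """Delta modulation: output 1 if the input is above the running estimate, else 0."""
    approx, bits = 0.0, []
    for s in samples:
        if s > approx:
            bits.append(1)
            approx += delta      # integrator steps up
        else:
            bits.append(0)
            approx -= delta      # integrator steps down
    return bits

def dm_decode(bits, delta=1.0):
    """Receiver: integrate the +/- delta steps (then smooth with a low-pass filter)."""
    approx, out = 0.0, []
    for b in bits:
        approx += delta if b else -delta
        out.append(approx)
    return out

analog = [0.4, 1.5, 2.6, 3.7, 4.2, 3.1, 2.0, 0.9]   # slowly rising, then falling
bits = dm_encode(analog)
print(bits)              # one bit per sample: [1, 1, 1, 1, 1, 0, 0, 0]
print(dm_decode(bits))   # staircase approximation: [1.0, 2.0, 3.0, 4.0, 5.0, 4.0, 3.0, 2.0]
```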
(Refer Slide Time: 42:29)

However, there are two important parameters; the size of the step and the sampling rate.
These two play a very important role.

This fixed step size leads to overloading. What do we mean by overloading? As you can see here, when the analog signal is changing very slowly the data that you have to send becomes 0 1 0 1, sent alternately, because the staircase keeps hunting around the slowly changing input.

(Refer Slide Time: 44:51)


If the input changes stay within one step size, you end up sending 0 1 0 1 alternately; this is one form of overloading. Overloading occurs not only due to amplitude but also due to slope: whenever the signal exceeds what the modulator can follow we have overloading, and when the error arises because the slope of the input is too fast we call it slope overloading, as you can see in this diagram. Here this kind of slope is being converted properly; as you can see, we shall be sending consecutive ones, 1 1 1 1 1.

But in this particular case the step size and the sampling rate are such that the modulator is not able to follow the fast-changing signal. This leads to slope overloading. As you can see, when you reconstruct the signal from the received bits you do not get back what was transmitted, so this leads to distortion. The distortion arises because of the fixed step size and the sampling rate, and it can be overcome by using Adaptive Delta Modulation.

This technique can be used to overcome the problem of overloading, particularly slope overloading. The step sizes are controlled in such a way that they are small when the signal changes are small and large when the signal changes are large.

(Refer Slide Time: 45:22)

So the step size is not fixed: when the input changes slowly the step size is made smaller, and when the input changes rapidly the step size is made larger. That is how Adaptive Delta Modulation overcomes the overloading problem; a minimal sketch of one such adaptation rule follows. After that, let us compare Pulse Code Modulation with Delta Modulation.
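The rule used in this sketch (grow the step while the output bits repeat, shrink it when they alternate) is just one simple adaptation rule chosen for illustration; practical adaptive delta modulators use more refined variants.

```python
def adm_encode(samples, step=0.5, grow=2.0, shrink=0.5, min_step=0.1):
    """Adaptive delta modulation sketch: widen the step on repeated bits,
    narrow it when the bits alternate."""
    approx, prev_bit, bits = 0.0, None, []
    for s in samples:
        bit = 1 if s > approx else 0
        bits.append(bit)
        if prev_bit is None or bit == prev_bit:
            step *= grow                           # chasing a steep slope: bigger steps
        else:
            step = max(min_step, step * shrink)    # hunting around the input: smaller steps
        approx += step if bit else -step
        prev_bit = bit
    return bits

steep = [0.5, 2.0, 5.0, 9.0, 14.0, 15.0, 15.2, 15.1]   # fast-rising input
print(adm_encode(steep))   # the growing step lets the coder keep up with the slope
```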

We have seen that for a voice signal with 256 quantization levels, that is an 8-bit A-D converter, the data rate is 64 Kbps, and this requires a channel bandwidth of 32 KHz: with binary signaling the data rate can be at most twice the channel bandwidth, so for 64 Kbps the channel bandwidth has to be 32 KHz. Moreover it requires more complex hardware. That means PCM requires higher bandwidth and more complex hardware to generate the pulse code modulated signal.

On the other hand the Delta Modulation does not require higher bandwidth. However,
there is a point. Point is does Delta Modulation give you good quality signal or good
quality reproduction at the receiving end?

Unfortunately it has been observed that to obtain comparable quality in the case of Delta Modulation, a sampling rate of about 100 KHz is required to send a voice signal. As you have seen, for the pulse code modulated signal the sampling rate was 8 KHz, and after 8-bit encoding you were getting 64 Kbps. On the other hand, to get the same performance in terms of intelligibility and quality, the Delta Modulation sampling rate has to be around 100 KHz; otherwise you do not get that quality.

So we find that ultimately bandwidth was the key parameter in favor of Delta Modulation
and it is only possible whenever we compromise in quality and intelligibility. That means
if we compromise in quality and intelligibility then Delta Modulation requires lesser
bandwidth.

(Refer Slide Time: 49:13)

As you have seen, Delta Modulation requires much simpler hardware compared to Pulse Code Modulation. So Delta Modulation has some advantages and Pulse Code Modulation has some advantages, but whenever we want quality and do not want to compromise on quality and intelligibility, Pulse Code Modulation is preferred. That is the reason why in modern telephony you will find Pulse Code Modulation widely used rather than Delta Modulation.
But whenever we can compromise on quality and intelligibility we can use Delta
Modulation. So it’s a trade off, whether you want lesser bandwidth or higher quality. So
depending on these two we can choose either PCM or DM. But nowadays better quality
is more important that’s why the PCM is preferred.

We have discussed various techniques for encoding digital data into a digital signal and also for encoding analog data into a digital signal, so both types of encoding have been covered. Let us look at some application examples. We have discussed Manchester encoding; we shall find that Manchester encoding is used in the Ethernet LAN.

(Refer Slide Time: 50:11)

Differential Manchester encoding is used in the token ring LAN; we have already discussed this. Then we have discussed 4B/5B block coding with NRZ-I line encoding, which is used in the FDDI (Fiber Distributed Data Interface) LAN. So we find that the digital-to-digital encoding techniques are widely used in LAN technology. On the other hand, the Pulse Code Modulation technique is used in the Public Switched Telephone Network, or PSTN.

We shall discuss all these applications later on in detail. With this we come to the end of
today’s lecture. We have some review questions to be answered in the next lecture.
(Refer Slide Time: 51:19)

1) Why is B8ZS coding preferred over Manchester encoding for long distance communication?

2) Why is it necessary to limit the bandwidth of a signal before performing sampling?

3) Distinguish between PAM and PCM signals. What is the difference between a Pulse Amplitude Modulated signal and a Pulse Code Modulated signal?

4) What is quantization error? How can it be reduced?

Obviously we are asking this question with respect to PCM.

5) Explain how and in what situation differential PCM performs better than PCM.

These five questions will be answered in the next lecture.

Here are the answers to the questions of lecture-7.


(Refer Slide Time: 52:44)

1) Why do you need encoding of data before sending over a medium?

The answer is: suitable encoding of data is required in order to transmit the signal with minimum attenuation and to optimize the use of the transmission media in terms of data rate and error rate.

Apart from that I should add synchronization; that is another requirement which is achieved with the help of suitable encoding.

2) What are the four possible encoding techniques? Give examples.

The four possible encoding techniques are: digital data to digital signal, used in transmitters; analog data to digital signal, used in a codec (codec stands for coder-decoder); digital data to analog signal, used in a modem; and analog data to analog signal, an example being the telephone system.
(Refer Slide Time: 53:01)

3) Between the RZ and NRZ encoding techniques, which requires higher bandwidth and why?

RZ encoding requires more bandwidth as it needs two signal changes to encode one bit. We have seen that two signal changes are present per bit, so obviously RZ requires higher bandwidth than NRZ.

(Refer Slide Time: 53:35)

4) How does Manchester encoding help in achieving better synchronization?

In Manchester encoding there is a transition in the middle of each bit period, and the receiver can synchronize on that transition; hence better synchronization is achieved with Manchester encoding.

With these we conclude our discussion on digital transmission. In the next lecture we
shall discuss on transmission of analog signals, thank you.
Data Communication
Prof. Ajit Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture No # 9
Transmission of Analog Signal-I

Hello viewers welcome to today’s lecture on transmission of analog signal.

(Refer Slide Time: 01:06)

In the last two lectures we discussed the transmission of digital signals. There we saw how analog and digital data can be encoded into digital signal form for transmission through various types of transmission media, and how the encoding is done so that the bandwidth of the signal matches the bandwidth of the transmission media, so that the signal passes with less attenuation and less distortion and also provides the necessary features for synchronization, error detection and other purposes.

Now we shall focus on the transmission of analog signal. In this lecture we shall cover
the following topics: First we shall discuss why modulation? It is essentially an
introduction to this particular lecture, the need for modulation. Then we shall consider
various modulation techniques. And in this particular lecture we shall primarily focus on
amplitude modulation.
(Refer Slide Time: 02:11)

So we shall introduce to you the basic concepts of amplitude modulation; then the concept called modulation index will be introduced, along with the frequency spectrum of the AM signal and the average power of the different frequency components: whenever a signal is modulated it generates different frequency components, and we shall see how much average power each of them carries. Then we shall consider some special cases like Single Side Band (SSB) and Double Side Band Suppressed Carrier (DSBSC) transmission. Finally we shall discuss the recovery of the baseband signal.

After attending this lecture the students will be able to explain the need for modulation,
they will be able to distinguish between modulation techniques, they will be able to
identify the key features of amplitude modulation then they will be able to explain the
advantages of SSB and DSBSC. Finally they will be able to explain how the baseband
signal can be recovered from the received signal.
(Refer Slide Time: 03:46)

So this diagram gives you the basic scheme of analog data to analog signal. Here as you
can see we have used one analog data as an input to the analog to analog conversion
system.

(Refer Slide Time: 05:58)

Here, apart from the analog data, another auxiliary signal, usually a sinusoidal signal known as the carrier, is applied to this analog to analog converter. As you know, a sinusoidal signal is characterized by three important parameters: amplitude, phase and frequency. One of them individually, or a combination of them, is modified to generate what is known as the modulated signal. So here is your modulated signal, and the signal whose information is being impressed on the carrier is known as the modulating signal.

The modulating signal and the carrier are applied to the modulator, and this process of conversion from analog data to analog signal, which involves manipulation of one or more of the parameters of the carrier (amplitude, frequency or phase) according to the modulating signal, is known as modulation. You may be asking why modulation is necessary.

When we do modulation one important operation that is being performed on the signal is
known as frequency translation. What it does is it translates the signal from one region of
frequency domain to another region, it’s like this. Suppose this is the representation of the
signal in frequency domain so here you have got your f (Refer Slide Time: 6:44) and
obviously in this you have got the amplitude. Now suppose you have a signal with
frequency range say f1 and f2 so this is the range of frequency and we can say that this is
how you can represent it.

(Refer Slide Time: 07:32)

You have frequency components from f1 to f2. Now, by modulation, this signal can be translated to another frequency range, say f1 dash to f2 dash. Usually f1 dash and f2 dash are much higher than f1 and f2. So this is how the signal can be translated. However, the information content of the translated signal is such that the original signal can be recovered from it.

Now you may be asking what is the benefit of this?

One important benefit is that you will be able to use an antenna of practical size. When you translate the baseband signal to a higher frequency it can be transmitted through a bandpass channel using an antenna of smaller size. Suppose you are trying to send a 1 KHz signal; what is the wavelength for this? As we know, the wavelength will be 300 000 m; we have already discussed this.

So whenever you are trying to send a frequency of 1 KHz the wavelength is 300 000 m, and obviously the antenna has to be comparable to this size. An antenna of the order of 300 000 m is impractical; you cannot build it.

(Refer Slide Time: 09:19)

However, if you translate it to a frequency of say 10 MHz, this corresponds to a wavelength of only 30 m, and it is quite possible to have an antenna of the order of 30 m. So after translating to a higher frequency you can have an antenna of smaller size, and the modulated signal can be transmitted very easily using a smaller antenna. Of course, at the other end, after receiving it you have to do demodulation, which we shall discuss later.
you have to do demodulation which we shall discuss later.

So this is the first benefit of modulation, we will be able to have an antenna of practical
size.

What are the other benefits? Another important benefit of this modulation is narrow
banding.

Suppose you are trying to send the high fidelity audio frequency range, which extends from 20 Hz to 20 KHz; this is hi-fi audio, used for music and so on. The ratio of the highest frequency to the lowest frequency is quite high: the highest frequency is 20 KHz and the lowest is 20 Hz, so the ratio is a thousand. If you design an antenna for the lowest frequency it is not at all suitable for the highest frequency, and if you design an antenna for the highest frequency it will not be suitable for transmitting the lowest frequency. So a single antenna will not be able to transmit both ends of the band effectively and efficiently.

Now suppose you modulate it using a 1 MHz carrier. The lowest frequency is translated to 10 to the power 6 plus 20 Hz, and the highest becomes 10 to the power 6 plus 20 into 10 to the power 3 Hz.

(Refer Slide Time: 13:05)

Now if you take the ratio between the two, the ratio is only about 1.02. That means if you design an antenna for the highest frequency it will also be able to handle the lowest frequency very easily, because the ratio between the highest and lowest frequencies is only about 1.02. In other words, a single antenna will be able to transmit all the frequencies efficiently and effectively. This process is essentially narrowbanding.

Here we are converting a wideband signal; wideband not in terms of absolute values, since 20 Hz to 20 KHz are not really very high frequencies, but in terms of the ratio of the highest frequency to the lowest frequency. It is translated into a band that is very narrow in this relative sense, with a highest-to-lowest ratio of only about 1.02, and this allows you to transmit very effectively using a suitable antenna. Therefore this narrowbanding is another benefit of modulation.

The third benefit is multiplexing. Whenever you do multiplexing, suppose here is your original signal having a bandwidth from f1 to f2, where f1 is very close to 0; such a signal is usually called a baseband signal.

Now you translate it to a frequency range from f1 dash to f2 dash, so it is converted into a bandpass signal which can be transmitted through a bandpass channel. Another signal occupying the same original range can be translated to a different range, say f1 double dash to f2 double dash. Now both of them can be sent through the same channel, and you can separate them with the help of bandpass filters at the receiving end.

In other words, this allows you to send the two separate signals simultaneously through one transmission medium. This process is known as multiplexing, and in this form it is frequency division multiplexing.

As we can see here, the available bandwidth is divided into parts, one part for each signal, and in this way a number of frequency ranges can be sent through the transmission media; at the other end the receivers separate them by suitable filtering. So this is another benefit of modulation: it allows multiplexing.
(Refer Slide Time: 15:04)

Here are the various modulation techniques that are possible. As I told you, you can modify one of the three parameters: amplitude, frequency and phase. Whenever you vary the amplitude of the carrier you get amplitude modulation. The second alternative is known as angle modulation, where either the frequency of the carrier is modified based on the signal to be sent, the modulating signal, giving frequency modulation, or the phase is modified, giving phase modulation. These two together are known as angle modulation.

In this lecture we shall primarily focus on amplitude modulation. Here you see an example: this is a baseband signal of low frequency, in this case a sinusoidal signal, and this is your carrier. As you can see, the amplitude of the carrier has been modified with the help of this baseband signal: where the baseband amplitude is maximum you get maximum amplitude, where it is minimum you get minimum amplitude, so we get a carrier with time-varying amplitude, and this is known as the modulated signal.
(Refer Slide Time: 16:51)

Let us see how it is done. The waveform is represented by this equation em(t) is equal to
Em cos (2pi fmt).

(Refer Slide Time: 17:16)

Here we are considering the modulating waveform as a sinusoidal wave of frequency fm having maximum amplitude Em, and the carrier signal is represented by ec(t) is equal to Ec cos (2pi fct plus phic). Here you see that the maximum amplitude is Ec, the frequency is fc and the phase is phic.
Then the equation of the modulated signal can be given by s(t) is equal to (Ec plus Em cos 2pi fmt) cos 2pi fct; this is the signal which is generated after modulation. And there is an important parameter called the modulation index, which is represented by m.

(Refer Slide Time: 17:56)

The modulation index m is equal to (Emax minus Emin) by (Emax plus Emin), which works out to Em by Ec. If you look at this diagram you see this is your Emin and this is your Emax of the modulated signal (Refer Slide Time: 18:26). If Emax and Emin are the maximum and minimum values of the envelope, then m is equal to (Emax minus Emin) by (Emax plus Emin). As you can see here, Emax is equal to Ec plus Em, and the minimum amplitude Emin is equal to Ec minus Em.
(Refer Slide Time: 19:26)

Substituting these, the modulation index m is equal to (Emax minus Emin) by (Emax plus Emin) is equal to Em by Ec. So in this case we get m is equal to Em by Ec, where Em is the maximum amplitude of the modulating signal and Ec is the maximum amplitude of the carrier.

This kind of waveform is obtained whenever the value of m is less than one, that is, when the maximum amplitude of the modulating signal is less than the maximum amplitude of the carrier.
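A short sketch that generates the modulated waveform s(t) = (Ec + Em cos 2π fm t) cos 2π fc t and checks the modulation index from the envelope; the particular carrier and modulating frequencies used here are arbitrary illustrative values, not taken from the lecture.

```python
import math

Ec, Em = 10.0, 6.0          # carrier and modulating amplitudes (volts)
fc, fm = 1000.0, 50.0       # illustrative carrier and modulating frequencies (Hz)
fs = 20000.0                # time resolution of the sketch

t = [n / fs for n in range(2000)]
s = [(Ec + Em * math.cos(2 * math.pi * fm * ti)) * math.cos(2 * math.pi * fc * ti)
     for ti in t]

E_max, E_min = Ec + Em, Ec - Em               # envelope extremes: 16 V and 4 V
m = (E_max - E_min) / (E_max + E_min)         # modulation index from the envelope
print(m, Em / Ec)                             # both print 0.6
print(max(s))                                 # 16.0, the envelope peak Ec + Em
```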

(Refer Slide Time: 20:14)


Now let us see what happens whenever you increase the modulation index. As you
increase the modulation index or the value of m as you can see here the value of Em has
been increased compared to the previous diagram.

(Refer Slide Time: 20:44)

So here what has been done is that Em has been made equal to Ec. In that case you get the value of m equal to 1, because m is equal to Em by Ec.

For m equal to 1, as you can see, the two amplitudes are the same, so the minimum of the envelope is 0 and the maximum is 2Ec; the envelope varies from 0 to 2Ec, and we get the maximum modulation. This is the maximum permissible modulation, because, as we shall see in the next slide, when m is greater than 1 the value of Em has been made greater than Ec. When Em is greater than Ec, the value of m is greater than 1, and in such a case we get a waveform like this.
(Refer Slide Time: 21:39)

And it can be shown that whenever m is greater than 1 then it is not possible to recover
the signal at the other end. That means recovery of the signal will not be possible
whenever the modulation index is greater than 1. Let us now look at the frequency
spectrum of the modulated signal.

(Refer Slide Time: 23:08)

Let us consider modulation by a sinusoidal signal, that is, the modulating signal is a sinusoid. In that case the modulated signal is s(t) is equal to Ec [1 plus m cos 2pi fmt] cos 2pi fct, which can be expanded to Ec cos 2pi fct plus m Ec cos 2pi fmt cos 2pi fct. The product term can be written as m by 2 Ec cos 2pi (fc minus fm)t plus m by 2 Ec cos 2pi (fc plus fm)t. So we notice that there are three frequency components: one is fc, another is fc minus fm and the third is fc plus fm. After modulating a sinusoidal carrier with a sinusoidal modulating signal we get three frequency components, and their respective amplitudes are shown in this diagram.

(Refer Slide Time: 23:58)

Here we see that we get a carrier of peak amplitude Ec, a lower side band (Refer Slide Time: 23:30) and an upper or higher side band. Each side band has an amplitude of m by 2 Ec; so m by 2 Ec, Ec and m by 2 Ec are the amplitudes of the three components, namely m by 2 Ec cos 2pi (fc minus fm)t, Ec cos 2pi fct and m by 2 Ec cos 2pi (fc plus fm)t.

Since m, as you know, is the ratio Em by Ec, the side band amplitude m by 2 into Ec is equal to Em by 2: Ec cancels out and you are left with Em by 2 (Refer Slide Time: 25:02). That means the amplitude of the side frequencies is equal to Em by 2, which depends on the amplitude of the modulating signal.

Now what is the effect of this? Let us see it with the help of an example.

Suppose a carrier of 1 MHz and peak value 10V is modulated by a 5 KHz sine wave having maximum amplitude 6V; determine the modulation index and frequency spectrum. In this case, what is the value of m? The value of m is 6 by 10, where Em is equal to 6 and Ec is equal to 10, so m is 0.6, which is less than 1; that is very good, we get a good quality signal and the original signal can be recovered from the modulated signal that is received.
Now, what will be the range of frequencies? It is 1 MHz minus 5 KHz to 1 MHz plus 5 KHz, that is 10 to the power 6 minus 5 into 10 to the power 3 to 10 to the power 6 plus 5 into 10 to the power 3.

(Refer Slide Time: 26:40)

This is the range of frequencies, and the bandwidth is equal to the difference of the two, that is 10 into 10 to the power 3 Hz, or 10 KHz. So you see that the bandwidth is not very high: it is twice that of the modulating signal. This is shown in this particular diagram.

(Refer Slide Time: 26:50)


Here we see the modulation index as we calculated, 0.6, and since we are modulating with a pure sinusoid no other frequencies are present; we get three frequency components at 995 KHz, 1000 KHz and 1005 KHz.

These are the frequencies, and here we see the amplitudes of the three components: the carrier has 10V and the side bands have 3V each. This is the frequency domain representation of the modulated signal. The time domain representation we have already seen; it looks somewhat like this (Refer Slide Time: 27:38). The frequency domain representation, on the other hand, has three spectral components, as shown here (Refer Slide Time: 27:47).
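These numbers can be reproduced directly from the formulas above; the short script below simply evaluates the three component frequencies and amplitudes for the 1 MHz / 5 KHz / m = 0.6 example.

```python
fc, fm = 1_000_000.0, 5_000.0     # carrier and modulating frequencies (Hz)
Ec, m = 10.0, 0.6                 # carrier amplitude (V) and modulation index

components = {
    "lower side band": (fc - fm, m * Ec / 2),
    "carrier":         (fc,      Ec),
    "upper side band": (fc + fm, m * Ec / 2),
}
for name, (f, amp) in components.items():
    print(f"{name:16s} {f / 1000:8.0f} kHz  {amp:4.1f} V")
# -> 995 kHz and 1005 kHz at 3.0 V, 1000 kHz at 10.0 V; bandwidth = 2 * fm = 10 kHz
```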

Now let us consider the bandwidth when the modulating signal is not a simple sinusoidal wave but, say, an audio signal. In that case the audio signal has a bandwidth Bm; from almost 0 to Bm is the bandwidth of the audio signal. The modulated signal will then occupy the band from fc minus Bm to fc plus Bm, and as you can see its bandwidth is equal to 2 Bm. So the bandwidth of the modulated signal is twice Bm, as it is written there.

(Refer Slide Time: 29:13)

So you see that after modulating a carrier of frequency fc we get a frequency translated signal having twice the bandwidth of the modulating signal: the modulating signal has bandwidth Bm, so the modulated signal has bandwidth 2 Bm, and of course the amplitudes of the different frequency components depend on the modulation index, as we have seen. Now, how much power is needed to transmit this signal? It is very important to understand the power required for transmission of the analog signal. As we have seen there are three different frequency components, and each of them requires power for transmission through the antenna.
(Refer Slide Time: 30:47)

Let us assume that the power is developed across a resistor of value R. Then for the carrier the average power is Pc, equal to Ec squared by 2R, since it is a sinusoidal wave (Refer Slide Time: 30:07). Each side band has peak amplitude m Ec by 2, so its average power is (m Ec by 2) squared by 2R, which works out to Pc into m squared by 4. Since there are two side bands, to calculate the total average power you have to add the power of the carrier and the power of the two side bands. If you add them up you get the total power required for transmission, Pt equal to Pc into (1 plus m squared by 2).

One very interesting observation from this is that most of the power is used for transmission of the carrier. We have seen in the previous slide that the carrier power is Ec squared by 2R, and since the modulation index m is normally less than or equal to 1, the factor m squared by 4 is at most one fourth. That means each side band is transmitted with at most one fourth of the carrier power, so the two side bands together take at most half of the carrier power, while the full carrier power is spent on the carrier itself, which thus carries the bulk of the transmitted power.
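A quick numerical check of how the total power Pt = Pc(1 + m²/2) splits between carrier and side bands, using the same m = 0.6 example; the 50 ohm load is an assumed value for illustration.

```python
Ec, m, R = 10.0, 0.6, 50.0

Pc = Ec ** 2 / (2 * R)            # carrier power: 1.0 W
Psb = Pc * m ** 2 / 4             # each side band: 0.09 W
Pt = Pc * (1 + m ** 2 / 2)        # total transmitted power: 1.18 W

print(Pc, Psb, Pt)
print("fraction of power in the carrier:", Pc / Pt)   # about 0.85 for m = 0.6
```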
(Refer Slide Time: 32:21)

Now, for recovery purposes, do you really need all three frequency components? It has been found that we can recover the original signal even when the carrier is not transmitted; that is, with only the two side bands we can still recover it. This has led to what is known as Double Side Band Suppressed Carrier modulation: transmission is possible by suppressing the carrier and sending only the two side bands.

Another alternative is to send only one of the two side bands. The original signal can theoretically be recovered; there are some practical aspects that we shall consider later, but it is possible to recover the modulating signal from one of the two side bands. So if we can send only one of the two side bands and still recover the original, that is the modulating, signal, we shall be able to perform transmission with minimum power.

The power required for transmission, the power required to drive the antenna, will then be much less. This has led to two different types of modulation. One is Double Side Band Suppressed Carrier modulation, which utilizes the transmitted power more efficiently than Double Side Band AM transmission; the normal transmission, in which both the side bands and the carrier are transmitted, is known as DSB AM.
(Refer Slide Time: 33:55)

This is your normal DSB, Double Side Band, transmission. Now the carrier has been removed and, as we can see, only the two side bands remain. This is DSBSC (Refer Slide Time: 34:04), Double Side Band Suppressed Carrier transmission; you see the carrier is not present. To transmit the signal we now require less than half the power that normal Double Side Band transmission would need. This is the primary advantage of DSBSC modulation. However, it has some disadvantage in recovering the signal, which we shall discuss later.

As we have mentioned, another alternative is to use Single Side Band modulation. In Single Side Band modulation, as you can see, you send either the upper side band or the lower side band. In this case the power required for transmission is less than one fourth of the carrier power (since the modulation index is less than one), so not only can you transmit with less power, the bandwidth requirement is also reduced.

Here you see that for the upper side band the band is fc to fc plus fm, and for the lower side band it is fc minus fm to fc. So whenever there is a bandwidth crunch and you have to multiplex many signals, Single Side Band is the solution. As you can see, the modulated signal then has a bandwidth of only Bm, and as a consequence this is very efficient in terms of bandwidth as well as in terms of the energy required for transmission.
(Refer Slide Time: 35:48)

Now let us focus on the recovery of the baseband signal. How do you recover the baseband signal at the receiving end? Obviously we transmit the signal with the sole purpose of getting back the modulating signal: if you are sending an audio signal then at the other end we would like to get back the audio signal in its original form.

How can that be done? One approach is to multiply the signal a second time by the carrier. Let the baseband signal be m(t); after multiplication with the carrier the transmitted, modulated signal is m(t) cos Wct, where Wc is 2pi fc. At the receiver we multiply this modulated signal a second time by the carrier signal (the first multiplication, at the transmitter, produced the modulated signal); let us see the effect. We get m(t) cos squared Wct, which can be expanded to m(t) [1 by 2 plus 1 by 2 cos 2Wct]. So it has two components: one is m(t) by 2 and the other is 1 by 2 m(t) cos 2Wct.

So you observe that the baseband signal has reappeared here. However, not only has the baseband signal reappeared, you have also got two other frequency components, 2fc minus fm and 2fc plus fm: if you expand the 1 by 2 m(t) cos 2Wct term you get these two frequency components.
(Refer Slide Time: 38:57)

However, normally fc is much greater than fm. As a consequence, the spectral components from 2fc minus fm to 2fc plus fm can be very easily removed by using a low pass filter. That means if you use low pass filtering you get back only the baseband signal. This approach is known as synchronous detection: you multiply the modulated signal a second time by the carrier and get back the baseband signal after filtering out the high-frequency components around 2fc. There is, however, one important limitation of synchronous detection: the locally generated carrier cos Wct has to be precisely synchronous with the transmitted carrier.

(Refer Slide Time: 42:05)


So the synchronous detection approach is straightforward, but it has the disadvantage that the carrier signal used for the second multiplication has to be precisely synchronous; there should not be any phase difference. If there is a phase difference, then after multiplication the signal that you get will not be the baseband signal.

The question is, how do you get the synchronous signal? You have to use costly hardware to generate a precisely synchronous carrier at the receiving end. That is why it is often better to transmit the normal Double Side Band modulated signal: if you receive the carrier, that carrier can be used to generate a synchronous local carrier, which can then be multiplied to get back the baseband signal.
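The multiply-and-low-pass idea can be sketched in a few lines. The moving average below stands in for a proper low pass filter, and the phase_error parameter illustrates why the local carrier must be synchronous: set it to π/2 and the recovered output collapses towards zero. All the frequencies are arbitrary illustrative choices.

```python
import math

fc, fm, fs = 2000.0, 50.0, 40000.0     # carrier, baseband and sampling rates (Hz)
N = 4000

def synchronous_detect(phase_error=0.0):
    t = [n / fs for n in range(N)]
    baseband = [math.cos(2 * math.pi * fm * ti) for ti in t]
    modulated = [b * math.cos(2 * math.pi * fc * ti) for b, ti in zip(baseband, t)]
    # multiply a second time with the (possibly mis-phased) local carrier
    mixed = [x * math.cos(2 * math.pi * fc * ti + phase_error)
             for x, ti in zip(modulated, t)]
    # crude low pass filter: moving average over one carrier period
    w = int(fs / fc)
    recovered = [sum(mixed[i:i + w]) / w for i in range(N - w)]
    return max(recovered)               # peak of the recovered baseband

print(synchronous_detect(0.0))          # about 0.5 = m(t)/2: baseband recovered
print(synchronous_detect(math.pi / 2))  # nearly 0: a 90-degree phase error destroys recovery
```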

However, if you do not send the carrier, that is, if you use DSBSC, then it is very difficult to generate the synchronous carrier at the receiving end; it is a difficult process. On the other hand, if we use the normal DSB signal with carrier, it can be recovered very easily by a simple circuit like this (Refer Slide Time: 41:40): one diode, a capacitor and a resistor, with one end grounded. You apply the modulated signal here and you get back the baseband signal across the RC combination. This type of simple circuit can be used when the full DSB (normal AM) signal is transmitted; there is no need for synchronous detection.

(Refer Slide Time: 43:02)

A diode, resistor and capacitor combination can be used to get back the original signal. The values of R and C are chosen in such a way that the output follows the envelope of the carrier; that is, you get this particular curve, which is essentially the baseband signal. Since fc is much higher than fm (Refer Slide Time: 42:55), the output is quite smooth and you get the baseband signal recovered by this simple circuit. This is known as envelope detection with the help of a diode, and it is commonly used in many situations.
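The diode detector can be modelled very roughly in software: the diode passes only the positive half of the waveform, and the RC network is imitated by a peak follower that decays exponentially between carrier peaks. The time constant of about 1 ms is only an illustrative choice, long compared with the carrier period and short compared with the modulation period.

```python
import math

fc, fm, fs = 10000.0, 100.0, 200000.0   # carrier, modulating and sampling rates (Hz)
Ec, m = 1.0, 0.6
decay = math.exp(-1.0 / (fs * 0.001))   # RC ~= 1 ms discharge between samples

t = [n / fs for n in range(4000)]
am = [Ec * (1 + m * math.cos(2 * math.pi * fm * ti)) * math.cos(2 * math.pi * fc * ti)
      for ti in t]

envelope, v = [], 0.0
for x in am:
    rectified = max(x, 0.0)             # the diode conducts only on positive swings
    v = max(rectified, v * decay)       # capacitor charges to peaks, discharges through R
    envelope.append(v)

# apart from a little carrier-frequency ripple, the output swings between
# roughly Ec*(1+m) = 1.6 and Ec*(1-m) = 0.4, i.e. it follows the envelope
print(max(envelope), min(envelope[500:]))
```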

Of course, the modulated signal received at the receiving end is greatly attenuated. The signal travels a long distance and the received signal strength falls off roughly as the square of the distance, so if the receiver is far away the signal is highly attenuated. Not only that, the atmosphere is always generating some noise from lightning, sparks and various other atmospheric disturbances, so at the receiving end the signal to noise ratio is poor and the signal level is very low.

Moreover, there may be many other channels adjacent to the signal, because some kind of frequency division multiplexing is used; a number of channels are transmitted very close together, side by side, and in such cases recovering the desired signal may be a little difficult.

(Refer Slide Time: 45:01)

So the signal is highly attenuated and there are many channels adjacent to it. The signal therefore has to be amplified before you can do detection, and the noise has to be removed by suitable filtering. For this purpose one approach that is commonly followed is known as the superheterodyne approach.

Let me explain the superheterodyne approach with the help of the block diagram that is commonly used.
(Refer Slide Time: 45:10 - 48:22)

Here you have the complete receiver circuit (Refer Slide Time: 45:16). There is an antenna, where you receive a signal of the order of microvolts or less, followed by an RF amplifier. The RF amplifier is a tuned amplifier, tuned to the carrier frequency, which amplifies the RF carrier.

In the superheterodyne technique you then use a local oscillator of frequency fosc, which is greater than the RF frequency; that is why it is called superheterodyne, for if the oscillator frequency were less than the RF it would not be superheterodyne (Refer Slide Time: 46:47). This local oscillator output is mixed with the amplified RF signal, which, as we have seen, contains the carrier and the two side bands. When you mix the two you get components at the difference and sum frequencies, fosc minus fc and fosc plus fc, and the difference fif equal to fosc minus fc is taken as the intermediate frequency.

What you do next is filter out that intermediate frequency with the help of the intermediate frequency amplifier: it not only filters out this frequency component but also amplifies it, so it does the filtering and the amplification together. The output is then applied to a detector, which generates the baseband signal, and the remaining radio frequency noise is filtered out with a low pass filter as shown here. After detection and low pass filtering you get back the audio signal, which is then amplified by an audio amplifier and applied to a loudspeaker.

This is a typical AM receiver that we use in our houses; this is a common household item.

(Refer Slide Time: 48:41)

The superheterodyne approach provides a number of benefits. First of all it improves adjacent channel selectivity. It provides the necessary gain, because amplification is done at different stages: the signal can be of the order of microvolts here, millivolts at the detector, and of the order of volts at the output, so an overall gain of the order of a million is provided as the signal goes from RF to audio. It also provides a better signal to noise ratio because of the filtering, and the tuned filtering accepts only the desired carrier frequency, so noise is rejected and we get a good quality signal. This is the technique commonly used in popular AM receivers.
(Refer Slide Time: 50:12)

We have discussed the amplitude modulation technique: what the modulation index is, the frequency spectrum, the power required for transmission, and how the signal can be recovered. In the next lecture we shall discuss angle modulation; as I have mentioned, it has two different versions, frequency modulation and phase modulation, and this is how it will look: there is no change in amplitude, but the frequency is modulated.

(Refer Slide Time: 50:15)

Here are the review questions based on this lecture.


(Refer Slide Time: 50:35)

1) What are the benefits of analog modulation techniques?

2) What are the possible analog to analog modulation techniques?

3) What is the bandwidth requirement of an amplitude modulated signal?

4) What is Single Side Band transmission? What are the advantages of SSB transmission?

5) Why is synchronous detection not commonly used to recover the baseband signal?

Now it is time to give the answers of the previous lecture.


(Refer Slide Time: 51:31)

1) Why is B8ZS coding preferred over Manchester coding for long distance communication?

B8ZS encoding is preferred over Manchester encoding because B8ZS requires less bandwidth: its bandwidth, as we have seen, is comparable to that of the baseband signal, whereas Manchester encoding has a bandwidth of almost twice the baseband signal. Since B8ZS has the lower bandwidth, it is preferred for long distance communication.
(Refer Slide Time: 52:27)
2) Why is it necessary to limit the bandwidth of a signal before performing sampling?

It is necessary to limit the bandwidth of a signal before sampling so that the basic requirement of the sampling theorem, namely that the sampling rate should be at least twice the maximum frequency component of the signal, is satisfied. This is known as the Nyquist rate, as I have already discussed. If it is violated, the original signal cannot be recovered from the sampled signal; it suffers a kind of distortion known as aliasing error.

(Refer Slide Time: 53:05)

3) Distinguish between PAM and PCM signals.

In order to convert analog data to a digital signal, sampling is first done on the analog data using a sample and hold circuit. The output of the sample and hold circuit is known as the PAM signal. The PAM signal is then converted to a PCM signal: an analog to digital converter is used to quantize the PAM samples, and an encoder then generates the PCM signal, after which line encoding is done. Thus PAM is essentially an intermediate step in generating the PCM signal.
(Refer Slide Time: 54:02)

4) What is quantization error? How can it be reduced?

To convert an analog signal to a digital signal, the analog signal is first sampled and each of the analog samples must be assigned a binary code. In other words, each sample is approximated by being quantized into one of a set of binary codes, as we have already seen. As the quantized values are only approximations, it is impossible to recover the original signal exactly, and this leads to quantization error. Quantization error can be minimized by using non-linear encoding, as we have already discussed.

(Refer Slide Time: 54:21)


5) Explain how and in what situation DPCM that is differential PCM performs better than
PCM?

DPCM performs better when the input is slowly changing, as in the case of a voice signal. As you have already seen, whenever the signal is slowly changing, differential PCM requires a very small number of bits; and as we know, delta modulation is the extreme case where you require only one bit. However, if the signal is not slowly changing then this approach cannot be used. So only when the signal is slowly changing does DPCM give better performance than PCM. So with this we come to the end of today’s lecture, thank you.
Data Communication
Prof. Ajit Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture No # 10
Transmission of Analog Signal-II

Hello and welcome to today’s lecture on analog transmission. This is the second lecture on this topic, and in this lecture we shall cover the following points. I shall give a brief introduction to what we discussed in the last lecture on the same topic.

(Refer Slide Time: 01:14)

After that I shall introduce to you the basic concepts of angle modulation and essentially
we will see angle modulation involves two types of modulation; frequency modulation
and phase modulation. Then I shall discuss the relationship between frequency
modulation and phase modulation. We shall then consider the bandwidth and power
required for FM and PM transmission. Then we shall switch to a different topic, the basic concepts of digital data to analog signal conversion, where there are three different types. The first one will be Amplitude Shift Keying; we shall consider in detail the various issues related to Amplitude Shift Keying, particularly the frequency spectrum of the ASK signal and the power requirement for transmission of the ASK signal, and we shall also discuss Frequency Shift Keying and Phase Shift Keying. Then we shall end our lecture with applications of the different types of digital to analog signal conversion techniques. On completion of this lecture the students will be able to explain the basic concepts of angle modulation, distinguish between FM and PM, particularly their inter-relationship, explain the basic concepts of digital data to analog signal conversion, explain different aspects of the ASK, FSK, PSK and QAM conversion techniques, and explain the bandwidth and power requirement for the transmission of ASK, FSK, PSK and QAM signals.

(Refer Slide Time: 03:00)

As I mentioned in the last lecture we have discussed about the amplitude modulation. We
have started our discussion on analog modulation techniques and I have discussed in
detail the amplitude modulation. And in this lecture I shall cover frequency modulation
and phase modulation.

(Refer Slide Time: 03:32)


(Refer Slide Time: 04:54)

First let us start with frequency modulation. In frequency modulation the modulating signal em(t) is used to vary the carrier frequency, and as the name suggests, the change in frequency is proportional to the modulating voltage: delta f(t) = k em(t), where k is a constant and em(t) is the modulating signal. This k is known as the frequency deviation constant and it is expressed in Hz per volt (it may be kHz per volt, depending on the size of the frequency deviation). The instantaneous frequency of the modulated signal is fi(t) = fc + k em(t), where fc is the carrier frequency (here written in lower case). So what we find is that the frequency varies around the carrier frequency in proportion to the modulating signal. This is the basic idea behind frequency modulation.

Let us take a sinusoidal frequency modulated signal, in which case the modulating signal em(t) is a sinusoidal signal: em(t) = Em cos(2 pi fm t), where Em is the peak value (Refer Slide Time: 5:30). As I have mentioned, the instantaneous frequency is fi(t) = fc + k em(t); substituting the value of em(t) we get fi(t) = fc + k Em cos(2 pi fm t), which can be expressed as fi(t) = fc + delta f cos(2 pi fm t), where delta f = k Em is the frequency deviation with respect to the carrier frequency. Therefore the modulated signal becomes s(t) = Ec cos(2 pi fc t + phi(t)) (Refer Slide Time: 6:33), and since the phase is the integral of the frequency deviation, s(t) = Ec cos(2 pi fc t + 2 pi delta f ∫(0 to t) cos(2 pi fm t) dt).
(Refer Slide Time: 07:54)

Now, carrying out the integration, the signal can be represented as s(t) = Ec cos(2 pi fc t + (delta f / fm) sin(2 pi fm t)). So here we see that the amplitude does not change but the frequency is changing, and the change in frequency is characterised by the modulation index beta, given by beta = delta f / fm; it is related to the modulating frequency fm. Therefore the modulated signal can be represented by s(t) = Ec cos(2 pi fc t + beta sin(2 pi fm t)). This is the representation of the modulated signal whenever you do frequency modulation, and as you can see beta is the modulation index.
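To make this concrete, here is a minimal sketch in Python using numpy (the amplitude, carrier frequency, modulating frequency and beta below are illustrative assumptions, not values taken from the lecture) that generates a sinusoidal FM waveform directly from s(t) = Ec cos(2 pi fc t + beta sin(2 pi fm t)):

import numpy as np

Ec, fc, fm, beta = 1.0, 100.0, 5.0, 2.0     # assumed example values
t = np.linspace(0.0, 1.0, 10000)            # one second of time samples

# FM waveform: constant amplitude, the message rides in the phase
s_fm = Ec * np.cos(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t))

# Instantaneous frequency fi(t) = fc + delta_f * cos(2 pi fm t), with delta_f = beta * fm
delta_f = beta * fm
fi = fc + delta_f * np.cos(2 * np.pi * fm * t)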

Now you may be asking what the bandwidth of this modulated signal is. In the last lecture we saw that for amplitude modulation, whenever we modulate with a sinusoidal wave, the bandwidth is equal to twice the bandwidth of the modulating signal, while the amplitude varies.

Let us see what happens in this case?

Because of the presence of this term (Refer Slide Time: 8:39) we find that the modulated signal will contain various frequency components, fc ± fm, fc ± 2fm and many others, so the bandwidth will be much higher, and the exact expression for the bandwidth is not very simple; you have to expand it using complicated mathematical expressions.
(Refer Slide Time: 08:43)

Instead of doing that we can represent the bandwidth by a simplified formula: BT = 2(beta + 1) Bm, where BT is the bandwidth of the modulated signal and Bm is the bandwidth of the modulating signal. As we have seen, beta = delta f / Bm, which can also be written as nf Am / (2 pi Bm), so the bandwidth can equivalently be expressed as BT = 2 delta f + 2 Bm. What we observe from this simple relationship is that FM requires a much higher bandwidth than the amplitude modulated signal; the bandwidth requirement for FM is much higher. The peak deviation is delta f = (1 / 2 pi) nf Am Hz, where Am is the maximum value of m(t). The bandwidth also depends on the value of beta, and for different values of beta the bandwidth will be different.
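As a quick numerical check of this relationship (a sketch only; the beta and Bm values below are assumed for illustration):

def fm_bandwidth(beta, Bm):
    # Carson's rule approximation: BT = 2 * (beta + 1) * Bm
    return 2 * (beta + 1) * Bm

print(fm_bandwidth(4, 15e3))   # beta = 4, Bm = 15 kHz -> 150000.0 Hz, i.e. 10 * Bm
print(fm_bandwidth(1, 15e3))   # beta = 1              -> 60000.0 Hz, still wider than AM's 2 * Bm = 30 kHz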

So what happens in this particular case?

Compared to the AM signal: in the case of AM we have seen that the bandwidth remains BT = 2Bm, but this is not true here.
(Refer Slide Time: 12:33)

Another parameter was there: by increasing the modulation index we found that the signal gets distorted in the case of AM. Does that happen in this case? The answer is no. In this case, as the modulation index beta increases, the effect on the modulated signal is that the bandwidth increases, but distortion does not take place. That means a higher value of beta will lead to a higher bandwidth of the modulated signal. In the case of FM, for a typical value of beta, the bandwidth requirement comes to about 10 Bm, where Bm is the bandwidth of the modulating signal.

That means, suppose we are trying to send an audio signal using FM which has a bandwidth of, let us assume, 15 kHz; then the bandwidth of the modulated signal will be 150 kHz. In such a case we find that for FM transmission the bandwidth requirement is much higher. Let us compare the two cases, AM and FM. In the case of AM, for a 15 kHz audio signal the bandwidth requirement will be 30 kHz, so with some gap on either side you can have another channel at an interval of 40 or 50 kHz; the channels can be very closely placed. On the other hand, in the case of FM the stations have to be at least 200 kHz apart. That means, because of the higher bandwidth requirement of the FM signal, the number of channels that we can have in FM is limited.
(Refer Slide Time: 13:22)

Let us look at the power. As the amplitude remains constant, the total average power is equal to the unmodulated carrier power; as we can see here, the power is Ac squared by 2. That means although increasing the modulation index increases the bandwidth of the modulated signal, it does not affect the power, and as a consequence the transmission power for the FM signal is much lower. In the case of AM we have seen that half of the power is required for transmission of the carrier, and of the remaining half, one part goes to the upper side band and the other to the lower side band. But here we see the power requirement is equal to the unmodulated carrier power, so you require lesser power for transmission. However, this is possible at the expense of higher bandwidth. The conclusion is that FM requires higher bandwidth but lesser power for transmission compared to AM.
(Refer Slide Time: 15:20)

Let us now look at phase modulation. In the case of phase modulation we have seen that the modulated signal can be represented by s(t) = Ac cos(wc t + phi(t)), where the phase phi(t) is proportional to the modulating signal.

The angle wc t + phi(t) undergoes a modulation around the angle theta = wc t; that means around this phase there is a change of phase which is proportional to the modulating signal, so the signal is an angle (angular velocity) modulated signal. When the phase is proportional to the modulating signal, that is phi(t) = np m(t), we call it phase modulation, and here np is the phase modulation index, just like nf in the case of frequency modulation.

One interesting observation is that FM and PM are very closely related. Let us see how they are related.
(Refer Slide Time: 15:20)

Here we find that if the modulating signal m(t) (Refer Slide Time: 15:40) is first integrated and then passed through a phase modulator, we get a frequency modulated signal. On the other hand, if the modulating signal is passed through a differentiator and then frequency modulated, we get a phase modulated signal. So you see we can get a frequency modulated signal by using a phase modulator, provided we first integrate the modulating signal; and the other is also possible, we can get a phase modulated signal with the help of a frequency modulator, provided the modulating signal is first differentiated. Let us see how exactly this happens.

Let a phase modulated signal be represented by s(t) = Ec cos(wc t + k' m(t)), where k' is a constant. Now let us assume that m(t) is derived as an integral of the modulating signal e(t), so that m(t) = k'' ∫(0 to t) e(t) dt. Then, with k = k' k'', we get s(t) = Ec cos(wc t + k ∫(0 to t) e(t) dt); that is, m(t) has been replaced by this value (Refer Slide Time: 17:29), it has been substituted in the previous expression and we get this expression.
(Refer Slide Time: 18:17)

Now the instantaneous angular frequency of the modulated signal s(t) is 2 pi fi(t), obtained by differentiating the total phase with respect to time. So if we differentiate the phase we get the instantaneous frequency fi(t) = fc + (1 / 2 pi) k e(t). We find that this waveform is therefore modulated in frequency, because of the presence of e(t), the modulating signal varying with time; the frequency deviation is taking place. So whenever you do phase modulation, frequency modulation also takes place.
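The derivation can be checked numerically with a small Python sketch (using numpy; the carrier frequency, the constant k and the modulating signal e(t) are assumed for illustration): phase modulating the integral of e(t) and then differentiating the total phase gives back exactly the instantaneous frequency fc + k e(t) / (2 pi).

import numpy as np

fc, k = 50.0, 40.0                        # assumed carrier frequency and constant k
t = np.linspace(0.0, 1.0, 100000)
dt = t[1] - t[0]
e = np.cos(2 * np.pi * 3.0 * t)           # assumed modulating signal e(t)

m = np.cumsum(e) * dt                     # m(t): running integral of e(t)
total_phase = 2 * np.pi * fc * t + k * m  # phase of s(t) = Ec cos(wc t + k * integral of e)
s = np.cos(total_phase)

fi_numeric = np.gradient(total_phase, dt) / (2 * np.pi)   # differentiate the phase
fi_formula = fc + k * e / (2 * np.pi)
print(np.max(np.abs(fi_numeric - fi_formula)))            # nearly zero, apart from numerical error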

In summary, these two together are referred to as angle modulation. In other words, you will find that the terms phase modulation, frequency modulation and angle modulation are used interchangeably, because if you look at the waveform you will not be able to find any distinction.
(Refer Slide Time: 18:49)

As you can see this waveform we have obtained by angle modulation. By looking at this
you cannot tell whether it has been obtained by using frequency modulation or by using
phase modulation. Let us see all the four cases together.

(Refer Slide Time: 19:22)

So here is the signal to be modulated, the modulating signal; this is the carrier signal; and here is the amplitude modulated signal (Refer Slide Time: 19:20), where you see the amplitude is changing and the envelope corresponds to the modulating signal. Here we have got the frequency modulated signal and here the phase modulated signal. We see that the nature of these two signals is essentially the same; of course there is some difference in phase between the two, but otherwise they appear identical and their characteristics are the same. So the characteristics of the phase modulated signal and the frequency modulated signal are identical: the bandwidth requirement and the power requirement for transmission are the same for both. That is why together they are referred to as angle modulation.

With this we now change gears and move on to digital data to analog signal conversion.

There are many situations in which you have to convert digital data to an analog signal, because we have to transmit the signal through an analog transmission medium; an analog transmission medium means it is bandpass in nature, and there are many other advantages as well when we send data in analog form.

(Refer Slide Time: 20:44)

That means when the signal is transmitted in analog form it has got many advantages; we have already discussed them. Also, the medium that is available may be analog in nature, for example a telephone line. If we want to use a telephone line for transmission of data then we have to convert the data into an analog signal, because the telephone line has been designed to carry analog voice signals. That is the basis for this digital data to analog signal conversion. Let us see how it happens.
(Refer Slide Time: 22:22)

So here we see that you have got the digital data, in terms of 1s and 0s, which is converted into a signal; then we get the analog signal. According to this data, modulation has been done to get the analog signal. Here you can see the analog signal: this portion corresponds to logic bit 1 (Refer Slide Time: 22:10), this corresponds to 0, this corresponds to 1, this corresponds to 0, this corresponds to two 1s, then two 0s, so this corresponds to the data that we are transmitting. But instead of sending the bit directly we are sending an analog signal; for 0, in this example, nothing is sent. This is your analog signal.

There are three distinct types of digital to analog modulation techniques, known as Amplitude Shift Keying, Frequency Shift Keying and Phase Shift Keying. And as we shall see, Amplitude Shift Keying and Phase Shift Keying can be combined to give another type of modulation technique known as Quadrature Amplitude Modulation, QAM. So we shall discuss all four types of modulation, ASK, FSK, PSK and QAM, in this lecture.
(Refer Slide Time: 22:55)

What is Amplitude Shift Keying?


Suppose the unmodulated carrier is represented by ec(t) = Ec cos(2 pi fc t); the modulated signal can then be written as s(t) = k em(t) cos(2 pi fc t). As we can see, the transmitted amplitude is proportional to the modulating signal. Since the modulating data can be either 0 or 1, the amplitude is A1 in one case and A2 in the other, and as you can see the carrier frequency fc is the same in both cases. So without changing the frequency we are simply changing the amplitude between these two values: for 1 we send one amplitude and for 0 we send another amplitude.

(Refer Slide Time: 24:33)


A special case of this is when A2 is equal to 0. That means whenever the data is 1 the amplitude of the modulated signal is A1, and when the data is 0 the amplitude of the modulated signal is 0. This special case is known as On-Off Keying or OOK. OOK, as we shall see, is susceptible to sudden gain changes. However, On-Off Keying has been found to be very suitable for transmission of data through optical fiber. So in optical fiber we can say that we use On-Off Keying: whenever the bit is 1 we send some light from a light emitting diode or laser diode, and when it is 0 no light is transmitted through the optical fiber. This is how the modulation is done. So this is the OOK technique; On-Off Keying is used for transmission of digital data through optical fiber.
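As a small illustration, here is a minimal Python sketch using numpy (the bit pattern, carrier frequency, sampling and amplitude levels are all assumed values) that generates an ASK waveform; setting A2 to 0 gives On-Off Keying:

import numpy as np

bits = [1, 0, 1, 1, 0, 0, 1]          # assumed data bits
fc = 8.0                              # assumed carrier frequency, in cycles per bit period
samples_per_bit = 100
A1, A2 = 1.0, 0.0                     # A2 = 0 -> On-Off Keying; A2 > 0 -> ordinary ASK

t = np.arange(len(bits) * samples_per_bit) / samples_per_bit        # time in bit periods
amplitude = np.repeat([A1 if b else A2 for b in bits], samples_per_bit)
s_ask = amplitude * np.cos(2 * np.pi * fc * t)                      # carrier keyed by the data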

(Refer Slide Time: 26:11)

So this is how it happens, as shown here. This is essentially On-Off Keying.

Now let us look at the frequency spectrum. If we look at the frequency spectrum you will find that the bandwidth is relatively small, comparable to that of an AM signal. In fact the bandwidth is decided by the bandwidth of the modulating signal. As we know, if it is a square wave sort of signal then we have the fundamental, the third harmonic and so on; apart from the carrier frequency we have fc + f0, the carrier plus the fundamental frequency of the signal, then the third harmonic fc + 3f0, and on the other side fc - f0, fc - 3f0 and so on. Of course the higher spectral components have lesser and lesser amplitude.
(Refer Slide Time: 26:54)

So if we look at the frequency spectrum of the ASK signal, you can see that if Nb is the baud rate of the binary signal, then the bandwidth of the modulated signal is BT = Nb: on either side of the carrier we have fc + Nb/2 and fc - Nb/2, so the bandwidth is equal to Nb. So we see that the bandwidth is not really very high whenever we transmit using ASK; it is very similar to the AM signal.

Now let us see Frequency Shift Keying.

(Refer Slide Time: 27:33)


In Frequency Shift Keying the frequency of the carrier is used to represent 1 and 0. That means we are using two different frequencies while the amplitude is kept constant: whenever we are sending binary 1 we use a carrier frequency fc1, and whenever we are sending binary 0 we use a carrier frequency fc2. It has been found that this is much less susceptible to noise and gain changes. One drawback of Amplitude Shift Keying is that the amplitude changes, and as a consequence noise may become so high that it is not possible to distinguish the lower and higher amplitude levels of the carrier signal (or the zero level from a slightly higher level); that is why the ASK signal is susceptible to noise and to gain changes of the circuit or the amplifier. But when we use Frequency Shift Keying it is much less susceptible to noise and gain changes, because we are sending two different frequencies; noise cannot change the frequencies, and the frequencies can be very easily detected at the other end.

Here is an example how exactly it happens.

(Refer Slide Time: 30:17)

We find that here is your data, 1011001, and this is one carrier with frequency fc1 (Refer Slide Time: 28:53) while this corresponds to the carrier fc2. So you see, whenever the bit is 1 we send the carrier fc1 and whenever it is 0 we send the carrier fc2; where there are two consecutive 1s we send fc1 for two bit periods, and for two consecutive 0s we send fc2 for two bit periods. So here you can see what the frequency shift keyed signal looks like; this is the Frequency Shift Keying signal that we receive at the receiving end, with the two different frequencies fc1 and fc2 sent alternately depending on whether the data is 1 or 0.
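A corresponding FSK sketch (Python with numpy; the bit pattern and the two carrier frequencies are assumed for illustration) simply switches between the two carriers according to the data; note that this simple version does not keep the phase continuous at bit boundaries:

import numpy as np

bits = [1, 0, 1, 1, 0, 0, 1]            # assumed data bits
fc1, fc2 = 10.0, 5.0                    # assumed carriers for bit 1 and bit 0
samples_per_bit = 200

t = np.arange(len(bits) * samples_per_bit) / samples_per_bit
freq = np.repeat([fc1 if b else fc2 for b in bits], samples_per_bit)
s_fsk = np.cos(2 * np.pi * freq * t)    # constant amplitude, the frequency carries the data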

Now the question is, what is the frequency spectrum? As we can see, this is equivalent to sending two ASK signals, one with carrier frequency fc1 and another with carrier frequency fc2. As a consequence the bandwidth of the frequency shift keyed signal is much higher: this corresponds to one ASK signal and this corresponds to another ASK signal, as if we are sending one ASK signal with carrier frequency fc1 (or f1, as it is written here) and another with carrier frequency f2. These two together lead to a much higher bandwidth for the Frequency Shift Keying signal, just as in the case of the FM signal compared to the AM signal.

So here is the frequency spectrum of the FSK signal. FSK may be considered as a combination of two ASK spectra centered around fc1 and fc2, as I mentioned.

(Refer Slide Time: 31:06)

So the bandwidth: here fc2 minus fc1 (Refer Slide time: 31:00) is the separation of the two carriers, and on either side you have Nb/2, so the total bandwidth is BT = (fc2 - fc1) + Nb. The bandwidth requirement is therefore higher in the case of the FSK signal.
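The two bandwidth expressions, BT = Nb for ASK and BT = (fc2 - fc1) + Nb for FSK, are easy to compare numerically; the baud rate and carrier frequencies below are assumed example values:

Nb = 2400.0                  # assumed baud rate of the modulating signal
fc1, fc2 = 1200.0, 2200.0    # assumed carrier frequencies for 1 and 0

bw_ask = Nb                  # ASK: Nb/2 on either side of the single carrier
bw_fsk = (fc2 - fc1) + Nb    # FSK: two ASK-like spectra centred on fc1 and fc2
print(bw_ask, bw_fsk)        # 2400.0 3400.0 -> FSK needs the wider band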

Now let us look at the third variation that is Phase Shift Keying.
(Refer Slide Time: 35:31)

In Phase Shift Keying, just like phase modulation, the phase of the carrier changes in proportion to the modulating signal. That means whenever the bit is 1 we send the carrier with one phase, and whenever it is 0 we send the carrier with another phase, as shown here: we send the carrier with a phase shift of pi, that is 180 degrees, for binary 1, and for binary 0 we send the carrier without any phase shift, that is a phase shift of 0. In the constellation diagram or phase state diagram shown here, the phase shift corresponding to 0 is 0 and for 1 the phase shift is 180 degrees. This is known as 2-PSK, that is 2-Phase Shift Keying; we are using two different phases, and that is how it has got the name 2-PSK.

As it is shown here (Refer Slide Time: 32:36), this is the signal corresponding to 0 and this is the signal corresponding to 1. So for 0 the phase is 0, we are starting at 0, while for 1 the phase is 180 degrees; this one is also a 1, so it starts with a phase of 180 degrees, and this is a 0, so it again starts with phase 0.

Therefore you can see that at the bit boundary, whenever the data changes from 0 to 1 or 1 to 0, there is an abrupt phase shift which can be detected at the receiving end, identifying which part corresponds to data 0 and which corresponds to data 1. This is how Phase Shift Keying is done, this is how a 2-PSK signal is generated and sent, and this is the time domain representation.

Now we can extend the basic concept of 2-PSK modulation to QPSK. In QPSK we send four different phases, and since there are four different phases it is possible to represent two bits with each signaling element. So whenever the phase is 0 degrees it corresponds to bits 0 0; that means each signaling element is able to carry two bits instead of one. If it is 90 degrees you are sending data bits 0 1, whenever it is 180 degrees we are sending data bits 1 0, and whenever the phase shift is 270 degrees the data bits are 1 1. So here is the phase state diagram: for 0 0 it is 0, for 0 1 it is 90 degrees, for 1 0 it is 180 degrees, and for 1 1 it is 270 degrees. So the phase shift occurs in multiples of 90 degrees in the case of QPSK.

(Refer Slide Time: 36:17)

You may be asking what advantage we have gained by this. Recall that there is a term known as baud rate: the baud rate is essentially the rate of signaling elements. Here you have one signaling element, here another, and here another. In the case of 2-PSK the baud rate and the data rate are the same, because for each data bit we have one signal element; that is why the baud rate and data rate are the same.

But in this particular case that is not true. Each signaling element, for example the one corresponding to phase 0, carries two bits, so here the baud rate is half the data rate; in other words the data rate is twice the baud rate. That means the number of signal elements is half the number of data bits: for every two bits we use one signal element, and as a consequence you are able to send a higher data rate with the same baud rate. That is the advantage of QPSK.
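The dibit-to-phase mapping just described can be written down directly; the sketch below (Python; the mapping follows the table in the lecture, everything else is an assumed illustration) groups the bit stream into pairs and emits one phase per signaling element, so the baud rate is half the bit rate:

qpsk_phase = {(0, 0): 0.0, (0, 1): 90.0, (1, 0): 180.0, (1, 1): 270.0}   # phases in degrees

def qpsk_phases(bits):
    # Two bits per signaling element: data rate = 2 x baud rate
    assert len(bits) % 2 == 0
    return [qpsk_phase[pair] for pair in zip(bits[0::2], bits[1::2])]

print(qpsk_phases([0, 0, 0, 1, 1, 0, 1, 1]))   # [0.0, 90.0, 180.0, 270.0]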

The idea can be extended further. Here, as you can see, is an 8-PSK signal, where the phase shift is in steps of 45 degrees. So for 0 0 0, 0 0 1, 0 1 0 and so on: 0 0 0 corresponds to 0 degrees, 0 0 1 corresponds to 45 degrees, and so on, as shown in this table.
(Refer Slide Time: 37:39)

So 0 1 0 is 90 degrees, 0 1 1 is 135 degrees, 1 0 0 is 180 degrees, 1 0 1 is 225 degrees, 1 1 0 is 270 degrees and 1 1 1 is 315 degrees.

So in this case we are able to send three bits per signaling element, and the data rate will be equal to three times the baud rate, because we send three data bits per modulated signal element; as a consequence, for the same baud rate we shall be able to send a higher data rate.

However, there is a limitation at the receiving end: the ability of the equipment to distinguish small differences in phase is limited, and as a consequence the potential bit rate is limited. In other words, the ability of the equipment to distinguish small phase shifts leads to a restriction on the maximum bit rate that can be transmitted over the medium. So what is the way out? The alternative is that apart from changing the phase we can also change the amplitude. In other words, we can combine changes in amplitude and changes in phase, leading to what is known as QAM, Quadrature Amplitude Modulation. It is a combined modulation technique, so QAM can be considered as a combination of ASK (Amplitude Shift Keying) and PSK (Phase Shift Keying).
(Refer Slide Time: 39:51)

For example, this is 8-QAM. In this case, as you can see, we have used two amplitude levels, this one and this one, and four phase values, 0, 90, 180 and 270 degrees; four phase values and two amplitude values give 4 into 2, which leads to 8-QAM. And here you have another constellation diagram or phase state diagram (Refer Slide Time: 39:51), where you can see we have again used two amplitudes and 1 2 3 4 5 6 7 8, that is 8 phases, so here it is possible to have 2 into 8 equal to 16-QAM. So we find that for each signal element there is a change in phase or amplitude. That means you can have two different amplitudes and eight different phases, leading to four bits per signal element, so we can send data at a higher rate; this is 16-QAM. In this way you can keep on extending the number of phase changes and the number of amplitude changes.

However, you will find that the number of amplitude levels used is much lower than the number of phases. In other words, detection of phase changes is easier than detection of amplitude levels, because amplitude levels usually get corrupted by noise while phase changes usually do not. That is why normally the number of amplitude levels is less than the number of phase values. Here you can see the possible bit rates for different baud rates.
(Refer Slide Time: 44:45)

For ASK, FSK and 2-PSK we find that the baud rate and bit rate are the same. For QPSK, as I have explained, if the baud rate is N then the bit rate is 2N. For 8-PSK, for a baud rate of N you get a bit rate of 3N; for 16-QAM, for a baud rate of N you get a bit rate of 4N. Similarly, for 32-QAM a baud rate of N gives a bit rate of 5N, for 128-QAM a baud rate of N gives a bit rate of 7N, and for 256-QAM a baud rate of N gives 8N.
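The whole table boils down to the relation bit rate = (bits per signal element) x (baud rate); a small sketch (Python, with N = 2400 as an assumed example baud rate):

bits_per_element = {"ASK": 1, "FSK": 1, "2-PSK": 1, "QPSK": 2, "8-PSK": 3,
                    "16-QAM": 4, "32-QAM": 5, "128-QAM": 7, "256-QAM": 8}

N = 2400   # assumed baud rate
for scheme, k in bits_per_element.items():
    print(scheme, k * N, "bps")   # e.g. 16-QAM -> 9600 bps, 256-QAM -> 19200 bps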

In other words the data rate can be eight times that of baud rate and this has very good
application in modem. Modem stands for modulator demodulator. It converts digital
signal to analog signal using either ASK, FSK, PSK or QAM that means modulation is
done at the transmitting end then at the receiving end by demodulation you get back the
original data so that is what is done in modem.

Now let us consider the bandwidth of a typical telephone line.


(Refer Slide Time: 43:23)

Here we find that for voice the bandwidth is 3300 minus 300, that is 3000 Hz. But for data the usable bandwidth is restricted, from 600 to 3000 Hz, so you get 2400 Hz. This is the bandwidth that you get for data transmission through a telephone line. So when the bandwidth of the channel is only 2400 Hz, what data rate is possible for transmission through the telephone line?

Let us now go back to the previous table. If we use 16-QAM then the data rate can be 4N; with a baud rate of 2400, that is 2400 into 4, we get 9600 bps. Hence by using 16-QAM the data rate possible for transmission using a modem is 9600 bps. On the other hand, if you go for 256-QAM, the data rate possible is 2400 into 8, that is 19200 bps. Thus it is possible to transmit at a data rate much higher than the bandwidth of the channel by using a suitable modulation technique like QAM, which is a combination of ASK and PSK. Hence we see that this modulation technique finds very good use in modems, which we normally use for connecting our computer from home to office or from home to the internet service provider; modems have many applications.

Therefore this QAM digital to analog modulation technique has found extremely good application in modems. Now it is time to summarize what we have discussed in the last two lectures. We have discussed the transmission of analog signals, where we use different modulation techniques such as amplitude modulation, frequency modulation and phase modulation.

We have seen that in the case of amplitude modulation there is a maximum limit on the modulation index: its value has to be at most 1. If the modulation index is greater than 1 then the signal gets distorted; in other words, at the other end it is not possible to recover the signal. Moreover, the power required for transmission of the amplitude modulated signal is higher. The frequency modulated and phase modulated signals, which together are known as angle modulation, require lesser power, however at the expense of higher bandwidth. FM gives you a very high quality signal because it is not much affected by noise, whereas amplitude modulated signals get corrupted by noise and disturbances; from our common experience we find that whenever it is cloudy or raining, or there is lightning, there are lots of disturbances in AM reception. The FM signal, on the other hand, does not pick up that much noise because the frequency is not affected. However, the bandwidth requirement is much higher for FM signals.

Coming to digital data to analog signal conversion, we have discussed ASK, FSK, PSK and also QAM. We have found that ASK is simple, and in particular On-Off Keying is very attractive; it has found application in optical communication, where the optical signal is modulated using OOK, and its bandwidth requirement is less. On the other hand, for FSK the bandwidth requirement is higher compared to the ASK and PSK signals. We have seen how PSK and ASK can be combined to send more bits per signal element, giving a higher bit rate for transmission of high speed data through the modem; we have also seen that using a lower bandwidth channel we can send data at a higher speed. So with this we conclude our lecture. However, there are some review questions.

Review Questions:

(Refer Slide Time: 48:34)

1) Which modulation technique is used in optical communication?


2) What are the three modulation techniques possible in modems?
3) Why is PSK preferred as the modulation technique in modems?
4) Out of the three digital-to-analog modulation techniques, which one provides higher
data rate?

These questions will be answered in the next lecture. Now it is time to give the answer to
the questions of lecture-9.
(Refer Slide Time: 49:08)

1) What are the benefits of analog modulation techniques?

Answer: Frequency translation achieved using analog modulation provides the following benefits. As I have discussed in detail, it allows you to have an antenna of practical size, it lets you use the advantages of narrowbanding, and it opens up the possibility of sending more than one signal through a medium simultaneously using the Frequency Division Multiplexing (FDM) technique. Later on we shall discuss multiplexing in detail, where we shall see the use of Frequency Division Multiplexing.

(Refer Slide Time: 49:56)


2) What are the possible analog-to-analog modulation techniques?

As I have already discussed there are three possible approaches particularly modifying
the three vital parameters like amplitude, frequency and phase and the modulation
techniques are known as amplitude modulation, frequency modulation and phase
modulation. However, as you have seen that frequency and phase modulations are very
similar in some ways and that’s why they are known as angle modulation.

(Refer Slide Time: 50:29)

3) What is the bandwidth requirement of Amplitude Modulated signal?

Answer: The bandwidth requirement of the Amplitude Modulated signal is 2B, where B is the bandwidth of the analog signal. So we find that the Amplitude Modulated signal, or AM signal, requires lesser bandwidth for transmission.
(Refer Slide Time: 51:18)

4) What is SSB transmission? Why is SSB transmission used?

Answer: In SSB (Single Side Band) transmission only one of the two side bands is sent through the medium; the other side band is not sent and the carrier is not transmitted either.

SSB transmission is used for the following two advantages: lower bandwidth, half the bandwidth of double side band transmission, and lower power, as it requires about one-sixth of the power required for Double Side Band transmission.
(Refer Slide Time: 51:51)

5) Why is synchronous detection not commonly used to recover the baseband signal?

Answer: For synchronous detection it is essential to regenerate a perfectly synchronous carrier at the receiving end; it has to be perfectly synchronous in terms of phase and frequency, which is very difficult to do, and as this requires complex hardware it is not commonly used. On the other hand, we have already discussed a very simple detector circuit using a diode and RC circuit elements, and that kind of detector is commonly used in AM receivers. So that answers all the questions that were given in the last lecture, and with this we come to the end of today’s lecture, thank you.
Data Communications
Prof. A. Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture-11A
Multiplexing

Hello viewers, welcome to today’s lecture on multiplexing techniques. In this lecture we shall discuss various issues of multiplexing.

(Refer Slide Time: 1:05)

First we shall discuss why multiplexing? Why do you really need multiplexing?

Then I shall introduce to you the basic concepts of multiplexing. As we shall see, multiplexing can be divided into two different types, frequency division and time division. The frequency-based technique has two different variations, Frequency Division Multiplexing and Wavelength Division Multiplexing, while Time Division Multiplexing again has two different types, synchronous and asynchronous, which we shall discuss in detail.

Finally we shall discuss about inverse Time Division Multiplexing which is also in use
today. And on completion of this lecture the students will be able to explain the need for
multiplexing, they will be able to distinguish between the multiplexing techniques, they
will be able to explain the key features of Frequency Division Multiplexing and Time
Division Multiplexing, they will be able to distinguish between synchronous and
asynchronous Time Division Multiplexing and explain the concept of inverse TDM.
(Refer Slide Time: 2:23)

Why multiplexing?
It is based on several observations. The first is that most individual data communication devices typically require modest data rates. For example, for voice the bandwidth requirement is only about 4 kHz, and when you convert it into digital form it comes to 64 kbps, which is also not very high. Similarly, the average data rate required by an individual user is not very high.

On the other hand, the communication media usually have much higher bandwidth. For example, coaxial cable, optical fiber and microwave can provide several megabits per second, even hundreds of megabits per second, so the communication media provide much higher bandwidth while individual users have less data to send. As a consequence, two communicating stations do not utilize the full capacity of a data link. That means if you have two users, say one and two, linked by some kind of communication link, maybe optical fiber, coaxial cable or whatever it may be, they are not able to utilize the link capacity fully.
(Refer Slide Time: 4:09)

Another observation is that the higher the data rate, the more cost effective the transmission facility is. For example, if the data rate is small then the cost per byte or per kilobyte is more, and if the capacity is large, say several gigabytes, then the cost per byte or per kilobyte is much less.

(Refer Slide Time: 5:06)

That means if we use a link of higher capacity then we get it at a much lower cost; that is the observation. Based on these observations it has been found that we can use a technique known as multiplexing. Let us see how. When the bandwidth of a medium is greater than that of the individual signals to be transmitted, the medium can be shared by more than one channel of signals by using multiplexing.

Essentially what we are trying to do is share the channel capacity, or the bandwidth of a particular medium, among several signals from several users. For efficiency, the channel capacity can be shared among a number of communicating stations; that means we have to utilize the channel capacity fully, and for that purpose multiplexing can be done. As you know, the most common use of multiplexing is in long haul communication using coaxial cable, microwave and optical fiber. For long distance communication we normally use optical fiber, terrestrial microwave or satellite microwave, and all these transmission media have high bandwidth which has to be shared by using multiplexing.

(Refer Slide Time: 7:17)

I shall start with a very simple example. A telephone line, which has a usable bandwidth of only 2400 Hz (from 600 to 3000 Hz), is used for data transfer between two stations. If your data rate is small enough to be satisfied by 1200 Hz, then you can divide the band into two parts of 1200 Hz each, say 600 to 1800 Hz and 1800 to 3000 Hz. The first 1200 Hz can be used to transfer data from user A to user B, that is in one direction, and the other can be used for transfer of data from user B to A. So you can achieve bidirectional data transfer with the help of a single channel; the only thing you have to do is divide the bandwidth so that each part is used for one direction of transfer, and that can be done by using multiplexing, for example Frequency Division Multiplexing (FDM).
(Refer Slide Time: 7:56)

Let us see the basic concept of multiplexing. What we do is use a device known as a multiplexer, which combines the signals coming from n channels, numbered 1 to n. The signals coming from the n channels are combined together and sent through a single medium. At the other end there is a demultiplexer, which performs the reverse operation: it separates out the signals coming through the common channel and sends them to n different channels, again 1 to n. This is the basic operation; you require a multiplexer and a demultiplexer, and together they do the operations of combining and separating out. This can be done in two ways.
(Refer Slide Time: 9:05 - 10:18)

The first one is Frequency Division Multiplexing. Here, as you can see, the signals coming from n channels, 1, 2 up to n, are multiplexed and sent simultaneously through a single channel. Through the same medium we can send n different signals, and we are doing Frequency Division Multiplexing: the signals coming from the different sources are frequency translated to different frequency bands.

At the other end the demultiplexer receives the signals and separates out the channels using suitable devices. So here, as you can see, all the signals are sent in parallel; from the timing point of view all these signals are combined to form a composite signal. That composite signal is demultiplexed at the other end by a suitable Frequency Division Multiplexing technique.

The second approach is Time Division Multiplexing. Here as you can see the signals are
sent in such a way that in different time slots we are sending signals from channel one
then we are sending signal from channel two then channel three and so on and this is
being repeated. So in Time Division Multiplexing signals are multiplexed in terms of
time. So, for some time duration a particular signal is sent then another signal is sent for
another duration then another signal is sent for another duration as you see here (Refer
Slide Time: 11:09) then at the other end the demultiplexer separates out signals from
channel one to channel two and so on. These are the basic approaches.
(Refer Slide Time: 11:23)

So far as Frequency Division Multiplexing is concerned, we use analog signaling; the analog signal is used in two different techniques, FDM and WDM. As we shall see, Time Division Multiplexing is done using digital signaling, and that is why it is called TDM; ATM here stands for Asynchronous Time Division Multiplexing. So we can say that analog signaling is used for Frequency Division and Wavelength Division Multiplexing, and digital signaling is used for Time Division and Asynchronous Time Division Multiplexing.

Let’s see the Frequency Division Multiplexing technique in detail.

(Refer Slide Time: 12:35 - 13:15)


What you do first is divide the available bandwidth of a single physical medium into a number of smaller independent frequency channels. That is the first thing that is done. Then, using modulation, the independent message signals are translated into different frequency bands; as you have seen, this can be done by modulation.

The third thing that is done is that all the modulated signals are combined in a linear summing circuit to form a composite signal for transmission through the medium, which has the higher bandwidth. Finally, the carriers used to modulate the individual message signals are called sub-carriers, as we shall see; so here we are not using a single carrier frequency but a number of carrier frequencies.

(Refer Slide Time: 13:33)

And as I mentioned we can use one of the three different techniques; Amplitude
Modulation, Frequency Modulation or Phase Modulation. Thus one of the three
techniques can be used to generate analog signal and to translate the individual
frequencies to different frequency bands by using different carrier frequencies.
(Refer Slide Time: 15:07)

On the other hand, if your data is digital in nature then you can use ASK, FSK, PSK, or a combination of ASK and PSK known as QAM, to generate the analog signal. So, depending on whether your data is analog or digital, you will be using amplitude modulation, frequency modulation or phase modulation if your data is analog in nature, and ASK, FSK, PSK or QAM if your data is digital in nature.

So the basic Frequency Division Multiplexing operation is explained in this diagram. Here you have got source one, source two and source n, that is 1 to n sources. The frequency band of each of the signals is shown here; that means it is like this in the frequency domain (Refer Slide Time: 15:00).

Now, as you modulate a source using a sub-carrier of frequency fi, we know that after amplitude modulation the spectrum becomes like this: around fi the band becomes double the original bandwidth, and the frequency of the carrier is fi. We modulate the different signals with different carrier frequencies, and these are known as the sub-carriers f1, f2 ... fn.

Hence, as you can see, if the transmitted bandwidth extends from here to here, we can send all the signals modulated on f1, f2 ... fn; that is, all the signals can be transmitted through the medium. So after modulation we combine them and send them through the transmission medium. If the medium is air then you do not even have to combine them explicitly, because all the signals travel through the air, and at the receiving end the separation can be done by a suitable technique. As shown here, we receive the different signals around f1, f2 up to fn; these signals are received and then, with the help of demodulators, they are separated out. The demodulator separates out each of these carriers f1, f2, fn, and we get the different frequency bands: here the carrier frequency f1 is filtered out, here the carrier frequency f2 is filtered out, here the carrier frequency fn is filtered out, and then they are sent to different destinations. Here it is shown in more detail with carrier frequencies fc1, fc2 and fc3.
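A toy FDM transmitter can be sketched in a few lines (Python with numpy; the two message tones, the sub-carrier frequencies and the use of simple double side band amplitude modulation are assumptions made only to illustrate frequency translation and summing):

import numpy as np

t = np.linspace(0.0, 1.0, 100000)
m1 = np.cos(2 * np.pi * 3.0 * t)       # assumed baseband message from source 1
m2 = np.cos(2 * np.pi * 5.0 * t)       # assumed baseband message from source 2
f1, f2 = 100.0, 200.0                  # assumed sub-carrier frequencies, well separated

# Each message is frequency translated by its own sub-carrier and all are summed
composite = m1 * np.cos(2 * np.pi * f1 * t) + m2 * np.cos(2 * np.pi * f2 * t)
# At the receiving end, band pass filtering around f1 or f2 followed by
# demodulation recovers the individual messages.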

Apart from the separate bandwidths for each of the signals coming from the different sources, there is some extra overhead known as the guard band. The guard band is necessary so that channels are separated by strips of unused bandwidth, to prevent inter-channel cross talk.

(Refer Slide Time: 17:28)

If the channels are very close then the signal of one channel will disturb the signal of another channel. Sometimes we observe, when receiving radio signals, that adjacent channels are so close that if the filtering is not done properly one channel disturbs the signals of the adjacent channels; that kind of thing can happen. So to avoid that we have to use guard bands between the bands of these signals.

So we have to take care of not only the bandwidths of each of these sub-carriers but also the bandwidths of the guard bands, together, to decide the bandwidth of the channel. That means the channel must have a bandwidth which is the sum of the individual channel bandwidths plus the guard band bandwidths.
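In other words the required channel bandwidth is just the sum of the sub-channel bandwidths plus the guard bands between them; for example (Python, with assumed numbers):

channel_bw = [4000, 4000, 4000]   # assumed bandwidth of each sub-channel, in Hz
guard_bw = 500                    # assumed guard band between adjacent sub-channels, in Hz

total_bw = sum(channel_bw) + guard_bw * (len(channel_bw) - 1)
print(total_bw)                   # 13000 Hz for three 4 kHz channels with 500 Hz guard bands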

Frequency Division Multiplexing has found applications in our day to day life. As you
know the transmission of AM and FM radio is done by using this technique. And with the
help of radio receiver you can tune different channels. That means we can select different
frequencies and listen to that particular channel. So it is used in AM and FM radio
broadcasting. Similarly it is also used in TV broadcasting. And in TV broadcasting also
you can select different channels with the help of the tuner of your TV and watch
different channels. And in cable television same thing is done through a coaxial cable.
You know each channel will require about 6 MHz and the cable can provide you a
bandwidth of about 500 MHz.

(Refer Slide Time: 19:50)

So through a single coaxial cable, the one used for cable television, you can send as many as a hundred different channels, and then you can separate them out with the help of the receiver, the tuners in the receiver, which are nothing but band pass filters. We see that analog transmission is necessary in all these cases so that the signal has a band pass nature and the different frequency bands can be multiplexed through the transmission medium.

Wavelength Division Multiplexing is commonly used in the case of optical fiber. In particular, we have observed that the optical fiber medium provides enormous bandwidth. Since it provides enormous bandwidth, WDM is the most viable technology that overcomes the huge opto-electronic bandwidth mismatch.

Here the mismatch arises because the optical fiber has very high bandwidth while the bandwidth requirement of individual users is much smaller, so a large number of signals have to be sent through the optical fiber; in other words, the optical fiber bandwidth has to be shared by the signals of a large number of users. How can that be done? It can be done by Wavelength Division Multiplexing. It is somewhat similar to Frequency Division Multiplexing, but at these higher frequencies we normally refer to it as Wavelength Division Multiplexing, because the wavelength values are small.
(Refer Slide Time: 21:40)

A Wavelength Division Multiplexing optical fiber network comprises optical wavelength switches and routers connected by point-to-point fiber links. That means there are switches and routers: you have a switch or router and you have optical fiber cable, and the switch can combine a number of signals and can also separate out a large number of signals; this is how it is done, and this is your optical fiber cable (Refer Slide Time: 22:07). End users may communicate with each other through all-optical channels known as light paths, which may span more than one fiber link. Here I have shown only one link, but there may be more than one, with more switches in between, and as a consequence the light signal can pass through a number of switches in the optical domain and reach the other side. This can happen whenever you have optical switches. Wavelength Division Multiplexing is becoming very popular, and the basic concept is shown here: again we have different sources (Refer Slide Time: 22:57) and they are modulated with different carrier frequencies or wavelengths, so the band occupied by a particular source, say i, runs in terms of wavelength from lambda i to lambda j.
(Refer Slide Time: 23:59)

So here, for example, the signal coming from channel one occupies lambda 1 to lambda 2, channel two occupies lambda 3 to lambda 4 (Refer Slide Time: 23:17), and you can see there is some guard band in between; then for channel n the band is lambda 2n-1 to lambda 2n. These different wavelengths can be multiplexed and sent through an optical fiber cable, so optical signals of different wavelengths travel through the fiber. At the receiving end they can be separated out, the filtering can be done, and we get the different wavelengths; as you can see here, lambda i to lambda j is meant for a particular channel.

You may be asking how the filtering can be done in the optical domain. You may remember that in your school days you did an experiment with a prism. The angle at which light comes out on the other side of the prism depends on the incidence angle and the wavelength. So lights of different wavelengths can be made incident on one side at different angles so that all of them come out at the same angle from the other side; they can then be sent through an optical fiber cable, and at the other end they can be separated out again by using another prism. This is a very simple example which shows how Wavelength Division Multiplexing can be done, how different light signals can be combined, sent through a single optical fiber, and separated out at the other end with the help of a prism. Somewhat similar devices are used in practice in Wavelength Division Multiplexing.

Then, coming to the digital one: we have discussed FDM and WDM, which use analog signaling as you have seen. Now we shall consider the two types of Time Division Multiplexing, synchronous and asynchronous, which use digital signaling. Time Division Multiplexing is possible when the data rate of the medium exceeds the data rate of the digital signals to be transmitted. The possibility arises for the same reason as in Frequency Division Multiplexing, however it is exploited in a different way: multiple digital signals can be carried on a single transmission path by interleaving portions of each signal in time. So we shall interleave the different signals one after the other and send them through the medium.

(Refer Slide Time: 26:31)

Interleaving can be done at the bit level or in blocks of bytes. At the bit level, the first bit comes from one signal, the second bit from another signal, the third bit from yet another signal, and so on; alternatively, interleaving can be done in terms of blocks of bytes. And of course, as we know, the data has to be in digital form: you have to do suitable encoding, such as digital data to digital signal encoding, or if your data is analog in nature you have to convert it into digital form by using pulse code modulation or delta modulation to get the digital signal; then you can do Time Division Multiplexing.
(Refer Slide Time: 27:02)

(Refer Slide Time: 27:19)

So here what we do is the incoming data from each source are briefly buffered and each
buffer is typically one bit or one character in length. So first the signal is received then
the signals are buffered before doing the multiplexing then the buffers are scanned
sequentially to form a composite data stream. So we can say that the scanning is done one
after the other from different buffers to form a composite signal. The scan operation is
sufficiently rapid so that each buffer is emptied before more data can arrive. Here this is
being explained (Refer Slide Time: 28:14) and another requirement is that composite data
rate must be at least equal to the sum of the individual data rates.
(Refer Slide Time: 28:30)

That means the composite data rate formed here should be at least the sum of the individual data rates. Here some kind of framing is done. As you can see (Refer Slide Time: 28:50), interleaving is done: a frame contains some data from source one in one time slot, data from the next source in the next slot, and so on up to source n; these are scanned one after the other, a frame is sent, and then comes the next frame. Again one bit, or a group of bits, is taken from source one, then one bit or a group of bits from source two, and in this way it goes up to channel n and is repeated; this is how the scanning is done.
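A minimal sketch of this framing (Python; the per-source byte buffers, one byte per slot and the filler byte are assumptions made only for illustration) interleaves one unit from each source into every frame, in a fixed order:

sources = [b"AAA", b"BBB", b"CC", b"D"]   # assumed buffered data from four sources

def build_frames(sources, filler=0x00):
    # Synchronous TDM: one slot per source per frame, scanned in a fixed, predefined order;
    # a source with nothing to send still occupies (and wastes) its slot.
    n = max(len(s) for s in sources)
    return [bytes(s[i] if i < len(s) else filler for s in sources) for i in range(n)]

print(build_frames(sources))   # [b'ABCD', b'ABC\x00', b'AB\x00\x00'] -- empty slots are still sent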

Now a question arises whenever you do the framing. Say you are doing some kind of framing: this slot is from source one (Refer Slide Time: 29:50), this is from source two, this is from source three, and so on up to source n; then of course the next frame follows, again with source one, source two and so on, and this stream is being sent.
(Refer Slide Time: 30:11)

Now the question arises how the other side, that is the receiving end, identifies the beginning of the frame; after the beginning of the frame is identified the data from each of these sources can be taken out, provided the two ends operate at the same frequency. So the requirement here is that transmitter and receiver must operate at the same frequency and, moreover, the receiver must be able to identify the beginning of the frame. That is done by using what you may call a separate control channel through which the sync characters are sent. At the beginning of each frame a synchronization character is sent, and usually the synchronization character is a sequence of 1 0 1 0 and so on; you may send eight bits 1 0 1 0 1 0 1 0 to form the synchronization character. After this is detected at the receiving end, the subsequent slots carry the data of the different channels. So this slot is considered channel 1, and here again channel 1, so in this way channel 1 is repeated in every frame. For source S1 data is taken from the channel 1 slot, then data is taken from the channel 2 slot belonging to source 2 (Refer Slide Time: 31:50), and it is predefined that data from source one will be present in this time slot and data from source two in that time slot and so on.
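As a rough sketch of how the receiving end can use the synchronization character, the following Python fragment (the 10101010 pattern, the slot width and the number of channels are assumptions made only for illustration) locates the sync character in a received bit string and then reads out one fixed slot per channel.

    SYNC = "10101010"        # assumed 8-bit synchronization character
    SLOT_BITS = 8            # assumed width of each channel's slot
    N_CHANNELS = 4

    def read_frame(bitstream):
        """Find the sync character, then slice one slot per channel after it."""
        start = bitstream.find(SYNC)
        if start < 0:
            raise ValueError("synchronization character not found")
        pos = start + len(SYNC)
        return [bitstream[pos + i * SLOT_BITS: pos + (i + 1) * SLOT_BITS]
                for i in range(N_CHANNELS)]

    frame = SYNC + "00000001" + "00000010" + "00000011" + "00000100"
    print(read_frame(frame))   # slot 1 belongs to source 1, slot 2 to source 2, ...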

Sometimes it so happens that the data rate coming from a source does not match. For example, for the purpose of multiplexing your requirement is say 8000 bps, while from the source the data is coming at the rate of 7200 bps, so the data rates are different. What will you do in such a situation? In such a case a technique known as pulse stuffing is used so that synchronization of different data rates becomes possible. What is done is that some dummy bits are inserted to bring the data rate up to 8000 bps before transmission; at the receiving end the additional bits are separated out to get back the original data, which has the data rate of 7200 bps. This is how pulse stuffing is done. Later on we shall take up an example to discuss it.
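A minimal sketch of the idea, assuming the simplest possible scheme in which one dummy bit is stuffed after every nine data bits (7200/9 = 800 extra bits per second, which brings the rate up to 8000 bps) and the receiver strips the bits at those agreed positions:

    DATA_RATE = 7200                                  # bps from the source
    LINK_RATE = 8000                                  # bps expected by the multiplexer
    STUFF_EVERY = DATA_RATE // (LINK_RATE - DATA_RATE)   # one dummy bit per 9 data bits

    def stuff(bits):
        out = []
        for i, b in enumerate(bits, start=1):
            out.append(b)
            if i % STUFF_EVERY == 0:
                out.append(0)                         # dummy bit at an agreed position
        return out

    def unstuff(bits):
        # The receiver knows the pattern and simply drops every 10th bit.
        return [b for i, b in enumerate(bits, start=1) if i % (STUFF_EVERY + 1) != 0]

    data = [1, 0, 1] * 6
    assert unstuff(stuff(data)) == data               # original 7200 bps data recovered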

So here the synchronous Time Division Multiplexing is shown.


(Refer Slide Time: 34:05)

Here as you can see the data is coming from four sources: you have got AAA coming from source one, BBB from source two, CC from source three and D from source four. The first frame has all its slots filled, A B C and D. That means first a character is taken from A, then from B, then from C and then from D, so all the slots of the first frame are filled. However, as far as the second frame is concerned, the second A, second B and second C fill the first three slots, but there is no data for the fourth slot so it remains empty. So far as the third frame is concerned, you have got only two characters, A and B, coming from source A and source B; source C and source D have no data so their slots remain empty. So we find that in synchronous Time Division Multiplexing some of the slots may remain empty if the source has no data to send, as has happened in this particular example (Refer Slide Time: 34:44).

As you can see, of the four frames the first frame is completely filled, in the second frame one slot is empty, in the third frame two slots remain empty and in the fourth frame three slots remain empty, so this leads to inefficient utilization of the channel or medium bandwidth. How do you overcome it? This limitation is overcome by using a technique known as Statistical Time Division Multiplexing, also known as Asynchronous Time Division Multiplexing or Intelligent Time Division Multiplexing. This wastage is what happens in synchronous time division multiplexing, which is what is usually referred to simply as TDM, Time Division Multiplexing.

Now, how is this problem overcome in statistical or Asynchronous or Intelligent Time Division Multiplexing? In statistical TDM time slots are allocated dynamically on demand. In the previous case, as we have seen, each slot is pre-assigned to a particular source, but here it is not so. Here the slots are allocated dynamically on demand, as and when required by the different sources, and it takes advantage of the fact that not all the attached devices may be transmitting all the time. It is based on the observation that all the sources may not be transmitting all the time. For example, whenever we talk on the telephone sometimes we keep quiet, sometimes we talk quickly, sometimes we talk slowly, and sometimes we think for a while and then speak; this is quite common, so a somewhat similar situation exists.

Therefore this kind of particular statistical behavior is exploited in asynchronous Time Division Multiplexing and this is explained in this particular example.

(Refer Slide Time: 36:55)

(Refer Slide Time: 38:12)

Here data is again coming from four different sources and the different time slots are shown. In the first time slot, as you can see, we have got data only from source A and source C: this is your A1 (Refer Slide Time: 37:28) and this is your C1 filling the first frame, while this slot remains empty and this slot also remains empty. That means two slots of frame one remain empty if it is synchronous TDM. Then in the next frame we have got three pieces of data: there is no data from source A, but you have got data from sources B, C and D, so these slots are filled with B1, C2 and D1. On the other hand, in frame three we have data only from source B and source D, so the slots allocated to source B and source D are filled up but the other slots remain empty.

Similarly in the fourth frame the slot corresponding to A is filled up and the slot corresponding to C is filled up, while the other two slots remain empty. So this is how the data is sent in synchronous Time Division Multiplexing, and the duration from here to here is the overall time required in synchronous TDM.

Now let us see how we can use Asynchronous TDM, where we need not follow the same fixed sequence; data coming from any source can be sent in any slot. For example, here we can send A1 and C1, then B1 and C2, and as you can see here each frame has got only two slots. So in frame one you have sent A1 and C1 and in frame two you have sent B1 and C2, so essentially the bandwidth requirement is less here: earlier the bandwidth requirement corresponded to four time slots per frame, but here it corresponds to only two time slots. So frame one carries A1 and C1, frame two carries B1 and C2, the next frames carry D1 and B2 followed by D2 and A2, and finally the fifth frame carries C3, so the total time required is less than it was in synchronous TDM. As a consequence the bandwidth requirement for transmission of data through Asynchronous TDM will be much less. But there is a problem in this case.

Here (Refer Slide Time: 40:20), as you have seen, there is no fixed slot: A1 comes here, then C1 comes here, then in the same slot position you may send B1, in the same position D1, in the same position D2, in the same position C3 and so on. So we see that there is no pre-allocated slot for a particular source. How will the other side know that a given piece of data has come from source S1? That is the problem, and it has to be overcome by providing this information explicitly.

Since the data arrives from, and is distributed to, the I/O lines unpredictably, that is asynchronously, address information is provided to assure proper delivery, so that the data can be delivered to the proper destination at the other end. This leads to more overhead per slot: apart from the data we have to send the address. In the simplest case we can send an address followed by variable length data, which means one source per frame. However, the most general case is that we send data from several sources per frame.
(Refer Slide Time: 42:11)

For example, (Refer Slide Time: 41:58) the data for each source can have an address field, a length field and a data field. So there are three different fields corresponding to the data received from a single source, and the address and length are the overheads: the other side has to identify from the address which source the data corresponds to, and the length field has to be known so that the next sub-slot can be located. To minimize the overhead for the address we sometimes use relative addressing rather than absolute addressing, depending on the number of sources.

Suppose you have got eight sources; then instead of providing the full address, three bits are sufficient. You can take the source number modulo the number of sources, so only the lower order three bits are needed and they can be used as the address when we have eight sources. Similarly, the length can be encoded compactly, for example 00 if the number of bytes is 1, 01 if the number of bytes is 2, and so on, and this can be appended after the address field. In this way we can try to reduce the overhead, but whatever we do some overhead cannot be escaped; there will always be some overhead in Asynchronous Time Division Multiplexing because you have to send the address information along with the data. However, it has got the advantage that it makes better utilization of the channel.
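To make the overhead concrete, here is a hypothetical Python encoding of the sub-slots of one statistical TDM frame; the 3-bit relative address (enough for eight sources) and the 2-bit length field (00 for one byte, 01 for two bytes and so on) are assumptions chosen purely for illustration.

    def pack_frame(entries):
        """entries: list of (source_id, data_bytes) pairs, at most 8 sources."""
        bits = ""
        for src, data in entries:
            bits += format(src & 0b111, "03b")        # 3-bit relative address
            bits += format(len(data) - 1, "02b")      # 2-bit length field (1..4 bytes)
            for byte in data:
                bits += format(byte, "08b")           # the data itself
        return bits

    frame = pack_frame([(2, b"A"), (5, b"OK")])
    print(frame)      # only 5 overhead bits (address + length) per occupied sub-slot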
(Refer Slide Time: 44:00)

Particularly, in Asynchronous Time Division Multiplexing the data rate at the output is less than the sum of the data rates at the inputs; that is the primary condition for Asynchronous Time Division Multiplexing. However, there are peak periods in which more data is generated than at other times, and in a peak period that condition may be violated. That means the incoming data rate may exceed the outgoing data rate that is possible. In such a case the only alternative is to use buffers of suitable size so that the data can be temporarily stored before sending. Therefore this problem of Asynchronous Time Division Multiplexing is overcome by the use of buffers. Obviously there is a trade-off between the buffer size and how much extra data can be stored, or in other words what maximum input data rate can be supported when the output data rate is fixed.

Nowadays the cost of memory devices has come down, and as a consequence you can afford to have more memory; on the other hand channel capacity is quite costly, so the problem is usually overcome by having more and more memory. For example, let us assume n is the number of inputs, r is the data rate of each source, M is the effective capacity of the output (obviously M will be less than n times r) and alpha is the mean fraction of time each input is transmitting. That means each source is not transmitting all the time, which is our basic premise, and obviously alpha lies between 0 and 1. Then a measure of the compression is the ratio C = M/(n x r), the effective capacity of the output divided by nr. Obviously C is less than 1, and its value lies between alpha and 1.

So we can see the compression that is possible: you can have a smaller output bandwidth than the sum of the input bandwidths, and that ratio is C = M/nr. The value of M for a particular application can be decided based on experiments, and it will definitely depend on the value of alpha and also on n and r. Therefore the performance of Statistical Time Division Multiplexing is decided by the statistical phenomenon, the mean fraction of time each input is transmitting, and based on that we can also add buffers to improve the performance.
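A tiny numerical sketch of this compression ratio; all the figures below are assumed only for illustration.

    n = 10             # number of input sources (assumed)
    r = 9600           # data rate of each source in bps (assumed)
    alpha = 0.4        # mean fraction of time each source is transmitting (assumed)

    M = 50_000                 # chosen effective output capacity in bps
    C = M / (n * r)            # compression ratio C = M / (n * r)
    print(round(C, 3))         # about 0.521: the output link needs only about half
                               # of the peak aggregate input rate
    assert alpha < C < 1       # the useful operating range for statistical TDM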
Our discussion on multiplexing will not be complete if we do not discuss another very important technique known as Inverse Multiplexing.

(Refer Slide Time: 48:39)

We have seen that multiplexing is used when the bandwidth of the channel is more than the bandwidth of the input sources, but here we are performing something different. Suppose an organization is sending voice, video and data, a multimedia signal, through the same medium. The voice bandwidth requirement can be 64 Kbps, the video bandwidth requirement is typically 1.54 Mbps and the data bandwidth can be say 128 Kbps, so this is the requirement, but usually you do not send video all the time. Sometimes you are sending several voice signals or several data signals, and sometimes you are doing video conferencing and sending video signals, so at different instants of time your requirement is different. One possibility to satisfy the requirement is to take a leased line with a capacity of 1.54 Mbps; whenever you are sending voice, data or video you send it through this particular channel, but obviously you will not be utilizing the full capacity of the leased line or the medium most of the time.

What an organization can do in such a case is to take bandwidth on demand. Several 128 Kbps channels can be kept available, and the number actually used is decided by the requirement at a particular instant; so the bandwidth available is 128 Kbps into n, where n is chosen on demand. What we are doing is that the high-rate input data is demultiplexed over several channels and then multiplexed again at the other end. That means the video data can be sent through several channels and at the other end they can be combined to regenerate the video data. This particular concept is becoming more popular for proper utilization of the bandwidth and also for cost effectiveness. These are the various techniques of multiplexing that we have discussed.
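A rough sketch of inverse multiplexing in Python, splitting a high-rate stream byte by byte over k lower-rate channels and recombining it at the far end; the channel count is an arbitrary assumption, and a real inverse multiplexer also has to compensate for differential delay across the channels.

    def inverse_mux(data, k):
        """Split a high-rate stream byte by byte over k lower-rate channels."""
        return [data[i::k] for i in range(k)]

    def inverse_demux(channels):
        """Interleave the channel streams back into the original byte order."""
        out = bytearray(sum(len(c) for c in channels))
        for i, ch in enumerate(channels):
            out[i::len(channels)] = ch
        return bytes(out)

    video = b"high-rate video stream"
    parts = inverse_mux(video, k=4)        # e.g. carried over four separate channels
    assert inverse_demux(parts) == video   # recombined at the receiving end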
Now it is time for some review questions.

(Refer Slide Time: 48:39)

1) In what situation multiplexing is used?


2) Distinguish between the two basic multiplexing techniques?
3) Why guard bands are used during Frequency Division Multiplexing?
4) Why sync pulse is required in Time Division Multiplexing?
5) What limitation of TDM is overcome in Asynchronous Time Division Multiplexing
and how?
6) Design a Time Division Multiplexing system having an output bandwidth of 256 Kbps to send data from 4 analog sources of 2 KHz bandwidth and 8 digital signals of 7200 bps.

So here you will design a Time Division Multiplexing system by making use of pulse stuffing and other techniques. The answers to these questions will be given in the next lecture. And here are the answers to the questions of lecture 10.
(Refer Slide Time: 53:01)

1) Which modulation technique is used in optical communication?


As I mentioned in the previous lecture, On/Off keying is used in optical communication. We send a light signal whenever the bit is 1 and send no light signal whenever it is 0; that is why it is called On/Off keying, and this is what is used in optical communication.

2) What are the three modulation techniques possible in modems?


The three modulation techniques possible in modems are Amplitude Shift Keying, Frequency Shift Keying and Phase Shift Keying, plus a combination known as QAM. Later we shall discuss modems in more detail and we shall see how these different modulation techniques are used in different standards. Strictly speaking it is not really three but four; however, QAM can be considered as a combination of ASK and PSK.
(Refer Slide Time: 53:47)

3) Why PSK is preferred as the modulation technique in modems?


In the PSK scheme it is possible for each signal element to represent more than one bit; the approach is known as Quadrature PSK. That means here the data rate is more than the baud rate, so we can achieve a higher data rate by using QPSK, and that is why PSK is preferred.

(Refer Slide Time: 54:09)

4) Out of the three digital to analog modulation techniques which one provides higher
data rate?
For a given transmission bandwidth a higher data rate can be achieved in the case of PSK; in other words, in PSK a higher channel capacity is achieved although the signaling rate is lower. So that is all in this lecture. In the next lecture we shall discuss some applications of multiplexing techniques, like telephone systems and other things, thank you.
Data Communications
Prof. A. Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture-11B
Multiplexing
(Contd….)

Hello and welcome to today’s lecture on multiplexing techniques. You will find that
these multiplexing techniques will have widespread application in data communication.
Here is the outline of today’s lecture.

(Refer Slide Time: 2:23)

First we shall discuss why we really need multiplexing then we shall be introduced to the
basic concepts of multiplexing and as we shall see there are two basic approaches of
multiplexing; first one is known as Frequency Division Multiplexing and second one is
known as Time Division Multiplexing.

One variation of this Frequency Division Multiplexing is Wavelength Division Multiplexing, which is used in the context of optical communication. Whenever we send a light signal through optical fiber we call it Wavelength Division Multiplexing, although it is basically the same as Frequency Division Multiplexing. Then Time Division Multiplexing again has two different types, synchronous and asynchronous; we shall discuss them in detail.

There is another technique which is known as inverse TDM which is also used in data
communication today. So we shall conclude our lecture by discussing about inverse
TDM.
On completion the students will be able to explain the need for multiplexing, that is why multiplexing is needed; they will be able to distinguish between the different multiplexing techniques; they will be able to explain the key features of Frequency Division Multiplexing and also the key features of Time Division Multiplexing; they will be able to distinguish between the two types of Time Division Multiplexing, one known as synchronous and the other as asynchronous; and finally they will be able to explain the concept of inverse Time Division Multiplexing.

(Refer Slide Time: 3:10)

To discuss why multiplexing is needed, let us consider the observations that we have in our day to day data communication. The first one is that most data communication devices typically require only a modest data rate; as we shall see, an individual user requires very small bandwidth. For example, whenever we send voice we require a bandwidth of maybe up to 4 KHz, or sometimes 3 KHz. Similarly, when we send data we may not require high bandwidth either. On the other hand, the communication media which are used nowadays have much higher bandwidth.

For example, if we use coaxial cable or optical fiber or if we use microwave technique
then the bandwidth of the medium is quite high. So whenever two users are
communicating through a link usually the full capacity of the link is not used, it is not
utilized. So how do you make full utilization of the link capacity?

In fact, as we shall see, the higher the data rate the more cost effective the transmission facility. That means, to make the transmission facility cost effective, we want a medium which has a high data rate and high transmission capacity, while on the other hand individual users need only a smaller capacity. It is based on this observation that the multiplexing techniques have been developed. Basically, multiplexing can be used when the bandwidth of the medium is greater than that of the individual signals to be transmitted through the channel. Since the bandwidth of the individual signals is small, the medium can be shared; this is the key idea, sharing of the medium by more than one channel of signals by using multiplexing. That means what we are trying to do is to share the bandwidth of a channel among a number of users. It is just like in a city where the water comes through a bigger pipe and gets distributed through narrower pipes to individual residences; it is somewhat like that.

For efficiency the channel capacity can be shared by a number of communicating stations; this also increases the efficiency of use of the transmission media. And since we are concerned about cost, we will find that the most common use of multiplexing is in long haul communication using coaxial cable, microwave and optical fiber. These are the three transmission media which have quite high bandwidth and which are used for long distance communication, and this is where we should use multiplexing, because there the bandwidth must be used in a very efficient manner.

(Refer Slide Time: 6:14)

Let us take up a very simple example where the bandwidth is not really high, say a telephone line.
(Refer Slide Time: 8:01)

As you know, the analog telephone line has a very small bandwidth, of which about 2400 Hz can be used for data. Now even this small bandwidth can be shared for data communication in two directions, as shown in this diagram. The band from 600 to 1800 Hz is used for data communication in one direction, say from user A to user B, and the band from 1800 to 3000 Hz can be used from user B to user A. So you see that even this small bandwidth can be shared by making use of multiplexing, and here the multiplexing is made possible by encoding the data using, say, PSK, that is Phase Shift Keying. PSK increases the data rate for a given baud rate, so even with 2400 Hz of bandwidth the data rate can be quite high. This is a simple example, and it can be used for bidirectional communication between two users over the same telephone line.

Now let us look at the basic concept of multiplexing.


(Refer Slide Time: 8:40)

Here we use a device known as a multiplexer. The multiplexer combines the signals coming from n channels; as you can see this is channel 1, channel 2 and this is channel n. So signals are coming from n channels and they are combined with the help of the multiplexer. The multiplexer sends the combined or composite signal through a single medium. So you have got only one medium and the signals of the n channels are being sent through it. At the other end we have the opposite device, known as a demultiplexer. The demultiplexer separates out the signals of the different channels by filtering, and here we get back the signal corresponding to channel 1, the signal corresponding to channel 2 and so on up to channel n. So this is done by making use of two devices: a multiplexer, which combines the signals of n channels into a single composite signal that can be sent through a single medium, and at the other end a demultiplexer, which separates out the signals of the different channels so that they can be sent to different stations. This is how multiplexing and demultiplexing are done.

Now, as I mentioned, there are two basic approaches. The first one is known as Frequency Division Multiplexing, and as we shall see it is possible because of the analog transmission made possible by modulation. As we have seen, modulation performs narrow banding of the signal; as a consequence it is possible to send n different signals, each of them occupying a small fraction of the bandwidth, through the transmission medium. So here the n channels are multiplexed to create a Frequency Division Multiplexed signal and all the n different signals are sent simultaneously. Obviously they should be placed in different bands so that one does not mix with the other, and at the other end, again by filtering, you can separate out the different channels like channel 1, channel 2 and channel n. So this is Frequency Division Multiplexing (Refer Slide Time: 11:20); here everything is sent in parallel.
(Refer Slide Time: 11:33)

The second one is Time Division Multiplexing. Here the approach is a little different. In the previous case all the signals were sent simultaneously, but here it is not so. As you see, the signals are coming from different channels and the time is divided into slots: in slot one the signal from channel 1 is sent, in slot two the signal from channel 2 is sent, in slot three the signal from channel 3 is sent, and in slot n the signal from channel n is sent. So all are not sent in parallel; instead, signals from different channels are sent one after the other in a sequence, each in a particular slot of time, and at the other end they can again be separated by delivering the data of slot one to channel 1, the data of slot two to channel 2 and so on. It is somewhat like a switch: the switch first selects this one and transmits, then selects this one and transmits, then selects this one and transmits (Refer Slide Time: 12:40) and so on. Similarly, at the other end it performs the opposite operation: first this signal is sent to channel 1, then to channel 2 and then to channel n, and in this way it goes on. So this is your Time Division Multiplexing.

Now, as we shall see, multiplexing will require two different types of signaling.
(Refer Slide Time: 13:10)

As I have mentioned, analog signals are used for Frequency Division Multiplexing and also for Wavelength Division Multiplexing; although they are basically the same thing, the latter term is used in the context of transmission through optical fiber. On the other hand, digital signaling is used in Time Division Multiplexing, which has got two versions: Synchronous Time Division Multiplexing and Asynchronous Time Division Multiplexing. So Frequency Division Multiplexing makes use of the analog modulation techniques, and Time Division Multiplexing makes use of the encoding techniques we have already discussed.

Let us see how Frequency Division Multiplexing is implemented. What is done is that the available bandwidth of a single physical medium is divided into a number of smaller independent frequency channels. Look at this figure: the channels are smaller and independent, so they do not overlap with each other.
(Refer Slide Time: 15:19)

Then, using modulation, the independent message signals are translated into different frequency bands. As we have seen, this can be done because modulation allows narrow banding, and by narrow banding the signals can be translated to different frequency bands. All the modulated signals are combined by using a linear summing circuit to form a composite signal for transmission through the medium. Obviously you will require a number of carriers, known as sub-carriers, to modulate the individual message signals. We shall illustrate this with the help of an example and we shall make use of different modulation techniques.

(Refer Slide Time: 15:36)


Sometimes we shall make use of amplitude modulation, sometimes of angle modulation, which has got two types, namely frequency modulation and phase modulation. Let us see how it is done. Similarly, when the signal is digital we have to make use of Amplitude Shift Keying, Frequency Shift Keying or Phase Shift Keying. As we know, Amplitude Shift Keying and Phase Shift Keying can be combined to form Quadrature Amplitude Modulation, QAM. These are used whenever the data is digital and we would like to convert it into an analog signal, which has to be done whenever we want to do Frequency Division Multiplexing.

(Refer Slide Time: 17:42)

Here the signals are coming from different sources, source one, source two and source n, and here you have got the modulators with their different sub-carriers f1, f2 and fn. These are the sub-carriers used to modulate the different signals, and if this is the bandwidth of the modulating signal, the bandwidth of the modulated signal is spread around the carrier on both sides. If the bandwidth of the analog signal is B, then the bandwidth of the modulated signal, as we know, is 2B, and after these are combined the total bandwidth is the summation of the individual bandwidths, 2B plus 2B and so on starting from f1. Obviously there should be some separation between f1, f2 and fn so that there is no overlap. This is the transmitted signal bandwidth.

As you can see (Refer Slide Time: 17:44) the transmission bandwidth is the sum total of the individual bandwidths of the different signals. At the other end the composite signal is received, filtered and then demodulated. The individual filters, essentially band pass filters, have center frequencies f1 in this case, f2 in this case, fn and so on; so you can see these are the filters. Actually the filter should come first and the demodulator after it: after filtering and demodulation we get back the original signal, which is then sent to the destination (Refer Slide Time: 18:27, 18:46). So this is how the Frequency Division Multiplexing is done.

And as I mentioned there should be some separation between the different frequency bands. This band corresponds to channel 1 (Refer Slide Time: 19:04) with sub-carrier fc1, this one corresponds to the bandwidth of channel 2 with sub-carrier fc2, and so on up to channel n with its sub-carrier frequency fcn. Now, as you can see, between each pair of bands there is a small gap which is known as a guard band. The guard band is necessary so that the channels are separated by strips of unused bandwidth to prevent inter-channel cross talk.

(Refer Slide Time: 19:40)

If there is no separation there is a possibility of cross talk: if you place the bands side by side without any separation there will be some overlap, which will lead to cross talk. Hence, to avoid cross talk, these guard bands are used, and obviously they are an extra overhead. So, apart from the sum total of the signal bandwidths, some additional bandwidth is spent on these guard bands; this is an extra overhead in Frequency Division Multiplexing.
(Refer Slide Time: 20:17)

And this Frequency Division Multiplexing has many uses as we know. For example, we
have the transmission of AM and FM radio signals. Everyday we are listening to AM
Amplitude Modulated radio stations and FM radio stations. FM has become very very
popular because of the quality of the signal nowadays and both are based on Frequency
Division Multiplexing FDM. And our TV broadcasting is also based on Frequency
Division Multiplexing because you have different TV stations and they use different
bands for transmission of their signals. In the TV receiver we can select different
channels with the help of filtering. And also we are familiar with cable television where
the signal is distributed with the help of coaxial cable. There also we use Frequency
Division Multiplexing. Later on we shall discuss it in more detail. So these are three important areas where Frequency Division Multiplexing is used. Nowadays cable television is used not only for sending video signals; the cable modem can also be used for the transmission of data.
(Refer Slide Time: 21:36)

As I mentioned, there is one special type of Frequency Division Multiplexing, called Wavelength Division Multiplexing, that is used whenever we are sending light signals through optical fiber. Why do we call it Wavelength Division Multiplexing?

The reason is that the frequency is very high, so the wavelength is small; instead of stating the bands in terms of frequencies we state them in terms of wavelengths. Wavelength Division Multiplexing is becoming very popular because of the enormous bandwidth provided by optical fiber media. To make use of this enormous bandwidth, Wavelength Division Multiplexing is the most viable technology, and it overcomes the huge opto-electronic bandwidth mismatch.

As I have told you, the optical fiber can carry a very high bandwidth, while the individual users, who are sending either audio or video, have a much smaller bandwidth requirement. Only by using Wavelength Division Multiplexing can we share the optical fiber, make use of its enormous bandwidth and overcome the huge opto-electronic bandwidth mismatch.

A Wavelength Division Multiplexing optical fiber network comprises optical wavelength switches or routers interconnected by point-to-point fiber links. The end users may communicate with each other through all-optical multiplexed channels, known as lightpaths, which may span more than one fiber link. That means over a very long distance the signal can be sent in the form of light, and at the other end a suitable transducer, such as a PIN diode, converts the light signal into an electrical signal to get back the original data.

So the basic approach, as you can see, is the same. Here you have got n sources coming in as 1, 2 up to n (Refer Slide Time: 24:04), and here is the bandwidth of the optical signal; these are multiplexed, and this is the bandwidth represented in terms of wavelengths. The bandwidth of the first signal goes from lambda 1 to lambda 2, that of the second from lambda 3 to lambda 4, and in this way up to lambda 2n-1 to lambda 2n for source n. So these are expressed in terms of wavelengths; essentially they are small frequency ranges, and these light signals are transmitted through the optical fiber.

(Refer Slide Time: 24:34)

So here you can see that the combined bandwidth is much larger, and it can easily be sent through the optical fiber. At the other end of the fiber, with the help of a demultiplexer, we can separate out the different optical signals having different frequencies and then send them to their destinations.

(Refer Slide Time: 24:55)


You may be wondering how this multiplexing and demultiplexing can really be done. It can be explained very easily with the help of this simple diagram, where we have used two prisms. As you know, light exhibits both reflection and refraction, so you can make use of the refraction property to combine light signals coming from three different sources; as you can see, a single composite signal is formed here (Refer Slide Time: 25:37) which can be sent through the optical fiber, and at the other end, with the help of another prism, the signals can be separated out and we get back all three different signals. This is how Wavelength Division Multiplexing can be done.

Now let us focus our attention on Time Division Multiplexing.

(Refer Slide Time: 26:00)

As I mentioned Time Division Multiplexing is used when we are using digital signals.
Digital signals as you know are generated by different encoding techniques. We have
already discussed about them in detail. This Time Division Multiplexing is possible when
the bandwidth of the medium exceeds the data rate of the digital signals to be transmitted.
That means here there is a possibility of sharing.
(Refer Slide Time: 26:33)

Multiple digital signals can be carried on a single transmission path by interleaving portions of each signal in time. Essentially we are interleaving the signals: first we send a portion from channel 1, then a portion from channel 2, then from channel 3, and in this way we interleave them and send the result through the medium having the higher bandwidth. This interleaving can be done at the bit level or in blocks of bytes. That means we can take one bit from channel 1, then one bit from the next channel, then one bit from the channel after that, which is called bit level interleaving; or we can take one byte from channel 1, then a byte from channel 2, then a byte from channel 3 and so on. In this way we can do interleaving in terms of bits or in terms of blocks of bytes.

Obviously, in this case, as I mentioned, we shall be using digital signals, and digital signals are generated by using a suitable encoding technique.
(Refer Slide Time: 27:45)

If the data is digital then the digital signals are generated by using one of three types of encoding, unipolar, polar or bipolar. Similarly, if the original data is in analog form, the digital signal can be generated by pulse code modulation or delta modulation. In either case we ultimately have digital signals, whose data rate is expressed in bits per second or Kbps, and these digital signals can be sent through a medium by using Time Division Multiplexing. However, this will require some kind of buffering.

(Refer Slide Time: 28:29)


The incoming data from each source are briefly buffered and each buffer is typically one
bit or one character in length depending on the interleaving level.
The buffers are scanned sequentially to form a composite data stream. So, by sequentially
scanning the different bits a composite data stream is created and the scan operation is
sufficiently rapid so that each buffer is emptied before more data can arrive. Let us see
how this is being done.

(Refer Slide Time: 30:14)

Here data is coming from source one, source two and so on up to source n, and there is some kind of switch: we take data from this source, then from the next, and in this way we create a frame. As you can see, in this frame we first have the data from source S1, then from source S2, then source S3 and so on up to Sn. After taking data from the n sources it starts again from source S1; as you can see here (Refer Slide Time: 29:41) the second frame again begins with source S1, then S2, S3 and so on, and the frames are sent one after the other in time. So this frame is sent, then this frame is sent and so on. As you can see, there are n slots in each frame: this is slot 1, this is slot 2, and in this way you have got n slots, each corresponding to a particular source. That means slot 1 corresponds to data from this source, slot 2 corresponds to data from that source and slot n corresponds to data from this last source.

However, whenever we are doing the framing some additional bits are necessary for synchronization; usually one bit per frame is used for this purpose. A special bit or bit pattern is added as a control channel and is used for synchronization. For example, if this is the frame (Refer Slide Time: 30:48), at the beginning of this frame a 1 is added, at the beginning of the next frame a 0 is added, and in this way ones and zeroes are added alternately; these bits are used for synchronizing the frames.
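A small sketch of this framing in Python, assuming character interleaving, one alternating sync bit per frame and a '-' marking an empty slot:

    def build_frames(sources):
        """Prefix each frame with an alternating sync bit: 1, 0, 1, 0, ..."""
        n_frames = max(len(s) for s in sources)
        frames = []
        for i in range(n_frames):
            sync = "1" if i % 2 == 0 else "0"
            slots = [s[i] if i < len(s) else "-" for s in sources]
            frames.append(sync + "".join(slots))
        return frames

    print(build_frames(["AAAA", "BBB", "CC", "D"]))
    # ['1ABCD', '0ABC-', '1AB--', '0A---']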
(Refer Slide Time: 31:01)

Essentially these synchronization bits state that this is the beginning of the frame and data
from different slots are coming from different sources and they are demultiplexed at the
receiving end.

However, sometimes the data rate from a particular source does not match the rate at which the scanning is done while multiplexing. In such a case some additional bits are added, a technique known as pulse stuffing or bit padding (both terms are used), to facilitate synchronization of the different data rates.

For example, suppose the multiplexer is scanning at the rate of 8 Kbps, so the data is expected at the rate of 8 Kbps, but from this source it is coming at the rate of 7.2 Kbps; obviously these two cannot be synchronized directly. What is done is that additional bits are stuffed in so that the effective data rate becomes 8 Kbps, and at the receiving end those dummy bits are taken out, because it is known that the data from this source is coming at the rate of 7.2 Kbps. This is known as pulse stuffing or bit padding, by which signals from different data sources having different data rates can be combined using Time Division Multiplexing.
(Refer Slide Time: 33:12)

Here is an example of Synchronous Time Division Multiplexing. Here at the transmitter the data is coming from four sources: this is source one (Refer Slide Time: 33:27), this is source two, this is source three and this is source four. As you can see, source A has got four characters, source B has got three characters, source C has two characters and source D has only one character to be sent. Therefore, as you do the framing, the first frame has got four characters coming from the four different sources: in slot one we have put the character coming from source A, in slot two the character from source B, in slot three the character from source C and in slot four the character from source D. This is how the first frame is created.
source and D coming from fourth source. So this is how the first frame is created.

Now, as you go to the second frame, the second A is put in the first slot, the second B is placed in the second slot and the second C is placed in the third slot; as you can see there is no data for the fourth slot so it goes empty (Refer Slide Time: 34:38). When we go to the third frame, only the first two slots are filled, by the third characters from source one and source two respectively: in slot one we have the third A and in the second slot the third B, while the other two slots remain empty because there is no data. At the receiving end, however, the frames can be received and the characters delivered to their respective destinations.

As you can see this will go here, this B will go here, this C will go here this D will go
here so there is no problem in multiplexing and demultiplexing. The problem is
somewhere else. What we are observing in this particular case is that the data that is
generated by framing has got some redundancy. That means if a particular source has no
data to be sent that particular slot goes empty because each slot is dedicated for a
particular source.
In this case, as you can see, in the second frame one slot goes empty (Refer Slide Time: 36:00), in the third frame two slots go empty and in the fourth frame three slots go empty, so this is essentially a wastage of bandwidth: the composite signal is sent through a medium of higher bandwidth, whose data rate is much higher than that of the individual sources, and that transmission medium of higher bandwidth is used for sending the multiplexed signal. However, in Synchronous Time Division Multiplexing, as we find, this bandwidth is not fully utilized and there is some wastage. Therefore we have to overcome the problem of the wastage of bandwidth.

(Refer Slide Time: 37:15)

This is the limitation of Synchronous Time Division Multiplexing: as we have seen, many of the time slots in a frame may be wasted. This problem is overcome by using a new technique which is known as Statistical, Asynchronous or Intelligent Time Division Multiplexing. Actually these are three different names for the same thing: it is Asynchronous Time Division Multiplexing, but sometimes it is called Statistical Time Division Multiplexing or Intelligent Time Division Multiplexing.

In this Asynchronous Time Division Multiplexing, slots are allocated dynamically on demand. In the previous case, as you have seen, the slots are pre-assigned and dedicated to each channel, but here it is not so: depending on whether a particular channel has data or not, a slot is allocated dynamically, so no slot is permanently assigned to a particular source. Any slot can be used by any source, and this takes advantage of the fact that not all the attached devices may be transmitting all the time. For example, whenever we talk over a telephone line we are not speaking all the time; sometimes we are listening and sometimes we are thinking, and that silence period is wasted. That wastage can be overcome by using Statistical Time Division Multiplexing.
Let’s see how it can be done. This is illustrated with the help of this simple example.

(Refer Slide Time: 40:02)

Here data is coming from four different sources A, B, C and D, and here you have got the high speed multiplexer; obviously its output data rate is four times the data rate of these inputs. There are different time slots: this data belongs to source A, this data to source B (Refer Slide Time: 39:25), this to source C and this to source or channel D. Now, as you can see, during the time slot t0 to t1 only channel A and channel C have data, during slot two channel B, channel C and channel D have data, during slot three channel B and channel D have data, and during slot four channel A and channel C have data. So if we use Synchronous Time Division Multiplexing the framing will be done in this way.

In frame 1 we shall have this data (Refer Slide Time: 40:16), that is A1 and C1, in frame 2 we shall have B1, C2 and D1, in frame 3 we shall have B2 and D2, and in frame 4 we shall have A2 and C3. And we can see that these are the time slots which are wasted: although we have a large bandwidth from here to here, we are not making use of it. Let us see what we can do with Asynchronous Time Division Multiplexing.

In Asynchronous Time Division Multiplexing we reduce the bandwidth: instead of four slots we have only two slots coming out of this multiplexer, so in a frame we have got only two data items. In the first frame we are sending A1 and C1, in the second frame we are sending B1 and C2, in the third frame we are sending D1 and B2, in the fourth frame we are sending D2 and A2, and then in the fifth frame C3. So we see that the wastage is much less, and the later frames and slots can be used for sending data arriving in the other time slots. Thus we are making much better use of the available bandwidth.
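The way the statistical multiplexer fills its two-slot frames can be sketched as follows; the arrival pattern is taken from the example above, while the queue-based code itself is only an illustration.

    from collections import deque

    # Data arriving in each input time slot, as in the example above.
    arrivals = [["A1", "C1"], ["B1", "C2", "D1"], ["B2", "D2"], ["A2", "C3"]]

    SLOTS_PER_FRAME = 2
    queue, frames = deque(), []

    for slot in arrivals:
        queue.extend(slot)                        # buffer whatever arrived
    while queue:
        frames.append([queue.popleft()
                       for _ in range(min(SLOTS_PER_FRAME, len(queue)))])

    print(frames)
    # [['A1', 'C1'], ['B1', 'C2'], ['D1', 'B2'], ['D2', 'A2'], ['C3']]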
(Refer Slide Time: 42:02)

So in this Asynchronous Time Division Multiplexing, since the data arrives from different sources and is distributed to the I/O lines unpredictably, address information is required. Although we are able to use the bandwidth more efficiently, or, to put it differently, with a transmission medium of lesser bandwidth we can send signals coming from different sources provided they do not generate data continuously, there is a problem in this.

However, there is a problem. At the receiving end it is necessary to identify which data is
coming from where. For example, at the receiving end this slot is not meant for data only
from source A or source B so in this slot data can be sent by any one of the channels or
data can be taken from any one of the channels. But at the receiving end how the receiver
will know that data of a particular slot belongs to a particular channel? For that purpose
you have to incorporate address information. So, for proper delivery at the receiving end
it is necessary to have address information embedded as part of the data so there is an
overhead. So apart from the data we are adding this information here and since it is an
overhead we want to minimize it.

If we are sending say one source per frame then we can do the framing in this manner
like data and address which we can send in a particular frame. And whenever there is a
frame coming from multiple sources we shall also require address information and length
of data if we use variable length data coming from different sources for each of the
sources. So for each of the sources we require address, length and data, that was not so in
Synchronous Time Division Multiplexing. There the number of bits to be taken was fixed
and the slot allocation was fixed but here it is not so. Hence this additional information
that is needed to be sent through the transmission medium has to be minimized and to do
that sometimes we make use of relative addressing so that the number of bits required to
specify the address is smaller wherever we use relative addressing.

Sometimes we can use fixed length or length can also be specified in some special way so
that the length field is smaller so that the efficiency of Asynchronous Time Division
Multiplexing is more.

(Refer Slide Time: 45:13)

In Asynchronous Time Division Multiplexing the data rate at the output is less than the sum of the data rates at the inputs; this is acceptable because the inputs are not always sending data. However, in peak periods the inputs may exceed the output capacity: since the output bandwidth is smaller than the sum total of the input bandwidths, in peak periods there will be some kind of overflow. How can we overcome this? We can use buffers of suitable size to store the data temporarily so that it can be sent in later time slots.

Some experimentation has been done by which one can decide what the buffer size should be for achieving efficiency. Let us assume n is the number of inputs, r is the data rate of each source, M is the effective capacity of the output and alpha is the mean fraction of time each input is transmitting. That means the sources are not transmitting all the time, which is the important property we are exploiting, and obviously alpha is less than 1, so it lies between 0 and 1. Then a measure of the compression, that is the bandwidth of the transmission medium compared to the maximum bandwidth that could be required, is C = M/(n x r), where M is the effective capacity of the output and n x r is the maximum aggregate bandwidth, because you have got n sources and r is the data rate of each source. This factor is again less than 1 and it lies in the range alpha to 1. The values of alpha and C depend on the statistical behavior of the inputs; that is the reason why it is called Statistical Time Division Multiplexing.
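The trade-off between the output capacity and the buffer size can be explored with a very small simulation; all the numbers below are arbitrary assumptions.

    import random

    random.seed(1)
    n, r, M = 10, 1000, 5000      # 10 sources of 1000 bps each; output capacity 5000 bps
    alpha = 0.4                   # each source is active 40% of the time (assumed)

    backlog, peak = 0, 0
    for _ in range(1000):         # simulate 1000 seconds of operation
        offered = sum(r for _ in range(n) if random.random() < alpha)
        backlog = max(0, backlog + offered - M)   # bits waiting in the buffer
        peak = max(peak, backlog)

    print("largest backlog observed:", peak, "bits")   # indicates the buffer size needed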

We have discussed the Frequency Division Multiplexing and we have discussed Time
Division Multiplexing.

(Refer Slide Time: 47:57)

Here we have another very important technique, which is known as inverse multiplexing. This is the opposite of the multiplexing technique: in the previous case individual inputs of lesser bandwidth were combined to form a composite signal of higher bandwidth, whereas here we receive a signal of higher bandwidth, divide it over a number of channels of smaller bandwidth and perform the opposite operation at the other end. In what situation can it be used? Let us see an application.

Suppose you have to send voice, which will typically require 64 Kbps, data, which will require say 128 Kbps, and video, which will require say 1.544 Mbps. The user can hire a medium of transmission capacity 1.544 Mbps; then, whenever it wants to send the video, it will make full use of the transmission bandwidth. However, whenever it is sending only data that bandwidth is not fully utilized, and whenever it is sending only voice the transmission bandwidth is also not fully utilized.

Let us consider the other alternative: here you have got a number of channels of smaller bandwidth, each of only 64 Kbps, and a large number of these 64 Kbps channels are available on demand, so here we are making use of bandwidth on demand.
(Refer Slide Time: 50:31)

So whenever we are sending voice only 64 Kbps of bandwidth is demanded and one channel is assigned. Whenever we are sending data, two channels are made available to provide 128 Kbps, and obviously we are doing some kind of demultiplexing, dividing the data between two separate channels, sending them and combining them again at the other end. Whenever we have to send video we demand a larger number of channels; the video data is divided and sent through those channels and at the other end they are combined to get back the data.

So here there is a more cost effective use of the bandwidth: the service provider is providing you bandwidth on demand, and as a result the bandwidth is utilized only as and when it is required. That is why this technique is known as inverse multiplexing. Nowadays these kinds of facilities are available.

We have discussed various multiplexing techniques. Now it is time to give you the
review questions.
(Refer Slide Time: 52:38)

1) In what situation multiplexing is used?


2) Distinguish between the two basic multiplexing techniques?
3) Why guard bands are used in Frequency Division Multiplexing?
4) Why synchronization pulse is required in Time Division Multiplexing?
5) What limitation of Time Division Multiplexing is overcome in Asynchronous
Time Division Multiplexing and how?
6) Design a Time Division Multiplexing system having an output bandwidth of 128 Kbps to send data from four analog sources of 2 KHz bandwidth and 8 digital signals of 7200 bps.

Here we have to do pulse stuffing (bit stuffing) so that synchronization is possible. The answers to these questions will be given in the next lecture. Here are the answers to the questions of lecture 10.
(Refer Slide Time: 53:44)

1) Which modulation technique is used in optical communication?

As we know, On/Off Keying is used in optical communication. It is a special case of amplitude modulation, or more precisely of the Amplitude Shift Keying (ASK) technique, and this On/Off Keying is what is used in optical communication.

2) What are the three modulation techniques possible in modems?

We shall discuss modems in detail later on. The three modulation techniques are Amplitude Shift Keying, Frequency Shift Keying and Phase Shift Keying. We have discussed them in detail in lecture 10.
(Refer Slide Time: 54:25)

3) Why Phase Shift Keying is preferred as the modulation technique in modems?

In the PSK scheme it is possible for each signal element to represent more than one bit, and this approach is known as Quadrature PSK. That means here the baud rate is less than the data rate; as a consequence we can send more data through a transmission medium of smaller bandwidth, and that is why PSK is preferred.

(Refer Slide Time: 54:31)


4) Out of the three digital to analog modulation techniques which one provides
higher data rate?

For a given transmission bandwidth higher data rate can be achieved in case of PSK. In
other words in PSK higher channel capacity is achieved although the signaling rate is
lower.

That is the end of all the questions. We have discussed various multiplexing techniques, and as I told you they have widespread applications in different areas. In the next two lectures we shall discuss applications of multiplexing such as the telephone system. Thank you.
Data Communications
Prof. A. Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture-12
Multiplexer Applications-1

Hello and welcome to today’s lecture on multiplexer applications. In the last lecture we
had discussed the various techniques of multiplexing, a set of techniques that is used for
multiplexing like Frequency Division Multiplexing, Time Division Multiplexing and
Wavelength Division Multiplexing which is a variation of Frequency Division
Multiplexing. And also we have discussed two types of Time Division Multiplexing that
is your Synchronous Time Division Multiplexing and Asynchronous Time Division
Multiplexing. These multiplexing techniques have numerous applications.

In this lecture we have chosen two applications which are very common and widely used in our day to day life. Let us see the outline of today's lecture. First we shall consider the telephone system, which we use in our daily life; in the telephone system you will see there are two types of services, the first being analog services, which have been in use for a long time.

(Refer Slide Time: 2:10)

Now, with the advancement of technology, digital services are also available, which I shall discuss in detail. Then another very important technology, which is becoming very useful for broadband data transmission, is the DSL or Digital Subscriber Line technology. This DSL technology has four variations, namely ADSL, SDSL, HDSL and VDSL. We shall discuss them in detail.
On completion the student will be able to explain the operation of the telephone system, that is how a telephone system works, and to explain the different types of services provided by the telephone system; being aware of the different services offered, he or she can choose the service required for a particular application. Then we shall discuss the DSL technology, and the student will be able to explain how the local loop is used to provide broadband service by using DSL technology and to explain the different types of DSL technologies.

(Refer Slide Time: 3:31)

Let us have a look at the telephone system.

(Refer Slide Time: 4:00)


As I mentioned, one of the many applications of multiplexing is the telephone network, which makes use of both FDM and TDM; a combination of Frequency Division Multiplexing and Time Division Multiplexing is used in our telephone system, so it is a very interesting application. This is the telephone network as you can see. To the telephone network a number of telephone sets, which we call handsets, are connected. These handsets represent telephones in houses, offices, universities and all other places. The telephones are now distributed everywhere and the telephone network is a global network, no longer a small one: from anywhere you can call any other place by dialing a number.

Let us have a closer look at the telephone system or telephone network.

(Refer Slide Time: 5:13)

Telephone network as you can see has got these distinct components. First is this local
loop. This local loop is connected to the user and through this local loop the user is
connected, here you have got your handset and your handset is connected to the local
loop. This local loop is essentially an analog line which has bandwidth of only 4000 Hz
and that is adequate for voice communication. So it is simply twisted-pair of wire which
is led from the end office which is essentially the telephone exchange and with the help
of twisted-pairs of wire it is connected to our home or office and it provides you an
analog transmission capability with bandwidth of 4000 Hz. Then as you can see these end
offices are connected (Refer Slide Time: 6:03) using trunk lines to tandem offices and
these tandem offices are in turn connected through trunk lines to regional offices. This is
some kind of a generic representation of the telephone network.

Here I have shown only few trunk lines, it can be through a cascade of trunk lines. And
obviously these trunk lines are implemented with the help of high speed lines. For
example these trunk lines can be implemented by using coaxial cable, it can be
implemented by using microwave link, it can be implemented by using optical fiber and
it can also be implemented by satellite network. So you may be asking why these are
implemented not by using twisted-pair of wires?

The reason is that the trunk lines are of much higher bandwidth. The voice lines have only 4 KHz bandwidth, but a number of such channels are combined with the help of multiplexing techniques to achieve much higher data rates, and these are carried over trunk lines. That's why you will require coaxial cable if the distance is not large, or you have to use microwave link, optical fiber or satellite network. So at different links it can
use either microwave link or optical fiber or satellite link so it is not uniform. This
particular link can be microwave link, this can be a satellite link (Refer Slide Time: 8:15),
this can be an optical fiber link and so on.

So in the path this user one is connected to user two and it can go through a number of
paths through the trunk lines and that can be combination of various types of transmission
medium as we have already discussed. So this gives an overview of the telephone
network.

Now let us see the types of services the telephone network provides us. The services can broadly be divided into two types: analog services and digital services.

(Refer Slide Time: 8:58)

Analog services have been available from the telephone companies for a long time. That means we have been using these analog services for quite some time, and with the advancement of technology digital services have been introduced in recent times, so nowadays we are able to use digital services as well. I shall discuss both analog services and digital services one after the other.

First let us focus on the analog services.


(Refer Slide Time: 9:27)

The analog services are of two different types. One is known as the analog switched service and the other is the analog leased service. The analog switched service is the most commonly used service, the one we use at home. We get a telephone connection from the telephone exchange, that is your local office, using the local loop, which is nothing but a twisted-pair of wire, and we get the telephone service at home; that is the analog switched service and it is the most commonly used service. The analog leased service is one that provides a dedicated line between two users. Let us see the difference between the two.

(Refer Slide Time: 11:07)


The first one is the analog switched service. In this particular case, as I mentioned, the subscriber's handset is connected to the telephone network by twisted-pair cable, known as the local loop, through an exchange. So through that exchange our handset is connected to the local loop, and the signal on the local loop is analog in nature having a bandwidth of 0 to 4000 Hz. Now there is a switch in the exchange that connects the subscriber to the subscriber of the dialed number for the duration of the call.

As you can see here only for the duration of call we are getting this service and for other
times it is not connected and the network that’s why is referred to as Public Switched
Telephone Network. That will be clear from this diagram.

(Refer Slide Time: 11:44)

Here you see we have got a handset at home. There is another handset in the neighboring
home that is connected through a telephone network. In the simplest case it can be the
local exchange and these two telephones can be connected to the local exchange using
two local loops, these are twisted-pairs. Now, only when the number is dialed from here
this user one at home dials a number and this switch (Refer Slide Time: 12:25)
establishes a connection from here to here.

So a connection is established from here to there, the handset at the other end rings, and only when it is picked up is a link established; the link exists only for the duration of the call. When the call is terminated, that is after the communication is over, the handsets are put down at both ends and the link is disconnected. That's why I mentioned that this link exists only for the duration of the call and it is not connected otherwise.
That’s why the network is referred as Public Switched Telephone Network. So it is doing
some kind of switching and only for the duration of a call it is establishing a link between
two users through the telephone network.
Now it can be through a single exchange or through the trunk lines involving more offices, as I have mentioned earlier. Now let us consider the other type of service, the analog leased service. There are many applications in which the user wants to send data continuously or wants to communicate frequently; in such cases analog leased services are available.

(Refer Slide Time: 14:47)

In this case the important point is that although the connection is through an exchange and there is a switch here, the switch has established a permanent path between these two handsets, as you can see. So user one need not dial a number to establish a connection to user two; a link is already established, so whenever he wants he can simply talk or send data through this leased line. That means whenever the usage is heavy, with people talking all the time or sending data all the time, this kind of leased line can be established. It is dedicated; that means the whole bandwidth is available all the time and it is not a switched one. This is used in many situations where the usage rate is very high.

Now as I mentioned the various links can be of higher bandwidth and in such a case for
better utilization of the infrastructure analog signals are multiplexed to provide lines of
higher bandwidth. So you have to combine several signals or several low bandwidth
channels to form high bandwidth channels which can be sent through transmission
medium of higher bandwidth, and Frequency Division Multiplexing is used to combine many lines into fewer lines in a hierarchical manner. There is a variety of hierarchies available nowadays. The hierarchical system used by the AT&T group of companies is shown here. As we shall see, the hierarchy is divided into groups, super groups, master groups and jumbo groups as shown in this diagram.
(Refer Slide Time: 16:09)

(Refer Slide Time: 16:15)

Here as you can see twelve voice channels are combined by using Frequency Division Multiplexing technology to form a group, and this group can be sent through a single telephone line; it need not be twisted-pair, it can be some other transmission medium. Here (Refer Slide Time: 16:35) as you can see the bandwidth is 48 KHz. Again, five such groups can be combined to form a super group, where each super group will have a bandwidth of 240 KHz, and as you can see using this bandwidth you can send 60 voice channels.

In the previous case we were having 12 voice channels, and then five such groups were combined to form a super group; through this line you can transmit 240 KHz of bandwidth, so it allows you to send 60 voice channels. That means if two exchanges are connected and it is necessary to establish links for 60 voice channels, then a single transmission medium can be set up between the two exchanges through which 60 voice channels can be sent. Then the super groups can be combined to form master groups, and as you can see each master group line can carry 600 voice channels having a bandwidth of 2.52 MHz; again, six master groups can be combined to form a jumbo group, and this jumbo group has a bandwidth of 16.984 MHz which can carry as many as 3600 voice channels. So as you can see, while a twisted-pair voice line carries only 4 KHz, a single line of higher bandwidth can be used to carry 48 KHz, 240 KHz, 2.52 MHz or 16.984 MHz, providing you voice channels of different numbers starting from 12 up to 3600, so it will lead to better utilization of the lines of higher bandwidth.
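To make the arithmetic of this hierarchy concrete, here is a small illustrative Python sketch (the channel counts and bandwidths are the figures quoted above; the variable names and the interpretation of the difference as guard bands and overhead are mine, not part of any standard text):

VOICE_BW_HZ = 4_000  # one analog voice channel

hierarchy = [
    # (name, number of voice channels, allocated bandwidth in Hz)
    ("group",        12,        48_000),
    ("super group",  60,       240_000),
    ("master group", 600,    2_520_000),
    ("jumbo group",  3600,  16_984_000),
]

for name, channels, bw in hierarchy:
    raw = channels * VOICE_BW_HZ  # bandwidth without guard bands and pilots
    print(f"{name:12s}: {channels:4d} voice channels, {bw/1e6:6.3f} MHz allocated, "
          f"{(bw - raw)/1e3:.0f} kHz of guard/overhead")

Running it simply reproduces the 48 KHz, 240 KHz, 2.52 MHz and 16.984 MHz figures and shows that the larger blocks occupy some extra bandwidth beyond 4 KHz times the channel count.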

Now let us look at the digital services.

(Refer Slide Time: 18:54)

As I mentioned, because of the advancement of technology the digital services are becoming increasingly popular, because of their higher immunity to noise and other interferences. If you send data in terms of zeroes and ones it is very unlikely that it will get corrupted beyond recovery; analog signals, on the other hand, get corrupted by noise. If you are sending digital signals, then at the repeaters and at the other end the noise can be separated out and you can get back the original zeroes and ones. Another important aspect is that digital transmission provides lower cost compared to analog transmission because digital processing has become cheaper. There are three categories
of digital services. First one is Switched/56, second one is DDS and third one is DS. I
shall explain each of them one after the other.
(Refer Slide Time: 20:00)

The Switched/56 service is nothing but a digital version of the analog switched line. I have already explained the analog switched line; this is essentially its digital version and it allows a data rate of up to 56 Kbps. Of course, since it is digital in nature there is no need to have a modem in this Switched/56 service.

However, there will be a need for another device known as a Digital Service Unit or DSU, and this DSU provides better speed, less susceptibility to noise and better quality. This particular service provides bandwidth on demand. You may recall that when I discussed inverse multiplexing, a high speed data stream can be divided and sent over a number of lower speed lines to provide service based on bandwidth on demand.

For example, whenever we are sending a voice signal a bandwidth of 56 Kbps may be sufficient. Whenever we are sending data the bandwidth requirement can be double of that; in such a case two lines can be demanded. Whenever we are sending video, say at 1.544 Mbps, then 24 or more of these Switched/56 lines can be provided. So this Switched/56 service allows you to have bandwidth on demand by using inverse multiplexing.
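As a rough sanity check of this bandwidth-on-demand idea, the short Python sketch below (purely illustrative; the traffic figures and the helper name are my own choices, not part of the service definition) computes how many 56 Kbps switched lines an inverse multiplexer would have to aggregate for each kind of traffic; note that a 1.544 Mbps stream needs about 28 such lines, which is consistent with the "24 or more" mentioned above.

import math

LINE_RATE_BPS = 56_000   # one Switched/56 line

def lines_needed(demand_bps):
    # number of 56 Kbps lines an inverse multiplexer must aggregate
    return math.ceil(demand_bps / LINE_RATE_BPS)

for label, demand in [("voice", 56_000),
                      ("data", 112_000),        # roughly double the voice rate
                      ("video", 1_544_000)]:    # a T-1 class video stream
    print(f"{label:5s}: {demand/1000:7.0f} Kbps -> {lines_needed(demand)} line(s)")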
(Refer Slide Time: 21:49)

Here the Switched/56 service is shown. It is the same except that instead of modem you
have got the DSU Digital Service Unit at both ends and it is connected through a switch
at the telephone exchange. This is also a dialed connection with a dial-up line so
whenever the dialing is done a link is established between these two DSUs and the
communication is possible between two users connected through the telephone exchange.

Then we have the Digital Data Service or DDS.

(Refer Slide Time: 22:38)


This is again the digital version of analog leased line. We have already discussed analog
leased line so here also we can have leased connection or dedicated connection and it
allows you data rate up to 56 Kbps. However, there is a choice since it is dedicated you
may not be always using 56 Kbps. So in such a case you can choose data rates of 2.4
Kbps or 4.8 Kbps or 9.6 Kbps or 19.2 Kbps or 56 Kbps. That means whenever you are
using this DDS service depending on your requirement you can specify the bandwidth
you want. It can vary from 2.4 Kbps to 56 Kbps. So since it is available all the time you
can make use of the bandwidth by scheduling data transfers at any time.

However, as your need grows you can keep on increasing from 2.4 Kbps to 56 Kbps. Of
course here also there is a need for the Digital Service Unit. You don’t require a modem
but you will require a Digital Service Unit. However, in this case the DSU that is used is cheaper and simpler because you don't require a keypad. In the previous Switched/56 case there was a need for a keypad where you press the keys so that a number can be dialed. But here it is a leased service, so already a permanent
link is there and so there is no need for dialing a number so the DSU is simpler and
cheaper because it does not require a keypad.

Here is the schematic diagram for Digital Data Service DDS.

(Refer Slide Time: 25:05)

As you see here you have got two DSUs permanently connected through the telephone
exchange. Of course there is a switch here that switch I have not shown so that switch is
establishing connection from this to this. This is a permanent one so the data rate can
vary from 2.4 to 56 Kbps through this line.
So this is the Digital Data Service.

Finally you have got the Digital Signal Service and this Digital Signal Service DS service
provides you a hierarchy of digital services.
(Refer Slide Time: 25:30)

Here the data rate requirement can be as small as 64 Kbps and as your need grows it can go up to 274.176 Mbps, so a hierarchy of services is available. These services are known as, for example, DS-0. The DS-0 service is similar to DDS; it is a single digital channel, however the data rate allowed here is 64 Kbps instead of 56 Kbps. DS-1 provides a 1.544 Mbps service, as I shall explain; then the DS-2 service provides 6.312 Mbps, and DS-3 allows you a 44.736 Mbps service. So through a single transmission medium you can have this data rate of 44.736 Mbps, or you can have a 274.176 Mbps service if you are having a DS-4 service. These services are implemented with the help of T lines, so the implementation is done by T lines.

I shall show you how it is being done.


(Refer Slide Time: 28:08)

This is the DS-0 service, each of 64 Kbps, and then 24 such channels are combined by using Time Division Multiplexing to get the DS-1 service; here the data rate is 1.544 Mbps, where the 24 DS-0 channels are accommodated. By combining 4 such lines you can have 6.312 Mbps, so a DS-2 can carry 4 DS-1 or 96 DS-0; you have a choice through each of these services. Then a DS-3 service provides you 44.736 Mbps, where you can have 7 DS-2 or 28 DS-1 or 672 DS-0 channels, and finally you can have the DS-4 service which will provide you 274.176 Mbps, where you can have 6 DS-3 or 42 DS-2 or the equivalent number of DS-0 services.
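The whole DS hierarchy can be summarised numerically with a few lines of Python (an illustrative table only; the rates and channel counts are the ones discussed above, the DS-4 count of 4032 DS-0s follows from 6 times 672, and the small excess of each rate over 64 Kbps times the channel count is framing and synchronisation overhead):

ds_levels = [
    # (name, line rate in Mbps, equivalent DS-0 channels)
    ("DS-0",   0.064,    1),
    ("DS-1",   1.544,   24),
    ("DS-2",   6.312,   96),
    ("DS-3",  44.736,  672),
    ("DS-4", 274.176, 4032),
]

for name, rate, ds0 in ds_levels:
    payload  = ds0 * 0.064        # pure voice payload in Mbps
    overhead = rate - payload     # framing / synchronisation bits
    print(f"{name}: {rate:8.3f} Mbps, {ds0:4d} DS-0 channels, overhead {overhead:.3f} Mbps")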

So as you can see, you can have services of increasingly higher bandwidth, and these are supported with the help of those T lines. For example, I can show you how a T-1 line can be used. T-0 is 64 Kbps; how a T-1 line can be used for analog transmission, using PCM for conversion to a digital signal, and how you get 1.544 Mbps, is shown here.
(Refer Slide Time: 30:35)

Here as you can see 24 voice channels are combined to form one T-1 line and as you can
see each voice channel of 4 KHz is converted to 64 Kbps using Pulse Code Modulation
technology. That means since it is of 4 KHz bandwidth you have to sample at the rate of 8 KHz; then by using eight-bit quantization, that means analog to digital conversion using 8-bit AD converters, you get 8 KHz into 8 bits, that is 64 Kbps for each channel. Then the 64 Kbps channels are combined by using Time Division Multiplexing.
Here as you can see you are not using Frequency Division Multiplexing because here this
is digital transmission and it is no longer analog. In the earlier case for analog services it
was Frequency Division Multiplexing but now as you can see it is Time Division
Multiplexing.

So the Time Division Multiplexing is combining those 24 voice channels; as you can see you have 24 such 64 Kbps voice channels plus an 8 Kbps overhead channel, and this overhead is essentially for the purpose of synchronization. The synchronization bits are
present as it is shown in this diagram.
(Refer Slide Time: 30:45)

Here as you can see this is a frame where from each channel eight bits are taken, eight bits from channel 1, eight bits from the second channel, and in this way there are twenty four channels; for each frame there is a synchronization bit of 1 bit, and the frame therefore comprises 24 into 8 plus 1 = 193 bits, and you have got a total of 8000 frames per second. So a T-1 line supports 8000 frames per second, each of 193 bits, giving you 1.544 Mbps; that's how you get the T-1 rate of 1.544 Mbps.
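The arithmetic behind the T-1 rate is easy to reproduce; the minimal Python sketch below simply restates the calculation just described (nothing here beyond the numbers given in the lecture):

SAMPLE_RATE_HZ  = 8_000   # a 4 kHz voice channel sampled at twice its bandwidth
BITS_PER_SAMPLE = 8       # 8-bit PCM quantization
CHANNELS        = 24      # voice channels per T-1 frame
SYNC_BITS       = 1       # framing bit added to every frame

channel_rate = SAMPLE_RATE_HZ * BITS_PER_SAMPLE        # 64,000 bps per voice channel
frame_bits   = CHANNELS * BITS_PER_SAMPLE + SYNC_BITS  # 193 bits per frame
line_rate    = frame_bits * SAMPLE_RATE_HZ             # 8000 frames per second

print(f"per-channel rate: {channel_rate} bps")
print(f"frame size      : {frame_bits} bits")
print(f"T-1 line rate   : {line_rate / 1e6:.3f} Mbps")  # 1.544 Mbps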

Now this has opened up a new option. Earlier suppose a particular house or a business
organization were in need of 24 telephone lines then from the telephone exchange it was
necessary to connect 24 different twisted-pair of wires so 24 pairs of wires were coming
from the telephone exchange to the business house.

Now it is no longer necessary. You simply take one T-1 line, and in the business house you have a small PCM exchange, so to say, from which you can get back the individual lines. So there is no need to take 24 pairs of wires; just one cable, one transmission medium, will do. Depending on the length it can be either a twisted-pair or some other transmission medium, and then the conversion from TDM is done by a small PCM exchange which gives you back 24 voice channels. So a small telephone exchange can be set up in your home, in a residential complex or in a business house to give you 24 separate voice channels or 24 different telephone sets.

Now there is another possibility. For example, can these T-1 lines be shared? A small business organization may not require the full T-1 service or the full T-1 line bandwidth, that is 1.544 Mbps may not be required, so can it be shared? This is possible with the help of a DSU/CSU unit, where CSU stands for Channel Service Unit.
(Refer Slide Time: 33:40)

What can be done here is, for example, say four subscribers want to share a T-1. Say in a building there are four business houses, each of them having a requirement of one fourth of the bandwidth of a T-1 line; then these four subscribers can share a single T-1 line with the help of a DSU/CSU unit. Each of these business houses is connected to the DSU/CSU, say here with one fourth of the T-1, one fourth of the T-1, one fourth of the T-1 and another one fourth, and they are combined and this goes to your telephone company, and you get some kind of shared T-1 service. So this kind of flexibility is offered by these T-1 lines.

Now let us switch to another important technology that is your Digital Subscriber Line,
DSL technology.
(Refer Slide Time: 35:59)

This DSL provides you a much higher bandwidth, that is broadband, and that is done by using the local loop. Earlier we were getting only 4 KHz of bandwidth through the local loop. Although the twisted-pair of wire has the capability of carrying a much higher bandwidth, about 1.1 MHz, we were restricting it because of our requirement: we were filtering with low pass filters so that only the 4 KHz signals go through. Now that filter can be removed, so the inherent bandwidth of 1.1 MHz of the wires used in the existing local loop can be exploited, and has been exploited, in DSL technology.

Of course for that purpose it has to use suitable modulation techniques as well as
multiplexing techniques. So here you will see that it has used the combination of
modulation and multiplexing techniques to achieve this high bandwidth using local loops.
And DSL again has got several versions, namely ADSL, VDSL, HDSL and SDSL, so this family can be represented as xDSL, where x can be A, V, H or S, any one of them. Let us see
these four different versions one after the other.
(Refer Slide Time: 36:32)

ADSL stands for Asymmetric Digital Subscriber Line and this has been primarily designed for residential users. As we shall see, although it is able to provide much higher bandwidth, the data rate is selected in an adaptive manner based on the condition of the local loop, because the local loop wire is twisted-pair: this twisted-pair of wire can run for a kilometer or 2 Km, so the length can be different, the quality of the cable can be different, and the cable can pass through different areas where the noise levels can be different; based on that the data rate is selected in an adaptive manner. It uses a novel modulation technique known as the Discrete Multitone Technique, DMT, which I shall explain in detail. This uses a combination of QAM, Quadrature Amplitude Modulation, and Frequency Division Multiplexing. And we shall see the available bandwidth is 1.104 MHz, which is divided into 256 channels each having a bandwidth of 4.3125 KHz. So the entire bandwidth is divided into 256 channels, out of which channel 0 is dedicated to voice, channels 1 to 5 are not used and are left idle, then there are 25 upstream channels (Refer Slide Time: 38:15) from 6 to 30, one of which is a control channel, and the downstream channels are 31 to 255. So you have got a larger number of downstream channels compared to upstream channels.
(Refer Slide Time: 38:20)

You may be asking why? Suppose you are having internet service through a DSL line. Then from the home to the internet service provider the data rate needed is much lower, because most of the time you are downloading something, which needs much higher bandwidth; so you require a much higher downstream bandwidth, and that's why the number of channels allocated for downstream is much larger. As you can see it is 31 to 255. Here the Discrete Multitone Technique is explained. As you can see it uses the voice channel, this is your channel 0; then for channels 6 to 30 there are some serial to parallel converters, so the upstream bits coming in are divided over 24 channels, each of these channels is encoded by using 15-bit QAM, and each of them is connected to an FDM, a Frequency Division Multiplexer. Similarly for the downstream signals there are channels 31 to 255, and the digital data coming on these channels is again encoded using QAM, that is Quadrature Amplitude Modulation.
(Refer Slide Time: 40:10)

Then you are doing parallel to serial conversion and here you get the downstream bits. At the end of this Frequency Division Multiplexer you have got 256 channels which are combined, each having a bandwidth of 4.3125 KHz, giving you a total bandwidth of 1.104 MHz. These signals can be sent through the twisted-pair of wire from the home to the telephone exchange using the local loop.

(Refer Slide Time: 42:21)

Here as you can see channel 0 is reserved for voice in this Discrete Multitone Technique, and channels 1 to 5 are kept idle as I explained earlier. Then upstream data and control use the 25 channels from 6 to 30; one channel is used for control, which means 24 are available for data transmission, and this is how you are getting 1.44 Mbps, that is 24 into 4000 into 15, since by using QAM we are sending 15 bits per signal element. So it gives you 1.44 Mbps for upstream. Upstream means from the home user to the internet service provider or the telephone exchange. Downstream data and control are given the 225 channels from 31 to 255; one of them is used for control, so actually 224 channels (Refer Slide Time: 42:02) carry data, and 224 into 4000 into 15 gives you 13.44 Mbps. So these are the maximum possible data rates: for upstream 1.44 Mbps and for downstream 13.44 Mbps.
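These maximum rates are straightforward to reproduce; the short Python sketch below is an illustrative calculation only, using the channel counts and the 4000-baud, 15-bits-per-baud QAM figures quoted above:

BAUD_RATE     = 4_000   # signal elements per second in each DMT channel
BITS_PER_BAUD = 15      # bits carried by the QAM constellation assumed in the lecture

def dmt_rate(data_channels):
    # aggregate bit rate of a set of DMT data channels
    return data_channels * BAUD_RATE * BITS_PER_BAUD

upstream   = dmt_rate(24)    # channels 6-30, minus one control channel
downstream = dmt_rate(224)   # channels 31-255, minus one control channel

print(f"upstream  : {upstream / 1e6:.2f} Mbps")    # 1.44 Mbps
print(f"downstream: {downstream / 1e6:.2f} Mbps")  # 13.44 Mbps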

But in practice, because of the line condition, the data rate is dynamically varied. In practice you will get only 64 Kbps to 1 Mbps for upstream and 500 Kbps to 8 Mbps for downstream. It is far lower than the 13.44 Mbps because usually the local loop quality is not very high. So unless the loop is very short and the quality of the cable is very high you will not get very high bandwidth. But this itself is quite high compared to what the 4 KHz voice channel offers.

The equipment used in ADSL is shown here.

(Refer Slide Time: 43:43)

This is the customer residence, that is the home. The local loop comes in here, and with the help of a filter (Refer Slide Time: 43:20) you separate out the voice channel; the rest goes to the ADSL modem. The ADSL modem recovers the data; here, as you know, Frequency Division Multiplexing is used, and the modem performs the demodulation, giving you data of a much higher rate which goes to your computer.

Similarly, from the customer premises the local loop goes to the telephone exchange; here there is a filter, the voice goes to the telephone network, and the data goes to a DSLAM, that is a Digital Subscriber Line Access Multiplexer. This access multiplexer not only does the necessary multiplexing and other things but also does the framing needed for internet communication, and this goes to the internet service provider. So using this equipment you can have ADSL service.

(Refer Slide Time: 45:45)

There are other DSL technologies, like the Symmetric Digital Subscriber Line, SDSL. This divides the available bandwidth equally, because in this case the service is not meant only for home users, so there is no need to have unequal bandwidths for upstream and downstream; here the bandwidth is equally divided. Then you can have the High data rate Digital Subscriber Line, HDSL, which is an alternative to the T-1 line. As we know, the T-1 line uses AMI, Alternate Mark Inversion, coding, and it is very susceptible to attenuation and noise at high frequencies. As a result you don't get more than 1 Km without using a repeater.

On the other hand, HDSL uses 2B1Q encoding, which is less susceptible to attenuation. It allows you 2 Mbps over a distance of 3.6 Km, compared to 1 Km, without any repeater, and it uses two twisted-pairs of wire for full duplex communication. Finally we have the Very high bit rate Digital Subscriber Line, VDSL.
(Refer Slide Time: 45:58)

It is similar to ADSL but it can use coaxial, fiber optic or twisted-pair cable for short distances, and using the DMT modulation technique it allows 1.5 to 2.5 Mbps upstream and 50 to 55 Mbps downstream. So these are the different variations of DSL technology that I have discussed. Here are the Review Questions:

(Refer Slide Time: 47:12)

1) Distinguish between analog switched service and analog leased service

2) If a single mode optical fiber can transmit at 10 Gbps, how many telephone
channels can one cable carry?
3) How does DSL provide broadband service over the local loop?
4) Why are the actual data rates available through a DSL line substantially lower than
the maximum possible rates?

These are the four questions which will be answered in the next lecture. Here are the answers to the questions of lecture-11.

(Refer Slide Time: 48:02)

1) In what situations is multiplexing used?

Multiplexing is used in situations where the transmission medium has a higher bandwidth than the individual signals; that means the bandwidth required by each channel is smaller. Hence there is a possibility of sending a number of signals simultaneously, and in such situations multiplexing can be used. Multiplexing can be used to achieve the following goals.

(1) To send a large number of signals simultaneously


(2) To reduce the cost of transmission
(3) To make effective use of the available bandwidth.

These are the goals of multiplexing as I explained in the last lecture.


(Refer Slide Time: 48:26)

(2) Distinguish between the two multiplexing techniques?

The two multiplexing techniques are Frequency Division Multiplexing and Time
Division Multiplexing. FDM can be used to transmit a number of analog signals
simultaneously in different frequency bands. TDM, also known as Synchronous Time
Division Multiplexing is used with digital signals by sending signals from different
sources in different time slots of a frame.

(Refer Slide Time: 49:15)

3) Why guard bands are used in FDM?


In Frequency Division Multiplexing a number of signals are sent simultaneously on the
same medium by allocating separate frequency band or channel to each signal. Guard
bands are used to avoid interference between two successive channels. If you don’t
provide guard bands then there is a possibility that signals of two adjacent channels will
overlap leading to cross talk.

4) Why sync pulse is required in TDM?

In TDM, the time slots in each frame are pre-assigned and are fixed for each input source. In order to identify the beginning of each frame a sync pulse is added at the beginning of every frame. So essentially the beginning of each frame has to be identified, and that is done with the help of the synchronization pulse provided at the beginning of the frame.

(Refer Slide Time: 50:04)

5) What limitation of TDM is overcome in ATM and how?

In Time Division Multiplexing each frame consists of a set of time slots and each source
is assigned one slot per frame. In a particular frame if a source is not having data then
that time slot goes empty or it goes wasted. As a result many of the time slots are wasted
as we have seen in detail. This problem is overcome in ATM, which stands for Asynchronous Time Division Multiplexing, also called statistical Time Division Multiplexing.
And in ATM the time slots are not pre-assigned to a particular data source rather slots are
dynamically allocated to sources on demand. So it is dynamically allocated on demand
depending on the availability of data from different sources. That’s how it makes better
use of the bandwidth of the transmission medium.
(Refer Slide Time: 51:04)

6) Design a Time Division Multiplexed system having an output data rate of 128 Kbps to send data from four analog sources of 2 KHz bandwidth and eight digital signals of 7200 bps.

Here it is shown that the 2 KHz analog signals coming from the four channels are converted into digital form by using Pulse Code Modulation, giving 64 Kbps in all: since each source is of 2 KHz bandwidth you have to sample at 4 KHz, and each PCM stage uses a 4-bit AD converter, so you get 16 Kbps for each channel; then you have got buffers of two bits here.

Similarly, as you can see, these 7200 bps sources are brought up to a rate of 8 Kbps, so you have to do pulse stuffing. As I explained in the last lecture, you have to add additional bits by pulse stuffing, and that's how you convert each 7200 bps signal into an 8 Kbps signal. These 8 Kbps channels are each provided with a 1-bit buffer, and here you have got the multiplexers which are generating two 64 Kbps signals; here you have got an 8-bit buffer and here also you have got an 8-bit buffer, and alternately you take 8 bits from one and 8 bits from the other to get the 128 Kbps composite signal.
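A quick sanity check of this design in Python (an illustrative calculation only; the per-channel slot sizes follow the description above) confirms that the composite rate comes to exactly 128 Kbps:

FRAME_RATE = 8_000   # frames per second, matching the 8 kHz clock

analog_channels  = 4   # 2 kHz sources -> 4 kHz sampling x 4-bit PCM = 16 Kbps each
digital_channels = 8   # 7200 bps sources, pulse-stuffed up to 8 Kbps each

bits_per_frame = analog_channels * 2 + digital_channels * 1   # 2 bits + 1 bit per source
composite_rate = bits_per_frame * FRAME_RATE

print(f"bits per frame : {bits_per_frame}")                   # 16 bits
print(f"composite rate : {composite_rate / 1000:.0f} Kbps")   # 128 Kbps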

Here the frame is shown: channel 1 2 bits, channel 2 2 bits, channel 3 2 bits and channel 4 2 bits, so here you get 8 bits from these sources, and another one bit from each of channels 5, 6, 7, 8, 9, 10, 11 and 12, which gives you the signals coming from channel 5 to channel 12, and this is the entire frame. Of course the synchronization bit is not shown here; just the multiplexing path is shown in this particular diagram, along with how you are getting 128 Kbps. So with this we come to the end of lecture-12. Here we have
discussed two very important applications of multiplexing. In the next lecture we shall discuss two more important applications of multiplexing, thank you.
Data Communications
Prof. A. Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture-13
Multiplexing Applications-2

Hello viewers welcome to today’s lecture on multiplexing applications. This is the


second lecture on multiplexing applications. In the last lecture we discussed the telephone network and DSL technology, which are very important applications of multiplexing. In today's lecture we shall consider two other important applications of multiplexing: one is the cable modem, where we shall see how the conventional cable TV network has been extended to provide internet service; and we shall also see how multiplexing is used in SONET. So here is the outline of today's lecture.

(Refer Slide Time: 1:49)

First we shall consider the standard cable TV system, the frequency bands used and the various devices used in the cable TV system. Then we shall discuss the extension of the cable TV system into a new network known as the Hybrid Fiber Coaxial network, or in short the HFC network. We shall discuss how the bandwidth is distributed and how the bandwidth is shared between upstream and downstream data so that you can have internet service. Then we shall consider SONET, the Synchronous Optical Network. Here we shall see the different types of devices used in the SONET network and the STS, Synchronous Transport Signals, the hierarchy of signal levels used in the SONET network. We shall also discuss the SONET frame format.
(Refer Slide Time: 3:06)

And on completion of this lecture students will be able to explain how distribution of TV
signals take place in the traditional cable TV system. They will be able to explain how
HFC Hybrid Fiber Coaxial cable network allows bidirectional data transfer using cable
modem. They will be able to state the data transmission scheme used in cable modem. So
far as the second application of multiplexing is concerned they will be able to explain the
operation of SONET network, they will be able to explain the function of different
SONET layers. As we shall see there are four different layers in SONET network and
they will be able to explain the SONET frame format.

(Refer Slide Time: 5:14)


So here is the traditional Community Antenna TV, commonly known as the CATV network. As you can see, the basic purpose is to distribute the broadcast video signals to residences. The TV signals are received through an antenna; here is an antenna, usually installed on the top of a roof (Refer Slide Time: 4:13), that receives the TV signal sent through satellite, and then a device known as the head end is used, and the head end is responsible for distributing the signal on a cable network. This cable distribution system, as you can see, can have a number of amplifiers, because the signal strength gets attenuated as the signal travels from the head end towards the residential complexes. So you can have a large number of amplifiers.

Then, with the help of a splitter, the coaxial cable is divided to be distributed in a big residential campus, and here as you can see a drop cable is used, which taps from the distribution cable, the coaxial cable, and takes the signal to the residences. This is the overview or the schematic view of the standard cable TV system.

And as we have mentioned, the cable TV system uses coaxial cable, compared to the DSL technology which uses the twisted-pair. So in cable TV we find the widest use of coaxial cable. As you know, coaxial cable provides much better immunity to interference and cross talk compared to twisted-pair, so it gives you a higher data bandwidth compared to the DSL network.

As you have seen, in the DSL network twisted-pair is used and the actual bandwidth is much less than the highest possible bandwidth. That means the bandwidth of the twisted-pair is not fully exploited because of noise, interference, cross talk and so on. So the actual bandwidth available is restricted, and as a consequence the available bandwidth is much less; the coaxial cable used in your cable TV system obviously provides a better alternative and gives you higher bandwidth. The cable TV system uses the frequency range from 54 to 550 MHz; this is the frequency range that is used in the cable TV system for distributing a large number of TV signals from different stations. There are two standards: one is NTSC, that is the National Television Standards Committee standard, and the other is the Phase Alternation by Line, PAL, standard, and they use 6 MHz and 8 MHz bands respectively.
(Refer Slide Time: 7:51)

And since a channel uses 6 MHz or 8 MHz of bandwidth, this 54 to 550 MHz range is divided into a number of channels, providing 50 to 70 channels depending on whether you are using the NTSC or the PAL standard. Depending on the standard used the number will be different. For example, if NTSC is used you will get about 70 channels, and if PAL is used you will be getting about 50 channels, because PAL requires a higher bandwidth per channel. These are the popular standards.
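Simple division of the band gives an upper bound on these channel counts; the illustrative Python sketch below uses the 54 to 550 MHz band and the 6 MHz and 8 MHz channel widths from the lecture (practical systems offer somewhat fewer usable channels, which is why 50 to 70 is quoted above):

BAND_LOW_MHZ, BAND_HIGH_MHZ = 54, 550
band_width = BAND_HIGH_MHZ - BAND_LOW_MHZ        # 496 MHz of downstream video spectrum

for standard, channel_mhz in [("NTSC", 6), ("PAL", 8)]:
    channels = band_width // channel_mhz         # upper bound, ignoring unused channels
    print(f"{standard}: at most {channels} channels of {channel_mhz} MHz each")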

As you have seen in the background, there are three important devices used in the cable TV system. The first one is the head end. The head end receives the video signals from the broadcasting stations with the help of an antenna installed at the top of a tall building.

In this diagram I have shown a tower; it need not be a separate tower, the antenna can be mounted on the roof of a building, and from there the electrical signal is amplified and goes to the head end, and from the head end it is distributed over the coaxial cable. So after receiving the video signal, the head end distributes it on the coaxial cable.
(Refer Slide Time: 9:26)

Then, as I have mentioned, you require amplifiers to boost the signal, because the coaxial cable has high attenuation; as the signal goes from the head end towards the residential buildings it gets attenuated. So to boost up the signal levels you have to use amplifiers, and you may have to use up to 35 amplifiers in cascade between the head end and the subscriber premises. So a large number of amplifiers may be required to provide service to a large number of buildings and a large number of users.

Then you have got splitters. Splitters are used to split the distribution of signals into branches. So with the help of splitters, from a single coaxial cable you can have two coaxial cables and so on; you can have branches with the help of splitters. Then from the distribution cable you can make some kind of tap, and from that tap the drop cable goes to the residential buildings, that is the subscriber premises. This is how the cable TV system works with the help of these three devices.

And as I mentioned because of large attenuation you will require large number of
amplifiers and this restricts the communication and it is unidirectional. That means only
the downstream signal goes from the head end to the residential premises so the signal
transmission in standard cable TV system is unidirectional so only the downstream signal
is present. Essentially the TV signals are distributed over the residential buildings with
the help of this cable TV system. To overcome this limitation the cable TV system has
been extended which is known as Hybrid Fiber Coaxial network.
(Refer Slide Time: 11:18)

Here as you can see a combination of optical fiber and coaxial cable has been used. There is a Regional Cable Head, RCH, where the signal is picked up with the help of a receiving antenna; again, with the help of a tower, the antenna is put on the top of a tall building, and from there the signal goes to the distribution hubs. From the distribution hubs to the fiber nodes it is essentially optical fiber. So you can see here in between there are switches, and from the fiber node onwards it is essentially coaxial cable. We find that ultimately, up to the user premises, it is coaxial cable, but in between, from the distribution hub to the fiber node, it is optical fiber.

And because of the use of the optical fiber you can have lesser attenuation and as a
consequence you will require a small number of repeaters or amplifiers. And as a
consequence you can use bidirectional communication. So now you can have
bidirectional communication and not only that because of very high bandwidth of optical
fiber it is possible to serve a large number of users. As you can see a single regional cable
head can serve as many as 400,000 users. So, from here the RCH can go to several
distribution hubs and altogether 400,000 users can be served and from each distribution
hub 40,000 users can be provided service.

On the other hand each coaxial cable coming out from the fiber node can be used to serve
about 1000 users. So you can see there is a hierarchy of the optical fiber cable
distribution and this is how it is done as shown in this diagram.
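From these figures one can get a rough feel for the fan-out of this hierarchy; the small sketch below is only an illustration that divides the user counts quoted above:

USERS_PER_RCH  = 400_000   # users served by one regional cable head
USERS_PER_HUB  = 40_000    # users served by one distribution hub
USERS_PER_NODE = 1_000     # users served by one coaxial run from a fiber node

hubs_per_rch  = USERS_PER_RCH // USERS_PER_HUB    # about 10 distribution hubs per RCH
nodes_per_hub = USERS_PER_HUB // USERS_PER_NODE   # about 40 coaxial runs per hub

print(f"distribution hubs per RCH : ~{hubs_per_rch}")
print(f"coaxial runs per hub      : ~{nodes_per_hub}")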
(Refer Slide Time: 14:34)

Here as you can see this is the optical fiber cable. There are two cables; one is upstream
fiber another is downstream fiber so you require a pair of optical fiber cable for
bidirectional communication so you have upstream and downstream fiber cable. Of
course when the length is long you will require some amplifier but the number of
amplifiers required will be much smaller. There we have seen the number of amplifiers
required is 35 but here it is restricted to 5 or 6 or 7 and not more than that.

Then from the fiber node the signal is distributed using coaxial cable, and whenever amplification is necessary a bidirectional split band amplifier is used. So you can see that with the help of several bidirectional split band amplifiers the signal is distributed to the residential premises. Each such coaxial cable can serve as many as 1000 users, as you have seen in the last slide.

This is the coaxial distribution plant ultimately going to the residential premises. Here are the taps taken from the coaxial cable (Refer Slide Time: 14:55) and these are the bidirectional split band amplifiers. This is the overall network for the HFC system.

Now let us look at the bandwidth distribution of HFC Hybrid Fiber Coaxial cable system.
In case of standard cable TV network you have got only downstream band that is from 54
MHz so from 54 to 550 MHz was the bandwidth used in cable TV system. So it was only
downstream which can support about 50 to 70 channels. But here we find we have got
three distinct frequency bands.
(Refer Slide Time: 16:04)

We have retained the video band, that is the downstream frequency band used for video distribution, and in addition to that there are two data bands, one for upstream data and another for downstream data. The upstream data band is from 5 to 42 MHz and the downstream data band is from 550 to 750 MHz. We observe that the upstream bandwidth is much smaller than the downstream bandwidth. You may be asking why the upstream bandwidth is smaller than the downstream bandwidth.

This band is now used for providing internet service to the residential users. In such cases the data going from the users to the internet is usually small; on the other hand, a large volume of data flows from the internet to the users. As a consequence you require higher bandwidth for downstream data compared to upstream data, and that's why the upstream bandwidth is smaller compared to the downstream bandwidth; we have seen a similar situation in DSL technology also.

Now let us see how the bandwidth distribution takes place and how it is utilized. First we consider the upstream data band, which occupies the lower band from 5 to 42 MHz and is divided into 6 MHz channels; since it is 5 to 42 MHz we will have only 6 channels, and each of these 6 channels will have 6 MHz bandwidth. As this band is more susceptible to noise, QPSK is used for modulation instead of QAM, because amplitude modulation, as you know, is prone to error whenever there is noise.
(Refer Slide Time: 18:04)

Thus, since it uses QPSK, you can have only 12 Mbps per channel. That is, the theoretical data rate that is possible is 12 Mbps, because whenever we use a 6 MHz channel the data rate will be 6 MHz multiplied by 2; that means per signal element you can send two bits of data, and that's how you get this 12 Mbps.

Although this is the theoretical maximum data rate, the actual data rate in the upstream direction will be much smaller, because you cannot really achieve this 12 Mbps. Then the video band, which is downstream only, uses 54 to 550 MHz and can accommodate up to 80 channels; since each channel requires six MHz you can have at most about eighty channels, so about 80 channels can be transmitted in the video band.

On the other hand, the downstream data band occupies the bandwidth from 550 to 750 MHz, which is divided into 6 MHz channels, so it will have about 33 channels. By using 64-QAM modulation in the downstream direction you can transmit six bits per signal element, and that gives you 6 into 6 = 36 Mbps. However, one of these bits is used for forward error correction, so effectively five bits per signal element carry data, and that's how the theoretical data rate comes to 30 Mbps. However, in practice the actual rate will be around 12 Mbps.

So you may be asking why the actual data rate is 12 Mbps?

The reason is that the cable modem is usually connected to the computer through a 10Base-T twisted-pair Ethernet interface, which supports only 10 Mbps; as a consequence the actual data rate is much smaller. So we have seen how the three bands are used and how they are modulated by using QPSK and QAM.
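The theoretical per-channel figures above are again simple products; the Python sketch below is illustrative only, using the 6 MHz channel width and the bits-per-symbol values stated in the lecture (with the usual assumption of one signal element per hertz):

CHANNEL_BW_HZ = 6_000_000     # every data channel is 6 MHz wide

def channel_rate(bits_per_symbol, fec_bits=0):
    # data rate assuming one signal element per hertz of channel bandwidth
    return CHANNEL_BW_HZ * (bits_per_symbol - fec_bits)

upstream   = channel_rate(2)              # QPSK: 2 bits per signal element
downstream = channel_rate(6, fec_bits=1)  # 64-QAM: 6 bits, 1 used for error correction

print(f"upstream   (QPSK)  : {upstream / 1e6:.0f} Mbps")    # 12 Mbps
print(f"downstream (64-QAM): {downstream / 1e6:.0f} Mbps")  # 30 Mbps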
(Refer Slide Time: 21:09)

And obviously the data being sent in either the upstream or the downstream direction is shared; the subscribers share both the upstream and the downstream bands. As we have seen, the HFC network can support a large number of subscribers, 400,000 from a single regional cable head, so obviously the bands have to be shared. The upstream bandwidth is only 37 MHz, which is divided into 6 MHz bands as I have already mentioned, and that is done by frequency division multiplexing; each of these 6 MHz bands can be shared by a number of users. Actually one channel is allocated to a group of users, either statically or dynamically.

We shall see how this upstream band is shared, how a particular channel can be used by a particular subscriber at a particular instant of time. That means you have to use some kind of time division multiplexing. This is actually Time Division Multiplexing, and essentially it uses a medium access control kind of technique. We shall see how this can be done later on.

Then downstream sharing is also done, because you have got only 33 channels, each of 6 MHz bandwidth, and these are shared by all the users. Of course here contention is not required; it is essentially multicasting. The signal coming from the network, maybe through optical fiber cable, is distributed to a large number of users, and based on the address the signal is picked up, so multicasting is used. That means here multicasting is done based on matching of the address. Different signals on different channels go to different users based on the address, because each user is provided with an address when they get the HFC service. So here are the different devices that are used in cable modem systems.
(Refer Slide Time: 23:42)

Here you see the cable modem being used in the residential premises. This is the customer residence (Refer Slide Time: 23:50), this is the coaxial cable, and a tap is made; from there the signal comes to a filter which separates out the video signal, so the video signal will go to a television and the data signal will go to a cable modem, which will do the necessary modulation and demodulation, and the signal will go to the computer. You can see here that both data as well as video communication is possible by using this device.

Now let us look at the system used in distribution hub.


(Refer Slide Time: 25:09)

The Cable Modem Transmission System or CMTS is installed in the distribution hub of the cable company. Here we see that the signal comes over optical fiber. In the previous case it was the coaxial cable going to the residences, but here the optical fiber is going to the distribution hub (Refer Slide Time: 24:55), and from the distribution hub it is bidirectional: you have a pair of optical fibers, the upstream and downstream fibers. Then we have two signals coming in, namely the video signal and the data signal. The video signal is coming from the head end, the other signal is coming from the internet; as you can see it is a bidirectional signal which goes to the CMTS, it is combined here (Refer Slide Time: 25:26) and that goes to the fiber. That means the downstream signal goes to the user, while the signal meant for the internet comes in the other direction and goes to the internet. This is how the communication takes place.

That means the CMTS receives data from the internet and sends it to the combiner, and it also receives data from the subscribers and passes it to the internet. Therefore the CMTS communicates in both directions.

Let us see the data transmission scheme used in cable TV system. As I have mentioned
the cable TV system allows data communication. You can have both voice and video and
you can have video on demand if necessary. How this is done is explained here. It uses DOCSIS, the Data Over Cable System Interface Specification, devised by Multimedia Cable Network Systems; so MCNS developed DOCSIS for the purpose of data transmission over cable modem. DOCSIS defines all the protocols necessary to transport data between the CMTS and the CM, and timesharing is used for the upstream data. As I have mentioned, the upstream data has to be time shared. A cable modem must listen for packets destined to it on an assigned downstream channel. That means a channel is assigned, and the assignment is done based on some kind of contention. The CMs must contend to obtain time slots to transmit their information in an assigned channel in the upstream direction. That means it is some kind of contention based medium access control.

In local area networks this kind of technique is used. A cable modem has to contend for a time slot in its assigned channel, and once it obtains the slot it is able to transmit its data. That means it is a combination of FDM and TDM, and it is used here for the transmission of data from a large number of users.

(Refer Slide Time: 28:09)

The CMTS sends packets with the address of the receiving CM in the downstream direction without contention. As I have mentioned, here it is some kind of multicasting; there is no need for contention, because with the help of the address assigned to a particular subscriber the packets are delivered to the right users. This is, in a nutshell, the data transmission scheme used in the cable TV system.

Now we shift gears to discuss another important application of multiplexing, that is SONET.
(Refer Slide Time: 29:39)

As the need for higher data rates is growing, it is necessary to utilize the enormous bandwidth of optical fiber. As you have seen, optical fiber provides very high bandwidth, and to utilize it fully it is necessary to have standards. Two standards have been developed; one in the US, developed by the American National Standards Institute, ANSI. ANSI developed a standard known as the Synchronous Optical Network or SONET.

On the other hand, in Europe another very similar standard was developed by ITU-T, which is known as the Synchronous Digital Hierarchy, or in short SDH. These two are very similar and they have three important features. The first one is that it is a Synchronous Time Division Multiplexing system controlled by a master clock, which adds predictability. So it is essentially a network wide synchronous system, and because it is synchronized by a master clock it is very predictable and dependable. Synchronous transmission also has higher efficiency.

We have already discussed this when we discussed about the Synchronous Time Division
Multiplexing and Asynchronous Time Division Multiplexing. We have seen that
Asynchronous Time Division Multiplexing requires higher overhead. But since SONET
uses Synchronous Time Division Multiplexing it is quite efficient so the overhead is
much less.

The second important feature is that different manufacturers follow a standard. The standards which have been developed are being used by a number of optical fiber system manufacturers, so we get standards-based equipment for use in a SONET system.

Thirdly, it has been designed to allow it to carry signals from incompatible tributary systems. In the last lecture we discussed the telephone network. There we have seen that the DS, Digital Signal, system has been developed, which is a hierarchy of signals used in the telephone network, but there the bandwidth is smaller. Here we shall see that those DS signals can also be sent through the SONET network; because of this, SONET is becoming increasingly popular and widely used. Here is the schematic diagram of a
SONET network.

(Refer Slide Time: 31:39)

Here we see there are three different types of devices: STS multiplexers, regenerators and add drop multiplexers. These are the three different types of devices used here. Electrical signals coming from different users come to the STS multiplexer and are converted to an optical signal; so here it is electrical and it is converted into an optical signal with the help of the STS multiplexer. The optical signal travels on and, if necessary, it is regenerated with the help of a regenerator, which is a kind of repeater: it takes the optical signal, removes the noise and regenerates it. In fact it does more than that, as we shall see. From the regenerator it goes to the add drop multiplexer. This is another kind of multiplexer, not the same as the STS multiplexer; here optical signals can be picked up or combined and then sent through the optical cable. Then it goes to another add drop multiplexer if necessary, and if the distance is long you will require one or more regenerators before it reaches the STS multiplexer at the other end, where again it is converted into an electrical signal and goes to the users. So between two users you have got STS multiplexers.

In the simplest case you need not have a regenerator or an add drop multiplexer; essentially the STS multiplexers can be directly connected if the distance is small. However, if the network is complex you will require a large number of STS multiplexers, regenerators and add drop multiplexers.
(Refer Slide Time: 33:36)

So we have seen three different types of devices used in SONET. What are the functions
of these three different types of devices? Let us see.

The first one is the Synchronous Transport Signal multiplexer/demultiplexer. It either multiplexes signals from multiple sources into an STS signal (the STS signals form the hierarchy of SONET signal levels) or demultiplexes an STS signal into different destination signals. Here the signal gets converted from electrical to optical or from optical to electrical, so both these conversions are done in the STS multiplexers. A number of electrical signals are combined and then sent in optical form, or a number of optical signals are received by the STS multiplexers and converted into electrical signals to be sent to different destinations.

Then you have got regenerators. A regenerator is a repeater that takes a received optical signal, removes the noise and regenerates it. It also functions in the data link layer, so it does something more than simply regenerating the signal; as we shall see, it takes out some information and adds some information in the frame, and that's why it works in the data link layer as well as in the physical layer.

Then you have got the add drop multiplexer. An add drop multiplexer can add signals coming from different sources into a given path, or remove a desired signal from a path and redirect it, without demultiplexing the entire signal. So the complete multiplexing and demultiplexing is restricted to the STS multiplexers; on the other hand, the add drop multiplexer can add a signal to a path or remove a desired signal from a path, and this is done based on address and pointer.

SONET has four different layers.


(Refer Slide Time: 37:28)

We have earlier discussed the seven layer OSI model and we have seen the functions of its different layers. Here SONET is divided into four different layers to manage the complexity; whenever the complexity of a system is very high it is divided into a number of layers. The first one is the photonic layer. The photonic layer corresponds to the physical layer of the OSI model; NRZ encoding is used and the modulation is On/Off Keying. That means when a 1 is sent the optical signal is present and when a 0 is sent no signal is present: 0 corresponds to no optical signal and 1 corresponds to the presence of the optical signal. Hence the modulation used is essentially On/Off Keying.
The section layer is responsible for movement of signal across a physical section. It
performs framing, scrambling and error control. So the section layer performs three
different functions and also it is responsible for movement of signal across a physical
section. Later on let us see what we mean by section.

The line layer is responsible for movement of a signal across a physical line. STS multiplexers and add drop multiplexers are provided with line layer functions. That means the regenerators do not have the line layer functions; only the STS multiplexers and add drop multiplexers are provided with this line layer functionality.

Finally it has got path layer. This is responsible for movement of signal from optical
source to destination. STS multiplexers are provided with functionality of this layer. Only
the STS multiplexer is provided with this functionality. Let’s see the relationship of this
four layer SONET model with the ISO’s OSI layer.
(Refer Slide Time: 38:24)

We know that ISO's OSI model has seven layers, the lower two being the physical and data link layers. The photonic layer belongs to the physical layer, where it decides the signal level, distance and all the other parameters for physical transmission of the optical signal. On the other hand, the data link layer here has three different sub-layers: all three upper layers of SONET, that is the path layer, line layer and section layer, belong to the data link layer of the OSI model. And here as you can see (Refer Slide Time: 39:07) this functionality is incorporated in the different devices of the SONET system as shown here.

The STS multiplexers are provided with the functionality of path, line, section and
photonic layers all the four layers. On the other hand the regenerators are provided with
only the functionality of two layers photonic and section layer. Since it has got the
functionality of photonic and section layer that’s why we say that regenerators belong to
both physical as well as data link layer.

On the other hand, the add drop multiplexers have three layers, the photonic, section and line layers, and the STS multiplexer, as we have already mentioned, has all four layers: photonic, section, line and path. Now let us see what we really mean by section, line and path. Between any two adjacent devices, for example between one regenerator and (Refer Slide Time: 40:15) another regenerator, there is a section.
(Refer Slide Time: 40:17)

Between a regenerator and an add drop multiplexer there is a section, and between an add drop multiplexer and an STS multiplexer there is a section. That means a section is formed between any two adjacent devices. On the other hand, a line is formed between two line terminating equipments, or between a path terminating equipment and a line terminating equipment; the path terminating equipment is essentially the STS multiplexer and the line terminating equipment is essentially the add drop multiplexer. Finally, the end-to-end span from user to user, that is, from the point where the signal is converted from electrical to optical to the point where it is converted back from optical to electrical, is a path. So the end-to-end connection is the path, a path may contain several lines, and each line may contain several sections, as depicted in this diagram.

(Refer Slide Time: 42:06)


Now, as I have mentioned, a hierarchy of signal levels called Synchronous Transport Signals is defined. These Synchronous Transport Signals form a hierarchy just like the DS signals we have seen in the telephone network. Here as you can see (Refer Slide Time: 42:03) we have STS-1, which has a raw bandwidth of 51.84 Mbps, STS-3, which is three times that at 155.52 Mbps, and so on up to STS-192.

Of these, STS-1, STS-3, STS-12 and STS-24 are the most popular hierarchical levels commonly used. The raw data rates, that is the electrical data rates, are given in the third column, starting with 51.84 Mbps; as you can see, STS-192 has 9953.28 Mbps. So a very high data rate is supported by the optical fiber SONET network. These are the service levels (Refer Slide Time: 43:01), and each is converted into an optical signal carried by the corresponding optical carrier. So the STS-1 signal is converted into the optical signal OC-1, that is, optical carrier 1 carries the STS-1 signal, OC-3 carries the STS-3 signal, and so on. In this way you have the physical levels, the optical carriers, used to carry the signals.
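To make these figures concrete, here is a small Python sketch (my own illustration, not part of the lecture material) that computes the raw rate of any STS-n level, carried optically as OC-n, as n times the 51.84 Mbps STS-1 rate.

STS1_RATE_MBPS = 51.84

def sts_rate_mbps(n):
    """Raw (electrical) rate of STS-n, carried optically as OC-n."""
    return n * STS1_RATE_MBPS

# a few of the levels mentioned above (illustrative list)
for n in (1, 3, 12, 24, 48, 192):
    print(f"STS-{n} / OC-{n}: {sts_rate_mbps(n):.2f} Mbps")
# STS-192 works out to 9953.28 Mbps, matching the figure quoted above.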

Now, although the third column gives the raw bandwidth in Mbps of the Synchronous Transport Signals, there are overheads. First of all, as we shall see, out of the 51.84 Mbps of STS-1 the Synchronous Payload Envelope (SPE) accounts for only 50.112 Mbps; the remainder is the section and line overhead. The path overhead is carried inside the SPE, and if you exclude that as well you get a payload of only 49.536 Mbps. This is the actual payload that you get from the SONET system. So the overhead is not really very high: the raw rate is 51.84 Mbps and the payload is 49.536 Mbps, so the overhead is only about 4%, and it includes the overheads of the layers that we shall see.
(Refer Slide Time: 45:02)

This is the basic STS frame format at the photonic layer. The STS-1 signal is transmitted as a stream of 8000 frames per second, sent one after the other: frame 1, frame 2, and so on up to frame 8000 within each second. Each frame consists of 810 octets, and each octet has eight bits, so a single frame comprises 6480 bits; since 8000 such frames are sent per second, the data rate is 51.840 Mbps. Here a single frame is shown, containing 810 octets.

(Refer Slide Time: 46:50)


As you can see, the frame has nine rows of 90 octets each. The first three columns are used for the section and line overhead: the top three rows of these three columns give the section overhead, and the remaining six rows of the first three columns give the line overhead; in addition, one column is used for the path overhead. If you exclude the first three columns from the 90 columns you get 87 columns, which form the Synchronous Payload Envelope (SPE); out of these, one column is used for the path overhead, leaving 86 columns, that is 774 octets, of user data per frame.
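The octet arithmetic behind these figures can be checked with a short sketch (again an illustration of the numbers quoted above, not part of the original lecture):

ROWS, COLS, FRAMES_PER_SEC = 9, 90, 8000

octets_per_frame = ROWS * COLS                     # 810 octets per frame
raw_rate = octets_per_frame * 8 * FRAMES_PER_SEC   # 51,840,000 bps = 51.84 Mbps

spe_cols = COLS - 3            # 87 columns left after the section + line overhead
payload_cols = spe_cols - 1    # 86 columns left after the path overhead column

spe_rate = ROWS * spe_cols * 8 * FRAMES_PER_SEC          # 50.112 Mbps
payload_rate = ROWS * payload_cols * 8 * FRAMES_PER_SEC  # 49.536 Mbps

overhead = 100 * (raw_rate - payload_rate) / raw_rate    # about 4.4 percent
print(raw_rate / 1e6, spe_rate / 1e6, payload_rate / 1e6, round(overhead, 1))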

Now this diagram gives you more detailed information.

(Refer Slide Time: 47:27)

Here is the section overhead, which occupies the top three rows of the first three columns; the line overhead occupies the remaining six rows of those three columns; and one column is reserved for the path overhead. So you have the section overhead, the line overhead and the path overhead. That means when the electrical signal comes in, first the path overhead, then the line overhead and then the section overhead are added. Once all the overheads are incorporated in the frame, it is converted into an optical signal and transmitted in optical form.

Now those STS signals can be combined to form higher order signals.
(Refer Slide Time: 48:15)

For example, three STS-1 signals can be combined with the help of a multiplexer to form an STS-3 signal, and in general you can have an STS-n frame format where, as you can see, the overhead columns are provided at the front, occupying 3 × n columns, and the remaining 87 × n columns are essentially the payload, within which the path overheads are present (not shown here). This is how the multiplexing is done and different levels of the hierarchy are created. Here, again, three STS-1 signals are combined to form an STS-3c concatenated signal: the overhead columns come first and the payload 1, payload 2 and payload 3 columns are combined into one concatenated payload, which can be sent through the optical network.

(Refer Slide Time: 49:27)


Now the question arises: is it possible to send the DS signals used in the telephone network through the SONET network? We shall discuss how this is done. As we know, the Digital Signal service used in the telephone network has a hierarchy of digital services. DS-0 is similar to DDS, a single digital channel of 64 Kbps; DS-1 is a 1.544 Mbps service, DS-2 is a 6.312 Mbps service, DS-3 is a 44.736 Mbps service and DS-4 is a 274.176 Mbps service, and as you know the T lines are used to implement these services.

Now as you can see here even the DS-3 bandwidth is lower than the STS-1 signal. So
how we shall send DS-1, DS-2 and DS-3 signals? For that purpose a technique known as
Virtual Tributaries are created. Let us see how we can send those DS signal through the
SONET signal by using Virtual Tributaries.

(Refer Slide Time: 52:04)

To make SONET backward compatible with the current hierarchy, the frame design includes a system of Virtual Tributaries. Four types of tributaries have been defined. One is VT1.5, used to accommodate the DS-1 service of 1.544 Mbps, which as you know can carry twenty-four voice channels. VT2 can accommodate the European CEPT-1 service with a bandwidth of 2.048 Mbps. VT3 can accommodate one DS-1C service with a bandwidth of 3.152 Mbps. VT6 can accommodate the DS-2 service of 6.312 Mbps. More than one tributary can be interleaved column by column, and SONET provides a mechanism to identify each virtual tributary and separate it out without demultiplexing the stream. That means, based on pointers and addresses, those Virtual Tributary signals can be separated out and sent to the proper destinations. So here the Virtual Tributaries are shown.
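As a rough cross-check of these virtual tributary rates, each VT occupies a fixed number of columns of the frame (3, 4, 6 and 12 columns respectively, as discussed with the figure below), and its raw rate is columns × 9 rows × 8 bits × 8000 frames per second. A small illustrative sketch:

def vt_rate_mbps(columns):
    # raw rate = columns x 9 rows x 8 bits x 8000 frames per second
    return columns * 9 * 8 * 8000 / 1e6

vts = {"VT1.5": (3,  "DS-1 at 1.544 Mbps"),
       "VT2":   (4,  "CEPT-1 at 2.048 Mbps"),
       "VT3":   (6,  "DS-1C at 3.152 Mbps"),
       "VT6":   (12, "DS-2 at 6.312 Mbps")}

for name, (cols, service) in vts.items():
    print(f"{name}: {cols} columns -> {vt_rate_mbps(cols):.3f} Mbps raw, carries {service}")
# VT1.5 gives 1.728 Mbps of raw capacity for the 1.544 Mbps DS-1 signal;
# the difference is VT overhead, as noted below.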
(Refer Slide Time: 52:12)

This is VT1.5; it occupies three columns, which gives a raw rate of 1.728 Mbps. However, it is carrying only the 1.544 Mbps DS-1 signal, and the rest is essentially overhead. Similarly, the other three tributaries occupy more columns: VT2 has four columns, VT3 has six columns and VT6 has twelve columns, and these can be interleaved in a single STS-1 SONET frame for transmission through the optical fiber. Therefore we have discussed two important applications of multiplexing in this lecture, one is the cable modem and the other is SONET. Now let us look at the Review Questions.

(Refer Slide Time: 53:48)


1) Distinguish between the services provided by the cable TV and HFC networks.
2) How the upstream bandwidth is shared by group of users for data transfer in HFC
network?
3) How is an STS multiplexer different from an add drop multiplexer?
4) What is the relationship between STS and STM?
5) How does SONET carry data from DS one service?

These questions will be answered in the next lecture. Now it is time to answer the
questions of the previous lecture - 12.

(Refer Slide Time: 54:14)

1) Distinguish between analog switched service and analog leased service.

As you know, the analog switched service is the commonly used dial-up service. In this dial-up service, before data communication can take place you have to establish a connection; it is a common experience to us, first you dial a number and establish a connection and then you transfer data. On the other hand, in the leased service a dedicated link always exists irrespective of whether any data is sent or not, so there is no need for establishing a connection; the connection is already established.
(Refer Slide Time: 54:52)

2) If a single mode optical fiber can transmit 10 Gbps how many telephone channels can
one cable carry?

As we know, 1.544 Mbps is required to send 24 telephone channels using pulse code modulation, so 10 Gbps will allow sending roughly 155,000 voice channels over an optical link. You can do the calculation yourself: 10,000 Mbps divided by 1.544 Mbps gives about 6476 such streams, and at 24 channels per stream that is about 155,000 voice channels through a 10 Gbps optical fiber. So you can see that a very large number of voice signals can be sent through a single optical fiber.

(Refer Slide Time: 56:15)


3) How DSL provides broadband service over local loop?

As we know, twisted-pair is used in the local loop, and it has a bandwidth of about 1.1 MHz. So by using a suitable modulation technique it is possible to provide broadband service over the local loop of the telephone network. The filters that are commonly used in the telephone network are removed, and with a suitable modulation technique it is possible to provide broadband service in DSL.

(Refer Slide Time: 56:45)

4) Why are the actual data rates available through a DSL line substantially lower than the maximum possible rates?

As we have seen, although the maximum data rate is high, because of the long length of the cable, attenuation, quality of the cable and signal to noise ratio it is not possible to send at that maximum rate. The data rate that can be used is decided dynamically based on the line condition in a DSL network, and usually it is much lower than the theoretical maximum.

So friends in the last two lectures we have discussed four important applications of
multiplexing. First we have seen the application of multiplexing in telephone network
then we have seen the application of multiplexing in DSL technology which provides
broadband service and third application we have seen in cable modem and fourth one is
SONET and all these are very popular and widely used for data communication, thank
you.
Data Communications
Prof. A. Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur

Lecture-14
Interfacing to the Media

Hello and welcome to today’s lecture on Interfacing to the Media. We have discussed
various techniques such as encoding, modulation and multiplexing with the help of which
you can transmit signal over any transmission medium.

Now the question arises: how do you interface your equipment to the transmission medium? It can be twisted-pair, coaxial cable, optical fiber or wireless media, but whatever it may be, you have to interface your equipment somehow to the transmission medium. That is the topic of discussion of today's lecture. Here is the outline of the lecture.

(Refer Slide Time: 1:52)

First we shall discuss the basic concept of the interface and where it is required. Then we shall discuss the various modes of communication between the interface and the user equipment, which may be a computer. The various modes or transmission techniques are parallel and serial, simplex, half duplex and full duplex, and asynchronous and synchronous transmission. Then we shall discuss the interface between two equipments, the DTE (Data Terminal Equipment) and the DCE (Data-Circuit Terminating Equipment). The interface between these two follows some standards, one of which is EIA-232, developed by the EIA (Electronic Industries Association).
We shall discuss it in detail, then we shall discuss the related concept of the null modem and another interface, X.21, which is not as popular. Then we shall discuss the modems, which are actually connected to the medium.

And on completion of this lecture the students will be able to explain various modes of
communication, distinguish between half duplex and full duplex modes of
communication, they will be able to distinguish between asynchronous and synchronous
modes of communication and they will be able to specify the RS-232c or EIA-232
standard used for DTE DCE interface. They will be able to explain the function of a null
modem and finally they will be able to explain the functionality and standards used for
MODEMS.

(Refer Slide Time: 3:48)

Let us see what we really mean by the interface.

(Refer Slide Time: 5:01)


As we have seen to send data through the transmission medium we have to do various
encoding techniques that we have discussed in detail. Now that encoded data has to be
transmitted and for that purpose it is necessary to interface data terminal equipment. By
data terminal equipment we mean the computers and various other equipments. It can be
computer, printer, plotter it can be any other equipment which can transmit or receive
data. Then another equipment will be required known as DCE Data-Circuit Terminating
Equipment.

The Data-Circuit Terminating Equipment will actually interface to the medium. As you can see here, it is the Data-Circuit Terminating Equipment that launches the signal. Obviously we have to develop an interface between the Data Terminal Equipment, referred to as DTE, and the Data-Circuit Terminating Equipment, referred to as DCE. This interface is an electrical circuit; later on we shall discuss how digital signals are sent over it. Some standard has to be used for this purpose, and we shall discuss this interface in detail. Before that let us discuss the various possible modes of transmission, or rather modes of communication.

(Refer Slide Time: 5:50)


Data can be transmitted in parallel or in serial. Again parallel has two different types
synchronous and asynchronous and serial also has two variations synchronous and
asynchronous. First let us consider the Parallel Transmission technique.

(Refer Slide Time: 6:41)

In the Parallel Transmission technique, a number of bits, usually a byte or a word, is sent in parallel; a byte is 8 bits, and a word can be 8, 16, 32 or 64 bits depending on the size of the processor. As you can see, all the bits are transmitted in parallel between the sender and the receiver. Obviously, whenever you transmit data in this manner it is very fast because all the bits are going simultaneously.
However, this is possible only over a short distance, and Parallel Transmission is commonly used for transfer of data between the CPU and the memory, or between the CPU and other peripherals which are very close to it. That means when two equipments are located very close to each other, a few metres or a few feet apart, this parallel mode of communication is possible. Parallel communication is not commonly used in the data transmission techniques that we are discussing; instead, serial transmission is normally used. What do we really mean by serial transmission?

(Refer Slide Time: 8:02)

As you can see in serial transmission a pair of wire is used for communication of data in
bit serial form. Instead of sending in parallel you are sending it bit by bit. As you can see
here we are sending them one by one, first one, then 0, then 1, then 0, then 1 and 0 and 1
and 0 and 1 so in bit serial fashion and of course this will require parallel to serial
conversion by the sender and also it will require serial to parallel conversion at the
receiving end (Refer Slide Time: 8:00).

However, it has many advantages and particularly it is very suitable for long distance
communication. That means serial mode of communication is possible over long
distances.
(Refer Slide Time: 8:20)

Here various reasons are given to explain why serial transmission is required and in this
lecture we shall mainly focus on serial transmission. Now let us see what are the various
techniques or why serial transmission is required. First of all it allows you reduced cost of
cabling. As we have seen we are using lesser number of bits at a time, we are only
sending one bit at a time so you require one pair of wire and you don’t require a bunch of
wires so it requires lesser number of wires compared to parallel connection.

In parallel communication you require a bunch of wires to be connected between the DTE and DCE, which is not required here. Then it leads to reduced cross talk: because of the smaller number of wires the cross talk is reduced. We have discussed cross talk in detail and we have seen that whenever a number of wires are bunched together it leads to cross talk; in the serial mode of communication cross talk is reduced significantly.

The third important factor is the availability of suitable communication media. We are familiar with the telephone system, microwave systems and satellite networks; in all these cases data transmission takes place in serial form. That means if we have to use the popular transmission media, like the telephone network, satellite network or microwave network, we have to use the serial mode of transmission.

The fourth point is that there are many devices which are inherently serial in nature, and in such cases it is natural to use the serial mode of communication. For example, whenever we read data from devices like tape drives, floppies, hard disks and so on, it always comes in serial form.

Another factor which is becoming important in the present day context is the availability of portable devices like PDAs, cell phones and so on. PDAs and cell phones are very small in size, and if we use a parallel connector having a large number of pins it is very difficult to fit it in a PDA or cell phone. However, if we use serial communication the number of pins required is small, so the connector size is also small, and only a small connector needs to be used in portable devices. That is why the serial mode of communication is used in such cases.

However, as we shall see, the serial mode of communication is slower than the parallel mode. It is much slower than what is possible when we send data in parallel form; for example, when data is transferred between the processor and main memory the transfer rate is very high, and we cannot achieve that rate serially. However, such a rate may not be required in many applications. Let us see the various modes.

(Refer Slide Time: 13:01)

The first one is the simplex mode. Here, as you can see, the data flow is unidirectional: data goes from the sender to the receiver and no data goes from the receiver to the sender. It flows in only one direction. For example, this can be a computer and this can be a printer; in such cases we can use the simplex mode of communication. However, the simplex mode cannot always be used; in some situations communication has to be in both directions, that is, bidirectional. For example, here you have one computer and here you have another computer, and when two computers are communicating with each other it is quite natural that the data transfer will be in both directions. This is known as full duplex.

On the other hand, for the previous one it is unidirectional so it is known as simplex. So,
for full duplex communication as you can see we require two pairs of wires for
communication of data simultaneously because here it allows simultaneous
communication in both directions that’s why it is called full duplex. However, there are
situations in which it is not possible to have two separate transmission links or channels.
In such cases we have to use what is known as half duplex communication. In half duplex
communication as you can see you have got only one channel (Refer Slide Time: 13:36)
but communication is taking place in both directions. Obviously the communication
cannot take place in both directions simultaneously. Therefore in such case we have some
protocol.

For example when a policeman is talking to the headquarters we will find that he is
talking then he is telling over then again listening for some time then again starts talking
so they have a single communication channel through which the headquarter is
communicating with that policeman. So in such a case the technique is known as half
duplex. That means a single communication channel is being used for communication in
both directions but in one direction at a time. This is known as half duplex
communication.

(Refer Slide Time: 14:20)

Then comes the other two techniques that is asynchronous and synchronous. Let us see
what we mean by Asynchronous Serial Transmission.
(Refer Slide Time: 15:02)

In Asynchronous Serial Transmission, instead of sending a long stream of bits at a time, the bits are grouped into bytes or words, and each such character, a byte or, say, a 7-bit ASCII code, is sent in the form of a small frame, one at a time. As you can see, there is a frame for sending one such character, and each character is framed with the help of a start bit at the beginning and stop bits at the end. That means each character has one start bit and one, one and a half or two stop bits, there is an optional parity bit, and the character size can be 5 to 8 data bits.

Hence we are not sending more than eight data bits at a time, and those five to eight data bits are framed with the help of a start bit, stop bits and, if necessary, a parity bit for error detection. Whenever the sender is not sending data the line remains idle; the idle state of the line is 1.

Whenever data transmission starts, the line is made 0 by the start bit, signifying that a word is being transmitted, and the transmission of the character ends with the stop bits. Immediately after sending the stop bit one can send another word, in which case the next start bit follows, or the line may remain idle as shown. In this way the receiver has the opportunity to resynchronize at each new character. So here you can see that (Refer Slide Time: 16:53) before a particular character is received, the receiving end can synchronize with the help of the start bit. This has many advantages.
(Refer Slide Time: 17:10)

Here it is shown how several characters are being sent from the sender to the receiver.
Here you can see one character has been sent and then there is a gap and that gap can be
indefinite, it can be of any duration then another character is sent then there is a gap then
another character is sent and then there is a gap and so on. And it has been found that this
technique is very simple to implement that’s why this is quite widely used. And one
important characteristic is it is self synchronizing. We have seen that whenever data is
sent by the sender at the receiving end it has to be received in a synchronized manner so a
1 is received as 1, a 0 is received as 0 at proper bit position. If it is not received in this
manner then it will be incorrect.

Here the receiver synchronizes with the help of the start bit: it locks on to the leading edge of the start bit and then samples at the middle of each subsequent bit. As a result it gets synchronized with that edge, and then at the middle of each bit the data is sampled to find out whether it is 0 or 1; as a consequence the timing requirement is modest.
(Refer Slide Time: 18:13)

Even if the receiver clock is five percent slower or faster, the signal can be received correctly because the synchronization is done for each character separately, at the beginning of each word. But it has a limitation. The limitation arises because of the additional bits, like start and stop bits, which increase the overhead. For example, to send 8 data bits you may require 1 start bit plus 8 data bits plus possibly 1 parity bit plus 1 stop bit. So these are the data bits (Refer Slide Time: 19:27), this is the parity bit and this is the stop bit. Altogether you require 11 bits to send 8 data bits. That means 3 additional bits for every 8 data bits, that is 3/8, which is roughly 37.5% or about 40% overhead.
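The framing and the overhead figure can be illustrated with a small sketch; the LSB-first bit order and even parity used here are common conventions assumed for illustration, not something the lecture prescribes:

def async_frame(byte, use_parity=True):
    """One character framed for asynchronous transmission:
    start bit (0), 8 data bits (LSB first), optional even parity, stop bit (1)."""
    data = [(byte >> i) & 1 for i in range(8)]
    bits = [0] + data                 # start bit followed by the data bits
    if use_parity:
        bits.append(sum(data) % 2)    # even parity bit
    bits.append(1)                    # stop bit
    return bits

frame = async_frame(0x41)             # ASCII 'A'
print(frame, len(frame))              # 11 signal elements for 8 data bits
print("overhead:", (len(frame) - 8) / 8)   # 3/8 = 0.375, roughly 40 percent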

(Refer Slide Time: 20:02)


You can also think of it in another way. Here you are transmitting 11 signal elements to convey 8 data bits. If 8 data bits are delivered per second, then 11 signal elements are actually sent per second; in that sense the signalling (baud) rate, 11, is higher than the effective data rate, 8. This high overhead cannot be tolerated in many situations, and in such situations we have to go for the other alternative, the synchronous mode of communication.

(Refer Slide Time: 22:09)

In the synchronous mode, what is normally done is that initially one or two synchronization characters are sent, instead of a start bit and stop bit for each character. Data characters are then sent continuously without any extra bits; you are not sending start and stop bits for each character. At the end some error detection data may be sent if necessary. The advantage is that the overhead is much less, well under 1%, perhaps of the order of 0.01%; there is essentially no overhead except for the synchronization characters.

Synchronization characters are necessary to identify a beginning of a block of data.


However, the main disadvantage here is there is no tolerance in clock frequency. You
cannot accept any tolerance because you are sending large number of characters. If there
is mismatch in frequency after few bits the receiving end will receive incorrect data that’s
why here it is necessary to transmit in fully synchronous manner.

Here is the synchronous serial transmission. You can see how it is being done.
(Refer Slide Time: 22:18)

A block of bits is transmitted in a steady stream. Here is the data field (Refer Slide Time: 22:23), which can be very large, a few kilobits; there is a preamble consisting of an 8-bit flag and some control bits, and there is a postamble consisting of a control field, possibly error detecting or error correcting codes, and another 8-bit flag. So a preamble and postamble are provided along with the data, but the percentage of overhead is very small. The synchronization is done at a higher level using the preamble and postamble bit patterns: special characters are sent at the beginning and the end to mark the beginning and end of the stream of data. This is the synchronous frame format.

Now one question arises: we have said that the clocks of the two ends must be fully synchronized, so how can that be done? One alternative is that a separate line can be used for sending the clock, and at the receiving end that clock is used for receiving the data. But that is not practically feasible unless the two systems are located very close together. The normal practice, therefore, is to recover the clock from the received data with the help of suitable hardware known as a phase-locked loop. The phase-locked loop requires transitions in the signal, and as we have seen, the encoding techniques discussed earlier are designed so that such synchronization is possible and the clock can be recovered at the receiving end. So synchronous serial transmission requires a more complex clock recovery circuit so that synchronization is possible.

Now, coming to the question of interfacing we have seen that we have data terminal
equipment it can be computer or any other equipment which can send data or receive data
and we require a DCE Data-Circuit Terminating Equipment. This Data-Circuit
Terminating Equipment will launch the signal in this communication system and at the
other end another DCE will receive the signal and it will then send the signal to the DTE
at the receiving end. In between you will require interface at both ends between the DTE
and DCE.
(Refer Slide Time: 25:02)

The DCE is usually a modem, and the DTEs are the computers and various other equipments that we use for sending data. Now the interface has to be universally accepted so that two equipments, one DCE and one DTE manufactured by two different companies, can be interfaced. For that purpose some standards are to be used. The standards for the DTE-DCE interface have been developed by two organizations, the EIA (Electronic Industries Association) and the ITU-T.

As you can see, the standards developed by the EIA are known as EIA-232, EIA-422 and so on, while the ITU-T standards are known as the V series or X series standards. Here we shall primarily discuss the EIA standards because these are very popular. As we shall see, this interface has four different components: mechanical, electrical, functional and procedural. Let us see what these different components do.
(Refer Slide Time: 26:34)

The EIA-232 standard has different versions, A, B, C and presently D. The mechanical specification specifies the connector type: you require a connector and a cable to connect the two equipments, and the mechanical part of the standard specifies that. In the main standard, 25-pin male and female connectors are suggested, and these connectors are used for linking a DCE with a DTE. However, all 25 pins are not used; nine of them are commonly used, as shown in this particular diagram, and subsequently a DB-9 nine-pin connector was developed which is commonly used. It is smaller in size, and the various signals present on these pins are also shown here.

Then the electrical specification gives you the logic levels, the maximum cable length
and the maximum data rate baud rate which can be used for communication between
DTE and DCE.
(Refer Slide Time: 27:56)

As you can see, in EIA-232C the electrical signal levels are not TTL compatible; that is, it is not 0 V for logic 0 and +5 V for logic 1. For logic 0 the range is +3 to +25 V, and usually +12 V is used. On the other hand, for logic 1 the range is -3 to -25 V, and in practice -12 V is used. The electrical specification thus specifies the voltage levels, the maximum cable length and the maximum baud rate; these are the three parameters specified by the electrical specification.

Then comes the functional specification. It specifies the signal lines that are required. They can be broadly divided into four types, namely data lines, control lines, timing lines and ground lines. For EIA-232C basically nine lines are used, as shown here. For example, this is the ground signal (Refer Slide Time: 29:10), then you have transmit data and receive data, which are the data lines, and then you have Carrier Detect, DCE Ready, Clear To Send, Ring Indicator and so on, which are mostly control signals, along with the timing signals.

I shall explain the function in the next specification that is the procedural specification.
(Refer Slide Time: 30:06)

This procedural specification specifies the sequence of events that will take place for
transferring data. So, to send data you have to use a sequence so that the transmitter and
the receiver can communicate with each other. It is some kind of agreed upon protocol
that is being used for communication between the two systems. For example, DSR Data
Set Ready and Data Terminal Ready are used to check whether both devices are ready for
communication. That means these two DSR and DTR will be used for establishing the
fact that both are ready for communication that both have been turned on. On the other
hand, the request to send and clear to send are the handshaking signals before the
transmission of data can be started. And both these transmit and receive are used for
transmitting data between DTE and DCE. Let us see how it is exactly being done.
(Refer Slide Time: 32:12)

Here we see a DTE on one side and a DTE on the other side, with two DCEs, modem A and modem B, connected through the transmission medium. First, DTE A asserts the DTE Ready signal to its modem and sends, over the data lines, the telephone number of the system it wants to reach. The telephone network delivers a ringing signal to the other end, and when modem B receives it, a Ring Indicator signal goes to DTE B. In response, DTE B asserts its DTE Ready signal, informing modem B that it is ready for receiving data, and in response a carrier signal is sent by modem B towards modem A. After receiving that carrier, modem A asserts the DCE Ready signal, informing DTE A that the DCE or modem is ready, and it also sends a carrier signal to the other end; after receiving it, modem B asserts the signal (carrier) detect line to DTE B. Now the system is ready for communication: a link or channel has been established and both DTEs are ready. Let us see how data is then transferred.

Before transmission of data, DTE A sends a Request To Send, asking the modem whether it can send data or not; in response the modem gives a handshaking signal known as Clear To Send. After receiving that, DTE A sends the data on the Transmit Data line, which goes to the other end, where DTE B receives it. After the data has been received the communication is complete. This is the procedure: at the beginning of each transfer a request to send is issued, the clear to send indication is given, and then the data is transmitted to the other end. In this way characters are sent one after the other between the two DTEs.
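The sequence just described can be summarised in a toy sketch; the class, method and argument names below are my own shorthand for illustration and are not part of the EIA-232 standard:

class Device:
    def __init__(self, name):
        self.name = name

    def asserts(self, line):
        print(f"{self.name}: asserts {line}")

    def sends(self, what):
        print(f"{self.name}: sends {what}")


def establish_and_transfer(dte_a, modem_a, modem_b, dte_b, number, data):
    dte_a.asserts("DTE Ready (DTR)")
    dte_a.sends(f"telephone number {number} over the data lines")
    modem_b.sends("ringing indication -> Ring Indicator to DTE B")
    dte_b.asserts("DTE Ready (DTR)")
    modem_b.sends("carrier towards modem A")
    modem_a.asserts("DCE Ready (DSR) towards DTE A")
    modem_a.sends("carrier towards modem B")
    modem_b.asserts("Carrier / Signal Detect towards DTE B")
    # data transfer phase: the RTS/CTS handshake precedes each transmission
    for ch in data:
        dte_a.asserts("Request To Send (RTS)")
        modem_a.asserts("Clear To Send (CTS)")
        dte_a.sends(f"character {ch!r} on the Transmit Data line")


establish_and_transfer(Device("DTE A"), Device("Modem A"),
                       Device("Modem B"), Device("DTE B"),
                       "1234", "Hi")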

So apart from RS-232 standard some other standards have been developed by Electronic
Industries Association. I am briefly comparing the RS-232 with two other standards RS-
423 and RS-422. The reason for that is RS-232 was developed long back when the data
rate requirement was much smaller. As you have seen only 20 Kbps can be sent which is
very low in today’s standard. So, to improve the performance this RS-423 and RS-422
were introduced.

(Refer Slide Time: 34:34)

As you can see, the baud rate here is higher: 300 Kbps for RS-423 and 10 Mbps for RS-422. The cable length is also increased: instead of 50 feet, it is 4000 feet for RS-423 and also 4000 feet for RS-422. These are the output voltage levels: +5 to +25 V for 0 and -5 to -25 V for 1 in RS-232C, while for RS-423 it is +3.6 to +6 V for 0 and -3.6 to -6 V for 1, and for RS-422 it is 2 to 6 V between the two outputs, because it is balanced. Balanced means differential, and differential means the data is sent as the voltage difference between two wires. When it is unbalanced you are sending data through one wire with respect to a common ground line.

In the balanced case there is still a common ground line, but the data is sent as the difference between the two wires, which is why it is called balanced. This helps to reduce the effect of noise, because the noise picked up along the way appears almost equally on both wires and largely cancels out at the differential receiver; as a result, when you use unbalanced or single-ended communication the noise immunity is much less compared to the balanced mode. Because of the higher data rate and longer distance, the balanced mode of communication is necessary here. The input threshold is -3 to +3 V for RS-232C, whereas for RS-423 the threshold is much smaller, -0.2 to +0.2 V, and the same is true for RS-422. Finally, these are the input resistances: 3 kilo-ohms to 7 kilo-ohms for RS-232C, and more than 4 kilo-ohms for RS-423 and RS-422.

So, in brief, this is the comparison with the other standards. Now here is an important concept known as the null modem, in the context of the DTE-DCE interface. We have seen that whenever data transmission is to take place over a long distance, the two DTEs require two DCEs, and communication is then carried through the telephone network.
(Refer Slide Time: 37:09)

However, when two DTEs are in the same room the same interface may be used but there
is no need of any DCE. In such a case the concept of null modem is used. That means
whenever the communication is over a very short distance there is no need for any DCE.
However, we can make use of the good old RS-232 or EIA-232. How it can be done is
explained here.

Here, instead of the DCE, the telephone network and the DCE at the other end, everything is replaced by a cable. So a null modem is nothing but a cable, as you can see. However, the wires are swapped, because the transmit signal of one DTE has to go to the receive line of the other, and vice versa. So a null modem is essentially a cable with female connectors at both ends. In the previous case (Refer Slide Time: 38:02) we used a female connector on one side and a male connector on the other, and similarly at the other end. But in this particular case we have to use female connectors at both ends, because both ends are DTEs, and the wires are swapped as necessary for communication between the two DTEs.

The important point is that the DTE is fooled here: it has the illusion that it is connected to a modem, but in practice it is connected to a cable, and that cable is connected to the other DTE. This is the null modem concept. When the 25-pin connector is used, some of the wires have to be looped back and some of the wires have to be cross connected. The various cross connections and loop-back lines of the null modem for the 25-pin connector used in the RS-232 standard are shown here.
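A typical null modem cross-connection for the 9-pin connector can be written down as a simple mapping; this particular wiring is one common arrangement assumed for illustration, and actual cables vary:

# One common null modem wiring for the DB-9 connector.  Each entry maps a
# signal on DTE 1 to the signal it is wired to on DTE 2.
NULL_MODEM_DB9 = {
    "TxD (Transmit Data)":   "RxD (Receive Data)",
    "RxD (Receive Data)":    "TxD (Transmit Data)",
    "RTS (Request To Send)": "CTS (Clear To Send)",
    "CTS (Clear To Send)":   "RTS (Request To Send)",
    "DTR (DTE Ready)":       "DSR (DCE Ready)",
    "DSR (DCE Ready)":       "DTR (DTE Ready)",
    "Signal Ground":         "Signal Ground",   # straight through, not swapped
}

for near_end, far_end in NULL_MODEM_DB9.items():
    print(f"{near_end:24s} <-> {far_end}")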
(Refer Slide Time: 39:03)

Another standard, proposed by the ITU-T to overcome the limitations of EIA-232, is X.21. X.21 was designed by the ITU-T to overcome the limitations of EIA-232 and to pave the way for all-digital transmission; RS-232 was developed in the era of analog transmission, whereas X.21 was developed to facilitate digital transmission. What has been done here is that most of the control circuits are eliminated and their functions are taken over by the data lines, so the number of pins required is small: control information is carried as data traffic over the data lines instead of on dedicated control pins.

As I have already explained it works in the balanced circuits and data rate is 64 Kbps so it
is relatively high compared to 20 Kbps. It uses a 15-PIN connector which is known as
DB-15 connector and there are various timing signals used for byte control. Here it uses
timing lines for byte control and pin 3 and pin 5 which are control and indication are used
for handshaking. As you have seen, the request to send and clear to send are used in case
of RS-232 but here that control and indication lines are used for that purpose. So in brief
this gives the overview of X.21.
(Refer Slide Time: 40:34)

Now we require another important component for interfacing the DTE to the communication medium. We have seen the interface; now the modem is required, and we have already discussed modems in some detail. As you know, MODEM stands for modulator plus demodulator, so it performs two functions, modulation and demodulation: the modulator converts the digital signal into an analog signal using ASK, FSK, PSK or QAM modulation techniques, and the demodulator converts the analog signal back into a digital signal.

(Refer Slide Time: 41:35)


Here there are two important parameters, as you know: the transmission (bit) rate and the baud rate. The various baud rates and data rates are explained here for the different modulation techniques. As you have seen, for a given baud rate we can achieve a higher bit rate by using a modulation scheme with more signal levels.

(Refer Slide Time: 42:03)

If we use 256-QAM the bit rate can be eight times the baud rate. For example, if we are using a typical telephone line with a baud rate of 2400, we can multiply it by 8 if we use 256-QAM, and that will be the data rate: 8 × 2400 = 19200 bps in this particular case. Standard modems have been developed by the ITU-T and by Bell Laboratories; the modems developed by Bell Laboratories are known as Bell modems and the ITU-T modems are the V series modems.
(Refer Slide Time: 43:34)

Some of these standard modems are shown here, and some of them are equivalent. V.21 is equivalent to the Bell 103 modem, having a baud rate and bit rate of 300 and using FSK. V.22, whose Bell equivalent is 212, has a baud rate of 600 and a bit rate of 1200 and uses 4-PSK. V.23, equivalent to the Bell 202 modem, has a baud rate of 1200 and a bit rate of 1200 and uses FSK (Frequency Shift Keying). V.26, whose Bell equivalent is 201, has a baud rate of 1200 and a bit rate of 2400 using 4-PSK (Phase Shift Keying). V.27, equivalent to Bell 208, has a baud rate of 1600 and a bit rate of 4800 using 8-PSK. V.29, equivalent to Bell 209, provides a baud rate of 2400 with a bit rate of 9600 using 16-QAM. There are other standards providing up to 33.6 Kbps of data transfer using modems; that is the maximum possible data transfer rate using such modems.
(Refer Slide Time: 44:31)

Here the standard modem operation is explained: how it is connected to the telephone network and how it works. This is the DTE, or the computer, and here are the modems; this is the interface, RS-232 or EIA-232, whatever you call it, and these are the two modems at the two ends. The modems are connected by the local loop to the switching exchange, where the signal is converted into digital form using PCM; the digital data then goes through the digital telephone network, and at the other end it is converted back by inverse PCM, goes to the modem on that side, and through the same kind of interface reaches the other DTE. That is how the communication takes place. Because of the quantization performed in PCM, the data transmission rate is restricted, as we have already discussed in detail, and based on that the maximum possible data rate is 33.6 Kbps. So the data transmission rate is not really very high, and you cannot go beyond this with such a system.

However, nowadays another type of modem that has become popular is the 56K modem. You may be asking how it is possible to transmit at such a high rate of 56 kilobits per second; does it violate the Nyquist criterion? In practice it does not. Let us see how it is done.
(Refer Slide Time: 46:57)

These 56K modems can be used when the other end is not connected to the standard analog telephone system but is a digital one, essentially the internet service provider. As you can see, there are two types of signals present here. One goes from the DTE through the local loop, where PCM is applied, to the internet service provider. In this uploading direction the data transfer rate is still 33.6 Kbps because of the presence of PCM in this particular direction (Refer Slide Time: 47:10).

However, when the data is being sent by the internet service provider through the digital
telephone network then there is no PCM involved here. The data is going through the
inverse PCM and it is going through the local loop to the modem. In such a case the data
transmission rate possible is 56 Kbps. So in the downloading direction or down link
direction as you can see high data transfer rate is possible in the reverse direction. The
reason for that is in this case PCM is not present so there is no question of quantization
error and no question of violation of the Nyquist rate.

So we find that 56K modems allow data transmission at a higher rate from the internet to the user, which is precisely the direction in which we require the high data rate: usually we download much more data from the internet than we send towards it. So these are the 56K modems.

To summarize, in this lecture we have discussed the interface between the data terminal equipment, that is your computer and other peripherals, and the Data-Circuit Terminating Equipment or DCE, which is commonly a modem. The standard interface EIA-232 has been discussed in detail, and other standards have been discussed briefly, like X.21 and the other EIA standards EIA-422 and EIA-423. We have also discussed the operation of the modem, how it communicates through the telephone network between two DTEs, and finally the 56K modem and how it provides a higher data rate.
(Refer Slide Time: 49:36)

Now it is time to give you review questions.

1) Distinguish between half-duplex and full-duplex transmission.


2) Why serial transmission is commonly used in data communication instead of
Parallel Transmission
3) Distinguish between asynchronous and synchronous serial modes of data
communication
4) What is null modem and when can it be used
5) In what way the standard modems differ from 56K modems

These questions will be answered in the next lecture.


(Refer Slide Time: 50:37)

Answers to the questions of lecture-13:

1) Distinguish between the services provided by the cable TV and HFC networks.

The standard cable TV service allows distribution of broadband TV signals to a large number of people. Here data transmission is in one direction only. On the other hand, in the HFC (Hybrid Fiber Coaxial) network the data transmission is in both directions; in addition to broadcasting the TV signal in the downstream direction, it allows broadband internet service, which is bidirectional, through the cable network.

(Refer Slide Time: 52:10)


2) How the upstream bandwidth is shared by a group of users for data transfer in
HFC network?

As we know, only six channels are available in the upstream direction, so each channel is shared by a group of users. For this the cable modem has to negotiate with the CMTS, which is present in the distribution hub, to contend for a channel; then the CM can send data to the internet in the allocated channel using TDMA (Time Division Multiple Access). This concept of multiple access will be discussed in more detail later on.

(Refer Slide Time: 53:09)

3) How is an STS multiplexer different from an add drop multiplexer?

The add drop multiplexer operates in the line layer of SONET and it can add signals
coming from different sources into a given path or remove a desired signal from a path
and redirect it without demultiplexing the entire signal. On the other hand, STS
multiplexers operate in the path layer and it either multiplexes signal from multiple
sources into STS signal or demultiplexes an STS signal into different destinations. We
have seen that STS multiplexer signal is responsible for converting electrical signal to
optical signal and optical signal to electrical signal.
(Refer Slide Time: 53:35)

4) What is the relationship between STS and STM?

The standard developed by ANSI is known as Synchronous Optical Network (SONET), whereas another very similar standard developed by ITU-T is known as Synchronous Digital Hierarchy or SDH. Their relationship is shown in this table.

As you can see, OC-1 is not present in SDH; it has no SDH equivalent. On the other hand, OC-3 is equivalent to STM-1, OC-12 is equivalent to STM-4, OC-48 is equivalent to STM-16, OC-192 is equivalent to STM-64 and OC-768 is equivalent to STM-256. These are some of the popular signal levels shown here.

We have already discussed in detail the line rate, payload rate and overhead rate for the
different levels.
(Refer Slide Time: 54:43)

5) How does SONET carry data from DS-1 service?

To make SONET backward compatible with the current DS hierarchy, the frame design of SONET includes a system of virtual tributaries in which more than one tributary is interleaved column by column. SONET also provides a mechanism to identify each Virtual Tributary (VT) and separate it out without demultiplexing the stream. So with this we come to the end of today's lecture, thank you.
Data Communications
Prof. A. Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture-15
Error Detection and Correction

Hello viewers, welcome to today's lecture on Error Detection and Correction. We have discussed various issues related to transmission of signals through different transmission media. We have seen that when a signal is sent through a transmission medium it is subjected to attenuation, distortion and noise, and as a result some of the bits of the data get corrupted; in other words, errors occur while sending data. But we want reliable communication. So if we want reliable communication through unreliable media, what do we have to do? We have to devise mechanisms for detecting and correcting the errors that occur during transmission through the transmission media, and that is the topic of today's lecture.

So here is the outline of the lecture. First we shall discuss why error detection and
correction, then the various types of error that can occur during transmission, third topic
will be error detection techniques.

(Refer Slide Time: 2:20)

We shall discuss various error detection techniques such as parity check, two dimensional
parity check, checksum, Cyclic Redundancy Check etc. These are the commonly used
error detection techniques. Finally we shall discuss error correcting codes which can be
used not only to detect error but to correct error if it occurs during transmission.

And on completion of this lecture the students will be able to explain the need for error
detection and correction, they will be able to state how simple parity check can be used to
detect error, they will be able to explain how two dimensional parity check extends error
detection capability because as we shall see simple error detection or simple parity check
cannot detect all possible errors but only some of the errors are detected.

(Refer Slide Time: 3:03)

To extend the capability, two dimensional parity check is used, and the students will be able to explain that. Then they will be able to state how checksum, another scheme, is used to detect errors, and to explain how Cyclic Redundancy Check works. Finally, the students will be able to explain how the Hamming code is used not only to detect errors but also to correct them.

So we start with why error detection and correction. As I mentioned, because of attenuation, distortion, noise and interference some errors will occur, and this will lead to corruption of some of the transmitted bits. Two parameters are particularly important in this context: the first is the frame size and the second is the probability of single-bit error.

Obviously, if the transmission medium is noisy the probability of single-bit error will be higher, and when we are sending data in the form of a frame, the size of the frame will also affect the probability of the frame being in error. That means two parameters, the frame size and the single-bit error probability, decide whether we receive a frame with or without error. This can be proved mathematically: the longer the frame and the higher the probability of single-bit error, the lower the probability of receiving the frame without error.
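This can be made concrete with a small sketch: if each bit is corrupted independently with probability p, a frame of F bits arrives error-free with probability (1 - p) raised to the power F. The figures below are illustrative values, not taken from the lecture:

def prob_frame_ok(frame_bits, p):
    """Probability that a frame of frame_bits bits arrives with no error,
    assuming each bit is independently corrupted with probability p."""
    return (1 - p) ** frame_bits

for frame_bits in (100, 1000, 10000):
    for p in (1e-6, 1e-4):
        print(f"F = {frame_bits:5d} bits, p = {p:g}: "
              f"P(frame received without error) = {prob_frame_ok(frame_bits, p):.4f}")
# The probability of an error-free frame falls as the frame gets longer
# and as the single-bit error probability rises.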
(Refer Slide Time: 5:01)

That means the probability of receiving a frame without error decreases as the size of the frame increases and also as the probability of a single-bit error increases. This is quite obvious: if the frame is long, say a long sequence of zeroes and ones, then even if only one of the bits gets corrupted the entire frame is in error. This clearly emphasizes the need for error detection and error correction. If we want to communicate in a reliable manner then we have to devise a scheme so that the errors, which are inevitable, can be detected and corrected. Let us look at the various schemes, but before that let us see the types of error.

(Refer Slide Time: 7:06)


The first one is single-bit error. Suppose you are sending a bit sequence, say a 7-bit ASCII character 1 0 1 1 0 0 1; one of the bits may get corrupted and it may become 1 0 1 0 0 0 1. That means a particular character was transmitted (Refer Slide Time: 6:28) and a different character is received by the receiver; obviously the received character is different from the transmitted one. This has occurred due to a single-bit error, so it is called a single-bit error, and it is very common in parallel transmission.

Suppose you are sending eight bits simultaneously using eight lines. Whenever you are sending
eight bits in this way, one of the lines may be faulty, suppose this one is faulty (Refer
Slide Time: 7:17). Whenever a particular line is faulty, obviously the signal coming through
that line will be in error, while the signals coming through the other lines may not be in
error, and as a result this will lead to a single-bit error. This is how single-bit error
occurs, particularly in parallel transmission.

Then there is another type of error which is known as burst error. A burst error occurs
whenever more than one bit gets corrupted. Suppose you are sending a long sequence 1 0 1 1 0 0
1 0 1 and so on; what can happen is that the duration of the noise is longer than the duration
of one bit. Suppose the duration of the noise is this much (Refer Slide Time: 8:05), covering
four bits; then there is a possibility that some of these bits get corrupted, say this becomes
0, this becomes 1, this remains 0, this becomes 1. It is not necessary that all the bits will
get corrupted; here, out of these four bits, three bits have got corrupted, leading to this
signal at the receiving end.

This is essentially a multiple-bit error and we call it burst error because a burst of noise
is responsible for this kind of error. This is very common in serial transmission. Whenever
you are transmitting serially, during transmission the line may be exposed to some noise, may
be from switching on some electrical equipment or from other electrical disturbances such as
lightning in bad weather and so on. Because of any one of these reasons multiple bits get
corrupted, since the duration of the noise is longer than one bit, and this is the burst
error. So we have to develop techniques for detecting not only single-bit errors but multiple
errors including burst errors.

What are the techniques and how is it being done?


(Refer Slide Time: 10:50)

The basic approach is the use of redundancy. Suppose this is your message; the message may
comprise several bits in sequence, and with it some additional bits are used, known as the
Frame Check Sequence, FCS. The Frame Check Sequence is appended to the message and then the
whole thing is sent over the transmission line. So these are the additional bits (Refer Slide
Time: 10:05), which you may call redundancy, incorporated in the message that is being sent,
and these additional bits can be used not only for detecting error but also for correction of
error. That is the basic mechanism of error detection and correction.

Obviously this FCS is a function of M, so the FCS depends on the message. Based on the message
bits the Frame Check Sequence is calculated, and the message and the FCS together are sent to
the receiver. The receiver uses the Frame Check Sequence and the message bits to perform error
detection and correction. We shall see how exactly it is done.

The popular techniques are: simple parity check, where a single bit is added; two dimensional
parity check, which is commonly used to enhance the performance; and the third technique we
shall discuss, checksum, which adds several bits to detect error and whose error detection
capability is better than simple parity check and comparable to two dimensional parity check.
Finally we shall discuss Cyclic Redundancy Check, which is very commonly used and possibly the
most sophisticated technique used for error detection. We shall discuss these techniques one
after the other.
(Refer Slide Time: 11:31)

As I mentioned, the parity check is the simplest and the most popular error detection scheme.
What it does is append a parity bit to the end of the data. Suppose you are sending a
character, say 1 0 1 1 0 0 1, a 7-bit code; now you can add a parity bit, and that parity bit
is chosen based on whether you are using even parity or odd parity. If you are using even
parity then, including that bit, the number of ones should be even. Here you have got 1, 2, 3
and 4 ones, so this bit is made 0 when you are using even parity, so that the total number of
ones in the message that is being sent, including the parity bit, is even.

On the other hand if we are using odd parity then it will be 1 0 1 1 0 0 1 1. So instead of 0,
a 1 is added so that the number of ones is odd; this is the odd parity. As we have discussed
for the asynchronous and synchronous modes of transmission, in asynchronous mode commonly the
odd parity is used and in synchronous mode normally the even parity is used, so both are in
use. Let's see how it is exactly being done.
So this is the basic scheme of parity check. Here it is based on even parity as it is shown
here.
(Refer Slide Time: 13:10)

So this is the data 1 0 1 1 0 1 1 that is being sent and the parity is calculated using very
simple hardware using Exclusive OR Ex-OR gate for computing the parity bit and as you
can see here you have got odd number of ones in the message so another one is added as
parity bit to make the number of ones even. So these data bits along with this parity bit
are sent through the transmission media and reach the receiving end. At the receiving end the
parity is recomputed; that means all the bits are used to check whether the number of ones is
even or not.

If it is even then you accept the data because it is the correct one; on the other hand, if
the number of ones is odd then some error, a single-bit error, has occurred, and in that case
you have to reject the data at the receiving end. This is the basic scheme of using the parity
check. Obviously, to compute the parity bit you can use Exclusive OR gates in both cases, at
the sender and at the receiver. Now let us look at the performance of the simple parity check
scheme.
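As a small illustrative sketch (the helper names here are mine, not the lecturer's), the parity bit is simply the Exclusive OR of all the data bits, and the receiver recomputes the parity over the data plus the parity bit:

def even_parity_bit(bits):
    """Return the bit that makes the total number of ones even."""
    p = 0
    for b in bits:
        p ^= b              # chain of Exclusive-OR operations, as in the hardware
    return p

def check_even_parity(bits_with_parity):
    """True if the received word (data plus parity) has an even number of ones."""
    return sum(bits_with_parity) % 2 == 0

data = [1, 0, 1, 1, 0, 1, 1]                  # 7-bit data from the slide example
codeword = data + [even_parity_bit(data)]     # parity bit 1 makes the count even
print(codeword, check_even_parity(codeword))  # [1, 0, 1, 1, 0, 1, 1, 1] True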
(Refer Slide Time: 15:03)

Here all single-bit errors are detected, and burst errors can also be detected, but only when
the number of bits in error is odd. Obviously when the number of bits in error is even, then
if one bit changes from 0 to 1 and another changes from 1 to 0 the parity remains the same.
That is why whenever an even number of bits is corrupted this simple parity check cannot
detect the error. So we find that this technique is not foolproof against burst errors that
invert more than one bit; in particular, if an even number of bits is inverted due to error,
the error cannot be detected by using this simple parity check scheme.

(Refer Slide Time: 16:05)


To overcome this limitation we can use two dimensional parity check. In two dimensional parity
check the performance is improved by using more bits, that means more redundancy. Here the
data is organized in blocks of bits in the form of a table. Since it is organized in the form
of a table, it takes the shape of two dimensional data, and parity check bits are calculated
for each row, which is equivalent to the simple parity check: for each row there is a parity
bit. Parity check bits are also calculated for all the columns. So you have a parity check bit
for each of the rows and a parity check bit for each of the columns, and both are sent along
with the data; at the receiving end these are compared with the parity bits calculated on the
received data.

(Refer Slide Time: 16:47)

So let us take up an example and see how exactly it works. This is the original data. Here the
entire data has been divided into four segments or blocks and they are organized in the form
of a table, so here you have got four rows corresponding to each segment or block. Then for
each row the even parity is calculated, so you can see that the total number of ones,
including the parity bit, is even. In this way the parity bits are calculated for each of the
rows, and then for each of the columns the parity bits are also calculated, and the even
parity row is added at the bottom.

So, for each of these columns you will find the total number of ones to be even, because this
is even parity. Then for the column of row-parity bits a parity bit is also calculated, which
makes the number of ones in that column even as well. So we find that we send four segments of
data, each of eight bits; that means thirty two bits of data are to be sent, and to send that,
the total number of bits is now nine bits in each row, and you have got five rows, so nine
into five, that means forty five bits. You can see that in simple parity check only one
additional bit is needed per block, but here the number of additional bits is larger; in other
words the overhead is much more than in the simple parity check. Obviously this is done to
achieve better performance.
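A rough Python sketch of this organization follows (the block values and helper names are illustrative, not the exact slide data): even parity is computed for every row, and then for every column including the column of row-parity bits.

def two_d_parity(blocks):
    """blocks: list of equal-length bit lists. Returns the rows with a row-parity
    bit appended, plus a final row of column-parity bits."""
    rows = [block + [sum(block) % 2] for block in blocks]      # row parities
    column_parities = [sum(col) % 2 for col in zip(*rows)]     # column parities
    return rows + [column_parities]

blocks = [[1, 0, 1, 1, 0, 0, 1, 0],   # four 8-bit segments (illustrative values)
          [0, 1, 1, 1, 0, 1, 0, 1],
          [1, 1, 0, 0, 1, 0, 1, 1],
          [0, 0, 1, 0, 1, 1, 0, 1]]
for row in two_d_parity(blocks):
    print(row)                        # 5 rows x 9 bits = 45 bits transmitted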
(Refer Slide Time: 18:53)

So the extra overhead is traded for better error detection capability. How? It can detect not
only odd numbers of bit errors; it significantly improves the error detection capability
compared to simple parity check and can detect most of the burst errors, but obviously it
cannot detect all. Let us take up an example. Suppose in one row these two bits, an even
number, get altered, and in another row the two bits in the same columns also get altered, so
four bits get altered in total. If only the first two bits had been altered then the column
parity bits could be used to detect the error. However, when in row one these two bits get
altered and in the same columns of another row the two bits also get altered, the parity bits
remain unaltered, and as a consequence error detection is not possible. That is why all the
burst errors cannot be detected; however, it detects most of the burst errors.

Now let us see the third scheme, which is based on checksum. In checksum, at the sender's end
the data is again divided into k segments, each of m bits. Just like the two dimensional
parity check, here also the data is divided into segments. In this example we have taken up
the same data as was shown for two dimensional parity check, so you have got four segments, k
equal to 4, each of m equal to 8 bits.
(Refer Slide Time: 20:38)

Now the segments are added using ones complement arithmetic to get the sum. You may
be familiar with ones complement arithmetic. Here what is done is here the addition is
done for each pair of the segments one after the other and as we know whenever there is
an end around carry it is added with the least significant bit to get the sum. So this is the
partial sum by adding first two segments then the third segment is added and here there is
no end around carry so this is the partial sum then the fourth segment is added and here
we find there is end around carry so it is added so in this way we get this partial sum 1 0
0 0 1 1 1 1. This partial sum is now complemented to get the checksum. So here we get
the checksum 0 1 1 1 0 0 0 0. So this checksum is sent along with the four segments. In other
words, along with the data segments we are sending an additional checksum segment of m bits.
These are now received at the receiving end and the checking is
done in this manner. All the received segments are added using ones complement
arithmetic to get the sum.
(Refer Slide Time: 22:54)

As you can see here the first part is the same as the previous case. We have assumed here that
no error has taken place, so the first two segments are added to get this partial sum, then
the third segment is added to get this partial sum, then the fourth segment is added to get
this partial sum, and then the additional segment, which is essentially the checksum bits, is
added to get the final sum. Now the sum is complemented, and if the result is all 0 then it is
assumed that no error has taken place and the conclusion is to accept the data. If it is not
all 0 then the data is not accepted, and it is considered that some error has occurred in one
or more of the bits.
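The same procedure can be sketched in a few lines of Python (my own helper names; the segment values are illustrative, not the slide's): segments are added with end-around carry, the sum is complemented to give the checksum, and at the receiver the complemented sum over data plus checksum must be all zeros.

MASK = 0xFF                                    # m = 8 bit segments

def ones_complement_sum(segments):
    s = 0
    for seg in segments:
        s += seg
        if s > MASK:                           # end-around carry
            s = (s & MASK) + 1
    return s

def make_checksum(segments):
    return (~ones_complement_sum(segments)) & MASK

def verify(segments_with_checksum):
    return (~ones_complement_sum(segments_with_checksum)) & MASK == 0

data = [0b10110011, 0b01101110, 0b00110101, 0b10101010]    # k = 4 segments
checksum = make_checksum(data)
print(format(checksum, "08b"), verify(data + [checksum]))  # checksum and True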

(Refer Slide Time: 23:52)


Let’s see the performance of checksum. Like single parity or two dimensional parity it
can detect all errors involving odd number of bits. So here the performance is same as the
simple parity check. However, it can also detect most of the errors involving an even number
of bits; that means burst errors can also be detected, and when the number of errors is even
it can detect them in most cases, but not in all. So we find that the performance of checksum
is comparable to that of two dimensional parity check, but the overhead is a little less.

As you can see here (Refer Slide Time: 24:07), the overhead is less than that of two
dimensional parity check. Here the data was 32 bits, so we send 32 bits plus 8 bits, that
means 40 bits, compared to 45 in the case of two dimensional parity check. So in this case we
have some extra overhead, but less than two dimensional parity check.

(Refer Slide Time: 24:53)

Now let us come to the fourth scheme and the final scheme for error detection that is the
Cyclic Redundancy Check. The Cyclic Redundancy Check is possibly one of the most
powerful and commonly used error detecting codes. We shall see why it is so. We shall
see that Cyclic Redundancy Check gives you the best performance in terms of the error
detection quality. The basic approach is given here.

Given an m-bit block of data, the sender generates an n-bit sequence of additional bits known
as the Frame Check Sequence, FCS. So if your block of data is m bits, an additional n-bit
sequence is calculated based on some scheme, so that the resulting frame consisting of m plus
n bits is exactly divisible by some predetermined number. Here is the most important part of
the story: we are using a very carefully chosen predetermined number to generate these n bits,
and as we shall see these bits are generated by division. The parity check and the checksum
use addition; here we use division to generate the Frame Check Sequence. The receiver divides
the incoming frame by that
number and if there is no remainder it assumes that there was no error. So let’s see how it
is being done. So here you have got the data of m-bits then additional 0s are concatenated
with it so (Refer Slide Time: 26:50) this is divided by n plus 1 bit divisor.

(Refer Slide Time: 26:51)

This is the carefully chosen (n plus 1)-bit sequence which is used as the divisor to divide
the m plus n bit sequence, and it gives you n bits. As you know, whenever you divide by an
(n plus 1)-bit number we get a remainder of n bits, and that n-bit remainder is used as the
CRC or Cyclic Redundancy Check bits. The n zeroes are replaced by the CRC to get the data to
be sent over the transmission media. Through the transmission media that is sent, and at the
receiving end the receiver receives the m plus n bits, data plus CRC, and it is again divided
by the same divisor; the divisor has to be the same in both cases, at the transmitting and the
receiving end. After division it generates a remainder, and if the remainder is 0 then the
data is accepted as correct, and if it is not 0 then it is rejected. So we find that at the
receiving end as well as at the transmitting end division is performed, to generate the CRC
and also to check whether the received data including the CRC is correct or not, and the data
is accepted or rejected based on whether the remainder is 0 or not. This division is based on
Modulo-2 arithmetic, and how it is done is shown with this example.
(Refer Slide Time: 28:38)

So here it is assumed that data is of only four bits 1 0 1 0 and divisor is 1 0 1 1. Although
I have shown here four bits for simplicity the data length can be 8-bit, 16-bit or even
much longer and as we have seen then the divisor is n plus 1 bits so n zeroes are added
with it: 1 0 1 0 followed by 0 0 0, and this is divided by the divisor. The division process
is simple. Since the leading bit is 1, the quotient bit is 1, so 1 0 1 1 is written below and
a bit by bit Exclusive OR operation is performed to get the partial remainder (Refer Slide
Time: 29:25).

One bit is excluded because the working remainder has to be n bits, so we have 0 0 1, and the
next bit is brought down. Since the next quotient bit is 0, 0 0 0 0 is used and again the
Exclusive OR operation is performed to get 0 1 0 0; the next bit comes down, and again the
quotient bit is 0, so after the Modulo-2 arithmetic we get 1 0 0, and then the last 0 comes
down (Refer Slide Time: 30:02). Finally the quotient bit is 1, so 1 0 1 1 is written below,
and by Modulo-2 arithmetic we get the remainder 0 1 1; the quotient is 1 0 0 1.
(Refer Slide Time: 30:22)

So this remainder, as we know, is used as the CRC to give you the data to be sent. Here we get
the data plus the CRC; this is the CRC, and here we can see how it is generated by Modulo-2
division (Refer Slide Time: 30:24). This data plus CRC is sent to the receiving end, and at
the receiving end, in place of 1 0 1 0 0 0 0, the received 1 0 1 0 0 1 1 is divided by
1 0 1 1. You will find that if we do the division in the same way the remainder will be 0 0 0.
So this is how Modulo-2 division is performed, to generate the CRC at the transmitter and to
check whether the received data is correct or not at the receiving end.
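Here is an illustrative Python sketch of the same Modulo-2 division (not the lecturer's program; the function names are mine), using the data 1 0 1 0 and the divisor 1 0 1 1 from the example:

def mod2_remainder(bits, divisor):
    bits = bits[:]                           # work on a copy
    n = len(divisor) - 1
    for i in range(len(bits) - n):
        if bits[i] == 1:                     # quotient bit is 1: XOR in the divisor
            for j in range(len(divisor)):
                bits[i + j] ^= divisor[j]
    return bits[-n:]                         # last n bits are the remainder

def crc_encode(data, divisor):
    n = len(divisor) - 1
    return data + mod2_remainder(data + [0] * n, divisor)

def crc_check(codeword, divisor):
    return all(b == 0 for b in mod2_remainder(codeword, divisor))

divisor = [1, 0, 1, 1]
codeword = crc_encode([1, 0, 1, 0], divisor)
print(codeword, crc_check(codeword, divisor))    # [1, 0, 1, 0, 0, 1, 1] True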

(Refer Slide Time: 32:13)


Actually the same approach can be expressed in a different way. The 1 0 sequence can be
represented or quite often it is represented by a polynomial. So here we find a bit
sequence 1 1 0 0 1 can be represented as a polynomial of a dummy variable x. So this
corresponds to the bit x to the power 0 so this is 1, this corresponds to the bit x to the
power 1 (Refer Slide Time: 31:46) and since it is 0 it does not appear in the polynomial,
this bit is also 0 it does not appear in the polynomial then this corresponds to x to the
power 3 so this is present here then this bit corresponds to x to the power 4 so
corresponding to each of these ones there is a factor in the polynomial.

On the other hand the zeroes do not give you any term in the polynomial. So we find that this
is a more concise way of representing the bit sequence, and in particular the division
operation can be expressed as follows: the message M(x), which comprises m bits, is multiplied
by x to the power n, which makes it m plus n bits, and that is divided by a standard
polynomial known as the characteristic polynomial P(x); we get a remainder, and that remainder
is used as the CRC check bits.

There are some standard polynomials, chosen on the basis of strong mathematical properties,
which are used for the purpose of generating the CRC check bits. For example, CRC-16 is
commonly used in ATM and it has got some other applications. It has degree 16 but only four
terms, as you see: x to the power 16, x to the power 15, x to the power 2, and 1; the
remaining coefficients are zero. CRC-CCITT is used in many applications; it is also of degree
16 and it also has four terms. These polynomials are chosen such that, at the minimum, they
are not divisible by x but are divisible by x plus 1; these are the two minimum properties
that are satisfied, and based on that they are constructed. And there is another polynomial of
degree 32, with x to the power 32, x to the power 16 and so on. So we see this is a concise
way of representing the bit sequence in the form of a polynomial of a dummy variable x.

(Refer Slide Time: 34:30)


Now how do you really perform the division?
I have shown how the division can be done manually. Question arises how the machine
will do it both at the transmitting and at the receiving end. This can be done with the help
of hardware known as a Linear Feedback Shift Register, or LFSR in short. It is a shift
register with some feedback connections, and this Linear Feedback Shift Register can be used
to generate the Frame Check Sequence or CRC; at the receiving end it can be used for checking
whether the received information is correct or not. The Linear Feedback Shift Register will
have n flip-flops and at most n Ex-OR gates; the number of Ex-OR gates can be fewer, but the
number of flip-flops will be exactly n. The LFSR divides the message polynomial by a suitably
chosen divisor polynomial. If the divisor polynomial is of degree n then the LFSR will have n
flip-flops, and the remainder will constitute the n bits of the Frame Check Sequence. Let us
see how exactly it is done.

(Refer Slide Time: 36:35)

This is a simple characteristic polynomial of degree 3. Since it is of degree 3, obviously the
number of flip-flops will be 3, that is C0, C1 and C2, and here we find two Exclusive OR gates
are used in the feedback path, corresponding to the x to the power 1 and x to the power 0
terms; x to the power 3 provides the feedback, so there is no need for another Exclusive OR
gate there. These two Exclusive OR gates and three flip-flops make up the necessary hardware
for performing the polynomial division. Let's see how exactly it can be done.

This is the initial condition (Refer Slide Time: 36:59) where these flip-flops are
initialized with the value 0 0 0 so C0 C1 and C2 all bits are reset, these are reset pins these
are reset initially and as you can see the input here is 1 so the input to this flip-flop is 1 on
the other hand this bit is 0 and this bit is also 0 so input to this flip-flop is 0. That means
input to this and input to this flip-flop is given here on this column and this column and
since this output is directly connected this C1 output is directly connected to the input of
C2 .

On the other hand, to C0 the Exclusive OR of C2 and the input bit is applied, and to C1 the
Exclusive OR of C0 and C2 is applied. So after we apply one clock, whatever is at the input of
a flip-flop will be latched into that flip-flop, and the output of C1 will be latched into C2.
So let us apply one clock and see what happens: the value at each input is latched into its
flip-flop after applying one clock
(Refer Slide Time: 38:15) next we apply another clock and we find that this 0 will come
here, this 1 will come here and this 0 will come here so in this way we have to apply
seven clocks because we have got 7-bits so each of them has to be fed into….., in the step
3 again this 1 comes here, 0 comes here and 1 comes here and in step four same way this
1 comes here, 0 comes here and this 0 comes here, in step 5 again this 0 comes here this 1
comes here and this 0 comes here and these inputs are already fed and then finally these
seven inputs are fed. So, this 0 comes here, this 0 comes here and this 1 comes here and
finally we have to apply another clock to get the final CRC. So seven clocks are to be
applied 1 2 3 4 5 6 7.
(Refer Slide Time: 39:12)

So after applying seven clocks we get the result in C0, C1 and C2, and indeed this is the CRC
0 1 1. So the data to be sent is 1 0 1 0 0 1 1; that means the three 0s are replaced by 0 1 1
and we get 1 0 1 0 0 1 1, which is the data to be sent. Now at the receiving end, instead of
applying 1 0 1 0 0 0 0, the receiver will apply this 1 0 1 0 0 1 1 to the Linear Feedback
Shift Register, again applying seven clocks, and at the end you will see that the result will
be 0 0 0 if no error has occurred; if any error occurs then of course the remainder will not
be 0 0 0 but something else. So this is how the polynomial division is performed by using the
Linear Feedback Shift Register. Let us see the performance of CRC.
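Before looking at the performance, here is a small behavioural sketch of that three-stage LFSR (my own simulation, assuming the connections C0 <= input XOR C2, C1 <= C0 XOR C2, C2 <= C1 described above, with the message fed most significant bit first):

def lfsr_remainder(bits):
    c0 = c1 = c2 = 0                    # flip-flops initially reset
    for b in bits:                      # one clock per message bit
        c0, c1, c2 = b ^ c2, c0 ^ c2, c1
    return [c2, c1, c0]                 # remainder, high-order bit first

print(lfsr_remainder([1, 0, 1, 0, 0, 0, 0]))   # data plus three 0s -> CRC [0, 1, 1]
print(lfsr_remainder([1, 0, 1, 0, 0, 1, 1]))   # data plus CRC -> [0, 0, 0], no error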

(Refer Slide Time: 40:18)


Cyclic Redundancy Check can detect all single-bit errors; this is similar to the simple parity
check. It can detect all double-bit errors provided there are at least three ones in the
characteristic polynomial, and in the standard characteristic polynomials we have seen that
the total number of ones is indeed three or more, so the CRC can detect all double-bit errors.
CRC can detect any odd number of errors provided the characteristic polynomial is divisible by
x plus 1, and the polynomials are chosen such that they are divisible by x plus 1; so it can
detect single-bit errors, double-bit errors and any odd number of errors. Not only that, it
can detect all burst errors of length less than the degree of the polynomial; if the degree of
the polynomial is n, it will detect all burst errors of length less than n.

We have seen that the CRCs are usually 16-bit or 32-bit. So in the 16-bit case burst errors of
up to 15 bits can be detected, and moreover CRC can detect most of the larger burst errors.
That means if the degree of the polynomial is 16, burst errors of more than 15 bits can also
be detected with high probability, although not all of the larger burst errors can be
detected.

For example for CRC-12 that means whenever degree of the polynomial is 12 it will
detect 99.97% of all errors with length 12 or more so we find that the CRC gives you a
very powerful error detection scheme and that is the reason why it is so widely used.
Now let us shift gear and come to the last topic that is error correction.

(Refer Slide Time: 43:43)

In error correction two basic approaches are used. The first approach is backward error
correction, which works like this: when an error is detected in a frame, the sender is asked
to retransmit the data or frame. That means the receiver detects the error by one of the
schemes we have discussed, then it sends information to the sender and asks the sender to
retransmit the frame once again. This approach is known as Automatic Repeat Request, and the
scheme is called backward error correction because we are going back to the sender for
transmission once again. This particular ARQ scheme will be discussed in our next lecture. In
this lecture we will be focusing on forward error correction.

In forward error correction, more redundancy is used so that at the receiving end not only
error detection but also error correction can be performed on the received data. It is based
entirely on the received data, not on retransmission; after the data is received, error
correction is performed using the additional or redundant bits. Let us see what the basic
ideas are. The basic idea is to have more redundancy, based on the following property.

For example, for error detection the requirement is that a code is an error detecting code if
and only if the minimum distance between any two code words is two. All possible words can be
divided into two sets, code words and non-code words. Whenever you choose two code words, say
one is 1 0 1 0 1 0 1 1, the other code word must be at a distance of two; what do we really
mean by that? They must differ in two bit positions, say the other one is 1 1 0 1 1 0 1 1, so
here we find they differ in these two positions. Whenever the code words differ in two bit
positions, if an error occurs in any single bit the result will not be a code word but a
non-code word, and that is how the error is detected.

However, if the distance were only one, a single-bit error could turn one code word into
another code word and go undetected. With a minimum distance of two, any single-bit error
generates a non-code word, and that is the basis for error detection. In other words, the code
words are selected in such a way that they differ in at least two bit positions.

(Refer Slide Time: 46:10)


If you check the codes that are generated by adding a parity bit you will find that this
property is satisfied: any two code words differ in at least two bits. But for error
correction the requirement is more. The minimum distance between any two code words must be
more than two, and the number of additional bits should be such that they can point to the
position of the bit in error. Here the requirement is that we are not only interested in
detecting the error but also in identifying where the error has occurred. So the additional
bits are to be selected such that it is possible to point to the bit in error. If k additional
bits are used then they can point to 2 to the power k positions, and the requirement is that 2
to the power k is at least m plus k plus 1; this condition has to be satisfied, and only then
can the error correction be done. This table shows the requirement: the number of data bits
varies from 1 to 7, and the number of redundant bits required to satisfy the condition is
given here.

(Refer Slide Time: 47:31)

For one data bit the total number of bits has to be 3 including the redundancy bits but for
seven data bits the redundancy required is four having total number of bits as 11. So, if
the number of data bits is 16 it can be shown that we will require 5 bits that means m plus
k in this case is 21. To point to one of the 21 bits you will require 5 bits so total number
of bits will be 21 whenever your number of data bits is 16. Let us see how exactly it is
done.
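The condition 2 to the power k >= m + k + 1 can be checked with a few lines of code (an illustrative sketch, not from the slides):

def redundant_bits(m):
    """Smallest k with 2**k >= m + k + 1 (single-error-correcting Hamming code)."""
    k = 1
    while 2 ** k < m + k + 1:
        k += 1
    return k

for m in (1, 4, 7, 16, 32):
    print(m, redundant_bits(m))   # 1 -> 2, 7 -> 4, 16 -> 5, 32 -> 6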
(Refer Slide Time: 49:02)

So, to each group of m information bits, k parity bits are added to form an (m plus k)-bit
code, and the location of each of the m plus k digits is assigned a decimal value. The k
parity bits are placed at positions 1, 2, 4 and so on up to 2 to the power (k minus 1), that
is, at the positions which are powers of two. Then k parity checks are performed on selected
digits of each codeword, and at the receiving end the parity checks are recalculated. The
decimal value of the k recalculated parity check bits gives the bit position in error, if any.
Let us see how exactly it is done with the help of an example.

(Refer Slide Time: 51:01)

Let's assume that your data is of four bits, say 1 0 1 0. What is done is that the parity bits
p1, p2 and p3 are placed in positions one, two and four, and the data bits d1, d2, d3 and d4
occupy the remaining positions. As you can see, with the help of these three parity bits we
have to identify the position in error. Let's label the positions (Refer Slide Time: 50:00)
1 2 3 4 5 6 7; seven positions have to be pointed to with the help of the parity bits by a
code, and here the codes are shown. The code 0 0 0 means there is no error; when it is 0 0 1
it points to position 1, when it is 0 1 0 it points to position 2, and so on. The parity bit
p1 is calculated using the bit positions 1, 3, 5 and 7, p2 is calculated using 2, 3, 6 and 7,
and p3 is calculated using 4, 5, 6 and 7. Let's see how it is done in this example.

So p1 has to be calculated using positions 1, 3, 5 and 7; the data bits in these positions are
given, and already two of them are one, an even number of ones, so p1 has to be 0 (Refer Slide
Time: 51:24). For positions 2, 3, 6 and 7 the number of ones is odd, so p2 has to be 1 to make
it even, and for positions 4, 5, 6 and 7 it is already even, so p3 will be 0. This is the
codeword generated by calculating the parity bits, and then it is sent. Suppose an error has
occurred here and this bit has become 0; this is the data that is received at the receiving
end.

Now you calculate the parity checks in the same manner at the receiver; here we get c1, c2 and
c3. Using positions 1, 3, 5 and 7 we find the number of ones is odd, so c1 has to be 1. Then
for positions 2, 3, 6 and 7 we find it is already even, so c2 is 0. For positions 4, 5, 6 and
7 it has become odd, so c3 has to be 1 again. So we find c3 c2 c1 is 1 0 1, which is the
binary number 5; that means it is pointing to position 5, so the corrected data has to be
1 0 1 0 0 1 0. With this simple example we have illustrated how the codeword is generated
using Hamming code, which is known as the Hamming code scheme, and how the correction is done
at the receiving end. This is how error correction is performed by using Hamming code.
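A compact sketch of this Hamming (7,4) procedure is given below (even parity, parity bits at positions 1, 2 and 4; the bit ordering inside the list is my own convention and may differ from the slide's layout, so the printed codeword need not match the slide bit for bit):

def hamming_encode(d):                          # d = [d1, d2, d3, d4]
    code = [0, 0, d[0], 0, d[1], d[2], d[3]]    # positions 1..7 stored at indices 0..6
    code[0] = code[2] ^ code[4] ^ code[6]       # p1 checks positions 1, 3, 5, 7
    code[1] = code[2] ^ code[5] ^ code[6]       # p2 checks positions 2, 3, 6, 7
    code[3] = code[4] ^ code[5] ^ code[6]       # p3 checks positions 4, 5, 6, 7
    return code

def hamming_correct(code):
    c1 = code[0] ^ code[2] ^ code[4] ^ code[6]
    c2 = code[1] ^ code[2] ^ code[5] ^ code[6]
    c3 = code[3] ^ code[4] ^ code[5] ^ code[6]
    pos = c3 * 4 + c2 * 2 + c1                  # decimal value of c3 c2 c1 points to the error
    if pos:
        code[pos - 1] ^= 1                      # flip the bit in error
    return code, pos

sent = hamming_encode([1, 0, 1, 0])
received = sent[:]
received[4] ^= 1                                # corrupt position 5 in transit
corrected, pos = hamming_correct(received)
print(pos, corrected == sent)                   # prints 5 True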

(Refer Slide Time: 53:17)


Therefore in this lecture we have discussed how error detection is done by using schemes such
as parity check, two dimensional parity check, checksum and CRC, and how error correction is
performed using Hamming code. Obviously only single-bit error correction can be performed by
the Hamming code described here; when the number of bits in error is more, you have to add
more redundancy. Now it is time to consider the review questions.

(Refer Slide Time: 54:44)

1) Compare and contrast the use of parity bit and checksum for error detection.

2) Draw the LFSR circuit to compute a four bit CRC with the polynomial x to the power
4 plus x to the power 2 plus 1.

3) Obtain the four bit CRC code word for the data bit sequence 1 0 0 1 1 0 1 1 1 and the
left most bit is least significant using the generator polynomial given in problem 2. That
means this is the generator polynomial.

4) How many redundant bits are to be sent for correcting a 32 bit data unit using
hamming code?

These questions will be answered in the next lecture.


(Refer Slide Time: 55:48)

1) Distinguish between half-duplex and full-duplex transmission.

In half-duplex communication, data communication in both directions between the systems is
performed using a single line, so it is not possible to perform simultaneous transmission in
both directions. It has to be done by using a protocol, as we saw in the last lecture with the
example of how a policeman communicates with his head office: one end says 'over' and then the
other end starts talking.

So, data transmission takes place in one direction at a time using a suitable protocol.
However, in full-duplex communication two separate lines are used for transmission in the two
directions, so simultaneous transmission in both directions is possible.
(Refer Slide Time: 56:01)

2) Why serial transmission is commonly used in data communication?

Some of the important reasons are given below:

 Reduced cost of cabling


 Reduced cross talk
 Availability of suitable communication media
 Inherent device characteristics and
 Smaller size of the connector

(Refer Slide Time: 56:59)


3) Distinguish between asynchronous and synchronous serial modes of data
communication.

In the asynchronous serial mode of data transfer the data is transmitted in small units or
frames along with a start bit, parity bit and stop bits, and as we have seen the maximum
number is 8. This mode of transmission is self synchronizing with moderate timing
requirements. The major drawback is the high overhead; it can be 40% overhead as we have seen.

In synchronous mode the clock frequency of the transmitter and the receiver should be
same and hence complex hardware is required to maintain this rigorous timing. But this
approach is efficient with much less overhead.

(Refer Slide Time: 57:13)

4) What is null modem and when can it be used?

A null modem is nothing but a cable with two female connectors, one at each end, so no active
component is present (Refer Slide Time: 57:20). It can be used when two DTEs are in the same
room: the same interface may be used, but there is no need for any DCE, and in such a case the
concept of the null modem is used.
(Refer Slide Time: 58:16)

5) In what way standard modems differ from 56K modems?

These modems allow data transmission at the rate of 56 Kbps in the downlink direction.
However, they can only be used when one of the two parties, such as the Internet service
provider, is using digital signaling. Data transmission is asymmetrical: from the user to the
Internet, that is in the uploading direction, it is a maximum of 33.6 Kbps, but from the
Internet to the user, where digital signaling can be used, it is 56 Kbps.

With this we come to the end of today’s lecture, thank you.


Data Communications
Department of Computer Science & Engineering
Prof. A. Pal
Indian Institute of Technology, Kharagpur
Flow and Error Control
Lecture - 16

Hello viewers, welcome to today's lecture on flow and error control. In the last lecture we
discussed various error detection techniques and also a technique for forward error correction
using Hamming code. Now, for successful data communication it is necessary to use some
techniques so that data communication can be performed in a reliable and efficient manner. In
this context two techniques, flow control and error control, are very important. In this
lecture we shall discuss the flow and error control techniques. Here is the outline of today's
lecture.

(Refer Slide Time: 01:48)

First we will discuss why flow and error control are necessary, then we will discuss the two
important flow control techniques, stop-and-wait flow control and sliding-window flow control,
and we shall compare the performance of these two flow control techniques. Then we shall see
how we can perform backward error correction using three techniques: stop-and-wait ARQ,
go-back-N ARQ and selective-repeat ARQ. We shall consider these techniques one after the
other.
(Refer Slide Time: 2:38)

And on completion of this lecture the students will be able to explain the need for flow and
error control; they will be able to explain the stop-and-wait flow control technique and how
it works; they will be able to explain how the sliding-window protocol is used for flow
control; then they will be able to explain how stop-and-wait ARQ works, how go-back-N ARQ
works and how selective-repeat ARQ works; and they will also be able to compare the relative
performance of these three techniques. First let us focus on why flow and error control are
needed.

(Refer Slide Time: 03:25)


As I mentioned, for successful data communication we have to use certain techniques. We know
that data communication requires two devices or machines, a sender and a receiver, and a great
deal of coordination between these two devices or machines is needed for reliable and
efficient data communication.

Let us see what constraints are involved. The first constraint is that both sender and
receiver have limited speed: they require some time to receive data, to process data and to
store data.

Similarly for both sender and receiver have limited memory for storage capability and
usually it is known as buffer because whenever data communication takes place usually
the data that is being received is temporarily stored in memory called buffer so that
capacity is limited. Thus under these constraints the requirements are; a fast sender
should not overwhelm a slow receiver which must perform a certain amount of
processing before passing the data on to the higher level software. That means after the
data is received the receiver has to do some processing before it can pass on to next
higher level software or store it in a permanent memory.

Now we know that there are different kinds of machines. For example, a server is usually a
high end machine which can perform processing at a higher speed. On the other hand the client
or receiver can be a low end machine, so its speed or processing capability is lower compared
to the server. In such a situation the sender should not send data at such a rate that the
receiver is overwhelmed; you have to protect the receiver from being overwhelmed, and for that
purpose the sender must send data in a controlled way, because otherwise there will be
overflow from the buffer and information will be lost. This controlled way is known as flow
control. That means the receiver sends some acknowledgement before the sender sends more
data. By using that kind of coordination, overwhelming the receiver can be avoided and data
communication can take place successfully without any loss of information.

Another issue is that if an error occurs during transmission it is necessary to devise a
mechanism to correct it. We have already discussed the error detection techniques and also one
forward error correction technique using Hamming code. But forward error correction is not
commonly used. The most commonly used technique is that the receiver checks the received data,
and if an error is found it informs the sender that an error has occurred in the received
data; then the sender retransmits the data. This technique is known as backward error
correction or error control.

Hence for reliable and efficient communication we have to perform flow and error
control. So in this lecture we shall discuss about these things. First we shall focus on the
simplest flow control the stop-and-wait flow control.
(Refer Slide Time: 07:40)

Although flow control and error control are usually performed in an integrated manner, for the
sake of understanding we shall consider them one after the other.

In this stop-and-wait flow control the source transmits a data frame. That means sender
transmits a data frame and after receiving the frame the destination that means the
receiving end indicates its willingness to accept another frame by sending back an
acknowledging frame. So the receiver after receiving that frame will send an
acknowledgement and it will indicate that the frame has been received already so another
frame can be accepted as it is ready for receiving another frame. The source must wait
until it receives the acknowledgement frame before sending the next data frame. This is
the basic idea behind the stop-and-wait flow control. Let us see how it really works with
the help of this animation.
(Refer Slide Time: 08:58)

So here is the sender side and here is the receiver side and a frame is ready for
transmission and it is going to be sent by the sender. now it is taking some time to reach
the receiver known as the propagation time and after receiving the frame the receiver will
take some time to do some processing then it will send an acknowledgement and the
acknowledgement will reach the sender and after it is received by sender the sender will
then send another frame. Hence there is a waiting time involved before another frame can
be sent.

Similarly, another frame is received by the receiver and it will send another
acknowledgement and after receiving that acknowledgement again the sender will send
another frame as you can see from this animation. So here as you can see (Refer Slide
Time: 9:55) between two frames there is a waiting time or wait time involved so some
time is wasted before another frame can be sent. This ensures that the receiver is not
overwhelmed. This is the basic idea behind stop-and-wait flow control.

Let us see the performance of this stop-and-wait flow control technique. For that purpose
we have to understand about two important parameters. One is known as transmission
time.
(Refer Slide Time: 10:33)

The transmission time is the time it takes for a station to transmit a frame. The time for
transmission of a frame depends on the length of the frame and also on the rate at which the
transmission is taking place. Let us assume that the size of the frame is 10 to the power 3
bytes, that means 1 kilobyte, and the rate at which it is being sent is 10 Mbps. When it is
sent at this rate the transmission time will be 8 x 10 to the power 3 bits divided by 10 to
the power 7 bits per second, which is of the order of 1 millisecond.

Now let us come to the propagation time. The propagation time is the time it takes for a
bit to travel from sender to receiver and this time is obviously dependent on the distance
and the speed of the electromagnetic wave. Therefore longer the distance more will be
the propagation time. For example, whenever we use satellite communication then the
signal goes from the base station to the satellite and back to the ground station and as we
know it takes about a quarter of a second; and whenever we are using, say, fiber optic
communication the length of the cable can be very long, and in such a situation the
propagation time can be long. The ratio between the two is known as 'a'; that is, 'a' is the
ratio of the propagation time to the transmission time. Usually the transmission time is
normalized to the value 1. Whenever 'a' is less than one, the frame is sufficiently long that
the first bit of the frame arrives at the destination before the source has completed
transmission of the frame. That means here is the sender (Refer Slide Time: 12:44) and here is
the receiver, and the sender has started the transmission.

In such a situation the transmission will not be complete before the first bit reaches the
destination; that means in this case the transmission time is longer than the propagation
time. On the other hand, whenever 'a' is greater than 1, in such a case
what will happen is, this is the sender’s side and this is the receiver side so before it
reaches the other end the transmission is over. For example, the scenario will be
somewhat like this (Refer Slide Time: 13:22). That means this is the frame and before
this end has reached the receiver the transmission has been completed and some time has
already passed. This will be case whenever ‘a’ is greater than 1.

And depending on this value of 'a', which is the ratio of propagation time to transmission
time, it can be shown that the utilization of the link is equal to 1/(1 + 2a). So you see that
the higher the value of 'a', the poorer the utilization, and the smaller the value of 'a', the
better the utilization. Hence we can see that whenever we are communicating over a long
distance there is a possibility of inefficient utilization in the case of stop-and-wait flow
control. On the other hand, when the propagation time is small the utilization will be better.
This problem can be overcome by using the sliding-window protocol.
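Before moving on, here is a quick numerical sketch of that formula (the frame size matches the example above; the distances and the propagation speed of 2 x 10^8 m/s are my own illustrative assumptions):

def stop_and_wait_utilization(frame_bits, data_rate_bps, distance_m,
                              propagation_speed=2e8):
    t_trans = frame_bits / data_rate_bps       # transmission time
    t_prop = distance_m / propagation_speed    # propagation time
    a = t_prop / t_trans
    return a, 1.0 / (1 + 2 * a)                # utilization U = 1 / (1 + 2a)

print(stop_and_wait_utilization(8000, 10e6, 1_000))        # short link: U close to 1
print(stop_and_wait_utilization(8000, 10e6, 1_000_000))    # long link: U much lower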

(Refer Slide Time: 14:18)

The main limitation of the stop-and-wait protocol is that only one frame at a time is in
transit. Normally a single message is divided into a number of frames; short frames are used
so that the transmission time is small, so that errors are less likely, and for various other
reasons. In such a case the stop-and-wait flow control will not be efficient. So when we use
multiple frames for a single message, as is the normal scenario, the stop-and-wait protocol
does not perform well. The main reason is that only one frame at a time can be in transit, and
that limitation is overcome in the sliding-window protocol, as we shall see.

As you have seen, when 'a' is greater than 1 serious inefficiencies result in the
stop-and-wait protocol. Efficiency can be greatly improved by allowing multiple frames to be
in transit at the same time. That means if multiple frames are allowed to be sent before an
acknowledgement is received from the other end, efficiency can be improved, and that is
precisely what is done in the case of sliding-window flow control.
Moreover efficiency can be further improved by making use of the full-duplex line. As
we shall see acknowledgement can be sent from the receiver side as part of information
frame known as piggybacking.
Therefore piggybacking technique is used which will improve the efficiency of
transmission. Let us see how this sliding-window protocol works.

(Refer Slide Time: 16:38)

Now, first of all, since we are sending multiple frames we must keep track of them. Obviously,
unless the frames are numbered it will be difficult to keep track of which frame has reached
the destination and which frame has not. So the frames are sequentially numbered and then
sent. And since the sequence number occupies a field in the frame, it should be of limited
size.

Obviously the sequence number should be part of the frame. The sequence number that is
embedded will be one of the fields of the frame, and the number of bits used for representing
the sequence number is an overhead; that is why there is some restriction on the number of
bits that can be used. If the header of the frame allows k bits then the sequence numbers
range from 0 to 2 to the power k minus 1; that means there are 2 to the power k possible
numbers whenever you are using k bits for the sequence number.
(Refer Slide Time: 16:32)

The sender maintains a list of sequence numbers that it is allowed to send. That means a
window is maintained at the sender's end, which is essentially the set of frames that can be
sent; say, starting with 0, if you are using 3 bits the sequence numbers can go from 0 up to
7. You will see that the window size is usually not 2 to the power k but 2 to the power k
minus 1, one less than this; that means, for example, frames 0 to 6. This is the range of
frames which can be sent by the sender without receiving an acknowledgement. The sender is
provided with a buffer equal to the window size; that means the sender has a buffer to store
all these frames, and then it sends them one after the other. Similarly, at the receiving end
a window is also maintained, but the size of that window is equal to 1.

The receiver acknowledges a frame by sending an acknowledgement frame that includes the
sequence number of the next frame expected. That means, suppose frame 0 has been received by
the receiver; then it will send an acknowledgement ACK1, where 1 is the frame that it is
expecting next. In this way it explicitly announces that it is prepared to receive the next N
frames beginning with the number specified, that is, beginning with frame 1 and going up to
the window size of 2 to the power k minus 1 frames, one after the other; this readiness is
conveyed by the receiver. This scheme can also be used to acknowledge multiple frames.

It is not necessary that the receiver send an acknowledgement for each frame received. It is
possible to send a single acknowledgement for a number of frames. How can it be done? Let us
see.

Suppose it has received three frames, two, three and four, but it holds the acknowledgement
until frame 4 has arrived; now, by returning an acknowledgement ACK5 with sequence number 5,
it acknowledges frames 2, 3 and 4 at one time. The sender will now transmit frame 5, and at
the receiver, as I said, since the window size is 1, a buffer of size only 1 is required,
because at a time it receives one frame and then passes it on to the next higher level. Let us
see how it actually works.
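Before the walk-through, here is a tiny simplified sketch (my own bookkeeping, not a full protocol) of how such a cumulative acknowledgement advances the sender's window:

class SenderWindow:
    def __init__(self, seq_bits=3):
        self.modulus = 2 ** seq_bits          # sequence numbers 0 .. 2^k - 1
        self.size = self.modulus - 1          # window size 2^k - 1
        self.base = 0                         # oldest unacknowledged frame
        self.next_seq = 0                     # next frame number to send

    def can_send(self):
        return (self.next_seq - self.base) % self.modulus < self.size

    def send(self):
        seq = self.next_seq
        self.next_seq = (self.next_seq + 1) % self.modulus
        return seq

    def on_ack(self, ack):                    # ACK n: all frames before n received
        self.base = ack % self.modulus

w = SenderWindow()
sent = [w.send() for _ in range(5)]           # frames 0..4 in transit
w.on_ack(5)                                   # a single ACK5 covers frames 2, 3 and 4 too
print(sent, w.base, w.can_send())             # window slides forward; sending can resume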

(Refer Slide Time: 20:55)

So as we can see here we are using k equal to 3, so our window size is 2 to the power k minus
1, that is 7; here you can see 0 1 2 3 4 5 6, this is the window at the sender's end, while at
the receiver's end the window size is only 1, so it is now ready to receive frame 0. The
sender sends frame 0, and as it sends frame 0 the window shrinks by 1; that means it now has
six more frames to send, 1 2 3 4 5 6. So when a frame is sent the window shrinks, and on the
other hand when an acknowledgement is received the window is expanded on the other side.

After receiving the frame, the receiver's side moves the window from 0 to 1 because
now the frame 0 has been received and now it is ready to receive the frame number 1. So
as the frame number 1 is sent again the window shrinks by 1 so it is 2 3 4 5 6 so these are
the frames which it can send now.

Suppose the receiver’s side sends acknowledgement ACK2 that means it is now ready to
receive the frame 2. When this is received the window has been extended by 2. Therefore
earlier it was ending with six now it is seven and so again the window is now having
seven frames seven numbers 2 3 4 5 6 7 and 0 and now it will send frame number 2 from
this end. So frame number two is sent again the window is being extended to 3 so in this
way it goes on.

A piggybacking technique is used whenever two sides are communicating with each
other.
(Refer Slide Time: 23:12)

Another point is that the window size need not be equal to 2 to the power k minus 1; this is
only the maximum size that can be used. In practice one can use less than that, but in that
case the overhead increases: if you use more bits than necessary to represent the sequence
number, the overhead increases, so usually we try to use the maximum, that is, a window size
of 2 to the power k minus 1.

Now, if two stations exchange data, each needs to maintain two windows, one for transmission
and one for reception. To save communication capacity a technique known as piggybacking is
used. How does it work? Each data frame includes a field that holds the sequence number of
that frame plus a field that holds the sequence number used for acknowledgement. That means
each side will have two fields: one field is the sequence number, that is, the frame number,
and the other is the acknowledgement number. The acknowledgement number is piggybacked as part
of the data frame that is being sent, and as a result we need not send a separate
acknowledgement frame for this purpose. The saving in bandwidth achieved in this way is known
as piggybacking.

However, if a station has an acknowledgement to send but no data, it sends a separate
acknowledgement frame. It is not always possible for both sides to send acknowledgements by
piggybacking, because if a particular station has no data frame to send then it has to send a
separate acknowledgement frame after receiving the information frames. Let us see the link
utilization in the case of the sliding-window protocol.
(Refer Slide Time: 25:30)

Here the utilization will be 1 if N is greater than or equal to 2a plus 1, where N is the
window size and 'a' is the ratio between the propagation time and the transmission time. Let
us take an example. Suppose we take k equal to 5 bits, so that the value of N is 2 to the
power 5 minus 1, which is 31. This is the window size (Refer Slide Time: 26:00). Now suppose
the value of a is equal to 10; then the utilization will be 1, because N is greater than 2a
plus 1: 2a plus 1 is 21 and 31 is greater than 21, so in this case the utilization will be 1.

On the other hand let us assume the propagation time is very long and as a consequence a is
equal to 200, which can happen in satellite communication. Whenever that happens the
utilization becomes N divided by (1 plus 2a), that is, 31 divided by 401, which is roughly
0.077. So you see the utilization is much less in such a situation, whenever you are
communicating over a longer distance, that means the propagation time is long, or you are
sending at a very high speed, in which case the transmission time will be short.

Therefore whenever N is greater than 2a plus 1, the sliding-window protocol is capable of
sending almost continuously, without waiting, because as we have seen it maintains a window,
and before the window is exhausted all the frames of the window are transmitted and an
acknowledgement is received, so the window keeps on expanding in one direction. In this way a
continuous flow of traffic is possible if the value of N is chosen such that it is greater
than 2a plus 1. So you see that link utilization can be significantly better in the case of
the sliding-window protocol.
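The rule used in these calculations can be written down directly (a small sketch using the lecture's numbers):

def sliding_window_utilization(N, a):
    """U = 1 when N >= 2a + 1, otherwise U = N / (2a + 1)."""
    return 1.0 if N >= 2 * a + 1 else N / (2 * a + 1)

print(sliding_window_utilization(31, 10))    # N = 31 > 21, so U = 1
print(sliding_window_utilization(31, 200))   # 31 / 401, roughly 0.077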

Now let us focus on error control. We shall consider the following model for backward error
control. Data is sent as a sequence of frames, as has already been discussed, and frames
arrive in the same order as they are sent. We assume that frames reach the destination in the
same order because we are considering the case where the two nodes A and B are linked by a
direct communication link, so in such a situation there is no possibility that the frames will
arrive out of order. However, each transmitted frame suffers an arbitrary and variable amount
of delay before reception; there can be some delay on the path for various reasons. In
addition to the above, the following two types of errors may occur. So far we have assumed
that when a frame is sent it reaches the destination without any error and without any loss;
however, errors and losses can happen.

First of all let us consider the case of a lost frame: a frame fails to arrive at the other
side. How can it happen? Whenever a frame is sent, in what situation can the frame fail to
reach the destination or get lost on the way? One possibility is that there is some noise, and
that noise may corrupt the frame to such an extent that the information becomes
unrecognizable. So a noise burst may damage a frame so badly that it is not recognizable at
the receiving end; in such a case we say that the frame is lost.

Another possibility is that the frame is damaged: an error occurs that is detected by a
suitable error detection scheme, so the frame is recognizable but some bit errors have
occurred. In such a case we call it a damaged frame. We have to take these two situations into
consideration whenever we consider error control. Most of the common techniques for error
control are based on some or all of the following.

(Refer Slide Time: 31:57)

First of all it uses error detection.


What are the components of error control?
First of all there should be some technique for error detection. We have already discussed
a number of techniques for error detection such as parity check, cyclic redundancy check and
checksum, so one of these techniques can be used. Normally the cyclic redundancy check is
used because we have seen that it is the most sophisticated one and gives much higher fault
coverage. Then the concept of positive acknowledgement is used: the destination returns a
positive acknowledgement, usually represented by ACK, for successfully received error-free
frames. On the other hand there can be a negative acknowledgement: the destination returns a
negative acknowledgement for frames in which an error is detected, and the source retransmits
such frames.

So whenever an acknowledgement comes in the form of a negative acknowledgement, the sender
has to retransmit the frame. Apart from that there is a need for retransmission after
time-out: the source retransmits a frame that has not been acknowledged after a predetermined
amount of time. That happens when a frame is damaged to such an extent that it is not
recognizable, so the receiver assumes that no frame has come. In such a case the sender uses
a time-out mechanism for retransmission. These are the basic techniques or components used
for Backward Error Control.

These error control techniques are collectively referred to as Automatic Repeat Request
(ARQ). As I mentioned in the beginning, our objective is to turn an unreliable data link into
a reliable one.

(Refer Slide Time: 33:20)

As we know there does not exist any transmission medium where errors cannot occur. Errors
can always occur, although the reliability of transmission media is improving with time;
there is always some probability of error. In such an error-prone environment our objective
is to perform reliable communication, and that can be done by using error control techniques.
There are three versions of ARQ techniques: stop-and-wait, based on stop-and-wait flow
control; go-back-N, based on the sliding-window protocol; and selective-repeat, which is also
based on the sliding-window protocol. These are the three ARQ techniques that are used, and
we shall discuss them one after the other.
Let us consider the stop-and-wait ARQ technique. As I mentioned it is based on the
stop-and-wait flow control technique. The source station transmits a single frame and then
waits for an acknowledgement, as is done for flow control. No other data can be sent until
the destination station's reply arrives at the source station; this is performed as part of
the flow control. Now, to take care of lost and damaged frames the stations are equipped with
another device known as a timer.

(Refer Slide Time: 35:04)

If no recognizable acknowledgement is received when the timer expires at the end of the
time-out interval then the same frame is sent again. That means after sending each frame the
sender starts a timer whose time-out interval is fixed based on the propagation time, that
is, the distance; if the timer reaches the time-out and no recognizable acknowledgement has
been received, the sender retransmits the frame. That is the basic mechanism. This, however,
requires that the transmitter maintains a copy of the transmitted frame until an
acknowledgement is received for it.

So we have seen that the window moves forward only after receiving an acknowledgement.
That means for all the frames for which an acknowledgement is received the buffer can be
released; that is the basic idea here. However, there is a possibility that the
acknowledgement frame is itself damaged. In this case also the sender will time out and
resend the same frame. Now, whenever the acknowledgement is damaged or lost the sender
retransmits the frame, so there is a possibility that the receiver receives the same frame
twice. How do you overcome that? It can be overcome by using a modulo-2 numbering scheme: the
frames are numbered alternately as frame 0, then frame 1, then again frame 0, and so on. So
if the acknowledgement of frame 0 is lost, frame 0 will be retransmitted, and there is no
possibility of confusing it with the next frame because the next frame will be frame 1.
(Refer Slide Time: 36:55)

So by using this numbering the possibility of accepting a duplicate frame is avoided.

Similarly the acknowledgements are also numbered in this way. For example, the
acknowledgement of frame 0 will be ACK1, the acknowledgement of frame 1 will be ACK0, and the
acknowledgement of the next frame 0 will again be ACK1; this is how it goes on.

This is shown with the help of this diagram.

(Refer Slide Time: 38:10)


As you can see here, the sender sends frame 0 and in response the receiver sends an
acknowledgement ACK1; then the sender sends frame 1 and in response the receiver sends an
acknowledgement ACK0; after receiving that acknowledgement the sender sends another frame,
again numbered frame 0, but these two frames are not the same, they are different frames
numbered alternately with the sequence numbers 0 and 1.

You may be asking why we are using modulo-2 numbering. The reason is that we require only one
bit, so the overhead is much smaller; only one bit is needed for numbering the frames. Now
you see this frame reaches the receiver with an error. The receiver detects the error in it,
so NAK 0 (negative acknowledgement for frame 0) is sent. After receiving that, the
transmitter sends frame 0 once again; now the receiver receives that frame in correct form,
so it sends an acknowledgement ACK1, and then the sender sends frame 1. This goes on in this
manner.

Now let us consider the situation when a frame is lost in transit. As I mentioned, a frame
may get damaged to such an extent that it is not recognizable by the receiving end, and in
such a case we call it a lost frame.

(Refer Slide Time: 39:20)

We see that frame 0 is sent to the receiver and an acknowledgement comes back, and then
another frame is sent; however, this frame is damaged and as a result it is lost. We can see
there is a timer which times out, and once the time-out is reached the same frame is
retransmitted; in the diagram frame 0 is retransmitted after the time-out when the frame is
lost. We have shown only this one case, but the same thing happens in all such cases.

Now let us consider the case of a lost acknowledgement. In the previous case we saw a lost
frame, but here we see a lost acknowledgement. The frame reaches without problem but the
acknowledgement is lost; the timer keeps running and when the time-out is reached frame 0 is
sent again. This is how retransmission takes place in the stop-and-wait ARQ technique.
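
The behaviour just described, alternate 0/1 numbering, a retransmission timer, and resending
the same frame whenever the frame or its acknowledgement is lost, can be summarized in a
small Python sketch. The helper functions send_frame and wait_for_ack are hypothetical
placeholders standing for the physical-layer operations, so this is only an illustration of
the logic, not a real implementation.

def stop_and_wait_send(payloads, send_frame, wait_for_ack, timeout):
    # Illustrative stop-and-wait ARQ sender using modulo-2 numbering.
    seq = 0                                   # frames are numbered 0, 1, 0, 1, ...
    for payload in payloads:
        while True:
            send_frame(seq, payload)          # transmit and keep a copy of the frame
            ack = wait_for_ack(timeout)       # None means the timer expired
            if ack == (seq + 1) % 2:
                break                         # correct ACK received, move to next frame
            # lost/damaged frame, or lost ACK: retransmit the same frame
        seq = (seq + 1) % 2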

The main advantage of the stop-and-wait ARQ technique is that it is very simple; it also
requires a minimum buffer size of 1, and the number of bits required for representing the
sequence number is 1. So in both cases the requirement is the minimum possible. However, the
disadvantage is that it makes highly inefficient use of the communication link, particularly
when 'a' is large, because the efficiency is the same as that of stop-and-wait flow control.
So we find that the stop-and-wait ARQ technique is not very efficient. How can this be
overcome? Obviously it can be overcome by using the sliding-window protocol for ARQ. Here
(Refer Slide Time: 41:00) this is known as go-back-N ARQ. Why it is called go-back-N ARQ will
become clear very soon. This is the most commonly used technique.

(Refer Slide Time: 40:55)

The basic concept is that a station may send a series of frames sequentially, up to a
maximum number. For example, if k is equal to 3 the window size is 7, so seven frames
numbered 0, 1 and up to 6 can be outstanding. Therefore that many frames can be sent without
receiving an acknowledgement: frames 0, 1, 2 and so on can all be sent without receiving any
acknowledgement; that is basically the idea. The number of unacknowledged frames outstanding
is determined by the window size, using the sliding-window flow control technique as we have
already explained.

In the case of no error the destination acknowledges incoming frames as usual; the
acknowledgements are the same as in the previous case. However, whenever the destination
detects an error in a frame a negative acknowledgement is sent, known as a reject (REJ)
frame. The destination will discard the frame in error and all subsequent frames until the
frame in error is correctly received.
(Refer Slide Time: 42:33)

What happens is that the destination discards the frames received after the one in error
until that frame is received correctly. The source station, on receiving the REJ frame, must
retransmit the frame in error plus all succeeding frames. Let us see how it actually works;
it is explained with the help of the diagram.

(Refer Slide Time: 43:56)

Frame 0 is sent and reaches correctly, then frame 1 is sent and then frame 2 is sent by the
sender, and as you see a single cumulative acknowledgement ACK3 is sent for all these three
frames. The transmitter keeps on sending frame 3, then frame 4, then frame 5, but
unfortunately frame 3 reaches with an error and as a result the receiver sends a negative
acknowledgement NAK 3. At the same time it discards frame 4 and frame 5, because the receiver
has only one buffer and cannot store the subsequent frames. If a frame is not received
correctly then all the subsequent frames are discarded, so frames 4 and 5 are discarded. Now
the transmitter starts sending frames 3, 4 and 5 again, and that is why it is called
go-back-N ARQ: it is going back to frame 3 and retransmitting 3, 4 and 5, which are then
received one after the other at the receiving end. The name go-back-N comes from this. We see
here that although the transmitter has sent frames up to 5, it goes back to 3 and retransmits
3, 4 and 5, and in this way it proceeds.
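
The receiver-side rule just illustrated, accept only the frame it is waiting for and discard
everything else, can be sketched in Python as follows. The names deliver and send_ctrl are
hypothetical callbacks, and a real implementation would send the reject only once rather than
for every discarded frame; this is just a minimal illustration of the go-back-N idea.

def gbn_receive(expected, seq, frame_ok, deliver, send_ctrl):
    # Illustrative go-back-N receiver with a single buffer (3-bit numbering).
    if frame_ok and seq == expected:
        deliver(seq)                           # accept the in-order, error-free frame
        send_ctrl("ACK", (expected + 1) % 8)   # cumulative acknowledgement
        return (expected + 1) % 8              # now wait for the next frame
    send_ctrl("NAK", expected)                 # damaged or out-of-order: ask to go back
    return expected                            # keep waiting for the same frame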

(Refer Slide Time: 44:38)

Now let us consider what happens in the case of lost or damaged frames. Here frame 0 is
sent (Refer Slide Time: 44:40), then frame 1 is sent, and frame 2 is lost. Since frame 2 is
lost, when frame 3 is received by the destination it is discarded, because the destination is
waiting for frame 2: its window is showing 2, so it can accept only frame 2 and no other
frame. Therefore frame 3 is discarded and at the same time a negative acknowledgement NAK 2
is sent. After receiving NAK 2 the sender again sends frames 2, 3 and 4, because it is using
the go-back-N ARQ technique. This is how lost frames are tackled. Let us see how a lost
acknowledgement is tackled.

Here we see that frame 0 is sent (Refer Slide Time: 45:36) and frame 1 is sent and received
correctly; frame 2 is also received correctly, so the receiver sends an acknowledgement
ACK 3, meaning that it is asking for frame 3 to be sent, but unfortunately this
acknowledgement is lost. After sending frame 0 the sender has not received any
acknowledgement; it had already started a timer, and the timer reaches the time-out at this
point. Since no acknowledgement is received within this period, retransmission starts with
frame 0.
(Refer Slide Time: 45:30)

So frame 0, frame 1 and frame 2 are all sent again, one after the other, to the receiving
side. This is how a lost acknowledgement is taken care of in the go-back-N ARQ protocol.

(Refer Slide Time: 46:30)

Let us see the window size limit in the case of go-back-N ARQ. Assume full-duplex
transmission is going on and the receiving end sends piggybacked acknowledgements by putting
a number in the acknowledgement field of a data frame. Let us assume that a 3-bit sequence
number is used, so the sequence numbers can be 0 to 7 and the window size could apparently be
up to 8. Now suppose that a station sends frame 0 and gets back Receive Ready 1 (RR1), which
is essentially acknowledgement 1. Then the sender sends frames 1, 2, 3, 4, 5, 6, 7 and 0 and
gets another RR1, so it has received two RR1s. This might mean either that RR1 is a
cumulative acknowledgement of all eight frames, or that all eight frames were damaged, so the
transmitter will be puzzled. If all of them were damaged or lost then this RR1 corresponds to
the first frame 0, the one sent before these eight frames.

On the other hand, if the frames were not damaged then this RR1 corresponds to the frame
following the second frame 0. So this RR1 (Refer Slide Time: 48:05) can refer either to the
first frame 0 or to the frame after the second frame 0, and the transmitter gets confused in
interpreting it. This ambiguity can be overcome if the maximum window size is limited to 7:
if the window size is limited to 7, that is frames 0 to 6, then this problem does not arise,
which can very easily be proved. That is why, whenever a k-bit sequence number is used, the
window size is limited to 2 to the power k minus 1 in the case of go-back-N ARQ.

Let us see the Selective-Repeat ARQ technique. In this case only those frames are
retransmitted for which a negative acknowledgement, referred to as SREJ (Selective Reject),
has been received or for which a time-out has occurred. In either case only that frame is
resent. We have seen that in go-back-N the transmitter goes back to some previous number and
sends a number of frames which had already been transmitted; that is not done in
Selective-Repeat ARQ, and as a consequence it is more efficient than go-back-N ARQ. However,
in this case the receiver requires storage buffers to hold out-of-order frames until the
frame in error is correctly received.

(Refer Slide Time: 48:42)

In the case of go-back-N we have seen that the receiver requires only one buffer. Here,
however, the number of buffers required is more than one: both the sender and the receiver
require more than one buffer. Moreover the receiver must have the logic needed for
reinserting the frames in the correct order, and the transmitter is also more complex because
it must be capable of sending frames out of sequence; in both cases out-of-order frames have
to be handled. Let us see this with the help of the animation.

Frame 0 is sent by the transmitter, and after that the transmitter sends another frame,
frame 1, and then frame 2; however, frame 2 reaches the destination with an error, where some
bits get corrupted. Obviously the receiver will send a negative acknowledgement, although the
transmitter keeps on sending the subsequent frames. We see that the negative acknowledgement
now reaches the transmitter, so frame 2 has to be retransmitted by the sender, and before
that frame 3 and frame 4 have already been sent.

Hence in go-back-N ARQ frames 3, 4 and 5 would be sent after 2, but here we find that after
resending 2 the transmitter sends 5, so the retransmission of 3 and 4 is prevented by using
the Selective-Repeat ARQ technique. However, here the receiver must keep buffers so that
frame 2 can be inserted before 3 and 4, and it must also have a little more processing
capability.
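
A minimal sketch of the receiver-side buffering needed for selective-repeat is shown below;
the dictionary used as a buffer, the modulo-8 numbering and the deliver callback are
illustrative assumptions rather than a prescribed implementation.

def sr_receive(buffer, base, window, seq, payload, deliver):
    # Illustrative selective-repeat receiver: store out-of-order frames and
    # deliver in order once the missing frame arrives (3-bit sequence numbers).
    if (seq - base) % 8 >= window:
        return base                  # outside the receive window: duplicate or stale
    buffer[seq] = payload            # buffer the frame even if it is out of order
    while base in buffer:            # slide the window over every contiguous frame
        deliver(buffer.pop(base))
        base = (base + 1) % 8
    return base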

(Refer Slide Time: 51:40)

Therefore we find that in selective-repeat, with the help of more processing capability and
a larger number of buffers at the receiving end, the efficiency can be improved. It also
requires a limit on the window size. Let us consider the following scenario for
Selective-Repeat ARQ, assuming k is equal to 3 and assuming the window size is the same as
for go-back-N ARQ, that is 7.
(Refer Slide Time: 52:10)

Now the sender sends frames 0 to 6 one after the other. After receiving all seven frames the
receiver sends Receive Ready 7 (RR7), asking for frame 7. Unfortunately RR7 gets lost in
transit, so the sender times out and retransmits frame 0, since it was waiting for an
acknowledgement for these frames and none has been received.

However, the receiver has already received the seven frames and has advanced its receive
window to cover frames 7, 0, 1, 2, 3, 4 and 5, so it is now ready to accept a frame numbered
0, but the new frame 0, not the old one. The receiver therefore wrongly assumes that frame 7
has been lost and accepts the retransmitted old frame 0 as a new frame, and we see that a
problem arises here. This problem can again be alleviated by choosing a limited window size;
in this case the window size should be no more than half the sequence-number space. That
means if you choose k equal to 3 then the possible window size for Selective-Repeat ARQ has
to be 2 to the power 3 divided by 2, that is 4.

In go-back-N ARQ this can be 7, whereas in Selective-Repeat ARQ it has to be 4. Moreover,
both at the receiver and at the transmitter we require more bits here, because the window is
limited to 2 to the power k divided by 2. Suppose we want a window size of 7: for window size
7 in go-back-N ARQ the number of bits required is 3, whereas for Selective-Repeat ARQ the
number of bits required is 4. So for the same window size the Selective-Repeat ARQ technique
requires more bits for numbering the frames.
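
These limits can be summarized in a short Python sketch; the function below simply encodes
the rules stated above for a k-bit sequence number.

def max_window(k, technique):
    # Maximum window size permitted for a k-bit sequence number.
    if technique == "stop-and-wait":
        return 1
    if technique == "go-back-N":
        return 2 ** k - 1        # k = 3 gives 7
    if technique == "selective-repeat":
        return 2 ** k // 2       # half the sequence-number space, k = 3 gives 4
    raise ValueError(technique)

# For a window size of 7, go-back-N needs k = 3 bits while selective-repeat needs k = 4.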

Therefore we have discussed two flow control techniques, stop-and-wait and sliding-window,
and also three ARQ techniques: stop-and-wait, go-back-N and selective-repeat. As I mentioned,
flow control and error control are performed in an integrated manner, although I have
discussed them separately for the convenience of your understanding. Now it is time to give
you some review questions.

(Refer Slide Time: 56:30)

1) In what situation does the sliding-window protocol perform better than the stop-and-wait
protocol?

2) Consider the use of frames of size 10 kbits on a 10 Mbps satellite channel with 270 ms
round trip delay. What is the link utilization for the stop-and-wait ARQ technique?

3) What is the channel utilization for the go-back-N protocol with a window size of 127 for
the channel of question 2?

In the previous case the value of N was different: it was stop-and-wait, so the window size
was 1; now it is 127, so what is the utilization in this case?

4) Compare the window size, the number of bits used for numbering the frames and the buffer
size for the three ARQ techniques.
(Refer Slide Time: 57:10)

Here are the answers to the questions of lecture 15.

1) Compare and contrast the use of parity bit and checksum for error detection.

Only an odd number of errors can be detected if a simple parity check is used for error
detection; however, the simple parity check has the minimum overhead. The error detection
capabilities for burst errors of the two-dimensional parity check and the checksum are
comparable, but the overhead of the two-dimensional parity check is more than that of the
checksum technique, as we have discussed in detail.

(Refer Slide Time: 57:23)


2) Draw the LFSR circuit to compute a 4-bit CRC with the polynomial x to the
power 4 plus x to the power 2 plus 1.

As you can see, since the degree of the polynomial is 4 we require four flip-flops, and since
the polynomial has three terms we require two Exclusive-OR gates connected in the manner
shown in this diagram. This is the LFSR circuit for this characteristic polynomial.

(Refer Slide Time: 57:55)

3) Obtain the 4-bit CRC code word for the data bit sequence 100110111 using the
generator polynomial given in problem 2.

Here the sequence is given and I have already explained how this number is divided using
modulo-2 arithmetic. As you can see, the first quotient bit is 1, the second quotient bit is
also 1 and the Exclusive OR is performed bit by bit, the third quotient bit is 0 so the bits
are brought down unchanged, and in this way we get 1011 as the remainder, which is used as
the CRC, together with the quotient.
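
The modulo-2 division itself is easy to express in code. The following Python sketch is a
generic illustration of how a CRC remainder is computed by XOR-based long division; the bit
strings passed to it are just example inputs (the generator '10101' corresponds to x to the
power 4 plus x to the power 2 plus 1).

def crc_remainder(data_bits, generator_bits):
    # Modulo-2 (XOR) long division; returns the CRC remainder as a bit string.
    degree = len(generator_bits) - 1
    work = list(data_bits + "0" * degree)      # append 'degree' zeros to the data
    for i in range(len(data_bits)):
        if work[i] == "1":                     # quotient bit is 1: subtract (XOR) the divisor
            for j, g in enumerate(generator_bits):
                work[i + j] = str(int(work[i + j]) ^ int(g))
    return "".join(work[-degree:])             # the last 'degree' bits are the remainder

# Example with a hypothetical 4-bit message and the generator of problem 2:
print(crc_remainder("1101", "10101"))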
(Refer Slide Time: 58:38)

4) Why is the use of burst error correction uncommon?

The number of redundant bits required for error correction increases drastically as the
number of bits to be corrected increases. As we know, in the case of a burst error a large
number of bits may get corrupted, and as a result error correction becomes very difficult and
inefficient; that is why it is not commonly used. With this we come to the end of today's
lecture, thank you.
Data Communications
Prof. A. Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture 17
Data Link Control

Hello viewers, welcome to today's lecture on Data Link Control. In the last lecture we
discussed flow and error control, two important mechanisms used for efficient and reliable
data communication. In this lecture we shall discuss the other components that are required
for data communication between two machines or two systems. A higher level of logic has to be
added above the physical layer to facilitate efficient and reliable data communication; that
is the topic of today's lecture.

(Refer Slide Time: 01:56)

Here is the outline of the lecture. First we shall discuss Data Link Control, then the key
components of Data Link Control such as frame synchronization, flow control, error control
and link management. We have already discussed flow control and error control in detail, so
we shall not cover these two components again in this lecture; however, we shall see how they
are used in a standard protocol. Then we shall consider link management and discuss an
important standard known as High-Level Data Link Control, or HDLC in short. It is
characterized by important parameters such as types of stations, configurations, data
transfer modes and frame formats, which we shall discuss in detail.
(Refer Slide Time: 02:58)

On completion of this lecture the students will be able to explain the functions of Data
Link Control, specify various Data Link Control functions, explain how High-Level Data Link
Control (HDLC) works, which is the most popular Data Link Control protocol in use today,
explain how piggybacking is done in HDLC, and explain how data transparency is maintained in
HDLC.

So, before we discuss the definition of Data Link Control, let us look at the key features
of the link that is being used for communication between two machines. The first feature is
that we shall assume the two machines are directly connected, that is, they are not connected
through some intermediate node or other machine.

The second assumption is that there will be occasional errors in the communication circuit:
as data communication takes place some errors will occur, but obviously the probability of
error is quite small, so errors occur occasionally, not frequently.

The third important assumption is that there is a non-zero propagation delay. As I mentioned
in the last lecture, data communication takes some time, and this delay depends on the
transmission medium used and on the distance between the two machines; the transmission
medium and the distance between the two decide the propagation delay. Finally, the
communication link and the machines have a finite data rate. Communication takes place at
some finite data rate because the machines have limited processing capability and the
bandwidth of the medium limits the data rate, so it is limited both by the machines and by
the communication link.
(Refer Slide Time: 03:38)

Obviously in this scenario our objective is to devise a suitable mechanism for efficient and
reliable communication over the unreliable transmission link. This requires a great deal of
coordination between the two machines, and this coordination is provided by the Data Link
Control protocol.

So, Data Link Control is nothing but a layer of logic added above the physical interface to
achieve the necessary control and management. As shown here, you have two computers connected
by a communication link; it can be twisted-pair, optical fiber, coaxial cable or wireless.
They are directly linked, and the transmission medium between the systems, when a Data Link
Control protocol is used over it, is known as the Data Link (Refer Slide Time: 6:50). You can
use a variety of media, but the protocol that we are discussing is really independent of the
medium being used for linking the two devices. We are mainly concerned with the layer of
logic, the protocol, that has to be used for efficient and reliable communication.
(Refer Slide Time: 06:02)

What are the key components of Data Link Control? These are listed here.

(Refer Slide Time: 07:20)

The first one is frame synchronization. We have already discussed the kind of
synchronization that is necessary for data communication, particularly in the context of
asynchronous and synchronous serial communication. Synchronization can be done at three
different levels: bit level, word level and frame level. Here we are concerned with the frame
level. We are interested in sending a frame of information, a sequence of bits or characters,
and obviously the beginning and end of a data block called a frame should be distinguishable
so that the receiving end can identify where a particular frame begins and where it ends;
this is known as frame synchronization. It is in addition to bit and word synchronization and
works at a slightly higher level.

Secondly, there is flow control, which we have already discussed in detail. The basic
objective is that the sender should not send frames at such a fast rate that the receiver is
overwhelmed. We have already discussed flow control mechanisms such as stop-and-wait flow
control and sliding-window flow control, and in most applications here we shall see that the
sliding-window flow control is used for its efficiency.

We have also discussed error control techniques such as stop-and-wait ARQ, go-back-N ARQ and
selective-repeat ARQ; these are used in this protocol and are included in the Data Link
Control protocols. An error control technique is necessary because any bit errors introduced
by the transmission system should be corrected. Then we have control and data on the same
link. You have to send data and you also have to perform management, and the receiver must be
able to distinguish between control information and data that are being sent over the same
medium. Since control and data are sent through the same link, unless there is some kind of
protocol, that is, agreed-upon rules and conventions, it is not possible to identify what is
data and what is control information; such a protocol has to be used so that the receiver can
identify what is control information and what is data. Then comes link management. This
covers the procedures for the initiation, maintenance and termination of a sustained data
exchange: when two machines are connected you have to first set up the connection, then
maintain the connection throughout the period of data communication, and when the data
transfer is over the connection has to be terminated. All these things are part of link
management. These are the key components of Data Link Control, and we shall discuss how they
actually work.

First let us focus on frame synchronization.


(Refer Slide Time: 11:05)

When data is transferred from the transmitter to the receiver, unless steps are taken to
provide synchronization the receiver may start interpreting the data erroneously, so some
kind of agreed-upon convention has to be used. As I mentioned, synchronization is done at
three different levels: bit, character and frame. The two most common approaches are
asynchronous transmission and synchronous transmission; let us consider these two one after
the other.

First let us consider asynchronous transmission. In asynchronous transmission the
synchronization is done at the character level, so data is transmitted one character at a
time. As we know, the number of bits can vary from 5 to 8, and timing or synchronization must
only be maintained within each character. We have already discussed this.

Synchronization is achieved with the help of a start bit, which signifies the beginning of a
character; after that come the data bits, which can vary from 5 to 8, then an optional parity
bit (Refer Slide Time: 12:35) used if error detection is required, followed by a stop bit. So
the synchronization is restricted to within each character, and is done with the help of the
start and stop bits. Immediately after the stop bit the line either remains at 1 if no new
character is sent, or goes to 0, which is the start bit of the next character (Refer Slide
Time: 13:03). In this way character-by-character synchronization is done in asynchronous
transmission.

Here, as you can see, the receiver has the opportunity to resynchronize at the beginning of
each new character. Because synchronization is done at the beginning of each character, the
clock frequencies of the transmitter and the receiver need not be exactly the same; there can
be a difference of about 5%, so that kind of flexibility exists here. When no character is
being transmitted the line between the transmitter and receiver is in the idle state, which
is the 1 state (Refer Slide Time: 14:05); a character can be sent, the line can remain idle,
then another character can be sent, which is again synchronized.

We have already seen that asynchronous transmission requires 20% or more overhead. So,
although asynchronous transmission is very simple and in a sense self-synchronizing, it is
not very efficient because of the high overhead.
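
The 20% figure follows directly from the framing: with one start bit, eight data bits and one
stop bit, two of every ten transmitted bits are overhead. A tiny Python sketch of this
calculation, with the bit counts as adjustable assumptions, is given below.

def async_overhead(data_bits=8, start_bits=1, stop_bits=1, parity_bits=0):
    # Fraction of each asynchronously transmitted character that is framing overhead.
    total = start_bits + data_bits + parity_bits + stop_bits
    return (start_bits + parity_bits + stop_bits) / total

print(async_overhead())   # 2/10 = 0.2, i.e. the 20% overhead mentioned above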

(Refer Slide Time: 11:47)

Now, asynchronous protocols have been developed on top of this character-oriented
transmission. They have been used primarily with modems, particularly whenever you want to
send a file from one machine to another. Some example protocols are XMODEM, YMODEM, ZMODEM
and so on. The frame format is briefly specified here. There is one start-of-header byte, the
SOH character, followed by a two-byte header: one byte gives the sequence number and the
other byte carries its complement. By receiving both the sequence number and its complement,
an error in the sequence number can be detected with the help of these two bytes; the second
byte essentially checks the validity of the sequence number.

Since a single byte is used the sequence numbering is limited, and after the header there is
a fixed 128-byte data field, so the data is restricted to 128 bytes; after that data field
there is a checksum or CRC of only one character, that is 8 bits. Hence this is the frame
format: the header, 128 bytes of data and finally the check character. This is how one block
of a file is sent.
(Refer Slide Time: 14:35)

As you can see, a start-of-header character is sent, followed by the 2 header bytes, then
the 128 fixed data characters and the check character, and between characters there can be a
gap; as we have seen, this gap (Refer Slide Time: 17:05) can be of variable length between
two characters. This is how XMODEM works, and YMODEM and ZMODEM are essentially extensions of
XMODEM. There are several such protocols.
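
A block of this kind can be sketched in a few lines of Python. The constants below (the SOH
value, padding with 0x1A, and the simple additive checksum) are assumptions made for
illustration of the frame layout described above, not a complete or authoritative XMODEM
implementation.

SOH = 0x01   # start-of-header control character (assumed value)

def xmodem_block(seq, data):
    # Build one XMODEM-style block: SOH, seq, complement of seq, 128 data bytes, checksum.
    assert 0 <= seq <= 255 and len(data) <= 128
    payload = data.ljust(128, b"\x1a")          # pad short blocks (assumption)
    checksum = sum(payload) & 0xFF              # simple 8-bit arithmetic checksum
    return bytes([SOH, seq, 255 - seq]) + payload + bytes([checksum])

print(len(xmodem_block(1, b"hello")))   # 3 header bytes + 128 data bytes + 1 check byte = 132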

However, these asynchronous protocols are not very widely used because of their
inefficiency, or high overhead; that is why synchronous transmission is commonly used. In
synchronous transmission a block of bits or characters is transmitted as a steady stream
without start and stop bits: the stream of characters or bits is sent one after the other
without the start and stop bit overhead, and the block may be arbitrarily long. Although the
block may be arbitrarily long, to prevent timing drift between transmitter and receiver a
clock signal is embedded in the data signal. As we know, if a frame is arbitrarily long the
probability of error increases; that is why in practice the length is not made very long but
is restricted within a limit.
(Refer Slide Time: 18:22)

Therefore, to prevent timing drift between transmitter and receiver, a clock signal is
embedded in the data signal, for example by using Manchester encoding, and the clock is
regenerated at the receiving end. As we have mentioned earlier, the clock cannot be sent
separately when two machines are located far apart, but for synchronous transmission it is
necessary that the two clocks be identical. How can that be achieved? One way is to embed the
clock as part of the data signal by suitable encoding, such as the Manchester encoding
technique, where we have seen that one transition is included for each bit; this helps in
regenerating the clock at the receiving end with the help of special hardware such as a
Phase-Locked Loop (PLL). Other encoding techniques can also be used, but Manchester encoding
is one of the popular techniques.

For sizable blocks of data, synchronous transmission is far more efficient than the
asynchronous mode. As I have already mentioned, asynchronous transmission requires 20% or
more overhead, and obviously this is not acceptable when we are sending over a long distance
or want higher efficiency. In such a case synchronous transmission gives you very high
efficiency: the overhead can be as small as 0.01% or even 0.001%, and with such a low
overhead synchronous transmission gives much better efficiency.
(Refer Slide Time: 20:32)

Now let us look at the protocols that are used in synchronous transmission. Another level of
synchronization is required so as to allow the receiver to determine the beginning and end of
a block of data. That means, apart from the synchronization of the clock, there is a need for
synchronization to identify the beginning and end of a block of data, and there are several
protocols for this, which can be broadly divided into two types, namely character oriented
and bit oriented. An example of a character oriented protocol is Binary Synchronous
Communication (BSC), which is based on characters: a sequence of characters is sent one after
the other, and the frame involves a starting character, then a sequence of characters, then a
CRC and so on. BSC is one such example, but character oriented protocols are not very popular
because we cannot pack as much information into them.

On the other hand, bit oriented protocols allow you to pack more information, and as a
result they are very popular. In both cases every block begins with a preamble bit pattern
and generally ends with a postamble bit pattern; this is the typical framing that is done. A
typical synchronous frame format is given here: an 8-bit flag, followed by some control bits,
then the data bits, then further control fields for error detection, and again the flag. This
(Refer Slide Time: 22:43) is the typical synchronous frame format. As we can see, the
beginning and end are identified with the help of a special flag, for example an 8-bit flag.
The control information, preamble and postamble in synchronous transmission typically amount
to less than 100 bits.
(Refer Slide Time: 22:18)

These fields are essentially the overhead; they are restricted to less than 100 bits, whereas
the data field can be thousands of bits. Since the data is thousands of bits and the overhead
is restricted to only about 100 bits, the overhead is an insignificant portion of the entire
frame being sent, and as a result it gives you higher efficiency. Now let us focus on the most
popular protocol used for Data Link Control, known as High-Level Data Link Control, or HDLC
in short. It is one of the most important protocols, as I mentioned, and is the most widely
used bit oriented protocol. There are two possible types of protocols, character oriented and
bit oriented; character oriented protocols are not popular, so bit oriented protocols are
used, and HDLC is one of them.

(Refer Slide Time: 24:36)


It is also the basis for many of the important Data Link protocols. Although HDLC itself has
not been fully adopted in many protocols, some subset or variation of it has been included in
many of them; that is why it is the basis for many other important Data Link Control
protocols. HDLC was adopted by the International Standards Organization (ISO) and also
embraced by ITU-T. As a consequence two important standards bodies accepted this protocol,
and it has become very widely accepted.

Some of its important characteristics are that it supports both full-duplex and half-duplex
communication, and that it can work in point-to-point and multipoint configurations.
Point-to-point means it connects two machines; multipoint means one machine is connected to a
number of other machines, that is, one-to-many. It can work in both cases, as we shall see.

(Refer Slide Time: 25:38)

HDLC is characterized by four important parameters, namely station types, configurations,
response modes and frame formats. We shall discuss each of them one after the other in a
little more detail. First let us focus on station types. There are three types of stations.
(Refer Slide Time: 26:11)

The first one is known as the primary station. The primary station is responsible for the
operation of the link, so it acts as a kind of master, and frames issued by primary stations
are called commands. That means the primary station acts as a master when data communication
takes place between two machines, and whatever is issued by the primary station is called a
command.

On the other hand, a secondary station operates under the control of the primary station, and
frames issued by secondary stations are known as responses. So commands are issued by primary
stations and responses are issued by secondary stations. The primary maintains a separate
logical link with each secondary, so essentially a link is established between the primary
and each secondary.

There is also a third type of station, known as a combined station. Combined stations combine
the features of primary and secondary. We have seen that primary stations issue commands and
secondaries issue responses, but a combined station may issue both commands and responses.
These are the three types of stations that can be used in HDLC.
(Refer Slide Time: 27:44)

Now let us consider the link configurations. The first is the unbalanced configuration. As
you can see, the unbalanced configuration consists of one primary and one or more secondary
stations: with one primary and one secondary it is point-to-point, and with one primary and
several secondaries it is multipoint. Let us see how communication takes place. As we have
already mentioned, the primary issues a command, and in the multipoint configuration the
command goes to all the secondaries. One of them will respond, so the response comes from the
intended secondary and goes to the primary. This is how the unbalanced configuration works.

(Refer Slide Time: 28:42)


The second link configuration is known as the symmetrical configuration. In the symmetrical
configuration each physical station on the link consists of two logical stations, one primary
and one secondary. So here you have a single physical station which functionally contains two
logical stations, one primary and one secondary, and as a result you get a symmetrical
configuration. These two logical stations communicate with another station which also has two
logical stations, so you require two different logical links: the primary on one side issues
commands and the secondary on the other side issues responses, and similarly commands can be
issued by the primary of the other side, with the secondary on this side issuing the
responses. This is how communication takes place in the symmetrical configuration.

So, in the symmetrical configuration you have two logical stations within a single physical
station and the communication takes place in this manner. The third configuration, known as
the balanced configuration, consists of two combined stations: one physical station on each
side, both of the combined type.
(Refer Slide Time: 30:05)

Since both of them are of the combined type, both have the capability of issuing commands as
well as responses. A command is issued by this station (Refer Slide Time: 30:37) and goes to
the other side, and the other side issues a response to the corresponding command. Similarly
the other side can also issue a command, and this side will in turn generate a response. So a
single combined station can issue both commands and responses, and that is how the exchange
of information takes place in the balanced configuration. This is how the balanced
configuration works using two combined stations, and here the link is always point-to-point.

Coming to the third important parameter, the data transfer modes, there are three data
transfer modes, namely Normal Response Mode (NRM), Asynchronous Balanced Mode (ABM) and
Asynchronous Response Mode (ARM). These modes work in different ways, as described here.
(Refer Slide Time: 31:11)

First let us consider the Normal Response Mode NRM.

(Refer Slide Time: 31:52)

This is always used in the unbalanced configuration. That means there is one primary and you
can have one or more secondaries. The primary may initiate a data transfer to a secondary,
while a secondary may only transmit data in response to a command from the primary; the
secondary is able to transmit data only when it is polled, that is, when it is asked to give
some response. It can be used on multi-drop lines also. As I mentioned, there can be one
primary and several secondaries: in this case a number of terminals are connected to a host
computer, which is an example of multi-drop lines. So you can have a computer, which can be
your server, and then a number of terminals (Refer Slide Time: 32:55); this is the situation
for Normal Response Mode. What I have shown here is a logical diagram but you can use a
…….. sort of thing. The computer polls each terminal for input, and we have already
discussed how polling can be done. It is also used for point-to-point links, so it can be
used for both point-to-point and multipoint, as I mentioned. In the point-to-point case a
computer is connected to a peripheral, here a computer and a printer; in HDLC terminology the
computer is the primary and the printer is the secondary, and the communication is in the
unbalanced mode.

(Refer Slide Time: 33:53)

Coming to the second data transfer mode, Asynchronous Balanced Mode, this is used in the
balanced configuration. As its name implies, it can be used only with the balanced
configuration, so either combined station may initiate transmission. Here you have two
machines, both of the combined type, and data transmission can be initiated by either of
them. This is possibly the most widely used mode, and it makes the most efficient use of a
full-duplex point-to-point link as there are no polling overheads.

As we have seen, in the unbalanced mode with a multipoint configuration there is polling
overhead. But in this case you have two combined stations connected by a full-duplex link, so
data can flow in both directions and there is no need for any polling. This is how the
Asynchronous Balanced Mode works.
(Refer Slide Time: 35:04)

The third type is the Asynchronous Response Mode, or ARM. This is used with an unbalanced
configuration; a secondary may initiate transmission, while the primary still retains
responsibility for the line, for example initialization and error control are all done by the
primary. However, in the Asynchronous Response Mode the secondary may initiate transmission,
which is not possible in the first mode.

This mode is rarely used and may be employed only in very special situations where a
secondary needs to initiate transmission. Normally all data transfer takes place at the
initiative of the primary, but in the Asynchronous Response Mode the secondary may also
initiate data transmission, so whenever such a situation is required the Asynchronous
Response Mode can be used. Coming to the fourth important parameter, the frame types, we have
three different frames, namely I-frames (Information frames), S-frames and U-frames.
(Refer Slide Time: 36:10)

So there are the I-frame, the S-frame and the U-frame; U stands for unnumbered frame (Refer
Slide Time: 36:35), S stands for supervisory frame and I stands for information frame. As you
can see, the I-frame has a flag, address, control, information and Frame Check Sequence
fields and a closing flag; this is the typical format of the I-frame. The S-frame has no
information part but has all the other fields: flag, address, control, Frame Check Sequence
and flag. In the case of the unnumbered frame, the U-frame, there is a flag, address and
control, the information part is optional, and this information is provided for management
and control, so it is not really user data. In the I-frame the information field carries user
data, but here it is used for management and control; whenever some information has to be
sent for management and control it is provided in this field. So the U-frame has flag,
address, control, the optional information, Frame Check Sequence and flag. These are the
three different types of frames possible. Let us consider each of these fields separately,
one after the other.

First let us consider the flag field.


(Refer Slide Time: 37:55)

Each frame starts and ends with a special bit pattern, 01111110, that is six ones flanked by
a zero on each side. This pattern acts as a flag which marks the start and the end of a
particular frame. Now, this bit pattern may also appear as part of the information or as part
of the control field, so in such a case what has to be done?

Whenever the bit pattern of a zero, then six ones and then a zero appears as part of the
data, there is a possibility that, occurring in the middle of the information bits, it will
be taken as the ending flag. As a consequence this would divide a single frame into two
frames, so this problem has to be avoided, and it is avoided by bit stuffing, which provides
data transparency and allows the flag fields to be identified unambiguously.

So the flag pattern should not be present among the information bits, and to unambiguously
identify the flag fields bit stuffing is used. As you can see here (Refer Slide Time: 39:40),
since the same flag is used to mark both the beginning and the end, there are two error
situations: a 1-bit error may merge two frames into one, and a 1-bit error inside the frame
could split it into two. However, when no error occurs there is no problem, as the Frame
Check Sequence is used for detecting errors.

Now let us consider the address field.


(Refer Slide Time: 40:08)

The address field identifies the secondary station involved in the frame. Whenever a
secondary station sends a frame it gives its own address, which works as a 'from' address:
when a frame goes from a secondary to the primary, the address tells where the frame is
coming from. On the other hand, when a frame goes from the primary to a secondary (Refer
Slide Time: 40:48) the address is essentially a 'to' address. So the same address field can
hold either of the two kinds of address, 'to' or 'from', depending on the direction of the
information transfer taking place.

Obviously there is only one primary in the system, so there is no need for its address to be
part of the frame; only the address of the secondary is needed, and that is why this 'from'
address and 'to' address approach works. This field is not really required for point-to-point
links but is included for the sake of uniformity.

When the link is point-to-point there are only two stations, one primary and one secondary,
so there is no real need for an address because there is only one secondary. But when it is
multipoint there are several secondaries, and in that case the address has to be used; since
the field is 1 byte you can have at most 128 different secondaries. So with a single octet,
that is 1 byte, you can have 128 addresses, but there is provision for multi-octet
addressing, in which case the number of machines in a multipoint communication can exceed
128. Multi-octet addressing is therefore possible, but the single octet is the most commonly
used.

Now let us focus on the third field, the control field. The control field defines the three
types of frames: information frames, which carry the data to be transmitted and in which flow
and error control information is also piggybacked using the ARQ mechanism; supervisory
frames, which provide the ARQ mechanism when piggybacking is not used; and unnumbered frames,
which provide supplemental link control functions. The I and S frames use 3-bit sequence
numbers, as we shall see, and how the flow and error control is done here will now be
explained.

(Refer Slide Time: 42:34)

Let us look at the control field format. First of all the control field indicates what type
of frame it is: if the first bit of the control field is 0 then it is an I-frame, or
information frame. As you can see, the information frame carries two 3-bit numbers, N(S) and
N(R). N(S) is the send sequence number, that is, the sequence number of the frame being sent,
and it is 3 bits long.
(Refer Slide Time: 44:18)

Obviously, if we use go-back-N flow control and ARQ, the maximum window size will be 7, that
is, up to 7 unacknowledged frames can be outstanding. N(R) is used for piggybacking: it
essentially specifies the sequence number of the frame the station is expecting, that is, it
conveys an ACK N with the value of N provided in this field. Piggybacking is done as follows:
whenever the two stations are connected by full-duplex communication, acknowledgement
information can be provided as part of the information frame, and these three bits are used
for that purpose. So an ACK, and also a reject of other frames, can be conveyed as part of
the I-frame.

However, when there is no data to be sent by the station, that is, when the secondary has no
data to send, it can send an S-frame. The S-frame is identified by its first two control bits
being 1 0, and it has no data field. It carries a two-bit code, and there is also a bit that
stands for Poll/Final (P/F).

This P/F bit is used in the information frame as well. When the primary sends a frame with
this bit set to 1 it signifies polling, that is, the frame is going from the primary to the
secondary. On the other hand, for frames coming from the secondary to the primary, a 0 means
that more frames are to follow and a 1 signifies that this is the final frame. In this way a
sequence of frames can come from the secondary to the primary, and polling is performed with
the help of this bit.

With the help of this code the secondary can indicate (Refer Slide Time: 46:59) different
types of information accompanying the N(R) value. For example, when the code is 0 0 it stands
for RR, that is Receive Ready: the receiver is sending an acknowledgement, provided in N(R),
and is ready to receive the frame with that number.

For example, ACK6 means that 6 is provided in N(R), that is 1 1 0, together with Receive
Ready, so an acknowledgement frame is being sent. The code can also be 0 1, which means
reject (REJ): a negative acknowledgement is being sent and a particular frame has to be
rejected. If, for example, the reject carries 1 1 0 again, then in go-back-N ARQ the frames
have to be repeated from frame number 6; so the code specifies the function and the frame
number, the acknowledgement number, is provided in N(R). If the code is 1 0 then it is
Receive Not Ready (RNR), which means the other side is not ready and data cannot be sent. So
you see that flow control can be performed here: if the secondary is not ready, an RNR
supervisory frame can be sent by the secondary to the primary.

Finally, when the code is 1 1 it is Selective Reject (SREJ). That means, whenever
selective-repeat ARQ is used, the particular frame whose number is mentioned in N(R) has to
be selectively repeated, and that is specified with the help of this code. This is how the
S-frame, with the help of this code, can indicate the different functions of the supervisory
frame going from the receiving side to the transmitting side. So whenever a secondary has no
data to send and piggybacking cannot be done, a negative acknowledgement or an
acknowledgement can be sent in this manner.

Finally, in the case of the U-frame, the same code field used in the S-frame is present, and
three more bits are used for coding. These frames are essentially used for control and
management purposes.
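
To tie the pieces together, here is a small Python sketch that interprets one control octet
along the lines described above. Treating the first transmitted bit as the least significant
bit of the octet is a simplifying assumption made for illustration, and the field names in
the returned dictionary are arbitrary.

S_CODES = {0b00: "RR", 0b01: "REJ", 0b10: "RNR", 0b11: "SREJ"}

def parse_control(octet):
    # Interpret one HDLC control octet (basic modulo-8 operation, illustrative only).
    if octet & 0b1 == 0:                         # first bit 0 -> Information frame
        return {"type": "I", "NS": (octet >> 1) & 0b111,
                "PF": (octet >> 4) & 0b1, "NR": (octet >> 5) & 0b111}
    if (octet >> 1) & 0b1 == 0:                  # first bits 1, 0 -> Supervisory frame
        return {"type": "S", "code": S_CODES[(octet >> 2) & 0b11],
                "PF": (octet >> 4) & 0b1, "NR": (octet >> 5) & 0b111}
    return {"type": "U", "PF": (octet >> 4) & 0b1}   # first bits 1, 1 -> Unnumbered frame

# Example: a Receive Ready frame acknowledging up to frame 6 would carry NR = 6.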

(Refer Slide Time: 43:32)


Coming to the information field, it is present only in I-frames and in some U-frames, as I
mentioned. When it is present in an I-frame it carries the user data being sent, and when it
is used in a U-frame it carries management and control information. It can contain any
sequence of bits, but the length has to be a multiple of 8 and is usually limited from
considerations of error control.

(Refer Slide Time: 50:30)

As I have mentioned, although in principle you can send an unlimited number of bits, the length is restricted so that the frame does not suffer from errors. If one bit gets corrupted the whole frame has to be retransmitted, so it is better to restrict the length, because the probability of error increases with the length: the longer the frame, the higher the possibility of error. Frames with an empty I field are transmitted continuously on idle point-to-point lines in order to keep the connection alive.

So, whenever a particular station has no data to send but a session has already started, frames are sent continuously with nothing in the information field; that is also possible. This is necessary particularly to keep the connection alive, otherwise the link would be disrupted.

Now, as I mentioned, the information part may contain the flag pattern, which is 01111110, and this pattern should not appear in the information field. That means the information field should not contain a flag character. To ensure that, bit stuffing is used.
(Refer Slide Time: 52:07)

Bit stuffing is explained with the help of this animation. Let us assume that this is the sequence of bits or data to be sent, present in the information field (Refer Slide Time: 52:44). What is being done is that after five consecutive ones a 0 is forcibly introduced; so here you can see that after 11111 a 0 is introduced, and here also after the next run of five ones a 0 is introduced. Thus, after each occurrence of five consecutive ones a 0 is introduced; this is known as bit stuffing. So here you see a flag-like pattern in the data, and after five ones a 0 bit has been introduced, and here also a 0 bit has been introduced; bit stuffing has taken place, this is the frame to be sent with two extra zeros, this is the frame that is transmitted, and this is the frame received.

At the receiving end the receiver knows that after every five consecutive ones there is a stuffed 0, so these zeros are removed to get back the data. The receiver removes the redundant zeros to recover the data. This is how the frame is recovered at the receiving end, and this is how bit stuffing provides data transparency.
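
A minimal sketch in Python of this stuffing and destuffing logic follows; the function names and the character-string representation of the bit stream are illustrative choices for the example, not part of any standard API.

    def bit_stuff(bits):
        """Insert a '0' after every run of five consecutive '1's."""
        out, run = [], 0
        for b in bits:
            out.append(b)
            run = run + 1 if b == '1' else 0
            if run == 5:          # five consecutive ones seen
                out.append('0')   # stuff a zero so the flag 01111110 cannot appear
                run = 0
        return ''.join(out)

    def bit_destuff(bits):
        """Drop the '0' that follows every run of five consecutive '1's."""
        out, run, skip = [], 0, False
        for b in bits:
            if skip:              # this bit is the stuffed zero; discard it
                skip, run = False, 0
                continue
            out.append(b)
            run = run + 1 if b == '1' else 0
            if run == 5:
                skip, run = True, 0
        return ''.join(out)

    data = '01111110'                  # payload that happens to look like a flag
    sent = bit_stuff(data)             # '011111010' - no flag pattern inside any more
    assert bit_destuff(sent) == data   # the receiver recovers the original data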

Finally we have the Frame Check Sequence, which can use either a 16- or 32-bit CRC code computed over the address, control and data fields for the purpose of error detection using the Cyclic Redundancy Code. We have already discussed CRC in detail. Now, as I mentioned, several protocols have been developed based on HDLC, known as Link Access Procedure protocols, and LAPB, LAPD and LAPM are three examples. LAPB is used in X.25 and uses the Asynchronous Balanced Mode of transmission to connect two devices of the combined type. Link Access Procedure D (LAPD) is used in ISDN, again using ABM, the Asynchronous Balanced Mode; in both cases it is used for a point-to-point link. The Link Access Procedure for Modems (LAPM), used in modems, essentially provides asynchronous-to-synchronous conversion, error detection and retransmission. These functionalities are not provided by the DTE-DCE interface, so LAPM is added on top of the DTE-DCE interface to provide the HDLC features.
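
As an illustration of how a frame check sequence of this kind can be computed, here is a small bit-level CRC sketch in Python. The generator polynomial shown (x^16 + x^12 + x^5 + 1, the CRC-CCITT polynomial often associated with HDLC) is only one possibility, and the helper name, initial value and frame layout are assumptions for the example, not a normative HDLC FCS implementation.

    def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
        """Bitwise CRC-16 over the given bytes (simple non-reflected variant)."""
        crc = init
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                if crc & 0x8000:
                    crc = ((crc << 1) ^ poly) & 0xFFFF
                else:
                    crc = (crc << 1) & 0xFFFF
        return crc

    # The transmitter appends the FCS computed over address, control and data;
    # the receiver recomputes it over the same fields and compares.
    frame_fields = bytes([0x03, 0x10]) + b"hello"   # hypothetical address, control, data
    print(hex(crc16_ccitt(frame_fields)))
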
(Refer Slide Time: 55:40)

So now it is time to give you the Review Questions.

(Refer Slide Time: 56:10)

1) Why are asynchronous protocols losing popularity?
2) Why are bit-oriented protocols gaining popularity?
3) In HDLC, what is bit stuffing and why is it needed?
4) What is piggybacking? How is it used in HDLC?

Now it is time to give you the answers to the questions of Lecture 16.
(Refer Slide Time: 56:21)

1) In what situation does the sliding-window protocol perform better than the stop-and-wait protocol?

Obviously the sliding-window protocol performs significantly better than the stop-and-wait protocol when the value of 'a', the ratio of propagation time to transmission time, is large. This happens in long distance communication through optical fiber and satellite links, or when you are using high speed communication.

(Refer Slide Time: 56:55)


2) Consider the use of 10-kilobit frames on a 10 Mbps satellite channel with 270 millisecond delay. What is the link utilization for the stop-and-wait ARQ technique?

Here the value of a is equal to 270, because the transmission time is 1 millisecond and the delay is 270 milliseconds, so the utilization is only 0.369%. On the other hand, when you are using go-back-N ARQ with window size 127, that means you are using 7 bits for the frame number, the value of a is again 270 and the utilization is 46.86%.

(Refer Slide Time: 57:25)

And if we use more bits, that is a larger window, the utilization can even reach 1.
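
The quoted figures can be reproduced with a few lines of arithmetic. In the sketch below (my own working, not from the lecture slides) the 270 ms delay is treated as the full round-trip figure, so utilization is taken as 1/(1+a) for stop-and-wait and W/(1+a) for go-back-N; with other conventions the numbers come out differently.

    frame_bits = 10_000                     # 10 kilobit frames
    rate_bps = 10_000_000                   # 10 Mbps channel
    t_frame = frame_bits / rate_bps         # 1 ms transmission time
    delay = 0.270                           # 270 ms, treated here as round-trip

    a = delay / t_frame                     # a = 270

    u_stop_and_wait = 1 / (1 + a)           # ~0.369 %
    u_go_back_n = min(1.0, 127 / (1 + a))   # window W = 127 -> ~46.86 %

    print(f"a = {a:.0f}")
    print(f"stop-and-wait utilization   = {u_stop_and_wait:.4%}")
    print(f"go-back-N (W=127) utilization = {u_go_back_n:.4%}")
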
(Refer Slide Time: 57:52)

3) Compare the window size, the number of bits used for numbering the frames and the buffer size for the three ARQ techniques.

In stop-and-wait the window size is 1 and the buffer size requirement is 1. In go-back-N the receiver has a window size of only 1, the transmitter has a window size of 2 to the power k minus 1, the buffer size in the transmitter is 2 to the power k minus 1 and in the receiver it is only 1.

In the case of selective-reject, however, as we know, the receiver and transmitter both have a window size of 2 to the power k divided by 2, that is 2 to the power (k minus 1), and the buffer size is also 2 to the power (k minus 1), both for transmitter and receiver. So with this we come to the end of today's lecture, thank you.
Data Communications
Prof. A. Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture 18
Switching Techniques: Circuit Switching

Hello viewers, welcome to today's lecture on switching techniques. So far we have assumed that two devices or pieces of equipment are directly communicating with each other, and for that purpose we have discussed whatever protocols and techniques are necessary, like HDLC, which we discussed in detail. However, there are many situations where two pieces of equipment are not directly connected, so in such a case they communicate through a number of intermediate devices. The technique adopted in such a situation is known as a switching technique, and switching techniques can be broadly divided into two types: circuit switching and packet switching.

(Refer Slide Time: 03:00)

In today's lecture we shall discuss circuit switching. This is the outline of the lecture. First we shall discuss why circuit switching is needed; then, in the context of circuit switching, I shall introduce the switched communication network, that is, the model used for communication. Then I shall discuss the circuit switching fundamentals, their advantages and disadvantages, and then various switching concepts used in circuit switching, such as space division switching, one application of which is the crossbar switch, and time division switching; of course the two can also be combined, having both space and time division switching. Then we shall discuss how routing of signals takes place in circuit switched networks, and finally we shall discuss signaling in circuit switched networks.
(Refer Slide Time: 03:15)

On completion of this lecture the student will be able to understand the need for circuit switching, specify the components of a switched communication network, understand how circuit switching takes place, understand how switching is done using space division and time division switching, understand how routing is performed, and finally understand how signaling is performed. So, let us consider the first question: how do two devices communicate when there are many devices?

Suppose you have a number of stations, say A, B, C and D. A may want to communicate with B, C or D, not necessarily simultaneously: at one time A wants to communicate with B, at another with C, at another with D. One possible alternative is to establish connections from A to B, A to C and also from A to D.
(Refer Slide Time: 4:36)

Similarly, it is necessary to establish communication from B to C, B to D and finally C to D, so we have to provide a number of links, and it can be shown that the total number of links required for n such stations is equal to n into (n minus 1) by 2, which becomes very large. For example, here you have got four stations (Refer Slide Time: 5:10), so you will require 4 into 3 by 2, which is equal to 6; if you count you will find that the total number of links is indeed 6. This arrangement is known as mesh topology. Obviously, when you have a large number of stations this kind of mesh topology is not practical; for example, if you have a hundred nodes you will need 100 into 99 by 2, that is 4950 links, to establish communication from any station to any station, so this is not a good choice.
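
The n(n-1)/2 growth is easy to verify; the tiny Python sketch below simply evaluates the formula for a few sizes (the figure of 4950 links for 100 stations follows directly from it).

    def mesh_links(n: int) -> int:
        """Number of point-to-point links needed to fully connect n stations."""
        return n * (n - 1) // 2

    for n in (4, 10, 100, 1000):
        print(n, "stations ->", mesh_links(n), "links")
    # 4 -> 6, 10 -> 45, 100 -> 4950, 1000 -> 499500
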
(Refer Slide Time: 03:52)

So what is the alternative? One alternative is to use a switched communication network. Whenever you have a large number of devices the mesh topology is not practical, so a better alternative is to use switching techniques, leading to a switched communication network.

(Refer Slide Time: 06:03)

So let me introduce to you what we mean by switched communication network.


(Refer Slide Time: 07:00)

In a switched communication network we find end devices, essentially computers, peripherals and communication equipment such as cell phones, laptops, PDAs and various other things, and these are known as stations. The switching devices are called nodes. That means we are using some additional intermediate devices known as nodes; some nodes connect to other nodes and some nodes are attached to stations. Let me show you the network.

(Refer Slide Time: 07:06)

Here, as you see, these are the end stations A, B, C, D, E, and on the other hand 1, 2, 3, 4, 5, 6 are the communication network nodes, used as switches or intermediate points for communication. How is it done? As you can see, the network topology is not regular. As shown earlier there is the mesh topology, and there are other topologies such as bus topology, ring topology, star topology and so on. The bus topology is like this: we have a bus and on to it we connect all the stations. In a switched communication network you do not have a bus topology, nor a ring topology in which the stations are connected in the form of a ring, nor is each station simply connected to its neighbour or arranged as a star; so we find that in this case the topology is not regular. It uses FDM and TDM for node-to-node communication.

So here you find node-to-node communication links, shown in different colours, and the station-to-node communication links. Here we find that one kind of link is narrower and the other is wider: the node-to-node links, such as 1 to 2 or 1 to 3, are of higher bandwidth. Since these links have higher bandwidth, and to make maximal use of that bandwidth, we use FDM or TDM techniques. We have already discussed the FDM and TDM techniques and how the higher bandwidth of links can be exploited, so these node-to-node links carry FDM or TDM traffic.

There exist multiple paths between a source-destination pair, for better network reliability. Here we find (Refer Slide Time: 9:25) that it provides much higher reliability. For example, if A wants to communicate with C it can communicate through the nodes 1, 3 and 5, but if one of those links or nodes is down, then A can communicate through the nodes 1, 2, 6 and 4, or through 1, 2, 4 and 5. In this way several alternative routes are possible, which increases the reliability of the network, and that is one objective of the switched communication network: it provides higher reliability by providing multiple paths.

Another important feature here is that the switching nodes are not concerned with the contents of the data. Whatever data is sent by a station is simply forwarded by the node; in other words, nodes can be considered as dumb devices. Whatever is received is transmitted to another node or to the destination station. Either is possible: a station sends to a node, and the node either sends to another node or delivers to the destination station, as shown here (Refer Slide time: 10:43), and in doing so the nodes do not modify the information or data. Their purpose is to provide a switching facility that moves data from node to node until it reaches the destination; essentially, they pass on the data received from one end towards the other end, to the next node or station. This is the basic model of the switched communication network, and as I mentioned there are various switching techniques: circuit switching, message switching and packet switching, so there are three alternatives.
(Refer Slide Time: 12:24)

In this lecture, as I mentioned, we shall be concerned with circuit switching. Communication via circuit switching implies that there is a dedicated communication path. I am highlighting the word dedicated because it is important: without setting up a dedicated path no communication is possible in a circuit switched network. So it is a dedicated communication path between two stations. The path is a connected sequence of links between network nodes; that means it is not necessary that there be a single direct link, there may be a path through a number of nodes. And on each physical link a logical channel is dedicated to the connection. This is another important feature.

Circuit switching involves three important phases of communication. The first is known as circuit establishment, then data transfer, and the third is circuit disconnect. The circuit establishment phase is used to establish an end-to-end connection before any transfer of data; so before any data transfer is performed you have to set up the path, and some segments of the circuit may be dedicated links while some other segments may be shared. So there are two alternatives: a segment can be dedicated or it can be shared.
(Refer Slide Time: 12:30)

Then in the data transfer phase, data is transferred from the source to the destination. Once the link is established, whatever data is to be transferred flows from the source to the destination through the dedicated path already established in the circuit establishment phase. The data may be analog or digital in nature, depending on the nature of the network, and the connection is generally full-duplex.

As I mentioned, whenever two stations are connected the communication is usually full-duplex in nature. And finally, once the data transfer is complete, circuit disconnect is performed; this terminates the connection at the end of data transfer, and signals must be propagated to deallocate the dedicated resources. So whatever link was established, its deallocation is done. Here it is shown how this is performed.

So here the call request signal goes from A through node 1 to node 2, then a call request goes from node 2 to node 4, and another call request goes from node 4 to node 5, for establishing the connection from A to C. Then, once the call acknowledgement comes back through the already established path, data is sent from A to C through nodes 1, 2, 4 and 5. After the data transfer is complete, the acknowledgement signal comes from the other end, from station C to station A, and the circuit disconnect is performed. This is how data transfer takes place in circuit switching.
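
One way to see the cost of these phases is a back-of-the-envelope timing model. The Python sketch below is my own illustration, not from the lecture: it assumes a hypothetical per-node setup delay and simply adds up the call set-up, the single transmission over the dedicated circuit, and the propagation along the path.

    def circuit_switching_time(hops, prop_per_hop, node_setup, msg_bits, rate_bps):
        """Rough end-to-end time: call setup (request out, accept back) + data transfer."""
        setup = 2 * hops * prop_per_hop + hops * node_setup   # request out and accept back
        transfer = msg_bits / rate_bps + hops * prop_per_hop  # data flows over the circuit
        return setup + transfer

    # hypothetical figures: 4 hops, 1 ms propagation per hop, 0.5 ms setup per node
    t = circuit_switching_time(4, 1e-3, 0.5e-3, msg_bits=1_000_000, rate_bps=10_000_000)
    print(round(t * 1e3, 1), "ms")   # ~114 ms, dominated by the transfer itself
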
(Refer Slide Time: 14:20)

This circuit switching technique was originally developed for handling voice traffic but is
now also used for data traffic.

As I mentioned, the circuit switching concept was originally developed for the public switched telephone network, for voice communication, but now it is also used for data transfer. And, as I already said and want to emphasize, once the circuit is established the network is transparent to the users. Whatever data is sent will be delivered, and information is transmitted at a fixed rate with no delay other than the propagation delay through the communication medium. That means after the link is established there is no other delay involved except the propagation time. Depending on the distance and the medium being used there will be some propagation time, but apart from that there is no other delay involved in the circuit switching technique. And as I mentioned, the best known example is the Public Switched Telephone Network, which is used for voice communication, so the Public Switched Telephone Network (PSTN) is the best example of the circuit switching technique.
(Refer Slide Time: 15:25)

Now let us look at the advantages of the circuit switching technique. First, it provides fixed bandwidth and guaranteed capacity. That means there is an end-to-end link, and since the end-to-end link is there, the bandwidth is fixed and does not change.

(Refer Slide Time: 17:06)

And here the advantage is that you get guaranteed capacity. That means once the link is established both ends know what the possible transfer rate is, and there is no possibility of congestion; since the path is dedicated there is no sharing and hence no congestion. The second important advantage is that there is low variance in the end-to-end delay, in fact an essentially constant delay. As I said, this delay is essentially the propagation time; there is no other delay involved in this communication.

(Refer Slide Time: 17:49)

Now let us look at the disadvantages. In this world nothing is one-sided; there will be some disadvantages. The first is that circuit establishment and circuit disconnect introduce extra overhead and delay. As I have shown, before any data transfer is performed it is necessary to perform circuit establishment, and at the end of the transfer a disconnection has to be done. Both of these involve some extra overhead, and as a result some delay as well, particularly before any data transfer can start. Second, the service provides a constant data rate from source to destination, and the channel capacity is dedicated for the duration of the connection even if no data is transferred. That means if, after circuit establishment, no data is transferred, the bandwidth is simply wasted.

For example, if you set up a long distance call and, after setting up the call, you do not talk, you still pay for the link and the bandwidth even while not talking. This is one disadvantage. And it has been found that even for voice connections the utilization is not that high: statistics show that 64 to 73% of the time one speaker is speaking, 3 to 7% of the time both speakers are speaking, and 20 to 30% of the time both are silent.

Therefore, as you can see, even in voice communication the utilization is only 64 to 73%, but for data communication, which is bursty in nature, the inefficiency can be much higher: most of the time no data is generated and only occasionally a burst of data is produced. So in a typical user-host data connection the line utilization is very poor. And the irony is that whenever a user is not using the bandwidth, others cannot use it either. This is another important disadvantage of circuit switching.
Now let us focus on the function of the switching nodes which play a very important role
in circuit switching.

(Refer Slide Time: 21:29)

Let us consider the operation of a single circuit switch node comprising a collection of stations attached to a central switching unit, which establishes a dedicated path between any two devices that wish to communicate. A single-node network, or a particular node, has these three functional elements: a digital switch, which provides a transparent signal path, usually full-duplex, between any pair of attached devices; the network interface, which represents the functions and hardware needed to connect digital devices such as computers, telephones and modems to the network; and finally the control unit, which establishes, maintains and tears down connections.

Here is the block diagram of a circuit switch node. One node is shown here.
(Refer Slide Time: 21:41)

Here, as you can see, these are the lines through which the stations are connected; all the stations are connected through these links and through this interface, so the interface is used for linking a number of stations, and there is a control unit which performs the linking with the help of the digital switch. The control unit controls this digital switch, with the help of which the interfaces can be connected in pairs. For example, in this case there are eight lines, numbered 1 to 8 (Refer Slide Time: 22:35), and here 1 is connected to 6, 2 is connected to 4, 3 is connected to 1, and 5 is connected to 7, so the digital switch connects each pair with the help of the control unit. This is the basic function performed by a circuit switch node.

Now, switching can be done in a number of ways. One technique is known as space division switching. This was originally developed for the analog environment: the telephone network was originally built for voice communication and was therefore developed for the analog environment, but it was subsequently carried over to the digital domain, as is the case nowadays. In a space division switch the signal paths are physically separated; physically separate paths are provided for each connection from one line to another, so the switch is divided in space, and essentially it is a crossbar matrix.
(Refer Slide Time: 23:08)

Let us have a look at the cross bar matrix.

(Refer Slide Time: 24:02)

So here you see you have inputs and outputs. These are the input lines and the output lines, and the switch is organized in the form of a two-dimensional matrix; at the crossing of each vertical and horizontal line there is a crosspoint, and at each crosspoint you have an electromechanical switch or micro switch. With the help of a micro switch a connection can be established between a vertical line and a horizontal line. For example, here this is 6 and this is 8, so if this crosspoint is closed, that is if this micro switch is closed, then a path is established between 6 and 8 (Refer Slide time: 24:55). At any junction, if the switch is closed, a path is established between the horizontal line and the vertical line meeting at that crosspoint. This is how the crossbar switch operates; however, it uses electromechanical relays or micro switches.
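
A crossbar can be modelled as nothing more than an n-by-n matrix of on/off crosspoints. The small Python sketch below (the class and method names are illustrative, not any real switch's interface) closes the crosspoint at (input, output) and refuses to connect a line that is already in use.

    class Crossbar:
        """Single-stage space-division switch: one crosspoint per input/output pair."""
        def __init__(self, n):
            self.n = n
            self.closed = set()      # (input, output) crosspoints currently closed

        def connect(self, i, o):
            # each line can carry only one connection at a time
            if any(i == a for a, _ in self.closed) or any(o == b for _, b in self.closed):
                raise RuntimeError("input or output line already in use")
            self.closed.add((i, o))  # closing the crosspoint establishes the path

        def disconnect(self, i, o):
            self.closed.discard((i, o))

    xb = Crossbar(8)
    xb.connect(6, 8)                 # close the crosspoint joining line 6 and line 8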

(Refer Slide Time: 25:22)

The basic building block of the switch is a metallic crosspoint or semiconductor gate that can be enabled or disabled by a control unit. That means the control unit performs this function: as I have shown earlier, there is a control unit which controls the digital switch, and the digital switch in the case of space division switching is this crossbar. This is how the crossbar switch works, and originally these crossbar switches had metallic crosspoints or semiconductor gates that could be enabled or disabled by the control unit.

Of course, the early electromechanical switches were not very reliable, they were bulky, and they used to consume a lot of power. But with the advancement of VLSI technology, crossbar switches can nowadays be implemented with semiconductor devices. For example, the Xilinx crossbar switch using Field Programmable Gate Arrays is available; it is based on a reconfigurable routing infrastructure, and with these FPGA-based switches you can have high capacity non-blocking switches, with sizes varying from 64 by 64 up to 1k by 1k. So you can have a very big crossbar switch, such as 1k by 1k, which operates at a very high speed of 200 Mbps. Obviously the data rate is quite high, and such switches are nowadays possible using Field Programmable Gate Arrays and are available from Xilinx.
(Refer Slide Time: 27:27)

Let me introduce the concepts of blocking and non-blocking before I discuss the other aspects of the crossbar switch. An important characteristic of a circuit switch node is whether it is blocking or non-blocking.

A blocking network is one which may be unable to connect two stations because all possible paths between them are already in use. For example, I have shown eight inputs and eight outputs; if you are not able to connect a particular pair because all possible paths are already in use, then you call it blocking. On the other hand, a non-blocking network permits all stations to be connected in pairs at once and grants all possible connection requests as long as the called party is free. That means, as long as the called party is free, a non-blocking switch allows connection between any pair of stations, or inputs and outputs.

For a network which supports only voice traffic a blocking configuration may be acceptable, since most phone calls are of short duration.

What I am trying to emphasize here is that a blocking network is acceptable in a voice environment because most of the time people are not talking, that is, the usage is much lower. Usually, when we talk over a telephone we talk for a few minutes and then disconnect the call. On the other hand, in data applications the connection may remain active for hours, and a non-blocking configuration is desirable. So what I am trying to say is that for voice applications a blocking network is acceptable, whereas for data communication a non-blocking network is desirable.
(Refer Slide Time: 29:52)

Now let us focus on the crossbar switch and the limitations it has. As you have seen, the number of crosspoints grows with the square of the number of attached stations; the crosspoint count grows quadratically as the number of inputs and outputs increases. For example, if you have a 1000-input by 1000-output switch, you will require one million crosspoints. A crosspoint is nothing but an electromechanical relay or an electronic switch, and for a 1k by 1k switch you require one million, that is ten to the power six, crosspoints, so it is costly for a large switch. Another important disadvantage is that the failure of a crosspoint prevents connection between the two devices whose lines intersect at that crosspoint.

Let us go back to the diagram of the crossbar switch, which will explain that.
(Refer Slide Time: 31:11)

Suppose this particular crosspoint has become faulty; then it is not possible to connect five with four, so five and four cannot be connected. Even when four is not busy, the connection cannot be established between four and five if there is a fault here. That means the failure of a crosspoint prevents connection between the two devices whose lines intersect at that crosspoint.

Another important disadvantage is that the crosspoints are inefficiently utilized. What do we mean by that? We have seen that for a 1000 by 1000 switch you require one million crosspoints. But are the one million crosspoints used efficiently? Unfortunately no: even when the switch is heavily used, maybe 40 to 50 crosspoints are in use while all the other crosspoints remain idle. Only a small fraction are engaged even when all of the attached devices are active. That means you have n square crosspoints, but even when all pairs are connected at the same time only n crosspoints are busy, so n square minus n crosspoints remain idle, and that is a large number. What is the solution to this? What is the alternative by which the efficiency can be increased? One better alternative is to use multistage switches.
(Refer Slide Time: 33:22)

What is done here is that by splitting the crossbar switch into smaller units and interconnecting them it is possible to build multistage switches with fewer crosspoints. The example shown here is a small 8 by 8 switch: we have eight inputs and eight outputs. Instead of a single-stage crossbar switch you have three stages of crossbar switches; as you can see, this is a 4 by 2 switch and this is a 2 by 2 switch, these are all crossbar switches and they are interlinked in this way.

Now how many crosspoints are there? If it were a single stage you would require 64 crosspoints. In this particular case each 4 by 2 input switch needs 4 into 2, that is 8 crosspoints; adding 8 for the corresponding 2 by 4 output switch gives 16, plus 4 for a 2 by 2 middle switch gives 20, and another 20 for the other half, so you require 40 crosspoints. This is the reduction. For a large crossbar switch, if you use multistage switches the reduction is even more significant. So in this example, as I have explained, the number of crosspoints needed reduces from 64 to 40.
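
The saving can be checked with a quick count. The Python sketch below compares the n-squared crosspoints of a single-stage switch with the total for a symmetric three-stage arrangement built from the block sizes described above; the formula is simply the sum of the crosspoints in the constituent small crossbars, under the stated assumption about how the stages are sized.

    def single_stage(n):
        return n * n

    def three_stage(n, groups, middles):
        """n inputs split into `groups` input switches of size (n/groups) x middles,
        `middles` middle switches of size groups x groups, and a mirror-image output stage."""
        per_group = n // groups
        input_stage = groups * per_group * middles
        middle_stage = middles * groups * groups
        output_stage = groups * middles * per_group
        return input_stage + middle_stage + output_stage

    print(single_stage(8), three_stage(8, groups=2, middles=2))   # 64 vs 40
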
(Refer Slide Time: 34:31)

Another important advantage is that there is more than one path through the network to connect two endpoints, and this increases the reliability. For example, suppose 1 wants to communicate with 5; the connection can be made through this path (Refer Slide Time: 35:03) or it can be made using this other path, so multiple paths exist, which increases the reliability. If there is a failure in one path, another path can be used for establishing the connection, which is an important advantage. Unfortunately there is a disadvantage: multistage switches may lead to blocking. This problem can be tackled by increasing the number or size of the intermediate switches, which also increases the cost. That means that although the probability of blocking is reduced, blocking remains possible in multistage switches. Let us see how it happens.
(Refer Slide Time: 35:52)

For example, we would like to establish connections from 1 to 3, 2 to 4, 3 to 6 and 4 to 8, as shown here. The 1 to 3 connection is established here, then the 2 to 4 connection is established in this manner, but now how can the connection be made for 3 to 6? We find that 3 and 4 cannot be connected to 6 and 8; the reason is that from this switch (Refer Slide Time: 36:26) there is no other path available at this moment for connecting 3 and 4 to 6 and 8, because this crossbar switch has two outputs and both outputs are already engaged for linking 1 to 3 and 2 to 4, and no other paths are available, so it leads to blocking. Thus whenever we use multistage switches the switch essentially becomes blocking, although the number of crosspoints is reduced. Now let us consider another implementation, based on Time Division Switching.
(Refer Slide Time: 37:02)

We have already discussed TDM; Time Division Switching is essentially based on Time Division Multiplexing, and here both voice and data can be transferred using digital signals. All modern circuit switches use digital Time Division Multiplexing techniques for establishing and maintaining circuits, and synchronous TDM allows multiple low-speed bit streams to share a high-speed line. We shall explain how exactly this is done.

A set of inputs is sampled in a round-robin manner; the samples are organized serially into slots to form a recurring frame of slots, as we saw in synchronous Time Division Multiplexing. Then, during successive time slots, different input/output pairings are enabled, allowing a number of connections to be carried over the shared bus. Let us see how exactly this is done.

To keep up with the input lines, the data rate on the bus must be high enough so that the slots recur sufficiently frequently; that means we have to use a high speed for the Time Division Multiplexed bus. For example, for 100 full-duplex lines at the rate of 19.2 kbps, the data rate on the bus must be greater than 1.92 Mbps. The source-destination pairs corresponding to all active connections are stored in the control memory (I shall explain how exactly this is done), so the slots need not carry the source and destination addresses, because that information is stored in the control memory.
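
The bus-speed requirement is just the sum of the input line rates; a few lines of Python with the figures quoted above make the check explicit.

    lines = 100
    line_rate_bps = 19_200                 # 19.2 kbps full-duplex lines
    bus_rate_bps = lines * line_rate_bps   # the shared bus must carry every input's slots
    print(bus_rate_bps / 1e6, "Mbps")      # 1.92 Mbps, so the bus must run faster than this
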
(Refer Slide Time: 38:09)

So here for example we have a simple TDM where the switching is not done.

(Refer Slide Time: 38:55)

In this case there are inputs 1, 2, 3 and 4, where 1 is sending data in slot A, 2 in slot B, 3 in slot C and 4 in slot D, and slot A's data is again delivered to output 1, slot B's data goes to 2, slot C's data goes to 3 and slot D's data goes to 4; this is simple Time Division Multiplexing. Obviously this will not serve our purpose, so we have to use something in between. The first thing that can be done is TSI; TSI stands for Time Slot Interchange.
What the Time Slot Interchange does is essentially interchange the slots. For example, a connection is needed from 1 to 4, so the data from 1 should go to 4; that data is therefore placed in slot 4. Then what comes from 2 goes to 1, so B goes into slot 1. Then 3 goes to 3, so there is no change there, and the data from 4 goes to 2, so D goes into slot 2. So it is 1 to 4, 2 to 1, 3 to 3 and 4 to 2; this is how the slots are interchanged, and the data now goes out in the proper order. You can see here BDCA (Refer Slide Time: 40:45), so all the data goes out in this manner; this is how the data communication takes place.

Now how exactly is it being done? It is done in this manner.

(Refer Slide Time: 41:09)

So here you can see the data comes in slots and is then stored in a memory. In this memory the writing is done in a sequential manner, that is in the order 1, 2, 3, 4, and then the reading is done selectively. The writing takes place sequentially and the reading takes place selectively; that is how the Time Slot Interchange is realized, and the data goes from 1 to 4, 2 to 1, 3 to 3 and 4 to 2. This is how it takes place in Time Slot Interchange.
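
The write-sequentially / read-selectively idea can be captured in a few lines of Python. In the sketch below the connection map {1: 4, 2: 1, 3: 3, 4: 2} is the example mapping used above, and the memory is simply a dictionary indexed by incoming slot number; the names are illustrative only.

    def time_slot_interchange(frame, connections):
        """frame: dict slot -> data, written sequentially as the samples arrive.
        connections: dict input_line -> output_line.
        Reading is done selectively so each sample leaves in its destination's slot."""
        memory = dict(frame)           # sequential write: slot i holds input i's sample
        out = {}
        for src, dst in connections.items():
            out[dst] = memory[src]     # selective read into the outgoing slot
        return out

    incoming = {1: 'A', 2: 'B', 3: 'C', 4: 'D'}
    print(time_slot_interchange(incoming, {1: 4, 2: 1, 3: 3, 4: 2}))
    # {4: 'A', 1: 'B', 3: 'C', 2: 'D'} -> read out in slot order: B, D, C, A
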
(Refer Slide Time: 41:50)

Another alternative is TDM bus switching. Here we use a high-speed bus: the data comes in at low speed on the input lines, which are connected to the high-speed bus through switches. For example, whenever the switch on input 1 is closed, the corresponding output switch, here 4, is closed, and whenever the switch on input 2 is closed its corresponding output switch is closed, and from the high-speed bus the data goes from that input to that output. The data rate on the bus is four times the input rate in this example, because the bus is Time Division Multiplexing the data of the four inputs, which is read out on outputs 1, 2, 3 and 4 in this fashion. So it is essentially some kind of Asynchronous Time Division Multiplexing that is being performed, but done in a different way: from the high-speed bus, with the help of the control unit, the reading takes place at different times. At different times the data is placed on the bus from inputs 1, 2, 3, 4, and at different times it is read by outputs 1, 2, 3, 4, in a possibly different sequence. This is how TDM bus switching operates.
(Refer Slide Time: 43:10)

However, you can combine space and time division switching. We have already discussed space division switching and time division switching, and they can be combined here. These are typical time division (TDM) switches, and here is a space division switch, which is nothing but a crossbar. As you can see, the interchange can be done with the help of this crossbar switch.

Earlier we saw that the interchange was done with the help of a memory or a fast bus; here the crossbar switch does the necessary interchange. That means from any one of the multiplexers the data comes here, and with the help of the crossbar switch it can be taken to any one of the demultiplexers; so here it is a multiplexer and there it is a demultiplexer. From any one of these multiplexers the data comes here, and with the help of the crossbar switch it can be taken to one of the demultiplexers, and as a consequence it can go to any one of these lines. This is how space and time division switching are combined in implementing circuit switch nodes. And as I mentioned, the most important application of circuit switching is in the telephone network.
(Refer Slide Time: 44:25)

I have already discussed the telephone network, which is organized in a hierarchical manner. As you know, you have end offices, and through the local loop these are connected to the end stations. So here you have the end stations, connected through the local loop to the end offices; the end offices are connected to the tandem offices through trunk lines, which are the lines where the FDM or TDM techniques are used, and then the tandem offices are connected to the regional offices.

So there is a hierarchy here: one end office connects a number of stations, and similarly a tandem office connects a number of end offices. These are the end offices (Refer Slide Time: 45:36), these are the stations or telephones, this is the end office, these are all local loops, and these end offices are connected to the tandem offices, which in turn are connected to the regional offices; several such tandem offices are connected to regional offices. In this way a hierarchical network is used in the Public Switched Telephone Network (PSTN).

Now let us consider the routing operation performed in circuit switched networks. As I mentioned, in a large circuit switched network connections often require a path through more than one switch; that is a typical property of the switched communication network.
(Refer Slide Time: 46:32)

And here the basic objectives are efficiency and resilience. What do we really mean by efficiency? We would like to utilize the minimum number of switches, since obviously we want to optimize the hardware cost. Resilience, on the other hand, means high reliability: for example, you may decide the switch capacity on the basis of average traffic, but there are situations where the traffic will be above average, and how the network handles that traffic decides its resilience. Whenever there is some failure in the network, even in such a situation the network should be able to carry on, perhaps with degraded performance. If a catastrophic situation occurs, like a flood or an earthquake, the traffic on the network suddenly increases, or even when some exam results are out there are obviously a lot of telephone calls; the network should be able to handle that, which requires resilience.

For routing purposes there are two basic approaches: one is known as static routing and the other as dynamic routing. In static routing, the routing function in public switched telecommunication networks has traditionally been quite simple and static, and as we have already seen the switches are organized as a tree structure. Since the network is organized as a tree, there is a fixed, well-defined path from one point to another through a number of switches. However, to add some resilience to the network, additional high-usage trunks are added that cut across the tree structure to connect exchanges with high volumes of traffic between them.
(Refer Slide Time: 48:32)

That means, to obtain some resilience, some additional high-usage trunks, that is some redundancies, are added so that failures or heavy traffic can be tackled. However, static routing cannot adapt to changing conditions. As I mentioned, conditions may change because of catastrophic or other special situations, and in these special situations static routing cannot cope; obviously this leads to congestion in case of failure. Whenever there is some failure it will lead to congestion, which means that connections cannot be established between users.

To overcome these limitations and to cope with growing demands, all providers presently use a dynamic approach, in which routing decisions are influenced by current traffic conditions.
(Refer Slide Time: 50:05)

In dynamic routing, the routing is not fixed and static; it is based on the current traffic conditions. The switching nodes have a peer relationship with each other rather than a hierarchical one. In such a case it is not a hierarchical network: the switching nodes have a peer relationship with all other nodes, they exchange information to find out traffic conditions and other things, and the routing is done based on that. Routing is therefore more complex, since it is not fixed but based on current conditions and has to tackle catastrophic or special situations, but it is also more flexible.

Two basic techniques are used: one is known as alternate routing and the other as adaptive routing. Let us see how they work. First let us consider the alternate routing approach. Here the possible routes between two end points are predetermined, and it is the responsibility of the originating switch to select the appropriate route for each call.
(Refer Slide Time: 51:36)

That means in this case the routes are decided on the basis of busy traffic hours. For example, in the morning hours people are going to offices or schools and in the evening hours they are returning, and based on that, just as the signal timings can be set in normal road traffic control, something similar can be done here: based on the statistics of history, the direction of traffic can be decided for a particular time. In practice, a different set of preplanned routes is usually used for different time periods; for example, in the morning period one set of routes is used and in the evening period another, depending on the traffic conditions, and that can be based on history and statistics. So it takes advantage of the different traffic patterns in different time zones and at different times of the day.

(Refer Slide Time: 52)


This is particularly appropriate in the USA, for example, which has different time zones; office hours differ from place to place, so the routes can be set up in a predetermined manner accordingly, and the routing can be done depending on the date and the hour of the day.

(Refer Slide Time: 53:36)

Then we have the alternate routing approach. In the alternate routing approach several alternate routes are predetermined and used for communication. For example, you can have fixed alternate routing: from A to D the route can be from A to node 1, then to 2, then to 6, and then to D, that is through the nodes 1, 2 and 6. But it can also be dynamic alternate routing: for example, another alternate route, instead of 1, 2, 6, can be 1, 3, 5 and 6. These alternative routes are available; perhaps initially the shortest route is tried, and if there is blocking then the alternate routes are explored.

On the other hand, the adaptive routing approach is designed to enable switches to react to changing traffic conditions on the network. It involves more management overhead, since switches must exchange information, but it has the potential for more effectively optimizing the use of network resources.
(Refer Slide Time: 54:55)

For example, dynamic traffic management is used by some telephone companies: a central controller collects data at intervals of ten seconds to determine preferred alternate routes. It gathers statistics and then the routes are decided. Apart from routing, the nodes must exchange control signals to manage the network, by which calls are established, maintained and terminated. To perform these functions of establishment, maintenance and termination, various types of signals have to be generated.

For example, there are supervisory signals, which essentially indicate the availability of resources; address signals, for example the telephone number of a particular station, which has to be sent, so this is the address that has to be communicated; call information signals, such as whether the called party is busy; and network management signals, which are used for the purpose of maintenance and management of the network.
(Refer Slide Time: 56:10)

Signaling can be done in two different ways: in-channel and common channel. In-channel signaling can be in-band or out-of-band. In in-band signaling the same band of frequencies used by the voice signals is used to transmit the control signals; in out-of-band signaling a different part of the frequency band is used, but over the same facilities as the voice signal.

(Refer Slide Time: 56:25)

On the other hand, in common channel signaling, dedicated channels are used to transmit the control signals, and these channels are common to a number of voice channels. So we have discussed the circuit switching technique, and it is now time to give you the review questions. Here are the five review questions.
(Refer Slide Time: 57:02)

1) What are the three steps involved in data communication through circuit switching?
2) Mention the key advantages and disadvantages of the circuit switching technique.
3) Why is data communication through circuit switching not efficient?
4) Compare the performance of a space division single-stage switch with a multistage switch.
5) Distinguish between in-channel and common channel signaling techniques used in circuit switched networks.

Now I shall quickly give you the answers to the questions of Lecture 17.
(Refer Slide Time: 57:37)

1) Why are asynchronous protocols losing popularity?

Asynchronous protocols are losing popularity because of their high overhead of more than twenty percent, as we have explained in detail; synchronous protocols, on the other hand, have significantly lower overhead. That is why asynchronous protocols are losing popularity.

(Refer Slide Time: 58:03)

2) Why are bit-oriented protocols gaining popularity?

Bit-oriented protocols allow packing more information compared to character-oriented protocols; that is why bit-oriented protocols are gaining popularity.

(Refer Slide Time: 58:21)

3) In HDLC what is bit stuffing and why is it needed?

The presence of the bit sequence 01111110 used as flag to indicate the start and end of a
frame may lead to division of a single frame into more than one frame wrongly. This
problem is overcome by using bit stuffing. A 0 is introduced after each occurrence of five
consecutive ones in the information field of a frame.

(Refer Slide Time: 58:53)


4) What is piggybacking and how is it used in HDLC?

As I have mentioned, the acknowledgement can be sent along with an information frame going in the other direction; that is what is exploited in piggybacking. So with this we come to the end of the lecture, thank you.
Data Communications
Prof. A. Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Switching Techniques: Packet Switching
Lecture 19

Hello viewers, welcome to today's lecture on packet switching. In the last lecture I introduced the switched communication network; in a switched communication network different types of switching are done, and one of them is circuit switching. In the last lecture I discussed the circuit switching technique in detail: how circuit switching is done, its advantages and limitations.

Now in this lecture I shall first identify the limitations of the circuit switching technique, which will give you the background for packet switching. However, before I discuss packet switching there is another technique, known as message switching, which can be considered a first step towards packet switching. So in this lecture we shall discuss packet switching, and here is the outline of the lecture.

(Refer Slide Time: 01:49)

As I mentioned, first I shall discuss the limitations of circuit switching, then I shall introduce the message switching concepts and explain how message switching overcomes some of the limitations of the circuit switching technique. Then there are some disadvantages of the message switching technique which can be overcome by using packet switching, so I shall introduce the packet switching concepts, and as we shall see there are two different techniques used in packet switching: one is known as virtual circuit packet switching and the other as datagram packet switching. We shall also compare datagram and virtual circuit packet switching, their relative advantages and disadvantages, and then we shall see the possibility of combining datagram and virtual circuit; for example, it can be internal datagram with external virtual circuit and so on, giving four combinations. Finally I shall conclude the lecture with a discussion of circuit switching versus packet switching.

(Refer Slide Time: 03:13)

On completion of this lecture the students will be able to understand the need for packet switching, understand how packet switching takes place, understand the different types of packet switching techniques used, namely datagram and virtual circuit, distinguish between virtual circuit and datagram packet switching, and finally compare circuit switching with packet switching. So let me first start with the problems present in circuit switching. As we have seen in the last lecture, the network resources are dedicated to a particular connection in circuit switching. Before any data transfer can be done a path has to be established, and that path remains active until the connection is terminated, whether data is sent or not; it is fully dedicated to that particular session.

There are two important shortcomings, particularly for data communication. First, in a typical user-host data connection the utilization is very low. As we have seen, even for voice the utilization is not very high; when the traffic is bursty, as happens in a user-host data connection, the utilization will be very low. Second, circuit switching provides data transmission at a constant rate; that means it does not allow a variable data rate.
(Refer Slide Time: 03:49)

As we have seen, the links between the nodes have a much higher data rate capability through the use of TDM and FDM techniques, and moreover, with transmission media of higher bandwidth such as optical fiber and coaxial cable it is possible to transmit at a higher rate between intermediate points; but that higher rate cannot be exploited in circuit switching.

Obviously a data transmission pattern which is bursty in nature may not ensure this constant rate of flow, and ultimately this limits the utility of the circuit switching technique. These limitations can be overcome by using message switching. The basic idea of message switching is that each network node receives and stores the message, determines the next leg of the route, and queues the message to go out on that link. It is essentially a store and forward approach: the message is stored, which means the nodes are provided with some buffer, and it is held until a link towards the destination is available. The node therefore has to do routing, and on the chosen route the messages are sent out from a queue.

Here the advantage is that line efficiency is greater, because with this store and forward approach a particular link can at any point of time be shared by a number of source-destination pairs. A message is sent, and the next moment another message is sent; a queue is formed and messages are sent from that queue, and as a consequence the link is shared.
(Refer Slide Time: 05:52)

Data rate conversion is possible. Let us look at the diagram (Refer Slide Time: 7:28): for example, this is station A which is sending data to station D, so this is the source and this is the destination. As we can see, each node is provided with storage, shown by this symbol, which means there is a storage capability at each node able to hold the messages. A will deliver the message to node 1; then from node 1, if the destination is D, it can go via this route from 1 to 2, that is, when the link is free the message is sent to node 2; then again it is stored there until the next link is available. In both these places the messages are stored and queued, and then the message reaches the node to which the destination station is connected, node 6, from where it goes to station D. This is how it takes place.
(Refer Slide Time: 07:27)

And as a consequence as you can see data rate conversion is possible.

(Refer Slide Time: 08:30)

That means, for example, this may be a slow link from A to node 1 (Refer Slide Time: 8:38), so here the data transfer rate can be small; on the other hand, when the data is sent from node 1 to node 2 it can go at a much higher rate. Not only can it go at a much higher rate, one can also use different encoding techniques.

For example, depending on whether it is an analog line or a digital line, one can do encoding or modulation as appropriate. So data rate conversion is possible and encoding can also be done. Moreover, even under heavy traffic, messages are accepted, possibly with a greater delivery delay. In circuit switching, under heavy load a path may not be available and the call may be blocked, but in message switching, even under heavy load the node will keep on accepting messages; the only outcome will be a delay in delivery and nothing more than that.

Another important characteristic is that whenever the messages are queued and stored it is possible to assign priorities to them. That means high-priority traffic, such as real-time traffic, can be given higher priority and sent before ordinary data traffic. These are the advantages of the message switching technique. Let us see how exactly it works.

(Refer Slide Time: 10:28)

Here, for example, a message goes from A to C. First it goes from node 1 to node 2; the entire message is sent and then stored there for some duration, and as you can see there is some delay in this direction (time runs along this axis). Then it is stored again for some duration before it goes from node 4 to node 5, to which the destination station is attached. This is how a message goes from one node to another; at each place it is stored, there is some delay, and then the entire message is forwarded. The thickness or width of this line essentially shows the volume or size of the message; the bigger the message, the wider this line. This is how the message communication takes place.

Now, what is the disadvantage of this? Message switching obviously overcomes some of the limitations of circuit switching, but is there any disadvantage? The disadvantage is that a message can be very large; there is no restriction on its size. In such a case it monopolizes the links and the storage.
(Refer Slide Time: 11:45)

That means whenever a large message is stored at a node it occupies a large storage area, so messages coming from other directions cannot be stored there; it monopolizes the storage. Similarly, the transfer of a very large message may take a very long time, monopolizing the link; so sending very large messages not only leads to long delays but also monopolizes the link and the storage. This is the disadvantage of message switching, and that is why the message switching concept has been extended into another technique known as packet switching.

This new form of architecture for long distance communication has been in use since around 1970; it was first used in the ARPANET, which used packet switching to overcome the limitations of circuit switching. The packet switching technology has evolved over time: although it was in use in the 1970s, over the years the technology has improved, but the basic concepts have not changed, and packet switching remains one of the few effective technologies for long distance data communication.
(Refer Slide Time: 12:44)

So nowadays, particularly for wide area networks, packet switching is the technique
used in the internet. Let us see how it is done.

(Refer Slide Time: 13:36)

First of all, data is transmitted in short packets, a few kilobytes each, not very big, so that
it can overcome the limitation of message switching. A longer message is broken up into a
series of packets. If you have got a long message, as shown here (Refer Slide Time: 13:57),
it is broken down into several packets such as packet 1, 2, up to packet n. Every
packet contains some control information in its header, required for routing and other
purposes. The header will contain information like the source address, the destination
address and the sequence number; this information is necessary for control. So each
such packet is provided with this information: source address, destination address and
sequence number.

Not only this, but if it is necessary to perform error detection and error control then the
packet must be provided with a trailer, and the trailer usually carries a CRC. That means a
packet may have a header, then the data of the packet, and then a trailer. This is the format
of a frame or a packet, and this is how a message is broken down into a number of packets;
here only the header is shown, however a trailer may also be present, as in the sketch below.
So, as I mentioned earlier, a packet switching network breaks up a message into packets, and
there are two approaches commonly used for handling these packets: one is known as
virtual circuit and the other is known as datagram.
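
As a minimal sketch of this idea, the following Python fragment breaks a message into
fixed-size packets, each carrying a header (source address, destination address, sequence
number) and a CRC trailer. The field names and sizes are illustrative assumptions, not
something given in the lecture.

import zlib

def make_packets(message: bytes, src: str, dst: str, payload_size: int = 8):
    """Split a message into packets: header + payload + CRC trailer (illustrative)."""
    packets = []
    for seq, start in enumerate(range(0, len(message), payload_size)):
        payload = message[start:start + payload_size]
        header = {"src": src, "dst": dst, "seq": seq}   # control information for routing
        trailer = zlib.crc32(payload)                   # CRC for error detection
        packets.append({"header": header, "data": payload, "crc": trailer})
    return packets

# Example: a 32-byte message broken into 8-byte payloads gives 4 packets
pkts = make_packets(b"A" * 32, src="A", dst="C")
print(len(pkts), pkts[0]["header"])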

(Refer Slide Time: 15:28)

First we shall consider the virtual circuit approach.


(Refer Slide Time: 15:45)

In the virtual circuit approach a preplanned route is established before any packets are sent.
It is somewhat similar to circuit switching, which is why it is called the virtual circuit
approach; essentially it is still packet switching, and again the store and forward approach is used.

However, just like circuit switching, some route is established: there is a preplanned route,
which is why it is called a virtual circuit, although effectively it is the store and forward
approach of packet switching. Call request and call accept packets are used to establish the
connection, and then the route is fixed for the duration of the logical connection, like circuit
switching. Here there is no dedicated path, only the route is fixed; that does not mean the
links are not shared. They are shared by other source-destination pairs, but all packets of the
connection follow the same route, and each packet contains a virtual circuit identifier as well as data.

That means along with the data a virtual circuit identifier is provided, and that virtual
circuit identifier is used to route the packet through the packet switched network; each
node on the route knows where to forward the packets. That means the node need not
really perform routing: after reading the virtual circuit identifier it simply finds out in which
direction to forward the packet. Finally, a clear request packet issued by one of the two
stations terminates the connection.
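
To make the forwarding step concrete, here is a minimal, hypothetical Python sketch of how
a node might use a virtual circuit identifier; the table contents, link names and identifier
values are invented for illustration and are not taken from the lecture.

# Hypothetical per-node VC table: (incoming link, incoming VCI) -> (outgoing link, outgoing VCI)
vc_table = {
    ("link_from_1", 7): ("link_to_4", 12),
    ("link_from_1", 9): ("link_to_6", 3),
}

def forward(incoming_link, vci, data):
    """Forward a packet using only the VC identifier; no routing decision is made here."""
    out_link, out_vci = vc_table[(incoming_link, vci)]   # simple table lookup
    print("forwarding", len(data), "bytes on", out_link, "with outgoing VCI", out_vci)

forward("link_from_1", 7, b"payload")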

So, just like circuit switching, it also has three distinct phases. First of all a route is
established, then the packet transfer takes place, and then with the help of a clear request the
route is terminated. Let us see how it really works. From A the packets have to go to C
through nodes 1, 2, 4 and 5. Through the switched network, as you can see, a call request
packet is sent, and since it is a packet this call request packet is also stored and forwarded
here, so there will be some delay before the call request packet is sent to the next node.
(Refer Slide Time: 17:47)

Again the call request packet goes to the next node (Refer Slide Time: 18:15) and
ultimately the other side will send a call accept packet. Call accept packets are also stored
and forwarded just like any other packet; they go in the reverse direction, and when the call
accept reaches the source node where the packets are originating, the packets may be sent
one after the other.

As you can see, now the thickness is smaller than that of the message, so this essentially
shows the size of a packet. One packet is sent from node 1 to 2; then, as you see here, two
packets are going, from 1 to 2 and from 2 to 4, so some kind of parallelism is being
achieved; then three packets of the same message are in transit; and finally, in this way, the
packets reach the destination. So a single message has been divided into three packets, all
three packets travel in this manner, and because parallelism is possible it takes much shorter
time.

Finally an acknowledgement packet comes and the link is terminated. In this way (Refer
Slide Time: 19:34) the acknowledgement packet travels in the reverse direction, and as it
reaches the source node the connection can be terminated. This is how virtual circuit packet
switching works. One main advantage is that different packets of the same message can go
in parallel, somewhat similar to the pipelining used in processors. So these are the main
characteristics of virtual circuit packet switching.

The route between stations is set up prior to data transfer, as I have already shown. A packet
is buffered at each node and queued for output over a line. That means, just like message
switching, the packets are buffered, but since the packets are of smaller size a packet does
not monopolize the buffer, it does not occupy a large storage area, and it does not take a
long transmission time; as a consequence the link is also not monopolized. A data packet
needs to carry only the virtual circuit identifier for the forwarding decision: the virtual
circuit identifier is used for the purpose of routing, and the nodes do not perform any
routing operation except using the virtual circuit identifier for forwarding the packets. So, as
I mentioned, intermediate nodes take no routing decisions.

(Refer Slide Time: 20:10)

Another point is that it often provides sequencing and error control. Some kind of
sequencing and error control may be provided; in that case a CRC is included and a suitable
ARQ technique can be used.
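
As a small illustrative sketch (my own example, not from the lecture), a receiving node
could verify the CRC carried in a packet's trailer before acknowledging it; the packet layout
follows the hypothetical format sketched earlier.

import zlib

def crc_ok(packet: dict) -> bool:
    """Recompute the CRC over the data and compare with the trailer value."""
    return zlib.crc32(packet["data"]) == packet["crc"]

pkt = {"data": b"hello", "crc": zlib.crc32(b"hello")}
print(crc_ok(pkt))   # True -> send ACK; False -> request retransmission (ARQ)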

Now, coming to the datagram approach, it is different from the virtual circuit approach.

(Refer Slide Time: 21:29)


Here each packet is treated independently, with no reference to packets that have gone
before. In the previous case we have seen that all the packets of the same message go
through the same route, but here it is not so; each packet is treated independently as a
separate entity, just like in the postal system. Suppose you send five volumes of a book
through the post office, say volume I today, volume II after two days, volume III after three
days, volume IV after four days and volume V after five days. At the destination there is a
possibility that volume III will reach earlier than volume I, or volume V will reach before
volume IV. Similarly, in the datagram approach this situation can happen, just like in the
postal system. And here every intermediate node has to take a routing decision, because
each packet is treated independently and there is no predetermined route; as a consequence
each node has to take a routing decision, hence there is an overhead on the node to perform
routing.

However, this routing decision can be taken with the help of the source and destination
addresses provided as part of the packet. So the packet should contain information about the
source address and the destination address, and based on that the routing decision is
performed by the intermediate nodes.

Usually intermediate nodes maintain routing tables. There are various routing algorithms
and routing techniques; we shall discuss them in the next lecture.

Let us see how datagram packet switching works. Here, as you see (Refer Slide Time:
23:49), a packet is going from 1 to 2; then two packets are in flight, one from 1 to 2 while
the previous one goes from 2 to 4; then three packets are going, from 1 to 2, 2 to 4 and 4 to
5, so packets of the same message are in transit simultaneously. Also, there is no call request
or call setup delay, so each of these packets can go independently.

(Refer Slide Time: 23:43)


What is the advantage?

The advantage of the datagram approach is that the call setup phase is fully avoided; there is
no call set up time required as in circuit switching or virtual circuit switching. So, for the
transmission of packets, datagram will be faster; it will take a shorter time because it is more
primitive and more flexible. It is quite flexible because a congested or failed link can be
avoided, so it is also more reliable.

So here the advantage is this: in virtual circuit, as we have seen, each and every packet of the
same message has to go through the same route. Suppose a link fails in between; in such a
case the subsequent packets cannot be delivered, cannot reach the destination. Or, if there is
congestion at some node on the path, the packet delivery will take a long time. This can be
avoided in the datagram approach because it is more flexible: each node can take the routing
decision.

So, if there is any failed link or any congestion in the path of a packet, the node will simply
send the packet through some other node; it is not necessary that packet 1 of a message goes
through a particular route and packet 2 has to follow the same route. It can go through a
different route, and as a result different packets can arrive out of order. Packets may be
delivered out of order, so this is a disadvantage.

That means, since the packets may be delivered out of order, the receiver must store all the
packets and then reorder them to form the message. The message has to be formed by
combining the packets before it can be forwarded to the upper layer of the software. So the
destination node will require more processing power and storage space so that all the
received packets can be stored, combined to form the message, and then forwarded to the
upper layers.
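
The reassembly step can be sketched very simply. The following Python fragment, an
illustration of the idea with an invented packet structure, sorts received packets by sequence
number before handing the message to the upper layer.

def reassemble(received):
    """Reorder datagram packets by sequence number and rebuild the original message."""
    ordered = sorted(received, key=lambda p: p["header"]["seq"])
    return b"".join(p["data"] for p in ordered)

# Packets arriving out of order, e.g. with sequence numbers 2, 0, 1
packets = [
    {"header": {"seq": 2}, "data": b"NET"},
    {"header": {"seq": 0}, "data": b"HEL"},
    {"header": {"seq": 1}, "data": b"LO "},
]
print(reassemble(packets))   # b'HELLO NET'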

(Refer Slide Time: 24:22)


As a consequence, if a node crashes momentarily while packets are arriving, all of the
packets queued at that node may be lost. So these are the two problems or disadvantages of
the datagram approach.

Let us now compare datagram packet switching with virtual circuit packet
switching.

(Refer Slide Time: 27:25)

In a virtual circuit the node need not decide the route: the virtual circuit identifier, generated
while the route is set up using the call request and call accept packets, is used for
forwarding, so the nodes can be very simple in this case. However, it is difficult to adapt to
congestion: if there is any congestion in the network it cannot really be taken care of,
because all the packets have to go through the same route whether there is congestion or
not. It also maintains the sequence order.

As I mentioned, since all the packets go through the same route, all the packets are delivered
in the same order, so the packets need not be re-ordered at the destination station. As a
result, each packet can be forwarded to the upper layer without waiting to form the complete
message, which is an advantage. The typical characteristic of the virtual circuit is that all
packets are sent through the same preplanned route. On the other hand, in datagram service
each packet is treated independently: we have seen that although they belong to the same
message, each of them is considered a separate entity, and as a consequence, for the purpose
of routing, each of the packets can be sent independently through different routes depending
on availability of paths, congestion, and failure of links.

The call set up phase is avoided, and as it is not required the delay will be less; datagram is
also inherently more flexible and reliable, because it can adapt to failures and can deliver
the packets more reliably compared to virtual circuit.
Now there is a question about the packet size.

(Refer Slide Time: 29:54)

We have seen that a message is broken down into smaller packets; the question is how
small, should it be very small? Obviously that cannot be done. As packet size decreases, the
transmission time reduces, until the packet size becomes comparable to the size of the
control information. So there is a close relationship between packet size and transmission
time: if the size of the packet is large, the transmission time will be longer, and if the size of
the packet is small, the transmission time is smaller. On the other hand there is another
important parameter. As I said, a packet will have a header and a trailer, and the header and
trailer bytes are overhead; thus if the packet size is very small then the overhead will be
much higher, although the transmission time of each packet will be short.

Let us take up an example to illustrate the situation. Let us assume that there is a
virtual circuit from station A through nodes 1, 2, 4 and 5 to station C. The message size is
32 bytes and the packet header is 4 bytes. When the number of packets is 1, essentially it is
message switching, and the total number of bytes sent is 108, as shown in this diagram.
(Refer Slide Time: 31:40)

For example, this is the header of four bytes and here is the data of 32 bytes, so 4 plus 32 is
equal to 36; then it requires three hops, so 36 into 3 is equal to 108, hence that many bytes
are to be transmitted, which determines the total transmission time. We are assuming here
that there is no time needed for storage, there is no other processing time, and the
propagation time is very, very small. So, assuming the propagation time is negligible, the
transmission time will be the time required to send 108 bytes; depending on the data rate
this time will be different.

Now the message is divided into two packets, as shown in this diagram (Refer Slide Time:
32:34). Each packet carries its own header along with its share of the data (data 1 and
data 2), and the total number of bytes to be transmitted through the network becomes 120.
However, because of parallelism, one packet is being transmitted on one hop while the other
packet is being transmitted on the next hop, and as a consequence the total transmission
time becomes shorter. So here we find the transmission time has decreased: although more
bytes are transmitted through the network, the overall source-to-destination transmission
time has reduced from 108 to 80, in normalized units.
(Refer Slide Time: 31:41)

If we go further and divide the message into four packets in the same manner, we find that
the transmission time required will be 72. However, if we increase the number of packets
further and divide the message into eight packets, we find that the total number of bytes
becomes 192, and the transmission time now increases to 80, because the benefit of
parallelism is offset by the larger number of bytes: the header overhead is now responsible
for a longer transmission time.

If the message is divided into sixteen packets, the total number of bytes to be transmitted is
288 and the transmission time is 120. What does this indicate? It indicates that if we plot a
curve with the packet size on one axis and the transmission time on the other (Refer Slide
Time: 35:01), then for very small packets it takes a longer time and for very large packets it
also takes a longer time, so there is an optimum packet size for which the transmission time
is smallest.

So the transmission time is smallest for some intermediate value, some value in between the
maximum and the minimum. We find that in this example, when the message is divided into
four packets of eight bytes of data each, it gives the minimum transmission time. This
clearly shows that the packet should not be too large, which is why we have moved from
message switching to packet switching, and the packet should not be too small either,
because then the transmission time again becomes longer.
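
The numbers above can be reproduced with a small calculation. The sketch below, written
as a simple Python function under the lecture's assumptions (store-and-forward over 3 hops,
negligible propagation and processing time, 32-byte message, 4-byte header), computes the
normalized transmission time for 1, 2, 4 and 8 packets.

def transmission_time(message_bytes, header_bytes, hops, n_packets):
    """Normalized source-to-destination time: first packet crosses all hops,
    remaining packets follow one packet-time apart (store-and-forward pipeline)."""
    payload = message_bytes // n_packets
    packet = payload + header_bytes
    return hops * packet + (n_packets - 1) * packet

for n in (1, 2, 4, 8):
    print(n, "packets ->", transmission_time(32, 4, 3, n))
# Prints 108, 80, 72 and 80, matching the figures discussed above.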

Now, as I mentioned, there is the question of external and internal operation, that is,
whether we are using virtual circuit or datagram at each level.
(Refer Slide Time: 36:05)

We have seen that in the network the stations are connected to nodes and the nodes also
communicate with one another. The station-to-node connection is external to the switched
communication network, whereas node-to-node communication is internal to the switched
communication network. That is why external means from station to node and internal
means from node to node; in both cases the operation can be virtual circuit or datagram, so
there are two dimensions of the problem. At the interface between a station and a network
node we may have connection-oriented or connectionless service, and internally the
network may use virtual circuits or datagrams. This leads to four different scenarios; let us
explain each of them one after the other.

(Refer Slide Time: 37:14)


The first one is external VC and internal VC, so in both cases it is virtual circuit. Whenever
it is external VC and internal VC, a virtual path is created end to end; for example, we are
sending from A to D and from A to C, so one virtual circuit is A to 1 to 2 to 6 to D, and
another virtual circuit is A to 1 to 3 to 5 to C. These are the virtual circuits. It could be the
other way as well, say 1 to 2 to 6 to 5 to C is also possible, so there can be overlapping, but
here incidentally these routes do not overlap.

Therefore we find that the virtual circuit is created end to end, that is, station to station. In
such a case all the packets originating from A for C and D will carry sequence numbers and
will follow these routes: the packets with destination D will go from A to 1 to 2 to 6 to D,
while all the packets with destination address C will go from A to 1 to 3 to 5 to C, so they
follow the same route depending on the destination. Hence this is external VC and internal
VC.

(Refer Slide Time: 38:55)

Then comes the second scenario: external virtual circuit and internal datagram, where the
network handles each packet independently. Since the external interface is a virtual circuit,
the packets are handed to the network in order. However, since internally it is datagram, at
any point one packet can go along one route, another packet along another route, and the
third packet along yet another route. Similarly, the packets 2.1, 2.2 and 2.3, as they enter the
switched communication network, may follow different routes. However, the network
buffers packets if necessary so that they are delivered to the destination in the proper order.

That means these packets are buffered at the exit node: although within the network they
arrive in a different sequence, they are delivered in order. As you can see, in the same order
in which they were sent from A they have reached the destination D as 1.1, 1.2 and 1.3,
because they were stored and buffered at node 6 and then put back in the proper sequence.
Similarly, the packets with destination C have been stored at node 5 and have been ordered
properly as 2.1, 2.2 and 2.3, although they may have come through different routes. Hence
this is external VC and internal datagram.

(Refer Slide Time: 40:44)

Coming to scenario three, external DG and internal DG, here the packets can be handed to
the network in any order, and once they enter the network they may also go through
different routes inside it. Since it is external datagram, the nodes do not buffer the packets or
arrange them in proper order; the way a packet is received is the way it is sent on to the
destination. As a consequence we find that the packets have reached the destination out of
order: although 1.3 was sent at the end, it has reached the destination earlier than 1.1 and
1.2. Similarly, 2.2 has reached earlier than 2.1, although 2.1 was given to node 1 earlier than
2.2.

Therefore in both cases the packets have reached the destination out of order, which is
possible with external datagram and internal datagram. Since datagram is used in both
places, each packet is treated independently both from the user's end and from the network's
point of view, and that is why the packets reach the destination out of order.
(Refer Slide Time: 42:15)

Now let us come to scenario four: external datagram and internal virtual circuit. External
datagram means the packets can be given to the network in any order; however, once they
are given in a particular order, inside the switched communication network there is a virtual
circuit, and since the packets are given in the order D.1, D.2 and D.3, as you can see (Refer
Slide Time: 42:48) they go through the network along the same path and in the same
sequence through the same virtual circuit. So here also, the order in which the packets are
provided to the node is the order in which they travel, because inside it is a virtual circuit.

Similarly, from A to C you will find that C.1, C.2 and C.3 have been presented in the same
order to node 1, and you can see that C.1 has reached earlier than C.2, which has reached
earlier than C.3, and so on. This is the route being followed through the virtual circuit
created between node 1 and node 5, although externally it is datagram.

Therefore we have considered all four combinations of external and internal operation,
virtual circuit and datagram. We have seen how the internal paths are created and how the
packets are delivered at the destination in each case.
(Refer Slide Time: 43:55)

Now we have come to the concluding part of the lecture, where we compare the three
techniques: circuit switching, datagram packet switching and virtual circuit packet
switching, because these three have emerged as the key techniques used in switched
communication networks. In circuit switching we have seen that a dedicated path is created
from source to destination, and that path remains in place and is used for sending the
message.

In datagram packet switching no dedicated path is created and, as we have seen, each packet
is treated independently. In virtual circuit packet switching also no dedicated path is created,
because in both these cases the paths can be shared by other packets or other
source-destination pairs. In circuit switching, as we have seen, the path is established for the
entire conversation: just as in your telephone network, when you dial and the called party
picks up the receiver, the link is established, and for the entire conversation the link remains
established whether somebody talks or not.

On the other hand, in datagram packet switching a route is established for each packet. We
have seen that each packet is treated independently in datagram packet switching, and as a
result the route is established for each packet separately. In virtual circuit packet switching
the route is established for the entire conversation, that is, for all the packets of the same
message. These are the differences between the three.

Now let us come to the call set up delay. We have seen that circuit switching has a call set
up delay: it is necessary for establishing the path before any data transfer. In datagram
packet switching there is no call set up delay, but there is a packet transmission delay: the
packets are stored and then forwarded, and if there is a big queue there may be delay in
transmission. In virtual circuit packet switching there is both a call set up delay and a packet
transmission delay.

Then coming to the fourth point, overload: in circuit switching, overload may block call set
up. That means if the circuit switched network is of blocking type, for example when it is
implemented with multistage crossbar switches, then under heavy traffic it may not be
possible to set up the link; since even the path cannot be set up, data transfer is not possible.

But in datagram packet switching, even when there is overload, only the delay will increase;
packets will still be accepted by the network nodes and stored in queues, although under
overload there may be a long delay. In virtual circuit packet switching, overload may block
call set up, just like circuit switching; if a path is not available the call set up cannot be
done, and overload also increases the packet delay.

So, in the context of overload (Refer Slide Time: 48:06), virtual circuit packet switching is
not really very good, because overload may lead to blocking of the call set up and also to
delay.

In circuit switching there is no speed or code conversion. Since all the data has to go
through the same dedicated path, a guaranteed bandwidth is provided, big or small, and no
conversion is possible in between. So if the sender is sending at the rate of 64 Kbps, then
over the entire path the data will go at the rate of 64 Kbps. On the other hand, in datagram
packet switching speed or code conversion is possible.

As I mentioned, from the station to the node, if it is an analog line using a modem, you can
use a suitable modulation technique; on the other hand, from one node to another node there
can be time division multiplexing and the data transfer can take place at a much higher rate,
and code conversion is also possible. So various encoding techniques and different speeds
are possible at intermediate nodes, which is true not only for datagram packet switching but
also for virtual circuit packet switching.

The circuit switching technique gives you a fixed bandwidth: once the connection is set up,
it provides that fixed bandwidth. On the other hand, in virtual circuit packet switching and
datagram packet switching the bandwidth can be dynamic, because on different links you
can have different speeds and conversions, so dynamic bandwidth is possible in datagram
and virtual circuit packet switching. The only thing is, since the links are shared, the
available bandwidth will depend on the load.

In circuit switching, as we have seen, there are no overhead bits after call set up. After the
path is set up, for example in a telephone connection, once the connection is established you
can simply talk or sing and do whatever you want and there is no extra overhead.
On the other hand, in datagram packet switching you have to do routing, so you have to
provide the source address, destination address and sequence number for control purposes,
and for error detection and control you have to include a CRC. The same is true for virtual
circuit packet switching: in both cases overhead bits are present in each packet, not only
during call set up but subsequently as well.

This gives you the comparison of the three switching techniques that we have discussed:
circuit switching, which we discussed in the last lecture, and datagram packet switching and
virtual circuit packet switching, which we have discussed in this lecture. So now it is time
for the review questions.

(Refer Slide Time: 51:38)

1) How is the drawback of circuit switching overcome in message switching?


2) What is the drawback of message switching and how is it overcome in packet switching?
3) What are the key differences between datagram and virtual circuit packet switching?
4) Distinguish between circuit switching and virtual circuit packet switching.
5) How does packet size affect the transmission time in a packet switching network?

These are the five questions which will be answered in the next lecture.
(Refer Slide Time: 52:22)

Let us see the answers to the questions of Lecture 18.

1) What are the three steps involved in data communications through circuit switching?
As we have seen, there are three basic steps. The first step is circuit establishment, which is
performed with the help of the call request and call acknowledgement signals. Then we
have circuit maintenance: during the data transfer, while the data transfer is going on, the
circuit has to be maintained for the entire duration, and this circuit maintenance operation is
performed using suitable signals. Then comes the third step, circuit disconnect: when the
data transfer is over, the circuit has to be disconnected. These are the three steps in circuit
switching, as we have discussed in detail.
(Refer Slide Time: 53:35)

2) Mention the key advantages and disadvantages of circuit switching technique.

The advantages of the circuit switching technique I have emphasized many times. The
advantages are: after the path is established, data communication takes place without delay;
the only delay involved is the propagation time. It is very suitable for continuous traffic:
obviously circuit switching is very suitable when stream or continuous traffic is being sent
from source to destination. It establishes a dedicated path and there is no overhead after call
set up. Also, it is transparent and data transfer is in order.

As I mentioned, once a circuit switched connection is established, whatever is sent reaches
the destination in the same order. So if you send data as packets, the packets will reach in
order; this is the advantage of the circuit switching technique.

The disadvantages are: it involves an initial delay for setting up the call; as I mentioned, it is
inefficient for bursty traffic; the data rate has to be the same at both ends because of the
fixed bandwidth; and when the load increases some calls may be blocked, that is, it may not
be possible to set up the link.
(Refer Slide Time: 55:26)

3) Why is data communication through circuit switching not efficient?

In data communication the traffic between a terminal and a server is not continuous:
sometimes a lot of data may come and sometimes there is no data at all. Circuit switching is
not efficient because of its fixed bandwidth. Circuit switching provides a fixed bandwidth,
but in data communication, because of the bursty nature of the traffic, this fixed bandwidth
is not suitable; rather, dynamic bandwidth is needed, which cannot be provided by circuit
switching, and that is why it is not efficient.

(Refer Slide Time: 56:06)


4) Compare the performance of a space-division single-stage switch with a multi-stage
switch.
A space-division single-stage switch requires a larger number of crosspoints, as we have
seen; it is nonblocking in nature but provides no redundant paths.

On the other hand, multi-stage switches require a smaller number of crosspoints; they can
be blocking in nature but they provide redundant paths. So the single-stage switch has more
crosspoints while the multi-stage switch has fewer (Refer Slide Time: 56:42); one is
nonblocking while the other is blocking; one has no redundant path while the other provides
redundant paths. Thus there are advantages and disadvantages in both cases.

(Refer Slide Time: 56:53)

5) Distinguish between inchannel and common channel signaling techniques used in a
circuit switched network.

The inchannel signaling technique uses the same band of frequencies used by the voice
signals to transmit control signals; that means the control signals can reach wherever the
voice can reach. On the other hand, common channel signaling uses a dedicated channel to
transmit control signals, and this dedicated channel is common to a number of voice
channels.

So with this we come to the end of today's lecture. In the next lecture we shall discuss
routing in a packet switched network, thank you.
Data Communications
Prof. A. Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur

Lecture 20
Routing - I

Hello viewers, welcome to today's lecture on routing. In fact this is the first lecture on
routing in a switched communication network.

(Refer Slide Time: 01:03)

Here is the outline of today's talk. First we shall discuss what routing is, in other words we
shall define routing; then we shall consider the various desirable properties of routing, and
as we shall see we have to develop routing algorithms, so we shall consider in detail the
design parameters for developing routing algorithms. Then we shall discuss two routing
algorithms. One is known as fixed routing; in this context we have to discuss some
algorithms for finding the shortest path, such as Dijkstra's shortest path algorithm. Then we
shall discuss another routing algorithm known as flooding.
(Refer Slide Time: 01:10)

So we shall primarily discuss two important routing techniques in this lecture, fixed routing
and flooding, and in the next lecture we shall discuss some more routing algorithms.

(Refer Slide Time: 02:20)

On completion of this lecture the students will be able to understand the need for routing,
what routing is, and the desirable properties of routing. They will understand various
routing algorithms such as fixed routing and flooding, and, as I mentioned, the students will
be able to understand the shortest path algorithm and find the shortest path from a particular
source to a destination in a switched communication network based on this algorithm.

(Refer Slide Time: 03:03)

First let us focus on what routing is and why it is needed. Here a typical schematic diagram
of a switched communication network is given. We have a number of stations A, B, etc. and
a number of nodes. Nodes are essentially switches, and as we know the stations are
connected to nodes, and the nodes in turn may be connected to some other nodes or to other
stations.

Suppose station A wants to send packets to station D. In this particular switched
communication network there is no direct path; in other words, A is not directly connected
to station D, nor are both of them connected to the same node. So, as you can see, from A
the packets can go via several nodes: via node 1, then to node 2, then to node 6, and in this
way the packets from station A can reach station D. This is not the only route; there exist
several alternatives. For example, they can go via 1 to 3, then to 5, then to 6, or via 1, then to
4, then to 6, so you find there are several alternative routes.

The question arises which route is the optimum one. At a particular instant a particular
route may be efficient, in other words it may be better to send through that route, and maybe
at the next moment, because of a node failure or because of congestion, that particular route
may not be usable, so we may have to send through an alternative route. Hence the key
point that I would like to make is that from a particular source station to a destination
station there exist multiple paths, and routing is the mechanism for finding the most
cost-effective path from a source to a destination. So, for a source-destination pair, the
intermediate nodes through which the packets are to be routed are given by the routing
algorithm. As a consequence, routing is one of the most complex and crucial aspects of
packet-switched network design.
(Refer Slide Time: 06:05)

So whenever somebody is designing a packet-switched network, the way routing is done
plays a key or important role in the design; that is the reason we shall devote two lectures to
routing.

Now let us see what are the desirable properties of routing.

(Refer Slide Time: 08:25)

We want the routing algorithm to deliver a packet from a source to a destination. The first
two important properties are correctness and simplicity; these two terms are fairly self
explanatory. By correctness I mean that the packet should be delivered to the correct
destination: if the source is A and the destination is D, it has to be delivered to the proper
destination station. By simplicity I mean that a routing algorithm can be simple or very
complex depending on which one we are using; if it is complex then it will put a large
overhead on the nodes, so it is necessary to make the routing algorithm simple. These two
terms, correctness and simplicity, are quite easy to understand. Then we have the next
important property, that is robustness.

As you will see, in the network there are queues of packets whose sizes keep changing
dynamically with time, and there is also the possibility of node failures. Under conditions of
node failure or congestion in some part of the network, the routing algorithm should be
robust so that it can still deliver packets via some route in the face of failures. Even in the
face of failures or congestion it is necessary that the routing algorithm is able to deliver
packets to the correct destination; that is what is meant by robustness. This property is very
important, because the load keeps changing dynamically and failures also take place
occasionally or frequently depending on the network.

(Refer Slide Time: 09:09)

The next important property, after correctness, simplicity and robustness, is stability. The
algorithm should converge to equilibrium fast in the face of changing conditions in the
network. As I mentioned, the queues of packets may keep changing quickly in the network.
In such a situation, suppose there is congestion in one part of the network; then a particular
station, in fact all the stations, may divert all their packets towards another part of the
network, and because of that congestion may develop in that part of the network. Then those
packets are diverted back to the previous part of the network where there was congestion. In
this way, because of rapidly changing the routes from one congested area to another, the
algorithm may never reach stability, and as a result it may lead to unnecessary delay in
delivering the packets. That is why it is important that the algorithm converges to
equilibrium fast in the face of changing conditions of the network: even when the loads are
changing and even when there is some failure, the routing algorithm should be stable and
should converge to equilibrium quickly. This is an important property of a routing
algorithm.

(Refer Slide Time: 12:44)

Finally come fairness and optimality. What is desired is that each and every station should
have an equal right to get a route for delivering its packets. But for optimal routing it may
be necessary to give some priority to the packets of a particular station; if you give priority,
then fairness is violated. On the other hand, if you give priority to a particular path, for
example to packets which are to be delivered to adjacent nodes, then the long distance
traffic does not get fair treatment. So these two goals are somewhat conflicting; both are
important, so you have to arrive at some trade-off between fairness and optimality. Although
they are conflicting, you can find some mechanism of giving some kind of weightage or
priority without compromising fairness too much.

Then finally we have what is called efficiency. As I mentioned, the overhead of routing
should be as little as possible. If the routing algorithm is very complex then it will put a very
high overhead on the nodes in the network. That is why the routing algorithm should be
simple, so that the overhead is low, the efficiency of routing is high, and the packets are
delivered in an efficient manner through shortest path routes or at a low cost.

Now let us focus on the design parameters.


(Refer Slide Time: 13:18)

Whenever you try to design a routing algorithm you have to take several parameters into
account. First are the performance criteria: what performance criteria will you use for
judging the efficacy or effectiveness of a particular routing algorithm? One performance
parameter is the number of hops. As you have seen, whenever a packet travels through a
network it makes several hops, so the count of how many hops it makes through the
intermediate nodes can be a measure of performance. Another parameter can be
cost.

Suppose you have a node with several outgoing paths, so you have several alternatives, and
the bandwidths of these particular links may be different. If the bandwidth is high then the
cost of sending through that particular path may be smaller, so you will try to send the
packets through a high bandwidth route so that the cost is less. In other words, the cost is
inversely proportional to the bandwidth of a particular link.

Then comes the third important performance criterion, that is delay. Delay usually depends
on the size of the queue: on a particular route, if there is a long queue of packets then the
delay will be longer. Hence delay can be used as a performance criterion, and a measure of
delay can be obtained from the size of the queue, or queue length.

The fourth important criterion is throughput. Throughput is essentially the number of
packets delivered per unit time, and it can be measured and used for judging the
effectiveness of a particular routing algorithm. These are the various performance criteria
that can be used in designing a routing
algorithm.
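
Purely as an illustration of how such criteria might be combined, the small Python sketch
below computes a link cost from the link bandwidth and the current queue length; this
formula and its constants are my own assumptions, not something given in the lecture.

def link_cost(bandwidth_kbps, queue_length, weight=1.0):
    """Illustrative cost: inversely proportional to bandwidth, growing with queue length."""
    return weight / bandwidth_kbps + 0.01 * queue_length

# A high-bandwidth, lightly loaded link gets a lower cost than a slow, congested one
print(link_cost(bandwidth_kbps=1544, queue_length=2))
print(link_cost(bandwidth_kbps=64, queue_length=20))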

The second important parameter is the decision time.


Decision time means when the routing decision for a packet is taken. The decision can be on
a per-packet basis: for each packet it is independently decided what the route for that
particular packet should be. On the other hand it can be on a per-session basis: for example,
in a virtual circuit network, as you know, a virtual circuit is established for a particular
session and all the packets of that session are sent through the same route or path. So the
decision time is important, whether it is per packet or per session, and accordingly you have
to use datagram type packet switching or virtual circuit packet switching.

The third important parameter is the decision place. Here we have to consider who decides
about routing and where it is decided. It may be decided at each node: each node receives a
packet and then decides to which output link that packet has to be forwarded; in that case
we call it distributed routing. Another possibility is that there is a central control node which
holds a routing table and decides about routing; then we call it centralized routing. Or it can
be decided by the originating node, the node where the packet originates, which decides in
which direction the packet will go or what the next node will be; in that case we call it
source routing. So the decision place can be each node, a central node or the originating
node. This is the third important design parameter.

The fourth important design parameter is the network information source. There exist
routing algorithms which do not make use of any information, neither the topology, nor the
offered load, nor the costs of different paths; in that case we call the information source
none. In the second case the algorithm may use some local information: for example, a node
has, say, three outgoing links, and the queue length on each of these links may be used as
the information for routing; in that case we call it local, since the node does not gather
information from its neighbours or from the rest of the network. A third possibility is that a
particular node gathers information from the adjacent nodes, its neighbours, and routes
accordingly. It may also be the nodes along the route: a node, before routing, gathers
information from all the nodes along the route and then decides whether it will forward
along that route or through some other nodes. Or it can be global: in such a case each node
gathers information from all other nodes at regular intervals and accordingly builds a
routing table to decide about routing. Therefore the network information source can vary
from none to all nodes, and depending on that the complexity will obviously also vary.

Finally we have the network information update time. In the first place, as I said, the routing
algorithm has to decide what kind of information it will use, whether none, local, from
neighbours, or global. Whenever it is local, the update can be continuous; if the information
comes from the neighbours or from all other nodes, the update can be periodic; or the update
may happen only when some major load change takes place. That is, only when there is a
major load change or a change in the network topology does the network update its
information and change the routes accordingly. These are the various design parameters
which should be considered in developing routing algorithms.

(Refer Slide Time: 21:06)

A large number of routing strategies have evolved over the years, and some of the important
ones are fixed routing, flooding, random routing, flow based routing, adaptive or dynamic
routing and so on. In this lecture I shall discuss two important routing algorithms, fixed
routing and flooding, and the other algorithms we shall discuss in the next lecture.

First let us focus on fixed routing. In fixed routing a route is selected for each
source-destination pair of nodes in the network. The routes are fixed and they may change
only if there is a change in the topology of the network. That means for a given topology of
the network the routing is fixed; it changes only when there is a change in the topology,
otherwise it remains the same. The question arises: how may fixed routing be implemented?
(Refer Slide Time: 21:38)

There are several alternatives; let us see how it can be done. Here, as you can see, there is a
central routing matrix created based on the least-cost paths, which is stored at a network
control center.

(Refer Slide Time: 22:22)

There is a network control center where this central routing directory is created. Suppose
this is node 1; then, for sending a packet from node 1 to node 6, the directory tells you what
the next node is, that is, to which node the packet has to be forwarded. For example, if the
destination is 6 then the next node is 2. The cost of each link is given here; this cost may be
based on queue length or on the bandwidth of the link, but whatever it may be, some cost is
assigned, and the cost of each of these links is given here (Refer Slide Time: 23:33). The
links are bidirectional, and that is why we can say this is a weighted graph: all the edges
have some weight, and these weights are essentially the costs of the links.

Here you can see, for any pair of nodes, the next node to which a particular packet has to be
forwarded. For example, if the destination is 3 then from 1 the packet is forwarded directly
to 3, because it is directly connected. On the other hand, if the destination is 4 then it will be
forwarded to 2. In this way a big routing directory has to be stored in the network control
center. This centralized approach has one drawback: if the network control center fails then
everything will collapse, that is, routing cannot be done at all; as a consequence it is prone
to failure and it is not a very reliable approach.

Another alternative is to divide this routing directory and create separate directories for
each of these nodes; that means in each node you can keep a small part of this directory.
Previously, as we have seen, there is one big directory, and from this you can create the
directory corresponding to each of the nodes: essentially, for a given destination, the next
node to which the packet has to be forwarded is stored in the directory. That means if the
destination is 2 the next node is 2, and if the destination is 3 then the next node is 3; this is
the directory for node 1, and if the destination is 5 then the packet has to be forwarded to 2.

(Refer Slide Time: 25:06)

As you can see, if 4, 5 or 6 is the destination node, the packets are to be forwarded to node 2
in all the cases; this is quite evident from the topology. So for 4, 5 and 6, from node 1 the
packet will go to node 2 for delivery. In this case the routing tables are used by each
individual node: these routing tables can be developed and kept in each of the nodes, so this
is a distributed routing approach. Each of these tables is kept at the corresponding node, and
as a packet for a particular destination is received, it is forwarded to the next node given in
the table. In this way routing can be done based on the next-node information available in
the routing directory. These are the routing directories to be kept, as sketched below.
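
As a minimal illustration of such a per-node directory (the entries below are hypothetical
and do not correspond to the exact table on the slide), a node's forwarding step is just a
dictionary lookup in Python:

# Hypothetical routing directory kept at node 1: destination -> next node
next_hop_at_node1 = {2: 2, 3: 3, 4: 2, 5: 2, 6: 2}

def forward_from_node1(destination):
    """Return the next node to which a packet for this destination is forwarded."""
    return next_hop_at_node1[destination]

print(forward_from_node1(5))   # packets for the station behind node 5 are sent towards node 2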

The question arises how these routing directories are created. They can be created based on
some cost, for example least cost. As I mentioned, a cost is associated with each link. The
simplest criterion is to choose the minimum-hop route through the network: we can find out
how many hops the packet will take and, based on that, the routing table is created.

(Refer Slide Time: 27:14)

However, the generalization of this is least-cost routing. As I mentioned, the least cost can
be based on several parameters: it can be based on the load, that is, the queue lengths in
different parts of the network, on the bandwidth of different links, or on the distance of
different links. Whatever it may be, a cost can be associated with each link, and
correspondingly a least-cost path can be computed. So, for any pair of attached stations, the
least-cost route through the network is looked for.

The question arises how to find this least-cost route. Several well known algorithms have
been developed to obtain the optimal path; two of them are very popular, Dijkstra's
algorithm and the Bellman-Ford algorithm. These two algorithms are quite similar and their
performance is also similar. What they do is, for a given topology and for a given cost
associated with each link, find a least-cost path from a particular source to all the
destinations; this is done with the help of a least-cost routing algorithm such as Dijkstra's
algorithm for obtaining the optimal path.

Let us see how it is done. It finds the shortest paths from a given source node to all other
nodes in order of increasing path length.
(Refer Slide Time: 29:15)

The algorithm converges under static conditions of topology and link cost. That means
when the topology is fixed and the link cost associated with each link is given, it will
converge to a solution and give you the optimal paths. It has three steps. The first is
initialization: it creates an initial set M containing only the source node s, and for every
other node n (n not equal to s) a cost Dn, which is the cost of the link from the source node
to node n. The next step is to find the neighbouring node not in M that has the least-cost
path from s and include it in M; then the least-cost paths of the remaining nodes are updated.
In this iterative manner the algorithm keeps choosing a node, including it in M and updating
the least-cost paths. Let me illustrate this with the help of an example. Here, as you can see,
we have got a network with a fixed topology and interconnection, and the costs associated
with the different links are given. As I said, in the first iteration M contains only the source
node, and here the source node is 1 (Refer Slide Time: 30:53). This node is connected to
node 2 and node 3, so the cost of the path from 1 to 2 is 2 and the cost from 1 to 3 is also 2;
since from 1 we do not yet know any path to 4, 5 and 6, their costs are given as infinity in
the beginning. This is the state after the first iteration.

In the second iteration node 2 is included in M, because it has the least-cost path among the
available candidates. After including 2 we can now get paths to 4 as well as 6. For 4 the path
is now through 1, 2 and 4, so the cost is 2 plus 4, that is 6; the link costs are added and you
get a cost of 6. Node 5 is not yet reachable, as there is no path from 1 to 5 through 1 or 2.
For 6, as we can see, the path is 1, 2 and 6, so the cost is 2 plus 1, which is equal to 3, as
given here.
(Refer Slide Time: 30:30)

Now in the third iteration 3 is added to the set, and after adding 3, node 5 also becomes
reachable from 1. After including 3 you find that the cost from 1 to 2 remains the same and
the cost from 1 to 3 remains the same, but the cost for 4 changes later, because after 6 is
included there is a path 1, 2, 6, 4 and the cost becomes 5; and for 5 the cost becomes 7
(Refer Slide Time: 33:21) through the path 1, 2, 6, 4 and 5.

Now 6 is added to the list, so the list is 1, 2, 3, 6, and the other costs remain unchanged.
Actually, in the previous iteration the path to 4 should have remained 1, 2, 4; there was a
mistake there (33:46). It is after including 6 that the cost to 4 becomes 5, via the path 1, 2, 6,
4; so the cost to 4 is reduced from 6 to 5 through a longer path, that is, the number of hops
increases, as you can see, but the cost decreases.

For 5 the cost remains the same, 7, via the path 1, 2, 6, 4, 5, and the cost from 1 to 6 also
remains the same. Now 4 is included in the set, so the list is 1, 2, 3, 4, 6; these costs remain
unchanged (Refer Slide Time: 34:36), the path 1, 2, 6, 4, 5 remains unchanged, and finally
we reach the final solution, where everything remains unchanged.

Therefore we find that from node 1, after the final iteration, we get paths to all the
remaining nodes. For 2 the path is 1 to 2 and the cost is 2; for 3 the path is 1 to 3, since a
direct link is there, and the cost is 2; for 4 the cost is 5 through the path 1, 2, 6, 4, so when
we create the routing table a packet with destination 4 has to be forwarded from 1 to 2.
Similarly, for node 5 the cost is 7, and again the packet has to be forwarded to 2, but it will
make several hops, 1 to 2, 2 to 6, 6 to 4, and then ultimately it will go to node 5. Similarly,
for delivering a packet to 6 the path is 1, 2, 6 and the cost is 3. From this you can create the
routing table which I mentioned earlier: whether it is a central routing table or a distributed
routing table, it can be created from this least-cost path algorithm given by Dijkstra. It is
quite an efficient algorithm and it is widely used.
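
To make the iteration concrete, here is a short, self-contained Python sketch of Dijkstra's
algorithm in the form described above (set M, costs Dn, repeated selection and update). The
link costs in the example graph are illustrative assumptions and are not meant to reproduce
the exact figures on the slide.

import heapq

def dijkstra(graph, source):
    """Least-cost paths from source to every node; graph[u] = {v: link_cost}."""
    dist = {node: float("inf") for node in graph}
    prev = {node: None for node in graph}
    dist[source] = 0
    heap = [(0, source)]                    # candidates ordered by current cost
    M = set()                               # nodes whose least-cost path is final
    while heap:
        d, u = heapq.heappop(heap)
        if u in M:
            continue
        M.add(u)                            # include the least-cost node in M
        for v, cost in graph[u].items():    # update the costs of its neighbours
            if d + cost < dist[v]:
                dist[v] = d + cost
                prev[v] = u
                heapq.heappush(heap, (dist[v], v))
    return dist, prev

# Illustrative 6-node graph (the costs are assumptions, not the lecture's exact values)
g = {1: {2: 2, 3: 2}, 2: {1: 2, 4: 4, 6: 1}, 3: {1: 2, 5: 5},
     4: {2: 4, 5: 2, 6: 2}, 5: {3: 5, 4: 2}, 6: {2: 1, 4: 2}}
print(dijkstra(g, 1)[0])   # least cost from node 1 to every other node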

(Refer Slide Time: 36:14)

Now, what are the advantages and disadvantages of fixed routing? As we have seen, fixed
routing is simple, in the sense that for a given topology the costs are given and the least-cost
paths are obtained using Dijkstra's algorithm or the Bellman-Ford algorithm; after the
routing table is created its use is quite simple. It works well in a reliable network with a
stable load. Another important feature is that whether it is virtual circuit or datagram type
packet switching, the fixed routing approach is the same: it does not change, because the
route is decided based on the routing table created by the least-cost routing algorithm.

It has got several disadvantages. The first disadvantage is the lack of flexibility. The lack
of flexibility arises because the network is dynamic in nature. Fixed routing cannot take care of
changing situations and changing load conditions because the routes are fixed. As long as the
topology does not change the route does not change, so it does not react to failure or
network congestion. These are the limitations of this fixed routing algorithm and that is
why fixed routing is not very suitable when we are working in real life. In
real life we have to take care of the dynamic conditions and we have to use
adaptive algorithms.

However, fixed routing is used in some situations where the network is reliable and the
load is stable. Next we have the flooding algorithm. This flooding algorithm has a
very unique property: it does not require any network information, neither the topology
nor the load condition nor the cost of different paths. What it does is, every
incoming packet to a node is sent out on every outgoing line except the one it arrived on.
So it simply forwards the packet on all links except the one through which it came. Let us see how
it happens. So let us assume a packet from A has to be delivered to some destination, may
be D or may be C.

(Refer Slide Time: 38:36)

So from A it is sent through both the links, so the packet is received by 2 and 3 through
these links. Now what 2 will do is forward it through these two links, and 3 will
also forward it through these two links, as you can see here (Refer Slide Time: 39:41). Now
4 has received it, 5 has received it and 6 has received it; these three nodes have received packets,
so they will again forward them in all directions. Thus, to deliver a single packet, a large number
of packets are generated, and this is a very alarming situation. So we find that this technique
has some important characteristics.

(Refer Slide Time: 40:18)


All possible routes between source and destination are tried. A packet will always get
through if a path exists. The flooding algorithm has some important characteristics. The
first important characteristic is that all possible routes from source to destination are tried,
because the packet visits all the nodes and explores all the
paths, and as a result it will always get through if a path exists; that means it is very
robust. Under any condition, failure or congestion, a packet will be delivered, it is
guaranteed, and that is why it is very robust and reliable.

As all routes are tried, at least one packet will pass through the shortest route. Not only will it
deliver a packet to the destination, but possibly multiple packets will be delivered, and
one will be delivered through the shortest path. So the first packet that the destination receives
has come through the shortest path, with minimum delay. This is another important feature:
flooding can also be used when somebody is trying to find out the least-cost path from source to
destination under dynamic conditions.

The third important feature is that all nodes directly or indirectly connected are visited. Because
of these important characteristics this flooding algorithm is very useful; however, it has got
several disadvantages. The main limitation is that flooding generates a vast number of duplicate
packets. As we have seen, to deliver a single packet many copies are made,
and if this is not contained the number of such duplicates increases alarmingly, in an
unbounded manner, so a suitable damping mechanism must be used.

If we use flooding, the network will receive a large number of packets which may
make it congested, and we don't want that. So, to overcome this, we have to
take some measures such that the number of packets does not increase in an unbounded
manner, does not increase alarmingly.
(Refer Slide Time: 42:07)

Let us see how it can be done. One important technique that is used is known as
hop count. In this technique a counter is
contained in the packet header and is decremented at each hop. That means the header
is initialized with some value. You may ask: with what value should it be set? You can
set some number, but if no number is known then usually the full diameter of the subnet
can be used. Essentially it is a worst-case guess.

(Refer Slide Time: 43:09)

So, if no other value is known and the counter is set with some worst-case guess, then
after each hop the count value is decremented by 1, and whenever it becomes 0 that
particular packet is discarded; it is no longer forwarded, and as a result the number of
packets will not increase alarmingly. This is a very important feature. This hop count
helps us to minimize the number of packets under flooding. Another technique
that can be used is to keep track of the packets which are responsible for flooding using a
sequence number. That means the sequence number keeps track of which packets have already
flooded through the node. So whenever a packet comes a second time it is not forwarded; the node
avoids sending that packet the second time it arrives. The first time it is
forwarded and the sequence number information is recorded, and whenever it comes
again it is not forwarded. This is another important technique that can be used to reduce
the number of packets whenever you are using flooding, as the sketch below illustrates.
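
As a rough sketch of how these two damping measures might be combined, the fragment below forwards a packet on every link except the one it arrived on, but only while its hop count is positive and only if its (source, sequence number) pair has not been seen before. The Node and Link classes and the packet fields are invented purely for illustration.

    class Link:
        def __init__(self, name):
            self.name = name
        def send(self, packet):
            print('forwarding on', self.name, packet)

    class Node:
        def __init__(self, links):
            self.links = links       # outgoing links of this node
            self.seen = set()        # (source, seq) pairs already forwarded once

    def flood(node, packet, arrival_link):
        # Damping measure 1: hop count carried in the packet header, decremented
        # at each hop; the source initializes it to a worst-case guess such as
        # the diameter of the subnet.
        packet['hops'] -= 1
        if packet['hops'] <= 0:
            return                   # hop count exhausted: discard the packet
        # Damping measure 2: remember (source, sequence number) pairs already seen
        key = (packet['source'], packet['seq'])
        if key in node.seen:
            return                   # duplicate: do not forward it a second time
        node.seen.add(key)
        # Forward on every outgoing link except the one the packet arrived on
        for link in node.links:
            if link is not arrival_link:
                link.send(dict(packet))

    a, b, c = Link('to-A'), Link('to-B'), Link('to-C')
    node = Node([a, b, c])
    flood(node, {'source': 'A', 'seq': 1, 'hops': 4}, arrival_link=a)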

Now there is another approach, which is in some way a modification of flooding, known
as selective flooding. It is a variation of the flooding technique. Here the routers do
not send every incoming packet out on every line, but only on those lines that go
approximately in the direction of the destination. In the example we have discussed we have
seen that a packet is forwarded in all directions except the one through which it received the
packet.

(Refer Slide Time: 45:25)

Now it may get forwarded in the opposite direction to the destination, and that is what is
restricted in the selective flooding technique. There, it is known in which
direction the destination exists. Some weightage can be given for forwarding the packet in
a particular direction, and that weightage can be used to decide whether the packet is going towards
the destination or in the opposite direction. Thus selective flooding also
restricts the number of packets in the network.

Hence we find that flooding has some important properties and some limitations, but it also has
some important uses because of its advantages. First of all, flooding is highly robust
and can be used to send emergency messages. For example, during a war
there is a possibility that some of the nodes will be destroyed by the enemy, some links may
be destroyed, or the network topology or the load keeps on
changing rapidly, but some emergency message still has to be sent to a particular destination;
in such a case the flooding technique can be used, hence in military applications it is very
useful. Then, it can be used to initially set up the route in a virtual circuit.

As we know, in a virtual circuit a route is initially set up. The question arises: how will it be
set up? It can take the help of the flooding algorithm to find out the best possible route. So,
when the destination node receives the first packet, which arrives with minimum delay, it sends an
acknowledgement with information about the route through which the packet has come,
and that can be used to set up the virtual circuit. Subsequently all other packets can be
sent through the same route. And, as I already mentioned, flooding always chooses the
shortest path, since it explores every possible path in parallel. This is another important
advantage and that is the reason it is so useful.

It can be useful for the dissemination of important information to all nodes. There are
some situations where you have to broadcast a particular message. All the fixed routing
algorithms essentially work from a source to a destination; they are not for multicasting or
for broadcast. But flooding can be used for broadcasting. Suppose you have to upgrade
some configuration or pass on some information to all the nodes in the network; in such a
case flooding can be used. So we find that, in spite of the disadvantage of flooding
that it increases the load in the network, it has many uses because of its robustness
and other advantages.

(Refer Slide Time: 46:48)

Thus in this lecture we have discussed why routing is important, what are the important
features that are desired from routing, what parameters are used in routing, and we
have also discussed two important routing techniques, that is, fixed routing and flooding.
Now let us consider some review questions.
(Refer Slide Time: 50:18)

1) Why is routing important in a packet-switched network?
2) What are the primary conditions that affect routing?
3) What is flooding? Why is the flooding technique not commonly used for routing?
4) In what situation is flooding most appropriate? How can the drawbacks of flooding be
minimized?

These questions will be answered in the next lecture.

(Refer Slide Time: 50:51)


Now it is time to give the answers to the questions of the previous lecture.

1) How is the drawback of circuit switching overcome in message switching?


Message switching is based on the store and forward technique. Instead of establishing a
dedicated path, the message is sent to the nearest node which is directly connected. Each
node stores the message, checks for errors and forwards it. It allows more devices to share
the network bandwidth and one message can be sent to several users. The destination host
need not be on at the time of sending the message. So we find that, in the case of circuit switching,
which we use in the telephone network, unless the destination telephone is free and
the person who will receive the call is awake or available in the house, you cannot set
up a link and talk over the telephone.

On the other hand, if you want to send, say, an email, even if somebody is sleeping at that moment
you can still send the email; that is the difference between circuit switching and
message switching. So, circuit switching, first of all, does not make full utilization of the
bandwidth, and a dedicated path has to be set up before sending any message, which is
not necessary in message switching. In message switching, which is based on store and
forward, even when the destination node is not on, or the person using that particular node is not awake, the
packets can be forwarded and they will be delivered. That is how the
drawbacks of circuit switching are overcome in message switching.

(Refer Slide Time: 53:05)

2) What is the drawback of message switching? How is it overcome in packet switching?

In message switching, large storage space is required at each node to buffer the complete
message blocks. On the other hand, in packet switching, messages are divided into smaller units
of equal length, which are generated in the source node and reassembled to get back the
complete message at the destination node. Moreover, to transmit a message of large
size the link is kept busy for a long time, leading to an increase in delay for other messages.
As we have discussed in the last lecture, message switching has a number of
drawbacks. First of all it monopolizes the storage; it blocks a large storage area because
messages are of large size. Moreover, whenever a message is sent it takes a very long time, and as a
result it monopolizes the link, and also the possibility of error increases because the length
of the message is long. On the other hand, whenever you are doing packet switching,
messages are divided into smaller sizes, and as a consequence it does not require
large storage, it does not monopolize a link, and the possibility of erroneous
transmission reduces because packets are of smaller size.

(Refer Slide Time: 54:38)

3) What are the key differences between datagram and virtual circuit packet switching?

One point that I would like to make is that both datagram and virtual circuit packet switching
are based on the store and forward approach. However, there is a difference between the two.
In the datagram approach the packets are routed independently and they might follow different routes,
reaching the destination in a different order. That means each packet is treated independently.
So what can happen is that, as in a postal system, the different packets may go
through different routes and may be delivered out of order.

On the other hand, in virtual circuit packet switching a virtual connection is first
established, maybe with the help of flooding, and all the packets are sent serially through the same
path. In this case packets are received in order. So virtual circuit packet switching is
also a store and forward approach, but the difference is that the packets are sent through the same
path and as a result they are delivered in order.
(Refer Slide Time: 55:55)

4) Distinguish between circuit switching and virtual circuit packet switching.

In circuit switching a dedicated path is established, data transmission is fast and
interactive, and nodes need not have a storage facility; however, there is a call setup delay. In
an overload condition it may block the call setup; it has fixed bandwidth from source to
destination and no overhead after the call setup, so there is no extra overhead except the
propagation time.

In virtual circuit packet switching there is no dedicated path; it requires a storage facility
and involves packet transmission delay, because you are storing the packet and then forwarding it,
so there is a packet transmission delay. It can use different speeds of transmission and
encoding techniques on different segments of the route, which cannot be done in circuit
switching.
(Refer Slide Time: 56:52)

5) How does packet size affect the transmission time in a packet switching network?

As I mentioned, initially the transmission time decreases as the packet size is reduced, but as the
packet size is reduced further and the payload part of the packet becomes comparable to the
control part, the transmission time increases. That means if the header size becomes
comparable to the payload size then the transmission time increases. This I have
discussed in detail in the last lecture. So, friends, with this we come to the end of today's
lecture, thank you.
Data Communications
Prof. A. Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur

Lecture -21
Routing-II
Hello and welcome to the second lecture on routing in packet switched networks. In the
first lecture we discussed the basic issues related to routing in packet switched
networks, particularly why routing is needed, what parameters are to be considered
when you design routing algorithms, and the various constraints. And we discussed a few
important routing techniques such as fixed routing and flooding. So in this lecture we
shall continue our discussion on routing techniques, and here is the outline of today's
lecture.

(Refer Slide Time: 1:45)

So we shall start with random routing and then we shall consider flow-based routing,
which is again one kind of fixed or static routing technique. On the other hand, Distance
Vector routing and link state routing belong to the category of adaptive routing.
We shall also discuss multicast routing. Finally we shall consider the routing
algorithms used in Arpanet as a case study. Arpanet, as you know, is the foundation of
internet technology. Various approaches, tools and techniques were developed in Arpanet,
which is nothing but the Advanced Research Projects Agency Network. Here various techniques were
developed, and we shall see how the routing algorithms evolved as a part of
Arpanet. Here is what the students will learn.
(Refer Slide Time: 3:00)

On completion the student will be able to understand the function and utility of the
following routing algorithms: random routing, flow-based routing, Distance Vector routing,
link state routing and multicast routing, and also they will understand how routing
algorithms evolved in Arpanet. Let us start with the various routing techniques. As I
mentioned, we have already discussed fixed routing and flooding, and in this lecture we
shall consider the other approaches. As I mentioned, a large number of routing
strategies have evolved over the years, and here is the list of the important ones, so we
shall consider these techniques in this lecture.

(Refer Slide Time: 3:48)


Let us start with random routing. This can be considered as a special case of flooding. We
have already discussed flooding and we have seen that flooding leads to too many
packets and also to too much load on the network; that can be somewhat
reduced with the help of random routing, which has the simplicity and robustness of
flooding with far less traffic. So it gives you some benefits of flooding and at the same
time it reduces or overcomes some of the limitations of flooding; that means it reduces the
traffic load, which is the main drawback of flooding.

What is being done is that a node selects only one outgoing path for retransmission of an
incoming packet. Suppose we consider a particular node in a subnet, and suppose this is
the link through which a packet has been received and these are the various outgoing
links (Refer Slide Time: 4:56). As we have seen, in the case of flooding, if a packet is
received here it is transmitted or forwarded on all the links except the one through which it has
been received. But in random routing that is not done. The outgoing link is chosen at
random, excluding the link on which the packet has arrived; that means it is not sent back
on that link, and moreover it is sent on only one of the three remaining paths, chosen at
random.

(Refer Slide Time: 5:40)

Now you may ask how you do it. If you do it at random there is no need for any other
information about the present state of the network, such as how many packets are in the
queue or what the bandwidth of the link is, and so on. Another possibility is that you can
do it in a round-robin fashion. That means, when there is no other criterion, you may
choose one of the links, next time you choose another link, and the next time you again
choose another link; in this way you send through all the paths one after the other, so you
may call it the round-robin technique.
In this case it will distribute the load evenly on different links, but unfortunately this has a
limitation. The limitation is that some of the packets may get forwarded in the direction
of the source rather than the direction of the destination. That can be overcome by using
another refinement, which is to assign a probability to each outgoing link and to
select the link based on that probability. The probability can be computed based on, say, the
data rate. Let us consider it now.

Suppose this is the link through which the packet has come and these are the three outgoing
links. Now let us assume the rates of transmission on these links are R1, R2 and R3; then the
probability Pi of sending through a particular link can be computed as Pi = Ri / (R1 + R2 + R3),
that is, the rate of transmission through that particular link divided by the summation of all the
Ri's.

That means, in this particular case (Refer Slide Time: 7:35), R1 plus R2 plus
R3 will be in the denominator and in the numerator it will be the rate of one of the links.
Based on this, the probability can be calculated, and the link having the higher
probability is the more likely to be used for transmission. That means the packet is
forwarded in a particular direction with a probability calculated from the
bandwidth of that link, so the higher-bandwidth link is preferred for transmission; it
can be done in this way also.

The actual route will typically not be the least-cost route. So whenever you do it in this
manner you cannot guarantee that the route chosen is the least-cost route, because we are not
considering the cost of the different routes in a systematic manner. As a consequence the
packet will not always travel along the least-cost path, and the load on
the network will be higher than the optimum traffic load.

Therefore we find that this random routing does not use much other information; it can be
purely random, or round robin, or, if some information is used, it can be the data
rate of the different links. It has the advantage that the traffic on the network is
significantly lower than with flooding. However, it retains robustness, because by sending
the packet through some path it will ultimately be delivered to the
destination, as in flooding; so it has the robustness and simplicity of flooding but with much
reduced traffic load. So this is random routing; a small sketch follows below.
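
A minimal sketch of this refinement, assuming the only information available is the data rate Ri of each candidate outgoing link: each link gets probability Pi = Ri divided by the sum of the rates, and one link is then drawn at random with those weights. The link names and rates are assumed values.

    import random

    def choose_outgoing_link(link_rates, arrival_link):
        # link_rates: {link_name: data_rate}; exclude the link the packet arrived on
        candidates = {l: r for l, r in link_rates.items() if l != arrival_link}
        total = sum(candidates.values())
        probabilities = {l: r / total for l, r in candidates.items()}  # Pi = Ri / sum(Rj)
        chosen = random.choices(list(candidates), weights=list(candidates.values()), k=1)[0]
        return chosen, probabilities

    # Three outgoing links with assumed rates; 'in' is the arrival link
    print(choose_outgoing_link({'in': 10, 'L1': 2, 'L2': 4, 'L3': 10}, 'in'))

Drawing at random with these weights makes the higher-bandwidth link the most likely, but not the only, choice, which spreads the load roughly in proportion to the link capacities.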
(Refer Slide Time: 9:30)

Then we have another important routing algorithm known as flow-based routing. So far,
in fixed routing, we have seen that only the topology of the network has been taken into
consideration. We have seen that the nodes are connected by different links, and for each
of these links either the queue length, or the delay, or the transmission time, or the actual data
rate is used, but not the information about the load or the flow.

Now what can be done is that, in addition to topology, information about the load can be used
for routing, and a concept of flow is used here. The flow between a pair of nodes is
relatively stable and predictable. That means if we know the traffic in a particular
direction and the data rate of a particular link, then the flow between a pair of
nodes can be calculated very easily.

So, for a given capacity, that means the data rate of a particular link, and a known average flow,
it is possible to compute the mean packet delay using queuing theory. Thus
queuing theory can be used to compute the mean packet delay for each of the links, and
from the mean delays of all the lines it is easy to calculate the flow-weighted average to get
the mean delay for the whole subnet. Once you get the mean delay of the whole subnet,
that can be used for finding out the least-cost paths by using Dijkstra's algorithm.

Therefore, using that, the routing table is again calculated and based on that the minimum
average delay path is chosen, so again it can be considered as fixed or static routing.
However, here we are taking into consideration not only the information about the
topology but also load information, and it is based on flow. So this is another example of fixed
routing that we have discussed here; a small illustration of the delay computation follows.
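
As a hedged illustration, textbook treatments of flow-based routing often model each link as an M/M/1 queue, giving a mean delay of T = 1/(mu*C - lambda), where C is the link capacity in bits per second, lambda the flow on the link in packets per second and 1/mu the mean packet length in bits; the flow-weighted average of these per-link delays then gives the mean delay of the whole subnet. All numbers below are assumed purely for the calculation.

    # Per-link mean delay under the M/M/1 approximation: T = 1 / (mu*C - lambda)
    links = [
        # (capacity_bps, flow_pkts_per_s) -- assumed values
        (20000, 14),
        (20000,  8),
        (10000,  6),
    ]
    mean_packet_bits = 800            # 1/mu, the mean packet length in bits
    mu = 1.0 / mean_packet_bits

    delays = [1.0 / (mu * c - lam) for c, lam in links]
    total_flow = sum(lam for _, lam in links)
    # Flow-weighted average delay over the whole subnet
    mean_delay = sum(lam * t for (_, lam), t in zip(links, delays)) / total_flow
    print(delays, mean_delay)

The per-link delays can serve as link costs for a least-cost computation such as Dijkstra's algorithm, while the subnet-wide mean delay is the figure the designer tries to minimize.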

Now we have seen that the static or fixed routing techniques have a serious limitation,
particularly when the network is not stable. There are two principal conditions that affect
routing decisions. One is failure. When a particular node fails, obviously you cannot
pass a packet through it and you have to reroute the traffic in some other
direction. So when a node or trunk fails, and that means not only a node but also a link or trunk
may fail, it can no longer be used as a part of the route. In this condition obviously
fixed routing will not work, because fixed routing will always use the routing table,
which does not take into consideration the failure of a node or a link.

Another possibility is congestion. We shall discuss congestion in detail in the next lecture,
and we shall see that when a particular portion of the network becomes heavily
congested it is desirable to route packets around the area of congestion. Just as in a normal
road traffic scenario, whenever there is a traffic jam in a
particular area, usually the cars and other vehicles are diverted through some other route.
The same technique has to be used here. It is necessary to reroute the traffic such that it
does not go through the congested area but around it. That is adaptive
routing. So adaptive routing will keep on modifying the routing table, or whatever it
may be, so that the dynamic conditions due to failure and congestion are taken care of.
And the classification of these adaptive algorithms can be done based on the information
source.

(Refer Slide Time: 14:10)

There are three alternatives for the information source. The information source can be
purely local. In such a case, information is gathered from the node
itself. That means it may be the queue length on the different links of a particular node; in
that case it is purely local. That means it shows how many packets have been
accumulated on each of these links. This purely local information can be used
for routing.

Another possibility is that we can use adjacent nodes. That means each node can gather
information from its neighbors, to which it is directly connected, and that
can be used for routing; in such a case information gathering takes place from adjacent
nodes.

Another alternative is that information can be gathered from all nodes. In a packet
switched network there may be many nodes, so each node will gather information from all
other nodes. Hence in such a case it is more global; obviously, when it is more global,
more information has to be gathered, so it will take more time and it will put more load on the
network, because whenever we gather information, exchange of packets will
take place, and that will put some load; also, the node has to do computation to use
the information gathered from all the nodes for the purpose of routing. So classification
can be done based on these three approaches.

Then, for adaptive routing to be possible, network state information must be exchanged
among the nodes. Now you may be asking what kind of metric can be used for the
purpose of routing. One simple approach is to use the number of hops. So, for a particular
source to destination, the number of hops, that is the number of nodes to be
used for relaying or forwarding the packets, can be used as a metric. This
is possibly the simplest metric that can be used.

(Refer Slide Time: 16:42)

However, this particular metric is not commonly used. A second metric that can be used is
the time delay in milliseconds. However,
how do you get information about the time delay? One representative of this time delay
is the queue length: one can use the queue length of each outgoing link
towards a particular node. Another approach is that the delay can be
actually measured and used directly; and a third is the total number of
packets queued along the path. This is a more global approach: here what we are
doing is that the number of packets queued along the path from source to destination is used as a metric
for finding out the route.

Here we find there is a trade-off. The more information exchange you do, that means if the
information is more global, if you gather more information, there is a possibility that the
routing strategy will be better. On the other hand, if it is based on local information or no
information, as is done in the case of flooding and random routing, then the routing strategy
may not be very good. So what is happening is that more information exchange
gives better routing but increases overhead. So we find that it is essentially a trade-off
between better routing and more overhead. That means if you want less overhead then
you cannot get better routing; on the other hand, if you want better routing then the
overhead on the network increases.

Another important parameter is frequency. That means if the exchange is more frequent you can
perform better routing, but there will be more overhead. If the information
gathering takes place more frequently then obviously the routing will be better. On the
other hand, if the information gathering takes place after a large interval, then the routing
that is done based on that information may not be very relevant; by the time the
information is used for routing, the network parameters may have changed. So the more frequent
the better, but it increases the overhead. These are the trade-offs that will be
encountered whenever you design adaptive routing algorithms, and as a consequence they
will be quite complex.

There are two very popular approaches for adaptive routing. One is known as Distance
Vector routing and the other is link state routing. We shall discuss them in detail in
this lecture, but before that let us take a very simple example of an adaptive routing
technique. It is essentially a local approach and that is why it is known as isolated adaptive
routing. In this case a node routes an incoming packet to the outgoing link with the
shortest queue length Q. So, based on the queue lengths on the different links, the routing is
done. The basic idea is that it will balance the load on the outgoing links, so
the chosen outgoing link may not be heading in the right direction. Unfortunately, what
can happen is that, when the choice is based only on the queue lengths of the different links, there is a
possibility that a packet will be headed towards the source rather than the destination.

How do you take that into consideration? To take direction into account, each link
emanating from the node has a bias, so the basic approach is modified and a bias Bi is used for
each destination i. For each arriving packet heading for a node i, the node chooses the
outgoing link that minimizes Q plus Bi, that is, the queue length and the bias added
together; the link which minimizes Q plus Bi is used for forwarding.
(Refer Slide Time: 21:17)

And whenever you do that, a node will tend to send packets in the right direction, with a
concession made for the current traffic delay. That means the Q factor takes into account
the current traffic delay; on the other hand, the Bi, that is the bias, will pull the packet in the
right direction. Let us take an example of that.

Suppose here we have considered a node in this network with the destination at node 5; that
means a packet is going from node 2, which is here (Refer Slide Time: 22:08), towards 5.
So from 2 it is going to 5, and the node 4 in between is shown here.

As you can see, node 4 has got four links, and the queue lengths on the different links, towards
node 2, node 6, node 5 and node 3, are shown here. So the queue length towards 2 is 2,
towards 6 is 1, towards 5 is 4 and towards 3 is 3. On the other hand, there is a bias table.

(Refer Slide Time: 22:05)


As you can see, the bias towards 2 is 7, because that direction goes back towards the source, and that is
why here the bias is more. On the other hand, towards 3 the bias is 6, because
again it is going away, whereas if the packet goes in this direction (Refer Slide
Time: 23:09) the bias is 0, and when it goes in this direction it is going a little off, so the bias is 3.
Therefore, adding queue length and bias: 2 plus 7 is equal to 9, 1 plus 6 is equal to 7, 0 plus 4 is
equal to 4, and 3 plus 3 is equal to 6; so the packet is directly delivered to 5, because although four
packets are queued there, the zero bias makes this the minimum, and it is directly forwarded to 5. So we find that if
only the queue length was used then the packet would have been forwarded towards 6;
that means in this direction, and again it would come back this way, so it would have taken a
longer path, but that is avoided with the help of the bias.

So, because of the zero bias, node 4 forwards the packet directly to 5, although the queue length is
4 there. So we find that by using local information and also a suitable bias it has been
possible to do the routing in an effective manner in this isolated adaptive routing; a small sketch follows.
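
A small sketch of the rule just described: for a packet heading to destination i, the node adds the current queue length Q of each outgoing link to the bias Bi of that link for destination i and picks the link with the smallest sum. The queue lengths and biases below are illustrative stand-ins, not the exact values on the slide.

    def pick_link(queue_len, bias, destination):
        # queue_len: {link: current queue length Q}
        # bias:      {destination: {link: bias B}}
        scores = {link: q + bias[destination][link] for link, q in queue_len.items()}
        return min(scores, key=scores.get), scores

    # Node 4 forwarding a packet towards destination 5 (numbers assumed)
    queue_len = {'to2': 2, 'to6': 1, 'to5': 4, 'to3': 3}
    bias = {5: {'to2': 7, 'to6': 4, 'to5': 0, 'to3': 6}}
    print(pick_link(queue_len, bias, 5))   # 'to5' wins with 4 + 0, despite its longer queue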

Now, as I mentioned, one of the most popular adaptive routing techniques is Distance
Vector routing.

(Refer Slide Time: 24:37)

These are the key characteristics of Distance Vector routing. In Distance Vector routing,
knowledge about the entire network is gathered, but sharing is done only with the neighbors.
The information is shared only with the neighbors, and that is why here we find that a
table is maintained based on the information sharing at regular intervals. These are the
three basic features.

First of all, knowledge about the entire network is gathered, sharing is done only with the
neighbors, and information sharing is performed at regular intervals. Based on that, each
node maintains a routing table having one entry for each node, that is the destination node, with two other fields.
Here (Refer Slide Time: 25:46) the cost
is computed based on the information shared at regular intervals; the information that is
gathered is stored here. And with the help of the preferred next node field, the
next-hop node is mentioned. So, from this table each node gets the information
about the cost and also the next-hop node. And based on this cost information the routing
can be done using the Bellman-Ford algorithm or some other least-cost path
algorithm. Hence, in this way the node maintains a routing table and the minimum-cost
route is used for routing. This is Distance Vector routing. Then comes link
state routing.

(Refer Slide Time: 26:47)

In link state routing the basic steps are: identify the neighboring nodes, measure the delay
or cost to each of its neighbors, and so on. Here, as you can see, in the previous case,
in Distance Vector routing, knowledge about the entire network was gathered and
the distance and cost vectors were maintained. On the other hand, here only the
information about the delay or cost to each of the neighbors is measured; then the node forms a
packet containing all this information and sends the packet to all other nodes. So, in the
previous case information was sent only to the neighbors; on the other hand, here the
information is sent to all the nodes, and obviously this is done by flooding. So here,
although information gathering takes place only from the neighboring nodes, the distribution
of information takes place to all other nodes, and each node then computes the shortest path to every
other node by using Dijkstra's algorithm.

In this case the information is gradually gathered, based on the information received from
all other nodes, and then the routing is done. Thus the basic idea, as you can see, is that it uses
knowledge about the neighborhood and then shares it with all nodes by using flooding. It uses
flooding to send information to all the nodes, and information sharing is performed at
regular intervals, just like in Distance Vector routing.
(Refer Slide Time: 28:30)

This link state packet is sent to all the other nodes. This is the advertiser ID (Refer Slide
Time: 28:55), that is, the node which is sending the packet. As we have seen,
although information gathering takes place only from the neighborhood, the packet is sent to
all other nodes. As a consequence the nodes should know who is doing the advertisement,
that is, who is passing on the information. So the advertiser ID tells each node from where the
information is coming, the network ID is essentially the destination node's address,
the cost for transmission to that destination is given here, and the next-hop node ID
number is given here. All this information, which appears in these four columns, is what is
known as the link state packet, sketched below.
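
As a rough illustration, a link state advertisement carrying the four columns just described could be represented as below; the class and field names are invented, and the sequence number is an extra field commonly carried so that stale copies can be recognized during flooding.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class LinkStateEntry:
        network_id: int      # destination (network ID) being advertised
        cost: float          # advertised cost to reach that destination
        next_hop: int        # next-hop node ID on that path

    @dataclass
    class LinkStatePacket:
        advertiser_id: int   # node that generated and floods this advertisement
        seq: int             # sequence number used to recognize stale or duplicate copies
        entries: List[LinkStateEntry] = field(default_factory=list)

    lsp = LinkStatePacket(advertiser_id=4, seq=17,
                          entries=[LinkStateEntry(5, 2, 5), LinkStateEntry(6, 2, 6)])
    print(lsp)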

The link state packet is broadcast with the help of flooding to all the nodes, and all the
nodes gradually gather information. Initially the picture may be sparse, but as time passes and
more and more information is received from different nodes, each and every node
builds up a kind of database, and it uses that database to find out the shortest
path to every other node: using Dijkstra's algorithm on the links in the database,
the shortest path to every other node is found, which is then used for routing.
So we find that this link state routing is quite powerful, and here let us make a comparison
between link state and Distance Vector routing. Link state algorithms converge
more quickly and are somewhat less prone to routing loops than Distance Vector routing.
(Refer Slide Time: 31:07)

It has been observed that the link state algorithms which I have discussed just now
converge more quickly and are somewhat less prone to routing loops than the Distance
Vector routing. However, link state algorithms require more CPU power and memory
than Distance Vector algorithms. Link state algorithms therefore can be more expensive
to implement and support.

As we have seen, in the case of link state routing the nodes gradually gather
information to build a database, and the database will require a lot of memory; then they
have to do computation on that database. On the database that has been
developed, each and every node will use Dijkstra's algorithm to get the least-cost path to each
destination node. As a consequence it will be a little expensive to implement and
support; however, link state protocols are generally used, particularly as they are more
scalable than Distance Vector routing. Link state routing has been found to be more
scalable compared to Distance Vector routing, and that is why link state routing is
becoming more and more popular.
(Refer Slide Time: 36:05)

So far we have discussed three adaptive algorithms. The first one was isolated adaptive
routing, which uses local information; Distance Vector routing uses
information from the neighbors; and link state routing ends up gathering information from all
the nodes. And we find that these routing decisions are more complex and put more
processing burden on the switching nodes. That means, compared to fixed routing or
flooding or random routing, adaptive routing is more complex and obviously
puts more processing burden on the nodes. Moreover, it depends on
status information that is collected at one place but used at another. So what is happening is that
the information about the network status, that is queue length, delay or whatever it may
be, is gathered at different nodes in the network; however, the decision may have to be taken
elsewhere, for example at other nodes or at the stations or computers, so the information has to be
transmitted there, and that increases the traffic overhead significantly.

If it reacts too quickly to the changing network state it may produce congestion-producing
oscillations. That means, if the adaptive algorithm reacts too quickly, then it may lead to
congestion-producing oscillation, somewhat like this: suppose there is a traffic jam in a
particular part of the city and very quickly all the traffic is diverted to another part of the
city; then that part of the city where the traffic is diverted will have a major traffic jam,
and in this way the traffic will keep hopping between two areas instead of going to the destination.
A somewhat similar kind of situation can arise here, known as a thrashing situation.

This thrashing situation may arise unless the algorithm is carefully implemented, and it has to be
avoided. However, in spite of these drawbacks, adaptive routing is widely used because of
its improved performance, and it can aid in congestion control. Although it is complex, and
although it may lead to congestion-producing oscillation, adaptive routing is still
preferred because it improves the performance of routing and also it can aid in congestion
control.
Congestion is a very important aspect of a packet switched network; congestion has to
be controlled at any cost, and adaptive routing helps in minimizing the possibility
of congestion. Now we shall discuss another type of routing known as multicast
routing.

(Refer Slide Time: 35:22)

So far, what we have done is routing essentially from a source to a destination,
between a pair of stations. But there are situations where a particular node, a particular
station, will send to a group of stations. For example, there
is a video service provider who is giving video on demand to a group of people. In such a
case a particular video, may be a TV channel or a movie, has to go to a particular group of
users and not to a single user, depending on their demand; this is known as multicasting.

So, to do multicast routing, each node computes a spanning tree covering all other nodes
in the subnet. Let us see how it is done. In this case 1, 2, 3 are the sources of
the packets (Refer Slide Time: 37:22); on the other hand, 4, 5, 6 are the destinations.
Or you may consider that one set are the subscribers and the other set are the service providers. So
video signals are going from the three stations 1, 2, 3 to the
destination stations 4, 5, 6; these are the consumers or users. Now let us see how this
spanning tree works.

So here we find a spanning tree: from 1 the packet can go through this path, 1 to node A, to node C,
to node D, and from D it is multicast, that is, transmitted not in one direction but in three
directions, because 4, 5, 6 are the group of stations who want to get the
service. Similarly, from another source it can be 2 or 3, and this path is common (Refer Slide
Time: 38:32); then it will go to 4, 5, 6, so from these two stations the service can be taken by 4,
5, 6, and a spanning tree is formed. So we find that when a packet is sent, the first router
examines this spanning tree and prunes it. Here, as you can see, after node A
receives a packet, maybe related to a video, then instead of sending it to G it prunes that branch; it
simply uses this path and sends the packet along it. The direction in which the packet is sent further is based
on the spanning tree. That means from A it will go to C, and C also will not send it on any
other path but only in this direction. So multicasting is done, and the spanning
tree that is formed decides the route.

Here you can see D will send through all its links, because there are subscribers
connected to E, F and directly to D. This is the example of multicast routing. One very
special situation of this multicast routing is broadcasting. Suppose a particular node
has to broadcast some information to all other nodes; for that the obvious candidate is
flooding. So flooding can be done whenever broadcasting has to be done to all the
nodes.

For example, if this node wants to broadcast something, it will do the flooding and the message will
reach all the stations; in that case the routing algorithm will be simple, there will be
no need to form a spanning tree, and the processing needed at each node will be
smaller. However, we already know that the limitations of flooding will be there, but it is
a very robust technique for broadcasting. That is why flooding is used in many situations
for broadcasting; a small sketch of the pruning step discussed above is given below.
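
A small sketch of the pruning step: starting from the spanning tree rooted at the source, any subtree that contains no member of the multicast group is cut off, so copies of the packet travel only along branches that lead to subscribers. The tree and the group of subscribers below are assumptions for illustration.

    def prune(tree, node, group):
        # tree: {node: [children]} -- a spanning tree rooted at the source
        # Returns the pruned subtree rooted at `node`, or None if no member
        # of the multicast group lies anywhere below it.
        kept_children = []
        for child in tree.get(node, []):
            sub = prune(tree, child, group)
            if sub is not None:
                kept_children.append(sub)
        if node in group or kept_children:
            return (node, kept_children)
        return None                      # the whole subtree is pruned away

    # Source 1, intermediate routers A..G, subscribers {4, 5, 6} (topology assumed)
    tree = {1: ['A'], 'A': ['C', 'G'], 'C': ['D'], 'D': [4, 'E', 'F'],
            'E': [5], 'F': [6], 'G': []}
    print(prune(tree, 1, {4, 5, 6}))     # the branch towards G disappears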

Now we have discussed various routing techniques, both static and dynamic. Now let us
take up a case study. We shall study the Arpanet network and we shall see how the
routing technique has gradually evolved in Arpanet.

(Refer Slide Time: 41:19)

Arpanet is the Advanced Research Projects Agency Network, which is the foundation of the
present-day internet. Many tools and techniques were developed in Arpanet which
are still being used in the present-day context. The first-generation algorithm was based on
Distance Vector routing, which was developed sometime in 1969. So this first-generation
algorithm is Distance Vector routing, and it is distributed adaptive routing: there is
no central controller and all the nodes take part in routing, so it is a distributed adaptive
algorithm using delay as the performance criterion. Each node maintains two vectors, as
you can see: Di = (di1, di2, ..., din), which is the delay vector, essentially the delay information for
sending a packet to nodes 1, 2, ..., n; and Si = (si1, si2, ..., sin), the successor node vector, which gives the
successor node to which a packet has to be forwarded for reaching destinations 1, 2, ..., n.

So here Di is the delay vector and dij is the current estimate of the minimum delay from node
i to node j; that means from node i to nodes 1, 2, ..., n this vector is stored. Similarly, sij is the
next node on the current minimum-delay route from i to j; that means, based on the delay
information, the next-hop node is recorded, and these two vectors are stored in each and
every node. Obviously, to do that some calculation has to be done. Periodically each
node exchanges its delay vector with all of its neighbors.

As we have seen in the Distance Vector algorithm, which we already discussed, exchange of
information takes place with the neighbors but not with all the nodes, and that is what is being done
here. And the interval, as you can notice, is very frequent: an interval of 128
milliseconds. Using the incoming delay vectors, node k updates both of its vectors as follows:
dkj = min over i in A of [ lki + dij ], and skj = the i that minimizes this expression, where A is the set
of neighboring nodes of k and lki is the current estimate of the delay from k to i. The
minimum value is taken as the delay information, which is used to compute the
least-cost path, may be by using the Bellman-Ford algorithm. A small sketch of this
update is given below.
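
In code form, the update that node k performs on receiving its neighbors' delay vectors can be sketched as follows: for every destination j it takes the minimum over its neighbors i of (its own estimated delay to i plus the delay that i advertises towards j), and records the minimizing neighbor as the successor. The numbers in the example call are assumptions.

    def update_vectors(k, neighbors, link_delay, neighbor_vectors, destinations):
        # link_delay[i]         : l_ki, current estimate of delay from k to neighbor i
        # neighbor_vectors[i][j]: d_ij, delay advertised by neighbor i towards node j
        D_k, S_k = {}, {}
        for j in destinations:
            if j == k:
                D_k[j], S_k[j] = 0, k
                continue
            # d_kj = min over i in A of ( l_ki + d_ij );  s_kj = the minimizing i
            best_i = min(neighbors,
                         key=lambda i: link_delay[i] + neighbor_vectors[i][j])
            D_k[j] = link_delay[best_i] + neighbor_vectors[best_i][j]
            S_k[j] = best_i
        return D_k, S_k

    # Toy example: node 1 with neighbors 2 and 3 (all delay values assumed)
    D, S = update_vectors(
        k=1, neighbors=[2, 3],
        link_delay={2: 2, 3: 5},
        neighbor_vectors={2: {1: 2, 2: 0, 3: 4, 4: 1},
                          3: {1: 5, 2: 4, 3: 0, 4: 2}},
        destinations=[1, 2, 3, 4])
    print(D, S)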

Here the estimated link delay is simply the queue length for that link. Suppose this is a
particular node (Refer Slide Time: 44:58) and these are the outgoing links; the queue length
of each of these links is used as the representative of its delay. So, in building the
new routing table, a node will tend to favor outgoing links with shorter queues: on a link
on which the queue length is short, whenever the routing table is made, the node will
tend to favor that outgoing link. Since queue lengths vary rapidly
with time, a thrashing situation may result: by the time the next routing table is formed
and the packet is forwarded, the queue
length has changed, so the routing calculation has to be done again, and a
packet continues to seek out areas of low congestion rather than aiming towards its
destination.
(Refer Slide Time: 46:22)

Suppose this is a part of the subnet. Initially, in this part of the subnet, the packets are
forwarded in this direction because of a smaller queue, so there will be congestion here; and
whenever it is found that the queue length has increased, the packets are forwarded in the other direction, so
there will be congestion there also, instead of the packets going towards the destination.
This behavior is known as thrashing. So this is the problem that arose in the Arpanet first-
generation network. So whenever the second-generation approach was designed, the
drawbacks of the first-generation algorithm were evaluated. The drawbacks are: only
queue length was considered and not the line speed. Of course, when the first-generation
technique was developed, in those days the links were of low bandwidth, low data rate.
However, when the second-generation algorithm was considered, by that time the links
had been upgraded from low rate to high rate, and as a consequence it was necessary to
take the line speed into consideration.
(Refer Slide Time: 47:58)

The queue length is only an artificial measure of the delay; it cannot be a correct
representative of the delay, and the processing time of a packet before placing it in the queue
may itself be variable. That means the queue length may change by the time processing is
done by a node to place a packet in a particular queue, so the algorithm responds slowly to
congestion and delay, and as a consequence the delay increases. These are the limitations
which were identified when the first-generation algorithm was reviewed, and the second-
generation algorithm, proposed in 1979, ten years later, shifted to link state
routing; the Distance Vector routing was modified into link state routing. This
link state routing is also a distributed adaptive algorithm, but one which uses the actual delay
as the performance criterion rather than the artificial delay based on the queue length. So
the delay is measured directly by time-stamping the packets and using positive
acknowledgements.

For example, when a node receives a packet, the incoming time is recorded as a timestamp,
and the time it goes out is also timestamped. The difference between the two is added to the
transmission time and the propagation time of the packet to get the information about the
delay, and that delay is used for the purpose of the computation. Here, as you can see,
compared to the 128-millisecond interval of the first generation, every ten seconds a node computes the average delay
on each outgoing link. As I mentioned earlier, in this link state routing the interval is
significantly increased, from 128 milliseconds to 10 seconds. So, at an interval of 10
seconds the information exchange takes place by flooding. And since the interval is quite
long, this will not affect the performance of the network that much. A rough sketch of the
delay measurement is given below.
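
A rough sketch of the measurement just described, with invented field and function names: the time spent inside the node (departure timestamp minus arrival timestamp) is added to the transmission and propagation times of the packet, and the samples collected on each outgoing link over the 10-second interval are averaged before being flooded.

    def packet_delay(arrival_ts, departure_ts, transmission_time, propagation_time):
        # Actual delay experienced by one packet on this hop, from timestamps
        return (departure_ts - arrival_ts) + transmission_time + propagation_time

    def average_link_delay(samples):
        # samples: per-packet delays collected on one outgoing link during the
        # last 10-second interval; this average is what gets flooded to all nodes
        return sum(samples) / len(samples) if samples else 0.0

    # Two packets on one link, all times in seconds (values assumed)
    print(average_link_delay([packet_delay(0.000, 0.020, 0.004, 0.003),
                              packet_delay(0.010, 0.045, 0.004, 0.003)]))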
(Refer Slide Time: 48:13)

By using this second-generation algorithm, by measuring the actual delay, the performance
was significantly improved. However, when it was again reviewed in the year 1989, it was
found that the problem with the previous approaches was that every node was trying to
obtain the best route for all destinations, and those efforts conflicted.

That means, since all the nodes were trying to forward in the direction of the least-cost
path, this best-route behavior was leading to problems. Whenever the
network is lightly loaded it works fine; however, under heavy load
conditions this routing along the least-cost path direction does not work
so well. So the goal of routing became to give the average route a good path instead of
attempting to give all routes the best path. So, instead of the best paths, good
paths were computed. The basic approach was not changed; however, there was a change in
policy: instead of forwarding towards the best path, packets were rather forwarded towards a good
path, and as a result the portion of the network or subnet along the best path does not get congested.
(Refer Slide Time: 51:52)

So the designers decided that it was unnecessary to change the overall routing algorithm.
It was sufficient to change the function that calculates the link costs. So the link cost
calculation was done in a different way: it was done in a way that damps the routing
oscillations. As I said, the congestion-producing oscillations are stopped here, that is, it damps
the oscillations and reduces the routing overhead. It uses simple concepts from queuing theory;
queuing theory was used to find out the link costs. The revised cost function uses
utilization rather than delay. Earlier, delay was used as the main parameter;
here, however, utilization was considered as the main parameter. So whenever the
load was small the cost behaved essentially like the earlier delay-based routing, somewhat
like this: as long as the utilization is initially small it behaves like the earlier scheme, and then as the utilization increases it goes into this region (Refer Slide Time:
53:10); here, for example, the utilization is 1, that is, normalized to 100%. So whenever the link goes into
this region, the cost, and hence the routing, is driven by the utilization. As a consequence
there was a significant improvement in performance. We have discussed Arpanet and
we have seen how the routing strategy has evolved in Arpanet. Now it is time to consider
the review questions.
(Refer Slide Time: 54:13)

1) In what way is random routing superior to flooding?
2) Why is adaptive routing preferred over static routing?
3) What are the limitations of adaptive routing?
4) Compare and contrast Distance Vector routing with link state routing.
5) In what way does the second-generation Arpanet routing algorithm differ from the first-generation
Arpanet routing algorithm?

These questions will be answered in the next lecture. Here are the answers to the questions
of the previous lecture.

(Refer Slide Time: 54:30)

1) Why is routing important in a packet switched network?

The path to be followed by a packet introduced into a packet switched network is decided
by the routing algorithm. Since a packet will make several hops before it reaches
its destination, routing is very important, and routing tries to find out the least-cost or
optimized path between the source and destination stations. If routing is not done properly,
delay increases and this may lead to congestion.

(Refer Slide Time: 55:15)

2) What are the primary conditions that affect routing?

As we have seen, if there was no change, if everything was static, then static routing would be good
enough. However, there are two important factors that change. One is failure: whenever
a node or trunk fails it can no longer be used as part of the route, so we have to go for a
dynamic algorithm. The second aspect is congestion: whenever a particular portion of
the network becomes heavily congested it is desirable to route packets around the area of
congestion rather than through the congested area. These are the two primary conditions
that affect routing, and the routing strategy has to be changed whenever these two happen.
(Refer Slide Time: 55:56)

3) What is flooding? Why is the flooding technique not commonly used for routing?
As you know, flooding is one of the non-adaptive routing techniques where no network
information is used. In the case of flooding, when a node receives a packet it is retransmitted or
forwarded on all links connected to the node except the link through which the packet
arrived. Flooding is not commonly used for routing for the following reasons: as we
know, flooding leads to an unbounded number of packets, it may lead to congestion in the
network, and a number of copies of the same packet are delivered to the destination node.
These are the limitations.

(Refer Slide Time: 56:46)


4) In what situation is flooding most appropriate? How can the drawbacks of flooding be
minimized?
Flooding is most appropriate in some critical operations, like a military network, because of
its robustness. In the flooding routing technique the packet delivery is guaranteed if a path
exists. The drawbacks of flooding can be minimized in the following two ways. As we
have seen, one check can be done while forwarding a packet: each node should find out whether a
particular packet has already been transmitted and, if so, the second transmission of the
packet should be stopped. The other is hop count: hop count information should be
maintained in each packet, and a packet is not forwarded if the hop count exceeds the
specified limit. With this we come to the end of today's lecture, thank you.
Data Communications
Prof. A. Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur

Lecture – 22
Congestion Control

Hello and welcome to today's lecture on congestion control in packet switched networks.
In the last two lectures we have discussed the routing techniques used in packet
switched networks. After routing, another very important aspect that has to be
considered is congestion in the packet switched network. Here is the outline of
today's talk.

(Refer Slide Time: 1:46)

First we shall discuss why congestion arises at all, then the
common causes of congestion, or in other words the sources of congestion, and we will
also discuss the effects of congestion on the packet switched network
when congestion takes place. Then we shall discuss two basic approaches for
controlling congestion: one is the open loop technique and the other is the closed loop technique.

Under the open loop congestion control technique we have two important algorithms: one is
known as the Leaky Bucket Algorithm and the other is the Token Bucket Algorithm. We
shall discuss a number of closed loop congestion control techniques such as admission
control, weighted fair queuing related to admission control, resource reservation, which is
known as RSVP, then the use of choke packets, load shedding and so on. Finally we
shall close our discussion by considering a comparison between congestion control and flow
control.
On completion of this lecture the students will be able to explain the causes of
congestion, understand the effects of congestion, that is, what happens when congestion takes
place, understand various open loop and closed loop congestion control techniques such as
the Leaky Bucket Algorithm, the Token Bucket Algorithm and so on, and then they will be able to
distinguish between flow and congestion control. So with this background let us start our
discussion about why congestion arises.

(Refer Slide Time: 3:24)

So here is a schematic diagram of a packet switched network. Let us have a closer look at
a particular node. Here I have considered an enlarged view of node 1. As we can see,
node 1 has got four links, coming either from a host or station or connected to some other
nodes. It also has some storage. How is this storage used? This storage is used to maintain
queues; it can be considered as input queues and output queues which store the packets,
that is, it serves as a packet buffer. So queues of packets are present at each node as shown
here (Refer Slide Time: 4:28), where the packets are buffered before they can be
transmitted through the links.

Now let us assume this particular node has a certain number of packets in its queues.
What happens in the normal situation? In the normal situation a packet which is introduced
into the network gets delivered to the destination if the network is not heavily loaded.
That means all the packets which enter the network from a station, say station A, may be
temporarily stored in this buffer, then sent to a proper output link through some buffer
and transmitted towards the destination.

In the last two lectures we discussed the routing techniques and how a packet is sent
towards the destination; that is the normal situation. But what happens when the traffic
increases suddenly? We know that data communication traffic is bursty in nature. As a
consequence, the load can suddenly increase within a short period of time. What happens
in such a situation? One possibility is that the buffer gets filled up, and whenever there is
no more storage available the packet gets discarded. That means when the buffer becomes
full and there is no empty space in the storage, the packet is discarded; that is one
possibility.

(Refer Slide Time: 6:50)

Another possibility is that whenever there is a big queue at an output link, a particular
packet takes a very long time to reach the front of the queue. That means before
transmission it spends a long time in the buffer, and as a consequence the delay increases
significantly. Whenever the delay increases significantly, the source node, after waiting
for some time, does not get an acknowledgment, and as a consequence it retransmits the
same packets, which in turn further increases the load on the network. When these two
things happen, packets do not get delivered and the delay increases to a large extent,
resulting in congestion. So congestion arises because of heavy load in the network, and
for various other reasons as well.

Once again, to summarize: as packets arrive at a node they are stored in an input buffer.
If packets arrive too fast, because of the bursty nature of traffic, an incoming packet may
find that there is no available buffer space; that is one way a packet can get discarded.

Another point is that even a very large buffer cannot prevent congestion, because of
delay, timeout and retransmissions. As I have mentioned, one can argue that since the
buffer space is insufficient, why not increase the buffer space? But increasing the buffer
size will increase the size of the queue, and the packet at the end of the queue will take a
long time to reach the front of the queue before it gets transmitted. This will lead to
timeouts and retransmissions, which will increase the traffic in the network and, in other
words, contribute towards congestion. Slow processors may also be responsible for
congestion.

Although the link may be of high speed, a slow processor may take a very long time to
process a packet: it has to do buffer management, it has to do some housekeeping, and all
these things take time. If the processor is slow it may take quite some time to do this
processing. As a result a packet may be delayed, leading to congestion.

A low bandwidth line may also lead to congestion. As we know, the network may have
links of various data rates or line capacities, and as a consequence, if the bandwidth of a
particular link is small, even that can lead to congestion because the queue will build up
and packets may not be delivered. These are some of the common causes of congestion,
and as we proceed we shall see how the effect of these causes can be minimized. Now we
are in a position to define congestion.

(Refer Slide Time: 10:25)

If the packet arrival rate exceeds the packet transmission rate, the queue size grows
without bound.

Delay in delivery of packets leads to retransmissions.

Whenever these two things happen the thumb rule says: when the line for which the
packets are queuing becomes more than 80% utilized (its full capacity being 100%), the
queue length grows alarmingly. That means if the utilization of a link increases beyond
80%, the network has become overloaded and we can say that the network is in
congestion. In other words, when too many packets arrive at a port and the performance
of the packet switched network degrades, this is known as congestion.
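Purely as an illustration of this 80% thumb rule (the smoothing method and the threshold value below are my own assumptions for the sketch, not part of any standard), a node could track the utilization of each output line and flag it once it crosses that mark:

    # Hedged sketch: flag an output line as overloaded when its smoothed
    # utilization crosses a threshold (the 80% thumb rule mentioned above).
    def smoothed_utilization(previous, busy_time, interval, alpha=0.5):
        """Blend the old estimate with the fraction of time the line was busy."""
        instantaneous = busy_time / interval
        return alpha * previous + (1 - alpha) * instantaneous

    def line_overloaded(utilization, threshold=0.8):
        """Beyond roughly 80% utilization the queue length grows alarmingly."""
        return utilization > threshold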

So we can say that because of overload the delay increases, the network is not able to
handle the packets it has received, and as a result the packets do not get delivered to their
destination. This is called congestion. Now let us see what its effects are.

As you can see, this greenish curve corresponds to the uncontrolled case, meaning that no
control measure has been taken for overcoming congestion. In such a situation, initially,
as the offered load or the number of packets introduced into the network increases, all the
packets get delivered; that means the throughput rises linearly. That is, if the offered load
is 0.2 of the total capacity then the throughput is also 0.2, so it rises linearly. But as you
can see, when it crosses the 0.6 to 0.8 mark there is delay, and that delay leads to
retransmission; as a result the rate of increase of throughput falls at first, then the
throughput suddenly drops even though the offered load increases, and at a certain point
it becomes 0, which is known as the thrashing situation.

In the thrashing situation the stations keep introducing packets into the network but not a
single one is delivered, because of long delays, retransmissions and various other
problems; that situation is known as thrashing, and it occurs when the throughput becomes
0. Ideally the network should follow this curve (Refer Slide Time: 12:45) if there is no
congestion, but because of congestion it behaves in this manner. By taking suitable
congestion control approaches the congestion can be controlled, and in that case the
behavior of the network will be somewhat like this.

As you can see, the throughput is less than the ideal curve. There is some overhead for
implementing congestion control, because some overhead packets are also transmitted,
resulting in a decrease in throughput; but it will not drop like the uncontrolled network.
As a result, although the throughput is less, it will never reach the thrashing situation.
This is the effect of congestion on throughput in both the controlled and uncontrolled
situations, as discussed.

Now let us see the other parameter, known as delay. Delay is a very important parameter,
and as you can see, in the ideal situation, as long as the offered load is within the capacity
of the network, the delay is very small and is decided only by the propagation time.
(Refer Slide Time: 14:18)

The propagation time and the transmission time of the packet are the only delays; there is
no other delay in the network. However, whenever the traffic exceeds the capacity of the
network the delay increases sharply, because packets start getting buffered once the load
reaches around 0.8 of the capacity or so.

In the uncontrolled case, as you can see, the delay can be very high compared to the ideal
situation. On the other hand, with control, although the overall delay increases, the
network can ultimately sustain a much higher offered load compared to the uncontrolled
situation. So we see how the delay parameter is affected in all three conditions: the ideal
case, the uncontrolled case, and the case where congestion control measures are taken.
This curve shows how delay is affected.

Therefore we have seen two important effects of congestion, one on throughput and
another on delay. Now let us see what techniques we can use to control congestion.
(Refer Slide Time: 15:49)

Congestion control refers to the mechanisms and techniques used to control congestion
and to keep the traffic below the capacity of the network; that is its basic objective. So the
basic objective is to keep the traffic within a certain limit such that congestion does not
occur, or, whenever congestion does occur, to overcome it. Congestion control techniques
can basically be categorized into two types: open loop congestion control and closed loop
congestion control.

You may be familiar with control theory; this classification can be explained in terms of
control theory, where there are two basic techniques: open loop control and closed loop
control.

Open loop congestion control can be further divided into two types: one is based on the
source and the other on the destination. That is, the division is by who does the control: it
can be done by the source, meaning the source will try to control congestion by
introducing traffic in a controlled manner, or it can be implemented at the destination,
where the destination takes suitable measures and informs the source so that the traffic is
reduced. So the two sub-types under the open loop congestion control technique are
decided by who controls.

Then under the category of closed loop we again have two different types. As we know,
in the closed loop case there is some kind of feedback: the network is monitored and
checked to see whether congestion has taken place or not, which is not done in the open
loop case. In the closed loop case there is some feedback from the system, some way of
checking the status of the network, whether it is congested, how congested it is, and so
on. In such a situation there can be two different mechanisms: one is explicit feedback,
the other is implicit feedback.
In the case of explicit feedback, some of the nodes, usually the switches, detect congestion
and then inform the source, that is, whoever controls the traffic. That means the feedback
goes to the source or sources of the packets. This is explicit feedback. The other
mechanism is based on implicit feedback.

Implicit feedback is based on the fact that whenever a packet is transmitted into the
network the source may wait for the acknowledgment, and by monitoring the delay in
receiving the acknowledgment the source may decide whether the network is congested
or not. If the delay is small, essentially just the propagation time plus the transmission
time, then the source may conclude that the network is not congested. On the other hand,
if the delay is very high, the source or station gets some implicit feedback; that is, the
acknowledgment acts as an implicit feedback which can be used for controlling
congestion. So based on this, closed loop control can be divided into these two types.

Now, the basic objective of open loop congestion control is to adopt suitable policies such
that congestion does not take place in the first place, that is, to prevent congestion. As you
know, prevention is better than cure, and this is the philosophy adopted in open loop
congestion control.

(Refer Slide Time: 20:18)


For that purpose suitable policies can be adopted for various functions such as flow
control. Various techniques are used for flow control, such as stop-and-wait, go-back-N
or the sliding-window protocol. Whenever the sliding-window protocol is used, the number
of packets in the network will depend on the window size. If the window size is restricted
to small numbers such as 3 or 7, then the traffic on the network will be small. On the
other hand, if a large window size is used, the number of packets in the network before
acknowledgments are received will be high. Hence flow control may help in preventing
congestion: by adopting suitable flow control policies, congestion can be prevented.

Then we have the acknowledgment policy. As you know, a destination node can send an
explicit acknowledgment packet which will reach the source station. Another possibility,
whenever full-duplex communication is going on and there is traffic in both directions, is
that the destination node can send the acknowledgment in the form of piggybacking; we
have already discussed the piggybacking approach. Thus if the piggybacking approach is
used for acknowledgment, the amount of traffic in the network is reduced, which in turn
helps in preventing congestion.

Then we have the retransmission policy, which essentially concerns the timeout interval.
If the timeout interval is small, then many retransmissions will take place before the
acknowledgment is received. On the other hand, if the timeout interval is longer, there is
a good possibility that the acknowledgment is received before the timeout takes place, so
retransmission will not happen. That is why the retransmission policy, particularly the
timeout interval, can help in preventing congestion.

Now let us see the caching policy. We have seen that there are two approaches.

Whenever the automatic repeat request technique is used for error control, we can use
either go-back-N ARQ or selective-repeat ARQ. Whenever selective repeat is used, we
have to do some kind of caching (buffering) of the frames, and as a result we do not have
to retransmit packets which have already been received correctly by the destination. So
this caching also helps in reducing the number of packets in the network. Then we have
packet discarding: the discard policy will also influence congestion; we shall discuss this
later on.

Then there is the routing policy. We have discussed the routing techniques; for example,
flooding increases the number of packets in the network, and the traffic grows in an
unbounded manner. So if flooding is used for routing it may lead to congestion. Therefore,
by adopting a suitable routing policy, congestion can be prevented.
(Refer Slide Time: 24:14)

Now let us consider two important techniques of open loop control. One is the Leaky
Bucket Algorithm and the other is the Token Bucket Algorithm.

(Refer Slide Time: 24:17)

Here what is being done is some kind of traffic shaping, so that the network is not
congested. Let us see how it is done. This is the basic philosophy: you can see that the
bucket is getting filled up by a bursty flow at the input, but there is a fixed flow at the
output. Whenever the bucket gets completely filled up (Refer Slide Time: 24:40) the
overflow leads to what is known as packet discarding. So, for example, suppose the
source node generates packets at the rate of 10 Mbps for 2 seconds, so the input is bursty
in nature; what the leaky bucket will do is smooth this out and send it at the rate of
2 Mbps for 10 seconds. Thus for 10 seconds the data is sent at a uniform rate, and as a
result the traffic is at a fixed rate throughout the 10 second period. This is what is being
done by this approach.

(Refer Slide Time: 25:47)

So what it does is shape bursty traffic into fixed-rate traffic; however, it has the
disadvantage that packets are dropped when the bucket is full. Another important
drawback of this technique is that even when the network is not congested and there are
few packets in the network, it still restricts the output in the same manner. So whether the
network is congested or not, whether there are many packets or only a few, the packets
are introduced at the same uniform rate, which is not necessary. When there are few
packets in the network the packets could be introduced at a higher rate, but that is
precisely what the Leaky Bucket Algorithm does not allow. So even though the input is
bursty in nature, as we have already discussed, the leaky bucket output is generated at a
uniform rate from the beginning to the end, and this is its limitation. This limitation can
be overcome by using the Token Bucket Algorithm, as we shall see.
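To make this concrete, here is a minimal Python sketch of the leaky bucket idea described above; the class name, the packet-counted bucket and the per-tick leak rate are my own illustrative assumptions, not code from the lecture.

    from collections import deque

    class LeakyBucket:
        """Bursty packets enter a finite bucket; packets leave at a fixed rate.
        Arrivals that find the bucket full are discarded, as in the figure."""

        def __init__(self, capacity, leak_rate):
            self.capacity = capacity    # maximum number of packets the bucket can hold
            self.leak_rate = leak_rate  # packets transmitted per clock tick (fixed output rate)
            self.bucket = deque()

        def arrive(self, packet):
            """Called when a packet arrives; returns False if it had to be dropped."""
            if len(self.bucket) < self.capacity:
                self.bucket.append(packet)
                return True
            return False                # bucket full: the packet is discarded

        def tick(self):
            """Called once per tick: drain at most leak_rate packets at the uniform rate."""
            drained = []
            for _ in range(min(self.leak_rate, len(self.bucket))):
                drained.append(self.bucket.popleft())
            return drained

With the numbers of the example above, a 10 Mbps burst lasting 2 seconds would leave such a bucket as a steady 2 Mbps flow spread over 10 seconds, provided the bucket is large enough to hold the burst.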
(Refer Slide Time: 28:35)

As I already mentioned, the leaky bucket enforces a rigid output pattern even when the
traffic in the network is small. Now let us see what is done in the Token Bucket Algorithm.

In the Token Bucket Algorithm it is done in a different way. Here you have some kind of
a counter, and at each tick, say every delta t, a token is added to the bucket. Whenever
packets come, if enough tokens have accumulated, the traffic is allowed to be sent at the
rate at which it arrived.

After all the tokens are exhausted, it introduces the packets at the rate of one token per
tick; that means it then behaves the same as the Leaky Bucket Algorithm. Let us consider
the same example. Suppose data has arrived at the rate of 10 Mbps for two seconds
because of the bursty nature of the traffic. In the case of the token bucket, for about one
second it may send at the full rate of 10 Mbps, assuming enough tokens were stored, and
after that it sends the remaining data at the uniform rate for five more seconds, which
means in about six seconds all the packets are transmitted.

Therefore, as you can see, initially there were accumulated tokens and because of that the
data is transmitted at the rate at which it has been introduced into the network, and after
that it is transmitted at the uniform rate. Both the Leaky Bucket and Token Bucket
Algorithms can be implemented by the operating system or by a network interface card
connected to the network, and as you can see the token bucket is implemented with the
help of a counter which is initialized to 0 in the beginning and incremented at each tick.
(Refer Slide Time: 31:20)

On the other hand, whenever a packet is sent the counter is decremented. In other words,
the counter counts down for each packet sent and counts up for each tick, and in this way
the counter is maintained to implement the Token Bucket Algorithm; it can be
implemented either in hardware or by the operating system of the host.

We have seen that the Token Bucket Algorithm saves up to a maximum of N tokens and
allows a burst of up to N packets in the output stream, thereby giving a faster response.
We have already seen that the time taken to introduce the packets into the network is
smaller in the Token Bucket Algorithm than in the Leaky Bucket Algorithm, so it provides
better throughput compared to the Leaky Bucket Algorithm.
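Again as a hedged sketch (the names and structure below are assumptions, not the lecturer's code), the token bucket counter described above can be written as follows: one token is added per tick up to a maximum of N, and one token is consumed per packet sent, which is exactly what permits a burst of up to N packets.

    class TokenBucket:
        """Counter-based token bucket: counts up once per tick (bounded by N),
        counts down once per packet sent."""

        def __init__(self, max_tokens):
            self.max_tokens = max_tokens  # N: the largest burst the shaper will pass
            self.tokens = 0               # counter initialized to 0 in the beginning

        def tick(self):
            """One clock tick: add a token unless the bucket already holds N."""
            if self.tokens < self.max_tokens:
                self.tokens += 1

        def send(self, waiting_packets):
            """Transmit as many waiting packets as the accumulated tokens allow;
            the rest stay queued until further ticks add more tokens."""
            sendable = min(waiting_packets, self.tokens)
            self.tokens -= sendable
            return sendable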

Now, to make the traffic shaping approaches successful it is necessary to specify the
requirements in a precise manner, known as a flow specification. That means traffic
shaping can be done very effectively if the flow specification and the requirements are
provided by the source node in a proper manner. Let us see what things are to be
specified.

(Refer Slide Time: 32:20)


The first part is the input characteristics: the maximum packet size (in kilobits or
megabits, as the case may be), the token bucket rate (say in megabits per second), which
is the sustained rate at which packets can be sent, the token bucket size (which may be in
megabytes or gigabytes), and the maximum transmission rate. These are the four input
characteristics to be specified. On the other hand, the services required or desired by the
host must also be specified, such as the reliability desired and the delay in delivering a
packet that can be tolerated for a particular application, depending on the nature of the
traffic, whether it is data, voice or video.

Whenever you are sending voice or video, another important parameter is not so much the
delay as the variation of delay, and that has to be specified properly; the variation of delay
is expressed in terms of jitter. Also, it is necessary to specify the bandwidth required for a
particular application: obviously the bandwidth required for sending speech or music is
smaller than the bandwidth required for sending video. These are the desired services
that must be specified so that traffic shaping can be done properly. With this we come to
the end of the open loop congestion control techniques. Now let us focus on the closed
loop congestion control techniques.
(Refer Slide Time: 34:30)


In the closed loop congestion control technique there are three basic steps. The first is to
monitor the system for detection of congestion. Monitoring of the status of the network is
necessary, and usually it is done by the nodes or switches; they monitor and decide based
on the delay, the queue length and various other parameters. The second step is to pass
this information to the proper place for taking action to control congestion; usually this
action has to be taken by the stations or source nodes, so the information has to be passed
on to the stations. The third step is that the source adjusts system parameters to get out of
congestion. Thus the closed loop congestion control techniques are used when the network
is really in congestion, whether it is lightly congested or heavily congested; either way it
will be detected. There are various approaches used for closed loop congestion control.

(Refer Slide Time: 35:22)


First let us consider admission control. This technique is used in virtual circuit networks.
When congestion is detected, no more virtual circuits are allowed to be set up; that is,
once congestion has taken place the network will not allow any more virtual circuits to be
set up. That is the basic approach of admission control. This is one policy.

Another policy is that the virtual circuit is allowed to be set up around the congested part
of the subnet. It may be that the traffic has increased significantly in only one portion of
the network and not throughout. If packets can be sent around the congested area rather
than through it, this too can be done as part of admission control: the virtual circuit is
allowed to be set up around the congested subnet and not through it.

An agreement between a host and the subnet is negotiated when the virtual circuit is set
up, so that resources along the path are guaranteed throughout the session. That means
whenever a virtual circuit is set up, the required bandwidth and buffer space are negotiated
between the source node and the subnet so that congestion is avoided. This particular
approach is known as the admission control approach.

However, this may lead to underutilization of resources. The source node negotiates and
allocates bandwidth and other resources, but because of the bursty nature of traffic these
resources may remain unused; that is one limitation of the admission control approach.
Also, even when resources are actually available, a virtual circuit may not be allowed to
be set up, which again leads to underutilization; but it definitely helps in overcoming
congestion.

The second approach under the closed loop control technique is the use of choke packets.
(Refer Slide Time: 38:22)
This approach can be used both in virtual circuit and in datagram networks. Here each
node monitors the utilization of its output lines. As we already mentioned as a thumb rule,
whenever the utilization increases beyond 80% we may consider that the network is
congested. So whenever the utilization increases beyond some threshold level, the output
line enters a warning state. If the output line of a newly arriving packet is in the warning
state, a control packet called a choke packet is sent from the congested node to the source
station.

That means the node to which the warning-state link is connected sends a choke packet
towards the source node. The original packet is tagged so that it does not generate any
more choke packets. Although a choke packet is sent towards the source node, the node
still forwards the packet towards the destination, so it would be possible for the other
nodes along the path to send choke packets again; to prevent that, the packet is tagged so
that no further choke packets are generated. Then, after receiving a choke packet, the
source station reduces its traffic by a certain percentage.

Suppose after receiving a choke packet the station reduces its traffic by 50%. After
reducing the traffic by fifty percent it waits for a certain duration; if it still receives a
choke packet it further reduces the traffic by 25%, and again, after waiting for some more
time, if it still receives a choke packet it reduces it by another 10%. It goes on in this way,
and at some point it may even have to discard packets. But usually, once this kind of
reduction takes place, the network will come out of congestion and no more choke packets
will be received.

If no more choke packets are received for a certain duration, then the traffic is increased
again, but not at a high rate; it may be increased at the rate of 10%. So the traffic is
increased at the rate of 10%, while it is reduced at a higher rate in the beginning, with the
reduction becoming smaller and smaller. This is how the traffic is controlled by the source
node in response to choke packets, and the system comes out of congestion. This choke
packet approach has a serious drawback: the action taken by a source is voluntary. That
means a source node, after receiving a choke packet, may decide to reduce the traffic or it
may not.
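A hedged sketch of how a source might react to choke packets is given below; the reduction percentages (50%, 25%, 10%) follow the example just described, while the class structure and the 10% recovery step are illustrative assumptions.

    class ChokeReactingSource:
        """Cut the sending rate sharply on each choke packet, by smaller and smaller
        steps, and creep back up slowly when no choke packet arrives for a while."""

        def __init__(self, max_rate_bps):
            self.max_rate = max_rate_bps
            self.rate = max_rate_bps
            self.cuts = [0.50, 0.25, 0.10]   # reductions used in the example above
            self.next_cut = 0

        def on_choke_packet(self):
            """Choke packet received: reduce traffic by the current percentage."""
            cut = self.cuts[min(self.next_cut, len(self.cuts) - 1)]
            self.rate *= (1.0 - cut)
            self.next_cut += 1

        def on_quiet_interval(self):
            """No choke packet for the whole listening interval: increase by about 10%."""
            self.rate = min(self.max_rate, self.rate * 1.10)
            self.next_cut = 0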

(Refer Slide Time: 41:56)

To get around this problem the fair queuing algorithm is used, where the queues are
scanned in a round-robin manner and the packets are then sorted in order of their finishing
time. As you can see, byte-by-byte scanning is done and the finishing time is worked out;
for example, 6 is the finishing time of the packet queued on input line B, 10 is the finishing
time of the packet on input line C, and so on.

After finding out the finishing times, the packets are ordered by finishing time, in this
case B C A D, and then the packets are sent in that order: first B is sent, then C, then A
and then D. This is known as fair queuing, and it is based on the queue contents. Another
possibility is to base the decision on the application. For example, for some applications it
may be necessary to discard a recent packet rather than an old one. If data is being sent, it
is better to discard a recent packet rather than an old packet: the reason is that if an old
packet is discarded instead of the recent one, you again have to retransmit a number of
packets when go-back-N ARQ is used.
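The finishing-time idea can be sketched as follows (the packet lengths in the example are assumptions chosen only to reproduce the B C A D ordering mentioned above): if one byte of each head-of-line packet is scanned per round, a packet of length L finishes after L rounds, and packets are then sent in increasing order of finishing time.

    def fair_queuing_order(head_packet_lengths):
        """Byte-by-byte round-robin fair queuing (simplified sketch).

        head_packet_lengths maps each input line to the length, in bytes, of the
        packet at the head of its queue.  With one byte scanned per line per round,
        that packet's virtual finishing time is simply its length, so transmitting
        in order of finishing time means sorting the lines by packet length.
        """
        return sorted(head_packet_lengths, key=head_packet_lengths.get)

    # Example (lengths assumed): finishing times 6 for B, 10 for C, 14 for A, 20 for D
    # fair_queuing_order({"A": 14, "B": 6, "C": 10, "D": 20})  ->  ['B', 'C', 'A', 'D']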

(Refer Slide Time: 43:50)


If a recent packet is discarded instead, the number of retransmissions will be reduced. On
the other hand, if it is video or voice, it may be necessary to discard an old packet rather
than a recent packet: an old packet is no longer important, and it may already have caused
some kind of damage or jitter in the system. Thus weights can be assigned based on the
application so that a packet discarding policy can be formed, and this leads to weighted
fair queuing, where different weightage is given to different packets based on the
application. Then, as I was mentioning, packets sometimes have to be discarded, and when
other methods fail the nodes can resort to the heavy artillery known as load shedding.

(Refer Slide Time: 46:30)


That means whenever the nodes are not able to handle the packets, because the buffer is
full or the delay is too high, packets can be discarded. One possibility is that a node
drowning in packets may use a random discard policy.

That means it may discard packets at random; however, this random policy is not good.
In many situations some packets are more important than others, as I already mentioned,
so an intelligent discard policy can be adopted based on the application, and the application
must mark the packets with priority classes. In that case, instead of discarding packets at
random, the intelligent discard policy discards packets with lower priority before packets
of higher priority. This approach is known as load shedding. The terminology has been
taken from the electrical distribution system, where load shedding is done whenever the
load on the electrical network is too high; this approach is similar, and here it is
essentially packets that are shed by discarding.
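As a hedged illustration of the difference between the two discard policies (the packet representation and priority encoding below are assumptions), a node that has run out of buffer space might pick its victim like this:

    import random

    def discard_at_random(buffer):
        """Naive load shedding: drop a randomly chosen buffered packet."""
        return buffer.pop(random.randrange(len(buffer)))

    def discard_lowest_priority(buffer):
        """Intelligent load shedding: applications mark packets with priority
        classes; the lowest-priority packet is discarded first (here a packet is
        assumed to be a dict with an integer 'priority', larger meaning more important)."""
        victim = min(range(len(buffer)), key=lambda i: buffer[i]["priority"])
        return buffer.pop(victim)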

Simulation results show that it is better to start discarding packets early rather than wait
until the network is completely clogged up. The question arises as to when packet
discarding should start. Simulation results have confirmed that it is better to start
discarding at the onset of congestion rather than when the network is heavily congested.
That means whenever discarding is performed in the early phase of congestion the network
may come out of congestion very quickly, but when it is already heavily congested the
effect is smaller.

Next we shall discuss another important technique known as the resource reservation
protocol.

(Refer Slide Time: 46:54)


This is particularly important in the context of multicast applications. So far we have
discussed point-to-point data flow: there is a source and there is a destination, traffic goes
from the source to the destination, and it is essentially point-to-point between two nodes.
But there are many applications where the data has to go to multiple destinations or has
to be broadcast. Whenever a packet has to go to multiple destinations we call it
multicasting.

One important application of multicasting is, for example, a station giving a video-on-demand
service to a number of users: a group of people will ask for video services from
particular stations, so the same video may have to go to a number of stations depending
on the requirement. In such a situation this protocol is useful. First of all, it uses multicast
routing based on spanning trees. Let us see how it is done.

Here 1, 2 and 3 (Refer Slide Time: 48:28) are the sources and 4, 5 and 6 are the
destinations. Stations 1, 2 and 3 are giving video services to the destination stations 4, 5
and 6, so each source will form a spanning tree. As we can see here, the spanning tree
from station 1 goes to node A, then to node C, then to node D, and from there it goes to
the three different destinations, so the packet is transmitted on three paths rather than one,
and this spanning tree is based on the least-cost paths.

Thus this is the spanning tree from 1 to 4, 5 and 6; similarly there can be a spanning tree
from 2 to 4, 5 and 6. Now let us see how the resource reservation can be done. Suppose
station 4 wants some service from source 1: it will reserve resources along the path, the
bandwidth and the data rate that are required. After that the same node may decide to
receive another video from another source, so it again reserves bandwidth and buffers in
the different nodes along that path.

At this point another destination may also want to reserve resources along the path. You
can see here (Refer Slide Time: 50:22) that on the link between C and D station 4 has
already reserved resources for two video streams, and at the same time station 6 has
demanded another one. The nodes have to find out whether enough resources exist
between C and D, and only then will destination station 6 be allowed to set up the virtual
circuit. So we find that in this way resources are reserved before the transmission of data
takes place. This is very important particularly in multicasting applications.
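The admission decision just described can be sketched roughly as follows (the function, its parameters and the link naming are assumptions for illustration, not the actual RSVP protocol): a new reservation is accepted only if every link on the spanning-tree path still has enough spare capacity.

    def admit_reservation(path_links, capacity, reserved, demand):
        """Hedged sketch of resource reservation along a multicast path.

        path_links : list of links the new stream would use, e.g. [("C", "D"), ("D", "6")]
        capacity   : dict mapping each link to its total capacity
        reserved   : dict mapping each link to the capacity already reserved
        demand     : capacity requested by the new stream
        The reservation is admitted (and recorded) only if every link can carry it.
        """
        for link in path_links:
            if reserved.get(link, 0) + demand > capacity[link]:
                return False   # e.g. link C-D already carries two reserved video streams
        for link in path_links:
            reserved[link] = reserved.get(link, 0) + demand
        return True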

Another possibility is that it may be necessary to broadcast some information to a number
of destination stations. In such a case one important candidate is flooding, as you know.
Although flooding increases the traffic significantly, when broadcasting has to be done
flooding can be used for the purpose.

Finally we shall discuss the difference between congestion control and flow control.

(Refer Slide time: 51:56)

People are often confused about these two techniques, congestion control and flow control.
We have seen that in both cases a packet is sometimes sent back towards the source node,
either for flow control or, as we have seen with choke packets, for congestion control.
That is why people get confused between the two techniques. But they are quite different.

As you can see, congestion control is a global issue; it is a joint responsibility of the users
and the network. So far as congestion control is concerned, not only all the nodes but also
all the stations are responsible, and congestion occurs because of the combined behavior
of the stations, the switches, the routing policies and so on.
Its job is to ensure that the subnet is able to carry the offered load. Let us take an example
where congestion occurs.

Suppose you have got a thousand nodes and each of them is sending at the rate of 1 Mbps;
even though this is a low rate per station, when all of them start sending at that rate
congestion will occur. On the other hand, flow control is a local issue between a
sender-receiver pair and not a global issue; only the source-destination or sender-receiver
pair is responsible. Its primary function is to ensure that a fast sender does not overwhelm
a slow receiver. For example, flow control is necessary when a server can send at the rate
of 10 gigabytes per second while the destination node, which is a desktop, can receive
only at 1 gigabyte per second; in this case the receiver will get overwhelmed. Even though
there is no heavy traffic in the network, a source node sending at a high rate can
overwhelm a receiver, and in such a situation flow control is performed.

Thus we have discussed the differences between congestion control and flow control.
Now it is time to give you the review questions on this lecture.

(Refer Slide Time: 54:47)

1) What is congestion? Why does congestion occur?

2) What are the two basic mechanisms of congestion control?
3) How is congestion control performed by the Leaky Bucket Algorithm?
4) In what way is the Token Bucket Algorithm superior to the Leaky Bucket Algorithm?
5) What is a choke packet? How is it used for congestion control?

Now let me give you the answers to the questions of lecture 21.

(Refer Slide Time: 55:24)


1) In what way is random routing superior to flooding?

As we have discussed, random routing significantly reduces the number of copies of a
single packet transmitted through the network compared to flooding, while retaining the
same level of simplicity and robustness. That means random routing reduces the traffic
but keeps the characteristic features of flooding, namely simplicity and robustness; that is
why random routing has some advantage over flooding.

(Refer Slide Time: 56:24)

2) Why is adaptive routing preferred over static routing?

We have seen that adaptive routing increases the traffic in the network. However, adaptive
routing is preferred because of some advantages. The first advantage is that it improves
performance and works under changing conditions. What are the changing conditions?
When there is a node or link failure, or there is congestion, adaptive routing helps, and it
also aids congestion control, which fixed routing cannot do.

(Refer Slide Time: 56:55)

3) What are the limitations of adaptive routing?

The routing decision is more complex in adaptive routing, which imposes a greater
processing burden on the switching nodes. As I have already mentioned, the routing
algorithms used in adaptive routing are quite complex, and as a result they impose a
burden on the switching nodes. The traffic overhead is also higher in adaptive routing: we
have seen that adaptive routing requires some additional packets for control purposes,
which is why the traffic increases.

If the adaptive routing algorithm reacts too quickly to the changing network state, it may
produce congestion-driven oscillation in the network. As I have already mentioned, if
adaptation is performed very quickly then the traffic is shifted to another part of the
network, where congestion then occurs, and the traffic is shifted again; the congested area
keeps on changing, and this leads to oscillation. These are the limitations of adaptive
routing.
(Refer Slide Time: 58:15)

4) Compare and contrast distance vector routing with link state routing.

In distance vector routing each node shares its knowledge about the entire network, shares
it only with its neighbors, and shares it at regular intervals. In link state routing, on the
other hand, each node shares knowledge only about its neighborhood, shares it with all
routers, and shares it when there is a change; it converges quickly, requires more CPU
power and memory, and is more scalable.

(Refer Slide Time: 58:40)


5) In what way does the second generation ARPANET routing algorithm differ from the
first generation ARPANET routing algorithm?

As we have seen, in the second generation the delay was actually measured rather than
using the queue length. That was the basic difference between the second generation and
the first generation.
Data Communications
Prof. A. Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture -23
X.25 and Frame Relay

Hello and welcome to today's lecture on X.25 and frame relay. In the last couple of
lectures we discussed packet switching techniques and various issues related to packet
switching such as routing, congestion control, flow control and so on.

In this lecture we shall discuss two important examples of packet-switched networks:
X.25, which is the oldest one, and frame relay. Here is the outline of today's lecture. First
we shall discuss the basic features of X.25, then consider the three layers of X.25, and
then the frame and packet formats of X.25; we will see that it has both because it operates
in the data link layer as well as the network layer.

(Refer Slide Time: 2:20)

Then we shall consider the virtual circuits used in X.25 for data communication and also
the multiplexing used in X.25. After that we shall focus our attention on frame relay, first
introducing the key features of frame relay and why frame relay is used instead of X.25;
then we shall consider the virtual circuits and frame formats used in frame relay. We shall
see that congestion control is very important in the context of frame relay, and we shall
discuss how congestion control is performed in frame relay. Finally we shall conclude our
lecture by comparing X.25 and frame relay.
(Refer Slide Time: 2:59)

On completion of this lecture the students will be able to understand the key features of
X.25, explain the frame format of X.25, understand the function of the packet layer of
X.25, understand the limitations of X.25, explain the key features of frame relay,
understand the frame relay frame format, and explain how congestion control is performed
in the frame relay network. So let us start with X.25. X.25 is a packet-switched network
standard developed by the ITU back in 1976. Of course, subsequently several versions
and editions have come up, enhancing the features of X.25, but it is one of the oldest
standards, developed in the late 70s.

(Refer Slide Time: 4:05)


This particular protocol defines how a packet mode terminal can be interfaced to a packet
network for data communication. Here the user machine is termed the DTE, Data Terminal
Equipment, as we already know (Refer Slide Time: 4:00), and the packet switching node
to which this data terminal equipment is connected is termed the DCE. We are already
familiar with these two terms, and as you can see the DCE is part of the X.25 network.
This specification explains in detail how a DTE working in packet mode can interface
with a DCE and perform packet transmission. It has got three layers: the physical, frame
and packet layers.

(Refer Slide Time: 4:38)

If you compare it with the OSI model, it essentially occupies the three lower layers: the
physical, data link and network layers. The physical layer in X.25 is essentially X.21, the
frame layer uses a subset of HDLC known as LAPB, which we have already mentioned,
and the packet layer uses PLP. These are the three layers; we shall consider their functions
in detail.

Physical layer: as I mentioned, it deals with the physical interface between the attached
station and the link that attaches that station to the packet switching node. This interface
defines the physical, electrical, functional and procedural specifications. X.21 is the
physical layer standard that has been recommended for use with X.25. However, in the
absence of X.21, other standards like RS-232-C, which is analogue in nature, can also be
used, whereas X.21 is a digital interface. So RS-232-C can be used in place of X.21.
(Refer Slide Time: 6:17)

Coming to the frame layer, it facilitates reliable transfer of data across the physical link
by transmitting the data as a sequence of frames. It uses a subset of HDLC known as Link
Access Procedure Balanced (LAPB), which is a bit-oriented protocol like HDLC.

The third layer, the packet layer, is responsible for the end-to-end connection between two
DTEs. The functions performed are establishing a connection, transferring data and
terminating the connection, and it also performs error and flow control, which is important
in the context of the DTEs. With the help of the X.25 packet layer, data is transmitted in
packets over external virtual circuits. We have discussed different types of virtual circuits;
here external virtual circuits are used to perform data communication over the X.25
network.
(Refer Slide Time: 6:45)

(Refer Slide Time: 7:30)

Here the figure shows the X.25 interface. As you can see, this is the DTE, the data
terminal equipment, and here is the data circuit-terminating equipment, the DCE, which is
essentially part of the X.25 network. This interface specifies the X.25 physical interface;
as I mentioned, we can use RS-232-C in the absence of X.21. Then LAPB is used as the
logical interface between the link access layers, which essentially works in the data link
layer, and above it there is a multi-channel logical interface which allows several virtual
circuits to be established for communication of data in the packet layer. Then we have the
user processes with which you can communicate with remote users by using this interface.
So this specifies the X.25 interface, the interface between the DTE and the DCE. Let us
now look at the X.25 frame format.

(Refer Slide Time: 8:33)

Here the user data is used to form a packet in the X.25 packet layer by adding a header
which we may call the layer 3 header. As you can see, the layer 3 header is attached to the
user data to form a packet; the packet is then passed on to the data link layer, and the data
link layer adds a LAPB header and a LAPB trailer. This is how a data link layer frame
looks.

A data link layer frame has the format we have already discussed in the context of HDLC:
it has flags at the beginning and at the end, then the address, the control field and the
information field. As you know, there are three types of frames allowed in HDLC: the
information frame, which is used for communication of user data; the S frame, whose
information field is empty and which is little used in the context of X.25; and the U frame,
which is used to pass on control information, because X.25 uses in-band signaling.
In-band signaling is done with the help of these control frames.

One point you should notice is that this is essentially point-to-point communication, so
you do not really require many addresses; only two addresses are used. As you can see
here, the address 0000 0001 is used for commands issued by the DTE and the responses to
them, while commands issued by the DCE and the responses given by the DTE use the
address 0000 0011. Only these two addresses are used in the context of X.25, because it is
point-to-point and not multipoint communication. However, as we shall see, multiplexing
of several virtual circuits can still be done over this single link; we shall consider this in
detail.
(Refer Slide Time: 10:57)

Here it shows how the protocol operates. As you can see, a set asynchronous balanced
mode (SABM) frame is sent and an unnumbered acknowledgement comes from the other
end for setting up the link; then data transfer can take place: information frames go,
acknowledgements come from the other side, and several such transfers can occur before
the link is released by sending a disconnect frame. Each time, a virtual circuit is created
by using this protocol. The virtual circuits are created at the packet layer; a virtual circuit
identifier known as the Logical Channel Number is used to identify a particular virtual
circuit.

(Refer Slide Time: 12:20)


As you can see, from DTE A (Refer Slide Time: 12:21) several virtual circuits are created:
one goes to B, another to C and another to D simultaneously. This particular link, the DTE
to DCE link, carries three virtual circuits, so through the same physical link you can
create several virtual circuits, which are identified by a Logical Channel Number or LCN.
Thus several virtual circuits can be created through the same link using in-band signaling.
This is the fundamental idea of the virtual circuits used in the packet layer of X.25.

There are two types of virtual circuits used in X.25. One is known as the PVC or
Permanent Virtual Circuit, which is somewhat similar to a leased line in the telephone
network: the line always exists, there is no need to dial to set up the link, so it differs from
a dial-up link. There is always a connection whether you send data or not. Similarly, here
also the Permanent Virtual Circuit is fixed, like a leased line established by the network
provider. Data transfer occurs with virtual calls, packet transfers take place one after the
other, and there is no need for call set up or termination in the case of a Permanent
Virtual Circuit.

(Refer Slide Time: 14:10)

However, in the case of a Switched Virtual Circuit it is necessary to establish the
connection for each data transfer, and the sequence of events to be followed is given here.
First of all, links are set up between the local DTE and DCE and between the remote DTE
and DCE; that is the first thing that is done. Then a virtual circuit is set up between the
local and remote DTEs. After the virtual circuit is set up, the data transfers are performed
between the DTEs, then the virtual circuit is released and finally the links are disconnected.
This is the sequence that has to be followed for each session of data transfer using a
Switched Virtual Circuit in X.25.
(Refer Slide Time: 15:25)

Now let us focus on the different types of packets used in the packet layer. As I mentioned,
the packets can be of different types; broadly they can be divided into two types: data
packets and control packets. Data packets are essentially used for sending the user data;
there is obviously some maximum limit on the size of the user data, and that limit is used
to form the data packets. On the other hand, control packets can be used for various
purposes. One purpose is to perform flow control and error control, which can be done
with the help of packets like RR, which stands for Receive Ready, RNR, which stands for
Receive Not Ready, and REJ, which stands for Reject.

With the help of these three types of packets it is possible to perform flow control and
error control; congestion control, however, is not provided in X.25. Essentially only flow
control and error control are performed.

The other packets are needed essentially for in-band signaling. As I have mentioned, you
have to set up the virtual circuit, perform the data transfer and disconnect the link, and all
of these can be done with the help of these packets. As we know, the user data is broken
into blocks of some maximum size and a 24-bit or 32-bit header is appended to each block
to form a data packet. X.25 uses the sliding-window protocol with piggybacking for flow
control and the Go-back-N protocol for error control.
(Refer Slide Time: 17:48)

We have already discussed these two approaches; now we shall see how they are
implemented in the X.25 network. X.25 also transmits control packets related to the
establishment, maintenance and termination of virtual circuits, as I mentioned. Each
control packet includes the virtual circuit number, the packet type, for example call
request, call accepted, call confirm, interrupt, reset, restart etc., which are the various
control packets that can be exchanged between the two DTEs, and additional control
information specific to that particular type of packet. Here is the packet format of the
X.25 system.

(Refer Slide Time: 18:54)


Here you see the packet formats. As I mentioned, the packets are of three different types:
data packets, control packets and the RR, RNR and REJ packets. As you can see, the data
packet header is either 3 bytes or 4 bytes, depending on whether a 3-bit sequence number
or a 7-bit sequence number is used. Whenever a 7-bit sequence number is used it is
necessary to have a 4 byte header instead of a 3 byte header; with a 3-bit sequence number,
three bits are available for the sequence number and for the acknowledgement in the user
data or control packets. Now let me explain the functions of the different bits.

The Q bit (Refer Slide Time: 20:15) is actually not used by this layer; it is left to the user
to use for some function. The D bit, on the other hand, is related to acknowledgement
from the remote DTE. As we know, acknowledgement, that is, flow control and error
control, can be performed both in the data link layer and in the packet layer; in the data
link layer it is essentially on a node-to-node or hop-by-hop basis, whereas end-to-end flow
and error control is essentially done in the network layer, and that is what is specified by
D. Whenever D is equal to 0, the acknowledgement is essentially local, between two
adjacent nodes or between the local DTE and DCE.

On the other hand, whenever D is equal to 1, the acknowledgement is used for end-to-end
flow and error control; that is what this bit designates. The 0 1 or 1 0 combination is used
to differentiate between the two formats, that is, whether a 3-bit or a 7-bit sequence
number is used. Then there is a 12-bit number comprising two parts, the group number
and the channel number, which together form the Logical Channel Number (LCN); the
Logical Channel Number is used to identify the different virtual circuits.

The PR and PS fields are used for acknowledgement and flow control. PS is used by the
sender as the sequence number of the packet; that sequence number can be from 0 to 7 if
a 3-bit sequence number is used, or from 0 to 127 if a 7-bit sequence number is used, since
the numbering is modulo 8 or modulo 128. PR is used essentially for acknowledgement in
piggybacked form; that is, piggybacked acknowledgement is used with the sliding-window
protocol. The window size can be up to 7 or 127 depending on the number of bits used in
the PR and PS fields. That is also why PS is not present in the RR, RNR and REJ packets.

When it is an RR (Receive Ready) packet, it specifies the number of the packet the receiver
is waiting for. An RNR (Receive Not Ready) packet indicates that the receiver is
temporarily not able to accept further packets, and a REJ packet is used for error control:
it asks for retransmission from the specified number using the Go-back-N ARQ technique.
These are the functions.

On the other hand, when a long sequence of packets is being sent, the end-to-end
acknowledgement can be given either at the end of the sequence or for each packet. It is
usual practice to send an end-to-end acknowledgement after the end of the sequence, and
the sequence boundary is indicated by the M (more) bit: when the bit indicates that the
sequence has ended, the end-to-end acknowledgement is sent, whereas while more packets
of the sequence follow, acknowledgement is handled packet by packet. So we have seen
the functions of the different fields of this packet layer. One point I should emphasize now
is the multiplexing used in X.25; this is one of the most important services provided by
X.25.

(Refer Slide Time: 25:35)

As we have seen, a DTE is allowed to have up to 4095 simultaneous virtual circuits with
other DTEs over the single DTE-DCE link. This is achieved with the help of the 12 bits in
the packet header, the group number and the channel number; with these bits up to 4095
virtual circuits can be created. Each packet contains a 12-bit virtual circuit number
expressed as a 4-bit logical group number plus an 8-bit logical channel number, as I have
mentioned.

Individual virtual circuits could correspond to applications, processes, terminals etc., so
you can assign them per application, process or terminal. The DTE-DCE link provides
full-duplex multiplexing: data communication can take place in both directions, and not
just in one direction, through the same link by using in-band signaling.
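As a small hedged sketch of the arithmetic involved (this is not text from the X.25 recommendation, just an illustration), the 12-bit virtual circuit number can be assembled from the 4-bit group number and the 8-bit channel number like this:

    def make_lcn(group, channel):
        """Combine a 4-bit logical group number and an 8-bit logical channel number
        into the 12-bit Logical Channel Number that identifies a virtual circuit."""
        assert 0 <= group <= 0xF and 0 <= channel <= 0xFF
        return (group << 8) | channel      # 12-bit values in the range 0..4095

    def split_lcn(lcn):
        """Recover the group and channel numbers from a 12-bit LCN."""
        return (lcn >> 8) & 0xF, lcn & 0xFF

Twelve bits give 4096 possible values; the figure of 4095 usable circuits quoted above is consistent with one value being kept reserved.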

So we have discussed in a nutshell the various functions of X.25 and seen how it works.
X.25 was developed back in 1976, when the speed of the telephone network was not very
high; 64 kbps was the standard, which is not a high figure in today's context. Moreover,
the links were very error prone, and as a result it was necessary to have elaborate error
control and flow control mechanisms, which are not really required in the present day
context. This motivated the development of a faster packet switched network known as
frame relay, which we shall discuss now.
(Refer Slide Time: 27:18)

X.25 cannot satisfy present day requirements such as higher data rates at lower cost. For
example, since X.25 provides point-to-point communication, if four stations want to
communicate with each other we have to set up point-to-point links between them; if
there are five nodes which want to communicate with each other, ten different links are
required. This can be avoided in a frame relay network: as you can see, each DTE is
connected to the data communication network and the frame relay network does the rest.

So here it is not point-to-point communication. Here (Refer Slide Time: 28:07) it works like a switched communication network; the nodes can communicate with each other and the network does the switching. As a consequence it is much more efficient. Moreover, the data rate is much higher in the case of frame relay. For example, compared to 64 kbps used in X.25, frame relay uses a minimum of 1.544 Mbps. Nowadays higher data rates can also be used, but that was the minimum value with which it started, so the speed is significantly higher.

Another important point is that X.25 does not support the bursty nature of data. It establishes point-to-point communication and the data flow has to be a stream, that is, continuous. But in the present day context it is necessary to support bursty data, because computer communication is bursty in nature.
(Refer Slide Time: 29:24)

For example, this is the stream data flow and this is the bursty data flow. Frame relay has been designed to support the bursty nature of data. So the data can be sent in bursts; as you can see, within some interval of time, say 8 milliseconds, the data comes in, say, 3 bursts, compared to the stream data that is used in X.25. Frame relay supports this by buffering the data. And as we shall see, if the total volume of data does not exceed some limit then the packets will be delivered without error.

(Refer Slide Time: 31:36)

The third important factor is, as I mentioned, that in the era of X.25 the network was not reliable. The links were error prone and slow, and as a consequence it was necessary to perform flow control and error control not only end-to-end but over each link, as you can see here. The data goes from station A to node 1 and then an acknowledgement comes back, which is used for flow control and error control; if it is correct then the data goes to the next hop and again an acknowledgement is sent from that side. Similarly data goes from node 2 to node 3 and an acknowledgement comes from node 3 to node 2 to perform flow control and error control; then data goes from node 3 to station B and an acknowledgement comes back. The story does not end there.

As we have seen, there should also be end-to-end flow and error control, and that is performed by sending an acknowledgement from the DTE at the other side; the same hop-by-hop procedure is then repeated for this acknowledgement, as you can see. The end-to-end acknowledgement goes hop by hop: the acknowledgement frame goes from node 2 to node 1, the acknowledgement of that comes from node 1 to node 2, and finally the acknowledgement reaches the originating DTE. So this completes the end-to-end flow and error control.

(Refer Slide Time: 32:05)

Therefore you can see how many frames have to be transferred just to send a single packet; it makes the communication very reliable but increases the load on the network significantly. In the era of X.25 this was necessary because the links were not reliable. However, in the context of frame relay this is not necessary. So what was done was that flow and error control were completely removed, both from the network layer and from the data link (frame) layer.

As you can see, the traffic is significantly reduced here: data goes from station A to node 1, then from node 1 to node 2, then from node 2 to node 3, and finally it is delivered to station B. If any flow and error control is necessary, it has to be supported by the upper layers; frame relay does not support it because it is not necessary, the links being much more reliable, and also because the speed is high.
But as we shall see, it will be necessary to perform congestion control because of the higher speed. In frame relay also the virtual circuit is identified by a number, called the Data Link Connection Identifier (DLCI). Here also communication is based on virtual circuit creation, and there are two types of virtual circuits. In a Permanent Virtual Circuit the DLCIs are permanent and are assigned by the network provider. So the network provider provides these DLCIs, and a Permanent Virtual Circuit is created from station A to station B, or DTE A to DTE B; on this particular link (Refer Slide Time: 34:04) from A to node 1 the DLCI is 75, and on the last link the DLCI is 85. So these numbers will be different for different links. On the other hand, whenever Switched Virtual Circuits are created the DLCIs are temporary and are assigned by the frame relay network, that is, the network does the assignment during the connection setup phase.

(Refer Slide Time: 34:45)

As you can see, for communication from A to B between these two DTEs, the DLCI here is 75, from 1 to 2 it is 48, from 2 to 3 it is 65, from 3 to 5 it is 98 and from 5 to B it is 85. This is set up in a dynamic manner, as you can see. A set of phases is involved: the virtual circuit is set up and the connection is established, after that data transfer is performed and several packets can be communicated, and finally the link is released, as is done in the case of X.25. So this part (Refer Slide Time: 35:33) is somewhat similar to X.25. Now let us see how the switching takes place within the network.
(Refer Slide Time: 35:45)

Here each node maintains some kind of table, somewhat similar to a routing table. As you have seen in the case of fixed routing, routing tables are stored in each node, and a similar thing is done in this case also. Here you see (Refer Slide Time: 36:04) node 2, the frame relay switch, shown in an expanded form, and here you see the table that is stored in it. It has got three interfaces, numbered one, two and three. So, for incoming interface one, when a frame goes from interface one to interface two the incoming DLCI is 48 and the outgoing DLCI is 65, and whenever it is going to interface three the DLCI is different, the incoming DLCI is 62 and the outgoing DLCI on interface three is 98.

When traffic comes from the other direction, that is, from interface two towards interface one, the DLCI numbers are different in the reverse direction; that is how full-duplex communication can be done. So here they are 75 and 85; similarly from 2 to 3 they are 82 and 92, from 3 to 1 they are 52 and 76, and from 3 to 2 they are 42 and 96.
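To make the idea of the switching table concrete, here is a minimal sketch of how such a lookup could be implemented; the entries reuse the example numbers from the figure just described, and the data structure and function names are purely illustrative.

```python
# Minimal sketch of a frame relay switching table lookup.
# Key: (incoming interface, incoming DLCI) -> (outgoing interface, outgoing DLCI).

SWITCH_TABLE = {
    (1, 48): (2, 65),   # from interface 1 towards interface 2
    (1, 62): (3, 98),   # from interface 1 towards interface 3
    (2, 75): (1, 85),   # reverse direction uses different DLCIs (full duplex)
    (2, 82): (3, 92),
    (3, 52): (1, 76),
    (3, 42): (2, 96),
}

def switch_frame(in_interface: int, in_dlci: int):
    """Return (out_interface, out_dlci) for a frame, or None if no circuit exists."""
    return SWITCH_TABLE.get((in_interface, in_dlci))

print(switch_frame(1, 48))   # -> (2, 65)
print(switch_frame(3, 42))   # -> (2, 96)
```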

Therefore you can see how the DLCI numbers are used to do the switching: using this particular table, the switching is done in this virtual circuit switch. Within the network each node performs this kind of switching. Frame relay operates in two layers compared to the three layers used in X.25. You can see here that frame relay has got two layers, physical and data link, and the upper layers are provided by other protocols, for example the IP protocol.
(Refer Slide Time: 37:57)

On the other hand X.25 also supports the network layer; it has got three layers, as we have already discussed. Frame relay performs the communication with the help of its two layers. The frame format used in frame relay is shown here.

(Refer Slide Time: 38:30)

It has got a flag at both ends, then the address, control, information and frame check sequence fields. Here the DLCI comprises ten bits, so you can have up to 1024 DLCI numbers. Then you have got another bit, C/R, which specifies whether a particular frame corresponds to a command or a response. EA stands for Extended Address: the DLCI is not necessarily restricted to a 10-bit number but can have more bits, and in that case the EA bits are used to indicate whether the address field extends into additional octets. Then FECN, BECN and DE are used in the context of congestion control. So although frame relay does not perform flow control and error control, it performs elaborate congestion control, as we shall see shortly.

The congestion control that is performed is explained here. Because of the higher data rate and the absence of flow control, a frame relay network is prone to congestion. The basic reason for congestion is the bursty nature of data, which frame relay has been designed to support, together with the high link speed; these two factors have made it vulnerable to congestion, and as a result it was necessary to develop a congestion control mechanism for frame relay. It is done in the following manner.

(Refer Slide Time: 40:46)

It uses two bits for congestion control. One is the Backward Explicit Congestion Notification (BECN) bit and the other is the Forward Explicit Congestion Notification (FECN) bit. In addition, another bit is used, the Discard Eligibility (DE) bit, which is used for packet discarding; whenever the other measures do not work, discarding is performed. Let us see how it is done.
(Refer Slide Time: 41:37)

Thus the packet is going from the sender in this direction, and it has been found that beyond node three in this direction there is congestion. Each switch or node identifies by some means whether congestion has taken place in the forward direction. Whenever a switch identifies that congestion has occurred, it notifies the source by setting the BECN bit in frames travelling towards the source. The notification goes towards the source, and the sender in that case takes suitable measures: it reduces the traffic, that is, the rate at which packets are introduced into the network, to come out of congestion. This is the use of the BECN bit (Refer Slide Time: 42:38) for controlling congestion. Then comes the FECN bit. Here you see the Forward Explicit Congestion Notification; this bit is used to inform the receiver.
(Refer Slide Time: 42:58)

So, whenever there is congestion in this part of the network this bit is set; this is the direction of congestion (Refer Slide Time: 43:14) in both cases, as you can see, and the receiver is alerted about the congestion. It is then up to the receiver to inform the sender, by sending a packet, that it is necessary to perform flow control so that the network can come out of congestion. Therefore, by using these two bits, BECN and FECN, you have got four possibilities: when the two bits are 0 0 it means there is no congestion in either direction, and whenever they are 0 1 or 1 0 there is congestion in only one direction.

On the other hand, whenever these two bits are 1 1 there is congestion in both directions. So by using these two bits congestion control can be done. However, whenever this does not work, the Discard Eligibility (DE) bit is used. How is it used? This bit indicates a priority: it is set to 0 for some packets and to 1 for others.

That means whenever this bit is set to 1 the packet has lower priority and can be discarded. So it essentially sets a priority in the packet. In case of congestion packets have to be discarded, and based on this bit the packets are discarded by the switches. As you already know, packet discarding is a very important mechanism used for congestion control, particularly when you have to come out of congestion.

Now you may be asking how you find out that congestion has taken place, or when to set the FECN or BECN bits in a packet. For that purpose it is necessary to perform traffic measurements. Each switch performs some kind of measurement on the network and based on that it decides when to set FECN or BECN, and for this purpose four attributes are used.
(Refer Slide Time: 45:57)

The first one is known as the access rate. It is decided by the bandwidth of the channel, that is, it is the maximum rate at which bits can be introduced. For example, if a T1 line is used then the maximum rate is 1.544 Mbps. That means if a particular node sends packets at a rate faster than this, we can say that the access rate has been violated.

The second parameter is the committed burst size Bc, which is the maximum number of bits in a predetermined period. It does not say that you have to send at a particular rate; rather, it specifies how many bits can be introduced by a particular source node within a given period, for example 5 milliseconds or 5 seconds. So within this duration the data can be introduced in the form of bursts, and as long as the total size does not exceed the committed burst size it is guaranteed that the packets will be delivered without error.

The third parameter that is used is the Committed Information Rate (CIR). It defines the average number of bits per second; the network performs averaging over the measurement period, and this rate is related to the access rate. This parameter is also specified at connection setup, and if the average number of bits per second is not exceeded it is guaranteed that the packets will be delivered without error. On the other hand, if it is exceeded the switch will inform the sender that the Committed Information Rate has been exceeded.

Another parameter is the excess burst size Be. This is the maximum number of bits in excess of the committed burst size Bc that the network will allow. The network permits, in addition to Bc, this much more data before it decides to take congestion action. That is, on top of Bc, suppose this is the limit of Bc and this is Be (Refer Slide Time: 48:28), this much excess is allowed by the frame relay network. But delivery of this excess is only a loose commitment; the committed burst size carries a firmer guarantee than the excess burst size, and in any case the burst should not exceed Bc plus Be.
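A minimal sketch of how a switch might classify traffic against these parameters is shown below. The thresholds and the decision to mark the DE bit or discard are a simplified interpretation of the Bc/Be description above; the names and the measurement loop are illustrative.

```python
# Minimal sketch of frame relay traffic policing over one measurement interval.
# Assumptions: bits within Bc are committed (DE = 0), bits between Bc and Bc + Be
# are accepted but marked discard-eligible (DE = 1), anything beyond is discarded.

def police_frames(frame_sizes_bits, bc, be):
    """Classify each frame of one measurement interval as 'commit', 'de' or 'discard'."""
    total = 0
    decisions = []
    for size in frame_sizes_bits:
        total += size
        if total <= bc:
            decisions.append("commit")      # within committed burst size
        elif total <= bc + be:
            decisions.append("de")          # excess burst: mark DE bit
        else:
            decisions.append("discard")     # beyond Bc + Be: drop
    return decisions

# Example: Bc = 4000 bits, Be = 2000 bits per interval, three bursts arriving.
print(police_frames([3000, 2000, 2000], bc=4000, be=2000))
# -> ['commit', 'de', 'discard']
```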

Thus we have discussed how traffic measurement is performed and how it is used to decide when to apply congestion control, particularly by setting FECN and BECN. Now, frame relay has been designed not only to carry its own frames but also to carry packets coming from other networks.

(Refer Slide Time: 49:30)

For example X.25, ATM or PPP (point-to-point protocol). For that purpose a special functionality known as FRAD, the Frame Relay Assembler/Disassembler, is introduced, because the frames of other standards or networks like X.25, ATM or PPP may not be the same as those of frame relay. What the FRAD function does is disassemble the packets and send them through the frame relay network, and then at the other end it performs the assembly. This is your frame relay network (Refer Slide Time: 50:00). Thus both assembly and disassembly are performed at the two ends so that packets coming from other protocols can be carried using frame relay frames.

Thus the Frame Relay Assembler/Disassembler assembles and disassembles frames coming from other protocols so that they can be carried by frame relay frames. This functionality is provided by frame relay, which shows that it can be made compatible with X.25, ATM and other packet switched networks.

Before I conclude today's lecture it is time to compare the functionalities of the X.25 and frame relay packet switched networks. We shall compare the features.

The first one is connection establishment. As we know, in X.25 connection establishment is done by the network layer. On the other hand it is not done by frame relay; it is left to the upper layers. Then, flow and error control are performed in X.25 both at the data link layer and at the network layer, hop-by-hop as well as end-to-end, whereas frame relay does not perform any flow and error control. For X.25 the data rate is fixed, whereas frame relay supports the bursty nature of data. The multiplexing performed by X.25 is in the network layer, using the Logical Channel Number, whereas frame relay performs the multiplexing in the data link layer using the DLCI. Congestion control is not necessary in X.25 because of the elaborate use of flow control and error control, whereas it is necessary in frame relay.

(Refer Slide Time: 51:20)

So this compares the two packet switched networks we have discussed today. Now it is time to give you the review questions.
(Refer Slide Time: 52:39)

1) In what layer does X.25 operate?
2) What are the key functions of the X.25 protocol?
3) What limitations of X.25 are overcome in the frame relay protocol?
4) Distinguish between the permanent virtual and switched virtual connections used in the frame relay protocol.
5) How is congestion control performed in a frame relay network?

Now it is time to give the answers to the questions of lecture – 22.

(Refer Slide Time: 53:21)


1) What is congestion? Why does congestion occur?

When the offered load crosses a certain limit there is a sharp fall in the throughput and an increase in delay. This phenomenon is known as congestion. Congestion arises when there is a sudden increase in the load; as a rule of thumb, congestion is identified whenever link utilization exceeds about 80%. Congestion may occur due to a sudden increase in traffic in the network, and it may also arise because of slow processors and low bandwidth links.

(Refer Slide Time: 54:10)

2) What are the two basic mechanisms of congestion control?

As we know, there are two mechanisms of congestion control. One is preventive, based on open-loop techniques such as the leaky bucket or token bucket algorithm, where precautions are taken so that congestion cannot occur in the first place. The other approach is based on recovery from congestion using closed-loop congestion control techniques once congestion has already taken place. We have discussed various such techniques, for example the choke packet.
(Refer Slide Time: 55:00)

3) How is congestion control performed by the leaky bucket algorithm?

In the leaky bucket algorithm a buffering mechanism is introduced between the host computer and the network in order to regulate the flow of traffic. Bursty traffic is generated by the host computer and introduced into the network by the leaky bucket mechanism in the following manner: packets are introduced into the network at the rate of one per tick, and in case of buffer overflow packets are discarded.
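The small simulation below sketches this behaviour under stated assumptions: a finite buffer, arrivals per tick, and an output of at most one packet per tick, with excess arrivals dropped. It is an illustration of the idea, not a reference implementation.

```python
# Minimal sketch of the leaky bucket algorithm: bursty arrivals are smoothed to
# at most one packet per tick; packets that do not fit in the buffer are dropped.

from collections import deque

def leaky_bucket(arrivals_per_tick, buffer_size):
    """Return (sent_per_tick, dropped_per_tick) for the given arrival pattern."""
    buffer = deque()
    sent, dropped = [], []
    for arriving in arrivals_per_tick:
        lost = 0
        for _ in range(arriving):
            if len(buffer) < buffer_size:
                buffer.append(1)          # queue the packet
            else:
                lost += 1                 # buffer overflow: discard
        out = 1 if buffer else 0          # leak: at most one packet per tick
        if buffer:
            buffer.popleft()
        sent.append(out)
        dropped.append(lost)
    return sent, dropped

# A burst of 5 packets at tick 0, then silence: output is smoothed to 1 per tick.
print(leaky_bucket([5, 0, 0, 0, 0, 0], buffer_size=3))
# -> ([1, 1, 1, 0, 0, 0], [2, 0, 0, 0, 0, 0])
```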

(Refer Slide Time: 55:30)

4) In what way is the token bucket algorithm superior to the leaky bucket algorithm?

The leaky bucket algorithm controls the rate at which packets are introduced into the network, but it is very conservative in nature. Some flexibility is introduced in the token bucket algorithm. In the token bucket algorithm tokens are generated at each tick, up to a certain limit set by the size of the counter. For an incoming packet to be transmitted it must capture a token, and transmission then takes place. Hence a burst of packets can be transmitted back-to-back if enough tokens have been saved up, which introduces some flexibility into the system and also improves performance.
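For comparison with the leaky bucket sketch above, here is a similar illustrative simulation of a token bucket; the token generation rate, the bucket capacity and the per-tick loop are assumptions made for the example.

```python
# Minimal sketch of the token bucket algorithm: one token is generated per tick,
# saved up to a maximum of `capacity` tokens; a packet is sent only if a token
# is available, so saved tokens let a burst go out back-to-back.

def token_bucket(arrivals_per_tick, capacity, rate=1):
    tokens = capacity          # start with a full bucket of tokens
    queue = 0                  # packets waiting for tokens
    sent = []
    for arriving in arrivals_per_tick:
        tokens = min(capacity, tokens + rate)   # add new tokens, up to the limit
        queue += arriving
        out = min(queue, tokens)                # each packet consumes one token
        queue -= out
        tokens -= out
        sent.append(out)
    return sent

# A burst of 5 packets: with 3 saved tokens, 3 go out at once in the first tick,
# unlike the leaky bucket which sends only 1 per tick.
print(token_bucket([5, 0, 0, 0], capacity=3))   # -> [3, 1, 1, 0]
```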

(Refer Slide Time: 56:19)

5) What is a choke packet? How is it used for congestion control?

As we know, the choke packet scheme is a closed-loop mechanism in which each link is monitored to examine how much utilization is taking place. If the utilization goes beyond a certain threshold the link goes into a warning state and a special packet called a choke packet is sent to the source. On receiving the choke packet the source reduces the traffic in order to overcome congestion.

With this we come to the end of today's lecture. In the next lecture we shall discuss another packet switched network, that is, ATM. Thank you.
Data Communications
Prof. A. Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture - 24
ATM

Hello and welcome to today's lecture on ATM. In the last lecture we discussed two important packet switched networks, X.25 and frame relay. In this lecture we shall discuss another very important packet switched network, that is, ATM. In fact ATM is the most popular packet switched network today. Let us see the outline of today's talk.

(Refer Slide Time: 1:54)

First I shall discuss the need for ATM, that is, the requirements that led to ATM, and then one very important concept that is used in ATM, cell switching. I shall elaborate on the concept of cell switching and then focus on the ATM architecture. As we shall see, ATM is based on virtual circuit connections. We shall discuss how virtual circuits are set up, the different types of virtual circuits used in ATM, and the switching types used in ATM. We shall also discuss the physical layer device, that is, the switching fabric used in ATM. ATM has got three different layers, physical, ATM and AAL; we shall discuss the functions of these three layers in detail.
(Refer Slide Time: 2:40)

And on completion the students will be able to understand the need for ATM, they will
be able to explain the concept of cell switching, they will understand the architecture of
ATM, they will be able to explain the operation of virtual connections and switching type
used in ATM and they will be able to explain the functions of switching fabric of ATM
and again as I mentioned they will be able to explain the functions of the three ATM
layers namely the physical, ATM and AAL.

(Refer Slide Time: 3:22)


So let us start with the need for ATM. If we look at the existing technologies you will find that they have several limitations. The first one is that different protocols use different frame sizes. The size of the frames can be small or large; in other words, the frame size is variable. And for complex networks the information carried in the header is extensive, leading to inefficiency. That means the header is quite big and contains a lot of information, and as we know a bigger header is essentially overhead on the network. So a bigger header leads to higher overhead and this leads to inefficiency.

To reduce this inefficiency the size of the data field is increased or made variable. But a variety of frame sizes makes the traffic unpredictable and the data rate delivery inconsistent. That means whenever the frame sizes are different the data traffic becomes unpredictable: when a particular frame will arrive, how long it will take, what the delay will be, all become unpredictable, and as a result a consistent data rate delivery cannot be ensured. Moreover, Time Division Multiplexing (TDM) is commonly used to exploit broadband technology, because transmission media with higher bandwidth are best utilized through multiplexing, and for that purpose Time Division Multiplexing is commonly used.

ATM technology has been developed to overcome some of the limitations that I have
already mentioned.

(Refer Slide Time: 5:11)

The most important design features of ATM are mentioned here. First of all it makes use of high bandwidth transmission media, which have a lower probability of error. Most of the earlier protocols, for example X.25 and frame relay, use transmission media of lower bandwidth, for example twisted pair or coaxial cable. On the other hand ATM uses optical fiber. Optical fiber is the most popular transmission medium available today, and as you know it has got very high bandwidth and a much reduced probability of error; that means the frames transmitted through the medium are less prone to error. ATM is designed to exploit these two features, and that is one design goal. Next we have interfacing capability with existing systems. Obviously, whenever we go for a new technology having high bandwidth and low error probability, we must also be able to interface with the existing systems so that the data coming from them can be transported through ATM. And it is also necessary to have a cost effective implementation.

As already discussed, our telecom industry uses hierarchies like T1, T2, T3 and so on, so ATM should support these hierarchies; that was one of the design features. As we know, most existing networks are circuit switched; for example, our public telephone system is circuit switched. On the other hand, a circuit switched network has a number of limitations, which is why it is essential to have a packet switched network. However, it should be connection oriented, that is, it must be based on the virtual circuit switching concept to ensure accuracy and predictability, which cannot be ensured by datagram techniques.

There is more functionality in hardware than in software, for higher speed. One of the design goals was that most of the functionality would be implemented in hardware rather than software, so that it is fast and can be implemented at lower cost. Let us look at the limitations of variable size frame multiplexing. Here we see that in present day technology the frames are of variable size, so whenever there is a big frame in front of a small frame, for example when A is a much bigger frame than B1 or C1, then because A takes quite a long time to go through the medium, B1 will suffer a much longer delay.

(Refer Slide Time: 8:30)


That means different frame sizes make traffic unpredictable and the data rate delivery can vary dramatically. This cannot be tolerated by real time traffic such as audio and video. Moreover, as we shall see, the frames used by audio and video are smaller in size, and present day technology such as X.25 or frame relay does not really exploit the very small size of audio and video frames. These limitations are overcome by using cell switching.

A cell is a fixed size block of information; it is not of variable size and usually the size is very small. For example, in ATM the size of a cell is only 53 bytes, all cells are of the same size, and the cell is used as the basic unit of data exchange.

(Refer Slide Time: 10:05)

Also, ATM uses Asynchronous Time Division Multiplexing. We have already discussed the limitations of Synchronous Time Division Multiplexing and how Asynchronous Time Division Multiplexing overcomes them; the latter is used in ATM.

Here we see (Refer Slide Time: 10:27) the cells coming from different sources; because of Asynchronous Time Division Multiplexing, A1, C1, then A2 and so on do not have fixed slots, and as a consequence these cells go out in a different order. That means the cells from different sources are interleaved in some order because of the asynchronous multiplexing. However, because of the high speed and the small cell size, the cells coming from different sources will reach the destination much more quickly. These are the advantages.

In spite of the interleaving, no cell suffers a long delay. As we have seen in the previous slide there is some interleaving: A1, then C1, then A2, B1, then D1 and so on. For example, A1 and A2 appear with several other cells in between. Still, in spite of this interleaving none suffers a long delay, because of the high speed and also because of the small cell size. No particular connection can monopolize the medium, and as a result none suffers a long delay.

Cells reach their destinations in the form of a continuous stream because of the high speed and the small size of the cells. That means the data or video frames coming from different sources will reach the destination as a continuous stream of cells, and they will reach quickly with much less delay; as a consequence it is possible to support this kind of real time traffic.

Switching and multiplexing by hardware at the cell level makes the implementation inexpensive and fast. These are the basic advantages of cell switching. However, there are some limitations. For example, since the cell size is small, with a 5-byte header and a 48-byte payload, the overhead is 5/53, about 9.4%, which is quite high; so ATM has a high header overhead because the header is big relative to the size of the cell. But the advantages actually outweigh the limitations imposed by this large overhead.

(Refer Slide Time: 12:31)

Also, ATM involves different types of delays. For example, there is packetization delay at the source: data coming from, say, an audio source has to be packetized in the form of cells. Since the audio data arrives at a low rate there will be some delay in packetization, in filling the cells; this is known as the Packetization Delay (PD) at the source.

Then we have the transmission and propagation delay. Whenever you are sending over a long communication medium, if the distance is very long there will be some propagation delay, and that propagation delay is usually relatively higher compared to the transmission delay because of the small cell size. The transmission time is small because the speed is high, about 155 Mbps, and as a result transmitting 53 bytes does not take much time; however, the propagation time can be quite long. These two are added together, with the propagation time being the more significant component. The combined time is known as the TD or Transmission Delay, although the propagation component dominates.

Next we have the Queuing Delay (QD) at each switch. Whenever packets arrive at a switch, several such packets may arrive from different sources simultaneously, and as a result there will be some delay in serializing or queuing the packets; this is known as QD. Then we come to the fixed processing delay at each switch: the switches take some time for error checking and synchronization, and this processing delay at the switch also contributes. Finally we have the jitter compensation or Depacketization Delay at the destination.

Since the cells arrive with different delays it is necessary to buffer them. For example, when you are playing back audio or video it is necessary to buffer the cells and then play them out at a continuous rate; this depacketization allows the jitter compensation to be done at the destination. Therefore, whenever a packet passes through the ATM network it suffers different kinds of delays, and these have to be taken into consideration in designing the network with a proper architecture.
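As a small worked illustration of the packetization delay mentioned above, the snippet below computes how long a constant bit-rate source takes to fill one 48-byte cell payload; the 64 kbps voice rate is an assumed example value, not something fixed by ATM itself.

```python
# Illustrative calculation: packetization delay for filling one ATM cell payload.
# Assumption: a 64 kbps constant bit-rate voice source (example value only).

PAYLOAD_BITS = 48 * 8          # 48-byte cell payload
source_rate_bps = 64_000       # assumed voice source rate

packetization_delay = PAYLOAD_BITS / source_rate_bps
print(f"Packetization delay: {packetization_delay * 1000:.1f} ms")   # -> 6.0 ms

# For comparison, the transmission time of a full 53-byte cell on a 155.52 Mbps link:
CELL_BITS = 53 * 8
link_rate_bps = 155_520_000
print(f"Cell transmission time: {CELL_BITS / link_rate_bps * 1e6:.2f} microseconds")
```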

Let us look at the ATM architecture.

(Refer Slide Time: 16:19)

Here, as you can see, these are the user interfaces and this is the ATM network with a number of ATM switches (Refer Slide Time: 16:31). The interface between a user and a switch is known as the user to network interface, so these are all user to network interfaces. On the other hand, the interfaces between a pair of switches are known as network to network interfaces. These interfaces are different because the UNI interfaces may be relatively slower in speed, whereas the NNI interfaces can be of much higher speed because of the multiplexing used inside the network.

Thus two different types of interfaces are specified in the ATM architecture: UNI between user and network, and NNI between switches within the ATM network.

(Refer Slide Time: 17:35)

The communication takes place by using virtual circuits. There are three components involved. One is the Transmission Path (TP); the Transmission Path is essentially the transmission medium, the physical medium used for the transmission. The Virtual Path and the virtual circuit are the two components with the help of which virtual connections are defined, and within each Transmission Path there are several Virtual Paths.

I can give the example of two cities connected by highways. Each highway corresponds to a Virtual Path: you have got one city and another city, and there can be several highways linking them. The link shown here is the Transmission Path, and within each Transmission Path you have got several Virtual Paths (VPs). So this is your Transmission Path (Refer Slide Time: 18:50) and you have got several VPs inside; essentially all the VPs together make up the Transmission Path between city C1 and city C2. Then within each Virtual Path you can have several virtual circuits.

For example, a highway can have several lanes, five lanes, eight lanes, etc., which correspond to the virtual circuits within a Virtual Path. So you find that the Transmission Path is essentially the physical medium between two nodes, or between a user and a node or switch; within this TP there are several Virtual Paths, and within each Virtual Path there are a number of virtual circuits. This kind of hierarchy is used for convenience and provides a number of advantages. These are the advantages of Virtual Paths.
Virtual Paths.

(Refer Slide Time: 19:59)

First of all, it simplifies the network architecture, since you can organize it in a hierarchical manner. This leads to increased network performance and reliability, reduced processing and a shorter connection setup time. Because of this hierarchy the connection setup time is small and the processing time is also less, and it also enables enhanced network services. These are the advantages provided by the Virtual Path concept.

(Refer Slide Time: 20:26)


Now let us look at the characteristics of Virtual Circuit Connections. First of all, a connection has to provide some quality of service, such as delay, jitter (variation in delay), bandwidth and burst size; these are the parameters that have to be specified. Then there are switched and semi-permanent virtual channel connections.

As we shall see, ATM supports Permanent Virtual Circuits as well as Switched Virtual Circuits. Then we have cell sequence integrity: since the cells travel through the same virtual circuit they are delivered in order, and as a consequence cell sequence integrity is maintained; at the other end there is no need to put them back in order. That means out of order delivery will not take place.

Traffic parameter negotiation and usage monitoring are also performed with the help of these Virtual Circuit Connections. These are the typical characteristics. A virtual connection is defined by a pair of numbers known as the VPI and the VCI. VPI stands for Virtual Path Identifier and VCI stands for Virtual Circuit Identifier. In the case of UNI, between the user and the network, the VPI is 8 bits and the VCI is 16 bits. On the other hand, between two switches (NNI) the VPI is 12 bits and the VCI is 16 bits.

(Refer Slide Time: 22:00)

A particular virtual connection is thus identified by 24 bits in the case of the user to network interface (UNI) and by 28 bits in the case of the network to network interface (NNI). Hence an enormous number of virtual circuits can be created between user and network, and also between two switches, where this number is even larger.
(Refer Slide Time: 23:10)

Like X.25 and frame relay, ATM uses two types of connections: the Permanent Virtual Circuit and the Switched Virtual Circuit. A Permanent Virtual Circuit is established by the network provider, and in the case of a PVC the VPIs and VCIs are all provided by the network provider.

On the other hand, in the case of Switched Virtual Circuits, since ATM itself has no network layer it has to take the help of an upper layer protocol, for example the Internet Protocol, which is a network layer protocol, to set up and establish the connection, and at that time the VPIs and VCIs are defined. Let us see how exactly it happens.

A setup message goes from one side to the other, then a call proceeding message comes back to this end, then a connect signal comes to one end and then a connect acknowledge signal goes to the other end. With this the connection is established and the VCIs and VPIs are all defined, and then data transfer takes place, as you can see. Data transfer takes place from source to destination between the two end users, and as soon as the data transmission is finished the connection is released: a release signal is sent and a release complete signal comes from the other end. Thus, whenever a particular connection is set up as a Switched Virtual Circuit, this is how it takes place.

Now let us look at the switching types used in ATM.

As we know, switches are used to route cells from source to destination end points, so a switch does routing and switching. There are two types of switches: one is known as a VP switch and the other as a VPC switch. In a VP switch routing is done using the VPI only; the VCI remains the same.
(Refer Slide Time: 25:51)

Between two switches the switching is normally done by a VP switch. That means routing is done only by using the VPI; the VCI remains the same. For example, this is one interface, this is another interface (Refer Slide Time: 26:10), and this is the ATM switch. The cell is coming in on interface 1 and it is going out towards interface 4; this path is already set up. You can see that the VCI part, 63, is unchanged; the only difference is in the VPI part. The VPI is 75 at the input interface and 83 at the output interface. The switching is done as follows: the cell comes in, and by looking at the table the VP switch identifies to which port it should be forwarded and what VPI should be assigned on that port, so the cell is forwarded with the help of these two numbers.

That means the cell header carries the VPI and VCI information, and with the help of that the routing or forwarding takes place.
(Refer Slide Time: 27:22)

On the other hand, in a VPC switch routing is done using both the VPI and the VCI, and usually the UNI side uses this kind of switching. Here the cell arrives at interface 1 with VPI 78 and VCI 83, and it will be forwarded to interface 4; when it is forwarded the switch assigns a different VPI and VCI. So at the input the VCI is 83 and the VPI is 78, while at the output the VCI is 93 and the VPI is 68. Both are modified by this switch and then the cell is forwarded.

Therefore, by looking at this kind of table, cell switching and multiplexing are done by the switches. Now, this has to be done in hardware, so we have to use some kind of switching fabric. One type of switching fabric that we have already discussed is the banyan switch. The banyan switch shown here has got 8 inputs and 8 outputs.
(Refer Slide Time: 28:59)

Eight input and eight output lines are shown here. As you can see, a cell arriving here can go to any one of these lines; these are the output lines and these are the input lines (Refer Slide Time: 29:23). So whenever a cell comes in on line 1 it can go to any one of the output lines, and it is buffered. Each line can carry input and output, so in fact it is full-duplex; it has to be full-duplex.

A cell coming in through interface zero can go to any one of the output interfaces, and the switching is done with the help of the destination bits. Inside there are micro switches, and these micro switches do the switching: depending on whether the current bit is 1 or 0, the input goes out of one or the other of the two outputs of the micro switch. Obviously, at a time a micro switch can route only one of its two inputs to a given output, so if both want to go to the same output there is a collision; in fact, even when the final destinations are different an internal collision can occur. Switching proceeds bit by bit. For example, suppose the destination is 6 and the cell enters at input 0. The binary representation of 6 is 1 1 0, so the first bit is 1 and the cell takes this path, the next bit is 1 so it again takes the corresponding path, and then the last bit is 0 so it goes to this particular output line. In this way the switching is done; however, collisions can occur.

A banyan switch with n inputs has log2 n stages. Here the number is 8, so it has three stages, as you can see, and in each stage there are n/2 micro switches. Since n/2 is equal to 4, we have four micro switches per stage and three stages in this 8 x 8 switch.
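The self-routing idea described above can be illustrated with a few lines of code: at stage k the k-th bit of the destination address decides which of the two outputs of the micro switch the cell takes. The sketch below only traces the path of a single cell and ignores collisions; it is an illustration of the self-routing principle, not a model of a real fabric.

```python
# Minimal sketch of self-routing through a banyan switch: at each of the
# log2(n) stages, one bit of the destination address selects the upper (0)
# or lower (1) output of the micro switch. Collisions are ignored here.

import math

def banyan_route(destination: int, n_ports: int = 8):
    stages = int(math.log2(n_ports))
    path = []
    for stage in range(stages):
        # Take destination bits from most significant to least significant.
        bit = (destination >> (stages - 1 - stage)) & 1
        path.append(bit)
    return path

# Destination 6 = binary 110: the cell is steered 1, 1, 0 through the 3 stages,
# exactly as described in the text.
print(banyan_route(6))   # -> [1, 1, 0]
```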
(Refer Slide Time: 31:45)

Unfortunately, even when the cells coming from the input sources have different destinations, internal collisions can occur. Internal collision can be avoided by using the Batcher-banyan switch. It combines three different modules; one is the Batcher switch. What the Batcher switch does is to reorder whatever inputs it receives according to the output link each is going to; that is, all the inputs are sorted based on their destination lines. The Batcher switch does that.

Unfortunately there is still a possibility that some of the cells coming from the inputs may have the same destination port, in which case they cannot be sent simultaneously to the banyan network; this is overcome by using the trap module. The trap module buffers such cells so that only one cell per destination at a time is forwarded to the banyan network. In this way the Batcher-banyan switch can perform ATM switching, overcoming the limitations of the simple banyan switch, and this kind of switching fabric can be used in implementing ATM switches. Now let us look at the different layers used in ATM.
(Refer Slide Time: 33:30)

As I mentioned, ATM has got three different layers: physical, ATM and AAL, where AAL stands for Application Adaptation Layer. The physical layer defines the transmission medium, usually optical fiber, bit transmission, encoding and signal conversion; it may be necessary to convert from electrical to optical and from optical to electrical, and the physical layer does that. This was originally based on SONET, the Synchronous Optical NETwork, which we have already discussed. ATM was based on SONET, and in the case of ATM the lowest rate is 155.52 Mbps, which is the minimum bit-rate, but it has got higher rates as well.

However, ATM can use not only SONET but has provision for other technologies; it may also use T-carrier lines if necessary, but it has been explicitly designed for SONET. The ATM layer performs the important functions of routing, traffic management, switching and multiplexing. The AAL, the Application Adaptation Layer, accepts frames from an upper layer protocol and maps them into ATM cells.

That means the packets or frames coming from different applications may be of different sizes; the AAL layer does the mapping onto ATM cells, that is, it breaks them up, segments them and also reassembles them at the other end. Let us see how it is done. Here the functionality of nodes and stations is defined. As you can see (Refer Slide Time: 35:40), the user stations have three layers of functionality: physical, ATM and AAL, the Application Adaptation Layer. On the other hand, the switches have the functionality of two layers, ATM and physical. Therefore all the switches within the ATM network have got the ATM and physical layers, and only the user ends have got the three layers of functionality: physical, ATM and AAL.

Here it shows the functionality of different layers and how it actually happens.
(Refer Slide Time: 36:19)

Here the information is coming from the upper layer. The AAL layer does the segmentation of the frames into 48-byte units; then the ATM layer puts on the header, converting them into ATM cells by adding the 5-byte header, so that each becomes a 53-byte cell, and then the cells pass through the SONET layer.

The popular SONET framings commonly used for ATM are STS-3c and STS-12c; as we know the data rate for STS-3c is 155.52 Mbps and for STS-12c it is 622.08 Mbps. Of course, as I mentioned, T-carrier framings such as DS-3 can also be supported, but primarily it uses these two. Here (Refer Slide Time: 37:33) it is explained with the help of STS-3c: this is the STS-3c frame, and in this frame, as we know, we have the section, line and path overheads, so we have 9 columns of transport overhead here, 1 column for the path overhead, and the remaining columns are used to pack the ATM cells.
(Refer Slide Time: 36:20)

Therefore, as you can see, the SPE (Synchronous Payload Envelope) is 9 x 260 bytes, and in this the ATM cells are packed; we can pack 44 ATM cells. Here the 5-byte header added by the ATM layer is elaborated: there are two slightly different headers, one for the UNI (user to network interface) and one for the NNI (network to network interface). The main difference comes from the GFC, the Generic Flow Control field. This Generic Flow Control was intended to be used between the user and the network, whereas inside the ATM network you have the links between the switches.

This flow control was meant to be used between the user and the network, and that is why 4 bits were set aside by the designers of ATM. However, it is not commonly used, so you may consider the GFC a flaw in the design. Then we have the VPI, the Virtual Path Identifier, which is 8 bits between the user and the network for UNI; on the other hand it is 12 bits for the NNI interface.
(Refer Slide Time: 39:43)

So whenever it is between two switches there is a 12-bit VPI and a 16-bit VCI; the VCI is 16 bits in both cases. Now, the Payload Type field has three bits; the Payload Type specifies what kind of payload a particular cell is carrying. The meaning is explained here.

(Refer Slide Time: 41:46)

The most significant bit specifies whether the cell carries user or management data; if it is 0 then it is user data. User data can again be of two types, type 0 or type 1, depending on the least significant bit, and the middle bit is used to indicate congestion. So whenever the field is 000 the cell carries user data, there is no congestion in the network, and it is a type 0 payload.

Whenever the least significant bit is 1 (001) it is user data with no congestion, of type 1. Whenever the middle bit is set to 1 (010) it is user data, congestion has occurred in the network, and it is type 0; when it is 011 it is type 1 with congestion. When the field is 100 it carries maintenance information between adjacent switches: two switches can exchange some information, and that management information is indicated by Payload Type 100.

On the other hand, the maintenance information can be between source and destination; in that case the Payload Type is 101. Payload Type 110 is used for resource management and 111 has been left for future use. So this is the Payload Type (Refer Slide Time: 41:52), and CLP is the Cell Loss Priority; one bit is provided for the cell loss priority, and it is used for congestion control.

As we know, whenever congestion occurs it may be necessary to discard cells, and that is done with the help of this CLP bit. Whenever the CLP bit is 0 the cell is of higher priority, and whenever CLP is equal to 1 the cell is of lower priority. That means when congestion occurs the cells with CLP equal to 1 can be discarded, and as long as cells with CLP equal to 1 are available only those cells are discarded; the cells with CLP equal to 0 are not touched. However, whenever congestion is very high, even these cells may be discarded.
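To tie the header fields together, here is a small sketch that packs the first four bytes of a UNI cell header (GFC, VPI, VCI, PT, CLP) using the bit widths given above; the HEC byte is left out since it is computed separately over these four bytes, and the helper is illustrative only.

```python
# Minimal sketch: building the first 4 bytes of an ATM UNI cell header.
# Field widths as described above: GFC 4, VPI 8, VCI 16, PT 3, CLP 1 bits.
# The fifth byte (HEC) is computed over these 4 bytes and is omitted here.

def build_uni_header(gfc: int, vpi: int, vci: int, pt: int, clp: int) -> bytes:
    assert gfc < 16 and vpi < 256 and vci < 65536 and pt < 8 and clp < 2
    word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pt << 1) | clp
    return word.to_bytes(4, "big")

# Example: VPI 78, VCI 83, user data / no congestion / type 0 (PT = 000), CLP 0.
header = build_uni_header(gfc=0, vpi=78, vci=83, pt=0b000, clp=0)
print(header.hex())   # -> 04e00530
```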

There is an 8-bit Header Error Control (HEC) field, which we shall discuss in more detail. The Header Error Control is an 8-bit field and the generator polynomial that is used is shown here: x^8 + x^2 + x + 1.

(Refer Slide Time: 43:12)


This is used for error detection and correction. As we can see here, normally, when there is no error, the receiver is in correction mode, and whenever an error occurs it goes to detection mode. Whenever there is a single bit error it can be corrected; after correcting it the receiver moves to detection mode. On the other hand, whenever a multi-bit error is detected the cell is discarded because it cannot be corrected. Each switch does this error checking, so there is some overhead, although it is done in hardware. A single bit error can be corrected because 8 bits are used for error correction, and the error correction is done only for the header part.

Here you have got bytes 1, 2, 3 and 4; that means the error protection is computed over these 32 bits or 4 bytes, and the CRC check bits are kept in the fifth byte. The HEC operation is performed at the receiving end for both error correction and error detection. The nodes also make use of the fixed cell size and the HEC to determine the cell boundaries implicitly.
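A minimal bitwise implementation of the CRC defined by the polynomial x^8 + x^2 + x + 1 over the first four header bytes is sketched below; the helper names are illustrative, and the additional XOR with the fixed pattern 01010101 is noted as the adjustment the ITU recommendation applies before the value is placed in the HEC byte.

```python
# Minimal sketch: CRC-8 over the first 4 header bytes using the generator
# polynomial x^8 + x^2 + x + 1 (0x07), computed bit by bit.

def crc8_atm(data: bytes) -> int:
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ 0x07) & 0xFF   # divide by x^8 + x^2 + x + 1
            else:
                crc = (crc << 1) & 0xFF
    return crc

def hec(header4: bytes) -> int:
    # The ITU recommendation additionally XORs the remainder with 0x55
    # (01010101) before it is placed in the fifth header byte.
    return crc8_atm(header4) ^ 0x55

four_bytes = bytes.fromhex("04e00530")   # header built in the earlier sketch
print(hex(hec(four_bytes)))
```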

(Refer Slide Time: 44:59)

One important feature of cell switching is that there are no flag bits to identify the beginning or end of a cell. How the beginning of a cell can be identified is therefore also done with the help of the Header Error Control bits, in an implicit manner, by locating the cell boundaries.

The state transition diagram used for that purpose is shown here. Normally a node starts in the hunt mode, where checking is done bit by bit; whenever a correct HEC is found it moves on to checking cell by cell. It is then in the pre-synchronization phase, and whenever it receives several consecutive correct HECs it goes to the synchronization mode. Typically this number is around 5 or 6.
Hence, if you keep it as 5, it means 5 consecutive cells must be received correctly, that is, with correct HECs, before the node goes to the synchronization state. Then each and every cell is checked with the help of the header error control, and since the cells arrive as consecutive 53-byte units, once the node gets synchronized all the subsequent cells remain synchronized.

However, whenever incorrect Header Error Controls are received, and a certain number of consecutive incorrect HECs, say seven, occur, the node goes back to the hunt mode. So we find that cell delineation is performed with the help of the Header Error Control bits, and synchronization is done with the help of this particular field in an implicit manner.

Now, ATM supports a number of service categories, and these are divided into five different classes.

(Refer Slide Time: 47:22)

One is constant bit-rate (CBR). For example, traffic coming from a telephone network T1 circuit arrives at a constant bit-rate.

The second type is variable bit-rate real time, for example real time videoconferencing, where video or audio data comes in streamed form and has to be delivered in real time. On the other hand, when the traffic is not real time it is variable bit-rate non real time; multimedia e-mail, for example, can be variable bit-rate non real time.

Then there is available bit-rate (ABR), for which browsing the web is a typical application. Some minimum bit-rate is specified; however, if a higher bit-rate is available it is used, as in applications like browsing the web. Then we have the unspecified bit-rate (UBR), where no specific bit-rate is specified and whatever bit-rate is available can be used; UBR can be used for background file transfer and other purposes.

So while real time videoconferencing is going on, in addition you can do a background file transfer with the Unspecified Bit-rate category. These are the different types of service categories identified by the ATM designers, and accordingly the AAL layer originally had five different frame types, which are now divided into four different types.

(Refer Slide Time: 49:43)

For example, AAL1 is for constant bit-rate streams, AAL2 is for short packets coming from video or audio applications, AAL3/4 is for conventional packet switching, and AAL5 is for packets requiring no sequencing and no error control. That means for simple applications, where sequencing and error control are not required, this last framing can be used.

The AAL layer is divided into two sublayers. One is the CS sublayer, the sublayer that performs convergence, known as the convergence sublayer, and the other is the segmentation and reassembly (SAR) sublayer. These two functions are different: the AAL has to perform segmentation and reassembly and it also has to perform convergence. Now let us look at the four different types of AAL frames that are used and see how this is done.
(Refer Slide Time: 50:56)

This is used for constant bit-rate streams. The constant bit stream arrives and the CS sublayer divides it into 47-byte units; the SAR sublayer, the segmentation and reassembly sublayer, puts on a header which has got two fields, namely SN and SNP, the Sequence Number and the Sequence Number Protection, the latter for checking purposes. These 8 bits are added and then the 48-byte unit goes to the ATM layer, which forms the 53-byte cell with the 5-byte header, and then it is transmitted into the network.

(Refer Slide Time: 51:13)

On the other hand, AAL2 is used for low bit-rate, short-frame packets such as audio and video packets. The packets are smaller in size, for example 24 bytes, so to fill out the 47 bytes a pad has to be added. Here, as you can see, the segmentation and reassembly header is a little different, having fields like LI, PPT, UUI and HEC, along with a short SAR header. Two headers are added: the CS sublayer puts on a 3-byte header and the SAR sublayer a 1-byte header, and the remaining part is filled up with padding to make 48 bytes altogether, which goes to the ATM layer; the ATM layer puts on the 5-byte header and sends the cell through the switch.

And here we have AAL3/4, which is used for conventional applications and can support both connection oriented and datagram type applications. Here you see that data packets of up to 64 kilobytes can be supported; the data is divided into packets by the AAL layer, each with a header and a trailer, so a header and trailer are added to each 44-byte payload taken from the upper layer, forming a 48-byte unit which goes to the ATM layer; the ATM layer forms the 53-byte cell, which is transmitted through the network.

(Refer Slide Time: 53:11)


(Refer Slide Time: 53:28)

Then AAL5 is a simple and efficient adaptation layer. As you can see, this is the frame format. Here also data packets of up to 64 kilobytes are supported, but no sequencing or flow control is needed, and that is why this is known as the simple and efficient adaptation layer: per-cell headers and trailers, error control and sequencing are not used. The segments go to the ATM layer, which does the packetization into cells and sends them. Let us see how congestion control is performed in ATM.
(Refer Slide Time: 55:13)

Conventional congestion control schemes are inadequate for ATM for the following reasons: the majority of the traffic is not amenable to flow control, because real time traffic is carried and flow control cannot be enforced on it; feedback is relatively slow compared to the reduced cell transmission time, because the propagation time is longer than the transmission time; a variety of applications is supported by the ATM network, as I already mentioned, with five different service categories; and the speed of switching and transmission is very high. Because of all this, conventional congestion control cannot work well.

Congestion control is therefore performed in three different ways: admission control, resource reservation and rate based congestion control.

Admission control: when a station wants a new virtual circuit it specifies the traffic to be
offered and services expected. if the network is unable to offer the service without
adversely affecting the existing connections, the cell call is rejected. So as we have seen a
particular circuit may not be allowed to setup.

Resource reservation: as the setup message traverses along the path, bandwidth is earmarked on each line it passes through, based on the bandwidth available, and that reserved bandwidth is then used by the connection.

Rate based congestion control: here, after every K data cells the sender sends an RM (resource management) cell; when it gets back to the sender after reaching the destination, the sender comes to know the minimum acceptable rate. Now it is time to give you the Review Questions.
(Refer Slide Time: 56:00)

1) What are the benefits of cell switching used in ATM?
2) What is the relationship between TPs, VPs and VCs?
3) How is an ATM virtual connection identified?
4) How are cell boundaries identified in ATM?
5) How is congestion control performed in ATM?

Here are the answers to the questions of lecture 23.

(Refer Slide Time: 57:10)


1) In what layer does X.25 operate?

X.25 operates in the network layer.

2) What are the key functions of X.25 protocol?

Key functions of X.25 protocol are;


Call control packets are used for call setup. As we know, multiplexing of virtual circuits takes place in the packet layer, and both the link layer and the packet layer perform flow control and error control.

(Refer Slide Time: 57:05)

3) What limitations of X.25 are overcome in the frame relay protocol?

In X.25 the overhead on the user equipment and the networking equipment is very high, and it is also slower. These limitations are overcome in frame relay: as we have seen, X.25 can go up to 64 Kbps whereas the minimum rate of frame relay is 1.544 Mbps, so frame relay is much faster and has lower overhead.
(Refer Slide Time: 57:53)

4) Distinguish between the Permanent Virtual Circuit and the Switched Virtual Circuit used in the frame relay protocol.

In a Permanent Virtual Circuit the path is fixed; data transfer occurs as with virtual calls, but no call setup or termination is required. On the other hand, in a Switched Virtual Circuit the path is a dynamically established virtual circuit, set up using the call setup and call clearing procedures, and many other circuits can share the same path.

(Refer Slide Time: 58:25)


6) How is congestion control performed in the frame relay network?
As discussed earlier, it uses two bits for congestion control, Backward Explicit Congestion Notification and Forward Explicit Congestion Notification. In addition to these two bits it does packet discarding: if users do not respond to the congestion notices, packets are discarded by the switches.

With this we come to the end of today's lecture on ATM, and also of the discussion of the applications of packet switched networks: X.25, frame relay and ATM. Thank you.
Data Communications
Prof. Ajit Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture # 25
Medium Access Control - I

Hello and welcome to today's lecture on Medium Access Control. This is the first lecture on Medium Access Control; we shall cover this topic in two or three lectures. So far we have discussed the switched communication networks, where communication is done mainly by using multiplexing and switching through the transmission media. But here you will see that the communication is done using a broadcast medium, which requires some MAC technique. Let us have a look at the outline of today's lecture.

(Refer Slide Time: 02:12)

First I shall introduce to you the need for MAC techniques and explain the various broadcast networks in which MAC techniques are relevant. Then we shall discuss various issues in MAC techniques and consider the various goals of Medium Access Control. Then I shall give an overview of the possible MAC techniques.

However, in this lecture we shall discuss only random access MAC techniques such as ALOHA, CSMA and CSMA/CD; CSMA/CA we shall cover in the next lecture.
(Refer Slide Time: 03:00)

And on completion the student will be able to explain the goals and requirements of MAC techniques, identify key issues related to MAC techniques and give an outline of possible MAC techniques. They will be able to distinguish between centralized and distributed MAC techniques and classify various contention based techniques such as ALOHA, CSMA, CSMA/CD and CSMA/CA. And finally they will be able to compare the performance of contention based techniques.

As I mentioned, networks can be broadly divided into two types. One is the switched communication network, where peer to peer communication is performed with the help of transmission lines, multiplexers and switches. We have already discussed switched communication networks: you have seen how multiplexing and circuit switching techniques are used in networks such as the telephone network and the SONET network, and we have also discussed various packet switched networks such as X.25, frame relay and ATM. These are all essentially switched communication networks, where either packet switching or circuit switching is used together with techniques like multiplexing and switching.

Now let us consider a different kind of scenario in which we have a medium which is shared by a number of users: say this is the shared medium, and here this is user one (Refer Slide Time: 4:40), this is user two, this is user three, this is user four and so on. A particular user will broadcast data into the network; any user may broadcast data into the network. Now whenever data is broadcast there is obviously a possibility that several users will try to broadcast simultaneously; that is one problem. The second problem is that whenever a particular user is broadcasting, the data has to reach the proper destination. So this is some kind of shared medium where essentially broadcasting is being done.
(Refer Slide Time: 05:33)

Let us have a look at some of the examples. One important example is the multi-tapped bus. Say you have got a bus, realized maybe by using a coaxial cable, and you can have taps to which different users can be connected; you can make several taps and then n users can be connected, like user 1, user 2 and user n. So in this way n users can be connected to a common medium, that is, the coaxial cable.

Now if A launches some data on the medium it will proceed in both directions and it will reach 1 and n and all the other stations connected to the medium. That means here a particular user is broadcasting and it is being received by all the other users connected to the medium. So this is the multi-tap bus, and you will see that this type of multi-tap bus is used in Local Area Networks (LANs).
(Refer Slide Time: 06:51)

Let us consider another example that is ring networks sharing a medium. You have got a
number of users say user 1 2 3 4 and here by the term users we mean they are essentially
the computers. They are connected with the help of point-to-point link and you see they
form a ring. So this particular user 2 is connected to its neighbors 1 and 3, 3 is connected
to 2 and 4 and 1 is connected to 2 and 4 so in this way they are connected and this
particular medium may be realized by using twisted-pair or it may be realized by using
optical fiber which is possible in this type of point-to-point communication. In both cases
you will see that this medium is being shared by all the users.

For example, whenever user 1 or station 1 wants to send to 3 it has to go through this part
of the medium. Whenever it sends to 4 it will go in this manner. So this part of the
medium is being shared by all the users for data communication. So this is ring networks
using a shared medium. This is another example of broadcast networks.
(Refer Slide Time: 08:24)

The third example is satellite communication. Here we see that sharing is being done by using uplink and downlink frequency bands. For example, as you can see, each ground station communicates with the help of a transponder, and there are two frequencies, one uplink frequency and one downlink frequency. You can see the uplink frequency is being shared by all the users for communication with the satellite (Refer Slide Time: 8:56). On the other hand the downlink frequency is a broadcast; it goes to all the ground stations. So, if this particular ground station wants to broadcast, it will send using the uplink frequency; the satellite will receive it and retransmit, or broadcast, to all the other ground stations.

(Refer Slide Time: 09:16)


This is the example of satellite communication where the uplink frequency is being shared by a number of ground stations. Of course the downlink frequency is a broadcast, so it is received by everybody; but the uplink frequency has to be shared by all the stations using some mechanism. So this is another example of broadcast networks.

Then, apart from the satellite, multi-tap bus and ring networks, there are packet radio networks, as we shall see. Broadcast is also used in wireless communication, where stations share a frequency band; we shall see that in wireless communication such as cellular networks or packet radio, some frequency bands are shared in all such cases. These are all examples of broadcast networks.

(Refer Slide Time: 10:15)

Now the question arises: how will different users send through the shared medium? It is necessary to have a protocol or technique to orchestrate the transmissions from the users. That means at a time only one user can send through the medium, and that has to be decided with the help of this Medium Access Control (MAC) technique. So the communication has to be orchestrated using some protocol or technique, and that is essentially the MAC technique.

So the question arises: who goes next? This is the question to be answered. We have seen there are a number of users or stations who are trying to communicate, who are trying to get hold of the medium for transmission; who will go next is decided by the Medium Access Control (MAC) technique. So the protocol used for this purpose is known as the Medium Access Control (MAC) technique.
(Refer Slide Time: 11:25)

The key issues in MAC techniques are 'where' and 'how'; these are the two basic questions which are to be answered whenever we consider MAC techniques. Let us see what these two mean.

By 'where' we mean who does the control: whether it is performed in a centralized manner or in a distributed manner. Let us see the case of the centralized one.

In the centralized scheme a designated station has the authority to grant access to the network. That means a designated station will grant access to the network to the remaining stations. So it is somewhat like a meeting where there is a chairman, and the chairman can permit any other person in the meeting to speak; it is somewhat like that.

This centralized scheme has a number of advantages. The stations can have very simple logic, because the Medium Access Control is performed by a central station; the other stations simply transmit their data when they are permitted to. So no Medium Access Control logic is necessary in each station, only the central station needs the logic, and the hardware or software required in all stations other than the central controller is minimal.

It has another advantage: it is capable of greater control, providing features like priority, overrides and guaranteed bandwidth. By priority we mean that there may be a number of users and some of them may have important data to send, so the central controller can give priority to a group of users compared to others. So whenever a high priority user tries to send data, a low priority transmission can be stopped; it can be overridden with the help of the centralized controller, which can also guarantee the bandwidth allowed to each user.
(Refer Slide Time: 13:50)

So these are the controls which can be very easily performed with the help of this
centralized controller and it can do a very easy coordination.

However, it suffers from lower reliability, because if the central controller fails then there is nobody to control: the reliability is poor since it depends on the central controller not failing. If the central controller fails, Medium Access Control cannot be performed by the other users. This is the limitation of the centralized controller, and that is why distributed control is preferred in many situations. In distributed control the stations dynamically determine the transmission order. Here each and every station takes part in the Medium Access Control operation, following a mutually agreed upon rule or protocol; that is why distributed control performs it in a dynamic manner where each and every station takes part. Obviously it is more complex; however, it is much more reliable, so it can tolerate some failures. If some of the stations fail it does not matter, the network will continue.

It is also scalable. By scalable we mean that whenever a station joins the network or a particular station is switched off, all these things can be taken care of in a distributed MAC technique in an easy manner; you can keep on increasing the number of stations very easily in a distributed environment. So that answers the first question, 'where'.

The second question is 'how'.


It can be performed in a synchronous manner or in an asynchronous manner. We have already discussed Synchronous Time Division Multiplexing (STDM), where we have seen that specific capacity is dedicated: each slot is allocated to a particular user or station, and as a result the bandwidth or capacity is dedicated. It can also be done by using Frequency Division Multiplexing; these are all synchronous schemes.

On the other hand, in an asynchronous technique the capacity is allocated dynamically. This asynchronous approach is very important for data communication. The reason is that, as you know, data traffic is bursty in nature. Whenever the data is bursty there is no point in providing dedicated capacity, because dedicated capacity would not be utilized; that is why in a bursty environment it is necessary to use asynchronous techniques such as Asynchronous Time Division Multiplexing (ATDM), where the capacity can be allocated dynamically. So the answer to 'how' is: in an asynchronous manner. That is why most of the MAC techniques that we shall discuss are based on asynchronous techniques.

(Refer Slide Time: 17:32)

Now the MAC technique designer has a number of goals to satisfy. These are the various
goals or objectives.

(Refer Slide Time: 17:55)


The first one is initialization. What do we mean by initialization? Whenever you turn the network on, in the beginning it should go towards a stable state. That means it should be known who will transmit first, who will transmit next and so on; there should not be any chaos. As the network is first turned on it should go to a stable state; that is initialization, the initial condition has to be satisfied.

Second is fairness. We have seen that you have got a number of stations trying to send data through a shared medium. Obviously each and every node has to be given equal opportunity. By that we mean the time for access provided to each station has to be the same, and the delay, the amount of time each station has to wait, should also be the same for each and every station. This is the fairness which has to be guaranteed by the MAC technique.

Another important goal is limitation to one station. We have seen that there are a number of stations trying to send, so the MAC technique should ensure that at any instant only one station transmits, that the packet is received by the intended station, and that only one packet of data is received at a time, not multiple packets; if multiple packets are there they should be received in an orderly manner. The MAC technique will also perform error limitation; it will do error detection and correction.

Then we have recovery: whenever there is a failure, the Medium Access Control should be designed in such a way that it is able to recover from the failure. Then comes the question of reconfigurability. As I have mentioned, stations will be added to or removed from the network, so the Medium Access Control should be able to reconfigure the network so that it can keep performing the medium access control in an effective manner.
(Refer Slide Time: 20:40)

Then comes the question of compatibility. Compatibility is important because there will be some standard Medium Access Control (MAC) technique, and if different manufacturers build equipment they should be able to operate with each other; that compatibility has to be ensured.

Finally comes the question of reliability. The MAC technique should be able to work under conditions of failure and other problems, so reliability has to be ensured by the MAC technique. Now let us consider the various MAC techniques that are being used.

(Refer Slide Time: 21:30)


As you can see, the techniques can be broadly divided into four types: random, round-robin, reservation and channelization. These four categories are needed in different situations. In this lecture we shall focus on the random techniques; we shall discuss the other techniques in subsequent lectures. Let us focus on the random techniques, which are very suitable for the bursty nature of data traffic.

These random MAC techniques can again be divided into four different types: ALOHA, CSMA, CSMA/CD and CSMA/CA. We shall discuss each of them one by one. Let us first consider the first one, ALOHA. ALOHA was developed for a packet radio network by the University of Hawaii. As you know, this particular location comprises a number of islands, and obviously you cannot set up a wired network across these islands. The University of Hawaii had a central computer and there were terminals distributed over the different islands, so it was necessary for the central computer to communicate with the terminals, and for that purpose Abramson developed a protocol known as ALOHA for this kind of environment. The basic environment is shown here.

(Refer Slide Time: 23:16)

This central node is essentially the central computer located at the University of Hawaii, and these are the various terminals: this is terminal one (Refer Slide Time: 23:22), this is terminal two and this is terminal n, located on different islands. They communicate by using a wireless technique known as packet radio. The basic technique is shown here: each of these terminals can transmit using the uplink frequency f1, which is shared by all the terminals for random access. On the other hand the central node retransmits whatever it receives from the terminals using the downlink frequency f2, and this is received by all the stations. So this is the basic environment for which the ALOHA protocol was designed.
(Refer Slide Time: 24:25)

It works like this. Any station that has some data to send will send it, so it can be considered as a free for all. Now, in such a case, if Tf is the frame transmission time, that is, the packet duration, you can see that the vulnerable period is 2 × Tf: a packet will overlap with any other packet whose transmission starts within a window of 2Tf around it. As a result a collision occurs, the central node rebroadcasts the garbled packet, and whenever the garbled packet is received by the stations they know that the packet has not been transmitted successfully, so the terminals perform retransmission. A retransmission technique is used here whenever there is a collision.

Now let us see how it is being done.


(Refer Slide Time: 25:55)

As I mentioned, this is the vulnerable period, which is equal to 2Tf, and a collision can occur within this period whenever multiple terminals send data. There is also a maximum propagation time: we have seen that the packet goes from the terminal to the central node and from the central node it comes back, so this round trip delay is essentially 2 × the propagation time. Then there is the time out period: after a transmission, within a time of roughly Tf plus twice the propagation time plus the time out, all the stations come to know whether a collision has occurred.

Now, if multiple stations send data at overlapping times, a collision occurs, and if they immediately retransmit one after the other there are again chances of another collision. That is why the stations wait for a random amount of time, known as the back off, and then they do the retransmission. The retransmission is performed by all the colliding stations, but the back off time is different for different stations: after a packet suffers a collision, a station has to wait for its back off duration before it retransmits, and this back off time is random in nature so that a collision does not occur a second time.
(Refer Slide Time: 27:50)

Let us see how the performance can be improved in this case.

Now, as we have seen, in the previous case, known as pure ALOHA, the vulnerable period is 2 × Tf, twice the frame transmission time. Roberts proposed a technique known as slotted ALOHA, in which time is divided into equal slots. A packet transmission can then start only at the beginning of a time slot, like here, here, here or here, and not in between.

(Refer Slide Time: 28:30)


In case of pure ALOHA transmission can start at any time, but here it cannot. As a consequence, say this is sender A, this is sender B and this is sender C: if there is an overlap, it is an overlap of entire frames and not of part of a frame as happened in pure ALOHA. As a consequence the vulnerable period is reduced from 2Tf to Tf, and this leads to a better performance of the slotted ALOHA technique.

Let us see how the performance varies.

(Refer Slide Time: 29:18)

Now, as you can see, G is the offered load, the number of transmission attempts made per packet time. For pure ALOHA the throughput is S = G × e^(-2G), and the maximum throughput is obtained when G = 0.5. So in this case the value of S is 0.5 × e^(-1) = 1/(2e), which is roughly 0.184. That means the maximum throughput S that can be achieved is roughly 18.4%, and it happens whenever the offered load is 0.5, that is, the number of attempts per packet time is 0.5. As more packets are introduced the throughput decreases, and it may approach 0 whenever the offered load is very high.

That means as the load increases there is a possibility that the throughput will drop towards 0. ALOHA works well only when the traffic is very small and the network is very lightly loaded, with utilization less than about 18%.
(Refer Slide Time: 30:55)

However, by using slotted ALOHA the performance improves: here the throughput is S = G × e^(-G), and the maximum throughput is obtained whenever G is equal to 1. So whenever G = 1 the maximum throughput that you can get is S = 1/e = 0.368 (in the previous case, for pure ALOHA, the maximum occurred at G = 0.5). So we find that the maximum throughput you can achieve in this case is roughly about 36.8%, which is double that of pure ALOHA. So by using slotted ALOHA the performance is doubled compared to pure ALOHA.
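The two throughput expressions just quoted, S = G·e^(-2G) for pure ALOHA and S = G·e^(-G) for slotted ALOHA, are easy to check numerically; the small Python sketch below evaluates them and confirms the maxima of about 0.184 at G = 0.5 and about 0.368 at G = 1.

import math

def pure_aloha(G: float) -> float:
    """Throughput of pure ALOHA: S = G * e^(-2G)."""
    return G * math.exp(-2 * G)

def slotted_aloha(G: float) -> float:
    """Throughput of slotted ALOHA: S = G * e^(-G)."""
    return G * math.exp(-G)

print(round(pure_aloha(0.5), 3))     # 0.184 -> maximum of pure ALOHA
print(round(slotted_aloha(1.0), 3))  # 0.368 -> maximum of slotted ALOHA

# Scan the offered load to verify where the pure ALOHA maximum occurs.
best = max((pure_aloha(g / 100), g / 100) for g in range(1, 300))
print(best)                          # approximately (0.184, 0.5)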

(Refer Slide Time: 32:10)

Now let us look at the limitation of this technique.


(Refer Slide Time: 32:19)

When a station sends a packet, the others come to know about it within a fraction of the packet transmission time, because usually the packet transmission time is much longer than the propagation time when the stations are located close to each other. But in pure ALOHA or slotted ALOHA the medium is not checked before sending a packet: a station may start sending even when another packet transmission is already going on. That is what is overcome in CSMA.

Based on this observation, that the other stations come to know about a transmission within a fraction of the packet transmission time, why not monitor the medium? This led to the development of the Carrier Sense Multiple Access protocol, in which stations listen to the medium before transmitting; it is based on Listen Before Talking (LBT). That means you first check the medium, and only if it is free do you send, so collisions are reduced. It has got two varieties, nonpersistent and persistent. Let us see what happens in nonpersistent.
what happens in non persistent.
(Refer Slide Time: 34:20)

In nonpersistent CSMA each station senses the carrier, and if the medium is busy it waits for a random amount of time before sensing again; if it is not busy the packet is sent. On the other hand, in the persistent protocol, if the medium is busy the station does not wait for a random amount of time but keeps on sensing, and whenever the medium becomes free it sends the packet, possibly with some probability. These are the two basic variations. Let us see how exactly it happens.

Thus we have seen that in nonpersistent CSMA, if the medium is idle the packet is transmitted; if the medium is busy, the station waits a random period and then re-senses the medium once again. After re-sensing, if it is busy it again waits for a random amount of time, and if it is not busy the packet is transmitted. In the persistent case there are two varieties. In 1-persistent CSMA, if the medium is idle the station transmits; if the medium is busy it continues to listen until the channel is sensed idle, then transmits immediately, so it does not transmit with some probability less than one, rather it transmits with probability one.

On the other hand, in p-persistent CSMA, if the medium is idle the station transmits with a probability p; so it may transmit or it may not transmit, and that is the advantage of p-persistent. These are the three basic techniques used here; a small sketch of the three decision rules is given just below. After that, let us see what the vulnerable period is, because the vulnerable period plays a very important role so far as the throughput is concerned.
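The three decision rules can be summarized as a small sketch. The Channel class below is only a toy stand-in (it pretends the medium is busy for a few sensing attempts and then idle) so that the three persistence strategies can be shown as runnable Python; it is not part of the lecture material.

import random

class Channel:
    """Toy channel model: reports busy for a few sensing attempts, then idle."""
    def __init__(self, busy_polls=3):
        self.busy_polls = busy_polls
    def is_idle(self):
        self.busy_polls -= 1
        return self.busy_polls < 0
    def transmit(self):
        print("frame transmitted")
    def wait_random_time(self):
        print("busy: backing off for a random time")
    def wait_one_slot(self):
        print("idle but deferring one slot")

def nonpersistent(ch):
    # Sense; if busy, wait a random time before sensing again (do not keep listening).
    while not ch.is_idle():
        ch.wait_random_time()
    ch.transmit()

def one_persistent(ch):
    # Keep sensing continuously; transmit the moment the channel goes idle.
    while not ch.is_idle():
        pass
    ch.transmit()

def p_persistent(ch, p=0.1):
    # When idle, transmit with probability p; otherwise defer one slot and retry.
    while True:
        while not ch.is_idle():
            pass
        if random.random() < p:
            ch.transmit()
            return
        ch.wait_one_slot()

nonpersistent(Channel())      # waits a few random periods, then transmits
one_persistent(Channel())     # senses continuously, then transmits
p_persistent(Channel(), 0.5)  # may defer a few slots before transmitting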

In CSMA, as you can see, a packet propagates from one end of the network to the other, and if no collision has taken place by the time it reaches the other end, then within one propagation time the station captures the medium. In other words, the vulnerable period is the propagation time, and we know that the propagation time is usually small compared to the transmission time. That is why the vulnerable period is relatively small in the CSMA technique.
(Refer Slide Time: 36:16)

However, the throughput depends on a parameter known as 'a', where a = propagation time / transmission time of a packet. This ratio plays a very important role and decides the throughput. We have not given any analytical expression for the throughput here, but the figure shows how the throughput varies as the value of 'a' changes. Whenever 'a' is 0.5, that is, the propagation time is about half of the transmission time, the maximum throughput is about 0.36, which is very close to slotted ALOHA.

On the other hand, as the value of 'a' becomes smaller and smaller the throughput increases. That is why this parameter plays a very important role: it is necessary to have a small propagation time and a relatively large transmission time to get a high throughput with the CSMA protocol.
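To get a feel for this parameter, the sketch below computes a = propagation time / transmission time for an assumed 1 km bus with a signal speed of 2 x 10^8 m/s and 1000 bit frames at two data rates; the numbers are purely illustrative and not taken from the lecture.

def csma_a(length_m: float, prop_speed: float, frame_bits: int, rate_bps: float) -> float:
    """a = propagation time / transmission time for a CSMA channel."""
    t_prop = length_m / prop_speed      # seconds for a signal to cross the medium
    t_trans = frame_bits / rate_bps     # seconds to clock out one frame
    return t_prop / t_trans

# 1 km cable, signal speed 2e8 m/s, 1000-bit frames (illustrative assumptions)
print(csma_a(1_000, 2e8, 1000, 10e6))  # at 10 Mbps -> a = 0.05 (CSMA works well)
print(csma_a(1_000, 2e8, 1000, 1e9))   # at 1 Gbps  -> a = 5.0  (a grows with the data rate)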
(Refer Slide Time: 37:58)

Now we find that this CSMA protocol has a very serious limitation. The limitation comes from the following fact. Suppose a particular station starts transmitting a packet; say this is the network, this is one station A, and here at the other end we have another station B, and this is the maximum propagation time across the medium (Refer Slide Time: 38:39). Now B comes to know about the transmission within this propagation time, but the packet transmission time can be quite long.

(Refer Slide Time: 39:52)


So although a particular station has started transmitting, and the station at the other end, once it receives that signal, can easily find out that the medium is busy, the problem is this: if the second station had already started transmitting a packet of its own just before the first packet reached it, then from that point in time both packets overlap. Even after detecting the collision this station does not stop transmission, and neither does the first station, so both keep sending for the rest of their long packets. As a result all that time is wasted. This wastage of time is minimized in CSMA/CD, Carrier Sense Multiple Access with Collision Detection.

So, in Carrier Sense Multiple Access with Collision Detection the station listens to the medium while transmitting: not only does it listen to the medium before transmission, as you do in carrier sense multiple access, it does something additional, namely it listens while transmitting; that is why it is called Listen While Talking (LWT). Here also there are three cases, nonpersistent, 1-persistent and p-persistent, and as you know, if the channel is idle the packet is transmitted in the nonpersistent and 1-persistent cases. For p-persistent, the packet is sent with probability p or delayed by the end-to-end propagation delay with probability 1 - p.

(Refer Slide Time: 40:58)

For p-persistent, even when the channel is idle the packet may or may not be sent. On the other hand, if the channel is busy: in the nonpersistent case the packet is backed off and the algorithm is repeated; in the 1-persistent case the station defers transmission until the channel is sensed idle and then transmits immediately; and for p-persistent CSMA/CD the station defers until the channel is idle and then follows the channel-idle procedure.
(Refer Slide Time: 41:30)

We have to understand the channel-idle procedure to understand the CSMA/CD protocol. But the basic idea is this: whenever the channel is found to be idle, packet transmission is started and the station keeps on monitoring the medium; whenever a collision is detected, a jamming signal is sent to inform all the stations that a collision has occurred. Then all the colliding stations back off and wait random amounts of time before transmitting again. That is the basic idea; let us see how it really happens. But before that, let us see what the minimum requirement is for collision detection in CSMA/CD.

(Refer Slide Time: 42:33)


Suppose station A starts transmission at time t = 0 and the packet moves towards B, which is at the other end, and just before this packet reaches B, B itself starts transmitting another packet, which travels in the opposite direction. Whenever A's packet reaches B, B detects the collision (Refer Slide Time: 42:45), and whenever B's packet reaches the other end, A detects the collision. So we find that for detection of the collision by both ends, a period of 2 × t-propagation, twice the propagation time, is required.

(Refer Slide Time: 42:44)

What this means is that the transmission time of the packet should be longer than twice the propagation time; that is the requirement for detection of collision.
(Refer Slide Time: 43:20)

That means a collision is detected when a particular station is transmitting a packet and it receives some other signal, some other carrier, at the same time. To guarantee that, the requirement is that the transmission time should be greater than or equal to 2 × the propagation time. If this condition is satisfied, all the stations will be able to detect a collision. If the frame is smaller than that, the packets may collide in the middle: a short packet may be completely transmitted before it reaches the other end, the other station may then start sending its own packet, the two packets will cross each other in the middle, and the collision will be noticed only by stations near the middle and not by the senders. To ensure that this does not happen, this is the requirement.
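The condition (transmission time >= 2 x propagation time) translates directly into a minimum frame length for a given data rate and network span. The sketch below computes it; the 10 Mbps and 2500 m figures are illustrative assumptions only.

def min_frame_bits(rate_bps: float, length_m: float, prop_speed: float = 2e8) -> float:
    """Smallest frame (in bits) that guarantees collision detection in CSMA/CD.

    Requirement: frame transmission time >= 2 * one-way propagation time.
    """
    t_prop = length_m / prop_speed
    return rate_bps * 2 * t_prop

# 10 Mbps network spanning 2500 m (illustrative values)
print(min_frame_bits(10e6, 2500))   # 250.0 bits -> any shorter frame would have to be padded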
(Refer Slide Time: 44:38)

Now let me come back to the back off time. This is the binary (exponential) back off algorithm that is used in the CSMA/CD protocol.

(Refer Slide Time: 45:00)

Here we see that whenever a packet is ready, the station checks whether it is deferring or not, that is, whether it is waiting for some time or not. If it is not deferring it starts transmission, and, as I mentioned, while transmitting it keeps on monitoring the medium to detect a collision. If no collision is detected the transmission finishes and the transmission is successful. On the other hand, if a collision is detected before the transmission is finished, the station sends a jamming signal. After sending the jamming signal it has to decide how long to back off. For that purpose it counts how many attempts it has already made, increments the attempt count, and checks whether the number of attempts has exceeded 15, or whatever maximum number is allowed. If the maximum has not been exceeded, it computes the back off and waits for the back off time.

Let us see how the back off time is calculated. A number r is chosen as follows: an exponent k is taken as the smaller of n and 10, where n is the number of unsuccessful attempts, and r is chosen at random between 0 and 2 to the power k. When it is the first unsuccessful attempt, n = 1, so the station chooses among 0, 1 and 2; this value is multiplied by a time slot Δt, and that product is the back off time.

After the second collision, when n = 2, k = 2 and the value is chosen among 0, 1, 2, 3 and 4, so the station may send immediately or wait up to 4Δt. In this way the range keeps on increasing, and the maximum wait can be 2 to the power 10, that is, 1024 Δt, which is quite high. That means the range of the random back off increases as the number of attempts increases. Whenever the number of attempts exceeds fifteen, the packet is discarded, because excessive collisions mean that the network is heavily loaded; in such a situation the retransmission is not performed. So you find that in the CSMA/CD protocol, whenever the number of collisions exceeds 15 the packet will not be transmitted.
So we cannot guarantee that packets will always be delivered to the destination.

However, whenever the traffic is light, that is, the load is small, the number of collisions will be small and the packet will be delivered.
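The back off computation described above can be captured in a few lines. The sketch below follows the lecture's description: the exponent is the number of unsuccessful attempts capped at 10, a random multiplier between 0 and 2^k is chosen, the wait is that multiple of a slot time Δt, and the frame is dropped once the attempts exceed 15. The 51.2 µs slot time is an assumption for illustration (and standard Ethernet actually draws r from 0 to 2^k − 1 and allows 16 attempts).

import random

SLOT_TIME = 51.2e-6      # delta t; 51.2 microseconds is assumed here for illustration
MAX_ATTEMPTS = 15        # the lecture drops the frame when attempts exceed 15

def backoff_time(n_attempts: int) -> float:
    """Binary exponential back off delay after the n-th unsuccessful attempt."""
    if n_attempts > MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions: frame discarded")
    k = min(n_attempts, 10)             # exponent is capped at 10
    r = random.randint(0, 2 ** k)       # lecture: r is chosen between 0 and 2^k
    return r * SLOT_TIME                # wait r slot times before retransmitting

for n in (1, 2, 10):
    print(n, backoff_time(n))           # the delay range widens as collisions repeat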

(Refer Slide Time: 48:38)


In other words, the CSMA/CD protocol works fine whenever the load is light. The CSMA/CD protocol has got three distinct states. The first one is the contention period: after a successful transmission there will be some contention period, during which all the stations try to contend to get hold of the medium; we have already seen the binary back off algorithm which is used to resolve the contention. After the contention is resolved, a particular station gets hold of the medium and performs its transmission.

However, if the network is very lightly loaded there may be some idle period, and after the idle period again there can be a contention period, then packet transmission, then contention period, then packet transmission or idle period. So we find that the CSMA/CD protocol is always in one of three states: either a packet is being transmitted through the medium, or the contention mechanism using the binary back off algorithm is going on, with all the stations trying to get hold of the medium until ultimately one of them succeeds and sends its packet, or the medium is idle. This is how CSMA/CD works.

(Refer Slide Time: 50:00)


Now let us look at the performance of the various protocols, comparing the protocols that we have discussed so far. Here we see that in pure ALOHA the maximum throughput we can achieve is only about 18%, and that happens whenever the value of G is 0.5. In slotted ALOHA the performance improves to about 37%, which occurs when G = 1. For 1-persistent CSMA we find that the throughput is roughly close to 60%, and for nonpersistent CSMA the throughput is still higher. Among nonpersistent CSMA and nonpersistent CSMA/CD, we find that nonpersistent CSMA/CD gives the maximum throughput, which is very close to 1.

However, whenever the load becomes very high these schemes become unstable, the throughput reduces, and a thrashing situation is reached for both nonpersistent CSMA and nonpersistent CSMA/CD. All these curves have been plotted for a particular value of 'a', roughly equal to 0.01; that means the packet transmission time is roughly a hundred times the propagation time.

(Refer Slide Time: 52:20)


The propagation time is assumed to be very small compared to the transmission time; if the propagation time is not very small then the throughput will not be very high. In this lecture we have discussed various techniques based on contention, that is, random techniques. There is another protocol, known as CSMA/CA, which is also based on contention and is used in wireless applications; I shall discuss it in the next lecture. But let me summarize the techniques that we discussed in this lecture.

Here we have seen that ALOHA, which is the simplest contention based technique, works fine whenever the traffic load is very small. Then we have slotted ALOHA, which is an improvement over pure ALOHA; its performance is a little better, about double that of pure ALOHA. However, in this case the stations have to be synchronized to the slots, and as a result some synchronization mechanism has to be devised; the central controller may keep on sending a signal to all the stations so that they transmit in a synchronized manner.

On the other hand, in CSMA, Carrier Sense Multiple Access, which is essentially listen before talk, by monitoring the medium the performance can be significantly improved over slotted ALOHA or pure ALOHA; and as you can see, nonpersistent CSMA performs better than 1-persistent CSMA, the reason being that 1-persistent CSMA is prone to more collisions because it is a greedy approach compared to nonpersistent CSMA.

By using Carrier Sense Multiple Access with Collision Detection the throughput can be improved further and, as you can see, it may reach very close to 1. However, whenever the load is very high these schemes become unstable, the throughput decreases and a thrashing situation may be reached. Now it is time to give you the review questions based on this lecture.
based on this lecture.

(Refer Slide Time: 55:30)


1) In what environment is it necessary to have Medium Access Control for data communication?
2) How is performance improved in slotted ALOHA compared to pure ALOHA?
3) Distinguish between persistent and nonpersistent CSMA schemes.
4) How does the efficiency of CSMA based schemes depend on the parameter 'a', the ratio of propagation time to transmission time?
5) How is performance improved in CSMA/CD over the CSMA technique?

These are the five questions based on today's lecture, and these questions will be answered in the next lecture. Here are the answers to the questions of lecture 24.

(Refer Slide Time: 55:57)


1) What are the benefits of cell switching used in ATM?

In the last lecture we had discussed the ATM network and we have seen that in ATM
network cell switching is being performed.
The key features of ATM are:

• Connection oriented service using virtual circuits


• Data transfer using 53 byte cells

So it does cell switching. By using cell switching, the cells reach the destination in the form of a continuous stream; the delay is small because of the high speed and the small size of the cells. So cell switching has the benefit of smaller delay in the ATM network, and switching and multiplexing by hardware at the cell level also makes the implementation inexpensive and fast. ATM networks, as you know, are very fast: the usual minimum speed is about 155 Mbps and the other speeds are still higher, and the physical layer that is typically used is SONET, the Synchronous Optical Network.

(Refer Slide Time: 57:08)


2) What is the relationship between TPs, VPs and VCs?

The connection between two endpoints is accomplished with the help of Transmission Paths (TPs), Virtual Paths (VPs) and Virtual Circuits (VCs), organized in a hierarchical manner. As we know, the Transmission Path is the physical transmission medium, usually optical fiber in the case of ATM, used to link two points. Each transmission path is logically divided into several virtual paths, and each virtual path in turn carries several virtual circuits. So it is a hierarchy: each transmission path will have several Virtual Paths (this is your VP, this is your TP, Refer Slide Time: 58:06) and each Virtual Path will have several Virtual Circuits, and this is your VC. This is the relationship between TP, VP and VC.

(Refer Slide Time: 58:14)


3) How is an ATM virtual connection identified?

The virtual connection is identified by a pair of numbers, the VPI and the VCI, that is, the virtual path identifier and the virtual circuit identifier. The VPI is a 12-bit number and the VCI is a 16-bit number, and with the help of this pair the connection is identified.
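Since the connection is identified by the (VPI, VCI) pair carried in the 5 byte cell header, a small bit-manipulation sketch makes the 12-bit / 16-bit split concrete. The layout assumed below is the NNI header format (12-bit VPI, 16-bit VCI, 3-bit PT, 1-bit CLP, 8-bit HEC); treat it as an illustration rather than a complete header parser.

def parse_nni_header(header: bytes) -> tuple[int, int]:
    """Extract (VPI, VCI) from a 5-byte ATM NNI cell header.

    Assumed NNI layout: 12-bit VPI, 16-bit VCI, 3-bit PT, 1-bit CLP, 8-bit HEC.
    """
    if len(header) != 5:
        raise ValueError("ATM header is exactly 5 bytes")
    word = int.from_bytes(header, "big")   # the header as a 40-bit integer
    vpi = (word >> 28) & 0xFFF             # top 12 bits
    vci = (word >> 12) & 0xFFFF            # next 16 bits
    return vpi, vci

# A hypothetical header carrying VPI = 5, VCI = 1000 (flags and HEC zeroed)
hdr = ((5 << 28) | (1000 << 12)).to_bytes(5, "big")
print(parse_nni_header(hdr))               # (5, 1000)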

(Refer Slide Time: 58:47)

4) How are cell boundaries identified in ATM?

The nodes make use of the fixed cell size and the HEC, the header error control field, to determine the cell boundaries implicitly.
(Refer Slide Time: 59:15)

5) How is congestion control performed in ATM?

• Exercising admission control
• Selecting the route of admitted connections
• Allocating bandwidth and buffer to each connection
• Selectively dropping low priority cells
• Asking sources to limit the cell stream rate

These were the five questions given in lecture 24, and with this we come to the end of today's lecture. In the next lecture we shall discuss the CSMA/CA protocol and also the controlled access techniques. Thank you.
Data Communication
Prof. A. Pal
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture # 26
Medium Access Control- II

Hello and welcome to today’s lecture on Medium Access Control techniques. Today is
the second lecture on this topic. In the first lecture we discussed about contention based
protocols which are essentially random in nature and we have seen that there are
collisions and because of collisions it is necessary to do retransmissions after random
amount of time. And as you know whenever collision occurs it leads to wastage of
bandwidth and whenever the traffic load increases there are many collisions leading to
loss of throughput and ultimately it may lead to thrashing situation.

In this lecture we shall consider collision-free protocols, where collision is completely avoided. Here is the outline of today's lecture.

(Refer Slide Time: 02:20)

I shall explain what we mean by collision-free protocols; then we shall discuss one important protocol which is primarily used in wireless communication, Carrier Sense Multiple Access with Collision Avoidance, in which collision is avoided. Then there are two more techniques which are essentially controlled access techniques, also known as round-robin medium access control techniques, namely polling and token passing. Then we shall discuss reservation based medium access control techniques, which can be broadly divided into two types, distributed and centralized; we shall discuss several examples of both types. And on completion of this lecture the students will be able to understand the need for collision-free medium access control techniques.

(Refer Slide Time: 03:06)

They will be able to explain the operation of the CSMA/CA protocol, which we shall discuss, and they will be able to explain round-robin based medium access control techniques such as polling and token passing. Finally they will be able to explain reservation based techniques, which are primarily used in satellite networks; there are both distributed and centralized protocols, so we shall discuss both of them, and the students will be able to understand and explain both types.

Here is the overview of the total picture. In the last class we have discussed the random
access protocols like ALOHA, CSMA, CSMA/CD, pure ALOHA and slotted ALOHA.
As we know these are all contention based where collision is allowed to take place
particularly even the traffic increases and collision occurs.
(Refer Slide Time: 03:52)

In this lecture we shall consider CSMA/CA, and then round-robin and reservation based protocols. The question arises: why do we need CSMA/CA, Carrier Sense Multiple Access with Collision Avoidance?

(Refer Slide Time: 04:15)

The need has arisen because of the proliferation of portable computers, which require radio signals for communication. Portable computers can be connected by wire, but to maintain portability they should communicate using radio signals. And as you know, all radio transmitters have some fixed range: as the distance d increases, the signal intensity decreases roughly as 1/d². When a receiver is within the range of two active transmitters, the resulting signal will generally be garbled. That means whenever a particular station is in between two transmitters it will receive the signal from both, the combined signal will be garbled and it will be useless; that is why this type of situation should not be allowed to happen.

And as you know, CSMA/CD is unsuitable in such an environment. The reason is that in a wireless environment it is very difficult to detect collisions, because of attenuation and fading; at the radio level collision detection is very hard, and that is why the CSMA/CD protocol cannot be used here. Moreover, there are two problems arising from the special situation I have mentioned, known as the hidden station and exposed station problems, which we shall discuss in detail. We have to develop techniques which are suitable for this kind of environment. Let us first discuss the hidden station problem.

(Refer Slide Time: 06:25)

Here, as you can see, station A is transmitting some signal, and this circle shows the range over which the signal is received properly; beyond that the signal is very weak, the signal to noise ratio will be very small and the signal cannot be detected properly. That is why this circle is the range of transmitter A, station A (Refer Slide Time: 6:48).

On the other hand, C is also transmitting simultaneously, and its range is shown with the help of the other circle. So the two circles show the ranges of the two transmitters that are transmitting simultaneously.

As you can see, B is in between. As a consequence B receives signals from both A and C, so the signal received by B will be garbled and B will not be able to get the packet or message sent either by A or by C. Unfortunately the transmission sent by A does not reach C, and the transmission sent by C does not reach A, because of the longer distance; as a result they do not know that a collision has taken place and that B has not been able to get the signal. That means in such a case both signals reach B and a collision occurs, but A and C are not able to detect the collision, so A and C will not know. This is the hidden station problem, as each transmitter is hidden from the other. The other problem is known as the exposed station problem.

(Refer Slide Time: 08:52)

Here, as you can see, station B wants to send a frame to A, and this transmission is also heard by C; that means both C and A are within the range of B. As a result, whenever B transmits, both A and C hear it, and as a consequence C decides that it cannot transmit to D, because it assumes that the signal sent by B will also reach D and a collision will occur. This is known as the exposed station problem.

Although in practice, when B is transmitting to A, C could simultaneously transmit to D, it is possible; but with a normal technique like CSMA or CSMA/CD this cannot be done. What I intend to say is that in this situation B can transmit to A and C can transmit to D simultaneously, but our protocol will not allow it unless we develop a suitable protocol. That can be avoided by using a special protocol known as Carrier Sense Multiple Access with Collision Avoidance, in which collision is completely avoided by a suitable protocol. Let us see how it is done.
(Refer Slide Time: 09:56)

Here the sender sends a short frame called Request To Send (RTS), which is about 20 bytes, to the destination. The request to send also contains the length of the data frame; that means the request to send packet contains information about how long the data frame will be. The destination station responds with a short frame of 14 bytes known as the Clear To Send (CTS) frame, and after receiving the clear to send the sender starts sending the data frame.

All the stations which receive this clear to send will defer transmission. Of course, there is a possibility that multiple request to send / clear to send frames are present at the same time, and in such cases these short frames may suffer a collision; in such a case a back off technique has to be used so that the clear to send message or packet is eventually received by the sender. Let us see how exactly it happens.
(Refer Slide Time: 10:57)

So here A sends a Request to send frame, which is of 20 bytes, and when it reaches B, B sends back a 14 byte Clear to send frame.

(Refer Slide Time: 11:06)

Now, about hearing this clear to send: as you can see, A is sending the request to send and B sends the clear to send, and that clear to send is heard by C as well as by A. As a result C will defer transmission. Of course, if both of them send requests simultaneously there will be some kind of collision, but the clear to send will be addressed either to A or to C; one of them gets it and the other one defers transmission. As a consequence, since one of them defers transmission, the hidden station problem is overcome. After receiving the clear to send, the data is transmitted by either A or C, depending on who received the clear to send, and after the data is sent the acknowledgment comes from the receiver.

(Refer Slide Time: 11:45)

(Refer Slide Time: 12:03)

Now, the exposed station problem will also be overcome here. For example, whenever B sends a packet to A, the request to send packet goes from B to A and A responds with a clear to send. Let us go through it once again: here it is B, and B is sending a Request to send to A, and A responds with a Clear to send.
(Refer Slide Time: 12:40)

Now this clear to send, which A sends towards B, is obviously not heard by C, since C is outside A's range. So if C now sends a request to send to D, the clear to send coming back from D will not be heard by A or by B; it will be received only by C. As a consequence B is able to send to A and C is able to send to D at the same time. So we see that parallel data transmission, B to A and C to D, is possible by using this protocol. This is known as the four way handshaking protocol: request to send, clear to send, data and acknowledgement. This is known as Carrier Sense Multiple Access with Collision Avoidance, and with it the data packet does not suffer collision in any situation.
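The sequence of the four-way handshake can be sketched as a toy exchange of control frames. The Medium class and the frame sizes below are assumptions made only for illustration; the sketch simply shows who hears which frame in the hidden-station layout (B hears both A and C, while A and C cannot hear each other) and is not a real wireless LAN implementation.

class Medium:
    """Toy broadcast medium: prints which frame is heard by which neighbour."""
    def __init__(self, neighbours):
        self.neighbours = neighbours          # maps a station to the stations in its range
    def broadcast(self, sender, frame):
        for station in self.neighbours.get(sender, ()):
            print(f"{station} hears {frame['type']} from {sender}")

def csma_ca_exchange(medium, sender, receiver, n_data_bytes):
    """Illustrative RTS / CTS / DATA / ACK sequence of CSMA/CA (sketch only)."""
    medium.broadcast(sender,   {"type": "RTS",  "duration": n_data_bytes})  # ~20 byte frame
    medium.broadcast(receiver, {"type": "CTS",  "duration": n_data_bytes})  # ~14 byte frame; overhearers defer
    medium.broadcast(sender,   {"type": "DATA", "bytes": n_data_bytes})
    medium.broadcast(receiver, {"type": "ACK"})

# Hidden-station layout from the lecture: B hears A and C; A and C cannot hear each other.
m = Medium({"A": {"B"}, "B": {"A", "C"}, "C": {"B"}})
csma_ca_exchange(m, "A", "B", 1500)   # C hears B's CTS and therefore defers its own transmission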

We have seen how collision is avoided in wireless communication. Particularly, this protocol is used in wireless LANs, as we will discuss in detail later on. Now we shall go back to the round-robin techniques. One important approach used in round-robin techniques is polling. Polling is a very common technique used in many practical situations.
(Refer Slide Time: 14:10)

For example, in classrooms we do the roll call by polling, or we ask questions to the students by polling: the teacher calls the students one by one by their roll numbers, or questions the students one by one and gets the answers. A similar thing is done here. Here the stations take turns in accessing the medium. In such a scheme one station is termed the primary, which can be considered as the teacher or the chairman in the above analogies, and the other stations are called secondaries. There are two distinct modes.

Whenever the primary wants to send data to one of the secondaries, the select mode is used: first the primary has to select a particular secondary by sending its address, and when the secondary responds with an acknowledgement, the data transfer is performed. On the other hand, in the poll mode, data goes from a secondary to the primary: the primary sends a polling signal to one of the secondaries, the secondary responds, and then the data transfer takes place. Let us see how exactly it happens. First let us consider the poll mode. In this case the polling signal goes to all the secondaries.
(Refer Slide Time: 15:35)

(Refer Slide Time: 15:53)

If the secondary whose address was given has got no data to send, then no data comes back and another polling signal is sent by the primary; if the polled secondary has some data to send, it sends the data, and then the primary sends an acknowledgment to the secondary.
(Refer Slide Time: 16:02)



(Refer Slide Time: 16:15)

This is how in the poll mode one by one the polling is done by the primary and it gets
data from the secondary in round-robin manner.

(Refer Slide Time: 16:30)


(Refer Slide Time: 16:36)

Next is the select mode, in which data goes from the primary to a secondary. The selection frame goes with the address of that particular station; the selected secondary responds with an acknowledgment packet, then the data is sent by the primary to the secondary, and then the secondary sends an acknowledgment.

(Refer Slide Time: 16:39)


(Refer Slide Time: 16:52)

This is how polling is performed whenever a number of stations are connected to a bus which is shared by all the stations.

Polling can be done in wireless communication also. There can be a central controller which uses one frequency band to send outbound signals. You can see (Refer Slide Time: 17:36) this controller is sending outbound signals, which is some kind of broadcasting received by all the other stations, and the other stations use a different frequency band to send the inbound messages; this inbound band is essentially shared by all the stations.

(Refer Slide Time: 17:52)


In this kind of technique the bandwidth is divided into two parts, one outbound and one inbound; this is the outbound band and this is the inbound band, and the technique is called the Frequency Division Duplex (FDD) approach. The centralized controller sends messages to all the other stations using the outbound frequency band, and all the other stations send messages using the inbound band, with polling performed one after the other: the stations are asked in turn to send their data. Since it is a broadcast situation all the stations will hear the poll, but only the polled secondary, although the band is shared, sends its data to the central controller. So all the data transfer takes place through this central controller.

Polling can be done without using the central controller also. Here is a situation where
you have got a number of stations and there is no central controller. Here all stations
receive signals from all other stations. Here if this station transmits all the other stations
will be able to receive it because it is some kind of broadcasting.

Whenever one of them transmits all the other stations will receive it, and the stations will
develop a polling order list using some protocol. The details of the protocol are not
mentioned here. But in this distributed fashion all the stations will use some kind of
protocol by which one station will send data to the others. Of course, only the destination
station, based on the address, will accept the data. So in this way, one after the other,
each station will transmit; although it will be broadcasting, only the station for which the
packet is meant will accept the data and the others will ignore it. This is how polling can
be done without using the centralized controller; it can be done in a distributed manner.
The next diagram shows in detail how the polling messages are exchanged to send data
one by one with the help of a central controller.

(Refer Slide Time: 20:30)

Here is the primary station and these are all the secondary stations, with time along this
axis. First it is shown that there is some time required for the polling message to
propagate from the central controller to station 1. Station 1 will receive the message and
then it will respond, and this takes some time known as the walk time: the time from
when the poll is sent until the polled station begins transmission. So the duration from
this point to this point is the walk time for polling station 1. Then the data transmission
from station 1 takes place, then it will poll station 2 in the same manner, which will
require some walk time for polling station 2, and then station 2 will transmit data.

In this way each station will transmit one after the other and again station 1 will get its
turn. So all the stations will get their turn for sending a message to the central controller
and of course the central controller can also send data in this manner. The total walk time
can be considered as the overhead in the polling process; that means it is essentially the
overhead for transmission, and you can calculate the throughput based on the total walk
time required and the total time spent for sending the messages. Here the overhead can
be made very small because the propagation time is usually small whenever the stations
are located nearby (not satellite communication or anything of that kind), so the walk
time can be quite small and the throughput can be quite high.

Of course there will be some transmission time involved, that is, the propagation and
transmission time of the packet going from the central controller to the station and also
the transmission time of the acknowledgement or no-acknowledgment packets going from
the stations to the central controller. This time is known as the walk time, and the total
walk time is the total overhead. In this way you can calculate the total overhead and
find out the throughput.
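
To make this concrete, here is a minimal Python sketch of the calculation; the station count, per-poll walk time and per-station data time below are assumed figures, not values from the lecture.

    def polling_throughput(n_stations, walk_time, data_time):
        # Fraction of one complete polling cycle spent carrying useful data.
        cycle = n_stations * (walk_time + data_time)
        useful = n_stations * data_time
        return useful / cycle

    # Example: 10 stations, 1 ms walk time per poll, 20 ms of data per station
    print(polling_throughput(10, walk_time=1e-3, data_time=20e-3))  # about 0.952

As the walk time becomes small compared to the data time, the throughput approaches 1, which is the point made above.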

Now we shall discuss the second approach used for round-robin technique known as
token passing. In a token passing approach all stations are logically connected in the form
of a ring.

(Refer Slide Time: 23:14)


Note this particular term 'logically'. Here we have shown stations that are not
only logically but physically connected in the form of a ring. Here, for example, this
particular station A is connected to its neighbors D and B, B is connected to A and C, and
C is connected to B and D, so in this way all are connected in the form of a ring. Each
interface is connected to its neighbor by using point-to-point links. As I
mentioned, this point-to-point link can be either twisted pair or, if the distance is very
long, it can be optical fiber. So either twisted pair or optical fiber is used for the
point-to-point links between the stations.

Now this ring is essentially the shared medium used for communication and that control
of the access to the medium is performed using a token.

What is a token?

A token is essentially a special bit sequence or it can be considered as a small packet.


This packet circulates around the ring and whoever captures the token gets the right to
transmit. So the token is circulated in a round-robin manner and the holder of the token
has the right to transmit. Here (Refer Slide Time: 24:46) a token is somehow introduced.
Suppose A is holding the token: it will send the data, the data transmission will take
place, the frame will come back to A, then A will release the token and it will go to B.
Whenever B captures the token it will then send its data, the frame will go around the
ring and come back to it, B will take out the data frame and then reintroduce the token,
which will go to C. So in this way the token circulates around the ring and whenever a
particular station has no data to send it simply passes on the free token to the next
station. This is explained with the help of this flow chart.

(Refer Slide Time: 25:35)


We have also seen that there is some kind of network interface. This is a station and this
is the network interface (Refer Slide Time: 25:58). All stations wait for the token and
whenever a token passes by, the network interface captures it and checks whether the
station has got data to send; if the answer is yes it sends the frame, and it is possible to
send multiple frames, not just one. If the time allocated is long, multiple frames
can be sent before releasing the token.

So it keeps on sending data until the time is over and after the time is over it releases the
token and it stops. This is what goes on in a particular station.

(Refer Slide Time: 26:38)

So in this way each station waits for the token, captures it, sends the data and after all the
data is transmitted or the time is over it comes out of it, it releases the token and then it
goes to the next station. This is how the data communication takes place in a round-robin
manner in producing token passing protocol. Let us see the performance of token passing
protocol.
(Refer Slide Time: 27:15)

The performance can be explained by using two parameters; one is throughput, another is
delay. Throughput is a measure of the successful traffic, that is, how many packets have
been transmitted. As you know, it gives you a measure of how many packets or frames
have been transmitted per unit time, and delay is a measure of the time between when a
packet is ready and when it is delivered.

As you have seen in the previous diagram a station waits for a token, so what is the
waiting time? The waiting time can be very long if there are a large number of stations in
the ring. On the other hand, if the number of stations in the ring is small then the waiting
time can be small. This is the measure of the delay: the time between when a packet is
ready and when it is delivered. Let us see on what parameters it depends.

A station starts sending a packet at t = t0 and completes transmission at t = t0 + 1, and
it receives the tail of its own frame at t0 + 1 + a. You can see that at t0 + 1 it completes
transmission, where 'a' is the ratio of the propagation time to the transmission time. In
this case the transmission time has been normalized to 1, so everything is expressed
relative to the transmission time, and 'a' is the ratio of the propagation time to the
transmission time. So this station will receive the tail at t0 + 1 + a.

Now what is the average time required to pass the token to the next station? It is a/N. As
I mentioned, this depends on the propagation time as well as on N, so it depends on the
ratio of 'a' and N, and whenever the number of stations is large there can be a long
delay. The throughput is expressed as S = 1/(1 + a/N) whenever a is less than 1.
And as you know there are two situations. Say A has started sending the frame and it
goes around the ring. If the total propagation time is smaller than the packet transmission
time, the front of the frame will reach back at the transmitter before it completes
transmission. The other situation is when the transmission time is small and the
propagation time is large; in that case a station will finish the transmission of a frame
and only after the transmission is completed will the front of the frame that it has sent
reach the station again. This is what is expressed by the parameter 'a': a less than 1
means that the propagation time is smaller than the transmission time, and in that case
the above expression gives the throughput (Refer Slide Time: 31:00).

On the other hand, whenever the propagation time is very large, S = 1/(a(1 + 1/N)) for a
greater than 1.
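
A small Python sketch of these two throughput expressions is given below; the sample values of 'a' and N are assumed purely for illustration.

    def token_ring_throughput(a, n):
        # a = propagation time / transmission time, n = number of stations
        if a < 1:
            return 1.0 / (1.0 + a / n)          # frame front returns before sending ends
        return 1.0 / (a * (1.0 + 1.0 / n))      # sending ends before the front returns

    print(token_ring_throughput(0.1, 10))   # about 0.99 (short, low-speed ring)
    print(token_ring_throughput(5.0, 10))   # about 0.18 (long or very high-speed ring)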

(Refer Slide Time: 31:15)

This is the situation when either the transmitter is sending very fast or the ring is very
long; in such a case there is a possibility of having multiple tokens and multiple frames
on the ring. Normally, in the first situation, it is not possible to have multiple frames or
multiple tokens in transit, but in this second situation it is possible. We shall discuss this
technique in more detail when we consider the standard LANs.

We shall see that in the normal token ring based on coaxial cable the length of the ring is
small and the speed is low. In such a situation the propagation time is relatively small
compared to the transmission time. On the other hand, when we are transmitting at a very
high speed the transmission time is small and the propagation time is relatively longer,
so in such a case it is possible to have multiple tokens in a single ring and multiple
frames can be sent in parallel. We shall discuss this in more detail later on when we
consider LAN networks based on token passing.
(Refer Slide Time: 33:20)

It is now necessary to compare CSMA/CD, which is one of the popular protocols used in
Local Area Networks, with token passing. If we compare these two we find that the token
ring is the least sensitive to workload and propagation effects. As you increase the load
we have seen that CSMA/CD is affected adversely; we have seen that curve, where
initially the throughput increases and after reaching a peak it starts dipping and falls
down. But in the case of token passing you will see that the throughput increases
essentially linearly with load. That means there is no degradation in throughput as the
load increases, so it is the least sensitive to workload and propagation effects.

However, whenever the number of nodes is very large the delay will be long in case of
token ring. That means the CSMA/CD has the shortest delay under light load conditions.
We have seen that in case of CSMA/CD if the load is light a particular station transmits a
frame and if it does not suffer collision delay is very small. On the other hand, in case of
token ring a particular station has to wait for a long time to get the token. As a
consequence there is a long delay in the token ring. On the other hand, the delay is very
small in CSMA/CD.

However, if there is a heavy load then it will suffer collisions and as a result the delay
can be long. So CSMA/CD has the shortest delay, but it is more sensitive to variations of
load, particularly when the load is heavy. These are the two performance comparisons
between CSMA/CD and the token ring or token passing protocol.

Another important observation based on the protocol: in the case of CSMA/CD you have
seen that there can be multiple collisions, and whenever an unfortunate frame suffers
fifteen collisions it is discarded. So, in the case of CSMA/CD, which is based on a
stochastic or probabilistic technique, there is a possibility that an unfortunate frame is
never transmitted. In other words the delivery of a packet to the destination is not
guaranteed. On the other hand, in the case of a token passing protocol the delivery is
guaranteed. The reason is that even when the load is very large the delay can be long,
but the packet will ultimately be delivered. That's why CSMA/CD is not suitable for
real-time traffic; in real-time traffic we cannot afford to discard a packet or frame.

(Refer Slide Time: 36:05)

So, as far as real-time applications are concerned, the CSMA/CD protocol is good only in
bursty situations when the traffic is light. On the other hand, the token passing protocol
is very suitable when you have got stream traffic. Whenever you have got streams of data
coming at regular intervals from all stations, so that the traffic is not bursty in nature,
the token passing protocol or round-robin technique works better. These are the relative
comparisons between the contention based protocols and the round-robin protocols.

Now we shall come to the reservation protocol which is essentially based on the satellite
networks.
(Refer Slide Time: 36:55)

(Refer Slide Time: 37:40)

Let us look at the unique features of satellite networks. As we know there is a long round-
trip propagation delay, which is about one fourth of a second. Here we have got a satellite
and here are the ground stations; the satellite is located about 36000 km above the ground
and the round-trip delay is about one fourth of a second. This is a relatively very long
time in today's context, and it has to be taken into consideration when we develop a
protocol. Only after the round-trip delay does a station come to know whether a packet
transmission was successful or whether it has suffered a collision. That means if a ground
station, say a transmitter, sends a packet, then as we know the satellite will receive it and
retransmit it using the downlink frequency as a broadcast. So, if multiple stations send
packets on the uplink, which is essentially shared, then there will be some kind of
collision or the signal will be garbled, and the transmitter will come to know about it
only after one fourth of a second, not before that.

As a consequence CSMA/CD protocols are unsuitable in such a situation. The reason is
that whenever a collision is detected by a station, it knows that the collision actually
happened one quarter of a second earlier and not now. As a consequence the value of 'a',
the ratio of the propagation time to the transmission time, is very large, and as we know,
whenever 'a' is large the throughput or efficiency of the CSMA based protocols is very
poor. That's why CSMA/CD or CSMA based protocols cannot be used in situations where
the propagation delay is very long, particularly in satellite communication.

In satellite communication, as we have seen, the propagation time is quite large and as a
consequence the schemes based on collision detection cannot be used. Let us see whether
we can use polling. If we use polling then the polling signal has to go from the central
controller to all the stations. Sending the poll and getting the data and acknowledgment
from a station will take about two round-trip delays, that is, half a second, which is very
large. That means the walk time, of which polling is a part, will be quite large as I
mentioned earlier, and as a consequence polling will be very inefficient.

Similarly, token passing will also not be suitable for satellite networks, because a station
will take at least one quarter of a second to pass the token to the next station. As a
consequence neither the CSMA/CD or contention based protocols nor the round-robin
type protocols can be used in satellite networks. In such a situation a new technique has
evolved, known as reservation based protocols.

(Refer Slide Time: 41:02)


The reservation based protocols can be broadly divided into two types. The first one is
distributed and the second category is centralized. We shall consider several protocols
based on distributed as well as centralized techniques.

First one is R-ALOHA which essentially means reservation ALOHA.

(Refer Slide Time: 41:27)

There are some basic assumptions. The first is that time is divided into a number of slots
and the stations are synchronized to these slots. Secondly, the frame time, that is, the
time occupied by one group of slots (Refer Slide Time: 42:03), should be at least as long
as the round-trip propagation time. In other words, by the time one frame is transmitted
the frame duration should be longer than the propagation time. These are the two
assumptions which are made.

So in reservation ALOHA the frame length must be at least as long as the bit length,
where the bit length is essentially the round-trip propagation time. That means the frame
time should be more than one quarter of a second, which is the propagation time; it is this
propagation time that is known as the bit length, and the frame length should be more
than the bit length.

Now here a station contends for a slot in the next frame. The situation is that the number
of stations K is greater than the number of slots; that means the number of slots is
smaller than the number of stations.

It works in this manner. A particular frame is transmitted and each station observes
whether a particular slot in it is empty or occupied. Whenever a slot is found to be empty,
a station that wants to transmit sends a packet in that particular slot in the next frame to
book it. However, it may or may not suffer a collision.

Let us assume, for simplicity, that there are eight stations and four slots, and let's assume
initially that station 1 is transmitting here, this slot is unused (Refer Slide Time: 44:12),
station 2 is transmitting here and this slot is also unused. This will be received by all the
stations. Now station 3 wants to transmit in this slot, so in the next frame station 1 sends
here, station 3 sends here, station 2 sends here, and let's assume stations 4 and 5 both
decide to send in this slot, so there will be a collision here. So we can see that a collision
will occur in this frame.

The third frame can then happen in this way: station 1 will continue its transmission
here, station 3 may continue, and suppose station 2 has finished its transmission then its
slot will remain unutilized, and this slot again will remain unutilized because of the
collision.

(Refer Slide Time: 45:00)

So we find that there can be three situations: a particular slot may be used, it may remain
unoccupied, or there may be a collision.

Now, if a particular station keeps on sending continuously then the scheme behaves like
TDMA, Time Division Multiple Access, when the station sends long streams of data. This
is the situation for that slot: as you can see, in this slot station A is continuously sending.

On the other hand, if the traffic is bursty and the number of packets sent by different
stations is small, then it will appear like slotted ALOHA. In this particular slot (Refer
Slide Time: 46:06) it is station 2, then unutilized, then station 1, so in this way different
stations transmit alternately in different slots and it may appear like slotted ALOHA.
This is the reservation ALOHA protocol, and here it has been assumed that the number of
slots is smaller than the number of stations.
assumed that the number of slots is smaller than the number of stations.

Let us consider another situation.

(Refer Slide Time: 46:29)

In Binder's scheme the number of stations is smaller than the number of slots. So in this
case the number of stations is smaller, whereas earlier the number of stations was larger
than the number of slots. Since the number of stations is smaller than the number of
slots, each station can be allocated a particular slot. That means some kind of implicit
reservation of slots is provided, because the number of stations is smaller, and the extra
slots are available in which transmission can be reserved using slotted ALOHA.

So let's assume there are only four stations and six slots. This slot is reserved for station
1, this for station 2, these for stations 3 and 4, and (Refer Slide Time: 47:30) these two
are unreserved, they are free. In the first frame station 1 sends here, station 2 sends here,
if station 3 has no data its slot remains unused, and station 4 sends here; now if station 4
has got more data to send it may send here using slotted ALOHA, and station 2 can also
send here.

Now, in the next frame, since this slot was unutilized a particular station may capture it
by sending a frame in it: station 1 can send here, station 2 can send here, and this column,
although it is allocated to station 3, was unutilized since station 3 has no data, so another
station, say station 4, can capture it and send data here. In this way the transmission can
go on. So you see it is superior to reservation ALOHA for stream-dominated traffic.

However, whenever the nature of the traffic is bursty then of course it is not very
efficient. So here you see it is a combination of implicit and explicit reservation: for the
stations' own slots the reservation is implicit, and for the excess slots the reservation is
explicit and has to be done by using slotted ALOHA.

(Refer Slide Time: 49:15)

So this is Binder's scheme and, as I mentioned, it will perform better when the traffic is
stream dominated.

There is another interesting scheme, proposed by Roberts, where explicit reservation is
necessary by sending a request in a minislot which acts as a common queue in each frame.

(Refer Slide Time: 49:35)


So here we see a frame. This frame is divided into a number of slots, and in addition one
of the slots is further divided into a number of minislots. In these minislots the stations
can send requests and reserve slots for transmission. Suppose the number of minislots is
equal to the number of senders: if a station sends a request here then this is booked, if it
sends here then that is booked, and in this way the booking can be done. That means a
successful transmission in a minislot grants a reservation; a minislot is not dedicated
explicitly to a particular station.

If a particular station is successful in transmitting in a particular minislot, suppose in
minislot 1, then it books slot number 1 for itself. This is the minislot and here are the
other slots, let us assume; so if a station is successful in sending in minislot 1, slot 1 is
reserved for that station, and if another station is successful in sending in this minislot,
the corresponding slot is reserved for that station. In this way, by sending reservation
frames in the minislots, the reservation can be done explicitly for the subsequent frames.

Now, for a lengthy stream there can be considerable delay. Why can there be considerable
delay? Because in this case each station has to explicitly book before sending a frame,
and it has to be done each time; for each frame explicit booking is necessary.

So, if a particular message is divided into a large number of packets it can suffer a long
delay, because to send one packet an explicit reservation is necessary and to send the
next packet another explicit reservation is necessary; in this way, for each of these
packets there is a need for an explicit reservation, which leads to considerable delay.
This is the disadvantage of Roberts' scheme.

These are the distributed schemes, which we have discussed in detail, where we have seen
that the stations make their reservations by sending packets using slotted ALOHA.

(Refer Slide Time: 53:15)


The distributed schemes suffer from the disadvantage of a higher processing burden on
each station. That means each station must have the processing capability to send both
data and reservation packets, and as a result a high processing burden is placed on every
station.

Moreover, distributed schemes are vulnerable to loss of synchronization. That's why
centralized schemes can be used. We shall discuss two centralized schemes: one is known
as Fixed Priority-Oriented Demand Assignment (FPODA) and the second one is Packet
Demand Assignment Multiple Access (PDAMA). First, the centralized FPODA, Fixed
Priority-Oriented Demand Assignment.

(Refer Slide Time: 53:25)

It is an extension of Roberts' scheme. It has got six stations and one of the six stations
acts as the controller, since this is a centralized scheme. The stations can send their
requests in minislots; we have assumed that there are six stations, so these are the six
minislots in which requests can be sent. The priority of the packet, that is, whether the
station is interested in sending a normal packet or bulk data, also has to be specified,
and the controller maintains a queue of requests and allocates six variable length slots.

As you can see (Refer Slide Time: 54:11), here the slots are not fixed, so based on the
requests the controller will allocate slots of variable size and all the six stations will be
able to send variable size frames in these slots. This is the centralized FPODA.

PDAMA has got four different types of slots: a leader control slot, a guard slot, a
reservation slot and a data slot. The leader control slot is used by the master station to
communicate acknowledgements.
(Refer Slide Time: 54:48)

So this is used to communicate acknowledgements to all the other stations, and the guard
slot helps the other stations to hear the leader control slot and prepare further
reservations; it can also be used for ranging. The minislots of the reservation slot are
used for reservations: here the other stations can send their requests using slotted
ALOHA. The data subframe is used for sending variable length data by the other
stations.
So this is a centralized scheme.

Therefore, we have discussed different types of reservation protocols. Here are the review
questions.
(Refer Slide Time: 55:51)

1) What are the advantages of collision-free protocols over random access protocols?
2) How does CSMA/CD differ from CSMA/CA?
3) Compare CSMA/CD and token passing protocols.
4) Why can CSMA/CD or token passing MAC protocols not be used in satellite
communication?
5) Compare the distributed and centralized reservation protocols.

Here are the answers to the questions of lecture 25.

(Refer Slide Time: 56:20)


1) In what environment is it necessary to have MAC for data communication?

In broadcast networks a single transmission medium is shared by all the users and
information is broadcast by a user into the medium. In such a case a protocol is
necessary to orchestrate the transmissions from the users. This protocol is known as
Medium Access Control (MAC).

(Refer Slide Time: 56:34)

2) How is performance improved in slotted ALOHA compared to pure ALOHA?

In slotted ALOHA packets are transmitted in fixed time slots, as you have seen. This
reduces the vulnerable period to Tf from the 2Tf of pure ALOHA. This, in turn, improves
the maximum throughput from 18.4% to 36.8%.
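
For reference, these percentages come from the classical throughput expressions S = G·e^(-2G) for pure ALOHA and S = G·e^(-G) for slotted ALOHA, where G is the offered load; the small Python sketch below simply evaluates them at their peaks (this is standard theory, not taken from the slides).

    import math

    def pure_aloha(G):
        return G * math.exp(-2 * G)     # vulnerable period 2*Tf

    def slotted_aloha(G):
        return G * math.exp(-G)         # vulnerable period Tf

    print(pure_aloha(0.5))     # peak throughput ~0.184 (18.4 %)
    print(slotted_aloha(1.0))  # peak throughput ~0.368 (36.8 %)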
(Refer Slide Time: 57:04)

3) Distinguish between persistent and nonpersistent CSMA schemes.

In the case of nonpersistent CSMA, when the medium is found to be busy it is re-sensed
only after a random amount of time. On the other hand, the medium is sensed
continuously in persistent schemes.

(Refer Slide Time: 57:11)

4) How does the efficiency of CSMA based schemes depend on 'a'?

The parameter 'a' decides the vulnerable period in the case of CSMA as well as
CSMA/CD schemes. The longer the propagation delay, the larger is the value of 'a' and
the longer is the vulnerable period; the longer the vulnerable period, the lesser is the
efficiency, as we have seen.

(Refer Slide Time: 57:36)

5) How is performance improved in CSMA/CD over the CSMA technique?

In the case of the CSMA technique packets are transmitted till the end even when they
suffer a collision. As a collision can be detected within a fraction of the packet
transmission time, the medium is monitored during packet transmission in CSMA/CD, and
packet transmission is stopped as soon as a collision is detected by the station. This
results in more efficient utilization of the bandwidth of the medium.

This concludes today's lecture. In the next lecture we shall discuss the channelization
technique of Medium Access Control, thank you.
Data Communication
Prof. A. Pal
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture # 27
Medium Access Control- III

Hello and welcome to today's lecture on medium access control. This is the third lecture
on this topic. In the first lecture we discussed the contention based medium access
control protocols like ALOHA, CSMA and CSMA/CD. In the second lecture we discussed
the collision-free protocols, where we covered the token ring and other protocols like
round-robin protocols and reservation protocols. In this lecture we shall discuss another
very important approach based on channelization.

Here is the outline of the lecture.

(Refer Slide Time: 01:44)

First I shall discuss the basic concepts of channelization and give an analogy based on
the cocktail party theory, which gives you some idea of how the channelization based
medium access control techniques work. Then we shall discuss the three important types
of channelization based medium access control techniques: frequency-division multiple
access FDMA, time-division multiple access TDMA, and code division multiple access
CDMA. Code division multiple access CDMA is quite complex, and we shall discuss how
it is used and particularly discuss different aspects of it like the transmitter and receiver,
how the multiplexing and demultiplexing are done, how the chip sequences used in
CDMA work, and how the chip sequences are generated with the help of the Walsh table.

On completion of this lecture the students will be able to explain the basic idea of
channelization, explain the FDMA technique, explain the TDMA time-division multiple
access technique, understand the basic concepts of CDMA, explain how multiplexing and
demultiplexing occur in CDMA, and explain the orthogonal property of the chip
sequences which are used in CDMA.

(Refer Slide Time: 03:23)

Finally they will be able to explain how chip sequences are generated with the help of
Walsh table. Here is the basic overview of the medium access control techniques.

(Refer Slide Time: 04:15)


As I mentioned, in the first lecture I discussed the random access type protocols like
ALOHA, CSMA and CSMA/CD, and ALOHA has two different varieties, pure ALOHA
and slotted ALOHA. In the last lecture we discussed the collision-free protocols, where
collision is avoided, and there are a number of techniques such as CSMA/CA, carrier
sense multiple access with collision avoidance; then there are two other basic approaches
like round-robin, where you have got two techniques, polling and token passing, which
we have discussed in detail.

Then we have the reservation based protocols, with two distinct categories based on the
centralized scheme and the distributed scheme. All these things we discussed in the last
lecture. In this lecture, as I mentioned, we shall discuss the channelization approach
based on FDMA, TDMA and CDMA. Let us first try to understand what we really mean
by a channelization technique.

(Refer Slide Time: 05:28)


It is essentially a multiple access method in which the available bandwidth of a link is
shared in time, frequency or using codes. So there are three basic approaches: the
bandwidth is shared in time, in frequency or using codes by a number of stations. These
three alternatives, that is, sharing in time, sharing in frequency and sharing using codes,
have led to three different approaches. The first is FDMA, where the bandwidth is divided
into separate frequency bands and different frequency bands are used by different
stations.

Then we have TDMA where the bandwidth is timeshared that means at different instants
of time different users or stations use the medium. On the other hand, in CDMA the data
from all stations are transmitted simultaneously and are separated based on coding
theory. So here we find that it is neither frequency shared nor timeshared but here it is
based on coding theory so all are transmitting simultaneously in terms of frequency and
time and they are separated by using coding theory.

The three basic approaches can be best explained with the help of a very simple cocktail
party theory.
(Refer Slide Time: 06:33)

Suppose a cocktail party is going on, say banquet dinner of a conference is going on and
it is going on in a big area then there are three possible alternatives. In one case when all
the people group in widely separated areas and talk within each group. That means in this
case normally we have seen different people having different kinds of interest and groups
in different locations where the cocktail party is going on, they talk among themselves
and when all the people group in widely separated areas and talk within each group we
call it FDMA. That means as if we have assigned different frequency bands to each of
these groups and each of these groups are talking using different frequency bands. So
this is your FDMA approach.

Another alternative is when all the people are in the middle of the room but they take turn
in speaking. In the second situation some very important event is going on and all the
people have gathered at a central place and each of them is taking turn in talking. So, in
this case it is equivalent to TDMA or time-division multiple access. Here we have the
basic approach, in terms of time the sharing is taking place.

Now the third approach is very interesting. When all the people are in a middle of the
room but different pairs speak in different languages. It is not unusual that in an
international conference when the banquet dinner is going on people from different
countries will arrive there and obviously when they group together they may be speaking
in different languages.

Let’s assume that all of them have gathered near the central area but small groups are
talking in different languages, a group is talking in English, a group is talking in French,
a group is talking in German, another group is talking in may be Bengali, another is
Hindi, so you can see that each of these groups will start talking simultaneously; and
since they are speaking different languages the group talking in English although they are
hearing the voices of people speak in Hindi or French or German will not interfere with
their discussion. So here simultaneously all of them are talking but they are not
interfering with each other because they are talking in different languages. This is
equivalent to CDMA or Code Division Multiple Access.

Here we shall see how all the people can talk simultaneously at the same time they will
not interfere with each other or each of them can separately talk within their group.
Hence, this is the analogy of the three channelization approaches based on cocktail party
theory. Now let us first focus on the Frequency Division Multiplexing.

(Refer Slide Time: 10:50)

In Frequency Division Multiplexing, as we have already seen, we have a number of
sources, n sources as shown here, and each of them is sending some data. The signals
coming from each of these sources are modulated by using different carrier frequencies
f1, f2, ..., fn, and as a consequence we are creating signals in different frequency bands
which are combined together. In the composite signal, as you can see here (Refer Slide
Time: 10:33), you have the centre frequency f1, which is the carrier frequency of
modulator one, then f2, and so on; each of the signals coming from each of the sources is
now channelized in terms of frequency, and that's why it is called Frequency Division
Multiplexing.

Here the overall bandwidth is divided into separate frequency band and each of the
channels is using one of the channels to transmit. This is how Frequency Division
Multiplexing occurs. We already know that you have to use guard bands in between the
frequency bands of a pair of adjacent channels. So channel one and channel two are the
adjacent channels so their carrier frequencies are fc1 and fc2, you have to use some guard
band between the two so that they are separated by strips of unused bandwidth to prevent
inter-channel cross talk.
(Refer Slide Time: 12:00)

Whenever you perform frequency modulation the bandwidth increases, and inter-channel
cross talk has to be avoided; to do that guard bands are used. So, whenever you have got
a number of channels this particular equation shows the number of channels that can be
used. That is, the number of channels that can be used is N = (Bt - 2*Bguard) / Bc, where
Bt is the total bandwidth available, Bguard is the guard bandwidth, and Bc is the
bandwidth allocated to each channel. This is how we get the total number of channels
that is possible in the case of Frequency Division Multiplexing.
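
As a hedged illustration of this relation (the bandwidth figures below are assumed, and the formula is as reconstructed above), a one-line calculation in Python looks like this.

    def fdm_channels(b_total, b_guard, b_channel):
        # N = (Bt - 2*Bguard) / Bc, taking the whole number of channels that fit
        return int((b_total - 2 * b_guard) // b_channel)

    # Example: 1 MHz total bandwidth, 12.5 kHz guard band, 25 kHz per channel
    print(fdm_channels(1_000_000, 12_500, 25_000))  # 39 channels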

Now you may be asking, what’s the difference between Frequency Division Multiplexing
and Frequency Division Multiple Access?

(Refer Slide Time: 14:06)


Here, as you can see, the signals sent by different channels have been shown in a three
dimensional graph where the three dimensions are time, frequency and code. The signals
coming from different channels vary only in terms of frequency here. However, in the
case of bursty traffic the efficiency is improved in FDMA by using a dynamic sharing
technique to access a particular frequency band. Normally, for each of these channels a
frequency is statically allocated, but if the traffic is bursty, that means all the channels
do not have data to send all the time, and in such a case there can be under utilization of
the channels because a channel is statically or permanently allocated to a particular
station or user.

Therefore, in such a case, if the traffic is bursty a particular channel may not be used. So
what can be done to improve the utilization? Instead of statically allocating a particular
channel to a particular station, the channels can be assigned on demand. That means,
depending on the requirement of different stations or users, a single channel can be
allocated to different stations or users at different times. That is what makes it FDMA,
Frequency Division Multiple Access: not only is the overall bandwidth divided into a
number of channels, but each channel can be allocated to a number of stations or users.
So, if you have got N channels, since each channel can be shared by more than one user,
the total number of stations that can be provided service can be greater than N; if the
channels are statically allocated, the total number of stations that can be served is equal
to N.

However, since this is allocated or assigned dynamically on demand the total number of
stations can be larger than the number of channels. Obviously this is possible only when
the traffic is bursty in nature. If the traffic is streamed that means continuously sent then
of course it cannot be done. This is the difference between Frequency Division
Multiplexing and Frequency Division Multiple Access.
In case of Frequency Division Multiple Access as we have seen each channel can be used
by different users based on the assignment done on demand. So this is your Frequency
Division Multiple Access. Now let us focus on the Time Division Multiplexing.

(Refer Slide Time: 16:25)

As you know, the incoming data from each source are briefly buffered. We have already
discussed Time Division Multiplexing at length and we have seen that each buffer is
typically either one bit or one character in length. The buffers are scanned sequentially to
form a composite data stream, and the scan operation is sufficiently rapid that each
buffer is emptied before more data can arrive. Here it is shown how it is being done.

(Refer Slide Time: 18:10)


Here are the buffers coming from different sources and here you have got some kind of
switch so a connection is established between S1 then S2 then in this way up to Sn but it
is done very fast and some kind of framing is done and composite signal is framed where
you have got data from S1 S2 up to Sn likewise it is repeated so again another frame
comes up as S1 S2 and Sn.

Of course, the composite data rate must be at least equal to the sum of the individual data
rates. So, if you have got n channels and the data rate of each is B, then the composite
rate has to be at least n × B; that means the rate will be at least n times the individual
data rate when you have got n stations or channels. Then the composite signal can be
transmitted along with synchronization bits. Just as guard bands are used in Frequency
Division Multiplexing and the guard band is essentially some kind of overhead, here the
synchronization bits are the overhead bits, as you have seen in the case of Time Division
Multiplexing.

And here also a problem arises if the stations do not have data all the time. For example,
the first frame is filled up because there is data from A, B, C and D. On the other hand,
in the second frame there is data from A, B and C but there is no data from D, so that
slot goes empty. Similarly, in frame 3, C and D have no data to send. This is the
transmitter end, and in a synchronous manner it is received at the other end and AA, BB,
CC and DD are collected.

(Refer Slide Time: 18:50)

But, as you know, Synchronous Time Division Multiplexing leads to under utilization of
the channel if the data is bursty in nature; that's why another approach was developed,
known as Asynchronous Time Division Multiplexing or ATM, where instead of allocating
a fixed slot to each of the channels the slots can be assigned as needed. In TDMA it is
done in a little different way.

(Refer Slide Time: 20:00)

So, as you can see here, the frequency and code remain the same for each of the
channels; however, the time slots vary. In this case each station is allocated a slot of m
bits, and the number of channels N is again obtained from the total bandwidth available
minus twice the guard bandwidth, divided by Bc, the bandwidth of each channel, so you
have got separate channels.

Now, just as in the case of frequency division multiplexing, here also it is possible to
assign the slots to different stations or users dynamically, so channel allocation can be
done dynamically. If, for example, a particular channel is statically allocated to a single
station or user, we call it Time Division Multiplexing.

On the other hand, if it is done dynamically, based on demand, then we call it time
division multiple access. That means a particular channel can be shared by a number of
stations or users. So not only are we dividing the time into different slots, but each of
these time slots can be shared by more than one station or user. That is essentially what
is known as TDMA or Time Division Multiple Access. We have now discussed two of the
basic channelization approaches.
channelization approaches.

Now let us focus on the other one, which is the most complex one, Code Division
Multiple Access, CDMA. In CDMA, as you can see, the medium is neither divided in
terms of time nor in terms of frequency; each of the channels occupies the same
frequency band and time simultaneously. However, they are separated in terms of code,
so you have got channel 1, channel 2 and channel N using different codes. How exactly
this is done by using different codes will be explained very soon.
(Refer Slide Time: 21:57)

Here are the basic differences between TDMA, FDMA with CDMA.

(Refer Slide Time: 22:05)


In TDMA and FDMA the transmissions from different stations are clearly separated
either in time or in frequency, as you have seen: they are sent in different time slots, or
different frequencies are used by different users. But in CDMA the transmissions from
different stations occupy the entire frequency band at the same time, as you have seen in
this particular diagram (Refer Slide Time: 22:37). All the channels are transmitted at the
same time using the same frequency band. It is nevertheless possible to have multiple
simultaneous transmissions separated by coding theory, so we have to use some kind of
coding so that the multiple simultaneous transmissions can be separated.

What is being done is that each station is assigned a unique m-bit code or chip sequence;
that is the basic idea. Each bit sent by a station is then represented using that m-bit chip
sequence. This diagram explains to you how the multiplexing is done.

(Refer Slide Time: 24:20)


Here the binary data is coming from four different sources, that is, station 1, station 2,
station 3 and station 4: from station 1 a 0 is coming, from station 2 a 1, from station 3 a
0, from station 4 a 1, and so on. Now, for pedagogical purposes, 0 is represented by
minus 1 and, on the other hand, 1 is represented by plus 1.

Now each of these stations has a unique chip sequence. The chip sequence for station 1 is
plus 1, plus 1, plus 1, plus 1, which is essentially 1 1 1 1. For station 2 it is plus 1, minus
1, plus 1, minus 1, which is different from the chip sequence of station 1. For station 3
the chip sequence is plus 1, plus 1, minus 1, minus 1, again different from those of
stations 1 and 2, and finally you have got the chip sequence for station 4, which is plus 1,
minus 1, minus 1, plus 1. As you can see, all these four chip sequences are unique; each
of them is different from the other three. Then the binary input is multiplied with the
chip sequence: station 1 is sending a 0, that is, minus 1, so when plus 1, plus 1, plus 1,
plus 1 is multiplied with minus 1 it becomes minus 1, minus 1, minus 1, minus 1.

On the other hand, for station 2 the chip sequence is multiplied with plus 1 so it remains
the same, plus 1, minus 1, plus 1, minus 1. For station 3 the data is minus 1, so when plus
1, plus 1, minus 1, minus 1 is multiplied with it the chip sequence becomes minus 1,
minus 1, plus 1, plus 1; and for station 4 the chip sequence is multiplied with plus 1,
because the binary data is 1, so plus 1, minus 1, minus 1, plus 1 remains plus 1, minus 1,
minus 1, plus 1.

(Refer Slide Time: 27:34)


Now these are added bit position by bit position. For the first bit position minus 1, plus 1,
minus 1, plus 1 add up to 0. For the second bit position minus 1, minus 1, minus 1, minus
1 add up to minus 4 (you have to add all four), then minus 1, plus 1, plus 1, minus 1 gives
0 and minus 1, minus 1, plus 1, plus 1 gives 0. Thus the composite signal corresponds to
0, minus 4, 0, 0, and this can be sent over the medium. This is sent and received at the
other end, so here it comes from the medium, and after it is received the same chip
sequences (Refer Slide Time: 27:00) which were used for the different channels are used
for demultiplexing, so it is multiplied with the same chip sequences.

Therefore, after multiplying with the chip sequences: 0, minus 4, 0, 0 multiplied with plus
1, plus 1, plus 1, plus 1 remains 0, minus 4, 0, 0; multiplied with plus 1, minus 1, plus 1,
minus 1 it becomes 0, plus 4, 0, 0; multiplied with plus 1, plus 1, minus 1, minus 1 it
becomes 0, minus 4, 0, 0; and multiplied with plus 1, minus 1, minus 1, plus 1, which is
the chip sequence of channel 4, it becomes 0, plus 4, 0, 0.

Now you have to perform the addition (Refer Slide Time: 27:55). If you add the four
values for channel 1 you get minus 4, for channel 2 you get plus 4, for channel 3 you get
minus 4 and for channel 4, adding 0, plus 4, 0, 0, you get plus 4. Then you divide by 4,
because the number of bits in the chip sequence is 4, and you get minus 1, plus 1, minus 1
and plus 1 respectively. As you know, minus 1 is nothing but 0 and plus 1 is nothing but
1, so whatever was transmitted, that 0 1 0 1, is now recovered; all of them were sent
simultaneously and the demultiplexer separates them out again by multiplying with the
chip sequences.
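
The whole multiplexing and demultiplexing operation just described can be written as a short Python sketch; it uses exactly the four chip sequences and the data bits 0, 1, 0, 1 from the example above.

    chips = {
        1: [+1, +1, +1, +1],
        2: [+1, -1, +1, -1],
        3: [+1, +1, -1, -1],
        4: [+1, -1, -1, +1],
    }
    data = {1: 0, 2: 1, 3: 0, 4: 1}   # bit sent by each station

    # Encoding: 0 -> -1, 1 -> +1, then multiply by the station's chip sequence
    encoded = {s: [(1 if data[s] else -1) * c for c in chips[s]] for s in chips}

    # The medium adds the four signals element by element
    composite = [sum(encoded[s][i] for s in chips) for i in range(4)]
    print(composite)                  # [0, -4, 0, 0]

    # Decoding: normalized inner product with each chip sequence
    for s in chips:
        inner = sum(x * c for x, c in zip(composite, chips[s])) / len(chips[s])
        print("station", s, "recovered bit:", 1 if inner > 0 else 0)

Running it prints the composite signal 0, -4, 0, 0 and recovers the bits 0, 1, 0, 1, exactly as worked out above.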

In practice this is done in the following way: the binary information, 0 or 1, comes here
and is multiplied with a pseudo-random chip sequence. If R is the bit rate of the data,
then the spreading factor is selected such that the spread signal occupies the entire
frequency band of the medium.

(Refer Slide Time: 29:13)

That means the number of bits in the chip sequence is chosen such that the spread signal
occupies the entire bandwidth of the channel. It can be 8, 16 or 128, depending on the
bandwidth of the medium. Therefore, the signal that comes out has a rate m times the
data rate, where m is the number of bits in the chip sequence.

After multiplying, this is the signal that is generated, and now you can perform digital
modulation like QAM or QPSK and then transmit by using an antenna. This is how the
transmission is performed, and as you can see the bandwidth here (Refer Slide Time:
30:15) is m times the bandwidth of each of the channels.

Now, at the receiving end, signals from all the transmitters are received by this antenna
and the composite signal is passed through the digital demodulator. The digital
demodulator performs the demodulation corresponding to the digital modulation used at
the transmitter, and after demodulation we get the composite signal. That is then
multiplied with the unique pseudo random binary sequence, and after multiplying with
the same pseudo random binary sequence we get back the signal. Of course it will have
some noise because of interference and other problems, but you get back the binary
information.
(Refer Slide Time: 31:00)

Now you may be asking how these chip sequences are generated. This is done by using a
linear feedback shift register. You may recall that for error detection using the cyclic
redundancy code, both to generate the cyclic redundancy code and to check it, you use a
linear feedback shift register. That kind of linear feedback shift register can also be used
here to generate the unique pseudo random binary sequences. The same unique pseudo
random binary sequence is used in the transmitter as well as in the receiver.
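
As a hedged illustration of the idea (not the actual register used in any real CDMA system), here is a minimal Python sketch of a 4-bit linear feedback shift register producing a pseudo-random bit sequence; the seed and tap positions are assumed for the example.

    def lfsr_bits(seed, taps, count):
        # Fibonacci-style LFSR: output the last bit, shift in the XOR of the taps
        state = list(seed)
        for _ in range(count):
            out = state[-1]
            feedback = 0
            for t in taps:
                feedback ^= state[t]
            state = [feedback] + state[:-1]
            yield out

    # 4-bit register with taps at positions 2 and 3: maximal-length period of 15
    print(list(lfsr_bits([1, 0, 0, 1], taps=(2, 3), count=15)))

After 15 output bits the register returns to its starting state, which is the maximal period for a 4-bit register.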

Of course there is one problem here. We have assumed that the signals coming from the
different stations have equal strength, and this necessitates a power control mechanism at
each transmitter to overcome the near-far problem.

Essentially the near-far problem arises because, if a receiver is receiving signals from a
number of transmitters, then depending on the distance the signal coming from a very
near transmitter will be very strong and the signals from the others will be weak; if a
signal is very weak it will effectively be ignored and treated as 0, and if the signal
strengths are not equal the noise, that is, the interference, will increase. In other words,
the summation and the subsequent detection are based on the assumption that the signals
are of equal amplitude. That's why some kind of power control mechanism is used at
each transmitter to overcome the near-far problem, as is done in the cellular telephone
network. We shall discuss it later on. Now let us discuss more about the chip sequences.
sequences.

As I mentioned, each station is assigned a unique m-bit code or chip sequence. Obviously
these are not randomly chosen sequences; you don't generate them in an arbitrary
manner. If you generate them arbitrarily then the property of orthogonality will not be
satisfied, and you have to satisfy the orthogonal property.
(Refer Slide Time: 34:00)

Let us use the symbol Si to indicate the m-bit vector for station i, and Si bar is the
complement of Si. Now all the chip sequences are pair-wise orthogonal; that is, the
normalized inner product of any two distinct codes will be 0. So, for example, one code is
S1 = plus 1, minus 1, minus 1, plus 1 and S2 = plus 1, plus 1, minus 1, minus 1. If you
take the inner product of these two, that means you multiply plus 1 with plus 1 and you
get plus 1, then minus 1 with plus 1 and you get minus 1, then minus 1 with minus 1 and
you get plus 1, and plus 1 with minus 1 and you get minus 1; if you add them up, this is
the inner product, and if you take the summation you will get 0.

That means if you choose any two distinct codes the result will be 0: S1 · S2 is 0, S1 · S3
is 0, S1 · S4 is 0. On the other hand, if you multiply the same code, S1 with S1, then plus
1 with plus 1 becomes plus 1, minus 1 with minus 1 becomes plus 1, minus 1 with minus
1 becomes plus 1 and plus 1 with plus 1 becomes plus 1; if you add them it becomes 4
and if you divide by the number of bits you get 1.

So we find that if you multiply S1 with S1, that is, the same chip sequence with itself, you
get 1. That means if a code Si is multiplied with a distinct code Sj, where i is not equal to
j, the result is 0, and the same holds when it is multiplied with the complement of a
distinct code; whereas multiplying a code with its own complement gives minus 1. This is
the orthogonality property that is to be satisfied by the chip sequences, and only then is
the multiplexing and demultiplexing possible.

In other words transmission and subsequent recovery at the receiving end is possible only
when this orthogonal property is satisfied. So, as I mentioned, the orthogonal property
allows parallel transmission and subsequent recovery at the receiving end. I have already
explained how exactly it is being done with the help of these two diagrams (Refer Slide
Time: 37: 02). Here you are doing the multiplexing and the chip sequences satisfy the
orthogonal property. Then you can sum them together and transmit and then at the
receiving end you can separate them out. This is how the chip sequences are to be
designed.
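
A quick way to check this property is to compute all the normalized inner products; the short Python sketch below does this for the four chip sequences used in the earlier example.

    codes = [
        [+1, +1, +1, +1],
        [+1, -1, +1, -1],
        [+1, +1, -1, -1],
        [+1, -1, -1, +1],
    ]

    def inner(a, b):
        # normalized inner product of two chip sequences
        return sum(x * y for x, y in zip(a, b)) / len(a)

    for i, si in enumerate(codes, start=1):
        row = ["%+d" % inner(si, sj) for sj in codes]
        print("S%d:" % i, " ".join(row))
    # Each code with itself gives +1; every pair of distinct codes gives 0.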

Now let us take up several examples. Suppose S1, S2, S3 and S4 are assigned to channels
A, B, C and D. Let's assume that at a particular instant A is not transmitting, D is not
transmitting, B is transmitting 0 and C is transmitting 1. In another example, say, A is
transmitting, B is not transmitting, C is transmitting 0 and D is not transmitting. In a
third example A is transmitting 1, B is transmitting 0, C is not transmitting and D is not
transmitting.

Now what will be the composite signal in these cases? For B you have to multiply with
minus 1, so its contribution becomes minus 1, minus 1, plus 1, plus 1. Then you have to
add the contribution of C, which is its chip sequence multiplied by plus 1 (Refer Slide
Time: 38:56). While adding these up a correction is needed here (Refer Slide Time:
39:29): the codes shown cannot be the same; S1, S2, S3 and S4 must all be distinct.

(Refer Slide Time: 39:20)

Let us look at the S3 code (Refer Slide Time: 39:34). S3 was shown as plus 1, plus 1,
minus 1, minus 1 and another code as plus 1, minus 1, plus 1, minus 1; these codes on the
slide are not proper, so let me take up the codes afresh: S1 is assigned to A, which is plus
1, plus 1, plus 1, plus 1, then S2 is assigned to B, which is plus 1, minus 1, plus 1, minus
1, and so on. If you assign the codes in this way and perform the multiplication and
addition, you will see that whether A and C are transmitting, or A bar and C, or some of
the other stations are not transmitting, even in those cases it will work.
(Refer Slide Time: 40:21)

Whenever some stations are not sending, even then it will work. That means if some station is
not sending data, say B is not sending (Refer Slide Time: 40:37), shown as a dash, then its
contribution is 0 0 0 0. Even then it will work: if you add 0 0 0 0 into the summation, at the
receiving end you will still be able to recover the transmitted bits by taking the proper inner
products.

Later on we shall take some more examples to illustrate this. If you take orthogonal codes,
with some stations transmitting and some stations not transmitting, it still works. That means
if you have got A, B, C and D it is possible that two stations are not transmitting, one is
sending a 0 and another is sending a 1, and indeed all possible combinations can occur; even
in these situations it will be possible to multiplex and demultiplex at the other end.
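Here is a minimal Python sketch of this multiplexing and demultiplexing, assuming four stations A, B, C and D with the Walsh-style codes that will be derived shortly; a station sends its code for bit 1, the complement for bit 0, and all zeros when it is silent. The station names and code assignment are purely for illustration.

codes = {
    'A': (+1, +1, +1, +1),
    'B': (+1, -1, +1, -1),
    'C': (+1, +1, -1, -1),
    'D': (+1, -1, -1, +1),
}

def encode(station, bit):
    # bit 1 -> the station's code, bit 0 -> its complement, None -> silent (all zeros)
    if bit is None:
        return (0, 0, 0, 0)
    sign = 1 if bit == 1 else -1
    return tuple(sign * c for c in codes[station])

# Example: A sends 1, B sends 0, C and D are silent.
sent = {'A': 1, 'B': 0, 'C': None, 'D': None}
composite = [sum(chips) for chips in zip(*(encode(s, b) for s, b in sent.items()))]

for station, code in codes.items():
    ip = sum(x * c for x, c in zip(composite, code)) / len(code)
    verdict = {1.0: 'sent 1', -1.0: 'sent 0', 0.0: 'silent'}[ip]
    print(station, verdict)

Running this recovers "sent 1" for A, "sent 0" for B and "silent" for C and D from the single composite signal, which is exactly the behaviour described above.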
(Refer Slide Time: 41:30)

Now you may be asking how the chip sequences are generated. What is the technique
used to generate the chip sequences? They have to be pair-wise orthogonal. That can be
done by using the Walsh table in an iterative manner. That means the Walsh table can be used
to generate orthogonal sequences iteratively. How? W1 is simply plus 1, a one-by-one matrix
with a single entry, and W2 is a two-by-two matrix with four entries. In general WN can be
used to generate W2N, so W1, W2, W4, W8 and so on can all be generated in this manner.
How do you generate it? W2N is obtained from WN by putting WN, WN, WN and WN bar in
the four quarters to get W2N.

(Refer Slide Time: 43:28)


For example, W1 is plus 1, and for W2 we substitute W1, W1, W1 and W1 bar, that means
plus 1, plus 1, plus 1 and minus 1. Then W4 can be generated from W2. This is how it is done,
let me show here. You have to put the W2 matrix, that means plus 1 plus 1 plus 1 minus 1, in
the top-left position, then the same plus 1 plus 1 plus 1 minus 1 in the top-right and bottom-left
positions (Refer Slide Time: 43:42), and in the bottom-right position you have to put the
complement of it, which is minus 1 minus 1 minus 1 plus 1. This is exactly what you have got
on the slide. Similarly, W8 can be generated, which will be built from W4, W4, W4 and
W4 bar.

(Refer Slide Time: 44:33)

So in this way you can generate longer chip sequences in an iterative manner. If the table for
N sequences is known, the table for 2N sequences can be created, and it can be proved that
these sequences satisfy the orthogonality property. So this can be used for the purpose of
CDMA, the codes can be generated very easily, and such sequences can also be generated by
using a linear feedback shift register, as I have already told you.
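As a rough illustration of this iterative construction, here is a small Python sketch that builds WN for any power of two; the rows of the resulting matrix are the chip sequences.

def walsh(n):
    # Build the n x n Walsh matrix (n must be a power of 2):
    # W1 = [+1], and W2N is formed from WN as [[WN, WN], [WN, WN_bar]].
    if n == 1:
        return [[+1]]
    half = walsh(n // 2)
    top = [row + row for row in half]                    # [ WN  WN     ]
    bottom = [row + [-c for c in row] for row in half]   # [ WN  WN_bar ]
    return top + bottom

for row in walsh(4):
    print(row)
# [1, 1, 1, 1]
# [1, -1, 1, -1]
# [1, 1, -1, -1]
# [1, -1, -1, 1]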

We have seen how the chip sequences can be generated in a very convenient manner by
using the Walsh table. We have discussed various medium access control techniques in three
lectures. In the first lecture we discussed the random access techniques: ALOHA, CSMA and
CSMA/CD. The next lecture was on collision-free protocols, where we discussed multiple
access with collision avoidance, round-robin and reservation techniques, and in today's
lecture we have discussed FDMA, TDMA and CDMA. Here we have seen that the signals
can be transmitted in three different ways, in terms of time, frequency and code division
multiplexing.
(Refer Slide Time: 46:19)

Now these medium access control techniques have many real life applications. In the next
few lectures we shall discuss about the applications of these medium access control
techniques used in broadcast communication and three important applications we shall
consider. First one is the Local Area Networks (LAN), second one is the Satellite
Networks, third is the Cellular telephone networks.

(Refer Slide Time: 46:55)

In local area networks we shall devote three lectures. The first one will be based on the
IEEE 802 LANs, and you will see that there we have used the CSMA/CD and token passing
protocols. These two protocols are being used in the IEEE 802 LANs, which I shall discuss in
the next lecture. We shall also discuss the high speed LANs, where these medium access
control techniques are used: high speed LANs based on CSMA/CD, which are essentially
Fast Ethernet and Gigabit Ethernet, and also FDDI, the Fiber Distributed Data Interface,
which uses the token passing protocol.

Therefore, in high speed LANs both CSMA/CD and token passing protocols are used, and
there is another kind of local area network, the wireless LAN, where we shall find the use of
the CSMA/CA protocol. Then we shall consider the satellite networks, where we shall see
the use of TDMA (Time Division Multiple Access) and FDMA (Frequency Division Multiple
Access), and primarily reservation techniques are used in these applications.

Finally there will be another lecture on cellular telephone networks, where time division
multiple access, frequency division multiple access and code division multiple access are all
used. We shall also see different standards like GSM and CDMA, where this channelization
approach of medium access control is used, and we shall see how TDMA, FDMA and
CDMA are used in implementing cellular telephone networks.

(Refer Slide Time: 46:55)

So in the next three lectures we shall discuss the applications of these medium access
control techniques and see how they are used. Now it is time to give you the review questions.
(Refer Slide Time: 50:15)

1) In what way does Frequency Division Multiplexing differ from FDMA?

We have already discussed FDM, Frequency Division Multiplexing, earlier, and in this
lecture we have discussed Frequency Division Multiple Access, so the first question is based
on that.

1) In what way does FDM differ from FDMA?
2) In what way does CDMA differ from FDMA?
3) What happens when multiple signals collide in CDMA?

We have seen that in CDMA the various transmissions coming from different sources are
done simultaneously. So whenever collision occurs what happens in case of CDMA will
be the third question.

4) What is an inner product in the context of CDMA?


5) Compare and contrast FDMA, TDMA and CDMA techniques which have been
discussed in this lecture.

These are the five questions to be answered in the next lecture. Now let us consider the
answers to the questions of lecture 26.
(Refer Slide Time: 51:35)

1) What are the advantages of collision-free protocols over random access protocols?
In random access protocols there is loss of bandwidth due to collisions and
retransmissions. We have seen that in random access protocols based on ALOHA, CSMA
and CSMA/CD there are collisions. Whenever there are collisions, as you know, it is
necessary to do retransmissions, and as the traffic increases the number of collisions keeps on
increasing and that reduces the throughput. This is the drawback of these random access
protocols. On the other hand, in collision-free protocols there is no loss of bandwidth as there
are no collisions.

We have seen that in CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance), in
polling, in token passing, or in reservation-based schemes, collision is completely avoided and
as a result there is no loss of bandwidth due to collisions. That is the advantage of these
protocols: collision is completely avoided, there is no loss of bandwidth, and as a consequence
the collision-free protocols are advantageous particularly when the traffic load is high. Then
comes the second question.
(Refer Slide Time: 53:34)

2) How does CSMA/CD differ from CSMA/CA?

In CSMA/CD collisions occur as traffic increases in the network. Although the binary
exponential backoff algorithm is used to minimize the number of collisions, efficiency drops
with increased traffic. We have seen that in CSMA/CD, whenever collisions occur, the
average backoff time is doubled by the binary exponential backoff algorithm, and that
minimizes the number of collisions. But in spite of that, as the traffic load increases the
throughput decreases. As you have seen, particularly when the load is high (Refer Slide
Time: 54:22), in that region the number of collisions and the bandwidth lost to them are
more, and as a result the throughput decreases.

On the other hand, collision is avoided in CSMA/CA using a four-way handshaking protocol.
The four-way handshake uses Request To Send and Clear To Send frames, and by listening to
the Clear To Send the other stations back off and do not send their data; in this way collision
is avoided. This is particularly suitable in a wireless environment where collision detection is
difficult.
(Refer Slide Time: 55:14)

3) Compare and contrast CSMA/CD and token passing protocols.

The token ring is the least sensitive to workload. In token ring, although the overhead is high
when the network is lightly loaded, it is the least sensitive to workload: as the load increases
the throughput does not decrease. On the other hand, CSMA/CD offers the shortest delay
under light load conditions but it is the most sensitive to variations of load. That means when
the load increases in CSMA/CD the delay increases significantly, particularly when the load
is heavy.

Another important difference is that the token ring is suitable for real-time traffic because the
delay is deterministic; the worst case time is known. But in CSMA/CD it is non-deterministic,
and how much time a frame will take is not known. As a consequence, in CSMA/CD some
packets may take a very long time to deliver, and there is a possibility that some unfortunate
packets will not be delivered at all, because in the binary exponential backoff algorithm a
packet is dropped after sixteen collisions. So it is not guaranteed that a particular packet will
be delivered, and that is why, because of the deterministic nature of token ring, it is suitable
for real-time traffic.

On the other hand, for non real-time traffic under lightly loaded conditions the CSMA/CD
protocol is better.
(Refer Slide Time: 57:04)

4) Why can CSMA/CD or token passing medium access control protocols not be used in
satellite communication?

As I told you, the long round trip propagation delay in satellite communication makes it
unsuitable for both collision-detection based and round-robin based protocols. The delay here
is about a quarter of a second, so if a collision is detected by a ground station it actually
happened a quarter of a second earlier. In other words the value of 'a', which is the ratio of
the propagation time to the transmission time, is very high, so the CSMA/CD based protocol
is not suitable. Similarly, round-robin based protocols are also not suitable because it takes a
very long time to do polling or to pass the token; as a result these protocols cannot be used
here.
(Refer Slide Time: 58:44)

5) Compare the distributed and centralized reservation protocols.


As I have mentioned, distributed schemes suffer from the disadvantage of a higher
processing burden on each station. Each station has the responsibility to perform the
medium access control because it is done in a distributed manner, so the processing
requirement on these stations is high. Moreover, the distributed schemes are vulnerable to
loss of synchronization, because synchronization is very much essential and in a distributed
scheme it is difficult to maintain. It can be very easily achieved in centralized schemes, and
that is why centralized schemes have some advantages. However, the distributed scheme is
often preferred because if the central station fails then the medium access control cannot be
done at all, so the distributed approach is much more reliable.

So friends, in this lecture we have discussed the channelization schemes of medium access
control, which concludes our discussion on medium access control techniques, and as I have
mentioned, in the next three lectures we shall discuss the applications of these various
medium access control techniques. Thank you.
Data Communication
Prof. A. Pal
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture # 28
IEEE 802 LANs

Hello and welcome to today’s lecture on IEEE 802 LANs.

(Refer Slide Time: 01:00)

In the last lecture we discussed the various medium access control techniques and mentioned
their applications. One of the important applications of medium access control techniques is
in local area networks. In this lecture I shall introduce to you the first set of standards, which
were developed by the IEEE 802 committee.
Here is the outline of today's lecture.

First I shall consider the basic characteristics of a LAN, particularly the topology,
transmission media and medium access control. These are the three parameters which
characterize a local area network. Then I shall discuss the three standards which were
developed by the IEEE 802 (( )) committee: IEEE 802.3, which is based on CSMA/CD and
whose most popular version is known as Ethernet; another based on token bus, known as
IEEE 802.4; and the third one which we shall discuss today, IEEE 802.5, which is token ring
based.

(Refer Slide Time: 02:10)


So we shall discuss about these three different types of local area networks and compare
the performance of these three types of LANs at the end.

On completion, the students will be able to explain the basic characteristics of local area
networks; they will be able to explain the operation of the IEEE 802 local area networks, the
first one being IEEE 802.3 based on CSMA/CD, the second IEEE 802.4 based on token bus,
and the third IEEE 802.5 based on token ring; and they will be able to compare the
performance of these three LANs.

(Refer Slide Time: 03:10)

First of all let us define what we mean by local area networks.


Earlier we discussed the packet switched networks, where the network can span a very large
geographical area; these are known as wide area networks. On the contrary, local area
networks cover a very small geographical area. It can be a room, a building or a campus; it
does not cover a very large area, maybe a few kilometres in each direction. So a small
geographic area is one of the key features of local area networks.

The second important feature is high reliability. In wide area networks, in packet switched
networks, we have seen that the reliability is very poor because the medium used, the
standard telephone network based on twisted-pair, is not very reliable. On the other hand, in
local area networks we shall see that very reliable media like optical fiber and coaxial cable
are used, and of course twisted-pair is also used but with a very small segment length, so as a
result it is very reliable. So the need for error detection and correction is minimal here; in
particular error correction is not used, however error detection is provided as part of the
scheme, as we shall see.

Then it allows a high data rate. We have seen that the packet switched networks, the wide
area networks, do not have very high speed, although over the years the speed has increased.
But, as we shall see, local area networks have very high speeds, starting with maybe 10 Mbps
and going nowadays up to 10 Gbps. So you see the local area networks offer very high speed.

(Refer Slide Time: 05:25)

Another important characteristic is that local area networks are privately owned, whereas
wide area networks are usually owned by the state, that is the government. Of course
nowadays many private companies are also owners of wide area networks, but that is the
usual situation. On the contrary, local area networks are privately owned; that means a LAN
can be owned by a single person or an organization, whether academic, government or
industry. So these are the typical characteristics, or typical features, of local area networks.

The question arises: how do you characterize a LAN?

There are three important parameters. The topology, transmission media and medium
access control techniques. These three parameters characterize a LAN.

(Refer Slide Time: 06:25)

We shall see the various alternatives available so far as topology, transmission media and
medium access control technique are concerned. First let us start with the topology.

The topology of a network defines how the nodes or stations are connected. As you know, in
a local area network you have to connect a number of computers or stations. They need not
all be computers; they can be peripherals and other communication equipment like (( )) cell
phones and so on. There are three important topologies, which are shown here.

(Refer Slide Time: 09:00)


First one is bus in which there is a shared media and that shared media is shared by all the
nodes. That means all the computers or stations are directly connected to the bus. It is
similar to that electrical line distribution. All the electrical equipment are connected to the
electrical bus so it is somewhat like that. We shall see how different computers can be
connected to a common bus, so here all nodes are connected to a common media. So we
can say you require a single segment of the medium in this particular case.

Another alternative is star which is also commonly used. Here all nodes are connected to
a central node. As you can see here you have got a central node through which all the
computers are connected, that central node can be a hub or it can be a switch through
which different computers or other equipments can be connected. Later on we shall
discuss about it in more detail. So you have got a central node through which all the
communication take place and as you can see it appears like a star and that’s why it is
called star topology.

The third alternative is in the form of a ring, where nodes form a ring by point to point links
to the adjacent neighbors. Here, as you can see, each computer or station is connected to its
neighbor with the help of point to point links, and these point to point links ultimately form a
ring, so all the computers are connected in the form of a ring. Later on we shall see how they
communicate with each other.

Now, as we shall see, there are a variety of transmission media that can be used. The most
popular is the twisted-pair; although it has the minimum bandwidth, it serves the purpose in
many situations. The second alternative is coaxial cable, which is widely used, and the third
option is optical fiber. Optical fiber is gradually becoming more and more popular because of
its very high bandwidth, as you know, and is widely used nowadays in local area networks.
The last alternative is wireless communication; we shall see the different alternatives possible
for wireless transmission.
There is a close relationship between the topology and the transmission media. The
transmission medium that we choose sometimes dictates what kind of topology can be used.
Or, to say it the other way, for a particular topology certain types of transmission media are
suitable. For example, for the bus topology, coaxial cable is the most suitable medium, and
coaxial cable is used whenever the bus topology is used.

(Refer Slide Time: 10:35)

On the other hand, for the ring it is possible to use twisted-pair or optical fiber, and it is also
possible to use coaxial cable if necessary. For the star topology it is possible to use twisted-
pair and optical fiber; these are the most commonly used media. So we find that there is some
relationship between the topology and the transmission medium, and for different LANs these
combinations are used.

Finally we come to the third important parameter, that is the medium access control technique
being used. We have already discussed different types of medium access control techniques.
The most popular ones used in LANs are CSMA/CD, token passing and CSMA/CA, and of
course there are other medium access control techniques like FDMA, TDMA and CDMA
which are also used. But for the three different types of LAN that we shall discuss today we
shall find that CSMA/CD and token passing are the two techniques used.

(Refer Slide Time: 11:56)


Hence, these are the various alternatives of medium access control techniques which are
used in different types of local area networks. Now let us focus on the IEEE 802 standard
LANs. The IEEE 802 committee has developed three different standards, namely IEEE
802.3, IEEE 802.4 and IEEE 802.5. All three standards share two common upper parts. The
first one is IEEE 802.1, which essentially introduces the different types of LANs, serves
some kind of internetworking purpose, and acts as an interface to the upper layers. Then you
have got the logical link control (IEEE 802.2), which is essentially part of the data link layer
of the OSI model. In other words, the data link layer has been divided into two sublayers in
IEEE 802: one is the logical link control, LLC, and the other is the Medium Access Control,
MAC.

Then the lower part is the physical layer and here are the different functions performed
by the two different layers, the data link layer and the physical layer. The physical layer
performs the encoding and decoding. As you know, whenever you send digital signal you
have to perform some kind of encoding, Manchester encoding and different types of
encodings are used, so that encoding and decoding is done, then you have to perform
collision detection, carrier sensing if you use CSMA/CD or CSMA/CA.
(Refer Slide Time: 14:05)

Then of course you have to do transmission and receipt of packets since these are the
functions of physical layer which directly interfaces with the medium. Then the data link
layer which is above the physical layer performs station interface, it performs the data
encapsulation and decapsulation. As we shall see it will form some kind of frame so that
it is possible to perform synchronization and other functions, then it does link
management and collision management.

So, whenever collision takes place it does the management so that it comes out of the
collision and also it performs link management. These are the three different types of
standards.

First we shall focus on the IEEE 802.3 specification, which is based on CSMA/CD. Let us
first focus on the physical layer. The physical layer supports different types of transmission
media, varying from twisted-pair to optical fiber. The options are concisely represented in the
form 10Base5, 10Base2, 10BaseT or 10BaseF.
(Refer Slide Time: 15:37)

The 10 specifies the data rate, that means the data rate is 10 Mbps, and the word Base
specifies that it is baseband signaling rather than broadband. So in this particular case 10Base
means baseband; however, there is also a possibility of using broadband, in which case the
designation is 10Broad36 over coaxial cable, which is broadband communication.

What is the significance of the number that follows, for example the 5 in 10Base5? It signifies
the maximum segment length: each segment of the cable can be 500 meters long. For
example, the first option is 10Base5, which uses thick wire coaxial cable with a maximum
segment length of 500 m. Then there is 10Base2, which uses thin wire coaxial cable, also
known as cheapernet, with a maximum segment length of 185 m, rounded up to give the 2.

The third medium used is twisted-pair, 10BaseT, with a maximum segment length of one
hundred meters. And the fourth one is optical fiber, 10BaseF, where multimode fiber is used
and the maximum length is 2000 m or 2 km. On each segment you can have a different
number of nodes, or computers you can say. For example, in 10Base5 you can have a hundred
computers and in 10Base2 you can have thirty computers on a single segment, while on
10BaseT and 10BaseF you can have 1024 computers. And whenever you are using
broadband, 10Broad36, the number is again 1024 and the 36 refers to a maximum end-to-end
span of 3600 m.

The signaling used in the different situations is shown here. For baseband, the signaling used
is Manchester encoding. We know that Manchester encoding helps you to synchronize the
clock at the receiving end; that is why Manchester encoding is used in Ethernet and
IEEE 802.3. On the other hand, whenever broadband signaling is used, differential phase shift
keying is used, but this is not very popular. The baseband option is the most popular, and that
is why we shall mainly focus on baseband signaling and networks based on baseband signals.
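Since Manchester encoding will come up repeatedly, here is a tiny sketch of it, assuming the usual IEEE convention that a 1 is sent as a low-to-high transition in the middle of the bit period and a 0 as a high-to-low transition; the exact polarity convention should be checked against the standard.

def manchester_encode(bits):
    # Each bit becomes two half-bit levels, so a 10 Mbps stream produces
    # 20 Mbaud on the wire; alternating 1s and 0s (like the preamble)
    # therefore look like a square wave to the receiver.
    levels = []
    for b in bits:
        levels += ([-1, +1] if b == 1 else [+1, -1])
    return levels

print(manchester_encode([1, 0, 1, 0]))
# [-1, 1, 1, -1, -1, 1, 1, -1]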

Let us consider the four cases used in IEEE 802.3. The first one is 10Base5: 10 stands for
10 Mbps baseband transmission, and the standard specifies a 0.5 inch coaxial cable known as
the yellow cable or thick Ethernet, which looks something like the hose pipe used for
gardening. Each cable segment can be a maximum of 500 m long, and the 5 (Refer Slide
Time: 19:22) signifies that; up to a maximum of five cable segments can be connected using
repeaters, giving a maximum length of 2500 m.

That means you can have a number of segments connected with the help of repeaters. You
put a repeater and then connect another such segment; in this way you can have four repeaters
in between and connect five segments in cascade to get a maximum length of 2500 m. And as
I mentioned, at most 1024 stations per Ethernet network are allowed; however, on each
segment the number is only one hundred. So on each segment you can have a hundred nodes
or (( ))

(Refer Slide Time: 20:09)

Now let us see how exactly this 10Base5 works and how the cabling is done. This is the
yellow coaxial cable of 0.5 inch diameter which runs along, and to connect one computer a
transceiver is firmly attached to the cable: a hole called a vampire tap is made which goes
almost halfway into the coaxial cable. That means it touches the inner core conductor, so now
the core conductor is contacted, and the outer conductor, which is in the form of a braided
mesh, is contacted as well. These two are connected to the transceiver which is clamped
directly onto the cable, and from that transceiver an Attachment Unit Interface (AUI) cable
comes to the computer, where you have got a Network Interface Card or NIC, which is either
built in as part of the motherboard or provided as a separate card.
The AUI cable can be up to 50 m in length. This is how a computer is connected in the
10Base5 standard. This is the 500 m segment (Refer Slide Time: 21:58) and at both ends we
have got a terminator; this terminator is very important as it prevents signal reflection at the
ends. This is one segment, and whenever a computer is to be connected a vampire tap is made,
a transceiver is attached, and from that transceiver the AUI cable goes to the computer. This is
how different computers can be connected through this coaxial cable. In this diagram two
computers are connected with the help of two transceivers.

(Refer Slide Time: 22:25)

The transceiver does the sending and receiving, collision detection and electrical isolation,
and the other functions are done by the network interface card which is connected to the
motherboard of the computer. This is how 10Base5 works.

On the other hand, 10Base2 also supports 10 Mbps baseband transmission; the standard
specifies a 0.25 inch coaxial cable known as cheapernet or thin Ethernet. Here the coaxial
cable is of the cheaper variety used in cable TV, 0.25 inches in diameter; it is called
cheapernet because of its lower cost and thin Ethernet because its diameter is thinner than
that of the standard 10Base5 cable.

Here, as I have mentioned, 185 m is the maximum segment length and up to five cable
segments can be connected using repeaters, giving a maximum length of 925 m. So with four
repeaters in between you can have 925 m, and the total number of computers is the same,
which is 1024.

This is connected in the following manner (Refer Slide Time: 24:05). Whenever a computer
has to be connected, the coaxial cable is cut and a BNC T-type connector is attached to the
two ends of the cut cable; the stem of the T is then connected to the Network Interface Card.
So this T connector is essentially connected to the network interface card, and the two ends of
the cable are joined through it.

In this way the thin Ethernet cable can (( )) through the entire building or an entire floor and
can go from one computer to another, with a BNC T connector for each of the computers.

Then comes 10BaseT, which again supports 10 Mbps baseband signaling. Here twisted-pair
is used. As I mentioned, the twisted-pair can be either category 3 or category 5 cable, and it
requires a hub as the central node. The hub is essentially a multiport repeater: whatever signal
is present on one port is also present on all the other ports. The stations are connected with the
help of RJ-45 connectors, that is, the twisted-pair cable terminates in an RJ-45 connector, and
the maximum length of each of these segments can be at most 100 m, as you can see.

You may be asking why we have deviated from using coaxial cable. Actually, in the case of
10Base5 or 10Base2 there is always the problem of loose connections, cable cuts and similar
faults, and time domain reflectometry has to be used for fault detection, which is very time
consuming. That problem is avoided in the hub-based 10BaseT Ethernet network, where it is
very easy to maintain the network and diagnose a fault. That is why this particular topology
has become very popular.
(Refer Slide Time: 26:25)

Another alternative, as I mentioned, is 10BaseF, where F stands for fiber. We can use
10BaseF particularly when the distance is longer. There are three alternatives: 10BaseFP,
where a passive star topology is used, which allows up to 1 km length; 10BaseFL, the most
popular, an asynchronous point-to-point link which gives you up to 2 km; and the third
alternative, 10BaseFB, a synchronous point-to-point link which also gives up to 2 km and
with which up to 15 cascaded repeaters can be used.

(Refer Slide Time: 27:15)

So we have seen the various alternatives of Ethernet, or IEEE 802.3. As I mentioned,
Ethernet and IEEE 802.3 are not really the same. Ethernet was the standard developed by
Xerox, DEC and Intel, and IEEE 802.3 was developed based on Ethernet. Sometimes when
IEEE 802.3 is mentioned we refer to it as Ethernet, but they are not exactly the same:
Ethernet was the standard developed by the three companies Xerox, DEC and Intel, while
IEEE 802.3 was developed by the IEEE 802 committee; however, they are very similar.

As you can see, there is some dissimilarity in the frame formats. There is a preamble, which
is a sequence of alternating ones and zeroes, and since Manchester encoding is used these
alternating ones and zeroes appear as a 10 MHz square wave at the receiving end, so the
receiver can do the synchronization.

(Refer Slide Time: 27:15)

Then in IEEE 802.3 there is a start frame delimiter, the byte 10101011, which signifies the
start of a frame. Synchronization is done with the help of the 7 preamble bytes, that is, 7 times
8 = 56 alternating bits of the 1010 pattern. Then it uses the destination medium access control
address and the source medium access control address; these destination and source addresses
are essentially 48 bits each, as you can see, and this is the MAC address.

Of the first two bits of the address, the first bit set to 0 means an individual address and set to
1 means a group address, and the second bit set to 0 means a globally administered address
while 1 stands for a locally administered address.
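As a small illustration of these two address bits, here is a sketch that inspects the least significant two bits of the first octet of a MAC address, which is where the individual/group and global/local indications are carried in the usual byte notation; the sample addresses are made up purely for illustration.

def describe(mac):
    first_octet = int(mac.split(':')[0], 16)
    ig = first_octet & 0x01          # 0 = individual (unicast), 1 = group (multicast/broadcast)
    ul = (first_octet >> 1) & 0x01   # 0 = globally administered, 1 = locally administered
    return ('group' if ig else 'individual',
            'locally administered' if ul else 'globally administered')

print(describe('00:1a:2b:3c:4d:5e'))   # ('individual', 'globally administered')
print(describe('ff:ff:ff:ff:ff:ff'))   # ('group', 'locally administered') -> the broadcast address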
(Refer Slide Time: 29:45)

Therefore, with the help of these two bits the address defines whether the frame is meant for
unicast, multicast or broadcast, and with the remaining 46 bits it allows a large number of
globally administered addresses, about 7 into 10 to the power 13. So it allows that many
addresses to be used in Ethernet, and each station has a fixed address of this kind. Of course,
the IEEE 802.3 standard originally allowed both 2-byte and 6-byte addresses, but the 6-byte
address is the most popular. Next, LEN stands for the length, the number of data bytes, and as
you can see here (Refer Slide Time: 30:32) this field is where 802.3 differs from Ethernet.

(Refer Slide Time: 30:34)


In the case of Ethernet this field defines the type of the higher level protocol. On the other
hand, in IEEE 802.3 it specifies the number of data bytes. The data field can vary from 46 to
1500 bytes. You may be asking, why 46? The reason is that there is a restriction on the
minimum length of the frame, and if the data is 0 bytes then a 46-byte pad is introduced;
whenever the data is shorter than 46 bytes it is padded up to 46 bytes. On the other hand, if
the data itself is more than 46 bytes then no pad is necessary. And finally there is a frame
check sequence, a 4-byte field carrying a 32-bit cyclic redundancy code for error detection.
Therefore, as you can see, in the IEEE 802 LANs error detection is provided. There is
another important point.

(Refer Slide Time: 32:20)

As you can see in this diagram, between two frames there is a mandatory gap of 9.6
microseconds. This gap, which is essentially a 96-bit-time delay, is provided between frame
transmissions to enable other stations wishing to transmit to take over at this time. For
example, when one frame transmission is over, this gap is allowed before another frame can
be transmitted so that other stations can send their frames.

Of course there is a possibility of collision. We have already discussed how the binary
exponential backoff algorithm is used whenever there is a collision or multiple collisions, so
let us briefly recall it here. Using this binary exponential backoff algorithm the station comes
out of the collision if possible; otherwise, after sixteen attempts, that is 1 plus 15 further
collision attempts, it gives up and the packet is discarded.
(Refer Slide Time: 32:54)

Now there are some important points to be discussed about collision detection. As you know,
a station senses the medium while it is sending a frame, and a collision is detected when a
station senses a signal strength exceeding what it alone would produce. Essentially this is
done by analog circuitry: if the signal level is higher than expected then it detects a collision,
particularly on coaxial cable. On the other hand, whenever twisted-pair is used you have some
kind of a hub, and if there is signal on more than one port that means there is a collision. This
is how collision is detected.

What does the station do whenever a collision occurs? The transmitting station sends a
jamming signal after the collision is detected, which can be either a 32-bit jamming signal of
alternating ones and zeroes or a 48-bit jamming signal.
(Refer Slide Time: 34:05)

So this jamming signal serves as a mechanism to cause non transmitting stations to wait
until the jam signal ends. That means the transmitting stations that have suffered collision
will send the jamming signal so that jamming signal will alert the other stations so that
they will wait until the jamming signal is over before starting transmission.

Now, as I mentioned, there is a concept of minimum frame size. Why does it arise? The
reason comes from this particular situation, as you can see. A starts transmission at time t
equal to 0, and just before the signal reaches the other end B starts its own transmission; there
is a collision near B, which B detects immediately, but the collision signal must travel all the
way back before A can detect it. So, if tau is the one-way propagation time, a frame must take
more than 2 tau, that is two times the propagation time, to transmit, for the collision to be
detected by the sender.

In terms of slot time this corresponds to 51.2 microseconds, which at 10 Mbps corresponds to
512 bits. This assumes the worst case, where you have the maximum number of segments
connected by repeaters, so there is a repeater here and another segment there (Refer Slide
Time: 35:47); in this way a number of segments are cascaded, and the worst-case round trip
delay, two times the end-to-end delay, of 51.2 microseconds is assumed. So this corresponds
to 512 bits, or 64 bytes.
(Refer Slide Time: 35:55)

So you require a minimum frame size of 64 bytes. Of course the other fields are included in
this: the destination address, source address, length and frame check sequence account for
6 plus 6 plus 2 plus 4, which is 18 bytes, and if you subtract 18 from 64 you get 46. That is
how the minimum data field of 46 bytes comes about.
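A quick back-of-the-envelope check of these numbers, with the data rate and slot time as stated above:

data_rate_bps = 10_000_000          # 10 Mbps
slot_time_s = 51.2e-6               # worst-case round trip (2 tau) assumed by the standard

min_frame_bits = round(data_rate_bps * slot_time_s)   # 512 bits
min_frame_bytes = min_frame_bits // 8                 # 64 bytes

overhead = 6 + 6 + 2 + 4            # destination + source + length + FCS = 18 bytes
min_data = min_frame_bytes - overhead                  # 46 bytes, hence the 46-byte pad

print(min_frame_bits, min_frame_bytes, min_data)       # 512 64 46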

So we have seen the need for the minimum frame size of 64 bytes. However, there is also the
possibility of late collisions, collisions that take place after the first 64 bytes; that can happen
because of excessive cable length, too many repeaters, a faulty connector or a defective
network interface card.

(Refer Slide Time: 37:00)


This can happen only in abnormal situations; otherwise all collisions will occur within the
transmission time of the first 64 bytes. That is why the minimum packet length is specified, so
that all collisions are detected during the transmission. Now let us come to the second
important standard, IEEE 802.4, based on token bus.

(Refer Slide Time: 38:45)

Because of the non-deterministic nature of the CSMA/CD medium access control, many
people from industry were not satisfied with the CSMA/CD protocol of 802.3, particularly
General Motors, who were interested in factory automation. They suggested there should be
an alternative where real-time traffic can be sent, where the timing is deterministic, that is the
maximum delay is bounded, and that is how the token bus standard was developed by the
IEEE 802 committee. Here (Refer Slide Time: 38:16), as usual, a bus is used just like in
Ethernet or IEEE 802.3, and the computers are connected to it in the same way as they can be
connected in IEEE 802.3. However, they form some kind of logical ring, as you can see: A is
connected to B, B is connected to E, and E is connected to G, but it is not necessary that they
follow the physical order. So the order in which the stations are attached to the cable can be
different from the order in which this logical ring is formed.

Each station passes the token to its successor, and a station gains access to the bus only when
it holds the token. That means there is a single token; each station gets the token in turn, and
whenever it gets the token it transmits its data. In this way data transmission takes place.
There are four priority classes, 0, 2, 4 and 6, with 0 as the lowest and 6 as the highest
priority. That means if a particular station has frames of the highest priority it will first send
those frames and then the lower priority frames. So we find that to support real-time traffic a
priority concept is introduced in the token bus protocol. And here is the frame format used in
the token bus standard.
format used in token bus standard.
(Refer Slide Time: 39:57)

As you can see, this frame format is different from the IEEE 802.3 frame. Here you have got
the preamble, essentially for the purpose of synchronization, but instead of seven bytes it is
only one byte. Then there is a start delimiter which signifies the beginning of the frame, and
there is also an end delimiter; these are special characters used for marking the start and end
of the frame.

The packet length is not mentioned explicitly; the frame is essentially delimited by the start
delimiter and the end delimiter. As usual there are a destination address and a source address.
The token bus standard allows 2-byte or 6-byte addresses, and the 6-byte address is very
similar to that of IEEE 802.3. Then the data size can range from 0 to 8182 bytes when 2-byte
addresses are used. Whenever 6-byte addresses are used the two address fields take up 8 more
bytes, so the maximum data size becomes 8174 bytes.
(Refer Slide Time: 41:20)

Now there is a frame control field. The frame control field carries the priority bits and
signifies whether the frame is a data frame, a token or a control frame, and for control frames
it identifies the type of control. As you can see, there are several types of control frames
mentioned here, such as claim_token, solicit_successor_1, solicit_successor_2, who_follows,
token and set_successor. Let us see how ring maintenance is done in a distributed manner in
token bus using these frames.

Ring maintenance is quite complex in the case of token bus; possibly this is the most complex
medium access control of the three. As shown here, the claim_token packet is used in the
beginning, at the time of initialization, in case of a lost token, or whenever there is no token
at all. That means when a particular station, or the system, is turned on, it will send a
claim_token packet announcing that this station is now holding the token. In the beginning no
station is active, so whenever the first station turns on it sends the claim_token packet and
essentially becomes the holder of the token.
(Refer Slide Time: 43:30)

Now, one after the other, the stations have to join the ring. How can this be done? That is
done with the help of the solicit_successor_1 frame. Whoever is holding the token will
occasionally send this solicit_successor_1 frame, and whenever a solicit_successor frame is
sent, a station waiting to join the ring will respond and join.

However, there is a possibility of collision when more than one station is waiting and wanting
to join the ring; in that case resolve_contention packets are used to resolve the collision. In
this way the stations can join the ring one after the other.

Suppose a particular station X has predecessor P and successor S. If a new station joins
between X and S, then X sets its successor to the address of the new station, and the new
station takes X as its predecessor and S as its successor. In this way a station can join the
logical ring.
(Refer Slide Time: 44:40)

However, the stations are arranged in the logical ring in descending order of address. That
means the token is passed in descending order, from the highest address down to the lowest
address, and then back around (Refer Slide Time: 45:09). Then a particular station may want
to leave the ring; in that case it sends a set_successor packet, and it is very easy to do.

For example, suppose a station X has predecessor P and successor S and it wants to leave the
ring. It will simply send a set_successor packet to its predecessor P asking P to make S its
successor. Now P passes the token directly to S, and in this way X drops out of the ring. This
is done using the set_successor packet carrying the address of S.
(Refer Slide Time: 46:10)

Solicit_successor_2 is also needed in situations where there is no response from the other
stations; in that case it is used to allow new stations to join the ring. Then fault management is
necessary, which involves handling duplicate tokens and failed stations. The who_follows
packet is used when a successor does not respond in time: a particular station has sent the
token, and its successor should either send a data frame or pass the token on, but if it does not
respond then the who_follows packet is sent.

Who_follows asks for the next successor; that is, if the successor does not respond then the
successor of the successor has to respond. If even the successor of the successor does not
respond then solicit_successor_2 is used so that whichever stations are waiting to join the
ring can join and rebuild it.

In this way the ring maintenance is done in a dynamic manner. So here we find that ring
maintenance is performed in a distributed way, and whichever station is holding the token
acts as a kind of master and issues these control frames. On the other hand, in IEEE 802.5,
which is based on token ring, the ring is organized physically and not just logically.

As you can see, here a station interface can operate in two modes: the listen mode or the
transmit mode. If a particular station is simply watching the frames go by, it receives each bit,
introduces a delay of one bit, and retransmits it. So there is a delay of one bit at each station as
a token or a frame goes by. But if a particular station gets a free token, it grabs it and changes
to the transmit mode. Whenever it changes to the transmit mode it logically breaks the ring, as
you can see (Refer Slide Time: 49:05), absorbs the token, and transmits its frame into the
ring.
(Refer Slide Time: 49:12)

So these are the two possible modes: one is the listen mode, the other is the transmit mode.
Now this arrangement is quite unreliable because of the way the stations are connected: if
there is a break anywhere then the entire ring collapses and no communication is possible,
because each station has to take part in relaying the token or the frame. To overcome that
problem a wiring concentrator is used, where all the cables are brought to a central point
which acts as a kind of central node, and there is a bypass relay for each of the stations.

So whenever a particular station becomes faulty it can be bypassed by closing the
corresponding micro switch or relay. We can call this a third mode, known as the 'bypass
mode', available whenever this kind of wiring concentrator is used. So the reliability of the
topology can be improved by introducing a wiring concentrator, as you can see.

Now here is the token ring frame format. As you can see, there are three initial bytes: the first
one is the starting delimiter, the second one is the access control and the third one is the frame
control. Then you have got the destination address and source address as usual, which can be
two or six bytes each. Surprisingly, there is no explicit limit on the data size in the token ring
frame format. It uses a four-byte checksum for error detection, there is an ending delimiter,
and there is another byte at the end which is used to indicate the frame status.

That means as the frame goes by, obviously in this direction (Refer Slide Time: 51:22), first
the starting delimiter goes, then the access control, then the frame control, then the destination
address, source address, data and so on, and at the end, when the last byte goes by, the frame
status can be indicated with the help of the two bits, the A and C bits.
(Refer Slide Time: 51:45)

Initially these two bits are 0 0. If the destination is not present, the frame comes back to the
sender with both bits still 0 0 (Refer Slide Time: 51:50). If the destination station recognizes
its address but does not accept the packet, for example because of an error, it changes the A
bit to 1 while C remains 0. If it finds that the checksum is also correct and copies the frame,
then it sets both the A and C bits to 1.

In that case the destination is present and the frame has been copied. So as the frame travels
around, the frame status bits are set automatically and act as an acknowledgment to the
source. When the frame comes back around to the source, the source removes it from the
ring, and in doing so it also comes to know whether the destination was present, whether
there was an error, and whether the frame was copied.

As you can see, there is also a token. Whenever there is no data there is no need for a
destination address, source address or anything else; a bit in the access control field specifies
that this is a token, and the token is only three bytes, comprising the starting delimiter, access
control and ending delimiter. Whenever there is no data this token keeps on circulating; in a
lightly loaded token ring network the token will keep circulating most of the time. However,
whenever a station has some data it will grab the token and convert it into the frame format
just described.

Now there is a monitor. One of the stations is designated as the monitor station, which
performs the ring maintenance. In particular it does the duplicate address test and fault
location whenever there is some break in the network. Whenever a station finds that there is
no monitor it tries to claim the token and become the monitor; this is necessary in the
beginning. Whenever a frame keeps circulating around the ring endlessly, the monitor purges
it, and it also periodically sends a frame indicating that the active monitor is present. There
are some other stations that have the potential to become the monitor, and because this
active-monitor-present frame is sent periodically, when the monitor fails a standby monitor
can take over.

(Refer Slide Time: 54:45)

Here is a quick comparison of the three protocols. So far as access determination is
concerned, CSMA/CD uses contention, token bus uses token passing, and token ring also
uses token passing.

(Refer Slide Time: 55:10)

In the case of CSMA/CD there is a minimum packet length restriction of 64 bytes, because
the frame transmission time has to be greater than twice the maximum propagation time. In
the case of token bus or token ring there is no such limit. Then, priority is not supported in
CSMA/CD, while it is supported in both token bus and token ring. As far as sensitivity to
workload is concerned, CSMA/CD is the most sensitive, token bus is sensitive, but token ring
is the least sensitive. The principal advantage of CSMA/CD is its simplicity, and it has got a
wide installed base.

Token bus has got regulated and fair access, and token ring has also got regulated and fair
access. The principal disadvantage of CSMA/CD, as we know, is the non-deterministic delay;
token bus has the highest complexity, and token ring is also complex but less so than token
bus. Now it is time to give you the review questions.

(Refer Slide Time: 56:15)

1) List the functions performed by the physical layer of the 802.3 standard.
2) Why do you require a limit on the minimum size of an Ethernet frame?
3) Why is token bus preferred over CSMA/CD?
4) What are the drawbacks of the token ring topology?
5) How can the reliability of the token ring topology be improved?

These questions will be answered in the next lecture. Here are the answers to the questions of
lecture 27.
(Refer Slide Time: 57:00)

1) In what way does FDM differ from FDMA?

In FDM channels are statically assigned to different stations, which is inefficient in the case
of bursty traffic. In FDMA, on the other hand, channels can be allocated on demand:
efficiency is improved by using a dynamic sharing technique to access a particular frequency
band.

2) In what way does CDMA differ from FDMA?

In FDMA the transmissions from different stations are separated in frequency. On the
contrary, in CDMA the transmissions from different stations occupy the entire frequency
band at the same time, and the multiple simultaneous transmissions are separated using
coding theory.

(Refer Slide Time: 57:35)

3) What happens when multiple signals collide in CDMA?

We know that when multiple signals collide in CDMA they simply add to form a composite
sequence, which is used at the receiving end to demultiplex the sent data.

4) What is an inner product in the context of CDMA?

Two code sequences are multiplied element by element and the results are added; the
number we get is called the inner product. For example, if S1 and S2 are two distinct codes
and we multiply and add them, we get 0. So if you take the inner product of two different
codes you get 0. On the other hand, S1 dot S1 is equal to 4, because you have got four chips
in the sequence, so the inner product is 4.
(Refer Slide Time: 58:34)
Similarly, the inner product of S1 with the complement of a different code, S1 dot S2 bar, is
also 0, whereas S1 dot S1 bar, a code with its own complement, is minus 4.

(Refer Slide Time: 58:44)

5) Compare and contrast FDMA, TDMA and CDMA techniques.

In the case of FDMA the bandwidth is divided into separate frequency bands; in the case of
TDMA the bandwidth is time-shared; in the case of CDMA, on the other hand, data from all
stations are transmitted simultaneously and are separated based on coding theory. Unlike
FDMA, CDMA has soft capacity, which means that there is no hard limit: FDMA and
TDMA are bandwidth limited, whereas CDMA is interference limited. CDMA offers higher
capacity in comparison to FDMA and TDMA, and CDMA also helps to combat multipath
fading.

With this we conclude today’s lecture. In the next lecture we shall discuss about high
speed local area networks, thank you.
Data Communication
Prof. A. Pal
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur

Lecture # 29
High Speed LANs

Hello and welcome to today’s lecture on high speed local area networks.

(Refer Slide Time: 00:59)

In the last lecture we discussed the legacy LANs, which served the purpose for many years.
But with needs increasing because of the application pull and the technology push, there has
been a need for high speed local area networks, which is the topic of today's discussion. Here
is the outline of this lecture:
High speed LAN categories
High speed LANs can be implemented based on token ring; essentially this can be considered
an extension of token ring, known as FDDI.

There are also high speed LANs based on successors of Ethernet; we shall discuss two
technologies in detail, Fast Ethernet and Gigabit Ethernet. Another alternative is based on
switching technology, which has also led to high speed local area networks.
(Refer Slide Time: 02:00)

On completion the students will be able to explain the different categories of high speed
LANs and distinguish FDDI from the IEEE 802.5 token ring LAN; although FDDI is based
on the token ring LAN there are many differences. The students will also be able to explain
how FDDI provides higher reliability compared to IEEE 802.5, and then they will be able to
distinguish between switched and shared LANs. They will also be able to explain the key
features of Fast Ethernet and the key features of Gigabit Ethernet.

(Refer Slide Time: 02:50)


As I mentioned, the LAN technologies of the 1970s, the IEEE 802 committee's LANs 802.3,
802.4 and 802.5, can be considered first generation local area networks. As we have seen,
their speeds were in the range of ten to sixteen megabits per second. But the availability of
powerful computers, which is your technology push, and the emergence of new applications
have created an urgent need for high speed LANs. So not only is there application pull, there
is also technology push.

As I mentioned, because of the advancement of VLSI technology it is now possible to
develop circuits which can operate at very high speed. Because of these two reasons it is now
possible to have high speed local area networks.

As I mentioned, in this lecture I shall consider essentially three different categories of high
speed LAN: one is the successor of token ring, which is essentially FDDI; the second is the
successors of Ethernet, which is one of the most popular LAN technologies; and the third is
based on switching technology. These are the things which we shall discuss in this lecture.

(Refer Slide Time: 04:35)

First let us focus on FDDI, which stands for Fiber Distributed Data Interface. It is based on
token ring and it is an ANSI and ISO LAN standard. As you can see, there is a close
relationship between the OSI layers and the FDDI layers.

As you can see, the physical layer is divided into two sublayers. One is the Physical layer
Medium Dependent sublayer. As we shall see, there are several media which can be used in
FDDI, such as optical fiber and twisted-pair, so there is a need for a separate part of the
physical layer which is medium dependent; we call it the PMD sublayer, and it decides the
transmission parameters, connectors and cabling. Then there is another sublayer, also part of
the physical layer, which performs the encoding, decoding and clocking and defines the
symbol set that is used (Refer Slide Time: 6:02).
Also, there is a station management which encompasses both physical layer as well as
data link layer which performs ring monitoring, ring management, connection
management and also which performs the station management frames generated by this
sublayer. And as you know medium access control performs the addressing, frame
construction, token handling as discussed in the context of 802.5. Then the upper layers
have been kept the same that is logical link control which is part of this data link layer.

(Refer Slide Time: 06:39)

This sublayer acts as an interface with the higher OSI layers, that is, the network layer and above. The LLC used here is identical to that used in 802.3, 802.4 and 802.5.

Now let us have a look at the key features of FDDI.


(Refer Slide Time: 07:25)

One of the important features is the data rate. As you can see, the data rate has been increased from the 4 or 16 Mbps used in case of 802.5, that is token ring, to 100 Mbps, so this is almost an order of magnitude enhancement in speed. And it can support a maximum length of 2 km between stations using multimode fiber and 20 km using single mode fiber.

We have seen that in case of token ring the network span was very small; here the network span can be very large because of the longer cable lengths, and it can support as many as five hundred stations with a network span as big as 200 km. So we see that it cannot be considered just a local area network; it encompasses a much larger area, so you may consider it as some kind of metropolitan area network. It can cover a very big campus.

Some of the features of the physical layer are mentioned here (Refer Slide Time: 8:23). There are two possible media which are supported; one is optical fiber. Here only multimode fiber is shown, but it is also possible to use single mode fiber for longer distances. The data rate in all cases is 100 Mbps, and the signaling technique used is different for the two media. In case of optical fiber it uses two levels of encoding, 4B/5B block encoding and NRZ-I line encoding; both are used here, producing a signaling rate of 125 Mbaud.

On the other hand, in case of twisted-pair it uses MLT-3 encoding, and the maximum number of repeaters that can be used is 100 in both cases. And as I mentioned, when multimode fiber is used the distance that can be covered between stations is 2 km, and for Category 5 unshielded twisted-pair it can be 100 m.

Now, as I mentioned, it uses 4-bit to 5-bit encoding, and each 5-bit code group has no more than one leading 0 and no more than two trailing 0s; this is necessary for clock recovery purposes, and the code is then line encoded by NRZ-I.

(Refer Slide Time: 09:50)

So this block encoding and line encoding are combined to form the encoding for FDDI. And this is the frame format (Refer Slide Time: 10:08): as you can see, there is a preamble of at least 8 bytes, followed by a start of frame delimiter of 1 byte which marks the start of the frame; then there is a frame control field of 1 byte which performs various control operations, just like in 802.5, and then come the destination address and the source address, each of which can be either 2 bytes or 6 bytes, and the data size should be greater than 0.
(Refer Slide Time: 10:20)

As such there is no fixed size for the data field, but there is a maximum frame size that is quite high. Then there is a frame check sequence of four bytes; a 32-bit cyclic redundancy code is used. There is an ending delimiter of 1 byte, and a frame status field, as discussed in the context of 802.5, is also used here. The token used here has a preamble for synchronization, the SFD which is the start of frame delimiter, then frame control, and a status field appended at the end. Hence, as you can see, the token is much smaller compared to the data frame.

(Refer Slide Time: 12:10)


It uses a dual counter rotating ring for the purpose of reliability. In the case of 802.5 we have seen there is only one ring, rotating in the anticlockwise direction as shown. But here, as you can see, there are two rings, one primary ring and one secondary ring. The primary ring is normally in operation and the secondary ring is essentially a standby. Here, as the arrows show, the primary ring rotates in the anticlockwise direction and the secondary ring rotates in the clockwise direction, so packets or tokens can flow in either direction. That means each station has two ports with two fiber connections each, four connections in all, which have to be connected to form the ring.

Now this dual counter rotating ring gives you high reliability. For example, whenever there is a break in the fiber and some link has been disrupted, the ring automatically wraps at the stations adjacent to the break, so the faulty part of the fiber is bypassed. This is how it supports fault tolerance automatically: it detects where the breakage has taken place, performs the repair, and, as you can see here, the ring is re-formed in this manner (Refer Slide Time: 13:12).

(Refer Slide Time: 13:20)

Therefore, by bypassing the broken fiber a ring is formed and the network continues to operate, and it is transparent to the user, so the user will not know that this kind of failure has taken place. Similarly, if there is a station failure then the defective station can also be wrapped out by joining the two rings.
(Refer Slide Time: 13:35)

The secondary ring now becomes part of the active network, so a ring is formed in this manner and the stations can communicate with each other using the token passing protocol while the defective station is bypassed.

Hence, it has got built-in reliability with the help of this dual counter rotating technique. Although the MAC protocol is closely modeled on 802.5, there is one difference. As you have seen, in case of 802.5 a packet travels around the ring, and usually the ring bit length, that is, the time the leading edge takes to come back to the transmitter, is smaller than the transmission time of the packet. As a consequence the transmitting station receives the leading edge while it is still sending, removes the frame from the ring, and only after the transmission of the frame is over does it release a token. That means there is always some data on the ring: after the transmission is over and the data packet is removed by the transmitter, a token is introduced into the ring. So this is how it happens in case of normal token ring.
(Refer Slide Time: 15:16)

However, in case of FDDI there is a possibility that, because of the very long length, the ring bit length can be longer than the transmission time; that possibility exists because the transmission rate is high and the length of each segment is also large. If you are using single mode fiber each segment can be as long as 20 km.

(Refer Slide Time: 15:53)

As a consequence, whenever the ring is big the bit length can be quite high. On the other hand, when you are transmitting at the rate of 100 Mbps the transmission time can be small. As a consequence there is a possibility that the leading edge will not reach the transmitter before the transmission is over; in this particular case the transmission of the packet has already finished before its leading edge returns.

(Refer Slide Time: 16:23)

Now there is the possibility of introducing a token immediately. So immediately after sending a packet a token can be introduced, which is known as early token release.

(Refer Slide Time: 16:34)

After this token is received by the next station it can start sending data. So you can see there is a possibility of having multiple frames in the ring simultaneously, and that is what is supported by FDDI. In FDDI there is early token release, which means that immediately after the transmission of a frame a token is released, and as a consequence multiple frames can be present in the ring simultaneously.
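A quick back-of-the-envelope calculation shows why early token release matters at FDDI speeds. The numbers below, the ring length, a typical fiber propagation delay, and a large frame size, are illustrative assumptions, not figures from the standard.

# Illustrative numbers only: compare ring propagation delay with frame transmission time.
ring_length_km = 100                 # assume a large FDDI ring
prop_delay_per_km = 5e-6             # roughly 5 microseconds per km in fiber (typical figure)
frame_bits = 4500 * 8                # assume a large 4500-byte frame
data_rate = 100e6                    # 100 Mbps

ring_latency = ring_length_km * prop_delay_per_km    # time for a bit to travel around the ring
tx_time = frame_bits / data_rate                     # time to clock the whole frame out

print(f"ring latency  = {ring_latency * 1e6:.0f} us")   # 500 us
print(f"transmit time = {tx_time * 1e6:.0f} us")        # 360 us
# The leading edge has not returned before transmission ends, so the station releases the
# token immediately after the frame (early token release) and several frames can be on the ring at once.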

The MAC protocol is a timed token protocol, used to support both synchronous and asynchronous traffic. In case of 802.5, that is your token ring network, a timed token protocol is not used, but by using this timed token protocol FDDI supports both synchronous and asynchronous traffic.

(Refer Slide Time: 17:35)

There are three timers: one is known as the Token Rotation Timer (TRT), another the Token Holding Timer (THT), and another the Valid Transmission Timer. These timers are used for sending frames and handling the token. As you can see, a station waits for a token; after a token is received it grabs the token and sets the Token Holding Timer based on the TTRT, that is, the target token rotation time, which is usually set by the network manager and which is larger than the total ring latency, that is, the propagation time of the ring. It then resets the Token Rotation Timer and keeps on sending until the THT becomes 0 or no packet is left. However, if there is no packet to send, the token is released immediately.

Now, as you can see, the time budget is decided in this way: D max is the total propagation time of the ring, F max is the maximum transmission time of a frame, then there is the token transmission time, and there is the allotment of synchronous traffic. That means each station is allocated a fraction of time for transmitting synchronous traffic, and after transmitting synchronous traffic, if any time is left, asynchronous traffic is sent. All these parameters added together must be less than the TTRT, that is, roughly D_max + F_max + token time + total synchronous allocation <= TTRT, where the target token rotation time is usually set by the network manager.
Hence, by using this kind of timed token protocol, both asynchronous and synchronous traffic are supported by FDDI. Now we move to other high speed LANs, which are essentially based on the successors of Ethernet.
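Before moving on, here is a simplified sketch of the timed token idea just described. The function and variable names are illustrative, frames are represented only by their transmission times, and the real FDDI rules are more involved; this is a sketch, not the standard's algorithm.

# Simplified sketch of a timed token station (illustrative only, not the full FDDI MAC).
def on_token_arrival(ttrt, trt_elapsed, sync_queue, async_queue, send):
    """ttrt: target token rotation time set by the network manager.
    trt_elapsed: time measured since the token was last seen at this station.
    Queues hold frame transmission times (in seconds) for simplicity."""
    # Synchronous traffic always gets its pre-allocated share.
    while sync_queue:
        send(sync_queue.pop(0))
    # Asynchronous traffic may only use the time by which the token arrived "early".
    tht = max(0.0, ttrt - trt_elapsed)        # token holding time
    while async_queue and tht > 0:
        frame_time = async_queue.pop(0)
        send(frame_time)
        tht -= frame_time
    # ...then the token is released to the next station.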

(Refer Slide Time: 20:20)

As I mentioned, the successors of Ethernet use two basic approaches. The first approach is to increase the data rate while keeping the same shared approach. As you have seen in case of 802.3, that is, Ethernet based on CSMA/CD, a shared bus was used for transmission, or a hub was used, which essentially acts as some kind of shared multiport repeater with the help of which data transfer was done.

Now, using the shared approach, speed can be increased from 10 Mbps to 100 Mbps (Fast Ethernet), and from Fast Ethernet at 100 Mbps to Gigabit Ethernet at 1000 Mbps; nowadays 10 Gigabit Ethernet, that is 10 Gbps, is also available. This is how the speed can be increased, and it requires replacement of the network interface card: whenever you change the technology, going from Ethernet to Fast Ethernet or from Fast Ethernet to Gigabit Ethernet, there is a need to change the network interface card.

However, there is another alternative, which is to use switching technology. This does not require any change of network interface card, but using switching technology you can achieve higher bandwidth than the standard shared 10 Mbps. How it can be achieved is explained here.
(Refer Slide Time: 21:45)

As you see here, this is the shared approach where you are using a hub, so it forms a physical star but a logical bus. That means whatever signal is present on one port is also present on all the ports, so effectively you may consider it as a bus; all the ports are represented like this because the same signal is present on all of them.

In other words, the bandwidth is also shared. Here it is an 8-port hub, so ten megabits per second is shared by the eight computers connected to these links. So you can say the bandwidth is being shared, and also that all ports belong to the same collision domain: if two of these computers send simultaneously, say a computer here and a computer here, then there is a collision. All ports belong to the same collision domain, and this is what is overcome in switched networks. Let us see how.

The switch is nothing but a fast bridge. We have already discussed the function of a bridge. It functions not only as a physical star but also as a logical star. Why? Because if you connect a computer here and another computer here, so that three or four computers are connected, then, since it is a fast bridge, computer A can communicate with B while C communicates with D simultaneously, because each port has a buffer and a separate collision domain. The signal present on one port need not be present on all other ports; it is forwarded only to the port where the destination station is connected, and because of that each port has its own collision domain. Earlier the entire LAN formed one collision domain, but here the collision domains are restricted to the individual ports.

Also, each port has dedicated bandwidth: with an 8-port switch each station gets its full bandwidth rather than a share of it. Another advantage of the switched approach is that whereas with a hub we cannot have full-duplex communication, here full-duplex communication is possible. Suppose A and B are connected with two pairs of wires; then each can transmit as well as receive, so both can transmit and receive simultaneously. Full-duplex communication is possible whenever you are using this switching technology.

Moreover, there is no need for CSMA/CD. The reason is that since there is no shared medium there is no possibility of collision, and as a consequence there is no need for the CSMA/CD protocol to detect collisions and recover from them.

(Refer Slide Time: 25:20)

So here, since the stations can communicate simultaneously among themselves, the CSMA/CD function may not be needed. We know that in case of a switch the ports are provided with buffers, and the switch maintains a directory of MAC addresses and port numbers, so based on the destination address the packet is forwarded to the particular port.

Each frame is forwarded after examining the destination address and is sent out of the proper port. Three possibilities exist in this case. In the first case the forwarding can be cut through. By cut through we mean the following: as we know, the frame (Refer Slide Time: 26:14) starts with the delimiter and various other fields and then carries the destination address and source address; as soon as the destination address is available the frame can immediately be forwarded to the port without looking at the remaining part of the frame. This is known as the cut through approach of forwarding.

However, whenever a frame is forwarded in this manner you cannot say that it is free from collision or from error. The reason is that, as you know, for detecting a collision it is necessary to receive at least 64 bytes, and the 64-byte point will be somewhere later in the frame; unless you receive 64 bytes you cannot guarantee that the frame has not suffered a collision. That means you are forwarding a frame before receiving 64 bytes, and as a result a collision may not be detected, so you may be forwarding a frame which has suffered a collision.

Also, as you know, the frame check sequence is present near the end of the frame, so unless you receive the entire frame up to that part you cannot detect errors. If you forward only after receiving 64 bytes you can say the frame is collision free, but frames are still forwarded without error detection because you are not waiting till the end of the frame. However, when you receive the entire frame, buffer it, check for errors and forward it only if it is error free, that is known as fully buffered.

Therefore, depending on the specific approach, forwarding can be cut through, collision free or fully buffered. Obviously in the cut through case the delay is minimum and in the fully buffered case the delay is maximum.

(Refer Slide Time: 28:12)

The reason is that in the fully buffered case you have to receive the entire frame and perform error checking before you can send it, while the collision free approach has some intermediate delay between the minimum and the maximum, since you forward after receiving 64 bytes. These are the approaches used in switches, that is, whenever you are using the switching technique.
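The forwarding behaviour just described can be summarised in a short sketch of a learning switch with the three forwarding modes. The class and method names, and the exact byte thresholds, are illustrative simplifications, not a real switch implementation.

# Illustrative sketch of a learning switch and its three forwarding modes.
class Switch:
    def __init__(self, mode="fully_buffered"):
        self.table = {}          # MAC address -> port number, learned from source addresses
        self.mode = mode

    def learn(self, src_mac, in_port):
        self.table[src_mac] = in_port

    def forward_point(self, frame_len):
        """How many bytes must arrive before the frame is put on the output port."""
        if self.mode == "cut_through":
            return 6             # forward as soon as the 6-byte destination address is read (minimum delay)
        if self.mode == "collision_free":
            return 64            # wait out the minimum frame size, so collision fragments are filtered
        return frame_len         # fully buffered: whole frame received and FCS checked (maximum delay)

    def out_port(self, dst_mac):
        # Known destination -> its learned port; unknown destination -> flood all other ports.
        return self.table.get(dst_mac, "flood")

sw = Switch(mode="cut_through")
sw.learn("aa:bb:cc:dd:ee:01", in_port=3)
print(sw.forward_point(1500), sw.out_port("aa:bb:cc:dd:ee:01"))   # 6, 3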

Now let us focus on the other approach. As I said, the speed can be increased by increasing the data transfer rate, so from 10 Mbps it can be enhanced to 100 Mbps. This is your Fast Ethernet.
(Refer Slide Time: 29:10)

However, when the IEEE convened a meeting of the IEEE 802 committee to increase the speed of Ethernet, there were distinct proposals. Many people were not happy with Ethernet because of the probabilistic nature of the transfer: as you know, there can be unbounded delay, a particular frame may suffer many collisions, and as a result it may not be delivered to the destination in time.

Hence, to overcome this problem one group suggested redoing everything. Another group suggested increasing the speed while keeping everything else the same, and this led to the standard IEEE 802.3u, which is essentially Fast Ethernet, proposed in June 1995.

But the first group had suggested simply discarding the Ethernet scheme and proposing a fully new standard, which became 802.12, that is, 100VG-AnyLAN; the data transfer rate is again 100 Mbps but it uses a completely different approach. Unfortunately this particular standard has not become popular. On the other hand, Fast Ethernet has become popular because of the widespread deployment of Ethernet.

Let us look at the features of Fast Ethernet.


(Refer Slide Time: 30:45)

IEEE 802.3u is the standard used for Fast Ethernet, and it was not considered a new standard but an addendum to IEEE 802.3. That means it was treated as an extension of IEEE 802.3 rather than a new standard: the frame format has been kept the same, and the minimum and maximum frame lengths used in Ethernet have been kept the same, so it preserves backward compatibility. Since it preserves backward compatibility and Ethernet is already time tested, there are no unforeseen problems; all the problems have already been sorted out, and it uses the same technology as 802.3. However, some new features were added, one of them being autonegotiation.

Autonegotiation means the network interface cards were provided with some additional functions, keeping the standard backward compatible with Ethernet. With the help of autonegotiation the two ends of a link can decide whether it will operate at ten megabits or at a hundred megabits; a particular network interface card can support both 10 and 100 Mbps. That means a Fast Ethernet switch can communicate with a computer operating at 10 Mbps while other ports operate at 100 Mbps. So you can see that this kind of incompatibility is overcome by using autonegotiation.
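The effect of autonegotiation can be illustrated with a tiny sketch: both ends advertise what they support and the best common capability wins. The priority ordering and the capability names used here are assumptions for illustration only.

# Illustrative sketch of autonegotiation: pick the best capability both ends advertise.
PRIORITY = ["100BASE-TX full", "100BASE-TX half", "10BASE-T full", "10BASE-T half"]

def negotiate(local_caps, remote_caps):
    for cap in PRIORITY:                      # highest common capability wins
        if cap in local_caps and cap in remote_caps:
            return cap
    return None                               # no common operating mode

print(negotiate({"100BASE-TX full", "10BASE-T full"},
                {"10BASE-T full", "10BASE-T half"}))
# -> '10BASE-T full': a Fast Ethernet port talking to a 10 Mbps station falls back gracefully.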

Also, it provides multiple capabilities: it can transfer at 10 Mbps and also at 100 Mbps. However, some changes had to be made. Obviously the physical layer becomes a little more complicated, so one additional sublayer was added, known as the reconciliation sublayer, which essentially receives data from the upper layers and transfers it to the medium independent interface in 4-bit nibbles; the encoding and decoding are then done below that interface.
(Refer Slide Time: 33:18)

Then there is a physical layer entity, which sits below the medium independent interface, and of course you have got the medium dependent interface, which depends on the medium being used; here are the different media supported by Fast Ethernet. It can use CAT 3 unshielded twisted-pair, CAT 5 unshielded twisted-pair, and also multimode fiber. So far as the topology is concerned, it supports only a hub or switch; it does not support the bus topology used in Ethernet, so it is essentially hub or switch based. A hub or switch can have multiple ports, and essentially the star type of topology is used.

Then it uses the CSMA/CD protocol. However, either a minimum frame size must be maintained or the span of the network must be reduced so that collision detection is ensured. As you know, 64 bytes is the minimum size used in Ethernet. Since Fast Ethernet has retained the same minimum size, it had to reduce the number of repeaters so that collision detection with 64-byte frames still works: only two repeaters can be used in Fast Ethernet, whereas the five repeaters allowed in Ethernet cannot be used.
(Refer Slide Time: 35:00)

Here you can see various implementations of Fast Ethernet. There are two versions: a two-wire (two-pair) version and a four-wire (four-pair) version. The two-wire version is designated 100Base-X, which means baseband transmission at a data rate of 100 Mbps, with X standing for the two-wire variants.
On the other hand, there is 100Base-T4, which is the four-wire variant. Let us consider the cases. First we shall look at 100Base-TX (Refer Slide Time: 35:57), where 4B/5B block encoding is used. As you can see, the reconciliation layer supplies four bits at a time at a 25 MHz clock, that is, 100 Mbps in total; the block encoder converts this into a 125 Mbaud stream, and then MLT-3 line encoding is performed. MLT-3 is multilevel transmission using three levels: it uses plus 1, 0 and minus 1, and it is similar to NRZ-I encoding in that a 1 causes a level change.
(Refer Slide Time: 36:30)

Whenever a 1 is encountered, the level changes at the beginning of that bit interval, for example from plus 1 to 0, or from 0 to minus 1, and so on; so it is very similar to differential encoding, except that three levels are used in the MLT-3 line encoder, and this is the signal that is sent (Refer Slide Time: 37:12) in case of 100Base-TX. 100Base-TX uses Category 5 unshielded twisted-pair, which gives a maximum segment length of 100 m.
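A minimal sketch of the MLT-3 idea follows: a 1 moves the output one step along the repeating cycle 0, +1, 0, -1, while a 0 holds the current level. The function name and the example bit string are illustrative only.

# Minimal sketch of MLT-3 line encoding: three levels (+1, 0, -1);
# a 1 steps to the next level in the cycle 0, +1, 0, -1, a 0 holds the level.
def mlt3(bits, cycle=(0, 1, 0, -1)):
    idx, out = 0, []
    for b in bits:
        if b == "1":
            idx = (idx + 1) % len(cycle)   # the level changes only on a 1 (like NRZ-I)
        out.append(cycle[idx])             # the level is held on a 0
    return out

print(mlt3("11011000111"))   # [1, 0, 0, -1, 0, 0, 0, 0, 1, 0, -1]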

On the other hand, in 100Base-FX, where optical fiber is used, NRZ-I line encoding is employed; the block encoding is the same 4B/5B encoding, so the signaling rate is again 125 Mbaud. You may be asking why Manchester encoding has not been used and why 4B/5B block encoding has been chosen instead. The reason is that if Manchester encoding were used the signaling rate would be 200 Mbaud, so the cost of implementation would be very high. To reduce the cost of implementation and to require less bandwidth, this encoding is used; even so, the clock can be recovered and clock synchronization is achieved, as there are enough transitions in the signal.

In case of the fiber optic medium, instead of MLT-3 encoding, NRZ-I encoding is used, as shown here, so it has got two levels.
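The bandwidth argument above is simple arithmetic; the short calculation below just works out the nominal signalling rates quoted in the lecture, using the worst case of two signal changes per bit for Manchester.

# Why 4B/5B + NRZ-I instead of Manchester at 100 Mbps: compare signalling rates.
bit_rate = 100e6
manchester_baud = 2 * bit_rate            # Manchester: up to two signal changes per bit -> 200 Mbaud
four_b_five_b_baud = bit_rate * 5 / 4     # 4B/5B: 5 code bits carry 4 data bits -> 125 Mbaud
print(manchester_baud / 1e6, four_b_five_b_baud / 1e6)   # 200.0 125.0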
(Refer Slide Time: 39:00)

Coming to the four-wire version: there are many installations where Category 5 cabling is not present but Cat 3 cable is already installed, so if you want to utilize the existing cables you can go for 100Base-T4. Here, as you can see, the 100 Mbps stream is converted using 8B/6T encoding, in which each group of eight bits is converted into six ternary symbols, and this is sent over four pairs of wires.

To keep the cost of implementation low it uses four pairs of ordinary wires; as you can see, one pair is used exclusively for transmit, another exclusively for receive, and two pairs are bidirectional, used for both send and receive. The dedicated transmit and receive pairs are also used for collision detection purposes, while the bidirectional pairs (Refer Slide Time: 39:43) carry data in either direction. So, by using four pairs you can transmit at the rate of 100 Mbps over Category 3 cable.

Therefore, these are the three different types of physical media supported by Fast Ethernet. In 1995 Ethernet captured 86% of the total market, so Fast Ethernet became very popular, and as a result there was a need for further enhancing this technology. You may be asking why Ethernet became so popular. First Ethernet was proposed, then Fast Ethernet; both were compatible and could be used together, and the reasons for their success were high reliability, which holds even today, availability of management and troubleshooting tools, high scalability (you can very easily expand the network), a cost of implementation that gradually reduced, and a switch cost per port that reduced continuously with time.
(Refer Slide Time: 40:55)

That means, because of the widespread deployment, the cost of implementation became very small in case of Ethernet, and as a result FDDI and ATM, which were used in the intermediate period, were gradually phased out, taken over by Fast Ethernet technology. So FDDI and ATM, which can be considered second generation high speed local area networks, were gradually replaced by Ethernet.

(Refer Slide Time: 41:36)

However, 100 Mbps was not found to be adequate, so a new standard was developed which is known as Gigabit Ethernet. The key objective of Gigabit Ethernet was a higher speed network at the backbone level, supporting a data rate of 1000 Mbps. To retain backward compatibility the frame format was again kept the same, along with the minimum and maximum frame sizes, and both half-duplex and full-duplex operation are allowed. Normally full-duplex operation is used; however, to maintain backward compatibility half-duplex is also supported, using CSMA/CD with at most one repeater and a collision domain diameter of 200 m.

In case of Ethernet the number of repeaters was 5, in Fast Ethernet it was reduced to two, and in case of Gigabit Ethernet it has been reduced to only 1, so that collision detection remains possible. It uses only star wired topology, meaning no bus topology is supported; it is again based on a hub or switch, and multiple ports are provided on the hub or switch.

(Refer Slide Time: 43:08)

It supports fiber as well as copper: at least 25 m on copper, at least 500 m on multimode fiber and at least 2 km on single mode fiber. It has also added some features which are not present in Ethernet or Fast Ethernet, such as flow control. These are the functional elements of the Gigabit Ethernet technology. As you can see, the physical layer has several alternatives below the medium access control (MAC) layer, which supports full-duplex or half-duplex operation, and there is a gigabit media independent interface; 8B/10B encoding and decoding is used for the two-pair (1000Base-X) transmission, whereas for the four-pair transmission, that is 1000Base-T, the encoding and decoding are different.

1000Base-LX uses long wavelength optics over fiber, 1000Base-SX uses short wavelength optics, 1000Base-CX uses shielded balanced copper (150-ohm balanced cable), and 1000Base-T uses Category 5 UTP. So it supports four different transmission media; the physical layer for the first three cases is referred to as the 802.3z physical layer, and for 1000Base-T it is the 802.3ab physical layer.
(Refer Slide Time: 44:35)

Now, to provide more flexibility, an important feature known as the Gigabit Interface Converter (GBIC) has been introduced by the developers of Gigabit Ethernet. The GBIC allows network managers to configure each port on a port by port basis: you can use 1000Base-CX on one port, 1000Base-T on another, 1000Base-SX on a third and 1000Base-LX on a fourth, supporting different media using different types of optical fiber and laser. As you can see here, 1000Base-SX uses a 780 nanometer laser and 1000Base-LX uses a 1300 nanometer laser.

(Refer Slide Time: 46:05)


In addition to that there is some value addition, done by providing another medium option, the long haul (LH) variant, which supports a distance of up to five to ten kilometers using single mode fiber. Therefore, even while sending data at 1000 Mbps you can cover a distance of 5 to 10 km, which is a very big advantage of the Gigabit Ethernet network.

Now let us look at the encoding techniques that are being used in Gigabit Ethernet.

(Refer Slide Time: 46:57)

So you can see here that 1000Base-X, as I was saying, which uses two-pair cabling, either copper or optical fiber, receives 8 bits at a time from the reconciliation layer and then performs 8B/10B block encoding so that synchronization is possible. The resulting signaling rate is 1.25 Gbaud, and NRZ line encoding is used to send the data on the optical fiber or copper, whatever it may be.

On the other hand, when four-pair cabling with Category 5 cable, that is Cat 5 UTP, is used, a very complicated encoding technique is employed, known as 4D-PAM5, four dimensional five level pulse amplitude modulation. It receives the eight bits from the reconciliation layer at a clock rate of 125 MHz, and this encoding is used to transmit data over the four pairs, supporting a run of up to 100 m of Cat 5 cabling. Alternatively, shielded twisted-pair can be used over short distances of up to 25 m, that is 1000Base-CX, or optical fiber can be used to support a few hundred meters, 3 km, or even 5 to 10 km with the long haul option.
(Refer Slide Time: 48:35)

These are the key features of Gigabit Ethernet. It is not simply Ethernet running at 1000 Mbps; it is not just an extension in speed, as many more features have been added. First of all there is carrier extension.

As I mentioned, although the minimum packet size remains 64 bytes, that is not enough for detecting collisions when you are transmitting at 1000 Mbps; in such a case the collision window corresponds to 512 bytes. The frame can still be 64 bytes, and the carrier is extended to 512 bytes so as to achieve a 200 meter collision domain while keeping the minimum frame size at 64 bytes.
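A sketch of the carrier extension idea follows: if a frame is shorter than 512 bytes, extension symbols are appended on the wire so the collision window is long enough, while the frame itself still only has to meet the 64-byte minimum. The names and the simple model are illustrative.

# Illustrative sketch of carrier extension in half-duplex Gigabit Ethernet.
SLOT_BYTES = 512      # collision window at 1000 Mbps corresponds to 512 bytes, not 64
MIN_FRAME = 64

def bytes_on_wire(frame_len):
    frame_len = max(frame_len, MIN_FRAME)            # the normal Ethernet minimum still applies
    extension = max(0, SLOT_BYTES - frame_len)       # pad the carrier, not the frame itself
    return frame_len + extension

print(bytes_on_wire(64))    # 512: a minimum-size frame is followed by 448 extension bytes
print(bytes_on_wire(1000))  # 1000: long frames need no extension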

Another feature that has been added is known as frame bursting. Once the channel has been acquired, instead of sending just one frame at a time, a station can keep on sending multiple frames after the slot time is over, since it is sending at a fast rate. Then there is buffered distribution, where all incoming frames are buffered in first-in first-out order, which is another added feature, and it also performs flow control using the X-ON/X-OFF protocol, which is not present in Fast Ethernet or Ethernet.
(Refer Slide Time: 50:20)

Since it is using full-duplex communication, this X-ON/X-OFF protocol can be supported. For example, this is how the X-ON/X-OFF protocol works.

(Refer Slide Time: 51:24)

So this is the switch and this is the server. Suppose the server is sending data at a very high speed and the data is going to the switch, but the switch is unable to transfer the data onward: it is buffering the packets, but the other stations connected to it are not able to receive at that rate, so the switch is getting congested. In the reverse direction it will then send a pause frame, and after receiving that pause frame the server will stop sending and wait the required amount of time before resuming. It is somewhat like the stop-and-wait idea used for flow control; the same is implemented over the full-duplex link and is known as the X-ON/X-OFF protocol.
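The exchange just described can be written as a toy simulation. The buffer threshold and the class names are arbitrary assumptions; the real mechanism is a pause frame carrying a pause time, but the stop-and-resume behaviour is the same idea.

# Toy sketch of X-ON/X-OFF style flow control between a fast server and a switch port.
class SwitchPort:
    def __init__(self, high_water=8):
        self.buffer, self.high_water = [], high_water

    def receive(self, frame):
        self.buffer.append(frame)
        # If the buffer fills faster than it can be drained, ask the sender to pause.
        return "PAUSE" if len(self.buffer) >= self.high_water else "OK"

class Server:
    def __init__(self):
        self.paused = False

    def on_feedback(self, msg):
        if msg == "PAUSE":
            self.paused = True   # stop transmitting for the requested time, then resume

port, server = SwitchPort(), Server()
for i in range(10):
    if not server.paused:
        server.on_feedback(port.receive(f"frame-{i}"))
print(len(port.buffer), server.paused)   # the server stops once the port signals PAUSE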

Moreover, in Ethernet and Fast Ethernet there is no quality of service. The reason is that, because of the CSMA/CD protocol, access is probabilistic: there can be unbounded delay and there can be loss of frames, which cannot be accepted in many applications, particularly real-time applications. So quality of service has been added in case of Gigabit Ethernet, to support consistent bandwidth and jitter, and the relevant standards are 802.1p and 802.1q. By following these standards the quality of service is maintained, so that Gigabit Ethernet can support real-time traffic like voice and video, which cannot be supported by Ethernet.

What are the typical applications of Gigabit Ethernet?


These are the areas where Gigabit Ethernet can be used. We can upgrade switch to switch links: for example, two Fast Ethernet switches can be linked through Gigabit Ethernet ports, known as uplink ports; with the help of these uplink ports, provided as Gigabit Ethernet ports, you can create such links as shown here.

Upgrading switch to server links: for a server which can transmit at a high rate you can install a Gigabit Ethernet network interface card and then link it to a gigabit port of the switch. This is another place where Gigabit Ethernet can be used.

Upgrading a switched Fast Ethernet backbone: suppose this switch (Refer Slide Time: 53:25) was earlier a Fast Ethernet switch; it can be replaced by a Gigabit Ethernet switch which will serve as the backbone network.

Upgrading a shared FDDI backbone: as I said, FDDI is more or less an outdated concept now, not used much in practice, so it can be replaced by Gigabit Ethernet switches.

These are the typical applications where Gigabit Ethernet can be used and migrated into a Fast Ethernet or FDDI network. Now it is time to give you the review questions.

(Refer Slide Time: 54:15)

1) How does FDDI offer higher reliability than the token ring protocol?
2) Distinguish between a switch and a hub.
3) Why is 4B/5B encoding used in Fast Ethernet instead of Manchester encoding?
4) What is carrier extension? Why is it used in Gigabit Ethernet?
5) How is flow control performed in a Gigabit Ethernet network?
Here are the answers to the questions of lecture 28.
(Refer Slide Time: 54:40)

1) List the functions performed by the physical layer of the 802.3 standard.

Functions of the physical layer are:
 Data encoding and decoding to facilitate synchronization and efficient transfer of the signal through the medium
 Collision detection
 Carrier sensing
 It also decides the topology and the medium to be used

(Refer Slide Time: 55:05)


2) Why do you require a limit on the minimum size of an Ethernet frame?
As I mentioned, to detect a collision it is essential that a sender is still transmitting its frame when the colliding signal from another station reaches it; that is how a collision is detected. So, considering the maximum delay with five Ethernet segments in cascade, the minimum frame size has been fixed at 64 bytes so that the above condition is satisfied.

(Refer Slide Time: 56:05)

3) Why is token bus preferred over CSMA/CD?

CSMA/CD is a probabilistic protocol. A station which wants to send data may get access only after an unbounded time; because of multiple collisions a station may take an enormous time to access the medium, and an unfortunate station may not get access at all to send its data, which is not acceptable in factory automation. Moreover, CSMA/CD does not allow priority, but priority is needed so that ordinary data and real-time data can both be handled; the lack of a priority scheme is another drawback of the CSMA/CD protocol. To overcome these drawbacks the token bus protocol is preferred over CSMA/CD, though not always.
(Refer Slide Time: 56:50)

4) What are the drawbacks of token ring topology?

The token ring protocol cannot work if a link or a station fails. There is no built-in reliability provided in token ring, so it is vulnerable to link and station failures. This is overcome in FDDI.

5) How can the reliability of token ring topology be improved?

Reliability of the ring topology can be improved by implementing the ring using a wiring concentrator, as you have seen; this allows not only detecting a fault but also isolating the faulty link or station with the help of a bypass relay.

So here are the answers to the questions given in lecture 28, and with this we conclude our discussion on high speed local area networks. In this lecture we have discussed FDDI, Fast Ethernet, Gigabit Ethernet and also the concept of switched local area networks. In the next lecture we shall discuss wireless LANs, thank you.
Data Communication
Prof. A. Pal
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture # 30
Wireless LANs

Hello viewers, welcome to today’s lecture on wireless local area networks.

(Refer Slide Time: 01:01)

In the last two lectures we discussed two different types of LANs. In the first lecture on LANs we discussed the legacy LANs based on IEEE 802.3, 802.4 and 802.5, and in the last lecture we discussed the high speed LANs based on token ring, that is FDDI, and on CSMA/CD, particularly Fast Ethernet and Gigabit Ethernet. Today we shall discuss wireless LANs. Here is the outline of the lecture.
(Refer Slide Time: 02:30)

First we shall discuss why wireless LAN, and then the limitations and challenges of wireless LANs. Then we shall consider two important standards. One is the IEEE 802.11 standard, including various aspects of this standard such as transmission media, topology, medium access control, and also the various extensions that have taken place. The other very important technology is Bluetooth; although it is not a LAN technology as such, it is essentially used for personal area networks (PANs). We shall consider the topology, transmission medium and medium access control technique used in Bluetooth.

On completion the student will be able to explain the need for wireless LANs, identify the limitations and challenges of wireless LANs, and understand different aspects of the IEEE 802.11 wireless LAN, particularly the transmission media, topology, medium access control and security techniques used in this standard.
(Refer Slide Time: 03:20)

They will also be able to understand different aspects of Bluetooth, particularly the topology, transmission media and medium access control. First let us focus on why wireless LAN.

In the last couple of years the number of portable devices has increased exponentially because of the advancement of VLSI technology and the miniaturization it makes possible. Small sized, battery operated computers, laptops, palmtops, cell phones and PDAs are all now available. These low-cost portable devices are essentially the driving force behind wireless LANs. Moreover, wireless LAN offers many advantages. First of all there is mobility: when a person is traveling, possibly taking food in a restaurant, or attending a conference or a meeting, even then he can communicate with other computers. Mobility is the main feature of wireless LAN; it allows mobility.

People are on the move nowadays and while moving, while going from one place to
another they can communicate with others. And also it has got other benefits. For
example, it is installation speed and simplicity. Wireless LAN can be installed in a matter
of hours. On the other hand, wired LAN will take very long time to install. You have to
do the wiring, cabling and also install equipments and various other things. It is quite
simple to install and deploy.

It offers installation flexibility. By installation flexibility I mean very easily it can be


configured in various ways. It can have different types of topology, you can configure it
and reconfigure it many ways so it gives you a very flexible way of deploying local area
network. Then it has got reduced cost of ownership. Initial cost of deploying wireless
LAN may be little high but in the long run whenever particularly you are in a dynamic
situation you are moving from one office to another, you are expanding the business so in
such situation it offers you reduced cost of ownership.
Scalability is another important advantage. A wireless LAN can be very easily scaled: initially a business can start with a few computers, and as the business grows, or whenever the need for more computers and other equipment increases, they can very easily be connected to the LAN using the wireless LAN technique. You may be asking, if it has got so many advantages, why was it not so popular in the past? There are several reasons. First of all, earlier the cost was very high; thanks to the advancement of VLSI technology the cost is decreasing at a fast rate, in line with Moore's Law, which you may have heard of, and as a result this is one of the factors pushing wireless LAN technology.

(Refer Slide Time: 07:44)

Of course it has got limitations as well. It has a low data rate compared to the high speed of wired LANs: we have seen that ten gigabit per second LANs are now available, whereas the data rate for wireless LANs is limited to a few Mbps. But these few Mbps may serve the purpose in many situations. Another important concern is occupational safety.

People were not very sure whether wireless LAN is affecting their health or not, even that
controversy is going on. For example, whether these cell phones should be used or cell
phone has some effect on the health, this is not known so there were some occupational
safety concerns regarding the use of wireless LAN so there is a question mark but people
have started using it so far no ill effects have been reported.

Another important limitation is licensing requirements. Whenever transmission is done in a particular frequency band, certain frequencies cannot be used freely; you have to take permission, you have to take a license, and there are disputes and restrictions on the use of different frequency bands, so there is a constraint from the viewpoint of licensing requirements. These were the limitations; in spite of them, wireless LANs are gradually becoming popular.
Besides these advantages and disadvantages, wireless LANs pose a number of limitations and challenges. The first is low reliability due to the susceptibility of radio transmission to noise and interference. In case of a wired LAN this problem is much less because data is sent through some kind of guided medium. In case of wireless transmission, other devices may be working in the same frequency band and will interfere with the signal; as a result there is a reliability concern arising from the susceptibility of radio transmission to noise and interference.

(Refer Slide Time: 9:44)

Fluctuation of the strength of the received signal arriving through multiple paths causes fading: the signal can arrive through multiple paths and that leads to fading. It is a common phenomenon; on TV you must have seen ghost images and other artifacts caused by multiple paths. The same problem arises in wireless LANs.

Another problem is security: a wireless LAN is vulnerable to eavesdropping, leading to security concerns. Whenever somebody transmits a signal over the wireless medium it is effectively a broadcast; anybody can intercept it and make improper use of it. Thus, eavesdropping is a concern.

Finally, there is the limited data rate because of the spread spectrum transmission techniques enforced on ISM band users. As we shall see, spread spectrum is an important technique used in wireless LANs, and it leads to smaller data rates compared to wired LAN data rates. These are the limitations and challenges ahead of wireless LANs. Let us see how some of these limitations and challenges are overcome.

First we shall focus on the important standard IEEE 802.11. The most popular variant is IEEE 802.11b, which is commonly used nowadays. Like any other LAN technology it uses two layers, the physical layer and the data link layer, and the data link layer again has two sublayers, the medium access control layer and the logical link control layer, the latter being common to all LANs. Essentially the medium access control layer and the physical layer will be different and the other parts will be the same; the upper layers can be TCP, IP and so on.

(Refer Slide Time: 11:45)

As we have already mentioned there are three important parameters that characterize a
LAN; the transmission media, topology and medium access control techniques. So we
shall focus on these three aspects of the two wireless LANS that we shall discuss in this
lecture.
(Refer Slide Time: 12:05)

First is the transmission media.

(Refer Slide Time: 12:27)

As I mentioned, 802.11, in fact any wireless LAN technique, uses the spread spectrum technique: spread spectrum radio in the 2.4 GHz ISM band, that is, 2400 to 2483.5 MHz (Refer Slide Time: 12:50). The ISM band is very popular and is used by many household devices.

What is the typical characteristic of spread spectrum?


In spread spectrum there are two popular approaches: Frequency Hopping Spread Spectrum and Direct Sequence Spread Spectrum. In both cases what is done is that the frequency spectrum of the signal is spread over a wider range. For example, if the initial frequency spectrum is narrow, then after applying the spread spectrum technique it is spread over a much wider frequency band. This has several advantages.

Power density: the power gets spread over a much wider frequency band. Whenever the power is concentrated in a small frequency band it adversely affects other wireless equipment; this is avoided, and the spreading also provides some kind of redundancy. Thus there are two aspects, lower power density (in the wider band the power density is lower than in the original narrow band) and redundancy; these are the two important benefits of the spread spectrum technique.

The two approaches, Frequency Hopping and Direct Sequence Spread Spectrum, use radically different methods. In case of the frequency hopping spread spectrum technique, 802.11 uses 79 non-overlapping 1 MHz channels to transmit the 1 Mbps data signal. The frequency hopping technique works somewhat like this.

Suppose you have to transmit an FM radio signal: a small part of the music can be sent using one carrier frequency, the next part using another carrier frequency, the next part using yet another carrier frequency, and so on. The receiver also has to keep changing the carrier frequency in the same pattern if it wants to listen; otherwise it will not be able to. This has two benefits.

Number one, the power density over any small part of the frequency band becomes smaller. The other advantage is that it provides higher security: a receiver tuned to one particular frequency will simply hear some noise and will not be able to receive the music. This is the first approach.

(Refer Slide Time: 16:02)


The carrier frequency keeps on changing, and the transmitter hops over the 79 frequency channels while sending the signal. Then there is Direct Sequence Spread Spectrum; particularly in case of 802.11 it uses a simple eleven-chip Barker sequence with QPSK or BPSK modulation. That means some kind of modulo-2 addition is first performed: using the chip sequence, where plus 1 corresponds to 1 and minus 1 to 0, the data is combined with the sequence, and then QPSK or BPSK modulation is performed before transmission. A third approach is based on infrared signals in the near visible range of 850 to 950 nanometers; this is also defined but not that popular. We shall mainly focus on the spread spectrum techniques, which are commonly used. This is the basic schematic diagram of the transmitter and receiver used in the frequency hopping spread spectrum technique.

(Refer Slide Time: 17:50)

As you can see, there is a pseudo random sequence generator which can select one of the 79 frequency channels. The pseudo random code selects one of the channels, and with the help of the frequency synthesizer the corresponding carrier frequency is generated; the modulated (FSK or BPSK) data is multiplied with this carrier, passed through a band pass filter and transmitted, so here you get the spread spectrum signal. At the other end the signal is received, and by using the same pseudo random binary sequence generator the receiver generates the sequence in the same order as the transmitter; the carrier frequency is generated with the help of the frequency synthesizer and the channel table, multiplied with the received signal, passed through the band pass filter and finally demodulated, whether FSK or BPSK, and by performing demodulation we get back the digital data. This is how frequency hopping spread spectrum operates.
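A minimal sketch of the hopping idea follows: a pseudo-random generator seeded identically at the transmitter and the receiver selects one of 79 channels for each hop, so both sides visit the channels in the same order. The seed value, the omission of dwell-time handling and the use of Python's generic generator are illustrative assumptions.

import random

# Minimal sketch of frequency-hopping channel selection (79 x 1 MHz channels in the 2.4 GHz ISM band).
def hop_sequence(shared_seed, hops, n_channels=79, base_mhz=2400):
    rng = random.Random(shared_seed)             # same seed at sender and receiver
    return [base_mhz + rng.randrange(n_channels) for _ in range(hops)]

tx = hop_sequence(shared_seed=42, hops=5)
rx = hop_sequence(shared_seed=42, hops=5)
print(tx, tx == rx)    # both ends hop through the same carrier frequencies;
                       # a receiver without the seed just hears noise on any fixed channel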
(Refer Slide Time: 18:50)

On the other hand, in case of Direct Sequence Spread Spectrum an eleven-chip Barker sequence is used; as you can see, this is the Barker sequence (Refer Slide Time: 19:10). When a 1 is sent, it is simply converted into the sequence of eleven chips, and that is the signal which is sent; similarly, for a 0 an eleven-chip sequence which is just the opposite is transmitted, chip by chip. So this is the transmission of minus 1 and this is the transmission of plus 1; plus 1 essentially represents 1 and minus 1 represents 0, a convention used here for pedagogical purposes.

Using this eleven-chip sequence, the transmitter and receiver schematic is shown here: this is the carrier oscillator (Refer Slide Time: 19:58), and the pseudo random sequence generator essentially generates the Barker sequence, which is multiplied with the data; the result is then modulated by BPSK and transmitted.
(Refer Slide Time: 20:12)

The spread spectrum signal is transmitted; at the other end it is received, multiplied with the same sequence, and after demodulation you get back the output data. This is how Direct Sequence Spread Spectrum works. So much for the transmission media.

(Refer Slide Time: 21:00)

Let us look at the topology. The 802.11 standard supports the formation of two distinct types of BSS, Basic Service Set. The first type of Basic Service Set is the ad hoc network, in which, as you can see, you have got four mobile stations which can directly communicate with each other and form some kind of network.
The second type of Basic Service Set is known as the infrastructure Basic Service Set, and it includes a special type of device known as an access point. Here you can see there is an access point (Refer Slide Time: 21:17), and all communication is done through the access point. The access point can be connected to a distribution system; this distribution system can be, say, an Ethernet LAN, there is no restriction, and the access points are connected on this Ethernet LAN.

As you can see here, this is a Basic Service Set, this is another Basic Service Set, and this is another. The mobile stations can move from one Basic Service Set to another, the access points can communicate with each other through the distribution system, and the advantage is that the mobile stations can also communicate with stationary stations.

(Refer Slide Time: 22:07)

For example, the systems connected to the Ethernet LAN serving as the distribution system can communicate with stations in any one of these Basic Service Sets. However, here the communication is a little more involved: apart from the address of the destination, the source and destination access points also have to be incorporated, so four addresses will be involved, as we shall see. This arrangement is known as an extended service set, which requires more than one access point. This is the basic topology used in 802.11.

Now you may be asking how stations select their access points. As you have seen in the previous case, under one Basic Service Set you have got several mobile stations, and under another there are also several stations (Refer Slide Time: 23:05). These access points are somewhat similar to the base stations used in a cellular network: nowadays cellular telephony is very popular and people are familiar with base stations, and the access points play a similar role for these mobile hosts or stations.
Now, as you can see, a particular mobile host can move gradually from one Basic Service Set to another. What is explained here is how stations get attached to a particular access point. This can be done in two ways. One is known as active scanning: when a station joins a network, or when it wants to discontinue its association with the existing access point, it sends a probe frame; all access points within reach reply with a probe response frame, the station selects one of the access points and sends that AP an association request frame, and the AP replies with an association response frame. These are the frame exchanges that take place, and through them a particular mobile host gets associated with a particular access point.

(Refer Slide Time: 24:30)

The second approach is passive scanning, where access points broadcast beacon frames periodically and stations may respond with an association request frame to join an access point. In this case a particular access point, in effect, advertises and asks: is anybody there to join me? It sends the beacon frames, and association request frames are then generated by the mobile hosts that want to join it. Once a station joins an access point, communication is done through that access point.

Now, as I mentioned, medium access control in wireless LANs faces several challenges: the medium is prone to more interference and is less reliable compared to wired LANs, and a wireless LAN is susceptible to unwanted interception, leading to security problems. We shall discuss how these can be overcome. Moreover, there are the so-called hidden station and exposed station problems, which I have discussed in detail in an earlier lecture.
(Refer Slide Time: 25:40)

In fact, 802.11 uses a slightly extended form of the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) protocol. Here the sender sends a short frame called request to send, about 20 bytes, to the destination; the request to send also contains the length of the data frame.

(Refer Slide Time: 26:20)

The destination station responds with a short clear to send frame, and after receiving that clear to send frame the sender starts sending the data frame.
Of course, collisions can still occur in this case, as I have explained in detail; in such a case the clear to send frame is not received within a certain period of time, and the sender has to recover from the collision by using the binary exponential backoff algorithm used in the wired LAN, Ethernet. That is why this approach is also known as wireless Ethernet; there are many similarities with Ethernet.

Now, apart from similarities there are some dissimilarities. One is carrier sensing.

In case of a wired LAN or Ethernet, carrier sensing is very easy: all stations are attached to the medium, so the carrier can be sensed directly. In this case carrier sensing is done in two ways; one is known as physical carrier sensing and the other virtual carrier sensing. Physical carrier sensing is performed at the radio interface by analyzing the detected packets and the strength of the signal. That means all the mobile stations will be receiving signals generated by other stations within the Basic Service Set, and the signals will be analyzed at the radio interface by examining the detected packets and comparing the strength of the signals; whenever there is a collision the received signal will be affected. This is one way of detecting collision.
(Refer Slide Time: 28:00)

The other alternative is virtual carrier sensing, which is performed by the source
station informing the stations within the Basic Service Set of the length of the data it
intends to send. This is actually related to the CSMA/CA protocol. Here, as you have
seen (Refer Slide Time: 28:38), whenever the request-to-send frame is sent, the length
of the data is mentioned as a part of it. That means whenever the clear-to-send signal
is generated, all the stations except the destination station withdraw, and all of them
know the length of the data; obviously, for that duration those stations will not
generate any data. These are the two approaches used for carrier sensing. This one is
virtual: carrier sensing is performed without actually sensing the carrier, because a
particular station will transmit for a specific period of time based on the data rate
and the length of the frame.
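
The idea can be expressed as a small calculation: every station that hears the request-to-send sets a timer, the network allocation vector, for the announced duration and stays silent until it expires. The ACK and inter-frame gap values below are placeholders, not the exact 802.11 timing constants.

```python
# Hypothetical sketch of virtual carrier sensing with a network allocation
# vector (NAV).  Durations are in microseconds; the overheads chosen here
# (ACK time, inter-frame gap) are illustrative assumptions only.

def nav_duration_us(frame_length_bits, data_rate_mbps,
                    ack_time_us=50, gap_us=10):
    data_time = frame_length_bits / data_rate_mbps     # bits / (bits per us)
    return data_time + gap_us + ack_time_us

class Station:
    def __init__(self, name):
        self.name = name
        self.nav_expires_at = 0.0                      # virtual carrier busy until

    def hear_rts(self, now_us, announced_duration_us):
        self.nav_expires_at = now_us + announced_duration_us

    def may_transmit(self, now_us):
        return now_us >= self.nav_expires_at

duration = nav_duration_us(frame_length_bits=12_000, data_rate_mbps=2)
bystander = Station("STA-3")
bystander.hear_rts(now_us=0.0, announced_duration_us=duration)
print(f"NAV set for {duration:.0f} us;",
      "can transmit now?", bystander.may_transmit(100.0))
```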

Coming to security, wireless LANs are subject to possible breaches from unwanted
monitoring, because all the other stations within range are able to hear the
transmission. So how do we overcome this problem?

(Refer Slide Time: 30:35)

To overcome this problem, IEEE 802.11 specifies an optional MAC layer security system
known as Wired Equivalent Privacy (WEP). The basic idea is that the wireless LAN should
have the same privacy or security level as a wired LAN. This is done with the help of a
40-bit shared key authentication service; by default each Basic Service Set supports up
to four 40-bit keys that are shared by all the clients in the Basic Service Set. It
provides privacy but no integrity check. Later on we shall discuss this 40-bit shared
key authentication service in more detail.

There is an extended version of the standard, 802.11i, which proposes the Advanced
Encryption Standard (AES). This is a revision for authentication and encryption as a
long-term solution: by using AES, both authentication and encryption can be provided.
Obviously, in this case there will be an overhead for encryption and decryption, but it
gives you security as well as authentication.
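
WEP is generally implemented with the RC4 stream cipher keyed by a per-frame initialization vector concatenated with the shared 40-bit key. The toy sketch below only illustrates the keystream-XOR idea behind any stream cipher; the generator used is a stand-in, not RC4, and the key and IV values are made up for the example.

```python
import random

# Toy illustration of stream-cipher encryption as used conceptually in WEP.
# A real WEP frame uses RC4 keyed with (24-bit IV || 40-bit shared key);
# the generator below is a stand-in chosen only to show the XOR symmetry,
# and is in no way secure.

def toy_keystream(key_bits_40, iv_bits_24, length):
    rng = random.Random((iv_bits_24 << 40) | key_bits_40)   # NOT RC4
    return bytes(rng.randrange(256) for _ in range(length))

def xor_bytes(data, stream):
    return bytes(d ^ s for d, s in zip(data, stream))

shared_key = 0x1234567890            # 40-bit shared key (one of up to four)
iv = 0xABCDEF                        # 24-bit per-frame initialization vector
plaintext = b"hello wireless LAN"

stream = toy_keystream(shared_key, iv, len(plaintext))
ciphertext = xor_bytes(plaintext, stream)          # sender encrypts
recovered = xor_bytes(ciphertext, stream)          # receiver applies same stream
print(recovered == plaintext)                      # True: XOR is its own inverse
```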

Now here are the IEEE 802.11 frames. As we know there are three basic operations to be
performed by these frames. First of all we have the management frames.
(Refer Slide Time: 32:45)

Management frames are used for:

 Station association, disassociation with the access points


 Timing and synchronization
 Authentication and deauthentication

I have already explained the protocols in brief and obviously with the help of the
management frames these functions are performed. This is required in the initial phase
when a particular station gets associated with an access point.

Control frames are used for:

 Handshaking and
 Positive acknowledgement during data exchange

You have seen that the CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance)
protocol uses a four-way handshaking approach, so this handshaking and positive
acknowledgement during data exchange is performed with the help of these control frames.
Finally, data frames are used for transmission of data. So there are three different
types of frames in IEEE 802.11, and the frame control bits decide the various types of
frames.

The MAC header provides information on frame control, duration, addressing and sequence
control. This is, in brief, what is performed by the various frames used in IEEE 802.11.
Now, as I mentioned, 802.11 has been extended in various directions. The original
standard was established in June 1997; it specifies the 2.4 GHz ISM band and a data rate
of 1 Mbps or 2 Mbps. It was extended to IEEE 802.11b in 1999, having the following key
features:

It is backward compatible with 802.11 and uses the same 2.4 GHz ISM band, and it
supports new data rates of 5.5 Mbps and 11 Mbps by using special coding. In particular,
two coding techniques are used: the mandatory coding mode, known as Complementary Code
Keying (CCK) modulation, and the optional Packet Binary Convolutional Coding.

(Refer Slide Time: 33:50)

With the help of this coding, high data rates of 5.5 Mbps and 11 Mbps are possible using
the same ISM band. It is very popular because of its backward compatibility with the
original IEEE 802.11 standard. Subsequently IEEE 802.11a was proposed as the successor
of 802.11b. It uses the unlicensed 5 GHz band and a special type of coding known as
Orthogonal Frequency Division Multiplexing (OFDM), a multi-carrier scheme. It is very
similar to frequency division multiplexing but there are some differences, and it
supports a variety of data rates from 6 Mbps to 54 Mbps (6, 12, 24, 36 and 54 Mbps); for
54 Mbps the typical range is small, 20 to 30 m, while for the lower rates the range is
about 100 m.
(Refer Slide Time: 35:28)

Another standard, IEEE 802.11g, is compatible with IEEE 802.11b, whereas 802.11a is not
compatible with IEEE 802.11b. The success of IEEE 802.11b has led to another extension
that provides 22 Mbps transmission while retaining backward compatibility with the
popular 802.11b standard. The various extensions are shown here: 802.11, which uses
Frequency Hopping Spread Spectrum and Direct Sequence Spread Spectrum; 802.11a, which
uses OFDM; IEEE 802.11b, which uses Direct Sequence Spread Spectrum; and 802.11g, which
again uses OFDM.

Now we move on to another important standard, Bluetooth. Bluetooth has been developed
for personal area networking, particularly designed to connect computers, cameras,
printers and various household equipment, so it is essentially used as a personal area
network, typically in your house. Nowadays the number of intelligent devices in a house
is increasing, and they can be networked with a personal area network. It is an ad hoc
type of network operational over a small area such as a room, and it is based on the
IEEE 802.15 standard; so you can see that Bluetooth actually conforms to IEEE 802.15.
(Refer Slide Time: 37:20)

Here is the Bluetooth topology; it supports two types of topology. The first is the
piconet and the second is the scatternet. A piconet is a very small ad hoc network with
at most eight stations. As you can see, it has got a master and the others are slaves,
so you can have one master and seven slaves. Of course, apart from the seven active
slaves, additional slaves can be kept on standby in a parked state.

(Refer Slide Time: 38:05)

However, if a parked slave wants to join, then one of the active slaves has to be taken
out. All slave stations synchronize their clocks with the master. All communication is
performed through the master, and the communication can be one-to-one or one-to-many,
so there are two possibilities. There may be a station in the parked state, as I
mentioned, which can join only when one of the active slaves is taken out.

The second topology is the scatternet, in which one slave is made the master of another
piconet. Here, as you can see, this is one piconet (Refer Slide Time: 38:58), this is
its master and this is one of its slaves, and this slave is now made the master of
another piconet. So here you have got eight stations, out of which one slave becomes the
master of another piconet. In this way you can form a scatternet, and the number of
stations can then be more than eight.

(Refer Slide Time: 39:18)

So Scatternet is formed by combining several Piconets as it is shown in this diagram.


Now let us look at the transmission media.
(Refer Slide Time: 39:40)

The transmission medium is the 2.4 GHz ISM band, the same frequency band that is used in
IEEE 802.11, so there is a possibility of interference. Hence, if you are using
Bluetooth and an IEEE 802.11 network in the same region, interference is possible
because of the use of the same frequency band. Bluetooth uses 79 channels, each of
1 MHz, and it uses the Frequency Hopping Spread Spectrum method, as in 802.11; it hops
1600 times per second, so a frequency is used for 625 microseconds only. It uses a
sophisticated version of frequency shift keying called GFSK for modulation. The carrier
frequencies start at 2402 MHz (2.402 GHz) and go up in 1 MHz steps, so 2402 + 78 means
they can go up to 2480 MHz (2.480 GHz).
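
These figures can be checked with a couple of lines of arithmetic. The hop order below is sequential purely for illustration; real Bluetooth hardware derives a pseudo-random hop sequence from the master's address and clock.

```python
# Bluetooth hop-frequency arithmetic: 79 channels of 1 MHz starting at 2402 MHz,
# 1600 hops per second, so each frequency is occupied for 625 microseconds.

CHANNELS = 79
BASE_MHZ = 2402
HOPS_PER_SECOND = 1600

dwell_time_us = 1_000_000 / HOPS_PER_SECOND
print(f"dwell time per hop: {dwell_time_us:.0f} us")          # 625 us

def carrier_mhz(hop_index):
    return BASE_MHZ + (hop_index % CHANNELS)                   # 2402 .. 2480 MHz

print([carrier_mhz(k) for k in range(5)])                      # first few hops
print("highest carrier:", carrier_mhz(CHANNELS - 1), "MHz")    # 2480 MHz
```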
(Refer Slide Time: 42:15)

There is a possibility of interference with IEEE 802.11b because they use the same
transmission medium. As far as medium access control is concerned, Bluetooth uses a
special form of TDMA called TDD-TDMA (time division duplex TDMA). As we shall see, it
performs a kind of half-duplex communication using this TDD-TDMA approach, and
communication in each direction uses different hops, that is, different frequencies. The
simplest case is one master and one slave: the master uses even-numbered slots and the
slave uses odd-numbered slots. The process is more complex when there is more than one
slave. Let us see how it is actually done.
(Refer Slide Time: 43: 00)

As you can see here, this is the master and this is the dwell time; the dwell time is
625 microseconds, out of which 366 microseconds is used for sending data and the
remainder is used for hopping and control. The master uses carrier frequency F0 in slot
t0, which is an even-numbered slot. Then the slave sends in slot t1, an odd-numbered
slot, using frequency F1. Frequency hopping is thus taking place; the master sends again
in slot t2 using frequency F2. In this way transmission alternates master, slave,
master, slave, so a kind of half-duplex communication takes place between the master and
the slave using frequency hopping.

These are the frequencies being used, as shown here (Refer Slide Time: 43:31): F0 is
2402 MHz, F1 is 2403 MHz, and so on. As I mentioned, this is single-slave communication;
when you have multiple slaves the process is a little more complex.
(Refer Slide Time: 43:40)

First the master initiates the communication: it sends a frame using frequency F0 in
slot t0. Essentially this is a select-and-poll approach. The station that is selected or
polled responds in the next slot, t1, using frequency F1, and sends its data. In the
next sequence the master again sends in slot t2, an even-numbered slot, using frequency
F2. Now it can select another slave; that second slave can then respond and send using
frequency F3 in slot t3. In this way several slaves can communicate as they are selected
or polled by the master station. Here also half-duplex communication takes place; the
odd-numbered slots are shared by the different slaves, which send their data one after
another.

As far as the Bluetooth frame format is concerned, there are three types: one-slot,
three-slot and five-slot frames. In a one-slot frame, 366 bits can be sent, and 259
microseconds are used for hopping and control, as shown here (Refer Slide Time: 45:00).
So this time is used for sending data and this part is used for hopping and control; as
a consequence, a total of 366 bits can be sent in 625 microseconds.

On the other hand, if it is a three-slot frame, the hopping and control time stays fixed
at 259 microseconds; after subtracting that time, the total number of bits that can be
sent in the three-slot frame is as shown. In a five-slot frame the total time available
is 5 x 625 microseconds, within which you can send 2866 bits, with 259 microseconds
again used for hopping and control.
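
The slot arithmetic quoted above follows directly from the 625-microsecond slot and the fixed 259-microsecond hopping/control allowance, assuming a raw rate of 1 bit per microsecond as in the lecture's figures:

```python
# Bits available in a Bluetooth one-, three- or five-slot frame, assuming a
# 1 Mbps raw rate (1 bit per microsecond) and a fixed 259 us reserved for
# frequency hopping and control, as quoted in the lecture.

SLOT_US = 625
HOP_CONTROL_US = 259

def frame_bits(slots):
    return slots * SLOT_US - HOP_CONTROL_US

for n in (1, 3, 5):
    print(f"{n}-slot frame: {frame_bits(n)} bits")
# 1-slot: 366 bits, 3-slot: 1616 bits, 5-slot: 2866 bits
```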
(Refer Slide Time: 46:20)

So you can see that a frame can occupy one, three or five slots, and any one of these
frame formats can be used. The detailed frame format is shown here.

(Refer Slide Time: 46:45)

The access code has 72 bits; it contains synchronization bits, the identifier of the
master and so on. Then there is a header of 54 bits, in which an 18-bit pattern is
repeated three times. In these 18 bits you have three bits for the address, then four
bits for the type of data being sent, and then three single-bit fields: flow control,
acknowledgement and sequence number. With the help of these, Bluetooth performs flow
control, error control and acknowledgement using a stop-and-wait technique, and there is
an 8-bit checksum used for error detection in the header field.

So, as you can see, all the features required for reliable communication are provided in
this simple frame format. Now comes the data field. In the data field the number of bits
can vary from 240 to 2740: N equals 240 for a one-slot frame, N equals 1490 for a
three-slot frame, and N equals 2740 for a five-slot frame. This is the frame format used
in Bluetooth.

Now coming to the applications of wireless LANs, they can be used in a variety of ways.
One is LAN extension. We have already discussed wired LANs and their different types:
the legacy LANs based on IEEE 802 and also the high speed LANs based on FDDI, Fast
Ethernet or Gigabit Ethernet. These can now be extended, as a simple example shows.

(Refer Slide Time: 48:55)

Here, for example, this is one wired LAN and this is another wired LAN, located in two
separate buildings with a road in between. Obviously it is very difficult to dig through
the road and set up a wired connection. Instead, one can set up two wireless LAN units
on the rooftops and link the two LANs with their help. In this way communication between
the two LANs can be established wirelessly; here the wireless LAN has been used for the
extension of wired LANs.

A wireless LAN can also be used in buildings having a large open floor. In an
auditorium, for example, it is not practical to set up wired LAN interconnections, but
one access point can be installed somewhere and all the mobile stations can then
communicate; in auditoriums and similar places wireless LANs can be used very easily.
Then there are small offices: in a very small office it is not really necessary to
install a wired LAN; you can have one small access point, and several desktops or even
laptops having a wireless LAN interface can be connected. In that way it is very
flexible to set up a small office using a wireless LAN.

(Refer Slide Time: 50:49)

Another important application of wireless LANs is in historical buildings. Such
buildings cannot be tampered with: you cannot really deploy a wired LAN, dig channels
for wires or make holes in the walls. In situations like museums and other historical
buildings you can set up a wireless LAN very easily and carry out communication.

Since the deployment time of a wireless LAN is very small, it can also be used at a
rescue operation site. Suppose there is a flood, an earthquake or some other disaster;
in such a situation you can very quickly set up a wireless LAN and establish
communication. These are some possible applications of wireless LANs. Obviously I have
given only a small number of applications; the application domain is growing with time
and is possibly limited only by imagination. Now, here is the trend of wireless LANs.
(Refer Slide Time: 53:20)

Although initially wireless LANs were perceived to be a substitute for wired LANs, they
are now recognized as an indispensable adjunct to wired LANs. That means when the
wireless LAN was announced, people thought wireless LANs would gradually replace wired
LANs, but this has not happened because of limitations, particularly the lower data
rate. What is now being considered is that the wireless LAN is an adjunct to wired LANs:
both of them will coexist and interoperate, working with each other.

A hierarchy of complementary wireless standards, designed to complement each other, has
been established by IEEE. This hierarchy of standards will help in the proliferation of
wireless LANs, and that proliferation is driving the demand for broadband connectivity
back to the internet. The wireless metropolitan area network standard can fulfil this
need: last-mile broadband wireless access can help accelerate the deployment of 802.11
hotspots and home and small office wireless LANs.

Nowadays in many public places like airports, hospitals and so on, such hotspots are
being deployed, where wireless LAN access points are available with which mobile users
can communicate. As I mentioned, for the hierarchy of personal area network, local area
network, metropolitan area network and wide area network, IEEE has a series of
standards: Bluetooth, which is 802.15; 802.11 for the LAN; 802.16 for the metropolitan
area network; and IEEE 802.20, which is being proposed for the wide area network.
(Refer Slide Time: 54:25)

So a number of complementary wireless standards are being developed to meet the goal of
"anywhere, any time": you will be able to communicate from anywhere at any point in
time. Now it is time for the review questions.

(Refer Slide Time: 55:05)

1) Why is spread spectrum technology used in wireless LANs?


2) How is the hidden station problem overcome?
3) What is a network allocation vector?
4) What is WEP (Wired Equivalent Privacy) and how is it achieved?
5) Distinguish between piconet and scatternet used in Bluetooth technology.

So these are the five questions and here are the answers to the five questions given in the
last lecture.

(Refer Slide Time: 55:50)

1) How does FDDI offer higher reliability than the token ring protocol?

As I mentioned, the dual counter-rotating ring topology provides high reliability: if a
particular link or station gets disconnected, the secondary ring is used to reconfigure
the topology, and communication continues by wrapping the primary ring onto the
secondary ring. This is how high reliability is achieved.
(Refer Slide Time: 56:25)

2) Distinguish between a switch and a hub.

A hub is essentially a multi-port repeater and works as a layer-one relay. A switch, on
the other hand, is a fast bridge working in layer two. On receiving a frame on one port,
it forwards it to a particular port, contrary to the hub, which sends it to all other
ports; the outgoing port is chosen from a table inside the switch after checking the
frame's integrity. The switch in general operates in the data link layer, or layer two,
as I mentioned.

(Refer Slide Time: 57:05)


3) Why is 4B/5B encoding used in Fast Ethernet instead of Manchester encoding?

This is used to reduce the baud rate. If Manchester encoding were used, it would require
200 megabaud, but by using 4B/5B it is possible to achieve the 100 Mbps data rate with
only 125 megabaud. This reduces the cost of implementation.
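
The arithmetic behind those two figures is simple enough to show directly, assuming the usual definitions: Manchester produces two signal elements per bit, while 4B/5B maps four data bits onto five code bits, each sent as one signal element.

```python
# Baud-rate comparison for 100 Mbps Fast Ethernet.
# Manchester: every bit becomes 2 signal elements  -> 2 baud per bit.
# 4B/5B:      4 data bits map onto 5 code bits      -> 1.25 baud per bit.

data_rate_mbps = 100

manchester_mbaud = data_rate_mbps * 2          # 200 Mbaud
four_b_five_b_mbaud = data_rate_mbps * 5 / 4   # 125 Mbaud

print(manchester_mbaud, four_b_five_b_mbaud)
```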

(Refer Slide Time: 57:34)

4) What is carrier extension? Why is it used in Gigabit Ethernet?

For detection of collision a minimum frame size is required; as the speed increases,
this minimum size has to increase. Instead of increasing the frame size itself, the
carrier is extended up to 512 bytes so that collisions can still be detected in Gigabit
Ethernet. Essentially, carrier extension is done for collision detection.
(Refer Slide Time: 58:08)

5) How is flow control performed in a Gigabit Ethernet network?

The full-duplex mode of communication in a Gigabit Ethernet network is exploited to
implement an X-ON/X-OFF protocol, which is somewhat similar to a stop-and-wait protocol.
You have two stations with full-duplex communication, so if one station is sending at
too fast a rate, the receiver can use the other link to request that it send at a lower
rate or stop sending until the receiver is ready. This is how Gigabit Ethernet uses the
X-ON/X-OFF protocol.

With this we conclude our lectures on LAN technology. We discussed three different types
of LANs: legacy LANs, high speed LANs and finally, in this lecture, wireless LANs. In
the next two lectures we shall consider other uses of medium access control techniques.
Thank you.
Data Communications
Prof. A. Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture – 31
Cellular Telephone Systems

Hello viewers, welcome to today’s lecture on cellular telephone systems.

(Refer Slide Time: 01:01)

In the last lecture we have discussed how wireless LAN can be used to communicate over
a small geographic area. In today’s lecture we shall see how wireless communication can
be used to cover a large geographic area. Here is the outline of today’s lecture.
(Refer Slide Time: 02:40)

First I shall give an overview of cellular telephone systems. Then there are several
important concepts used in cellular telephone systems. The first is frequency reuse,
which I shall discuss; it is very important for efficient utilization of channels. Then
there are several other concepts, like mobility management, handoff management and
location management, which are required because users move from one place to another.
Then I shall discuss several practical systems.

The first is the first generation cellular network AMPS, which was developed in North
America based on analog techniques; then there are several second generation cellular
networks like D-AMPS, GSM and CDMA. We shall discuss them in detail, and finally discuss
the goals of third generation cellular networks, which are still under development and
are also being deployed in some places.
(Refer Slide Time: 03:26)

On completion of this lecture the students will be able to explain the operation of
cellular telephone networks and their various features; explain the operation of the
first generation cellular telephone network based on analog techniques, that is, AMPS
(Advanced Mobile Phone System); and distinguish between first generation and second
generation cellular networks.

Second generation cellular networks are based on digital techniques, in contrast to the
analog technique used in the first generation AMPS network. The students will also be
able to explain the operation of the second generation cellular networks, the three
types that I have mentioned, and the goals of third generation cellular networks. First
let us look at the motivation behind cellular telephony.
(Refer Slide Time: 04:52)

It is essentially a system level concept that replaces a single high power transmitter
with a large number of low power transmitters. Conventionally a very high power
transmitter is used when a large geographic area is to be covered, but this approach is
not used in cellular telephony.

What is being done?


A large number of low power transmitters are used to cover a large geographic area; that
is one important difference between the conventional approach and the cellular telephone
approach. The goal is to provide wireless communication between two moving devices (the
moving devices being the mobile stations or handsets), or between one mobile unit and a
stationary unit, usually referred to as a landline. So communication between two mobile
stations, or between a mobile station and a stationary station, is to be provided. And
of course, as I said, another important goal is to accommodate a large number of users
over a large geographical area.

Look at the two points: first, the large number of users, which is possible because of
frequency reuse, which we shall discuss; and second, the large geographical area. These
are the two important objectives to be satisfied.

As I mentioned, it uses a large number of low power wireless transmitters to create
cells, small areas varying from 10 to 20 km across. The power levels of both the
transmitters and the handsets are variable, to allow cells to be sized according to the
subscriber density and demand. That means the geographic area covered by a transmitter
is no longer fixed; it is variable. We shall discuss how this can be done.
(Refer Slide Time: 06:06)

As mobile users travel from cell to cell, their conversations are handed off. When a
user or mobile station moves from one place to another, it is very common for it to move
away from one cell and into a new cell; this has to be handled in a seamless manner, and
the approach used to achieve this seamless transfer is known as handoff. So the
conversation is handed off between cells.

The channels used in one cell can be reused in another cell some distance away. As I
mentioned, this concept of reusing frequency channels leads to a higher capacity for the
cellular telephone network. Now let us have a look at the simplified structure of a
cellular system. It comprises the following components:

Mobile Stations (MS): a large number of mobile stations, which are essentially the
mobile handsets used by the users to communicate with each other. Cell: each cellular
service area is divided into small regions called cells, each of which can be 5 to 10 km
across.
(Refer Slide Time: 07:40)

And there are base stations: each base station contains an antenna controlled by a small
office. In addition there is the mobile switching center; each base station is
controlled by a switching office called the mobile switching center, as will be clear
from the next diagram.

(Refer Slide Time: 08:18)

Here we have a number of cells, shown as honeycomb-like hexagons. In reality the
coverage areas are roughly circular, but for modeling purposes the honeycomb shape is
convenient. In each cell you have a transmitter, and apart from the transmitter there is
a computer that controls it; you also have a number of mobile stations, as you can see.
A number of such cells are controlled by a single mobile switching center, which can be
considered as an end office of a Public Switched Telephone Network; this in turn is
connected to the Public Switched Telephone Network, so that a mobile station can
communicate, through the base station, the mobile switching center and the Public
Switched Telephone Network, with a stationary or landline phone.

These are the landline phones and these are the mobile phones (Refer Slide Time: 9:15).
This is the overall structure of the system. Now let us look at the concept of frequency
reuse which is very important from the view point of channel capacity.

(Refer Slide Time: 10:23)

Cellular systems rely on intelligent allocation and reuse of channels. Each base station
is given a group of radio channels to be used within its cell, and base stations in
neighboring cells are assigned completely different sets of frequencies. The frequency
reuse concept is explained here.

For example, this is a cluster (Refer Slide Time: 10:09); a cluster is the group of
cells over which all the available frequency channels are used. That is, the N cells
which collectively use all the available frequencies are called a cluster. Let us assume
S is the total number of channels available, divided among the N cells; in this
particular case N is equal to 4, labelled A, B, C, D, which means each of these cells
gets one fourth of the available channels. That is, K = S/N is the number of channels
that can be used in each of these cells.

As you can see, the cell label A appears here, here and here, which means the same set
of frequencies is to be used in these three cells. Similarly, the same set of
frequencies is to be used in the three cells labelled B. This is the frequency reuse
concept.

(Refer Slide Time: 12:26)

One point you must notice is that two cells labelled A are separated, in this particular
case, by at least one cell. If you have a larger cluster, two cells reusing the same
frequencies will be separated by several cells. The transmitter power is restricted so
that the signal does not reach another cell reusing the same set of frequencies. The
reuse factor is defined as the fraction of the total available channels assigned to each
cell within a cluster, which is equal to 1/N.

As I said, if S is the total number of channels, then the number of channels available
in a particular cell will be K = S/N, so the reuse factor is 1/N; in this particular
case the reuse factor is 1/4.

As I mentioned, by limiting the coverage area, called the footprint, within a cell
boundary, the same set of channels may be used to cover different cells separated from
one another by a distance large enough to keep the interference level within tolerable
limits, as explained in this diagram (Refer Slide Time: 12:41).

This is the frequency reuse concept used in cellular telephony, by which the total
channel capacity is increased.

For example, here the capacity is M x K x N (Refer Slide Time: 12:55); that means if the
cluster is replicated M times over the service area, the total capacity is multiplied by
M. A particular set of frequencies may be replicated a hundred or a thousand times
depending on the density and the number of users; hence the capacity is multiplied by M
because of this reuse.
Now let us see how the cell structure is dynamically changed. For example, a particular
area may be an urban area where the population density is very high; in that case the
number of users is large, and a cell is divided again into a number of smaller cells, or
microcells as you may call them, and the frequencies can again be reused.

(Refer Slide Time: 14:05)

In a rural area, where users are sparse, you can provide coverage with a single large
cell; obviously the transmitters of both the base station and the mobile station will
then require larger power. Here, on the other hand, communication is restricted to a
small area, shown in magnified form, which is the case for a densely populated area. So
in a densely populated area you use smaller cells and hence require less transmission
power, whereas in sparsely populated areas you use larger cells covering a larger area,
so the footprint will be bigger than in the densely populated case. In this way the
cells can dynamically be of various sizes depending on the number of users or the
population density. Now let us see how exactly transmission and reception take place.

(Refer Slide Time: 16:00)


We are all familiar with cellular telephones nowadays: the caller enters a ten digit
code, which is the phone number, then presses the send button. The mobile station scans
the band to select a free channel and sends a strong signal carrying the number entered.
The mobile station thus does the transmission, and the base station relays the number to
the mobile switching center; the mobile switching center in turn dispatches the request
to all the base stations in the cellular system.

As I said, a mobile switching center may control a large number of base stations, so the
request is broadcast to all of them, and the mobile identification number is then
broadcast over all the forward control channels throughout the cellular system. This
technique is known as paging.

The mobile station responds by identifying itself; that is, the destination identifies
itself over the reverse control channel. There are two different channels, the forward
channel and the reverse channel, as we shall see. The base station relays the
acknowledgement sent by the mobile station and informs the mobile switching center about
the handshake; the mobile switching center then assigns an unused voice channel to the
call, and the call is established.

This is the sequence of events that takes place whenever a mobile station wants to
transmit to another mobile station. Now let us see what is the function of the mobile
station which is receiving the signal.
(Refer Slide Time: 18:12)

In the receive mode, all the idle mobile stations continuously listen to the paging
signal to get messages directed at them. When a call is placed to a mobile station, a
packet is sent to the callee's home mobile switching center to find out where it is. A
packet is then sent to the base station in its current cell, which broadcasts on the
paging channel. The callee's mobile station responds on the control channel. As we shall
see, the spectrum is divided into a number of data channels and control channels; this
communication takes place over a control channel, in response a voice channel is
assigned, and ringing starts at the mobile station.
That means whenever a mobile station is in the receive mode this sequence of events
occurs, ringing finally takes place at the mobile station, and it can receive the call.
This is how receiving takes place.

Now let us consider how the mobility of different stations is managed. That technique is
known as mobility management.
(Refer Slide Time: 18:42)

As you have already seen, a mobile station is assigned a home network over which it can
roam and which is under the control of a mobile switching center; this is called the
location area. When a mobile station migrates out of the footprint of its current base
station into that of another, a procedure is performed to maintain service continuity.
This is known as handoff management. How is it done?

An agent in the home network, called the home agent, which is a piece of software, keeps
track of the current location of the mobile station. The procedure for keeping track of
the user's current location is referred to as location management. Handoff management
and location management together are referred to as mobility management.

Let us first see how handoff management is performed; then we can see how location
management is performed.
(Refer Slide Time: 20:20)

As we have seen, at any instant each mobile station is logically in one cell and under
the control of that cell's base station. When a mobile station moves out of a cell, the
base station notices the mobile station's signal fading away and requests the
neighboring base stations to report the strength they are receiving. The base station
then transfers ownership to the cell receiving the strongest signal, and the mobile
switching center changes the channel carrying the call.

As we have seen, two adjacent cells do not use the same frequencies, so when a mobile
station moves from one base station to another, a different frequency channel has to be
assigned. That is the function of the handoff. There are two handoff techniques commonly
used: one is known as hard handoff and the other as soft handoff.

In a hard handoff, a mobile station always communicates with only one base station.
Whenever it moves from one base station to another, it first terminates communication
with the present base station and then starts communication with the base station it is
moving towards. There is thus a sharp transition, which takes about 300 milliseconds. So
in a hard handoff the communication is with one base station at a time, and as the
mobile station moves it switches from one base station to another and the channel
frequency also changes.

In a soft handoff, on the other hand, a mobile station communicates with two neighboring
base stations. Both base stations and the mobile station know the signal strengths being
received, and the mobile station keeps talking with the two neighboring base stations as
it moves from one to the other, so the transition is performed in a soft manner: while
communicating with both, at a particular instant it switches from one base station to
the other. That is why it is called soft handoff; during the transition the
communication is with two base stations at the same time.
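
The decision itself is essentially a comparison of received signal strengths. The sketch below mimics the hard-handoff case; the fade threshold and hysteresis values are arbitrary illustrative assumptions, not figures from any standard.

```python
# Hypothetical sketch of a hard-handoff decision.  The serving base station
# watches the mobile's signal fade and asks the neighbours what they hear;
# ownership moves to the strongest neighbour.  Thresholds are illustrative.

def choose_base_station(serving, neighbours, fade_threshold_dbm=-95,
                        hysteresis_db=3):
    name, level = serving
    if level > fade_threshold_dbm:
        return name                               # signal still good: no handoff
    best_name, best_level = max(neighbours, key=lambda n: n[1])
    # require the neighbour to be clearly stronger before switching
    return best_name if best_level > level + hysteresis_db else name

serving = ("BS-A", -97)
neighbours = [("BS-B", -88), ("BS-C", -92)]
print("serve via:", choose_base_station(serving, neighbours))   # BS-B
```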

Now let us focus on location management. It requires two fundamental operations: one is
known as location update and the other is paging.

(Refer Slide Time: 23:04)

When a mobile station enters a new location area, it performs a location updating
procedure by making an association between a foreign agent and the home agent. The
foreign agent is essentially the software running in the location area to which the
station has moved, and the home agent belongs to the network with which the mobile
station is registered. One of the base stations in the new location area is informed,
and the home directory of the mobile is updated with its current location; the home
agent keeps track of this and modifies the directory.

When the home agent receives a message destined for the mobile station, it forwards the
message to the mobile station via the foreign agent. The two pieces of software running
in two different systems thus communicate with each other to keep track of the mobile
station. Of course, an authentication process is performed before a message is forwarded
to a mobile station. In a nutshell, this is how location management is performed.

We have discussed about the basic operations and the functions to be performed in the
mobile telephone systems. Now let us look at the different implementations. First we
shall consider the first generation technique.
(Refer Slide Time: 24:51)

The first generation was mainly designed for voice communication. Although several
systems were developed, the most popular is the Advanced Mobile Phone System (AMPS),
developed by Bell Labs and operated in North America. AMPS is an analog telephone system
that uses the 800 MHz band, and two separate analog channels are used, one known as the
forward analog channel and the other as the reverse analog channel.

Thus, for duplex communication two separate channels are necessary. The band between
824 to 849 MHz is used for reverse communication that is from mobile station to the base
stations.
(Refer Slide Time: 26:00)

On the other hand the band between 869 to 894 MHz is used for forward communication
from base station to mobile station. Let’s see pictorially how it operates.

(Refer Slide Time: 27:00)

These are two different bands. This one (Refer Slide Time: 26:20) is for forward
communication, 869 to 894 MHz, divided into 832 channels of 30 kHz each, so you have 832
distinct channels with different carrier frequencies; similarly, for reverse
communication from the mobile station to the base station you again have 832 channels of
30 kHz each. Of course, not all of them are used for voice communication: as I
mentioned, out of the 832, 42 are used for control. Another point should be mentioned.

Usually each area is shared by two service providers. Since two service providers
operate in the same area, the channels are divided equally between them: 832 divided by
2 gives each of them 416 channels, out of which 21 are used for control and the rest for
voice communication. AMPS uses the FDMA technique; let us see how this is done.

(Refer Slide Time: 28:04)

It uses frequency division multiple access to divide each 25 MHz band into 30 kHz
channels. Many analog channels are therefore transmitted simultaneously; each 3 kHz
voice channel is frequency modulated to generate a 30 kHz analog channel, which is used
for communication.

Therefore, from the 832 channels you have to subtract 42, which leaves 790 voice
channels, each of 30 kHz after frequency modulation. These 30 kHz analog channels are
communicated between the base stations and the handsets using frequency division
multiple access.

These channels are essentially transmitted in parallel through the air, and from the
mobile station the signal goes to the transmitter/receiver in the base station.

Similarly, for the forward channel, the base station transmits the signals using
different frequencies and again a 25 MHz band is used, although in that case the
frequency band is different. Two different bands are thus used for communication in the
two directions, so full-duplex, two-way communication is performed between the mobile
stations and the base station.
(Refer Slide Time: 29:56)

Two mobile stations thus communicate with each other through the base stations. As you
can see, AMPS uses a frequency reuse factor of 1/7: this is one cluster, and the 832
frequency channels are divided by 7, so 832/7 is the number of channels available in
each of these cells. You have seven cells in a cluster, and the clusters are repeated in
a honeycomb-like fashion; that is why it is called cellular communication, as the same
structure is repeated over and over. This is AMPS, based on analog communication.

(Refer Slide Time: 31:00)


The second generation, on the other hand, is based on digital techniques. We have seen
that AMPS uses analog communication, and it is very easy to eavesdrop: any receiver
tuned to a particular carrier frequency will be able to receive the voice.

There is an interesting story about how a cellular phone conversation of Princess Diana
was recorded, which led to some problems. Privacy is poor; you can say there is no
privacy, because no encryption or any other protective technique is used, so one can
very easily hear a conversation.
Also, it is not very reliable and not of very high quality. So the second generation
techniques were developed to provide higher quality mobile voice communication, and they
are mainly based on digital techniques. We have seen that the bandwidth of the voice
signal is only about 3 kHz and that it is frequency modulated to generate a 30 kHz
channel; instead of that analog approach, here the processing is done digitally.

Three different approaches were developed. The first is Interim Standard 136 (IS-136),
also known as digital AMPS (D-AMPS). AMPS, as you know, is the analog Advanced Mobile
Phone System used in North America; its digital version is known as Interim Standard
136, and it is based on TDMA and FDMA.

(Refer Slide Time: 33:34)

The second technique is GSM, which was developed in Europe. Before the digital systems
came, different analog techniques were used in different parts of Europe, such as France
and the UK, so a common digital technique was wanted; the common digital technique that
emerged was given the name GSM, and it is again based on TDMA and FDMA. Another
technique, based on CDMA and FDMA, is known as Interim Standard 95 (IS-95). We shall
discuss these three digital techniques one after the other.
First let us consider D-AMPS, which is essentially the digital version of analog AMPS
and is backward compatible with AMPS. By that I mean that a user with an analog handset
and a user with a digital handset can communicate with each other; that is the basic
objective of backward compatibility. As a consequence, D-AMPS had to use the same bands
and channels as AMPS, and it also uses the same frequency reuse factor of 1/7. So,
except that it is digital, the rest is the same: in terms of frequency reuse factor,
frequency bands and channels it is identical to AMPS, to maintain backward
compatibility.

Here we shall see how it is made digital. The 3 kHz voice is digitized using a fairly
complex PCM and compression technique to generate 7.95 kbps. If you performed simple
digitization, you would need to sample at least at 2 x 3 kHz, that is 6 kHz, and
multiply by 8 bits per sample, which gives 6 x 8 = 48 kbps. But by the use of a suitable
compression technique the digital signal is brought down to 7.95 kbps.

(Refer Slide Time: 36:38)

Three such digital channels are then combined by Time Division Multiple Access, so
multiplexing is performed in the time domain. Let me show you the next diagram to
explain this (Refer Slide Time: 37:10). What is done here is that 25 frames per second
are sent, each of 1944 bits, divided into 6 slots, and these 6 slots are shared by three
channels: you have 6 slots, say 1 2 3 and 1 2 3, forming one frame, and there are
twenty-five such frames per second, each of 1944 bits. Each slot has 324 bits, of which
only 159 bits are data, 64 are control and 101 are used for error correction.
In this way you get 48.6 kbps, and that 48.6 kbps stream is converted into an analog
signal by using QPSK to generate a 30 kHz analog signal, which is sent from the mobile
station to the transmitter/receiver of the base station, or from the base station
transmitter/receiver to the mobile station. So this part is the same as in analog AMPS
and this part is different, which is where the digital technique comes in.
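
Those figures fit together as shown below; the sketch just reproduces the arithmetic, using the per-slot breakdown quoted in the lecture.

```python
# D-AMPS (IS-136) slot arithmetic.
# 25 frames/s, 1944 bits per frame, 6 slots of 324 bits shared by three users
# (each full-rate user owns two slots per frame).  Per-slot breakdown as quoted
# in the lecture: 159 data + 64 control + 101 error-correction bits = 324 bits.

FRAMES_PER_SECOND = 25
BITS_PER_FRAME = 1944
SLOTS_PER_FRAME = 6
BITS_PER_SLOT = 159 + 64 + 101                  # 324

gross_rate_kbps = FRAMES_PER_SECOND * BITS_PER_FRAME / 1000
print("gross channel rate:", gross_rate_kbps, "kbps")          # 48.6 kbps
print("slot size check:", BITS_PER_SLOT * SLOTS_PER_FRAME == BITS_PER_FRAME)
```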

The second technique is GSM. As I mentioned, GSM stands for Global System for Mobile
communication; it is a European standard developed to replace the first generation
technology. The first generation technology used in various parts of Europe differed
from country to country, but here it was unified to have a global system for mobile
communication throughout Europe. It is based on duplex communication, so it has the same
type of concept as D-AMPS, but the frequency bands are different.

As you can see, it is around 900 MHz rather than the 800 MHz of the earlier system. The
forward band is from 935 to 960 MHz, divided into channels of 200 kHz each; similarly,
from the mobile station to the base station you have the band from 890 to 915 MHz, and
in each direction you have 124 channels of 200 kHz.

(Refer Slide Time: 40:00)

How the different signals are generated is shown here. Each voice channel is digitized
and compressed to generate 13 kbps, and then eight users communicate using Time Division
Multiple Access, their slots being combined into frames and multiframes. As shown in the
diagram below (Refer Slide Time: 40:40), the user data plus the user error-control bits
come to 114 bits, and then some control bits are added to form a slot.
(Refer Slide Time: 40:52)

You have eight such slots, for the first user, the second user, and so on up to the
eighth user, so data comes from eight sub-channels and these eight slots form a frame.
Twenty-six such frames form a multiframe, of which 24 are used for traffic, that is, for
voice or data communication, and 2 are used for control, and a multiframe is repeated
every 120 milliseconds. Putting all this together, a multiframe of 26 frames in 120 ms
means each frame lasts about 4.615 ms; each frame has eight slots, each carrying 156.25
bits, and that gives you 270.8 kbps.

In this way you get a 270.8 kbps digital stream. This digital data is converted into an
analog signal (Refer Slide Time: 42:10) by using the GMSK technique, which is
essentially a modified version of FSK.
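
The GSM timing works out as follows; the sketch reproduces the standard figures (26-frame multiframe in 120 ms, 8 slots per frame, 156.25 bits per slot), which is where the 270.8 kbps gross rate comes from.

```python
# GSM timing arithmetic.
# A multiframe of 26 frames repeats every 120 ms; each frame carries 8 slots
# and each slot carries 156.25 bits, giving the gross bit rate per carrier.

MULTIFRAME_MS = 120
FRAMES_PER_MULTIFRAME = 26
SLOTS_PER_FRAME = 8
BITS_PER_SLOT = 156.25

frame_ms = MULTIFRAME_MS / FRAMES_PER_MULTIFRAME          # ~4.615 ms
slot_us = frame_ms * 1000 / SLOTS_PER_FRAME               # ~577 us
gross_kbps = BITS_PER_SLOT / (slot_us / 1000)             # bits per ms

print(f"frame duration: {frame_ms:.3f} ms, slot duration: {slot_us:.1f} us")
print(f"gross rate per 200 kHz carrier: {gross_kbps:.1f} kbps")   # ~270.8
```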
(Refer Slide Time: 42:33)

This (Refer Slide Time: 42:43) generates a 200 kHz analog signal, which is sent from the
mobile station to the base station. Similarly, it is sent from the base station to the
different mobile stations, except that the carrier frequencies are different. So you
have 124 channels, and each of these channels can support eight different users sending
digital voice, or digital data, which is also possible. This is how communication is
performed in GSM.

We have seen that GSM combines both TDMA and FDMA; by combining them it can carry
8 x 124 simultaneous voice conversations, although some channels are used for control
purposes. There is a large amount of overhead in this TDMA scheme: we have seen that the
114 bits per burst include extra bits added for error control, so only a modest portion
is user data, and the overhead in GSM is large. Because of the complex error correction,
it allows a reuse factor as low as 1/3; that means the reuse factor can be 1/3 and each
cluster has only three cells. This is the case of GSM.
(Refer Slide Time: 43:57)

Now let us consider Interim Standard 95, that is, CDMA.

(Refer Slide Time: 44:35)

IS-95 is quite complex, in the sense that forward transmission and reverse transmission
are different. Forward transmission, from the base station to the mobile stations, uses
one type of technique, while transmission from the mobile stations to the base station
uses a different signaling technique: the forward link uses CDMA, whereas the reverse
link, from mobile stations to base stations, uses Direct Sequence Spread Spectrum and
FDMA.
Let us quickly look at how this is done in IS-95. On the forward link, a 3 kHz voice
channel is digitized to generate 9.6 kbps; then, by using error coding, repetition and
interleaving, 19.2 kilosymbols per second are generated, and this symbol stream is
multiplied by the output of a pseudo-random sequence generator. The electronic serial
number of the handset drives a long code generator, a pseudo-random sequence generator
of period 2 to the power 42.

(Refer Slide Time: 46:22)

This pseudo-random sequence generator produces 1.228 megachips per second. A decimator
selects 1 chip out of every 64, which gives 19.2 kilosymbols per second; this is
multiplied with the incoming data to produce a scrambled 19.2 kilosymbol per second
stream. Then a CDMA technique is applied, using a 64 x 64 Walsh table (you already know
about Walsh codes), and CDMA requires some kind of synchronization.

Here the synchronization is provided by the Global Positioning System satellites: the
satellites provide synchronization to the base stations, and that is how the CDMA
technique becomes possible. As you can see, all users transmit simultaneously.

There is no serial, one-at-a-time communication here. The spread stream is then
modulated using QPSK, generating a signal about 1.25 MHz wide, and by using FDMA a
number of such channels share the 25 MHz band sent to the mobile stations; roughly
twenty such channels fit in the band.

The reverse communication, on the other hand, is completely different. As you can see
(Refer Slide Time: 48:12), a digitizer generates 9.6 kbps from the 3 kHz voice, and
after error coding, repetition and interleaving you get 28.8 kilosymbols per second.
From this a 64-ary symbol modulation is applied: every six bits select one of 64
symbols, numbered 0 to 63. Using the electronic serial number, a long code sequence
generator, again a pseudo-random binary sequence of period 2 to the power 42, generates
1.228 Mbps, and this is multiplied with the signal coming here (Refer Slide Time: 49:13)
to generate Direct Sequence Spread Spectrum.

So here you see that CDMA is not used: the Direct Sequence Spread Spectrum signal goes
to QPSK to generate the analog signal, and these twenty analog channels go from the
mobile station to the base station. This is how the forward CDMA technique and the
reverse transmission are performed in IS-95.

(Refer Slide Time: 49:30)

We have discussed the three different approaches used in the second generation. Now let
us have a look at the third generation technique.
(Refer Slide Time: 50:12)

The third generation technique supports both digital data and voice communication. In
the second generation, either voice or data of very low capacity can be sent; here the
objective is to send data at a higher rate along with good quality voice. Another
objective is to provide universal personal communication: one can listen to music, watch
a movie or access the internet, all of which are possible using third generation
techniques.

There are reports that in Japan, where the third generation technique has been deployed,
people are not using laptops; instead they are using 3G cell phones, which perform most
of the things that were being done using laptops. These are the criteria of 3G
technology: the voice quality should be as good as that of the Public Switched Telephone
Network.

Three different data rates are to be supported. When moving at high speed, for example
travelling by car or train, the data rate is 144 kbps; when moving slowly, for
pedestrians, the data rate is 384 kbps; and for stationary use, for example sitting in a
room, you can communicate at 2 Mbps. Thus you can see that high data rates are supported
in 3G technology. It is also to support data communication through both packet-switched
and circuit-switched services, to provide the 2 Mbps rate I mentioned, and to interface
to the internet so that internet access is possible. With these objectives ITU developed
a blueprint called International Mobile Telecommunications-2000 (IMT-2000), which
defines five different radio interfaces; you can say it has five different standards
using five different radio interfaces, and all five have evolved from second generation
technologies.

The first two are based on CDMA: the first is IMT-DS, using Direct Sequence Spread
Spectrum, and the second is IMT-MC, using multiple carriers. The third, IMT-TC, uses a
combination of CDMA and TDMA. The fourth, IMT-SC, is based on TDMA with a single
carrier, and the last one, IMT-FT, is based on TDMA and FDMA and uses frequency-time.
These are the five different radio interfaces.

(Refer Slide Time: 53:00)

I have just given you the goal and the overview of third generation technique. Now it is
time to give you the review questions.

(Refer Slide Time: 53:45)

1) What is the relationship between a base station and a mobile switching center?
2) What is the reuse factor? Explain whether a low or a high reuse factor is better.
3) Distinguish between soft and hard handoff.
4) What is mobility management?
5) What is AMPS, and in what way does it differ from D-AMPS?
6) What is the maximum number of callers in each cell in GSM?

Here are the answers to the questions of lecture - 29.

(Refer Slide Time: 54:30)

1) Why spread spectrum technology is used in wireless LAN?


As mentioned, spread spectrum technology is used in wireless LANs because spreading makes it difficult for unauthorized persons to make sense of the transmitted data. That means it provides some kind of privacy and security.

Moreover, it also reduces the power density and provides redundancy. So, reduced power density and redundancy are obtained by using the spread spectrum technique.
(Refer Slide Time: 55:00)

2) How is the hidden station problem overcome?


The hidden station problem is overcome by using a 4-way handshaking protocol known as CSMA/CA. By using this 4-way handshaking protocol, collision is avoided and also the hidden station problem and the exposed station problem are overcome. This has been discussed in the previous lecture.

(Refer Slide Time: 55:45)

3) What is Network Allocation Vector or NAV?


When a station sends a request-to-send frame, which is one of the frames used in the 4-way handshaking, it includes the duration of time that it needs to occupy the channel. All other stations that receive this frame create a timer called a network allocation vector.

So, based on this information a timer is set, which is called the network allocation vector, and it shows how much time must pass before these stations are allowed to check the channel for idleness. This is essentially what is referred to as virtual carrier sensing, which we mentioned in the last lecture. So the network allocation vector allows implementation of virtual carrier sensing.
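As a rough illustration of the NAV idea, here is a toy Python sketch; the class, the method names and the timings are assumptions for illustration and not the actual IEEE 802.11 state machine. A station that overhears a request-to-send sets its NAV timer from the duration field and treats the medium as busy until the timer expires.

import time

class Station:
    def __init__(self, name):
        self.name = name
        self.nav_expiry = 0.0            # time until which the medium is considered busy

    def hear_rts(self, duration_s, now=None):
        now = time.monotonic() if now is None else now
        # The RTS carries the duration the sender needs; set the NAV timer accordingly.
        self.nav_expiry = max(self.nav_expiry, now + duration_s)

    def may_probe_channel(self, now=None):
        now = time.monotonic() if now is None else now
        return now >= self.nav_expiry    # only check for idleness after the NAV expires

s = Station("C")
s.hear_rts(duration_s=0.005, now=0.0)    # overheard an RTS reserving 5 ms
print(s.may_probe_channel(now=0.003))    # False - defer (virtual carrier busy)
print(s.may_probe_channel(now=0.006))    # True  - NAV expired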

(Refer Slide Time: 56:30)

4) What is WEP and how is it achieved?


We have seen that because of the broadcast nature of wireless communication, wireless LANs are subject to certain breaches from unwanted monitoring. To overcome this problem and to get security similar to that of a wired LAN, IEEE 802.11 specifies an optional MAC layer security system known as Wired Equivalent Privacy. It is achieved with the help of a 40-bit shared key authentication service. By default, each BSS (Basic Service Set) supports four 40-bit keys that are shared by all the clients in the Basic Service Set, so it provides privacy without any integrity check.
(Refer Slide Time: 57:57)

5) Distinguish between piconet and scatternet used in Bluetooth technology.


A piconet is a small ad hoc network of at most eight stations. You can form a piconet having a master station and up to seven slave stations. Now a scatternet can be created when a station that is a slave of one piconet also acts as the master of another piconet, which again can have a number of slave stations. So a slave of the first piconet becomes the master of the second piconet (Refer Slide Time: 57:52); that is how a scatternet is formed by combining several piconets.

With this we come to the end of today's lecture. In the next lecture we shall discuss another wireless communication technique, one that covers a much larger area by using satellites; in other words, in the next class we shall discuss satellite communication systems. Thank you.
Data Communication
Prof. A. Pal
Dept. of Computer Science & Engineering
Indian Institute of Technology, Kharagpur

Lecture – 32
Satellite Communications

Hello and welcome to today's lecture on satellite communications. In the last two lectures we have discussed wireless communication. First it was the wireless LAN, which can give you coverage over a small geographic area; then we discussed the cellular telephone network, which can give coverage over a larger area, maybe a country. Today we shall discuss another communication system which can give you global coverage. In all these cases, as you have seen, the communication is of broadcast type, where a medium is shared by a number of stations, so you have to use suitable medium access control techniques.

(Refer Slide Time: 01:45)

Here is the outline of today's lecture. First I shall discuss some of the important concepts related to satellite communication, such as the concept of orbits and the concept of footprints. Then we shall discuss the three different types of satellites, LEO, MEO and GEO: low earth orbit, medium earth orbit and geostationary earth orbit, and I shall mention the frequency bands which are used in the context of satellite communication. Then we shall consider, one after the other, the different categories of satellites: LEO satellites and their applications, MEO satellites and their applications, and GEO satellites and their applications. Finally I shall discuss the medium access control techniques used in satellite communication.
(Refer Slide Time: 3:24)

And on completion the students will be able to explain different types of satellite orbits, they will be able to explain the concept of the footprint of a satellite, they will be able to specify the categories of satellites, they will be able to specify the frequency bands used in satellite communication, they will be able to explain the uses of the different categories of satellites, and finally they will be able to specify the MAC techniques used in satellite communications.

So first we start with orbits. Although the number of possible orbits in satellite communication is very large, they can be broadly categorized into three types. The first type is equatorial: as you can see here, this is the equatorial plane (Refer Slide Time: 3:44), these are the poles, this is your earth and here is the orbit, so it lies on the equatorial plane. Then the orbit can be inclined with respect to the earth's polar (vertical) axis, or it can be polar, that is, the orbit can be vertical in this direction, which means a polar orbit.
(Refer Slide Time: 05:08)

So you can have three different types of orbits. As you know, the time required to make a complete trip around the earth is known as the period. We are all familiar with the natural satellite, the moon, and its period is known to you, but here we are talking about artificial satellites. As you know, the time required to make a complete trip around the earth, which is known as the period, is determined by Kepler's law of periods. There are several laws, but Kepler's law of periods specifies that T squared equals 4 pi squared divided by GM, times r cubed (T² = 4π²r³/GM), where T is the period, G is the gravitational constant, M is the mass of the central body, that is the earth, and r is the radius of the orbit.
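As an aside, Kepler's law of periods can be turned into a quick numerical check. The short Python sketch below is not part of the lecture; it assumes rough textbook values for G, the mass and the radius of the earth, and reproduces the figures quoted later in this lecture: roughly 100 minutes for an 850 km LEO orbit and roughly 24 hours for the geostationary orbit.

import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2 (assumed textbook value)
M_EARTH = 5.972e24     # mass of the earth, kg
R_EARTH = 6.371e6      # mean radius of the earth, m

def orbital_period_minutes(altitude_km):
    """Period from Kepler's law T^2 = (4*pi^2 / (G*M)) * r^3,
    where r is measured from the earth's centre (radius, not altitude)."""
    r = R_EARTH + altitude_km * 1e3
    T = math.sqrt(4 * math.pi**2 * r**3 / (G * M_EARTH))
    return T / 60.0

print(orbital_period_minutes(850))          # LEO: roughly 100 minutes
print(orbital_period_minutes(35786) / 60)   # GEO: roughly 24 hours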

This radius is different from the altitude above the surface of the earth. Sometimes the radius of the orbit is given and sometimes the altitude is given, so you have to understand the difference between altitude and radius; obviously the altitude will be less than the radius. As far as the footprint is concerned, the signal from a satellite is normally aimed at a specific area called the footprint. Here, as you can see, a satellite is covering a very large area in this particular case. So it can cover a large area, and the power is maximum at the centre of the footprint.
(Refer Slide Time: 05:55)

Obviously the power will be maximum here at the central part (Refer Slide Time: 6:00) and as you go towards the sides it will decrease. That means the power decreases as the point moves away from the centre of the footprint.

Nowadays different types of antennas are used in satellites and these are all electronically controlled. For example, if a phased array antenna is used then the footprint can be dynamically changed: it can be small, it can be large, it can be controlled, so the footprint can be changed. For example, here the footprints are small, as is visible here; this part, this part and (Refer Slide Time: 6:40) this part are all small. Another possibility is that the beam can be focused on a particular spot for some duration, which is known as the dwell time.

So the amount of time a beam is pointed to a given area is known as the dwell time. Of course, whenever the beam changes from one position to another it takes some time, of the order of a microsecond, so the dwell time should be neither very short nor very large. The dwell time is controlled electronically, and the beam can be focused on a particular spot for a certain period of time, usually of the order of a few milliseconds. Therefore, we find that there are two important concepts in the context of satellite communication: one is the footprint and the other is the dwell time.

Now let us look at the different types of satellites that are used for communication. This is the surface of the earth. The first satellite, launched by the Russians, was Sputnik, and after Sputnik was launched came the discovery of the Van Allen belts. The Van Allen belts are essentially regions where high energy charged particles, high energy protons, are present, and there are two such Van Allen belts as you can see: one is here, roughly from 3000 to 5000 km, and another is at around 15000 km.
(Refer Slide Time: 08:27)

So there are two Van Allen belts and obviously when satellites are placed in these regions
then they will be destroyed by the high energy particles. So satellites are placed avoiding
these two Van Allen belts.

For example, this is the earth (Refer Slide Time: 8:56) and the low earth orbit, LEO, satellites are positioned between 500 and 2000 km, as you can see from this orbit. So the altitude is 500 to 2000 km. On the other hand, the medium earth orbit, which lies between the two Van Allen belts, is from 5000 to 15000 km, and satellites there are known as medium earth orbit or MEO satellites.

The third one is the geostationary earth orbit, known as GEO, which is at the precise altitude of 35786 km. So these are the three popular orbits which are used for placing satellites. However, there is another one in between the MEO and the geostationary orbit: the GPS, the Global Positioning System, which is another constellation of satellites placed at a distance of 20000 km. Obviously it is above the second Van Allen belt but definitely much below the geostationary earth orbit, which is at a distance of about 35800 km.
(Refer Slide Time: 10.28)

As I mentioned, these orbits are chosen such that the satellites are not destroyed by high
energy charged particles present in the two Van Allen belts. So we find that we have
broadly three types of orbits.

(Refer Slide Time: 10.35)

Now let us look at the frequency bands used in satellite communication; these are in the microwave range, where line of sight communication is performed. There are several frequency bands. The L band uses two frequency ranges, one for the downlink and another for the uplink; the frequencies are in the gigahertz range, with the downlink around 1.5 GHz and the uplink around 1.6 GHz.

For the S band the downlink frequency is around 1.9 GHz and the uplink frequency is around 2.2 GHz, and this determines the bandwidth available for data communication. The C band, which is very popular and widely used in geostationary satellites, uses 4 GHz and 6 GHz, where 4 GHz is the downlink and 6 GHz the uplink frequency. These bands are gradually becoming congested and presently the Ku band is widely used; the Ku band has a downlink frequency of 11 GHz and an uplink frequency of 14 GHz.

Once the Ku band gets exhausted or fully utilized then we have to go for the Ka band, which is in the range of 20 GHz for the downlink and 30 GHz for the uplink and which can give you a very high bandwidth of 3500 MHz. Hence, these are the frequency bands, and in all cases it is line of sight communication in the microwave frequency range.

(Refer Slide Time: 12.35)

Now let us consider the low earth orbit satellites. There are several examples for the low
earth orbit satellites. Essentially a single satellite is not used, a constellation of satellites
very similar to cellular telephone network are used that work together as a network. So, it
is not a single satellite but a number of satellites are used in the low earth orbit to form a
network.

Usually LEO satellites are placed in polar orbits at an altitude of 500 to 2000 km (altitude, not radius), a typical value being 850 km. Based on this altitude we can find out the period, which varies from 90 to 120 minutes; for example, for 850 km altitude the period is about a hundred minutes, and the satellite moves at a speed of 20000 to 25000 kilometers per hour. When a satellite is placed at this altitude the footprint can be about eight thousand kilometers in diameter; of course you can focus on a narrower area on the earth, but this will be the maximum. Because of line of sight communication and the earth's curvature, the footprint cannot be made arbitrarily large; the higher the altitude, the larger the diameter of the footprint, that is, the covered area, can be.

The LEO satellites can be broadly divided into three types. The first one is known as little LEO; little LEOs communicate at less than the 1 GHz range, that is, below the L band. Big LEOs use the 1 to 3 GHz range, and one notable example is Iridium.

The third category of LEO satellite is used for broadband internet access, and one notable example is Teledesic. I shall discuss more about these two systems, the Iridium system and the Teledesic system.

(Refer Slide Time: 15.15)

Let us consider how they communicate. In the case of low earth orbit there are three possible types of transmission. The first one is transmission between two satellites, which is known as an inter satellite link; as you see, two satellites can communicate with each other, so data transfer can be done from one satellite to another satellite.

The second type of transmission is the gateway link. That means there is an earth station which is used as a gateway; here, for example (Refer Slide Time: 15:58), this is the footprint of this satellite and this is the footprint of another satellite. As you can see, this satellite can communicate with another satellite and also with the ground stations or gateways, and this link is known as GWL, the gateway link. The third is the user mobile link, through which mobile phones can communicate directly with the satellite. So there are three different types of links which are used for communication: ISL, GWL and UML, that is, the Inter Satellite Link, the Gateway Link and the User Mobile Link.
(Refer Slide Time: 16.36)

Now let us consider the Iridium system in a little more detail.

This project was started by Motorola in the year 1990 with the objective of providing worldwide voice and data communication service using handheld devices. So the basic objective was that, with the help of the satellites, people should be able to talk to each other over a very large geographical area.

Of course it took quite some time to materialize the project; after about eight years it materialized with 66 satellites. The project initially started with 77 satellites, and because of this initial number the name Iridium was chosen, iridium being the 77th element in the periodic table; unfortunately the number of satellites got changed, but the name has been retained. The 66 satellites are divided into six polar orbits, so you have got 11 satellites in each orbit.
(Refer Slide Time: 17:58)

As you can see, each orbit has got 11 satellites and you have got 6 orbits, 1 2 3 4 5 and 6 (Refer Slide Time: 18:04). So you have got six polar orbits rotating around the earth at an altitude of 750 km, which is not far away from the earth's surface; in each orbit you have got 11 satellites, and each satellite has 48 spot beams.

So the footprints of the different satellites are possible here, and a total of 3168 beams is possible. However, not all are used; around 2000 beams are actually used, although 3168 spot beams are possible. Each beam covers a small geographic area at a time, and by controlling the dwell time different spots can be covered. In this way the whole earth can be covered by using the constellation of 66 satellites used in the Iridium system.

Another notable system that is worth mentioning is the Teledesic system. It started as a very ambitious project by Craig McCaw and Bill Gates in the year 1990 with the objective of providing a fiber optic like communication facility. We have seen that the Iridium project was developed with the objective of providing voice communication and at most low bit rate data communication. The Teledesic project, however, was developed with the objective of providing data transfer at a much higher rate, very similar to fiber optic communication, in other words to provide broadband service. This Teledesic system was ultimately designed with 288 satellites in 12 polar orbits.

There are 12 polar orbits, very similar to Iridium, and there are 288 satellites, so you have got 24 satellites in each of the 12 polar orbits, and these are stationed at an altitude of 1350 km compared to the 750 km of Iridium. The communication is done using the Ka band, and at a particular instant 8 neighboring satellites communicate by using the ISL, the Inter Satellite Link; the GWL is used between a satellite and a gateway, and the user mobile link is used for communication between a user and a satellite. As we have already discussed, a satellite focuses its beam on a cell during a dwell time, and it allows data rates of the order of 150 Mbps for the uplink and 1.2 Gbps for the downlink.

(Refer Slide Time: 21.24)

Such a high data rate is quite sufficient for any type of communication, voice, video and data, and that is why the Teledesic system was developed to provide fiber optic like communication; it has the potential to cover the entire earth with the help of the 288 satellites.

Now let us consider the MEO satellites. These MEO satellites are positioned between the two Van Allen belts, as we have discussed. The typical altitude is 10000 km and the period is 6 hours, hours and not minutes. As you can see, as the altitude increases the period also increases, so six hours is the period for a typical MEO satellite.
(Refer Slide Time: 22.20)

As I mentioned, the Global Positioning System or GPS is a very popular satellite system used nowadays, and it is used as a satellite based navigation system. It comprises a network of 24 satellites at an altitude of 20000 km with a period of about twelve hours; that means it is visible twice a day from a particular location, and it has an inclination of 55 degrees. This was originally intended for military applications. That means this particular satellite system was developed by the Department of Defense of the USA, and the original plan was to use it primarily for military applications. But in the year 1990 they changed their policy and it was made available for civilian use. With the help of the GPS system one can do global positioning, from which it has derived its name.

Actually it allows land, sea, and airborne users to measure their position: the three dimensional position can be measured, the velocity can be measured and the exact time can also be measured with the help of a GPS receiver. So a GPS receiver has been found to be an invaluable tool for the pilot of an airplane, for the captain of a ship, and very soon it will also be used by people traveling on the road in vehicular traffic. Every car owner, every captain of a ship and every pilot will be having such a receiver; thus not only the position but also the time is measured very accurately, and the accuracy can be within the range of 15 m. That is the reason why it is one of the most widely used satellite systems for measuring position, velocity and time.
(Refer Slide Time: 24.50)

In the Global Positioning System the measurement for land and sea navigation is actually done using the principle of triangulation. So here you have got three known positions A, B and C; these are the three known positions of the satellites, and the receiver is at an unknown position. It can be proved that if these three positions are known, the unknown position can be identified with the help of a suitable device, and that is done with the help of a GPS receiver. By time stamping, the signals sent from A, B and C are used to measure the delays, and hence the distances, to A, B and C; the point at which the three spheres of those radii coincide is the exact position. Another satellite is used to provide the time. These GPS satellites each carry four very accurate atomic clocks which give very accurate time, so the fourth satellite provides the very accurate time and the other three satellites give you the exact measurement of distance; with the help of that one can very accurately determine the position, the time and the speed of a particular vehicle or airplane or ship.

So the requirement is that at any point of time at least four satellites are visible to the user from any point on the earth. It is not necessary that the user has to be on the earth; it can be in space also; for example, it can be used from an aircraft as well. So a GPS receiver can find out the location on a map, and nowadays there is no need to use a compass; you can use a GPS receiver for the purpose of positioning. In fact it was widely used in the Persian Gulf War for identifying the positions of different vehicles used in the war. So this is your GPS.
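To make the triangulation idea concrete, here is a small illustrative Python sketch, not from the lecture, that performs two-dimensional trilateration: given three known positions and the distances to each of them (which a GPS receiver derives from the measured signal delays), it solves the resulting linear equations for the unknown position. Real GPS works in three dimensions and uses the fourth satellite to correct the receiver clock.

# 2-D trilateration sketch (illustrative only).
def trilaterate(p1, d1, p2, d2, p3, d3):
    """Locate the unknown point from three known positions and measured distances."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equations pairwise gives two linear equations in (x, y).
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Example: three "satellites" at known positions; distances measured from delays.
print(trilaterate((0, 0), 5.0, (10, 0), 65 ** 0.5, (5, 10), 40 ** 0.5))
# -> approximately (3.0, 4.0), the unknown position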
(Refer Slide Time: 27.20)

Now let us focus on the GEO satellites. To facilitate constant communication the satellite must move at the same angular speed as the earth rotates; such a satellite is known as a geosynchronous satellite. We have seen that in the case of MEO and LEO, or even the GPS satellites, the satellite is moving around the earth, and if you look at it from a particular position you find that its position keeps on changing, so you have to keep track of it, you have to compensate for the Doppler effect and so on.

But if the satellite is positioned such that it appears stationary with respect to a point on the earth, there is no such problem. This is a special situation that arises when a geosynchronous satellite's orbit is on the equatorial plane: the satellite then appears stationary and is called a geostationary satellite. That means geostationary satellites are a special case of geosynchronous satellites where the orbit lies on the equatorial plane. So the equatorial plane is used to place these satellites.

Obviously you have got 360 degrees, and with the present technology you can use up to 180 satellites in the equatorial plane and do communication with the help of them. As I mentioned, the altitude is 35786 km and the radius is about 42000 km; that means from the center of the earth the radius is about 42000 km, but the altitude from the surface of the earth, that is from sea level, is 35786 km. The period is exactly the same as the rotation of the earth, that is roughly 24 hours.

And as I mentioned the orbit is on the equatorial plane and you can have at most 180
GEO satellites based on the present technology by using Ku band. Whenever Ku band is
used you can have up to 180 satellites but as you go for Ka band the number of satellites
can be increased. Although you can have 180 geostationary satellites the whole globe can
be covered only by using three satellites.
(Refer Slide Time: 30:00)

As you can see here (Refer Slide Time: 30:05) a particular satellite roughly covers one
third of the earth. So if one satellite covers one third of the surface of the earth then by
using three satellites the entire earth can be covered. In other words with the help of three
geostationary satellites it is possible to have global communication. From any place on
the earth to any other place on the earth communication can be done with the help of
three geostationary satellites.

Now let us look at the key features of the geostationary satellites. One important feature is the long round trip propagation delay. As you know, the distance is about 36,000 km, and the electromagnetic radiation goes from the surface of the earth to the satellite and comes back at the speed of light, 3 into 10 to the power 8 meters per second. Even at that speed it takes about 270 milliseconds for a signal to travel from one ground station up to the satellite and down to the other ground station; that means the delay between two ground stations is about 270 milliseconds each way. While designing systems you have to take this parameter into account, particularly when we consider the medium access control technique.
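The 270 millisecond figure can be checked with a back-of-the-envelope calculation; the sketch below assumes a slant range of roughly 40,000 km per hop (the straight-up distance is 35,786 km, and it is longer towards the edge of the footprint).

C = 3.0e8              # speed of light, m/s
slant_range_m = 40_000e3   # assumed average distance ground station <-> satellite

one_way = 2 * slant_range_m / C      # ground -> satellite -> other ground station
print(f"ground-to-ground delay ~ {one_way * 1e3:.0f} ms")    # ~267 ms
print(f"request plus reply     ~ {2 * one_way * 1e3:.0f} ms")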

Secondly, it is an inherently broadcast medium, so it does not cost much to send to a large number of stations. Now, we are familiar with our conventional telephone system: for local calls the cost is low, and as the distance increases the cost of communication increases. That means whenever you use STD calls or whenever you call the USA you have to pay more; the cost of communication increases with distance. But that is not true in the case of satellite communication.

The cost of communication is the same irrespective of the distance between two points. However, it suffers from low privacy and security, and for that reason a suitable encryption technique is essential to ensure privacy and security. This is a very important aspect of communication. Finally, as I mentioned, the cost of communication is independent of distance.
(Refer Slide Time: 32.33)

You have to take these features into account, or rather you have to exploit them, and one of the systems where these features are widely used is the VSAT, which stands for Very Small Aperture Terminal. Normally the antenna of a ground station has to be several meters in diameter, and as a result the cost of the antenna and the communication system is quite high. The VSAT terminal was developed to make access to the satellite more affordable to common people and without any intermediate distribution hierarchy. That means communication can be done directly with the satellite, and at the same time it is affordable; with that objective the Very Small Aperture Terminals were developed. For example, most VSAT systems operate in the Ku band with an antenna diameter of only 1 to 2 m, which is relatively small compared to the bigger antennas traditionally used, and they require a transmitting power of only 1 to 2 watts for communication. So neither the power nor the antenna diameter is very high, and it can be very easily used in houses.

There are several implementation approaches; in particular they can be categorized into three types: one-way, split two-way and two-way. We shall discuss one-way and two-way in more detail. The split two-way is a somewhat special situation where the VSAT does not require uplink transmit capability, which significantly reduces the cost.
(Refer Slide Time: 35:17)

In split two-way, what is done is that the satellite sends data in the downward direction using broadcasting or some other technique. There is no uplink transmission from the ground station to the satellite; that communication is done with the help of some other system, which can be the PSTN, the Public Switched Telephone Network, or a cellular telephone network, or whatever it may be. That is how the split two-way system works without requiring uplink transmit capability, which makes it simpler; this is used in many situations. Let us consider the other two, the one-way and two-way communication systems.

(Refer Slide Time: 35:30)


The one-way VSAT configuration is essentially simplex communication. We know that in simplex communication the communication is only in one direction, and that is what is being done in this particular case. Here, for example, there is a master station; it sends data to the satellite and the satellite relays it over a large geographic area, which is essentially the broadcast coverage area, and all the ground stations in this area can listen to what is being transmitted by the master station through the satellite. So the satellite here just acts as a relay in the sky, and it is used for broadcast satellite service; that means whenever something has to be broadcast, this type of simplex communication can be used. For example, the satellite television distribution system uses this type of one-way communication.

(Refer Slide Time: 36.43)

For example, here is the studio of the broadcaster (TV, Doordarshan) and here is the uplink earth station; the uplink earth station, using the Ku band, uplinks the signal, and then the satellite sends it to a number of cable TV service providers. The cable TV service providers have antennas located at their premises, and the cable TV operators then distribute the television signals to a number of houses. This is the Ku band uplink (Refer Slide Time: 37:24) and these are the Ku band downlink signals which are received by the cable TV operators, who then distribute them to the households. So this is a satellite television distribution system.

Nowadays another important service is becoming popular, that is the Direct-To-Home or DTH service, which is being publicized by our Doordarshan. In the Direct-To-Home service one can put up an antenna with a diameter of less than 1 m at the house, and then, by using a set top box, the digitized television channels can be captured directly from the satellite and put on the TV. So, bypassing the cable operators, this type of one-way communication is nowadays used for television signal distribution through the Direct-To-Home service.

Although I have shown one large geographic area, you can also have a small group which requires some specialized service, so that is also possible in the one-way VSAT configuration.

(Refer Slide Time: 38.43)

Now let us look at the two-way VSAT configuration. In this two-way configuration all the traffic is routed through the master control station or hub. Actually, VSAT antennas are small in size and the transmission power is also small. Because of the low gain antennas and the small power that is used in VSATs, direct communication between two VSATs is not possible, so a master control station is needed, which is also known as the hub. In this case the configuration is somewhat like this: this is your satellite (Refer Slide Time: 39:24), then you have got the VSAT hub and you have got a number of VSAT antennas; it is like a star. All the VSAT terminals communicate through this master control station or hub. So this hub is at the center of the star topology and the communication is done in this way. So this is the star topology that is being used.
(Refer Slide Time: 40.04)

However, nowadays with the advancement of technology it is possible to have direct communication between two VSAT antennas. In such a case each VSAT has the capability to communicate directly with any other VSAT; you don't require a hub, so this can be considered as a mesh topology. Although the satellite is there, it is only being used as the via media; here, say, suppose this is one VSAT, this is another VSAT, this is another VSAT, and this is another VSAT (Refer Slide Time: 40:42), all of them can communicate with each other, so they form some kind of mesh network. Hence it is a mesh type of network that is formed, although the satellite is there as the via media. So this is the two-way VSAT configuration using the mesh topology.
(Refer Slide Time: 41.09)

Now, you require a medium access control protocol for satellite communication; why you require it is explained here. It is a very important design issue in satellite communication, particularly how to efficiently allocate transponder channels. As you know, each satellite has 12 to 20 transponders, and using each transponder communication can be done with the earth stations. Now, in satellite communication, from the satellite to the earth stations it is essentially a broadcast type of communication, so there is no need for medium access control.

However, as you can see (Refer Slide Time: 42:00), using the same frequency band, the uplink frequency, the ground stations are simultaneously communicating to the satellite, and as a consequence this uplink channel is shared by all the earth stations communicating with the satellite, and that requires medium access control.
(Refer Slide Time: 42.19)

Many medium access control techniques are used based on the environment and applications, and they can be broadly categorized into the following types. The first one is the fixed assignment protocol; in this case the number of users is small, so you can use fixed assignment. Another approach is the demand assignment protocol, known as DAMA, Demand Assignment Multiple Access. In some situations you can use random access protocols, or a hybrid of random access and reservation protocols. Finally, the control can be either distributed or centralized.

(Refer Slide Time: 42.46)


We have already discussed the medium access control techniques for satellite communication. As you know, carrier sensing and collision detection based medium access control techniques cannot be used because of the long round-trip delay. Also, we cannot use token passing based techniques. So the only possibility is to use ALOHA or slotted ALOHA and the reservation based protocols. Let us see the different types that are used.

We can use FDMA, TDMA and CDMA depending upon the application. FDMA provides the simplest communication, because here there is no need for synchronization and there is no question of collision. However, when the TDMA technique is used you have to do synchronization, because at different time slots different channels can be used for communication, and there can be collisions if two ground stations send signals simultaneously. So synchronization is necessary and there is a possibility of collision in TDMA, but that can be resolved. Another approach is CDMA, where it is possible to have parallel communication using the same channel at the same time, and as you can see, n stations can communicate with each other. So TDMA, FDMA and CDMA are all used in satellite communication.

(Refer Slide Time: 44.51)

Let us look at the contention-free medium access control protocols. The first ones are the fixed assignment protocols using FDMA. As I mentioned, FDMA is the simplest one: different frequency bands are allocated to different ground stations and communication can be done in parallel using the different frequency bands. Or you can use TDMA, where a similar allocation of time slots is possible. The allocation of channels is static, so it is fixed and does not change with time, and this is suitable when the number of stations is small. When the number of stations is small, some frequencies or time slots can be permanently allocated to the ground stations, and this provides a deterministic delay: in spite of the long propagation delay, the delay will be deterministic because there is no collision here, which is important in real-time applications.

On the other hand, the demand assignment based protocols, DAMA, are suitable when the traffic pattern is random. That means a particular ground station sends for some time and then waits for some time; the traffic is bursty in nature. In such a case you have to use demand assignment protocols, particularly when the traffic pattern is random and unpredictable. Efficiency is improved by using reservation based on demand; the reservation process can be implicit or explicit.

(Refer Slide Time: 46.31)

Here you can see how TDMA can be used: the time is divided into time slots and frames are formed. In this particular case a frame has got four time slots. You can see that different ground stations are sending in different time slots: this one is sent by one ground station, these packets are sent by another ground station and these packets are sent by yet another ground station, and this is the TDMA stream (Refer Slide Time: 47:16) which is broadcast by the satellite towards all the ground stations.
(Refer Slide Time: 47.41)

So you can see it is following the same order: first this one, then this one, then this one, so the same order is being followed, this followed by this followed by this (Refer Slide Time: 47:32), and this is repeated one after the other. This is how TDMA works.

In the case of random access based MAC protocols, as I have already mentioned, carrier sense multiple access with collision detection cannot be used, so it has to be either pure ALOHA, selective reject ALOHA or slotted ALOHA, and with these we can combine reservation, giving reservation ALOHA (R-ALOHA); sometimes a combination of random access and reservation access protocols is used. This hybrid technique is used to get the advantages of both random access and time division multiple access.
(Refer Slide Time: 48.23)

Let us quickly have a recapitulation of the techniques used. We have already discussed reservation ALOHA in detail; it is used when the number of ground stations is larger than the number of channels. The reservation is done by directly trying to send in different slots; this is called reservation ALOHA. There is a possibility of collision in the different slots, but once a station is able to transmit in a particular slot, the same station will keep sending data in that slot in the subsequent frames.

However, if a slot is free right now, then in the next frame two stations may try to send in it, so there can be a collision, which has to be overcome. Another technique is used whenever the number of stations is smaller than the number of slots: then you have got implicit allocation, one slot per station. When the number of stations is small this implicit allocation is there, and the excess slots can be accessed by using slotted ALOHA; that means that part has to be acquired by a contention based technique. This is Binder's scheme.
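As a rough illustration of this implicit reservation behaviour, here is a toy Python simulation; the probabilities and parameters are arbitrary assumptions, not part of any standard. A station that succeeds in a free slot keeps that slot in the following frames, while two stations attempting the same free slot collide and must try again.

import random

def r_aloha(num_slots, stations, num_frames):
    """Toy model of implicit reservation (R-ALOHA)."""
    owner = [None] * num_slots                 # current reservation per slot
    for frame in range(num_frames):
        for slot in range(num_slots):
            if owner[slot] is not None:
                print(f"frame {frame} slot {slot}: {owner[slot]} (reserved)")
                # The owning station releases the slot when it has nothing more to send.
                if random.random() < 0.3:
                    owner[slot] = None
            else:
                contenders = [s for s in stations if random.random() < 0.4]
                if len(contenders) == 1:
                    owner[slot] = contenders[0]    # success: slot is now reserved
                    print(f"frame {frame} slot {slot}: {contenders[0]} acquires slot")
                elif contenders:
                    print(f"frame {frame} slot {slot}: collision {contenders}")
                else:
                    print(f"frame {frame} slot {slot}: idle")

random.seed(1)
r_aloha(num_slots=3, stations=["A", "B", "C", "D"], num_frames=3)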
(Refer Slide Time: 50.05)

In Robert's scheme, as we know, a separate slot known as the reservation slot is used for the purpose of performing the reservation, and then data can be sent in the other slots. There is also a centralized medium access control protocol which is an extension of Robert's protocol. Here there are six reservation mini-slots (Refer Slide Time: 50:23), but one of the ground stations acts as a kind of master which does the reservation, and variable length data can be transferred; this is the fixed priority oriented demand assignment scheme.

Finally you have got the Packet Demand Assignment Multiple Access protocol, PDAMA, where there is a leader control slot and a guard slot. In the leader control slot, acknowledgement is given to the other stations about the reservation of slots, and the guard slot is used as a gap so that the ground stations get some time before reservation can be performed for the next time frame. Then there are the information subframes where the transmission is performed based on the reservations. This gives you a quick summary of the medium access control techniques that are used for satellite communication.
(Refer Slide Time: 51.18)

Before we discuss the questions of this lecture, let me summarize the different systems where medium access control techniques are used. First we discussed the local area networks, where we can use contention based protocols like CSMA and CSMA/CD, Carrier Sense Multiple Access with Collision Detection, and also the token ring, and we have seen their applications in the Ethernet, Fast Ethernet and Gigabit Ethernet techniques.

Then we discussed the wireless LAN, where collision is avoided by using a protocol known as CSMA/CA, and then we discussed the cellular telephone networks, where FDMA, TDMA and CDMA are used. In today's lecture we have discussed the satellite communication system, where we have seen how various medium access control techniques are used for sharing the communication channel. Now I will give you the questions for this lecture.

Review Questions:

1) Distinguish between footprint and dwell time


2) Explain the relationship between the Van Allen belts and the three categories of
satellites.
3) Explain the difference between the iridium and Teledesic systems in terms of usage.
4) What are the key features that affect the medium access control in satellite
communication?
5) What are the possible VSAT Configurations?
(Refer Slide Time: 53.24)

Now it is time to give you the answers to the questions of lecture - 31.

(Refer Slide Time: 54.17)

1) What is the relationship between a base station and a mobile switching center?
A number of base stations are under the control of a single mobile switching center. A base station is equipped with a transmitter and a receiver for transmission to and reception from the mobile stations in its footprint. On the other hand, the MSC coordinates communication among the base stations and the PSTN; it is a computer controlled system responsible for connecting calls, recording call information and also doing the billing.
(Refer Slide Time: 54.58)

2) What is reuse factor? Explain whether a low or high reuse factor is better?
Actually, the fraction of the total available channels assigned to each cell within a cluster is known as the reuse factor. So if you have got N cells in a cluster then the reuse factor is 1/N. The capacity of a cellular telephone system depends on this reuse factor: as N increases you have got a larger number of cells in a cluster, so the fraction of the channels available to each cell decreases; on the other hand, if N is small, each cell gets a larger share of the channels.

(Refer Slide Time: 55.28)

3) What is AMPS and in what way it differs from D-AMPS?

AMPS, the Advanced Mobile Phone System, is a purely analog cellular telephone system developed by Bell Labs, and it was primarily used in North America and some other countries. On the other hand, D-AMPS is a backward compatible digital version of AMPS. We discussed D-AMPS in detail in the last lecture.

(Refer Slide Time: 56.32)

4) What is mobility management?


Mobility management deals with two important aspects: handoff management and location management. Handoff management maintains service continuity, as we have seen, when a mobile station migrates out of the footprint of its current base station into the footprint of another base station. To do this it is necessary to keep track of the user's current location, which is performed with the help of location management. So location management and handoff management together perform mobility management.
(Refer Slide Time: 56.41)

5) What is the maximum number of callers in each cell in a GSM?


As you know, in a multiframe 8 users can transmit in 8 slots, and there are 124 such carrier channels which are used simultaneously. So the total number of callers in a cluster is 124 into 8, and as the cluster size is 7 in GSM, the maximum number of callers in a cell is 124 into 8 divided by 7, which is roughly equal to 141.
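The arithmetic can be checked with a two-line calculation using the figures quoted in the lecture (real deployments vary).

carriers = 124          # frequency channels in the GSM band, as quoted above
slots_per_carrier = 8   # TDMA slots per carrier
cluster_size = 7        # the frequencies are reused every 7 cells

callers_per_cell = carriers * slots_per_carrier // cluster_size
print(callers_per_cell)   # 141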

(Refer Slide Time: 57.13)

6) Distinguish between soft and hard handoff.


As you know, in the case of hard handoff a mobile station communicates with only one base station at a time. So when it moves from one base station to another, it first breaks the connection with the existing base station before establishing a connection with the new base station. On the other hand, in soft handoff a mobile station can communicate with two base stations simultaneously, and the control is gradually transferred from one base station to the other.

With this we come to the end of today’s lecture, thank you.


Data Communications
Prof. A. Pal
Dept. of Computer Science & Engineering
Indian Institute of Technology, Kharagpur

Lecture - 33
Internet and Internetworking

Hello and welcome to today’s lecture on internet and internetworking. Here is the outline
of today’s lecture.

(Refer Slide Time: 01:02)


(Refer Slide Time: 02:05)

After a brief introduction I shall discuss why internetworking is needed at all, and then I shall discuss various issues related to internetworking. You will see that it involves two types of components, one is hardware and the other is software, and in this lecture I shall focus on the hardware components, which are essentially the internetworking devices. There exist various types of devices such as the repeater, the hub, and bridges of different kinds: there are bridges which do forwarding and learning, known as transparent bridges, and there are source routing bridges; then there are switches, routers and gateways. I shall discuss all of them one after the other.
(Refer Slide Time: 02:39)

And on completion, the student will be able to specify the need for internetworking, state various issues related to internetworking, and explain the operation of various internetworking devices like hubs, bridges, and transparent and source routing bridges (the transparent bridges use bridge forwarding and learning); they will also be able to explain switches, routers and gateways.

(Refer Slide Time: 04:06)

Let us look at the possible communication approaches. We have discussed different types of communication systems in the last 32 lectures, and in these 32 lectures we have covered various communication approaches.

The first one is point-to-point. We have seen how two stations can communicate with each other with the help of a point-to-point link such as RS-232C. Then we have discussed different types of switched communication networks, which are also known as WANs, Wide Area Networks, for example the telephone network, X.25, frame relay and so on; these are essentially switched communication networks.

We have also discussed multipoint broadcast networks like LANs, satellite networks and cellular telephone networks; these are essentially multipoint broadcast networks. Thus, these are the three possible communication approaches we have already discussed in detail.

(Refer Slide Time: 04:47)

The LAN technology is designed to provide high speed communication over a small geographical area. So if you look at each of them separately you will find that the reach of a LAN is limited to a small geographical region. On the other hand, WAN technology is designed to provide communication across different cities, countries and continents, but the rate of data transfer is not very high, and it has been observed that isolated LANs and WANs have limited potential and usefulness. Unless these LANs and WANs are interconnected so that they are able to exchange information with one another, the isolated LANs and WANs are of not much use, and this has led to what is known as internetworking.
(Refer Slide Time: 06:57)

What is the internet? The basic objective of the internet is to connect individual heterogeneous networks, both LANs and WANs, distributed across the world, using suitable hardware and software, in such a way that it gives the user the illusion of a single network.

As far as the user is concerned, it will be communicating through a number of systems such as LANs and WANs, but this will be transparent to it; the users will be able to communicate seamlessly, and that is the objective of the internet. This single virtual network is widely known as the internet, which is essentially a network of networks.

As you can see in this diagram, we have several LANs such as LAN1 and LAN2 and a number of WANs, Wide Area Networks, like WAN1 and WAN2, and LAN3; you see several LANs and WANs are interconnected together and these form the internet. Here are the users or stations connected to the different LANs and WANs. Whatever is within this circle is transparent to them; that means they can communicate with each other irrespective of whether the data passes through this LAN or through this WAN, for example this particular station communicating through WAN2, LAN3 and then LAN2, and all this will be transparent to it. That is the objective of the internet, and the way it is achieved is known as internetworking.
(Refer Slide Time: 11:14)

So let us have a look at the various requirements for internetworking. The first objective is to provide a link between networks. You have got isolated networks, LANs and WANs, and you would like to link them together, and over these links to provide a route for delivery of data between processes on different networks. That means processes running on different networks have to be provided a route so that data from one network can be transferred or delivered to another network.

Moreover, another requirement is to provide an accounting service that keeps track of the use of the various networks; a particular communication may be using two LANs and some Wide Area Networks, and for some of them one has to pay, so some accounting is necessary, and it is also required to maintain status information. So, apart from keeping track of what is going on, some status information is maintained so that the users or stations can know what is going on, and if there is anything wrong they can get the information. Then another requirement is to accommodate a number of differences among the networks: whenever you interconnect a number of LANs and WANs of heterogeneous type it is quite natural that they will have many differences. Let us see what possible differences exist.
what possible differences that exist.

The first one is the addressing scheme. We know that for the local area network we have discussed the addressing scheme; the Ethernet LAN uses a 48-bit MAC address, but in other networks it can be different. We have already discussed the token ring and token bus networks; they have addresses, but not the same as Ethernet, so the addressing scheme may be different in different networks, in different LANs and WANs.

Then we have the packet size. The packet size of different networks can be different: in one case it can be 1500 bytes, but in other cases it can be 64 kilobytes. Thus the packet size can vary from network to network. Then we have the network access mechanism. We have discussed various medium access control techniques and we have seen that the medium access control techniques used in different networks are indeed different, and in spite of these differences communication has to be possible.

Time-outs: as a packet passes through a network it is not allowed to stay in the network for a very long time, so a time-out is maintained, and those time-outs will be different in different networks.

Error recovery: we have seen that when you are passing a particular packet through a network, or a number of networks, there is a possibility of error, so whenever an error occurs you have to recover from the error situation; this error recovery can be different in different networks.

Status reporting can also be different in different networks. We have discussed various routing techniques like fixed routing, adaptive routing and so on; the routing approaches can be different in different networks.

User access control can also be different in different networks. The service can be connection oriented or of datagram type, so the services can be different. In spite of all these differences the objective of the internet is to allow communication between two stations in a transparent manner, and obviously the objective of internetworking is to accommodate all these differences.

(Refer Slide Time: 13:31)

Now let us have a look at the various internetworking issues. The first one is addressing. We have seen that whenever two stations want to communicate with each other they must have some address, and in the case of Ethernet we have seen the 48-bit medium access control, MAC, address.
In other networks this addressing scheme can be different, so we have to discuss addressing. Then we have to do packetizing: apart from the data, other information like the source address, the destination address and various other things have to be put together in the form of a packet or frame, and this is known as packetizing.

As we have seen, the maximum packet size that can pass through different networks is not the same, so it may be necessary to fragment, that is, divide a particular packet into a number of smaller packets as it goes through a network; then at the other end, in the destination node, they have to be put together, that is, reassembled. So it will be necessary to perform fragmentation and reassembly, and then you have to do routing as the packet passes through a sequence of networks.
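As a small illustration of fragmentation and reassembly, here is a simplified Python sketch; the MTU value is arbitrary and this is not the exact IP header mechanics, only the idea of cutting a payload into pieces tagged with offsets and joining them again at the destination.

def fragment(payload: bytes, mtu: int):
    """Cut the payload into MTU-sized pieces, each tagged with its byte offset."""
    return [(offset, payload[offset:offset + mtu])
            for offset in range(0, len(payload), mtu)]

def reassemble(fragments):
    """Reorder fragments by offset and join them back into the original payload."""
    return b"".join(data for _, data in sorted(fragments))

packet = b"A" * 1000 + b"B" * 500            # a 1500-byte payload
frags = fragment(packet, mtu=576)            # crossing a network with a smaller MTU
assert reassemble(frags) == packet
print([len(d) for _, d in frags])            # [576, 576, 348]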

Then, for the purpose of reliability, it will be necessary to do flow control, error detection and error control; it will also be necessary to perform congestion control and to ensure quality of service. Then there are other issues like security: network security has to be maintained as the data passes through the networks, with the help of suitable encryption and decryption techniques. Then it is sometimes necessary to use names instead of addresses, because it is difficult to memorize or remember addresses, so sometimes it will be necessary to use names, and then we have to see how the naming is done and how the mapping from name to address is actually performed. So these are the various internetworking issues we have to discuss.

(Refer Slide Time: 15:22)

First we shall focus on the various types of hardware and software needed for this purpose. We will see that we have to use different types of hardware such as repeaters and hubs, which are essentially layer-1 relays, they act as some kind of relay; then bridges and switches, which are layer-2 relays; then routers, which are layer-3 relays; and gateways, which can operate in all the layers, that is, layer-7 relays. So these are the different hardware components we have to use when we do internetworking.

On the other hand you will require software. The most common software that is used today is TCP/IP, the Transmission Control Protocol and Internet Protocol, which operates in two layers. There are four layers in the model, but TCP/IP essentially operates in two layers, TCP in the transport layer and IP in the network layer, and it is a protocol suite for internetworking.

This TCP/IP acts as a glue which binds all the different types of networks into one. So TCP/IP is the software glue which actually puts the different networks together into one network. And of course there are application layer protocols which operate on top of TCP/IP, like TELNET, FTP, SMTP etc. So we will require some hardware and some software. In this lecture we shall mainly focus on the hardware, and in the next two lectures we shall focus on TCP/IP.

(Refer Slide Time: 19:03)

So first let us consider the repeater. A repeater connects different segments of a LAN. We have seen that there is a maximum length for a segment of a network. For example, if you are considering Ethernet, then we know the maximum segment length of 802.3 Ethernet is 500 m, but if you want to have a single network with a network span of more than 500 m then you have to use a device known as a repeater in between two segments. So here we have got two segments, each of which can be 500 m: this is 500 m and this is 500 m, so if you put them together you get 1 km of Ethernet LAN, and as you can see these two segments are linked by a device known as a repeater. What the repeater does is forward every frame it receives: if it receives a frame on this segment it forwards it to segment two, if it receives a frame on that segment it forwards it to segment one, and in this way it does the forwarding.

You may be asking whether it is an amplifier, or in what way it differs from an amplifier. Actually a repeater is not really an amplifier. What an amplifier does is amplify the signal as well as the noise; it does not discriminate between signal and noise. On the other hand, a repeater essentially extracts the signal, removes the noise and regenerates the signal. That means if you are sending a sequence of zeros and ones on this side, it may get submerged in noise, but the repeater will remove the noise, extract the bit sequence and then regenerate it: it brings back the voltage levels corresponding to 0 and 1 and puts them on the other side. In this way it is not really an amplifier but a regenerator; the noise is not transferred from one side to the other, but the bit sequences are transferred in regenerated form. It is a bidirectional link: the transfer of bits can take place from this side to that side as well as from the other side to this side, segment one to segment two and segment two to segment one. So the repeater is a very useful device for expanding a LAN.

Hence, you are not really connecting two LANs by a repeater; essentially you are increasing the span of a single local area network. As you know, in the case of Ethernet you can have at most four repeaters in cascade, giving five segments, so the maximum network span can be 2.5 km. You will require a sequence of repeaters: before the signal level becomes so poor that it is very difficult to extract from the noise, you have to put a repeater. So repeaters are a very useful device for increasing the size of networks.

(Refer Slide Time: 20:12)

Another version of the repeater is usually called a hub. Hub is a generic term meaning a point from which signals are generated and received, but here by hub we commonly refer to a device known as a multiport repeater; essentially it is a multiport repeater. We have seen in the previous diagram that a repeater has got two ports, one on each side, linking two segments, but a hub can have multiple ports. So hubs can be used to create a single LAN having multiple levels of hierarchy. Stations connect to the hub usually with the help of an RJ-45 connector, and as we have already discussed the maximum segment length is 100 m, so it is easy to maintain and diagnose, that means fault detection is easy. That is why this type of topology is becoming more and more popular.
(Refer Slide Time: 21:15)

Here you have got a hub at the upper level which is connected to three hubs below it. Although it is shown as if connected to a single port, a hub has got multiple ports (Refer Slide Time: 20:40), so these three hubs are connected to three different ports. Then at the second level there are a number of stations or computers connected to different ports of the hubs. So it is essentially a multiport repeater, and you can see how in a hierarchical manner you can expand a network and create a big network with a large number of computers. Usually hubs are available with 8 ports, 16 ports, 24 ports and nowadays 48 ports.

(Refer Slide Time: 23:15)


Now the second device that we shall discuss is the bridge. A bridge operates in both the physical and the data-link layer. We have seen that in the case of repeaters no filtering is done. Whatever is received on a port of a hub, say a hub with 8 ports (Refer Slide Time: 21:44), is repeated on all the other ports. As a result a hub is some kind of dumb device; it does not do anything except regeneration of the signals, and the same signal is repeated on all the other ports, so it does not do any filtering or routing.

On the other hand, a bridge uses a table for filtering and routing. That means if a packet arrives on a port, a bridge will not forward it to all the other ports, which is what a hub does. However, the bridge does not change the physical addresses in a frame. As we know, a frame will have the source address, destination address, data, CRC and so on; the bridge does not change this content, but it does perform filtering, which we may call routing as well. There are two types of bridges: one is known as the transparent bridge and the other is known as the source routing bridge. Let us see the operation of these two types of bridges.

(Refer Slide Time: 23:25)

So here the operation of a bridge is shown. As you can see, the 48-bit MAC addresses are represented in hex form. In this particular case a bridge is shown with two ports, but as I said a bridge can have many ports. On port one, LAN 1 is connected, to which two stations are connected with the MAC addresses as given, so a table of address and port number is maintained, with these addresses entered against port one.

On port two again two other stations are connected with the addresses shown (Refer Slide Time: 24:15), so those entries are associated with port two. This kind of table is maintained in a bridge, and this table is used for the purpose of filtering, or you may call it routing.

The functions of the transparent bridge: all stations are unaware of what is going on inside the bridge; it acts as some kind of plug-and-play device. When a bridge is installed there is no need to do any reconfiguration or initialization; nothing is required, you simply plug it in and it will start operating, and the table will be created automatically.

(Refer Slide Time: 25:09)

So, reconfiguration of a bridge is not necessary; it can be added and removed without being noticed. The question arises: how does it really work so that it behaves like a transparent bridge? Actually it performs two important functions. The first function is known as forwarding of frames; this is the key function it performs. The other operation is known as learning, which is necessary for forwarding to be done; this learning is done to create the forwarding table, the table that I have just shown.
(Refer Slide Time: 28:24)

What is being performed in bridge forwarding? It essentially performs three functions. The first is to discard the frame if the source and destination addresses are on the same port, that is, if the source and destination stations are attached to the same segment.

(Refer Slide Time: 30:14)

For example, we have seen that these two addresses are connected to the same port. So, if a frame is generated on port one with a destination address that is also on port one, suppose this station is sending a packet to that station (Refer Slide Time: 26:47), then the bridge will discard that packet, because it is communicated directly between the two stations and need not be forwarded to the other port of the bridge.

The second operation is forwarding the frame, if the source and destination addresses are on different ports and the destination address is present in the table. Suppose the destination address corresponds to the other port: if it is present in the table and the port number is known, then the bridge simply forwards the frame. Suppose this station is sending a frame whose destination address is present in the table and the table shows that it is on port number two; the frame is then forwarded on port number two. So this is how it works: the second operation is 'forward', and the first one is 'discard'. The third case is that a particular destination address may not be present in the table.

In the beginning, whenever you plug a bridge into the network, the table will be empty, so the destination address will not be found. In such a case what the bridge does is forward the frame on all the other ports except the one on which it arrived; this is known as flooding. Flooding is done, and obviously the frame will be delivered to the destination.

(Refer Slide Time: 30:26)

For the successful operation of bridge forwarding, the bridge learning process is performed. Here, as you can see, when a particular station sends a packet, the station's MAC address is automatically recorded in the table along with the port number: the source address is looked up in the forwarding database, and if it is not found, that source address is added to the database with its direction (port) and a new timer. The bridge maintains a timer for each entry. The reason for the timer is that the table is not really static; the table that is created keeps on changing with time. Suppose a particular station does not send any packet for a long time; in such a case, after a few minutes the bridge checks the timer, and when the time-out occurs that station's MAC address is removed from the table. Whenever a frame is received whose source MAC address is already present in the database, its timer and its direction are updated. That means you can move a station from one place to another, for example from here to there (Refer Slide Time: 30:02); after the station is connected on the other side it sends a frame and the table is automatically updated.

So, apart from building the table, the bridge also maintains the timers, and whenever a time-out occurs the corresponding addresses are removed from the table. This is how the bridge learning operation is performed. Bridge learning and bridge forwarding together make up the transparent bridge operation.
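
To make the two operations concrete, here is a minimal Python sketch of the learning and forwarding logic just described. It is only an illustration, not the 802.1D implementation: the class name, the 300-second ageing value and the example addresses are assumptions chosen for the sketch.

    import time

    class TransparentBridge:
        # Sketch of the learning/forwarding behaviour of a transparent bridge.
        def __init__(self, ageing_time=300):           # entries expire after an assumed 300 s
            self.table = {}                             # MAC address -> (port, last-seen time)
            self.ageing_time = ageing_time

        def receive(self, src, dst, in_port, all_ports):
            now = time.time()
            self.table[src] = (in_port, now)            # learning: record/refresh the source
            for mac, (port, seen) in list(self.table.items()):
                if now - seen > self.ageing_time:       # remove entries whose timer has expired
                    del self.table[mac]
            entry = self.table.get(dst)
            if entry and entry[0] == in_port:
                return []                               # discard: destination on the same port
            if entry:
                return [entry[0]]                       # forward on the known port
            return [p for p in all_ports if p != in_port]   # unknown destination: flood

For example, bridge.receive("71:2B:13:45:61:41", "64:2B:13:45:61:12", 1, [1, 2]) would flood the frame out of port 2 until the second (hypothetical) address has been learned, after which frames are forwarded or discarded according to the table.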

(Refer Slide Time: 30:57)

And here we can see how the bridge forwarding table is created. This is the initial condition: a bridge has been plugged in and connected to four LANs, LAN 1, LAN 2, LAN 3 and LAN 4. Initially the table is empty. Now, whenever A sends a packet the bridge notices it and updates the table, and whenever C sends a packet it notices that as well and adds the corresponding entries for A and C.

Essentially we have to write the 48-bit MAC addresses if it is Ethernet. In this way, in the long run, all the stations, if they are active, get their entries in the table along with the port number. So after all the stations have sent a frame, this will be the situation of the table. I have not shown the timer here, but the timer value is also maintained. And if, say, station C is turned off, then automatically after some time its entry will be removed from the table. So this is how the bridge forwarding table is created in the transparent bridge operation.
(Refer Slide Time: 33:34)

These forwarding and learning processes work fine as long as there is no redundant bridge in the system, so that only one path exists from one station to another; in such a case there is no problem. For a unique path it is necessary that there is no redundant bridge: from one station to another there is a unique path, possibly through a number of bridges (Refer Slide Time: 32:45). However, from the viewpoint of reliability it is necessary to have multiple paths. The reason is that if one bridge becomes faulty and there is no redundant path, then the source stations will not be able to send. So, for the sake of reliability, it is necessary to have redundancy.

To get the best of both worlds, that is, to retain the advantage of the transparent bridge and at the same time keep the redundancy, we have to deal with the loop problem that redundancy creates. Let us first see how the loop problem arises.
(Refer Slide Time: 34:52)

So here you can see there are two LANs, LAN X and LAN Y, and these are connected by two bridges, bridge A and bridge B. Whenever station A sends a frame on LAN X, both bridge A and bridge B receive it, and since the destination address is not known, each of them forwards it onto LAN Y. Now this frame (Refer Slide Time: 34:25) on LAN Y is received by the other bridge, which forwards it back onto LAN X. In this way a loop occurs: the frame sent by station A goes through bridge B to LAN Y, is then forwarded by bridge A, then again by bridge B, and so on. So a frame can keep on looping indefinitely within the network before it is delivered. This is the typical loop problem, and we have to avoid it. The loop problem can be avoided by using a technique known as the spanning tree.
(Refer Slide Time: 35:39)

A spanning tree is essentially a subgraph in which there is no loop. So a spanning tree topology is created. The methodology for setting up a spanning tree is known as the spanning tree algorithm. There exist standard, efficient algorithms for creating a spanning tree of a graph, and we can use them for transparent bridges without changing the physical topology: a logical topology is created that overlays the physical one. Let us see how exactly this is done here.

(Refer Slide Time: 38:59)

Here you see you have got LAN 1, to which two bridges are connected; LAN 2, to which one bridge is connected; LAN 3, to which three bridges are connected; and LAN 4 and LAN 5, to each of which one bridge is connected. So in this way you have got multiple paths and multiple bridges in this network, which connects five different LANs. Now, the spanning tree algorithm identifies a root bridge, then it identifies the designated bridges, the forwarding ports and the blocking ports. I am not discussing the algorithm which makes use of these terminologies, but the outcome is shown here. For example, this is the root bridge: if you have multiple bridges, the bridge having the lowest MAC address (bridge ID) is chosen as the root bridge. So this one is considered the root bridge.

Here this particular port (Refer Slide Time: 36:51) is a blocking port. On the other hand, in bridge B1 both the ports are forwarding ports. Then if you look at this other bridge you can see that this is the forwarding port and this is the blocking port; the dotted lines represent blocking ports, which means those connections are not used for forwarding purposes. Whenever you look at the result, you find that from any LAN to any LAN there is a unique path.

For example, from LAN 1 if you want to go to LAN 2 you only have to go through bridge B1, and from LAN 2 to LAN 3 the frame has to follow the unique sequence of forwarding bridges shown in the figure. Or if LAN 2 wants to communicate with some stations on LAN 5, then although there is another physical path, forwarding will take place only along the forwarding ports, and the path is again unique. So a logical topology is created on top of the physical topology, and the dotted lines are not used for forwarding purposes; they are blocked. That is the spanning tree approach.

However, if for some reason a particular bridge becomes faulty, then the spanning tree will automatically be modified: some of the ports that were blocking will become forwarding ports, and the ports of the faulty bridge will no longer be used. In this way the spanning tree is created, and using this spanning tree the transparent bridge will work.
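
The standard itself runs a distributed protocol, but the flavour of the computation can be shown with a very small sketch. This is only an illustration under simplifying assumptions: the bridge with the lowest ID is taken as root, path costs are ignored, and the topology in the example is hypothetical, not the one in the slide.

    from collections import deque

    def spanning_tree(links):
        # links maps a bridge ID to the set of LANs it attaches to.
        # Elect the lowest bridge ID as root, then walk breadth-first, keeping the first
        # bridge-to-LAN connection that reaches each LAN in the forwarding state.
        root = min(links)
        forwarding = set()                      # (bridge, lan) pairs left forwarding
        visited, reached = {root}, set()
        queue = deque([root])
        while queue:
            bridge = queue.popleft()
            for lan in sorted(links[bridge]):
                if lan in reached:
                    continue                    # another bridge already serves this LAN
                reached.add(lan)
                forwarding.add((bridge, lan))   # designated port for this LAN
                for other in sorted(links):
                    if other not in visited and lan in links[other]:
                        visited.add(other)
                        forwarding.add((other, lan))   # root-facing port of the newly reached bridge
                        queue.append(other)
        every_port = {(b, l) for b in links for l in links[b]}
        return forwarding, every_port - forwarding     # second set: blocking ports

    fwd, blocked = spanning_tree({"B1": {1, 2}, "B2": {1, 3}, "B3": {2, 3}})

In this tiny hypothetical topology the connection of bridge B3 to LAN 3 ends up blocked, which removes the loop while every LAN remains reachable.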
(Refer Slide Time: 39:37)

Apart from transparent bridges there is another routing technique, known as source routing. In source routing the source itself decides the route that a packet will follow through a number of bridges, so all the intermediate route designators are provided by the source station.

The packet format is shown here. As we have seen, normally the destination and source addresses are provided; but in this particular frame, apart from the destination address and source address, the routing information contains a number of route designators, for example route designator one, route designator two, up to route designator m, and there are also 16 bits for route control. With the help of these designators the route can be identified. The source station provides all the information for routing, specifying the bridges through which the packet will pass one after the other; the forwarding itself is therefore quite simple. However, the source node has to gather all this information from other parts of the network before the frame can be built and sent to the designated station.
(Refer Slide Time: 43:23)

Switches: We have seen the function of a bridge. What is a switch?


A switch is essentially a fast bridge. We have seen that a bridge performs the forwarding or routing function; this may be done in software, but then it will be quite slow. Instead of doing it in software it can be done in hardware, and then it can be done very fast; that is precisely what is done in switches. A switch uses the same technique: it maintains a table or directory of addresses and port numbers, and each frame is forwarded to the proper port after examining its destination address.

I have already discussed the three techniques: cut-through, collision-free and fully buffered. In cut-through switching, forwarding starts as soon as the destination address, which is at the beginning of the frame, has been received. On the other hand, to make sure a frame is collision-free, particularly in the context of Ethernet, you have to receive at least 64 bytes; after receiving 64 bytes, if there has been no collision, the frame is forwarded, and we say the frame is collision-free, but error detection is not performed.

To perform error detection you have to receive the entire frame and compute the CRC; only if the frame is found to be error-free is it forwarded. This is the third case, the fully buffered approach, which forwards only collision-free and error-free frames. These are the three approaches used. Obviously the cut-through approach has the smallest delay, but it suffers from the disadvantage that it does not do collision detection or error detection: a frame that has suffered a collision, or that is not error-free, can still be forwarded. Collision-free switching has a medium delay, since a frame is sent only after receiving 64 bytes, and the fully buffered approach has a long delay because it has to receive the entire frame, compute the CRC and then forward it.
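
The trade-off between the three modes can be summarised in a few lines of Python; the byte counts assume an Ethernet frame (6-byte destination address, 64-byte minimum frame, CRC at the end) and the function name is only illustrative:

    def bytes_needed_before_forwarding(mode, frame_length):
        # How much of an incoming Ethernet frame a switch must see before it may forward it.
        if mode == "cut-through":
            return 6                   # destination address only: lowest delay, no checks
        if mode == "collision-free":
            return 64                  # minimum frame size, so a collision fragment is never forwarded
        if mode == "fully-buffered":
            return frame_length        # whole frame, so the CRC can be verified first
        raise ValueError("unknown mode")

    for mode in ("cut-through", "collision-free", "fully-buffered"):
        print(mode, bytes_needed_before_forwarding(mode, 1500))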
(Refer Slide Time: 44:39)

Now, coming to the third-layer device, the router: a router is a layer-3 switch, it operates in layer 3, and the addresses it uses are IP addresses, which I shall discuss in detail in the next lecture.

This is the schematic diagram of a router. As you can see, a router has got a number of input ports and a number of output ports, as shown here; to the outside world the ports are essentially bidirectional, but for the purpose of understanding the functionality we show input ports on one side and output ports on the other. There is a routing processor and a switching fabric, which is a special piece of hardware that can move packets from one side to the other very quickly. This is the switching fabric (Refer Slide Time: 44:31). So this is the schematic diagram of a router, which performs the routing of packets in layer 3.
(Refer Slide Time: 45:20)

As we have seen, a router has got four basic components. The input port performs the physical and data link layer functions of the router; the ports are also provided with buffers to hold packets before forwarding them to the switching fabric. The output port performs the same functions as the input port, but in the reverse order. The routing processor performs the functions of the network layer; this processing involves table lookup, as we have already seen. The switching fabric moves the packet from the input queue to the output queue using specialized mechanisms. These are the different functions shown here.

(Refer Slide Time: 45:58)


So here the input port receives the packet through the physical layer processor, then the data link layer processor does the data link layer processing, and then the packet is held in a queue before it can be passed to the switching fabric. Similarly, whenever a packet goes to the output port, a queue is maintained, then the data link layer processor does its processing, and then the packet goes to the physical layer processor before it is launched on the transmission medium.

(Refer Slide Time: 46:13)

And here is a typical switching fabric that is used. This is the special piece of hardware which moves packets from the input to the output very quickly.

(Refer Slide Time: 47:34)


Here a simple internet is shown, where you have got a ring LAN, a switched Wide Area Network (WAN), a bus-type Local Area Network (LAN), two point-to-point links between two WANs, and here you have got another LAN.

You will notice that here (Refer Slide Time: 46:54), where the internetworking is being done, we have used a different kind of notation, such as 240.34.8.24. We have not yet discussed what these are; they are IP addresses, which are used in internetworking and which are part of TCP/IP. In the next lecture I shall discuss the use of these IP addresses, which come into play when data is transferred from one LAN to another LAN through a wide area network, or between two wide area networks.

(Refer Slide Time: 49:40)

Here is the summary of what we have discussed today. We discussed three different pieces of hardware: first the repeater or hub, which operates in the physical layer and can be considered a layer-1 relay; then the bridge, which operates in the data link layer and can be considered a data link relay or layer-2 relay; then the router, which operates in the network layer and can be considered a layer-3 relay.

Another type of hardware, which I have not discussed in detail, is known as the layer-7 relay or gateway. This works above the network layer, up to the application layer. Such a gateway can be necessary when routing is performed not only based on MAC address or IP address but on the contents of the data, for example the content of an e-mail; this kind of application gateway may be necessary when implementing firewalls for the purpose of security.

Although for normal routing purposes a gateway is not necessary, in special situations such as implementing a firewall for the purpose of security this kind of device is needed. It operates in layer 7, that means in the application layer, and it can look at the contents of the data. Then we have the software, which acts as a glue to link different types of LANs and WANs to provide the internet, a single integrated network for seamless communication.

(Refer Slide Time: 50:37)

And here is a pictorial representation of the layer-1 relay, the function of repeaters and hubs. As you can see, this is one system, say user A or station A, and here is another system, say user B or station B; they are communicating with each other, and there is a repeater in between. The repeater essentially performs bit-by-bit regeneration: it receives the signal bit by bit, regenerates it and transfers it bit by bit, so it operates in the physical layer. It is connected to user A as well as to user B, and communication takes place in the physical layer.
(Refer Slide Time: 51:55)

A bridge is essentially a layer-2 relay which links similar or dissimilar LANs. That means with the help of a bridge you can connect two Ethernet LANs, or one Ethernet LAN and, say, one token ring LAN; two similar or dissimilar LANs can be connected. Here we are not really extending a single LAN; two dissimilar LANs, or two similar LANs such as two Ethernet LANs, are connected with the help of a bridge, and it operates in the data link layer. As you can see, this is one side and this is the other side (Refer Slide Time: 51:23) which are linked; this is one user and this is another user, linked through the bridge. The frame goes from one side to the other through the bridge, which operates in two layers. It is designed to store and forward frames: it receives a frame, does some error checking and other processing, and then forwards it. It is protocol independent and transparent to the end stations.
(Refer Slide Time: 52:40)

On the other hand, a router is a layer-3 relay which links dissimilar networks. It acts on network layer packets and is not transparent to the end stations; that means it can modify the contents of the frame and change the addresses, so it is not really transparent to the end stations. It isolates LANs into subnets to manage and control network traffic. So routers perform very important functions. The networks are linked with the help of these kinds of devices, which are nothing but routers (Refer Slide Time: 52:36).

(Refer Slide Time: 53:10)

Finally, this is the gateway which, as I mentioned, works above the network layer, up to the application layer. Here you see the gateway operating in the application layer: two systems are connected through a gateway and communication is performed through it. The data goes all the way up to the application layer in the gateway, then comes down and goes to the other side. This is how the gateway works.

(Refer Slide Time: 53:49)

Now it is time to give you the review questions.

1) Why do you need internetworking?


2) Why is a repeater called a layer-1 relay?
3) What is a bridge and how does it operate in the internetworking scenario?
4) What limitation of the transparent bridge protocol is overcome by the source routing
protocol?
5) What limitations of a bridge are overcome by a router?
These are the five questions based on this lecture, to be answered in the next lecture.
(Refer Slide Time: 54:33)

Here are the answers to the questions of lecture – 32.


1) Distinguish between footprint and dwell time.
Signals from a satellite are normally aimed at a specific area called the footprint, so the footprint is the area covered. On the other hand, the amount of time a beam is pointed at a given area is known as the dwell time. That means a satellite can focus its beam on an area for some duration, which is of the order of milliseconds, for transfer of data, and then it focuses on some other area; this duration is known as the dwell time.

(Refer Slide Time: 55:29)


2) Explain the relationship between the Van Allen belts and the three categories of
satellites.
Van Allen belts are two layers of high-energy charged particles around the earth. We have seen that there are two Van Allen belts. The three orbits, LEO, MEO and GEO, are positioned below, between and above these two belts. They are chosen in such a way that the satellites are not damaged by the charged particles of the Van Allen belts. So this is the relationship between the different types of orbit and the Van Allen belts.

(Refer Slide Time: 56:14)

3) Explain the difference between the iridium and the Teledesic system in terms of
usage.
The Iridium project was started by Motorola in 1990 with the objective of providing worldwide voice and low-rate data communication services using handheld devices. On the other hand, the Teledesic project was started in 1990 by Craig McCaw and Bill Gates with the objective of providing fiber-optic-like broadband communication. This is the difference in terms of usage: Iridium was aimed at voice and low-rate data, while the Teledesic system was intended to provide an 'internet in the sky'.
(Refer Slide Time: 56:43)

4) What are the key features that affect the medium access control in satellite
communication?

There are four features: long round-trip propagation delay, which is about 470 milliseconds; an inherently broadcast medium; low privacy and security; and the fact that the cost of communication is independent of distance. These are the four features that you have to take into consideration.

(Refer Slide Time: 57:23)


5) What are the possible VSAT configurations?

Possible implementation configurations are: one-way, used in broadcast satellite services, for example the direct-to-home service and satellite television distribution systems which are used nowadays; split two-way, in which the VSAT does not require uplink transmission capability, which significantly reduces the cost; and two-way, which is very popular, where either all the traffic is routed through a master control station or each VSAT has the capability to communicate directly with any other VSAT. With this we conclude today's lecture; the next two lectures will be on TCP/IP. Thank you.
Data Communication
Prof. A. Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur

Lecture - 34
TCP/ IP - I

Hello and welcome to today’s lecture on TCP/IP.

(Refer Slide Time: 01:00)

We shall continue our discussion on internet and internetworking. Here is the outline of
today’s talk.
(Refer Slide Time: 02:38)

First we shall discuss the relationship between TCP/IP and the popular OSI model, then we shall discuss various issues related to internetworking, particularly IP addressing, and in that context subnet masking, and also the address resolution protocols ARP and RARP; then we shall consider fragmentation and reassembly in IP. Also, there are several companion protocols, like ICMP and IGMP, which are related to internetworking and are used along with the Internet Protocol (IP); we shall discuss these as well. Then we shall discuss two protocols, SLIP and PPP, which are used by home users for connecting their computers to an internet service provider using IP. And although at present IPv4 is commonly used, it has got a number of limitations which can be overcome by the use of IPv6, but IPv6 is yet to become popular. In this lecture I shall just give an overview of IPv6.
(Refer Slide Time: 03:50)

And on completion the students will be able to explain the relationship between TCP/IP
and the OSI model, they will be able to explain the different classes of IP addresses and
they will be able to explain the concept of subnetting and subnet masking and they will
be able to explain Address Resolution Protocol and Reverse Address Resolution Protocol
ARP and RARP. They will be able to explain fragmentation and reassembly that is
necessary while doing internetworking and they will be able to explain ICMP and IGMP
protocols and how they are used in conjunction with the IP protocol.

Then they will be able to explain the SLIP and PPP protocols and they will be able to
explain how they are used for internet access by using IP. They will also be able to state
the key features of IPv6. As I mentioned in the last lecture you will require a combination
of hardware and software for internetworking.
(Refer Slide Time: 04:36)

And in the last lecture we discussed various hardware like repeaters and hubs, bridges, routers, gateways and so on; these are the different pieces of hardware that internetworking requires. In today's lecture we shall discuss the software, particularly the protocol suite known as TCP/IP. As I mentioned, this acts as a glue to link different types of LANs and WANs to provide the internet, a single integrated network for seamless communication. So TCP/IP is essentially software, a set of protocols. We shall see the relationship between TCP/IP and the OSI model.

(Refer Slide Time: 07:14)


Although TCP/IP was developed much earlier than the OSI model, we always try to map a particular protocol suite onto the OSI model. Let us see what the relationship is between TCP/IP and the OSI model.

As you can see, TCP/IP has got only four layers, in contrast to the seven layers of the OSI model. For the lowest layer no specific technology is mandated; it corresponds to a combination of the physical and data link layers, and one is free to use any type of network, like ARPANET, a Local Area Network (LAN), a satellite network, SONET, or simple serial links like SLIP and PPP, which provide the physical and data link layers. So the physical and data link layers together form one layer in TCP/IP.

Then there is a layer corresponding to the network layer, which is IP, the Internet Protocol, and it has got several companion protocols like ARP, RARP, ICMP and IGMP, which we shall discuss in this lecture. Then there is TCP or UDP, which can be considered the transport layer protocol. Finally, the session, presentation and application layers are not present separately in TCP/IP; there is a single application layer which provides a number of protocols for various applications, like SMTP for transfer of mail, FTP for transfer of files, TELNET for remote login, SNMP (Simple Network Management Protocol) for network monitoring and management, and TFTP. These are the various application layer protocols which we shall discuss in the next lecture.

(Refer Slide Time: 08:27)

Then, as I mentioned, there are a number of internetworking issues like addressing, packetizing, fragmentation and reassembly, routing, flow control, error detection and error control, congestion and quality of service control, security and naming. All these functionalities are provided by two protocols, one is IP and the other is TCP, and that is why they are paired together to form TCP/IP. In particular, IP provides an unreliable, connectionless, best-effort datagram delivery service; so, if you want reliability, then you have to use TCP, and TCP provides reliable, efficient and cost-effective end-to-end delivery of data. These two together give you host-to-host communication in a reliable manner. IP in particular provides the functionalities of addressing, packetizing, fragmentation and reassembly, and routing, which we shall discuss in this lecture.

(Refer Slide Time: 09:43)

First let us focus on addressing. In TCP/IP you will encounter three different types of
addresses. As you know if you want to send a letter you have to write the address of the
destination and then put the letter in the letter box. Similarly, if you want to send any
information you have to know the address of the destination where you want to deliver.
In TCP/IP you will encounter three different types of addresses. We have already seen the physical layer address, also known as the MAC (medium access control) address; for Ethernet it is 48 bits. Then IP provides the internet address, or IP address in short, which is 32 bits. On the other hand, TCP requires a 16-bit address which is known as the port address. So you will encounter three different types of addresses in TCP/IP. We have already discussed the physical or MAC address.
(Refer Slide Time: 11:06)

For example, the physical address comprises 48 bits, and here is its format. Apart from the 46 address bits, the first bit decides whether it is an individual address or a group address: you use the individual address for unicast, and whenever you are sending to a group of stations you use a group (multicast) address. There is another bit, U/L, which indicates whether it is a globally (universally) administered address or a locally administered address: when it is 1 it corresponds to locally administered, and when it is 0 it corresponds to universally administered.

We also see that whenever broadcasting has to be done, all the bits have to be 1; each station on the network then receives and accepts the frame. So, with the help of the MAC address you can do unicasting (one to one), broadcasting (one to all) or multicasting (one to a group of stations); you can have three types of delivery using MAC addresses over a single network or LAN.
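
These bits are easy to inspect programmatically. The following small sketch (the addresses used are hypothetical examples) checks the individual/group bit, the universal/local bit and the all-ones broadcast pattern of a MAC address written in the usual colon-separated hex form:

    def mac_properties(mac):
        octets = [int(x, 16) for x in mac.split(":")]
        return {
            "group": bool(octets[0] & 0x01),             # I/G bit: 1 means a group (multicast) address
            "local": bool(octets[0] & 0x02),             # U/L bit: 1 means locally administered
            "broadcast": all(o == 0xFF for o in octets)  # all 48 bits set to 1
        }

    print(mac_properties("FF:FF:FF:FF:FF:FF"))   # broadcast: a group address with all bits 1
    print(mac_properties("00:1A:2B:3C:4D:5E"))   # ordinary unicast, universally administered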
(Refer Slide Time: 13:19)

However, we are interested in internetworking, and obviously we would like to communicate between a number of LANs, or between LANs and WANs, and so on; in such a case we have to use an internet address. An internet address, as I mentioned, comprises 32 bits. These 32 bits have three different fields: the first few bits give the class type. The addresses are divided into five classes, A, B, C, D and E, which are specified by the first few bits.

For example, if the first bit is 0 then it is a class A address; if the first bit is 1 and the second bit is 0 then it is a class B address; if the first and second bits are both 1 and the third bit is 0 then it is a class C address; if the first three bits are 1 and the fourth bit is 0 then it is a class D address; and if the first four bits are all 1 then it is a class E address. Apart from this class part, the remaining bits are divided into two fields: one is known as the network ID (net ID) and the other is the host ID. And as you can see (Refer Slide Time: 12:21), in the different classes, particularly A, B and C, the sizes of the net ID and host ID are different.

In case of class A the net ID comprises only seven bits after the leading 0, so there are 8 bits of net ID including the most significant bit, and the host ID comprises 24 bits.

On the other hand, in class B you have got 1 0 followed by 14 bits of net ID, and you have got 16 bits of host ID. In class C the first three bits are 1 1 0, the next 21 bits are the net ID and 8 bits are the host ID. In class D the first four bits are 1 1 1 0 and the remaining 28 bits provide the multicast address. Class E starts with 1 1 1 1 and is reserved for future use; it is not presently in use.
(Refer Slide Time: 16:15)

There are two ways you can write an IP address. One is in binary form, where you have a sequence of ones and zeros; for convenience, as you can see, it is divided into four octets, each of 8 bits. The other notation which is commonly used is known as dotted decimal notation. In dotted decimal notation each octet of 8 bits is represented by its decimal value instead of in binary, so the range of the number in each field is 0 to 255, which is what 8 bits can represent. Writing the four octets by their decimal values, each separated by a dot, is known as dotted decimal notation.

Using this dotted decimal notation, the ranges of addresses for the different classes are given here. For example, in case of class A the range is 1.0.0.0 to 127.255.255.255. In case of class B it varies from 128.0.0.0 to 191.255.255.255. So you see whether an address corresponds to class A or class B can be found out by looking at the first field of the dotted decimal notation, the first decimal number. If it is between 128 and 191 it belongs to class B; if it lies within 1 to 127 it is class A; class C runs from 192 to 223, class D from 224 to 239, and class E from 240 to 255. This is the range of numbers used for each class; class E, of course, is not in use.
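
Because the class is determined entirely by the first dotted-decimal field, the rule can be written in a few lines; this is only a sketch of the classification just described:

    def ip_class(address):
        first = int(address.split(".")[0])
        if first < 128:
            return "A"     # leading bit 0
        if first < 192:
            return "B"     # leading bits 10
        if first < 224:
            return "C"     # leading bits 110
        if first < 240:
            return "D"     # leading bits 1110 (multicast)
        return "E"         # leading bits 1111 (reserved)

    print(ip_class("144.16.5.210"))   # prints 'B'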
(Refer Slide Time: 17:29)

Now, you may be wondering why 0 does not appear in class A. There are some special IP addresses apart from the general cases. For example, if the address is all 0s it means 'this host', and it is commonly used at the time of booting. If a packet is to be sent to a host on this network, the net ID part can be all 0s and the remaining bits carry the host ID. And if it is a broadcast on this network, that is, you are sending to all the computers or systems on the network, then the address can be all 1s.

So you have to exclude all zeros and all ones whenever you count the number of possible addresses. On the other hand, for a broadcast on a distant network you give the net ID of that network and set the host ID field to all 1s. And whenever you want to do a loopback, and you want to receive on the same host, the first field is 127 and the remaining bits can be anything. These are some of the special IP addresses.
(Refer Slide Time: 19:00)

Now you may be asking: when an IP address is used, how does a router decide that a particular packet is meant for a particular network? This is done with the help of a technique known as masking.

As you can see, this is the IP address (Refer Slide Time: 17:48), and the router has to filter out the packets corresponding to this network, so it does not bother about the host ID, it only bothers about the net ID. What is done is that a bitwise AND operation is performed with a mask to keep only the net ID part. Here the mask is 255.255.0.0, that means the first sixteen bits are 1 and the remaining bits are 0. ANDing this mask with 144.16.5.210 filters out the net ID, giving 144.16.0.0; this is then looked up in the table of the router, and if there is a match the packet is accepted, otherwise it is not allowed to pass through the router. If it matches, the router sends it to this network.
(Refer Slide Time: 20:35)

It has been found that this particular technique essentially uses two hierarchical levels of addressing, the net ID and the host ID, and unfortunately the number of hosts per network is fixed by the class you choose.

For example, if class A addressing is used, only 8 bits are used for the net ID and the remaining 24 bits form the host ID. So, if a class A address is taken and the number of hosts in the network is small, then a large number of addresses are wasted. If a class B address is taken, a good number of addresses may still be wasted, because with 16 bits of host ID you can have 64K addresses, but a single network usually does not have 64K hosts. On the other hand, if you take a class C address, the total number of host addresses available is only 254, excluding the all-0 and all-1 (broadcast) patterns. So, if to begin with you start with class C, after some time it may not be adequate and you have to apply for another address.
(Refer Slide Time: 20:56)

This problem can be avoided by introducing an intermediate level of hierarchy in the form of subnetting. In subnetting the address is divided into three parts, the net ID, the subnet ID and the host ID, as shown here.

(Refer Slide Time: 23:20)

So suppose we take the address 144.16.5.210; obviously we are using class B addressing, so the network address is 144.16.0.0. However, within the network, 8 bits are used as the subnet ID and the remaining 8 bits are used as the host ID, the identification of the host. By doing this, to the external world it is the same single network, as you can see here (Refer Slide Time: 21:52), but locally you can have a number of subnets. This is achieved by a special type of masking known as subnet masking: instead of sixteen 1s you use twenty-four 1s in this particular case, so that when an address arrives, ANDing it with the twenty-four 1s gives an address like 144.16.5.0, and the packet is sent to the corresponding subnet.

As you can see, this router can send to all the hosts on subnet 144.16.6.0 with the help of the subnet mask 255.255.255.0, that means you are using twenty-four 1s in the first part of the mask. In this way you can do subnet masking and you can have a number of subnets within a single network; to the outside world it is essentially one network, 144.16.0.0, but within the network you can have different subnets. For example, different departments within IIT can have separate subnets, while to the outside world it is a single network having the address 144.16. This is the concept of subnet masking that is used.
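
The masking itself is nothing more than a bitwise AND of the address with the mask, applied octet by octet, as this small sketch shows (the addresses are the ones used in the example above):

    def apply_mask(address, mask):
        a = [int(x) for x in address.split(".")]
        m = [int(x) for x in mask.split(".")]
        return ".".join(str(x & y) for x, y in zip(a, m))   # bitwise AND, octet by octet

    print(apply_mask("144.16.5.210", "255.255.0.0"))     # default class B mask -> 144.16.0.0
    print(apply_mask("144.16.5.210", "255.255.255.0"))   # subnet mask          -> 144.16.5.0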

(Refer Slide Time: 25:26)

Now comes the question of another protocol, known as the Address Resolution Protocol. As you know, for delivering a packet to a host or a router within a network, at the data link layer it is necessary to know the MAC or physical address. So, if a particular host or router does not know the physical address of the destination device, how will it find it? This is provided by ARP, the Address Resolution Protocol, in a dynamic manner: it performs dynamic mapping, finding the physical address corresponding to a known IP address. That means the IP address of the destination is known, but for delivering the packet it is necessary to use the physical address, and the physical address can be obtained by using the ARP protocol.

This is done in two steps. The ARP request is a broadcast: the requesting host sends a broadcast message to all the stations. Suppose A is the host which is looking for the destination's physical address; it sends an ARP request packet on the network as a broadcast message, so it is received by all the hosts on that network. Then the host whose IP address matches the destination IP address in the request sends the reply; in this case it is unicast, that means one to one, so it sends a packet to host A providing its physical address.

(Refer Slide Time: 27:21)

For that purpose a special type of frame is used. The ARP packet is encapsulated in a MAC frame. As you can see (Refer Slide Time: 25:37), this is the typical Ethernet frame: the preamble, then the Start Frame Delimiter (SFD), then the destination address and source address, which are 48-bit addresses, and in this case the type field specifies that the data carries an ARP packet. Inside the ARP packet, as you can see, there are fields corresponding to the hardware (physical) address and the protocol (IP) address, of six bytes and four bytes respectively, as well as the target hardware address and target protocol address.

Initially the target hardware address is set to all 0s, and when the reply is returned the proper target hardware address is filled in. This is how, with the help of ARP, a particular station can learn the physical address if it knows the IP address. The reverse operation is performed by RARP. In case of RARP the physical address is known but the IP address is not known. For example, this can happen in a diskless station: at the time of booting it needs to learn its IP address although it knows its physical address. Therefore, to learn the IP address it sends a request to the server and the server sends a reply with the IP address. Presently, however, RARP is gradually becoming obsolete and is being replaced by another protocol known as DHCP, which we shall discuss later on.
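
The request/reply exchange can be mimicked with a toy resolver; in reality the request travels as a broadcast frame on the LAN and the reply as a unicast frame, but here a dictionary stands in for the LAN, and the addresses shown are hypothetical:

    class ArpResolver:
        def __init__(self, lan_hosts):
            self.lan_hosts = lan_hosts   # IP address -> MAC address of every host on the LAN
            self.cache = {}              # mappings learned from earlier replies

        def resolve(self, target_ip):
            if target_ip in self.cache:              # answer from the ARP cache if possible
                return self.cache[target_ip]
            mac = self.lan_hosts.get(target_ip)      # 'broadcast': only the owner of target_ip replies
            if mac is not None:
                self.cache[target_ip] = mac          # remember the unicast reply for later packets
            return mac

    lan = {"144.16.5.210": "00:1A:2B:3C:4D:5E"}
    print(ArpResolver(lan).resolve("144.16.5.210"))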
(Refer Slide Time: 27:32)

Now let us consider the second issue, that is packetizing. Packetizing means the data has to be sent in the form of a datagram or frame, and the Internet Protocol frames the data into a datagram. The datagram includes a header comprising a number of fields; apart from the data, IP adds this header. The header has a number of fields, as explained here.

(Refer Slide Time: 28:00)


(Refer Slide Time: 29:20)

For example, it has got the version field, which is represented by four bits; presently the IP protocol in use is IPv4, and if we use IPv6 then that has to be indicated in the version field. Then the header length field of 4 bits gives the length of the header expressed as the number of 32-bit words, so the minimum value is 5 and the maximum is 15. Then you have got the total length field, the length in bytes of the datagram including the header; the maximum datagram size is 65535 bytes. The service type field allows packets to be assigned a priority, so a router can use this field when routing packets; however, it is not universally used.
(Refer Slide Time: 30:08)

Then there are other fields such as time to live. It prevents a packet from travelling forever in a loop: the sender sets a value that is decremented at each hop, and if it reaches zero the packet is discarded. We have already discussed flooding, and flooding can be restricted by controlling the number of hops, so this field can be used for that purpose. Then there is the protocol field, which defines the higher level protocol that uses the services of the IP layer. Packets come from different higher level protocols, so IP has to do some kind of multiplexing with the help of this protocol field, as shown here.

(Refer Slide Time: 30:53)


The protocol field allows this multiplexing. Packets come from a number of upper layer protocols like ICMP, IGMP, TCP, UDP and OSPF; IP receives a packet from any of these and passes it to the next lower level. Obviously, when the datagram reaches the destination, you have to do the demultiplexing: by looking at the protocol field, IP delivers the payload to the proper protocol, ICMP, IGMP, TCP, UDP or OSPF. So, for the purpose of multiplexing and demultiplexing, this protocol field is necessary.
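
The demultiplexing is driven by the standard protocol numbers carried in the 8-bit protocol field; a minimal sketch of the lookup might be:

    PROTOCOL_NUMBERS = {1: "ICMP", 2: "IGMP", 6: "TCP", 17: "UDP", 89: "OSPF"}

    def demultiplex(protocol_field):
        # Hand the datagram's payload to the right upper-layer module.
        return PROTOCOL_NUMBERS.get(protocol_field, "unknown")

    print(demultiplex(6))    # prints 'TCP'
    print(demultiplex(17))   # prints 'UDP'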

(Refer Slide Time: 31:10)

Then the source IP address, which is 32 bits, and the destination IP address are needed, and the identification, flags and fragment offset fields are used for the purpose of fragmentation.
(Refer Slide Time: 31:17)

Let us consider fragmentation and reassembly and see how they are performed in the IP layer.

(Refer Slide Time: 33:25)

Why is fragmentation necessary?

As you know, each network imposes a limit on the maximum size of a packet, known as the maximum transfer unit or MTU, for various reasons. For example, if the packet size is very large then the probability of error in the packet increases, so to reduce the probability of error the size is restricted; moreover, different standards use different packet or frame sizes. As you have seen, IP allows about 64 kilobytes, while Ethernet uses 1500 bytes, so different standards use different packet sizes.

When you do internetworking, the packets originate from different networks and have to pass through different types of networks. Suppose a network receives a packet which is bigger than it can handle; how can this problem be tackled? There are two ways of handling it. One approach is to prevent the problem from occurring in the first place, that means the source takes care of it: the source is asked to send packets no larger than the smallest maximum size allowed by any network on the path from source to destination. This is one approach, which you may call prevention.

The second approach to deal with this problem is to use fragmentation. Here the gateway (router) is allowed to break a packet up into fragments and send each fragment as a separate IP packet. This is the second approach, which is commonly used in most cases, since it gives more flexibility.

(Refer Slide Time: 38:59)

However, while fragmentation is very easy, breaking anything is easy but putting it back together is very difficult, as even a child knows. A similar situation arises here: fragmentation can be done easily, but reassembly is the more difficult problem. Let us see how it can be done.

Reassembly is performed in IP with the help of three fields: identification, flags and fragmentation offset. In general, how can reassembly be done? It can be performed using two different approaches. The first is known as transparent fragmentation. In this case the entry gateway or router which receives the packet breaks it up as it passes through the network with the smaller maximum size, and when the fragments reach the exit gateway or exit router they are reassembled, so the packet comes out of the network as it was received; within the network it is fragmented, but as it comes out it is the original packet, reassembled by the exit router. The problem in this case is that each and every fragment has to be routed through the same exit gateway. Since the packet is reassembled as it passes through each network, the fragmentation is transparent to the source as well as the destination: the destination receives the packet as it was sent by the source. This is known as transparent fragmentation, and it is what the ATM network does; of course, in ATM, instead of fragmentation the term segmentation is used.

On the other hand, in case of the internet protocol, non-transparent fragmentation is done. Here, as the packet passes through different networks, it is broken down or fragmented depending on the maximum size allowed within each network. For example, the gateway where the packet enters a network fragments the original datagram into a number of smaller datagrams, and the exit router does not reassemble them, so the fragments can even take different paths. The sender sends one packet, and when it comes out of the first network three packets are going towards the destination; as they enter another network they are further fragmented, each packet into two, so six IP packets are generated by network two. Hence it is the responsibility of the destination node to do the reassembly; that is why it is called non-transparent fragmentation, and it is done by the IP protocol using three fields: identification, flags and the fragmentation offset field.

(Refer Slide Time: 37:20)

The identification field is a sixteen-bit field that identifies a datagram originating from the source host; this identification remains unaltered as the datagram is fragmented. There are three flag bits: the first flag bit is not used, the second bit is the 'do not fragment' bit and the last is the 'more fragments' bit. With the help of these two bits one can specify whether fragmentation is allowed and whether more fragments follow.

Then the fragmentation offset field specifies the position of the fragment's data relative to the original data. Here it is explained with the help of a packet of 4800 bytes which is divided into three fragments. As you can see, each fragment gets a header, and the data is divided into bytes 0 to 1599, 1600 to 3199 and 3200 to 4799 (Refer Slide Time: 37:58), so these are the three fragments. Now, as you have seen, the maximum size is about 64 kilobytes, which cannot be represented in the thirteen bits of the offset field; that is why the offset is expressed in units of eight bytes. So the offset 1600 is divided by 8 to get 200, and 200 is written in the offset field. Similarly, the third fragment starts 3200 bytes from the beginning, which is divided by 8 to get 400, and 400 is written, of course in binary, in the thirteen-bit fragmentation offset field. This is how the datagram is fragmented, and when these fragments reach the destination, the destination node will be able to do the reassembly by using this information; so IP uses the non-transparent fragmentation technique as I have explained here.
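
The offset arithmetic of the example can be reproduced directly; the function below is only a sketch (it splits the payload alone and ignores header copying and the 'do not fragment' bit):

    def fragment(total_bytes, fragment_size):
        # Each fragment carries its starting byte position divided by 8 in the 13-bit offset field.
        fragments, start = [], 0
        while start < total_bytes:
            length = min(fragment_size, total_bytes - start)
            fragments.append({
                "offset_field": start // 8,                     # byte offset expressed in 8-byte units
                "length": length,
                "more_fragments": start + length < total_bytes  # more-fragments flag
            })
            start += length
        return fragments

    for f in fragment(4800, 1600):
        print(f)      # offset fields printed: 0, 200 and 400, i.e. 0/8, 1600/8 and 3200/8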

(Refer Slide Time: 39:13)


(Refer Slide Time: 39:51)

There is also an options field, which can be used to provide more functionality to the IP datagram, and there is a checksum field of sixteen bits. The checksum is computed only over the header part: the header is treated as a sequence of 16-bit integers, the integers are all added using ones' complement arithmetic, and the ones' complement of the final sum is taken as the checksum. If the checksum does not match, the datagram is discarded.
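
The computation just described is the standard internet checksum and can be written in a few lines. The sample header bytes used below are a commonly quoted example rather than one from this lecture, and the checksum field is assumed to be zero while the sum is computed:

    def header_checksum(header_bytes):
        total = 0
        for i in range(0, len(header_bytes), 2):
            total += (header_bytes[i] << 8) | header_bytes[i + 1]   # read the header as 16-bit words
            total = (total & 0xFFFF) + (total >> 16)                # ones' complement (end-around carry) addition
        return (~total) & 0xFFFF                                    # ones' complement of the final sum

    sample = bytes.fromhex("4500003c1c46400040060000ac100a63ac100a0c")
    print(hex(header_checksum(sample)))   # prints 0xb1e6 for this sample header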

Now let us discuss ICMP, a companion protocol used along with the IP protocol. We have seen that IP is a best-effort, unreliable protocol. So if an error occurs, if a datagram gets corrupted or is not delivered, the source will not know about it, since IP has no error control mechanism and no flow control mechanism. ICMP stands for Internet Control Message Protocol, and it has been designed to compensate for these deficiencies: although IP is unreliable, the source should know what is happening to a particular packet, and that information is provided with the help of ICMP. ICMP can send a number of error-reporting messages and it can also perform queries.
(Refer Slide Time: 41:59)

It can send different types of messages, for example 'destination unreachable': a packet is sent and a router is not able to deliver it; in such a case, although the packet is not delivered, an ICMP packet is generated informing the source that the packet it sent has not reached the destination. Or 'time exceeded': a packet is discarded because its time to live has expired; in such a case also an ICMP message is sent to the source to inform it that a packet has been discarded because it was out of time.

Source quench: as you know, the IP protocol does not provide flow control or congestion control. The source quench message provides, to some extent, the functionality of flow or congestion control: whenever a particular packet is discarded because of congestion, a source quench message is sent to the source, informing it that the packet has been discarded and, moreover, that congestion still exists in the network.

Parameter problem: whenever a datagram arrives with an incorrect field, a parameter problem message is sent to inform the source.
(Refer Slide Time: 44:17)

Redirect message: this is necessary whenever a router receives a packet that should not have been sent to that particular router. The router redirects the packet towards the proper next hop, and at the same time it informs the source about the redirection. These are the various error-reporting messages supported by ICMP.

Also, different kinds of queries can be performed. For example, in the case of echo
request and reply, a station can send a request to another station to know
whether that station is alive, whether it is working or not. Similarly, in the case of
timestamp request and reply, a source can send a timestamped message to the destination, and
the destination puts its own timestamp in the reply. In this way the round-trip delay can be
estimated, or the exchange can be used to synchronize two clocks, the source clock and the
destination clock. So this can be used for synchronization of two clocks or for measurement of delay.
(Refer Slide Time: 44:36)

Then we have the address mask request. A particular station may not know its subnet mask,
so it can query for it and get a reply. For these purposes there are a number of message
formats, which are shown here, for example destination unreachable, time exceeded,
source quench, timestamp request, timestamp reply and so on.

(Refer Slide Time: 44:42)


(Refer Slide Time: 46:06)

Then comes the question of routing. As you know, there are three different types of
routing, namely unicasting, multicasting and broadcasting. Unicasting is one-to-one
communication, and there are two different types of unicast routing protocols: interior
protocols and exterior protocols. As you know, the network is divided into a number of
autonomous systems. Routing within an autonomous system can be performed with the help of
two protocols, the Routing Information Protocol (RIP) and the Open Shortest Path First
protocol (OSPF). The Routing Information Protocol is based on distance vector routing and
the Open Shortest Path First protocol is based on link state routing, both of which we have
already discussed in detail while discussing routing techniques.

Whenever routing has to be done outside an autonomous system, that is, inter-autonomous-system
routing, a different type of protocol is required, for example the Border Gateway Protocol
(BGP), which is based on path vector routing. These are the different routing protocols used
with IP.
(Refer Slide Time: 47:32)

However, IP also has to support multicasting. As we know, class D provides more
than 250 million addresses for the purpose of multicasting. Multicasting is used to
send a message to a group of users, and since it is supported by IP, suitable protocols
are needed for it. To keep track of the different groups a special protocol is used, known as
IGMP, the Internet Group Management Protocol. Do not confuse it with a routing protocol:
IGMP is not a multicast routing protocol, but it helps multicast routing. It has been designed
to help a multicast router identify the hosts in a LAN environment that belong to a
multicast group.

IGMP uses three types of messages, namely the query message, the membership report and the
leave report, and it operates in a somewhat similar manner to ICMP. We have already discussed
how ICMP works; in a somewhat similar manner IGMP works and supports the function of
multicasting.
(Refer Slide Time: 49:55)

We have discussed the different types of LANs and WANs which are used in
internetworking. Now, as I said, suppose a home user wants to use TCP/IP; how can he do
it? It can be done over a serial link by using Serial Line IP, or SLIP.
In this case a station sends raw IP packets over the serial line, with a special flag byte (0xC0)
marking the end of each packet, and if this byte appears inside the IP packet itself then a
suitable escaping technique has to be used so that the data is never mistaken for the flag.
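
As an illustration, here is a small Python sketch of the byte-stuffing scheme commonly used with SLIP (the END and ESC values follow RFC 1055); it only shows the idea of escaping the flag byte:

END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_encode(packet: bytes) -> bytes:
    # If the END (0xC0) or ESC (0xDB) byte occurs inside the IP packet,
    # replace it with a two-byte escape sequence so the receiver never
    # mistakes data for the end-of-packet flag.
    out = bytearray()
    for b in packet:
        if b == END:
            out += bytes([ESC, ESC_END])
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])
        else:
            out.append(b)
    out.append(END)                 # frame delimiter
    return bytes(out)

print(slip_encode(b'\x45\xc0\x01').hex())   # the 0xC0 in the data becomes db dc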

However, SLIP has a number of limitations. It does not perform error detection and
correction, so error detection and correction are not supported by the SLIP protocol. The IP
addresses should be known in advance, that is, the source and destination addresses must be
configured beforehand. It supports only the IP protocol, so other protocols used in
internetworking are not supported, and it does not provide any authentication.

Moreover, there is no standard for SLIP; different manufacturers have developed different
versions of SLIP which are not compatible with each other. These are the limitations, and
how SLIP is used is shown here. Suppose this is the home user, who uses a dial-up
telephone line (Refer Slide Time: 49:23) to get internet
service from an internet service provider. With the help of modems a serial link is set up,
and then the SLIP protocol is used to carry IP packets between the host and the router in
the internet service provider's premises.
(Refer Slide Time: 50:50)

Therefore, to overcome the limitations of SLIP, a protocol has been introduced known as
PPP, the Point to Point Protocol. It performs error detection, it supports multiple
protocols, it allows IP addresses to be negotiated (that means the IP addresses need not be
known beforehand) and it permits authentication, which makes it more reliable than SLIP. The
PPP protocol uses HDLC framing. Just as in SLIP, where special handling was needed when the
flag byte appeared in the IP packet, here also byte or bit stuffing is done whenever the flag
pattern appears in the data field.

(Refer Slide Time: 51:58)


This is the state diagram for the PPP protocol. Apart from using HDLC framing, it sets up
the link with the help of the Link Control Protocol (LCP), which is used for bringing the link
up, testing it, negotiating options and also for bringing it down.

As you can see, initially the link is in the idle state; when a carrier is detected the link
moves towards the established state, and if this fails it comes back to idle. Then
authentication is performed, and if that fails the link goes to the terminate state. If the
authentication is successful, the link moves on to the networking phase, using the Network
Control Protocol, which is used for configuration and to allow communication between the
two devices.

(Refer Slide Time: 52:57)

For that purpose, different packets such as LCP, PAP and CHAP packets are encapsulated
in the HDLC frame. The protocol field specifies what type of packet it is,
such as LCP, PAP or CHAP. LCP stands for Link Control Protocol, PAP stands for
Password Authentication Protocol and CHAP stands for Challenge Handshake
Authentication Protocol. The payload carries different codes for different functions, an
ID field whose value is used to match a request with its reply, and any extra information
needed for some types of packets. This is how the different packets are carried in the payload
of the HDLC frame.
(Refer Slide Time: 54:07)

I think our discussion would not be complete if I did not mention IPv6. We have seen
that IPv4, which is now very popular, has a number of limitations which are overcome in
IPv6. These are the key features of IPv6.

It uses a 128-bit address instead of a 32-bit address, since 32-bit addresses are gradually
becoming insufficient. It uses a more flexible header format compared to IPv4, organized in
two levels: a basic header followed by extension headers. It provides resource allocation
options which were not present in IPv4, there is provision for new or future protocol
options, and it supports security, with a facility for authentication. Also, in IPv6
fragmentation is done at the source instead of using the non-transparent fragmentation
performed by intermediate routers. These are the key features of IPv6. Now it is time to give
you the review questions.
(Refer Slide Time: 54:41)

1) Why do you need ARP protocol?


2) What is the purpose of dotted decimal representation? Give dotted decimal
representation of the IP address given here.
3) How is masking related to subnetting?
4) What limitations of the SLIP protocol are overcome in the PPP protocol?
5) What are the key features of IPv6?

(Refer Slide Time: 55:10)

Now it is time to give you the answer to the questions of lecture – 33.
1) Why do you need internetworking?

As stations connected to different LANs and WANs want to communicate with each
other, it is necessary to provide this facility. Internetworking creates a single virtual
network over which all stations in the different networks can communicate seamlessly and
transparently; that is the purpose of internetworking.

(Refer Slide Time: 55:30)

2) Why a repeater is called level - 1 relay?

A repeater operates in the physical layer. Data received on one of its ports is relayed to the
remaining ports bit by bit without looking at the contents; that is why a repeater is called a
level-1 relay.
(Refer Slide Time: 56:02)

3) What is a bridge? How it operates in the internetworking scenario?


A bridge operates in the data link layer. It looks into various fields of a frame to take
various actions; for example, it looks at the destination address field so that it can forward
the frame to the port where the destination station is connected, and it also looks at the frame
check sequence field to check for errors in the received frame, if any. A bridge helps to create
a network having different collision domains, as we have discussed in detail.

(Refer Slide Time: 56:28)

4) What limitation of the transparent bridge protocol is overcome by the source routing
protocol?

The transparent bridge protocol uses the spanning tree algorithm, in which a unique path is
used for communication between two stations. As a consequence it does not make use of the
other paths, leading to lower utilization of the network resources. This problem is overcome
in the source routing algorithm.

(Refer Slide Time: 56:52)

5) What limitations of a bridge are overcome by a router?

A router overcomes the following limitations of a bridge: it can link two dissimilar
networks, it can route data selectively and efficiently, it can enforce security, and it is not
vulnerable to broadcast storms. These are the advantages of a router over a bridge. With this
we come to the end of today's lecture; there will be another lecture on TCP/IP where I shall
mainly focus on TCP. Thank you.
Data Communication
Prof. A. Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture - 35
TCP/IP - II

Hello viewers, we shall continue our discussion on TCP/IP.

(Refer Slide Time: 00:59)

In the last lecture we have covered some of the issues of TCP/IP particularly the network
layer protocols and some lower layer protocols. In this lecture we shall consider the
transport layer protocols and also some of the application layer protocols. Here is the
outline of today’s talk.

First I shall give a brief introduction which will put in perspective the need for
UDP and TCP, that is, the transport layer protocols. We shall then introduce two
transport layer protocols: one is the User Datagram Protocol, known as UDP in short, and
the other is the Transmission Control Protocol, TCP. As we shall see, UDP is a connectionless,
unreliable datagram service. On the other hand, TCP is a connection-oriented service and
it provides reliability with the help of flow control, error control and congestion
control, which I shall discuss in detail in this lecture.
(Refer Slide Time: 02:05)

Then we shall see how communication takes place using TCP/IP based on the client-server
paradigm, and we shall briefly discuss some applications which work on the client-server
paradigm, such as the domain name system, remote login and mail transfer.

On completion, the students will be able to explain how UDP allows two applications
running at remote locations to communicate. They will also be able to state the
limitations of UDP: because it is not connection oriented and it is unreliable, it has some
limitations, and the students will be able to state and identify them.
Then they will be able to explain how TCP provides connection-oriented service, the
mechanism behind it, and how TCP incorporates reliability in internet communication.
(Refer Slide Time: 03:20)

They will also be able to explain how DNS, the domain name system, allows the use of
symbolic names instead of IP addresses, which we introduced in detail in the last
lecture. The students will also be able to explain the use of the client-server model for remote
login, mail transfer and file transfer; this client-server model is the basis of any
distributed application. Here is the introduction, to put you in perspective.

(Refer Slide Time: 4:01)

If you look from the user's point of view, you will find that a TCP/IP based internet is nothing
but a set of application programs that use the internet to carry out useful communication
tasks. So a number of applications are running, and how they use the underlying
network through TCP/IP is one view, the user's view. The most popular internet applications
that we use in our day-to-day life are electronic mail, file transfer and remote login. How
these applications run using TCP/IP is what will be introduced in this lecture.

We have seen that the internet protocol allows transfer of IP datagrams among a number of
stations, where the datagram is routed through the internet based on the IP address of the
destination. We saw in the last lecture that the internet protocol allows communication
between two computers, or two hosts, with the help of IP addresses. However,
an additional mechanism is needed to facilitate communication between multiple application
programs, which are known as processes.

What is a process? A process is nothing but a program in execution. So an application


program in execution is called a process, and the need for the transport layer arises from the
question of how two processes on different stations can communicate with each other simultaneously.

(Refer Slide Time: 05:43)

That means a particular host may be running a number of processes. As we know, in a


multi-user environment multiple processes will be running simultaneously, and how
multiple processes can communicate simultaneously through the internet using the transport
layer protocols is what will be discussed now.
(Refer Slide Time: 06:50)

We have already discussed the relationship of the TCP/IP model with the OSI model, and
in the last lecture we introduced and discussed some aspects of the lower part, that is, the
network layer protocol, the internet protocol, along with the companion
protocols ARP, RARP, ICMP and IGMP, and we briefly mentioned SLIP and PPP, which
were not covered earlier.

In this lecture we shall discuss first UDP, then TCP, and we shall also discuss some of
the applications, because ultimately the users are interested in applications and TCP/IP is
the underlying protocol used for that purpose. So first let us
focus on the simplest of the protocols, the User Datagram Protocol, or UDP in short.
(Refer Slide Time: 07:18)

UDP provides the primary mechanism that application programs use to send datagrams to
other application programs. Suppose an application program is running in host A; it will
hand its message to one of the two transport layer protocols, either TCP or UDP, but in this
particular case we are considering UDP. UDP puts a header on the message to form a segment,
which goes to the IP layer; the IP layer converts it into a datagram by adding its own
header, and the datagram goes to the data link layer, which makes it into a frame by adding
a separate header and a trailer. That frame is transmitted bit by bit through the
physical layer, travels through the internet, and as it reaches the destination host it moves
up through the data link layer and the IP layer, the headers and trailers are removed, and
ultimately the message is delivered to the application on that host.

(Refer Slide Time: 08:27)


This is how communication takes place with the help of UDP. As I was mentioning,
multiple processes can be running in a single host, and that requires what is known as
multiplexing and demultiplexing. So multiple processes will be running, and they
communicate using a concept known as the port mechanism.

(Refer Slide Time: 08:52)

So UDP is responsible for differentiating among multiple sources and destinations within
one host. What happens is that you have a single host on which multiple applications are
running, and each application is associated with a port number; each application transfers
its messages through its own port to UDP, and the UDP segment is then passed on to IP.
What we are doing here is known as multiplexing.
Similarly, datagrams arriving at the IP layer are forwarded to UDP, and UDP identifies the
different applications based on port numbers, performs the demultiplexing, and delivers each
message to the respective application according to its port number.
So here we are using another kind of address, the port number.
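
To make the port mechanism concrete, here is a small hedged example using Python's standard socket API; the port number 5005 and the loopback address are arbitrary choices for the illustration:

import socket

# One process binds to UDP port 5005 on this host; the destination port in
# each incoming datagram is what selects this process (demultiplexing).
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 5005))

# Another process sends a datagram to that port; its own (source) port
# identifies it to the receiver (multiplexing).
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", 5005))

data, (src_ip, src_port) = receiver.recvfrom(1024)
print(data, src_ip, src_port)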

We have already discussed two types of addresses. The physical addresses are used by the
data link layer; for example, in local area networks the physical address is the 48-bit
Ethernet address.

The internet address, that is, the IP address, is used for host-to-host delivery and is a 32-bit
address. Now, in addition, we have port addresses, which identify processes, so they are
used for process-to-process communication and are 16 bits long. Thus we have three
different types of addresses used in TCP/IP: the physical address, the internet
address and the port address.

(Refer Slide Time: 10:45)

Here is the UDP datagram format. As you can see, it has a source port and a destination
port. The source port defines the port number of the application program in the host of
the sender; the sender may be running a number of applications, and this particular
port number tells you from which application the datagram has come.

Similarly, the destination port number defines the port number of the application program
in the host of the receiver. So with the help of the destination port we do the
demultiplexing, and with the help of the source port we do the multiplexing. The
length field provides a count of octets in the UDP datagram, and as you can see the
minimum length is 8 octets, which is just the UDP header.
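
As a small illustration (a sketch only), the 8-octet UDP header described above can be constructed with Python's struct module; the checksum is set to 0 here, meaning it is not used:

import struct

def udp_header(src_port, dst_port, payload_len, checksum=0):
    # Four 16-bit fields in network byte order:
    # source port, destination port, length (header + data), checksum.
    length = 8 + payload_len
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

print(udp_header(49152, 53, 24).hex())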
(Refer Slide Time: 11:58)

As I mentioned, UDP is an unreliable protocol, and error detection through the checksum is
optional in UDP. Whenever the checksum is not used, the checksum field is set to 0; of course
there are situations where it is used, and in such cases it is non-zero. As I mentioned, the
transport layer addresses are specified by 16-bit port numbers. These port numbers are
administered by an agency known as the Internet Assigned Numbers Authority (IANA), and the
address space is divided into three categories (Refer Slide Time: 12:59). The range 0 to 1023
is known as the well-known ports, and these well-known ports are controlled by the Internet
Assigned Numbers Authority, IANA.

The range 1024 to 49151 contains the registered ports. These are not well-known
ports, but to use any of these addresses one needs to register with IANA. Finally, the
remaining numbers, from 49152 to 65535 (the maximum value that can be represented with a
16-bit address), can be dynamically assigned by users as and when needed.
(Refer Slide Time: 14:15)

For these dynamic ports it is not necessary to register with IANA, and it is not necessary to
use the well-known ports; the user can use them dynamically in different situations. These are
the three categories of port addresses that are commonly used. Here are the well-known
ports used by UDP.

(Refer Slide Time: 15:20)

Some port numbers are reserved as well-known ports. For example, port
number 7 is used for the echo service, with which one can check whether a remote host is
responding, and there are other applications like SNMP, the Simple Network Management
Protocol, which has its own assigned port number.

These are the common well-known ports used by UDP for different
applications. Let us now focus on the characteristics of UDP. We have seen that UDP
provides an unreliable, connectionless delivery service, using IP to transport messages
between two processes. Because it is unreliable and connectionless, messages can be lost,
duplicated, delayed and delivered out of order.

(Refer Slide Time: 16:03)

If a packet is lost, the sender will not know about it. If a packet is damaged, the receiver
will not know that a datagram has been received in error, that it has become
corrupted. Similarly, datagrams can pass through different routes and can arrive out
of order, and at the receiving end they have to be put back in order. These are the limitations
of the user datagram protocol.

However, UDP is a very thin protocol. As we have seen, it has only an 8-octet header; as
a consequence, its overhead is much less. On the other hand, it does not add significantly
to the functionality of IP. We have seen that IP also provides a connectionless service,
but from host to host; the only additional feature that UDP adds is process-to-process
delivery rather than host-to-host delivery. Also, it cannot provide a reliable stream
transport service, which is needed in many applications. For many applications reliability,
or reliable communication, is the key requirement, so in such cases we have to use another
protocol, known as the Transmission Control Protocol, or TCP.

TCP provides a connection-oriented, full-duplex, reliable stream delivery service, using
IP to transport messages between two processes. In both cases IP is used to
transport messages between two hosts, and then with the help of the port address the
communication is between two processes. The reliability is ensured by the
connection-oriented service and by the flow control used in TCP, which uses the sliding
window protocol; we know that by using the sliding window protocol we can do flow
control, and that is what is done here. TCP does error detection using a checksum and also
performs error control using the go-back-N ARQ technique.

(Refer Slide Time: 18:45)

Moreover, it also performs congestion control by using congestion avoidance algorithms


such as multiplicative decrease and slow start, which we shall discuss later on. Now let us
have a look at the TCP segment format. Obviously the TCP header is not very thin; it has 20
octets. Apart from the source port and destination port addresses, each of which is 16 bits,
there are additional fields such as the sequence number, the acknowledgment number, the data
offset, reserved bits and six flag bits; there is a window field which is used for flow control,
a checksum which is used for error detection, an urgent pointer which is used
in some situations, and then the optional field plus padding.
(Refer Slide Time: 19:45)

So, apart from the header, you have the optional field and padding, and then comes the
data. Let us look at the function of these different fields.
 Source port, as usual, defines the port number of the application program in the
host of the sender.
 Destination port defines the port number of the application program in the host of
the receiver.
 Sequence number tells the receiving host which octet in the stream is the first byte
of this segment. The counting is done in terms of octets, or bytes, so the sequence
number specifies the byte number of the octet with which the segment starts.
(Refer Slide Time: 20:52)

Acknowledgment number specifies the sequence number of the next octet that the receiver
expects to receive; in both cases 32-bit fields are used.
 Header length is a 4-bit field. It specifies the number of 32-bit words present in
the TCP header, so the header length is expressed in units of 32-bit words; the
minimum is 5 words, that is 20 octets, and it grows when options are present.

Then, as I mentioned, there are several control flag bits. The first one is:
 URG indicates that the urgent pointer field is valid; the urgent pointer is used in
some specific situations.
 The acknowledgement flag bit (ACK) indicates whether the acknowledgment field is valid.
If this bit is not 1, the acknowledgement field is invalid, that is, not used.
 PSH stands for push: deliver the data without buffering, a special situation where
the data has to be pushed without waiting to fill a buffer.
 RST resets the connection. As we shall see, this flag bit is used during connection
establishment when a connection has to be refused.
 SYN synchronizes sequence numbers during connection establishment, so the SYN flag
bit is used at the time of connection establishment.
 FIN stands for finished, and is used for terminating a connection.
 The 16-bit window field specifies the size of the window, that is, the buffer space
available, and a 16-bit checksum is used.
(Refer Slide Time: 22:53)

As I have already discussed, the checksum is calculated by adding the 16-bit words using
ones complement arithmetic and then complementing the result. The urgent pointer is used
only when the URG flag is valid. Finally there are the options: up to 40 bytes of optional
information can be provided if necessary. TCP also has a number of well-known ports.

As you can see, port number 7 is used for echo and 9 for discard, and there are several other
applications: TELNET, which is used for remote login, has well-known port 23; SMTP, the
Simple Mail Transfer Protocol, which is used for mail transfer, has well-known port number 25;
the domain name server has well-known port number 53; and HTTP, the Hypertext Transfer
Protocol, has well-known port number 80, and so on. The important applications used over
TCP/IP are provided with well-known port numbers.
(Refer Slide Time: 23:50)

Now let us focus on the connection-oriented service. As I mentioned, TCP establishes
a virtual path between the source and destination processes before any data is
exchanged. There are two phases involved: connection establishment, which is
used to start the connection reliably, and connection termination, with the help of
which the connection is terminated gracefully. As I mentioned, communication is full-duplex,
and for connection establishment a three-way handshake protocol is used. Let us first
discuss that.

(Refer Slide Time: 25:00)


To establish a connection, as I mentioned, the SYN flag bit is set and the sequence
number is set to some value X, and this segment is sent to the host on the other side (Refer
Slide Time: 25:24). In response, the host on the other side sends an acknowledgment; that
acknowledgment has the SYN bit set, carries its own sequence number Y, and has
acknowledgment number X plus 1.

Here you can see it uses one sequence number X, which means the next segment that is
expected will start with byte (octet) X plus 1; this happens when the other host is ready
for exchanging data. Whenever it is not ready, for example when the port is busy with
some other application, the connection cannot be established; in such a case, instead of a
SYN segment, a segment with the RST flag set is sent from HOST 2 to HOST 1, meaning the
connection cannot be established.

(Refer Slide Time: 26:45)

However, when HOST 2 is ready to establish the connection, the SYN plus acknowledgment
segment is sent back, and in response HOST 1 sends an acknowledgment segment with sequence
number X plus 1 (earlier it was X) and acknowledgment number Y plus 1. The sequence number
on the other side was Y, so one sequence number is consumed for the purpose of establishing
synchronization, and acknowledgment number Y plus 1 means that HOST 1 is also ready to
receive data starting with byte Y plus 1. This is how the three-way handshake protocol is
used to establish a connection.

For connection termination, a four-way handshake protocol is used, since the connection


has to be terminated in both directions. Although TCP is a full-duplex protocol, for
termination purposes we may consider the connection as two simplex (or half-duplex)
connections, which means the two directions can be terminated independently.

So, for example, if HOST 1 wants to terminate the connection used for sending data from its
end to the other end, it sends a segment with the FIN (finished) flag bit set and sequence
number X, and on receiving this, HOST 2 sends an acknowledgment segment with ACK equal to
X plus 1. When this is received, the connection used for sending data from HOST 1 to HOST 2
is terminated. However, data transfer in the other direction, from HOST 2 to HOST 1, can
continue, and if HOST 2 wants to terminate that as well, HOST 2 in turn sends another segment
with the FIN flag bit set, sequence number Y and acknowledgment X plus 1, and on receiving
it HOST 1 sends an acknowledgment segment with ACK equal to Y plus 1.

Now, whenever both ends want to terminate the connection at the same time, two of these
segments can be combined into a single one: both the acknowledgement flag bit and the FIN
flag bit are set, and the acknowledgment field carries X plus 1. Instead of two separate
segments, only one message goes from HOST 2 to HOST 1, and in that way it becomes a
three-way exchange when both directions are to be terminated simultaneously.

Now let us focus on the reliability.

(Refer Slide Time: 31:17)

As I mentioned, reliability is ensured by flow control, error control and congestion
control. Flow control is done with the help of a buffer at the receiver. Suppose the sender
sends 4 KB of data starting with sequence number 0. As it reaches the receiver, 4 KB of the
receiver's buffer gets filled up (Refer Slide Time: 30:41), and the receiver sends an
acknowledgment with acknowledgment number 4096, which is the sequence number of the next
byte it expects, and window size 4096, because 4 KB of the buffer is still free.

Now if the sender has 3 KB of data to send, it can send it starting with sequence
number 4096, and as it reaches the destination, 4 plus 3, that is 7 KB, of the buffer gets
filled. The receiver now sends an acknowledgment with acknowledgment field 7168, the next
expected sequence number, and window size 1 KB, because that is the buffer space available at
the receiving end.

If the sender has more data to send, it cannot send more than 1 KB now, because no more
buffer is available. The application program at the receiver can then read the data and
transfer it, say, to secondary storage, freeing the buffer. Suppose the application reads
4 KB; then 5 KB of the buffer becomes empty, so the window size becomes 5120, and an
acknowledgment segment announcing this is sent to the sender. On receiving it, the sender can
send another 4 KB segment starting with sequence number 7168. This is how flow control is
performed: the available buffer size is communicated by the receiver in every
acknowledgement segment it sends to the sender.
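
The example above can be turned into a rough simulation (an illustration only, assuming an 8 KB receiver buffer as in the figure):

BUF = 8 * 1024        # assumed receiver buffer size
buffered = 0          # bytes currently held in the receiver's buffer
next_byte = 0         # next sequence number expected

def receive(n):
    # Receiver accepts n bytes and advertises the next expected byte
    # and the remaining buffer space (the window).
    global buffered, next_byte
    buffered += n
    next_byte += n
    print(f"ACK = {next_byte}, window = {BUF - buffered}")

def application_reads(n):
    global buffered
    buffered -= n
    print(f"application read {n} bytes, window = {BUF - buffered}")

receive(4096)            # sender transmits 4 KB -> ACK = 4096, window = 4096
receive(3072)            # sender transmits 3 KB -> ACK = 7168, window = 1024
application_reads(4096)  # buffer drains         -> window = 5120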

Let’s see how error control is performed.

(Refer Slide Time: 32:55)

To perform error control, a number of timers are used. There are four timers:

 Retransmission timer
 Persistence timer
 Keep-alive timer
 Time-waited timer

The retransmission timer is started whenever a segment is sent. If the segment is lost and
the timer expires before an acknowledgement arrives, the segment is retransmitted. So the
retransmission timer takes care of retransmission whenever a segment is lost.

The persistence timer is used to avoid a kind of deadlock. Consider the situation where the
available buffer is small, say only 1 KB; a segment has been sent and the corresponding
acknowledgment has been returned by the receiver. The sender is now waiting for a further
acknowledgment advertising a larger window, and that acknowledgment is sent only when the
buffer is emptied, say with window size 5120.

Suppose this particular acknowledgment is lost. The sender will keep waiting for the
acknowledgment and the receiver will keep waiting for data, which leads to a kind of
deadlock. This deadlock is avoided with the help of the persistence timer: whenever the
persistence timer expires, the sender sends a small probe segment to the receiver, and on
receiving it the receiver sends the window acknowledgment again, so that data communication
can resume.

The keep-alive timer is used in situations where no communication is taking place for a
long time. Suppose the sender has no data to send for a long time; when data communication
does not take place for a long time, one side may suspect that the other has been turned
off. To find out, a keep-alive segment is sent, and if the other side responds, the
connection is continued; otherwise the connection is terminated.

The time-waited timer is used at the time of termination. When termination is requested it


is acted upon immediately, but the connection is kept open for about two round-trip times
(Refer Slide Time: 36:36); only after this timer expires is the connection finally shut off.

Now let us focus on the congestion avoidance algorithm. Congestion is avoided by using
two mechanisms: multiplicative decrease and slow start.
There are two reasons why a sender must limit its rate: the limited capacity of the receiver
and the limited capacity of the network.

As we have seen, the receiver capacity is advertised with the help of the window field; the
window size conveys the receiver capacity. The network capacity, however, is not conveyed by
this window, and for that purpose a separate window known as the congestion window is
maintained, and the minimum of the two is used for sending data. Suppose initially the
segment size allowed is 1 KB; then one segment is sent, and if no timeout occurs two segments
are sent, then four segments, then eight, so the congestion window increases exponentially
until a threshold is reached.

After that threshold the window increases linearly, that is, it is incremented by 1 KB at a
time. So initially it is 1 KB, then 2 KB, then 4 KB and so on, exponentially, and after the
threshold is crossed it increases linearly. Whenever a timeout occurs, the threshold is set to
half of the current congestion window, the window is brought back down, and slow start is
performed again, followed once more by the linear increase. This is how congestion
avoidance is performed with the help of multiplicative decrease and slow start.
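
Here is a simplified Python sketch of this behaviour (an illustration only; real TCP implementations differ in details such as exactly when and by how much the window grows):

def simulate(rounds, timeout_at, cwnd=1, ssthresh=16):
    # cwnd and ssthresh are in KB-sized segments; timeout_at is the round
    # in which a timeout is assumed to occur.
    for t in range(rounds):
        if t == timeout_at:                 # timeout: multiplicative decrease
            ssthresh = max(cwnd // 2, 1)
            cwnd = 1
        print(f"round {t}: cwnd = {cwnd} KB (ssthresh = {ssthresh})")
        if cwnd < ssthresh:
            cwnd *= 2                       # slow start: exponential increase
        else:
            cwnd += 1                       # congestion avoidance: linear increase

simulate(rounds=12, timeout_at=8)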

(Refer Slide Time: 39:03)

There is another issue here: how do you decide the retransmission time?

In the internet, the round-trip delay varies over a very wide range; within a LAN the delay
can be very small, but across the internet it can vary enormously. So if you choose a small
retransmission time T1, there will be many unnecessary retransmissions; on the other hand, if
you choose a large retransmission time T2, you unnecessarily waste time waiting before
retransmitting.

What is done instead is adaptive: an initial value is chosen, the round-trip delay is
measured continuously, and a running estimate is maintained. The retransmission timer is then
set based on the current estimate of the round-trip time, so it automatically adapts as the
measured delay increases or decreases.
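
One common way of maintaining such a running estimate is an exponentially weighted moving average, similar in spirit to what TCP implementations do; the following Python sketch is only an illustration, and the weight and the margin used for the timeout are assumed values:

def update_rtt(srtt, sample, alpha=0.875):
    # Smoothed round-trip time: mostly the old estimate, partly the new sample.
    return alpha * srtt + (1 - alpha) * sample

srtt = 100.0                      # initial estimate in milliseconds (assumed)
for sample in [90, 120, 300, 110]:
    srtt = update_rtt(srtt, sample)
    rto = 2 * srtt                # retransmission timeout: a margin above the estimate
    print(f"sample={sample} ms  smoothed RTT={srtt:.1f} ms  timeout={rto:.1f} ms")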
(Refer Slide Time: 41:11)

We have discussed how flow control and congestion control are performed. Now let us focus
on the client-server paradigm that is used for various applications.

(Refer Slide Time: 41:35)

As we shall see, the way application programs communicate with each other is based
on the client-server model; in fact, this provides the foundation on which distributed
applications are developed. In the present-day context many algorithms are distributed,
and they work based on the client-server model, or client-server paradigm. So let us focus
on the client-server paradigm and see how exactly it works.
There are two systems: one is the client and the other is the server. The client runs a
process which formulates a request, sends it to the server and then awaits the response.

On the other side, the server process awaits a request; it is not simply sleeping, it is
waiting for a request at a well-known port that has been reserved for the particular
application, and after receiving a request it sends the response. This is how it works, and
the communication takes place through the internet based on TCP/IP. Of course, as I
mentioned, the communication is full-duplex; both client and server can send to each other.

(Refer Slide Time: 43:25)

To do that, two identifiers are needed: the IP address and the port address. A pair of them
is necessary for process-to-process communication; the IP address alone cannot serve the
purpose, and in addition to the IP address the port address is needed. These two together are
known as a socket address. So a socket address has two components, the IP address and the
port address, and a pair of socket addresses, one at the client and the other at the server,
is needed by the transport layer protocol. That means at the client side you have one socket
address and at the server side another socket address, and these allow multiplexing and
demultiplexing by the transport layer.
(Refer Slide Time: 44:55)

That is, the socket address is used for multiplexing and demultiplexing at the transport
layer, with the IP address provided by the IP header and the port address provided by the
TCP or UDP header.
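
To make the paradigm concrete, here is a minimal client-server sketch using Python sockets (an illustration only; the port number 9000 and the loopback address are arbitrary choices):

import socket, threading

srv = socket.socket()
srv.bind(("127.0.0.1", 9000))      # the server's well-known port (arbitrary here)
srv.listen(1)

def serve():
    conn, client_addr = srv.accept()          # client_addr is the client's socket address
    request = conn.recv(1024)                 # await the request
    conn.sendall(b"response to " + request)   # send the response
    conn.close()

threading.Thread(target=serve, daemon=True).start()

cli = socket.socket()
cli.connect(("127.0.0.1", 9000))   # (IP address, port) = the server's socket address
cli.sendall(b"request")
print(cli.recv(1024))
cli.close()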

Now let us focus on some applications which work using the client-server paradigm. One very
important application protocol is the domain name system.

(Refer Slide Time: 46:25)


We have seen that IP addresses are a convenient and compact way of identifying
machines and are fundamental to TCP/IP. IP addresses are used for communication
between two hosts, but they are unsuitable for human users: a human cannot really
remember a long sequence of zeros and ones, and although the dotted decimal notation
makes it slightly more user friendly, it is not enough. So it is necessary to use
high-level symbolic names for the convenience of humans.

The domain name system permits users to use symbolic names, but the underlying network
protocols require addresses; that is, the TCP/IP protocols and the machines require IP
addresses while the users prefer symbolic names, so the gap has to be bridged, and that is
done by the domain name system. This requires a suitable naming syntax and also an efficient
translation mechanism.

There are two ways of organizing names: a hierarchical namespace and a flat namespace. DNS
uses a hierarchical naming system, and we shall see how that is done; before that, let us
briefly discuss the flat namespace.

(Refer Slide Time: 47:35)

In the flat namespace, each machine is given a unique name assigned by the NIC, the
Network Information Centre, and a special file is used to keep the name-to-address mapping;
every host must know the current mapping for all other hosts with which it wants to
communicate. This requires a large mapping file if communication with a large number of
machines is required.

Particularly in the present-day internet, the number of machines communicating with each
other is very large, so this would require a huge mapping file. It is not really a good scheme
for communicating with arbitrary machines over a large network such as the internet, so the
flat namespace which was used earlier is no longer suitable, and for that reason a
hierarchical namespace has been adopted, which breaks the complete namespace into domains.
Instead of a single flat set of unique names, the namespace is divided into a number of
domains, and domains are broken up recursively into one or more subdomains, each of which is
again a domain.

(Refer Slide Time: 48:40)

Further division, to create any level of hierarchy, is provided through the namespace tree,
and the task of name allocation and resolution for different parts of the tree is delegated to
distributed name servers. The translation from name to address and from address to name is
performed in a distributed manner. This is how domain names are organized hierarchically. As
you can see, there are two flavors of top-level domains: generic domains and country domains.

(Refer Slide Time: 49:22)


Among the generic domains, com is assigned to commercial organizations, edu to educational
institutions, org to other organizations like the IEEE, gov to government organizations and
net to networking companies.

On the other hand, AU for Australia, IN for India and US for the USA are based on countries.
So you have the root domain, and under the root domain there are several subdomains, which
can be generic or country based. Under each subdomain there are again several subdomains.
For example, under the India (in) domain you have ernet, vsnl and so on; under ernet you can
have the different IITs like IITM, IITK, IITKGP, IITB and so on; and under IITKGP you have
CC, CSE, mechanical and so on.

So it starts with a root node, then you have the next layer of domains, then another layer,
and in this way it can continue. Ultimately a name can be, for example, cse.iitkgp.ernet.in,
and a user connected to this server can be addressed, for example, as
apal@cse.iitkgp.ernet.in; this forms the complete name for sending email or for other
purposes.
(Refer Slide Time: 51:03)

The translation of a domain name into an equivalent IP address is called name resolution. It
is very efficient because distributed databases are used: the records are stored in different
servers, some in the in server, some in the ernet server, some in the IIT KGP server, some in
the cse server, and so on.

(Refer Slide Time: 51:40)

The mapping is performed by recursive query resolution. Suppose a client in
cse.iitkgp.ernet.in wants to resolve a name in another domain. It sends the query to its
local server, cse.iitkgp.ernet.in; if the answer is not there, the query is passed up to
iitkgp.ernet.in, then to ernet.in, then to in, and from there towards the target hierarchy,
to the edu server, then to ucf.edu, and then to the server below it that definitely has the
record; that server supplies the IP address, which is passed back to the client.
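
From the application's point of view, name resolution is usually hidden behind a library call; for example, in Python the following line asks the configured name servers (which perform the lookup described above) for the address of a host. The host name is just an example, and the call of course needs network access:

import socket

name = "www.example.com"                       # example host name only
print(name, "->", socket.gethostbyname(name))  # name-to-address mapping via DNS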

(Refer Slide Time: 52:16)

(Refer Slide Time: 52:40)

This is how a query is resolved; the name is mapped to an address, so name-to-address mapping
is performed in this manner. Here are some of the applications based on the client-server
paradigm. Telnet is the remote terminal protocol. There are three basic services offered by
Telnet. It defines a network virtual terminal that provides a standard interface to remote
systems: every key press goes to the remote system and the response is displayed on the
client system, so the interaction proceeds as if the client computer were directly connected
to the remote server.

(Refer Slide Time: 53:55)

It also includes a mechanism that allows the client and server to negotiate options from a
standard set, and it treats both ends symmetrically, that is, client-to-server
communication is performed in a symmetrical manner. Similarly, the File Transfer Protocol
is another protocol that uses the client-server paradigm; it is used to transfer
files between two remote machines through the internet. A TCP connection is set up before
the transfer and persists throughout the session.

(Refer Slide Time: 54:25)


Through the internet, files can be transferred from the server to the client, and for that
purpose two connections are used: a control connection and a data connection, and more than
one file can be sent before the link is disconnected. The user views FTP as an interactive
system, because the file transfer can be carried out interactively over the control
connection, while the actual data transfer is performed over the data connection.

Finally, electronic mail is among the most widely used application services. The mail
system buffers outgoing and incoming messages, allowing the transfer to take place in the
background, and mail gateways can be used; the protocol employed is known as SMTP.
(Refer Slide Time: 55:22)

The TCP/IP protocol that supports electronic mail on the internet is called the Simple
Mail Transfer Protocol, which supports the following:
 sending a message to one or more recipients
 sending messages that include text, voice, video or graphics

This is the basic system: you have a client connected to a mail server, that server holds the
messages and sends them through the internet using TCP/IP to another mail server, and that
mail server is connected to another client; the two clients communicate with each other
through these servers, which act as gateways.
With this we come to the end of the discussion on TCP/IP. Here are the review questions.
(Refer Slide Time: 55:55)

(Refer Slide Time: 56:30)

1) What is the main function of the UDP protocol?


2) Why is a pseudo header added to a UDP datagram?
3) How does TCP establish and terminate a connection?
4) What are the advantages of DNS?
5) Explain how FTP works.
(Refer Slide Time: 56:45)

1) Why do you need the ARP protocol?


Two machines on a network can communicate only if they know each other's physical
address. The IP address alone is not enough to deliver packets, so a mapping from IP
address to physical address has to be performed, and that is what the ARP protocol does, as
we discussed in detail in the last lecture.

(Refer Slide Time: 57:09)

2) What is the purpose of dotted decimal representation? Give the dotted decimal
representation of the IP address.
As you know, the 32-bit address is divided into four octets and each octet is written in
decimal form; that gives the IP address in dotted decimal notation, and for the given bit
sequence the corresponding dotted decimal notation is shown.

(Refer Slide Time: 57:30)

3) How is masking related to subnetting?


Masking is the process that extracts the network address part from the 32-bit IP
address. When subnetting is done, masking is performed to get the subnetwork address
rather than the network address, so three levels of hierarchy are used, as we have
discussed.

(Refer Slide Time: 57:55)


4) What limitations of the SLIP protocol are overcome in the PPP protocol?
The limitations of SLIP which are overcome by PPP are:
 no error detection or correction is supported
 IP addresses should be known in advance
 it supports only IP
 no authentication
 no standard
These are overcome by using the PPP protocol.

(Refer Slide Time: 58:15)

5) What are the key features of IPv6?


Some of the most important new features are mentioned below:

 IPv6 uses 128-bit addresses


 It uses a more flexible header format:
 there are two parts, a basic header followed by extension headers
 resource allocation options, which are not present in IPv4, are provided
 provision for new and future protocol options is made
 security is supported, with a facility for authentication, and
 fragmentation is done at the source

These are the features of IPv6.


With this we come to the end of our discussion on TCP/IP, thank you.
Data Communication
Prof .A. Pal
Dept of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture 36
Multimedia Networks

Hello and welcome to today's lecture on multimedia networks. Multimedia


communication is considered to be the holy grail of data communication.

No lecture series or course on data communication is complete without a discussion of


multimedia communication. I shall cover the various aspects of multimedia
communication in three lectures. This is the first lecture on multimedia communication, and
in it I shall cover the following topics.

(Refer Slide Time: 01:30)

First I shall give an introduction explaining what multimedia is; then we shall
consider how audio and video, the two most demanding media, can be digitized,
and the bandwidth requirements after digitization. Then we shall discuss the
multimedia transmission requirements and the SAS factors for audio and video, and based on
these, the network performance parameters required for transmission of multimedia will be
discussed. We shall then explore the possibility of multimedia communication through
different types of networks such as circuit-switched networks, packet-switched networks,
local area networks and the internet.

As we shall see, there are many limitations, which have to be overcome by adding some
functionalities or protocols such as RTP, RSVP and MBONE, which we shall also cover in
this lecture.
(Refer Slide Time: 02:40)

On completion, the student will be able to define multimedia, that is, what is really
meant by multimedia; explain how audio and video signals can be
digitized and what their bandwidths are; explain the SAS factors for audio and video
transmission, which I shall go into in detail; state the network performance required for
multimedia transmission; and explain the limitations of multimedia transmission through
different types of networks, as I mentioned.

Now let us come to the definition of multimedia. Multimedia stands for more than one
continuous medium, such as text, graphics, audio, video and animation. In this sense, the
presentation that I am making can be considered a multimedia presentation, because I am
using text, graphics and animation.
(Refer Slide Time: 03:17)

However, commonly, when two or more continuous media are played during some well-defined
time interval with user interaction, it is considered to be multimedia, and the most
demanding of the media are audio and video. So we shall primarily focus on audio and video,
although text, graphics and animation are also considered part of multimedia.

What do we mean by audio?

By audio we mean what we hear through our ears; video is what we see through our eyes.
Let us now see how we can get a digitized audio signal, because we have to
communicate through computer networks, which nowadays are digital in nature in most
situations.
(Refer Slide Time: 04:28)

As a consequence, the analog audio signals have to be digitized. First let us briefly consider
how that is done. As you know, sound waves are converted to electronic form
using a microphone: the microphone converts the sound waves in the air into an electrical
signal, which is obviously analog in nature, and that analog signal is converted into digital
form in several steps.

The first step is sampling: the analog signal is sampled according to the Nyquist criterion,
and we get what is known as a PAM (Pulse Amplitude Modulated) signal, which we have already
discussed earlier. Then another step, known as quantization, is performed with the help of an
analog to digital converter, and we get a digital output in terms of zeros and ones. The
figure shows how this is done for a sine wave (Refer Slide Time: 5:55): the sine
wave is sampled, where the sampling frequency has to be at least twice the maximum frequency,
that is 2 f max, where f max is the maximum frequency component of the signal; after sampling
it is quantized, because the values have to be represented by a set of discrete levels, and
then these are converted into digital form by the analog to digital converter.
(Refer Slide Time: 05:43)

Let’s assume here considering voice and since the voice frequency component remains
up to 4 KHz we can convert it to digital form by sampling at 8 KHz.

(Refer Slide Time: 06:25)

When you sample at 8 KHz and use a 7-bit analog to digital converter, the bandwidth
required for communication of this digital voice is 56 Kbps. On the other hand, using a
sampling frequency of 8 KHz and a resolution of 8 bits (that is, an ADC with 8-bit
resolution) we get a bandwidth of 64 Kbps.
Whenever music is to be recorded on CD, a higher sampling frequency has to be used,
because music can have frequency components as high as 20 KHz, roughly the upper limit of
human hearing. As a consequence the sampling frequency is 44.1 KHz, and with a resolution of
16 bits this gives 1.411 Mbps, quite a high data rate. Of course, a CD recording is usually
stereophonic, so there are two channels (whereas voice can be a single channel); that is why
the data rate is multiplied by 2 to get 1.411 Mbps.
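
The bit rates quoted above follow directly from: bit rate = sampling rate x bits per sample x number of channels. A small Python check:

def audio_bit_rate(sample_rate_hz, bits_per_sample, channels=1):
    # Uncompressed digital audio bit rate.
    return sample_rate_hz * bits_per_sample * channels

print(audio_bit_rate(8_000, 8) / 1e3, "kbps")        # telephone voice: 64.0 kbps
print(audio_bit_rate(44_100, 16, 2) / 1e6, "Mbps")   # CD stereo: about 1.411 Mbps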

As we shall see, uncompressed data, both for audio and for video, has a high bandwidth, so
compression is used so that it can be communicated efficiently through the transmission media
or the network. For example, when voice is compressed, the 64 Kbps bandwidth comes down to 4
to 32 Kbps, depending on the compression technique and the nature of the voice.

(Refer Slide Time: 08:22)

Similarly, when CD audio is compressed, it comes down from 1.411 Mbps to 64 to 192 Kbps,
which can be transmitted at a much lower cost. Compression is indispensable, as we shall see,
not only for audio but even more so for video.

Now let us focus on video. The best way to understand the electronic video signal is to
understand how we get a picture from a digital camera. Nowadays all of us are familiar
with digital cameras; a digital camera has an electronic sensor called a Charge
Coupled Device, which is a semiconductor sensor that converts different levels of light
into electrical signals of different amplitude, so the light intensity is converted into an
electrical signal of corresponding amplitude.
(Refer Slide Time: 09:05)

If it is color, the light is passed through an RGB filter, as shown here, and then you get
three different components, red, green and blue, from the same scene. For example, here you
have an image, and this image can be converted into digital form. Video can be considered as
a sequence of frames: this is one frame, this is another frame, and when these frames are
displayed fast enough we get the impression of motion. As you know, we rely on the
persistence (retentivity) of our eyes: when the frames are flashed at a rate of, say, 50
frames per second, we do not perceive the individual changes, so the picture appears
continuous. That is why, to get a flicker-free display, the frames are repeated fifty times
per second. This is how we get video; essentially it is multiple frames per second.
(Refer Slide Time: 10:14)

How do you digitize it? Each frame, which is an image, is divided into a grid of small
elements called pixels, and for black and white TV the grey level of each pixel is
represented by 8 bits.

(Refer Slide Time: 11:04)

So if you represent it by 8 bits then each pixel gives you 8 bits of data. In the case of
color each pixel is represented by 24 bits, 8 bits for each primary color, that is 8 bits for
the R (red), 8 bits for the G (green) and 8 bits for the B (blue) component. In that case
each pixel requires 24 bits.
For example, let us assume a frame is made of 640 by 480 pixels, which is the size
typically used for storing video signals on CD. The bandwidth requirement for this is
2 x 25 x 640 x 480 x 24: 25 frames are sent per second and each is repeated twice, these
are the numbers of pixels, and this is the number of bits per pixel, which gives 368.64
Mbps. So quite a high data rate is necessary, and this is too high for transmission through
the internet without compression. Compression is therefore a must, particularly for video.
Here an example is given: this is a frame and it is divided into 640 x 480 pixels, which
you get from a digital camera.
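The same kind of arithmetic can be written down for video. This is only an illustrative sketch, not from the lecture, that reproduces the 368.64 Mbps figure quoted above; the factor of two reflects each frame being shown twice, as mentioned earlier.

```python
def raw_video_bit_rate(width, height, bits_per_pixel, frames_per_second, repeat=1):
    """Uncompressed video bit rate in bits per second."""
    return width * height * bits_per_pixel * frames_per_second * repeat

# 640 x 480 pixels, 24 bits per pixel, 25 frames per second, each frame shown twice
rate = raw_video_bit_rate(640, 480, 24, 25, repeat=2)
print(rate / 1e6, "Mbps")   # 368.64 Mbps
```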

(Refer Slide Time: 12:39)

For example, a 3.1 megapixel digital camera provides the following resolutions. At the
highest resolution you get 2048 x 1536 pixels, that is 2048 in the horizontal direction and
1536 in the vertical direction, and if you want lower resolutions it can be 1600 x 1200,
1024 x 768 or 640 x 480.

Now, for different applications we want different resolutions. For example, for high
definition TV (HDTV) the number of pixels required is 1920 horizontally and 1080
vertically, and using 24 bits per pixel and 60 frames per second that gives 2986 Mbps,
which is quite high and cannot be carried by most networks.
(Refer Slide Time: 13:27)

For standard TV you require 720 x 576 pixels, and with 24 bits per pixel and 25 frames
per second that gives 249 Mbps, which is quite high. For VCR quality it is 640 x 480,
and again using 24 bits and 25 frames per second we get 184.32 Mbps. There are two
other standards, CIF and QCIF, which are primarily used for video conferencing where
the data rates are smaller, that is, the number of pixels used is smaller and so the
bandwidth is lower. For CIF it is 352 x 288 with 8 bits per pixel, and 15 frames per
second gives 12.156 Mbps. For QCIF, used for low resolution multimedia and video
conferencing, the frame is 176 x 144 with 8 bits per pixel, and 10 frames per second
gives about 2 Mbps. Thus it varies from about 3000 Mbps to 2 Mbps for different
applications, so compression is required; with compression the 2986 Mbps of HDTV
comes down to 25 to 34 Mbps, which can be transmitted through many computer
networks, using MPEG-3.
(Refer Slide Time: 15:27)

Your TV quality signals which requires 249 Mbps in uncompressed form gets converted
to 3 to 6 Mbps by using Mpeg 2 technique and VCR quality signal requiring 184.3 Mbps
in uncompressed form can be converted into 1.5 Mbps by using Mpeg 1 so the bandwidth
required is greatly reduced.

For CIF you can have h.261 compression technique to give you a data rate of 112 Kbps
and for QCIF by using Mpeg 4 you can generate less than 64 Kbps data rate. Hence, after
compression the bandwidth is becoming much smaller which can be communicated
through many networks that we have already discussed.

Now let us look at the qualitative requirements for multimedia transmission. For
identifying the qualitative requirements two things are necessary. The first is the response
of the human ear, that is, the frequency range it can hear.
(Refer Slide Time: 16:54)

As you know, our ear can hear from 20 Hz to 20 KHz; that is the range of frequencies our
ear can hear, though for dogs and some other animals it extends to higher frequencies.
One important property of our ear is that it is more sensitive to changes in signal level
than to the absolute values. That means the absolute values are not that important, the
changes are more important. This characteristic of the ear has to be exploited when we
do multimedia transmission.

Similarly, when we look at the response of the human eye we have to utilize its
properties. Whenever an image falls on the eye, the eye retains that information for a few
milliseconds before it dies down; that is called the retentivity of the eye, and it can be
exploited for compression and later on for communication. Then there is some tolerance
of error. Some errors will occur as you communicate through a network; the error-rate
tolerance is higher for uncompressed signals, and when the signal is compressed it
becomes less tolerant to errors.

Later on, as we discuss the compression techniques in the next lecture, it will become
evident why a compressed signal is less tolerant to errors while an uncompressed signal
is comparatively more tolerant.

Tolerance to delay and variation in delay: Here also as we shall see there is small
tolerance for live applications. So whenever it is not live application then you can tolerate
larger delay or a variation in delay.

Possibly the most critical aspect is the lip synchronization. Normally voice and video
these two are recorded separately compressed separately and transmitted separately but it
is necessary to have synchronization so that there is lip synchronization. When somebody
is singing then lip synchronization has to be done with the audio and if it is not there it
will look very odd that’s why this lip synchronization is the most critical aspect for
multimedia communication. so the audio and video signals are to be synchronized. Later
on we shall see how it is being done.

The performance requirements for multimedia communication are expressed in terms of a
parameter known as the synchronization accuracy specification (SAS) factor. This SAS
factor is specified with the help of four parameters and essentially it specifies the
goodness of synchronization.

In multimedia essentially two continuous streams of data is coming; one for audio
another for video then it needs to be synchronized. Now, how good the synchronization is
is being expressed with the help of the SAS factor which is represented by four
parameters.

(Refer Slide Time: 20:30)

One is delay. Delay is essentially the acceptable time gap between transmission and
reception. Here you have got a transmitter, it is passing through a network and it is going
to the receiver. This is your receiver, and this is your transmitter (Refer Slide Time:
21:20) and obviously there will be some delay depending on the communication media. If
it is satellite it will have long delay, if it is local area network the delay will be very small
so depending on the network the delay will vary.

The second important parameter is delay jitter. Delay jitter is essentially the
instantaneous difference between the desired presentation times and the actual
presentation time of the streamed multimedia objects. Let us see with the help of this
diagram.
(Refer Slide Time: 21:46)

Suppose these are the presentation times for different multimedia objects, now because of
the variation of delay it may reach here or it may reach here or it may reach somewhere
here or it may reach somewhere here (Refer Slide Time: 22:09) so at different points with
respect to the original or actual presentation times. This is called the delay jitter. Delay
jitter represents the variations in delay, the instantaneous difference between the desired
representation times and the actual presentation times of the streamed multimedia objects.

On the other hand, the delay skew is the average difference between the desired and the
actual presentation times; it gives you the average value. For example, here immediately
after synchronization it reaches here, then it reaches little later or here it reaches still little
later here, it reaches still little later here, it reaches still little later and so on. that means
the skew the difference is increasing the average value is increasing with time so in this
direction you have got your time so this is being explained by the delay skew, this
parameter specifies the delay skew.

The fourth parameter is the error rate. Some errors are introduced as the digital signal
goes through the transmission media or the network, and this is represented by the bit
error rate, that is, the ratio of the number of bits in error to the total number of bits. So,
with the help of these four parameters the synchronization accuracy specification is
specified.
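To make the four SAS parameters concrete, here is a small illustrative sketch, with hypothetical timing values that are not from the lecture, showing how the delay, jitter, skew and bit error rate could be computed from the desired and actual presentation times of a stream.

```python
# Desired and actual presentation times (in milliseconds) of a few streamed
# multimedia objects; the numbers are purely hypothetical.
desired = [0, 40, 80, 120, 160]
actual  = [30, 75, 112, 155, 195]

# Instantaneous difference for each object (this is what jitter is about)
delays = [a - d for d, a in zip(desired, actual)]

delay  = delays[0]                        # end-to-end delay of the first object
jitter = max(delays) - min(delays)        # spread of the instantaneous differences
skew   = sum(delays) / len(delays)        # average difference = delay skew

bits_in_error, total_bits = 12, 1000000   # hypothetical counts
bit_error_rate = bits_in_error / total_bits

print(delay, jitter, skew, bit_error_rate)   # 30 ms, 5 ms, 33.4 ms, 1.2e-05
```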

Now let us consider it in case of audio. In case of audio particularly for conversation one-
way delay should be 100 to 500 millisecond range and should not be more than this
which requires echo cancellation.
(Refer Slide Time: 23:58)

The delay jitter has to be ten times better than the delay; for example, if the delay is 100
milliseconds then the delay jitter should be less than 10 milliseconds. Lip synchronization
should be better than 80 milliseconds, that means the time gap between the audio objects
and the corresponding video objects should be less than 80 milliseconds. The bit error
rate should be less than 0.01 for telephone quality voice, less than 0.001 for uncompressed
CD and less than 0.0001 for compressed CD. Therefore the bit error rate requirement is
more stringent for compressed CD than for uncompressed CD.
Now let us look at the SAS factors for the video.

(Refer Slide Time: 25:32)

For video the delay and jitter requirement is less than 50 milliseconds for HDTV (High
Definition TV), less than 100 milliseconds for broadcast quality TV and less than 500
milliseconds for videoconferencing. The error rate should be less than 0.000001 for
HDTV, less than 0.00001 for broadcast TV and less than 0.00001 for videoconferencing.
So for HDTV the bit error rate requirement is more stringent than for videoconferencing.

Now let us focus on the traffic characterization parameters. The need for traffic
characterization arises because of the variability of the bit rate.
(Refer Slide Time: 26:08)

It is divided into two categories. One is constant bit rate applications, for example,
uncompressed digitized voice transmission. Whenever voice is transmitted or video is
transmitted in uncompressed form it gives you a constant bit rate so that is being
explained by constant bit rate. On the other hand, whenever audio or video is compressed
different parts of the video or the different parts of the audio will not be compressed by
the same amount, the compression ratio will not be same for different parts of the audio
or video so as a consequence it will generate variable bit rate so video transmission using
compression leads to variable bit rate.

In particular, we shall see that most multimedia applications generate VBR, that is
Variable Bit Rate, traffic. This variable bit rate traffic is the cause of burstiness in the
traffic, and the burstiness is expressed as the ratio of the mean bit rate, that is the average
bit rate, to the peak bit rate. This ratio gives you the burstiness of a particular application;
for example, a source that averages 1 Mbps but peaks at 10 Mbps has a burstiness of 0.1.
Now, having discussed the parameters required for multimedia transmission, let us focus
on the networks. The performance of a network is specified with the help of network
performance parameters, or NPPs. Here again you have four parameters; the first one is
the throughput.
(Refer Slide Time: 28:11)

Throughput is the effective rate of transmission of information bits. For example,
Ethernet has a data rate of 10 Mbps; however, although the data rate is 10 Mbps the
throughput is much less, it can be as low as 3 Mbps. You may ask why. The reason is that
in Ethernet, as you know, there may be collisions, and because of collisions there will be
retransmissions or delays in transmission, so the overall throughput, the rate at which data
is actually delivered, is much less than 10 Mbps. That is why the throughput can be less
than the data rate.

Then comes the delay, delay is the time required for a bit to traverse through the network,
it is essentially the end-to-end delay. As I explained end-to-end delay is important and
depending on network you are using it can be different.

For example, for satellite communication the round-trip delay is about a quarter of a
second; for a LAN it will be very small; and for other wide area networks (WANs) it will
be less than a quarter of a second but definitely much higher than for local area networks.
The maximum delay that can be tolerated, as specified for the network performance
parameters, is 250 milliseconds. Then there is delay variance, the variation of the delay as
packets traverse the network. This can arise for various reasons: whenever an application
sends data it has to be packetized, then there is the transmission time and the propagation
time, so there are three factors, packetization, transmission time and propagation time,
and in some cases the network is of the store-and-forward type. Because of the variation
in all of these, there is a delay variance, and it should have a maximum value of 10
milliseconds.

Then error rate is specified in various ways. First one is bit error rate which is essentially
the number of bits in error per unit time then packet error rate which is the number of
packets in error per unit time and packet loss rate is the number of packets lost per unit
time and cell loss rate is the number of cells lost per unit time. So, depending on the
network it can be packets, it can be cells etc. For example for ATM it will be number of
cells sent per unit time and the number of cells in error is expressed as cell error rate.

These values of the network performance parameters are compared with the bandwidth
and SAS factors. In other words, the bandwidth and SAS factors of the multimedia
application have to be compared with the values of the network performance parameters
to determine the QoS, the Quality of Service, of the network. So the Quality of Service of
the network is determined by comparing these two; the better the quality of service, the
better the network.

Let us now consider different types of networks and see how capable they are of carrying
multimedia communication. First we shall focus on circuit switched networks. We have
already discussed different types of circuit switched networks; the first and most popular
one is the Public Switched Telephone Network, or PSTN.

(Refer Slide Time: 32:28)

We have already come out of the era of the plain old telephone service, which was analog.
In most places it has been replaced by a digital system, which is why I am saying PSTN,
Public Switched Telephone Network, and it gives a data rate of 64 Kbps. This data rate is
definitely not enough for real-time video, so it is excluded for multimedia communication.
However, we have seen that ADSL, which is the broadband service based on the public
switched telephone network, can give a data rate of 1.544 Mbps to 6.1 Mbps over a short
distance, so it can be used for video on demand and also for internet access at home.

So this ADSL which is based on PSTN can be used for multimedia communication. Then
comes the ISDN, ISDN has got two different interfaces BRI Basic Rate Interface and
Primary Rate Interface giving the data rate of 144 to 192 Kbps or 1.544 Mbps
respectively and this BRI interface is suitable for digital voice and video conference and
PRI is suitable for compressed VCR quality video.

We have not discussed ISDN so let me very quickly go through the ISDN and give you
an overview of ISDN before we come to other circuit switched networks.

(Refer Slide Time: 34:34)

ISDN was developed in the 1980s to allow the digital transmission of audio, video and
text over existing telephone networks. The basic idea was to convert everything into
digital form instead of keeping it analog. It carries multiple channels in an interleaved
manner using time division multiplexing. There were four popular types of channels: the
A type, a 4 KHz analog telephone channel; the B type, a 64 Kbps digital PCM channel
for voice and video; the C type, an 8 or 16 Kbps digital channel; and the D type, a 16
Kbps digital channel for signalling, primarily used for out-of-band signalling, which we
have discussed in detail.

As I mentioned, there are a few types of interfaces, of which two are more popular. The
basic rate interface gives you 2 B-type channels and 1 D-type channel, so these three
channels are interleaved; in the primary rate interface 23 B-type channels and 1 D-type
channel are interleaved together; and in the hybrid interface 1 A and 1 C channel are
interleaved together. The hybrid one is considered a replacement for the plain old
telephone system, namely the analog one. Here the two main types of interfaces are
shown.
(Refer Slide Time: 36:15)

This is your BRI basic rate interface primarily used for home and small business houses.
Here as I mentioned 2B type 64 kilobits and 1D type 16 kilobits are provided, of course
there is some overhead of 48 Kbps with total data rate of 192 Kbps and this will require
one network interface at home and from the network interface it goes to the ISDN office
or ISDN telephone exchange.

PRI was developed for somewhat bigger business houses and gives you 23 B channels
and 1 D channel of 16 Kbps. All of these together (Refer Slide Time: 37:18) give a data
rate of 1.544 Mbps, which offers many advantages for multimedia transmission.
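A small sketch of the channel arithmetic for the two interfaces. The 48 Kbps overhead for BRI is the figure given in the lecture; the PRI overhead printed here is simply the remainder needed to reach 1.544 Mbps and is my assumption, not a figure from the slides.

```python
B, D = 64, 16                        # channel rates in Kbps, as defined above

bri_total = 2 * B + D + 48           # 2B + D plus 48 Kbps of framing overhead
print(bri_total, "Kbps")             # 192 Kbps

pri_channels = 23 * B + D            # 23B + D
pri_overhead = 1544 - pri_channels   # assumed overhead to reach 1.544 Mbps
print(pri_channels, pri_overhead)    # 1488 Kbps of channels, 56 Kbps left over
```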

This requires two interfaces, NT2 and NT1. NT1 is the same kind of interface as is used
in BRI; it is essentially the interface that goes to the ISDN exchange. NT2 can be a PBX,
which does a kind of demultiplexing to a number of places. These two are necessary for
interfacing to the ISDN exchange.

Now let us go back to the previous slide. With the help of PRI you can get compressed
VCR quality video; you cannot have TV quality video, but compressed VCR quality
video transmission is possible using PRI. Then you have leased lines with data rates
varying from 384 Kbps to 274 Mbps for fractional T1 to T1, T2, T3 and T4, giving you
different capabilities starting from video conferencing, through VCR quality video and
broadcast quality video, up to HDTV for the different types of leased lines. Then we have
already discussed SONET, which is based on optical communication, giving data rates
varying from 51.84 to 2488.32 Mbps; it is available in multiples of 51.84 Mbps and is
definitely very suitable for multimedia traffic.

The circuit switched networks give very good quality of service because of the dedicated
end-to-end connection. A circuit is set up, and after the circuit is set up whatever delay
there is remains constant, so delay jitter is not present; a digital pipe is available and data
flows continuously, and as a result the quality of service is very good for circuit switched
networks. Now let us focus on the packet switched networks. As we discussed, packet
switched networks offer two options: one is the virtual circuit type and the other is the
datagram service type.

(Refer Slide Time: 39:49)

In the case of the virtual circuit type, end-to-end connectivity is established before data
communication and all the data goes through the same path or route, which is not true of
the datagram service, where there is no end-to-end connectivity. Obviously, in terms of
quality of service the VC based connection is better than the datagram based connection.
So for multimedia communication we prefer the virtual circuit based packet switched
network to the datagram type of connectivity, because it gives better quality of service.

Here are the examples of packet switched network.


(Refer Slide Time: 40:50)

We have already discussed X.25, which gives a data rate of 64 Kbps; obviously it is not
suitable for either audio or video traffic. Frame relay, which gives a data rate of 56 Kbps
to 1.544 Mbps, is better than X.25, so for some applications it can be used. SMDS, which
gives a much higher data rate of 1.544 Mbps to 46 Mbps, can carry multimedia traffic.
Let me very briefly consider why X.25 is not suitable: it performs error detection and
correction on every hop, and as a consequence it involves a lot of overhead and delay.

(Refer Slide Time: 41:34)


The bandwidth is 64 Kbps, and although higher bandwidths are also available nowadays,
those limitations are still there; it cannot provide bandwidth reservation, and it cannot do
multicasting either, which is very important for multimedia communication. So
multimedia communication cannot run on X.25.

(Refer Slide Time: 42:13)

For frame relay again there is no end-to-end error and flow control as a consequence it is
better suited for multimedia communication having higher bandwidth so it may carry
multimedia depending upon the detailed specification of the multimedia application.

I also mentioned SMDS, the Switched Multimegabit Data Service. Unlike X.25 and frame
relay, SMDS is connectionless. It is not designed to provide quality of service guarantees;
however, it gives a small delay, less than 10 ms, and so it can still be used for multimedia
traffic. Now let us focus on local area networks. We have already discussed various types
of local area networks; here is the summary.
(Refer Slide Time: 43:09)

The first and most popular one is Ethernet, which has a speed of 10 Mbps, but as I said
the throughput is much smaller than the nominal speed and can vary from 3 to 9 Mbps,
so obviously it is not suitable for multimedia. Because of the non-deterministic nature of
the access mechanism, a particular packet may suffer a large number of collisions; not
only that, a particular packet or frame may even get discarded if it suffers too many
collisions, so there is a possibility that a packet will not be delivered at all. As a
consequence Ethernet is not really suitable for multimedia traffic.

On the other hand, switched Ethernet gives you speed of ten Mbps but it has much higher
throughput because the bandwidth is not shared in this case and as a result it gives you
higher throughput. Also, it is suitable for stored multimedia communication. We shall
discuss about the different types of multimedia applications such as stored multimedia,
live multimedia and interactive multimedia later on. Hence, it is suitable for these types
of applications.

Then comes Fast Ethernet, which gives a speed of 100 Mbps and a throughput in the
range of 40 to 90 Mbps, so it is suitable for both stored and live multimedia. Nowadays
most LANs have a Gigabit Ethernet backbone along with a Fast Ethernet distribution
system; in such a hierarchical system the backbone network is Gigabit Ethernet and at
the lower levels you have Fast Ethernet, and in such local area networks live multimedia
communication is possible.

As far as the ring LANs are concerned, token ring with a speed of 4 Mbps and a
throughput of 3.8 Mbps is not suitable, while token ring with a 16 Mbps data rate and a
throughput of 15.5 Mbps can be used for audio and video communication. FDDI-2 with
6.144 Mbps can be used for constant bit rate broadcast TV. ATM, which is based on
B-ISDN (broadband ISDN), is actually designed for multimedia traffic; it is based on the
circuit switching concept and gives a speed of 34 to 155 Mbps. As a consequence ATM is
very suitable for multimedia communication.

Now let us focus on multimedia over the internet, which is the most popular and
important case. Unfortunately, like many other technologies, the internet was not designed
to carry multimedia traffic. When the internet was designed and evolved, multimedia
traffic was non-existent; the internet was essentially for data communication, for emails
and other information, at most some graphics, and not really for multimedia.

(Refer Slide Time: 46:45)

And as we know we have discussed two protocols TCP and UDP. TCP includes
connection establishment, error control, flow control, congestion control and so on and
we have seen that all these things put lot of overhead and leads to delay and delay jitter so
as a consequence TCP is not suitable for real-time multimedia traffic. On the other hand,
UDP which is connectionless protocol at the transport layer can deliver real-time data.
However, as you know that UDP is unreliable there is no error control facility. As a
consequence if error can be tolerated as with uncompressed audio and video this UDP
provides an alternative for multimedia communication.
(Refer Slide Time: 48:12)

However, you will require a number of enhancements. You have to add some more
functionalities and protocols to make the internet multimedia ready. Let us see what are
the enhancements and what are the additional protocols that can be used to make the
internet multimedia ready or multimedia enabled.

The first is multicasting. This is a very important requirement for multimedia. Multimedia
is usually not between two persons; it is not one-to-one communication, it is essentially
one-to-many. As a consequence it is essential to have a multicasting feature. Unfortunately
the original internet protocol follows a best-effort unicast approach; there is no concept of
multicasting in it. So something has to be added on top of it, known as IP multicasting,
which is an extension of the original IP protocol to support dynamic and distributed group
membership, multiple group membership and multiple sending and receiving nodes. So
the IP multicast feature has to be added on top of IP to make the internet multimedia
capable. Another important facility is the multicast backbone, which has been developed
to provide a practical implementation of IP multicast in support of multimedia.

Let me give you an overview of what you really mean by multicast backbone.
(Refer Slide Time: 50:05)

The multicast backbone can be considered as internet radio or TV. You are already
familiar with broadcast radio and broadcast TV: there are radio stations and TV stations
which continuously broadcast their signals, and with the help of our receivers and
television sets we can tune to one of the stations and get the radio or TV signal. There we
get it through the air, but here we would like to get it through the internet. How is it done?

A user will ask for a particular service, maybe a particular movie or a particular type of
music, and will then view it over the internet. It is implemented as a virtual overlay
network on top of the internet. The existing network is there, and on top of that there is a
virtual overlay, as shown in this diagram, which consists of multicast islands connected by
tunnels. Here is a multicast island; it can be a local area network, as shown here, or some
other type of network which has the multicasting facility, so each of these islands has
multicasting capability within it. These islands are connected by what are known as
tunnels with the help of MBone routers, and through these routers they communicate with
each other to provide connectivity across the network.
(Refer Slide Time: 52:07)

Another enhancement concerns UDP. UDP is more suitable than TCP for interactive
traffic, but it requires the services of RTP, the Real-time Transport Protocol, an
application-level protocol designed to handle real-time traffic. It uses an even-numbered
UDP port, and the next (odd) number is used by its companion protocol, the real-time
control protocol. So RTP is used for data communication and RTCP is used for control
signal communication. RTP provides end-to-end transport services for real-time audio,
video and simulation data; however, it does not give QoS guarantees.

(Refer Slide Time: 52:52)


Another important protocol that has been developed is RSVP. It is a signalling protocol
that runs over IP to provide the necessary Quality of Service. It has two components for
flow specification; a flow-based QoS model is required, so two specifications are there:
Rspec, which defines the resources required by the multimedia application, and Tspec,
which defines the traffic characteristics of the flow, that is, whether it is constant bit rate,
variable bit rate and so on. With the help of the RSVP protocol, resources can be reserved
along the path by using path messages sent by the sender and reservation messages sent
by the receivers; on each node RSVP attempts to reserve the resources needed to provide
the requested quality of service. It can run over IPv4 and IPv6, it is scalable, and it allows
dynamic group membership and routing changes.

(Refer Slide Time: 53:37)

Let’s see how it really works so you see from the sender the path messages will go to
different receivers then the reservation is done by different receivers by sending their
requests.
(Refer Slide Time: 54:10)

It also allows reservation merging. For example, the reservations of 3 Mbps and 2 Mbps
coming from two receivers can be merged into a single 3 Mbps reservation, and further
upstream this 3 Mbps and a 5 Mbps request are merged to get a 5 Mbps reservation. So 5
Mbps is the merged bandwidth reserved towards the sender; when reservations merge,
only the largest of the requests needs to be carried upstream. This is how the bandwidth is
reserved.
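The merging rule itself can be sketched very simply: when several downstream reservations for the same flow meet at a node, the node only has to carry the largest of them upstream. This is an illustrative sketch, not actual RSVP code.

```python
def merge_reservations(downstream_requests_mbps):
    """A node forwards upstream only the largest downstream request."""
    return max(downstream_requests_mbps)

# Receivers asking for 3 Mbps and 2 Mbps merge into a single 3 Mbps reservation,
# which in turn merges with a 5 Mbps request into one 5 Mbps reservation.
first_merge = merge_reservations([3, 2])                # 3
towards_sender = merge_reservations([first_merge, 5])   # 5
print(first_merge, towards_sender)
```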

Now let me give you the review questions based on this lecture.

(Refer Slide Time: 54:56)

1) What do you mean by multimedia?


2) What is SAS and what role does it play in multimedia communication?
3) Explain the function of NPPs in multimedia communication.
4) Distinguish between BRI and PRI of ISDN.
5) Explain the role of RTP and RTCP.

Now it is time to give you the answer to the questions of lecture – 35.

(Refer Slide Time: 55:24)

1) What is the main function of UDP protocol?

UDP is responsible for differentiating among multiple sources and destinations within one
host. The multiplexing and demultiplexing operations are performed using the port
mechanism; I explained in detail in the last lecture how the multiplexing and
demultiplexing are done.
(Refer Slide Time: 55:53)

2) Why pseudo header is added in a UDP datagram?

As you know, the UDP header contains the source port and destination port but has no
information about the IP addresses. So a twelve-octet pseudo header is added and used for
checksum computation by UDP. The purpose is to verify whether the UDP datagram has
reached the correct destination; the essential fields of the pseudo header are the source IP
address and the destination IP address. This is why the pseudo header is added for
computation of the checksum in UDP.
(Refer Slide Time: 56:34)

3) How TCP establishes and terminates connection?

For connection establishment TCP uses a three-way handshaking protocol, as I have
explained in detail, and for connection termination a four-way handshaking protocol is
used to terminate the connection in both directions.

(Refer Slide Time: 56:56)

4) What are the advantages of DNS domain name system?


As I mentioned DNS allows meaningful high level symbolic names instead of IP
addresses which is more convenient for humans and it uses hierarchical naming system
which has many advantages over flat addressing used earlier. It is very efficient because
of the use of distributed databases where authoritative records are stored.

(Refer Slide Time: 57:27)

5) Explain how FTP works?

FTP sets up two simultaneous connections; one for control and another for data. Control
connection persists for the entire session. Data transfer connections and data transfer
processes are created dynamically as and when required.

The session is terminated when control connection disappears. This is shown in this
diagram. Again we can see how the control connection and data transfer takes place from
one place to another through internet.

With this we come to the end of today’s lecture and in the next lecture we shall discuss
about the compression techniques which are indispensable for multimedia
communication.
Thank you.
Data Communication
Prof. A. Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture - 37
Audio and Video Compression

Hello and welcome to today’s lecture on audio and video compression. In the last lecture
we have discussed in detail the bandwidth and SAS factors of different multimedia
signal. We have also discussed the network performance parameters of different
networks. It is quite evident from the discussion of the last lecture that sending
multimedia signal through internet is not possible without compression. So in today's
lecture I shall try to give an overview of the compression techniques.

Fortunately in the last couple of decades rigorous research in the area of compression
technology has led to the emergence of sophisticated compression techniques leading to
massive compression. This has made possible transmission of multimedia signals through
internet. Hence discussion of the compression technique is very much essential but the
subject is very vast it is not really possible to cover everything in one lecture. A full
course of forty lectures can be taken on compression techniques. So in this lecture I shall
essentially try to give an overview of audio and video compression.

Here is the outline of the lecture.

(Refer Slide Time: 02:28)

I shall give a brief introduction and then discuss the audio compression technique
particularly what is being used in MPEG-3 that is MPEG version 3 then I shall discuss
the video compression techniques; essentially there are two approaches JPEG and MPEG.
MPEG has different versions like MPEG-1 MPEG-2 and MPEG-4 I shall give an
overview of these three versions and also the H.261 which is essentially used for low bit
rate compression and I shall give an overview of that also.

(Refer Slide Time: 03:16)

On completion the student will be able to state the need for compression, why
compression is needed and its benefits. They will be able to explain how audio
compression in MPEG is performed, they will be able to explain the basic principles of
video compression and they will be able to explain the popular video compression
standard as I have mentioned just now.

Now first let us consider the definition of compression. What do we really mean by
compression? By compression it essentially means that the compression converts an input
data stream to another data stream of smaller size. So essentially it reduces the size of the
data.
(Refer Slide Time: 03:45)

For example, here we apply the raw data to a device known as an encoder; the device
which performs the compression is known as the encoder. After the encoder performs the
compression, the compressed data is sent through the network and is received at the other
end by the decoder, which converts the compressed data back to uncompressed form. The
need for compression is quite clear. First of all it reduces the storage space: the amount of
storage required for uncompressed data is significantly larger than for compressed data.
Similarly, it reduces the bandwidth needed for transmission through a network; because of
the lower data rate of the compressed signal, it becomes possible to send it through a
network or communication link of smaller bandwidth, and this leads to lower
communication cost. As you know, the higher the bandwidth of a link the higher its cost,
so a lower bandwidth requirement reduces the cost of communication.

Moreover, it leads to the emergence of newer and newer applications. The compression
approaches can be broadly divided into two types. The first is known as lossless
compression: the data is reduced in size, but in such a way that at the other end the
original data can be recovered without loss of any information.

On the other hand, there are techniques, in fact the more common ones, known as lossy
compression. In such cases the decompressed data may not be exactly the same as the
original raw data; the reason is that our eyes and ears do not really notice the differences,
so we can afford to lose some information. Even so, the decompressed data in lossy
compression gives good quality reproduction although some information is lost. These are
the two basic approaches; we shall consider particularly the lossy compression techniques.

First I shall focus on audio compression.


(Refer Slide Time: 07:08)

Audio compression techniques can be broadly divided into two types: one is known as
predictive encoding and the other as perceptual encoding. In predictive encoding,
essentially the differences between the samples are encoded instead of the absolute
sample values.

We have already discussed in detail differential pulse code modulation and adaptive
differential pulse code modulation, where the differences between sample values are sent
instead of the absolute values, and this leads to lower bit rates; this is essentially the basic
idea behind predictive encoding. However, the amount of compression that you can
achieve using predictive encoding is not very high, which is why perceptual encoding is
more common.

You may be surprised to know that it makes use of the flaws of our auditory system,
based on the study of how people perceive sound. It essentially exploits the limitation that
our ears do not hear all sounds equally well. It is based on the science of how people
perceive sound, and the encoding is performed accordingly. Let us look at the hearing
threshold of a normal person; as you can see, this is the frequency range 0 Hz to 16 KHz.
(Refer Slide Time: 08:54)

Over this range, as you can see, our ear is most sensitive in the region of 2 KHz to 4 KHz,
and the hearing threshold rises as we go towards lower frequencies as well as towards
higher frequencies. Our ear is quite insensitive to high frequencies like 14 KHz, 16 KHz
or 20 KHz; however, the situation is different for dogs.

Dogs can even hear ultra sound, such a higher frequency signal but for human beings our
ear is not really sensitive to these higher frequencies. This is how it is expressed in
decibel. This is 40 db and this is 0 db so our observation is, ear sensitivity to sound is not
uniform, it won’t hear all the frequencies equally, so audio samples that are below the
threshold can be deleted. So what is the point of sending signals with amplitude in this
range of higher frequency? In any case the ear will not hear them so the basic idea is it is
better to discard and save bandwidth of the link. This is one approach. Another is, some
sounds can mask other sounds. It has been observed that some sounds can mask other
sounds.
(Refer Slide Time: 10:41)

There are two types of masking. One is known as frequency masking. It has been
observed that a loud sound in one frequency range can partially or fully mask another
sound in a nearby frequency range. For example, it is shown here (Refer Slide Time:
11:10) how, if there is a loud sound at some frequency, say 8 KHz, the threshold increases
as shown by the dotted line. So if there are sounds with amplitudes in this range they will
not be audible to the ear. The threshold is raised by the presence of a loud signal at a
particular frequency; this is known as frequency masking.

(Refer Slide Time: 11:05)


Another type of masking is possible, known as temporal masking. In temporal masking a
loud sound can numb our ears for a short duration even after the sound has stopped. So if
there is a loud sound at a particular time instant, some sounds around that time will not be
audible to the ear. For example, whenever a train passes nearby blowing its whistle, or a
plane crosses overhead, the loud background sound means we cannot talk to each other,
and if a class is going on it is difficult for the teacher to be heard by the students. We have
observed this in our day-to-day life, and we shall see how these effects can be exploited to
achieve compression.

Here for example the temporal masking is explained in detail.

(Refer Slide Time: 12:40)

There is a strong sound A that has occurred at time instant 0 of a particular frequency
preceded or followed by a weaker sound B at the same or nearby frequency and if the
time interval between the two is short B may not be audible. For example, this signal
(Refer Slide Time: 13:09) will not be audible because it is within 10 milliseconds and it is
about 32 db. On the other hand the threshold has been raised to 60 db. You can see
threshold is suddenly raised and it falls gradually.

However, after 20 milliseconds it can be heard, because by then it comes above the
threshold value. So depending on when the weaker sound occurs with respect to the loud
sound, it may or may not be audible. Some of the signal components in this range can
therefore be removed from the signal before transmission. This is how compression can
be done.

What is being done is the critical bands are determined according to the sound perception
of the ear. So the entire range is divided as 27 bands here.
(Refer Slide Time: 14:01)

The frequency range is expressed using a unit known as the bark, named after Heinrich
Barkhausen, a German scientist. For frequencies below 500 Hz the critical band number
in bark is approximately f/100, whereas for frequencies above 500 Hz it is approximately
9 + 4 log2(f/1000), where f is the frequency in Hz. So at higher frequencies the frequency
range covered by a particular band is larger. Thus band 0 covers only 0 to 50 Hz and band
1 covers 50 to 95 Hz, whereas band 10 covers 940 to 1125 Hz, band 19 covers 3840 to
4690 Hz, and band 26 covers 15375 to 20250 Hz. Therefore the range is much larger in
the higher bands than in the lower bands; essentially, because the sensitivity of the ear is
lower in the higher frequency bands, this is how the division is made.
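A small sketch of this frequency-to-bark conversion, assuming the commonly quoted approximation stated above (f/100 below 500 Hz, and 9 + 4 log2(f/1000) above it); treat it as illustrative rather than as the exact formula on the slide.

```python
import math

def to_bark(f_hz):
    """Approximate critical-band number (in bark) for a frequency in Hz."""
    if f_hz < 500:
        return f_hz / 100.0
    return 9 + 4 * math.log2(f_hz / 1000.0)

for f in (100, 500, 1000, 4000, 16000):
    print(f, round(to_bark(f), 1))   # 1.0, 5.0, 9.0, 17.0, 25.0
```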
(Refer Slide Time: 15:44)

Now let us see how MPEG-1 performs audio compression based on the techniques I have
discussed. First of all, sampling is done at 32 KHz, 44.1 KHz or 48 KHz; 44.1 KHz is of
course very common for CD quality audio. Then the signal is converted from the time
domain to the frequency domain using a fast Fourier transform. The resulting spectrum is
divided into at most 32 frequency bands, each of which is processed separately. Processed
separately means, as we have seen, that each frequency band is either dropped, reduced,
or sent with a different resolution.

For example, frequency ranges that are completely masked are allocated zero bits, and
frequency ranges that are partially masked are allocated a smaller number of bits. On the
other hand, frequency ranges that are not masked are allocated a large number of bits. So
higher precision is allocated to the frequency ranges to which our ear is more sensitive,
lower precision to the less sensitive ranges, and zero bits to the ranges that are not audible
to the ear at all.

In addition to that, in the case of stereo, the redundancy inherent in two highly
overlapping audio sources is exploited. Whenever we record stereophonic sound, the two
channels have a lot of commonality or redundancy, and that redundancy inherent in the
two overlapping audio sources is exploited for further compression. With MPEG-1 the
audio stream rate is adjustable from 32 Kbps to 448 Kbps, depending on the bandwidth of
the source and the frequency components present in the signal.

And as we can see whenever it is uncompressed voice 8 KHz sampling frequency and a
resolution of 8 bits per sample gives you 64 Kbps.
(Refer Slide Time: 18:00)

On the other hand, when voice is compressed, the same sampling frequency and
resolution can lead to 4 to 32 Kbps, so a compression of 2:1 to as much as 16:1 is
possible. For CD quality audio in uncompressed form, as we know, because of the two
channels and sampling at 44.1 KHz with a resolution of 16 bits per sample, we get 1.411
Mbps, and when you compress it you get 64 to 192 Kbps; that is the amount of
compression that is possible for the audio signal.

Now let us focus on video compression. Video is essentially a temporal combination of
different frames, and each frame can be considered as an image, that is, a still picture
comprising a spatial combination of pixels.
(Refer Slide Time: 19:03)

Two basic principles are used. One is the Joint Photographic Experts Group standard, the
JPEG standard, which is primarily used to compress images by removing the spatial
redundancy that exists within each frame; each frame is considered as a still picture and
the spatial redundancy present in it is reduced with the help of JPEG. The other is MPEG,
the Moving Picture Experts Group standard, which is used to compress video by
removing the temporal redundancy across a set of frames, exploiting the fact that the
differences between two adjacent frames can be very small. These are the two techniques
used. First let us focus on JPEG. JPEG involves the following four distinct steps.
(Refer Slide Time: 20:22)

First one is block preparation and second one is discrete cosine transform, third is
quantization fourth is compression. These are performed one after another to generate the
compressed output. The raw data is received here then block preparation, discrete cosine
transform, quantization and data compression is performed to generate the output signal.

First let us consider the block preparation. As you know after the video signal is digitized
it is converted into an array of pixels for example for CD quality or VCR quality video
with the typical number of pixels in horizontal direction 640 and in the vertical direction
480 so you have got 640/480 pixels and each of the pixel has got RGB components Red,
Green and Blue components each is represented by 8 bits. This leads to twenty four bits
per pixel. This is how still image is digitized.
(Refer Slide Time: 20:52)

However, before performing any compression operation, the signal is converted into
luminance and chrominance components. The reasons for translating RGB into luminance
and chrominance are, first, that our eyes are more sensitive to luminance than to
chrominance, so the chrominance can be sent with lower resolution, and second, that the
luminance and chrominance representation allows more compression than the RGB
representation. That is why the digital video signal is converted into Y, U and V
components.

So each pixel, in place of RGB, now has Y, U and V components. The formula used for
the conversion is shown here (Refer Slide Time: 22:47). When the signal is played back at
the receiving end, the inverse conversion has to be done to get back the RGB values. This
conversion to luminance and chrominance has another benefit: it makes the signal
compatible with black and white technology. For example, if only the luminance
component is sent to a black and white receiver, we get the black and white picture
corresponding to the color picture.
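Since the conversion formula itself appears only on the slide, here is a sketch using commonly used luminance and colour-difference weights (ITU-R BT.601 style); the exact coefficients should be treated as my assumption rather than the ones shown in the lecture.

```python
def rgb_to_yuv(r, g, b):
    """One pixel from RGB to luminance (Y) and chrominance (U, V)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # weighted towards green, as the eye is
    u = b - y                               # blue colour difference
    v = r - y                               # red colour difference
    return y, u, v

def yuv_to_rgb(y, u, v):
    """Inverse conversion, done at the receiving end."""
    b = u + y
    r = v + y
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

print(rgb_to_yuv(200, 100, 50))   # luminance plus two colour-difference values
```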

Now let us see how block preparation is done. Each pixel is represented by Y, U and V,
and each component is divided into blocks of 8 x 8 pixels. Each block has 8 x 8 pixels
and is processed separately, so that the computation required for compression is
minimized; that is the basic objective of block preparation. For the Y component, which
is very important and to which our eyes are more sensitive, the number of pixels remains
unaltered, so it has 4800 blocks. After block preparation the Y component has 4800
blocks and each block can be processed separately.

On the other hand, for the U and V components (Refer Slide Time: 24:33) the pixels are
averaged: 4 pixels are averaged to generate 1 pixel, and both U and V are represented by
320 by 240 pixels after averaging. So after averaging we get 320 x 240, which
significantly reduces the size, and in turn the number of blocks to be processed is reduced
to 1200 for each.
So there are 1200 blocks for U, 1200 for V and 4800 for Y, altogether 7200 blocks to be
processed. Let us see what kind of processing is done on each block. Each block of 64
pixels, that is 8 x 8, goes through a transformation called the discrete cosine transform.
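Before moving on to the DCT, here is a sketch of the block-preparation arithmetic; the simple 2 x 2 averaging of the chrominance pixels is an assumption about how the four pixels are combined.

```python
def num_blocks(width, height, block=8):
    """How many 8 x 8 blocks a component of the given size produces."""
    return (width // block) * (height // block)

print(num_blocks(640, 480))   # 4800 blocks for the Y component
print(num_blocks(320, 240))   # 1200 blocks each for U and V after averaging

def average_2x2(plane):
    """Average each 2 x 2 group of chrominance pixels into one pixel."""
    return [[(plane[y][x] + plane[y][x + 1] + plane[y + 1][x] + plane[y + 1][x + 1]) / 4
             for x in range(0, len(plane[0]), 2)]
            for y in range(0, len(plane), 2)]

print(average_2x2([[10, 20], [30, 40]]))   # [[25.0]]
```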

(Refer Slide Time: 25:24)

This discrete cosine transform is highly mathematical. I shall give three examples to
show how it helps to minimize the information or how the numbers reduce.

Ultimately we have a block containing 8 x 8 = 64 numbers, and as it stands it would be
necessary to send 64 numbers of 8 bits each. However, suppose there is a block of
uniform intensity, with no change at all. Using the discrete cosine transform it is
converted into the array you see here: we get only one non-zero DC component, and all
the AC components, which essentially represent changes with respect to the (0, 0)
position, are 0 because there is no difference in any other pixel. So essentially it has one
DC component and all the AC components are 0, and these zeros can ultimately be sent in
a very compact form, as we shall see later.

Then let us consider the second case where you have got two different tones that means
two different intensities, 1 and 2. And as you can see intensity level is represented by 20
20 20 20 and the other half is 50 50 50 50.
(Refer Slide Time: 27:02)

So after discrete cosine transform as you can see it has got one DC component and few
AC components and a large number of zero values. So the number of zeros is
significantly increased and you have got very few AC components and DC components.

Now, if the intensity changes uniformly, as you can see here, 20 30 40 50 60 70 80 90,
then after the discrete cosine transform it generates numbers like 400, which is essentially
the average value with a multiplication factor, and other AC values like 146 and so on;
these are the AC components, and here also you have a large number of zeros. Of course,
a real-life picture will not have a uniform tone, just two tones or a gradually increasing
tone; there will be more variation, and the number of non-zero AC components will be
larger than this, but it will still have a large number of zeros. So this is how the discrete
cosine transform is performed, and it helps to get a large number of zeros among the
coefficients.
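For readers who want to experiment, here is a small, unoptimized sketch of the 8 x 8 DCT used in JPEG, applied to the uniform-intensity case described above; with the usual scaling the DC value comes out as eight times the pixel value and every AC coefficient is zero.

```python
import math

def dct_2d(block):
    """8 x 8 two-dimensional DCT-II of a block of pixel values."""
    n = 8

    def c(k):
        return 1 / math.sqrt(2) if k == 0 else 1.0

    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            out[u][v] = 0.25 * c(u) * c(v) * s
    return out

uniform = [[20] * 8 for _ in range(8)]      # a block of constant intensity
coeffs = dct_2d(uniform)
print(round(coeffs[0][0], 1))               # 160.0 : the single DC term
print(all(abs(coeffs[u][v]) < 1e-6          # every AC term is (numerically) zero
          for u in range(8) for v in range(8) if (u, v) != (0, 0)))
```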
(Refer Slide Time: 28:49)

Now another step is performed, known as quantization, which further increases the
number of zeros. As we have seen, the DC component is the most important component,
and the coefficients near it are also important. However, the coefficients towards the
opposite corner (Refer Slide Time: 29:12) are not really that important, so the coefficients
are divided by different numbers and the fractional part is discarded.

For example, these four coefficient values are divided by 1, the values in the next row and
column are divided by 2, the values in the row and column after that are divided by 4, and
so on, until the last row and column are divided by 64. As a consequence, if this is the
original matrix of DCT (Discrete Cosine Transform) coefficients, then after performing
quantization you can see that the number of zeros has increased; for example, in this
region there are more zeros compared to the original (Refer Slide Time: 29:58). So
quantization helps to increase the number of zeros.
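One simple way to build a divisor table of the kind just described, and to apply it, is sketched below; the exact table used on the slide may differ, so treat this as illustrative.

```python
def quantization_table():
    """1 in the top-left corner, doubling towards the bottom-right up to 64."""
    return [[1 if max(i, j) <= 1 else 2 ** (max(i, j) - 1) for j in range(8)]
            for i in range(8)]

def quantize(coeffs, table):
    """Divide each DCT coefficient by its divisor and drop the fraction."""
    return [[int(coeffs[i][j] / table[i][j]) for j in range(8)] for i in range(8)]

made_up = [[150 - 10 * (i + j) for j in range(8)] for i in range(8)]  # fake coefficients
q = quantize(made_up, quantization_table())
print(q[0][:4])   # [150, 140, 65, 30] : top-left values barely change
print(q[7][:4])   # [1, 1, 0, 0]       : bottom-right values collapse to zeros
```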

In fact JPEG is lossy mainly because of this quantization step; the prior steps, namely
block preparation and the DCT, are not essentially lossy. This is where some information
loss takes place, because of quantization, but our eyes are not able to detect the difference.
After quantization, each block is scanned in a zig-zag fashion: this one, this one, this one,
then this one, this one, this one (Refer Slide Time: 30:40), in this way.
(Refer Slide Time: 30:32)

You may be asking what the purpose of this is. The purpose is that if you scan row by
row, the runs of zeros will be shorter than if you scan in the zig-zag fashion.

For example, if you scan in a horizontal manner, the numbers of consecutive zero
locations will be 24 plus 7, that is 31. On the other hand, if you scan in the zig-zag
manner then, as you can see, from here onwards all the values are zeros, so you get 38
zeros in a row; such a sequence is called a run of zeros. Essentially the zig-zag scan helps
you to get longer runs of zeros, and these runs of zeros can then be sent in a compact
form, as shown here.

For example, this 26 here means there are 26 consecutive zeros, and those 26 zeros can be
represented by this one piece of information. So a block is now converted into far fewer
numbers; initially we had 64 numbers and now the block has been converted into a
handful of run-length coded values, which can be transmitted to the other end, where
decoding can be done to get back the original signal.
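A sketch of the zig-zag scan and the run-length idea follows; the actual JPEG symbol format (Huffman coding of the runs and values) is omitted here.

```python
def zigzag_order(n=8):
    """Visit an n x n block along its anti-diagonals, alternating direction."""
    order = []
    for s in range(2 * n - 1):
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        order.extend(diag if s % 2 else diag[::-1])
    return order

def run_length_encode(block):
    """Scan in zig-zag order and emit (run_of_zeros, value) pairs."""
    values = [block[i][j] for i, j in zigzag_order()]
    pairs, run = [], 0
    for v in values:
        if v == 0:
            run += 1
        else:
            pairs.append((run, v))
            run = 0
    pairs.append((run, "end"))       # the trailing zeros collapse into one entry
    return pairs

block = [[0] * 8 for _ in range(8)]
block[0][0], block[0][1], block[1][0] = 150, -3, 2    # made-up quantized coefficients
print(run_length_encode(block))    # [(0, 150), (0, -3), (0, 2), (61, 'end')]
```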
This is the basic concept behind the JPEG. Now let us see what is performed by MPEG.
(Refer Slide Time: 32:19)

MPEG-1 was the first standard to be finalized, intended for video compression for
interactive video on CDs and for digital audio broadcasting. A VCR quality video of
640 x 480 pixels with 25 frames per second (each frame shown twice) and 24 bits per
pixel gives 368.64 Mbps in uncompressed form, and after MPEG-1 compression it comes
down to about 1.5 Mbps, so there is a tremendous compression ratio, a significant
reduction in size, which allows it to be sent through many networks; 368.64 Mbps is very
difficult to send through most networks, in practice it cannot be sent. MPEG-1 is likely to
dominate the encoding of CD-ROM based movies because it gives quite good quality, and
another advantage is that this 1.5 Mbps can be transmitted over twisted pair for modest
distances. For example, it can be transmitted through an ADSL network quite efficiently;
using the ADSL twisted-pair network it can be sent over a distance of 18000 ft, roughly
5 km. Because of this compression it is possible to send MPEG video through ADSL.

MPEG has three components; audio, video and system. We have already discussed how
the audio compression is done.
(Refer Slide Time: 34:08)

Audio signal is applied to the audio encoder which does the compression independent of
the video encoder. So video signal is applied to the video encoder and audio signal is
applied to the audio encoder after the sampling. Now a clock is used which operates at 90
KHz to provide the information in the form of time stamp. So this time stamping is
performed and this time stamped audio and video signals are multiplexed to generate
MPEG-1 output which is propagated all the way to the receiver. So this time stamping
helps you to do the synchronization of audio and video signal. As we have seen one of
the important SAS factor is lip synchronization, one of the important requirement is lip
synchronization. So to facilitate lip synchronization this kind of time stamping is
necessary and this time stamping is also necessary for streaming as we shall see in the
next lecture.

Let us see how MPEG video compression is performed. Encoding each frame separately with JPEG removes spatial redundancy; we have already seen how JPEG does this. The JPEG-encoded frames can even be sent without any further compression, which is useful in situations where each frame has to be accessed randomly, such as when editing for video production. However, in such a case you will get only 8 to 10 Mbps of compressed bandwidth, which is not a very high compression.
(Refer Slide Time: 35:31)

However, additional compression can be achieved by taking advantage of the fact that consecutive frames are often almost identical; this is temporal redundancy. For example, two frames are shown here (Refer Slide Time: 36:38), frame one and frame n. As you can see, the background is identical in both: the house, the tree and so on are all the same, except for the blocks in which the person appears in the first frame and the blocks in which the person appears in the second. Only in these two regions are there differences. So, if the first frame is used to reconstruct the second, only the information of those differing blocks is sufficient to encode the second frame, and also, at the receiving end, to regenerate it from the first. This is the basic idea behind MPEG.

To do that, the MPEG-1 output consists of four kinds of frames for motion compensation. Essentially the difference between two frames is to be compensated, which is known as motion compensation. This is done by sending four different types of frames.
(Refer Slide Time: 37:28)

The first one is the I type of frame. These I frames are intracoded frames: they are self-contained, JPEG encoded, and appear periodically. Since they contain all the information of the JPEG encoding, they can be decoded independently. On the other hand, the P type, the predictive frames, use the block-by-block difference with the preceding I or P frame. A P frame therefore cannot be decoded independently; it is not self-contained, and the preceding I or P frame is required both to encode a P frame and to decode it.

Another type of frame is the bidirectional, or B, frame. In bidirectional frames the differences with both the preceding and the following I and P frames are used as references. These are the three basic frame types used. In addition, D type frames are essentially block averages, used for fast forward but not really used for compression.

Now let us see how the I, P and B frames are used. Here, for example, is the order in which the different frames are to be displayed. Frame 1 is the I frame; 2, 3 and 4 are B frames (Refer Slide Time: 39:22), which means 2 depends on 1 as well as on 5, 3 depends on 1 as well as on 5, and 4 also depends on 1 as well as on 5. On the other hand, 5 is a P frame and depends only on 1, the previous reference, and not on any following frame. Similarly 6, 7 and 8 depend on 5 as well as on 9, whereas 9, being a P frame, depends only on the preceding reference frame 5.
(Refer Slide Time: 39:09)

Now, because of this dependency, the encoder does not send the frames in the order in which they are to be displayed. First frame 1 is sent, then frame 5, because to reconstruct frames 2, 3 and 4 you require the information of 1 as well as of 5; so it is necessary to receive frame 5 before 2, 3 and 4 can be reconstructed at the receiving end. That is why 1 is sent, then 5, then 2, 3 and 4; similarly 9 is sent before 6, 7 and 8. So you can see the display order and the coding order are different. The encoder generates the frames in this coding order, which is received by the decoder, and the decoder, after doing the necessary processing, again displays them in the natural sequence 1, 2, 3, 4 and so on.
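A minimal sketch of this reordering idea is given below. It assumes a hypothetical group of pictures with the pattern I B B B P B B B P, as in the example above; the rule applied is simply that a B frame is transmitted only after the following reference frame it depends on.

```python
# Hedged sketch of display order vs. coding (transmission) order.
# Assumption: a B frame can only be decoded after the *next* reference frame
# (I or P) has been received, so the encoder sends that reference first.

def coding_order(frame_types):
    """frame_types: list like ['I','B','B','B','P',...] in display order.
    Returns the transmission order as 1-based frame numbers."""
    order, pending_b = [], []
    for number, ftype in enumerate(frame_types, start=1):
        if ftype in ('I', 'P'):          # reference frame: send it now ...
            order.append(number)
            order.extend(pending_b)      # ... then the B frames waiting for it
            pending_b = []
        else:                            # B frame: hold until the next reference
            pending_b.append(number)
    order.extend(pending_b)              # trailing B frames, if any
    return order

display = ['I', 'B', 'B', 'B', 'P', 'B', 'B', 'B', 'P']
print(coding_order(display))             # [1, 5, 2, 3, 4, 9, 6, 7, 8]
```

The printed order matches the example: 1 is sent, then 5, then 2, 3, 4, and 9 is sent before 6, 7 and 8.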

So one observation is that the coding order and the display order can be different. A second observation is that the encoder may take minutes or hours to encode the data, but the decoder has to be fast. The encoding algorithm is quite complex, which is why it may take minutes or hours; at the receiving end, however, the decoder has to do on-the-fly processing and display the images, so the decoder has to be pretty fast. There is thus a large difference in the complexity of the encoder and the decoder. Now let us consider MPEG-2.
(Refer Slide Time: 42:01)

Although MPEG-2 is similar to MPEG-1, it was developed for digital television. In this case D frames are not supported, because in television you don't really do fast forward, so D type frames are not really required. To obtain greater quality the DCT is 10 × 10 instead of the 8 × 8 used in MPEG-1, and MPEG-2 supports four different resolutions, two of which are shown here, HDTV and TV, as well as five different profiles for different applications. So MPEG-2 provides different options; it is somewhat like a shopping list from which you choose a profile, a resolution and so on, depending upon your application. MPEG-2 also has a more general way of multiplexing. Each stream is packetized with time stamps, as is done in MPEG-1, and the output of each packetizer is a Packetized Elementary Stream (PES) with a header comprising a number of fields.
(Refer Slide Time: 43:14)

So each packetizer produces a Packetized Elementary Stream whose header fields essentially carry the time stamping, error detection and various other pieces of information; these streams are then multiplexed. One multiplexer is the PS (program stream) multiplexer and the other is the transport stream multiplexer. The program stream uses variable-length packets with a common time base; as we have seen, compression produces data at a variable rate, which is why the PS uses variable-length packets, although a common time base is maintained.

The transport stream, on the other hand, uses fixed-length packets with no common time base. These two streams are sent out after the compression.

Coming to MPEG-4: the MPEG-4 video compression project started as a standard for very low bit rates, for use in portable applications such as videophones. To support very low bit rates MPEG-4 is used, and the standard includes much more than just data compression.
(Refer Slide Time: 44:34)

We have seen in MPEG-1 and MPEG-2 essentially the techniques are concerned with
compression. But apart from compression as we can see MPEG-4 has got many important
functionalities.

(Refer Slide Time: 45:14)

Some of the important functionalities are mentioned here.

 content based multimedia access tools


 content based manipulation and bit stream editing
 hybrid natural and synthetic data coding
 improved temporal random access
 improved coding efficiency
 coding of multiple concurrent data streams
 robustness in error-prone environment and
 content based scalability

So the main feature, as we can see, is that it has many facilities for content-based processing. You must have noticed that nowadays, when you are watching the relay of a cricket match, whenever a batsman gets out or a fielder takes a catch that part can be replayed very quickly; that is made possible by this kind of content-based multimedia access tool. Similarly, whenever a commentator comments about a catch or a piece of wrong fielding, that situation can be very easily identified and accessed with the kind of processing capability that MPEG-4 allows. Moreover, it allows coding of multiple concurrent data streams, so multilingual transmission is also possible with MPEG-4.

Now let us focus on the last standard, that is H.261. H.261 was developed as a standard for video telephony over ISDN services.

(Refer Slide Time: 46:52)

As we have seen, in case of ISDN services the rates are multiples of 64 Kbps; that is why after compression it is necessary to transmit at a rate that fits into 64 Kbps or some multiple of it. H.261 limits the images to just two sizes: one is known as the Common Intermediate Format (CIF) and the other is the Quarter Common Intermediate Format (QCIF).

Using CIF the frame size is 352 × 288, each pixel is represented by 8 bits and there are 15 frames per second; that gives 24.33 Mbps without compression, and using H.261 you get 112 Kbps. In case of QCIF a still smaller frame size is used and the number of frames per second is reduced from 15 to 10; the number of bits per pixel is again 8, the uncompressed signal is 4.0 Mbps, and after compression using H.261 it comes to less than 64 Kbps. This is primarily used for transmission through the ISDN network. Another requirement of H.261 is that both encoder and decoder have to be fast, because both operate in real time.

(Refer Slide Time: 48:43)

In case of MPEG-1, MPEG-2 or MPEG-4 we have seen that the encoder can be very slow, because it is very complex, while the decoder has to be fast. But in case of H.261 both encoder and decoder have to be very fast, because it is used for interactive video transmission, for example video conferencing. Like MPEG, the compressed stream is organized in layers and macro blocks; like MPEG it uses the 8 × 8 DCT and the zig-zag order, as we have seen, for the purpose of compression; and motion compensation is used, where pictures are predicted from other pictures and motion vectors are coded as differences. This part is similar to MPEG: the way motion compensation is done in MPEG, it is done in a similar manner in H.261. So we have discussed different audio and video compression techniques in this lecture. Let us see the review questions.
(Refer Slide Time: 50:11)

1) Why do you need data compression?


2) Distinguish between frequency masking and temporal masking?
3) Compare the importance of luminance and chrominance in the context of video compression.
4) Distinguish between spatial and temporal redundancy?
5) What is motion compensation?

Now it is time to give the answers to the questions of Lecture 36.

(Refer Slide Time: 50:48)


1) What do you mean by multimedia?

As we know, multimedia stands for more than one medium. By definition multimedia means more than one medium, such as text, graphics, audio, video and animation. For example, while delivering this lecture I have used text as well as graphics, animation and audio/video, so we can say I have used multimedia. However, the term commonly refers to two or more continuous media, such as audio and video, played during some well-defined time interval with user interaction. This is how multimedia is commonly defined.

(Refer Slide Time: 52:05)

2) What is SAS (synchronization accuracy specification) and what role does it play in multimedia communication?

As we have seen, multimedia signals require bandwidth as well as SAS factors to specify the goodness of transmission. In particular the SAS factors, comprising the four parameters delay, delay jitter, delay skew and error rate that we have already discussed in detail, specify the goodness of synchronization of a multimedia stream in addition to the bandwidth that is required. SAS ensures the occurrence of events at the same time, such as lip synchronization. In other words, the synchronization accuracy specification ensures that the entire multimedia signal is played out at precise time instants in a synchronized manner. That is the role of the SAS factors.
(Refer Slide Time: 53:13)

3) Explain the function of NPPs in multimedia communication.

For multimedia transmission the requirement is specified by the SAS factors and the bandwidth. On the other hand, the network performance is specified in terms of network performance parameters, and these two are to be compared: the network performance parameters against the SAS factors and bandwidth requirement of an application. By comparing them one can determine whether or not the network is capable of carrying the multimedia traffic. So the network performance parameters are essential to decide whether a network is suitable for multimedia communication or not.

(Refer Slide Time: 54:11)


4) Distinguish between BRI and PRI of ISDN.

We have seen that ISDN uses two important interfaces. One is known as the basic rate interface (BRI), which specifies a digital pipe comprising two 64 Kbps B channels and one 16 Kbps D channel; this is of lower bandwidth and is designed to meet the requirements of residences as well as small offices.

On the other hand, the primary rate interface (PRI) specifies a digital pipe with 23 B channels, each of 64 Kbps, and one D channel, also of 64 Kbps. Obviously with this you can achieve much higher bandwidth, which can support multimedia transmission at video quality, whereas a BRI interface obviously cannot support video-quality multimedia transmission.

And of course whenever you have got PRI interface apart from video quality multimedia
transmission you can use it for various other applications such as data communication.

(Refer Slide Time: 55:35)

5) Explain the role of RTP and RTCP.

RTP is a transport layer protocol that runs on top of UDP to support real-time traffic. UDP is a connectionless, unreliable protocol, so by itself it cannot really support multimedia communication; to facilitate that, UDP should be used in conjunction with RTP to support real-time traffic.

Thus, RTP provides end-to-end transport services to real-time audio, video and simulation data. For control information you require another protocol, RTCP, which allows communication of control information.
With this we come to the end of today's lecture and we have discussed how audio and
video compression can be performed. In the next lecture we shall discuss the different
applications possible because of compression.

Thank you.
Data Communication
Prof. A. Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture - 38
Multimedia Services

Hello viewers, welcome to today's lecture on multimedia services. The advancement of technology has led to the deployment of high speed networks, and tremendous research in the field of compression has led to the development of powerful compression techniques. As a result the bandwidth required after compression is small and networks provide high speed broadband services. These two together have made many new applications possible.

In today's lecture I shall try to give an overview of the popular applications or services
available today for multimedia applications. Here is the outline of today's lecture. First I
shall give a brief introduction and then mention about the popular multimedia services.
They can be categorized into three basic types.

(Refer Slide Time: 01:46)

The first is streaming stored audio and video, of which video on demand is an application; the second is streaming live audio and video, such as direct to home (DTH); and the third is interactive real-time audio/video, for which I shall give examples from teleconferencing and voice over IP.
(Refer Slide Time: 02:25)

On completion of this lecture the students will be able to state the various multimedia applications possible today; they will be able to explain how streaming stored audio/video services (let me emphasize the term stored) such as VOD are provided; they will be able to explain how streaming live audio/video services, such as TV broadcasting through the internet, are possible; and they will be able to explain how interactive real-time audio/video services such as videoconferencing and voice over IP are provided.

So here is the basic introduction. As I mentioned the deployment of high speed networks
and reduction in bandwidth requirement because of compression has led to the emergence
of diverse and many applications and this application can be broadly categorized into
three types as I have mentioned streaming stored audio/video, streaming live audio/video
and interactive audio/video and I shall give some examples of these application such as
video on demand, broadcasting of radio/TV programmes through internet, interactive
audio/video such as tele/video conferencing and voice over IP.
(Refer Slide Time: 03:13)

First let us focus on streaming stored audio/video. The simplest approach of giving this
service is to download the compressed audio/video files just like a text file by client and
then the client plays the file.

(Refer Slide Time: 04:05)

Here the schematic diagram is shown. In the first step the audio/video file is requested by the client through the browser from the web server; in response the web server sends the file, and after the file has been received and stored it can be played in the third step with the help of a media player. So it involves three basic steps: getting the file from the server by sending a request like GET, getting the response from the web server, and then playing it. The limitation here is that the file is bulky even after compression; as a result the file will be very large. For example, the size of a one-hour video file may be around 600 megabytes, and even for VCR-quality one-hour video compressed using MPEG-1 it comes to about 300 megabytes.

Obviously, downloading such a file will take quite some time, perhaps minutes or even hours, depending on the size of the file and the bandwidth of the link through which the downloading is done. It will also occupy a large storage space in the client system before it can be played. These are the limitations of this approach. Note that here we are not really streaming at all: although I have described it under streaming stored audio/video, the whole file is transferred first and then played, so it is not a streaming service as such.
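To get a feel for why the download-and-play approach is slow, here is a quick back-of-the-envelope calculation in Python; the file size and link rates used are illustrative only, not figures from any particular network.

```python
# Hedged illustration: time to download a large compressed video file before
# it can be played. The file size and link rates below are only examples.

def download_time_seconds(file_size_megabytes, link_rate_mbps):
    bits = file_size_megabytes * 8 * 1e6          # MB -> bits (decimal megabytes)
    return bits / (link_rate_mbps * 1e6)          # seconds

for rate in (0.256, 1.5, 8.0):                    # slow link, ADSL, fast ADSL (Mbps)
    t = download_time_seconds(300, rate)          # a ~300 MB compressed file
    print(f"{rate:5.3f} Mbps -> {t / 60:6.1f} minutes")
```

Even at 1.5 Mbps the transfer takes well over twenty minutes, which is exactly the "minutes or hours" waiting time mentioned above.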

Let us now consider the second approach. In second approach the media player is directly
connected to the web server for downloading audio/video file.

(Refer Slide Time: 06:28)

Here what is being done is that the browser sends a request, not for the actual file but for a metafile. The metafile contains information about the actual file, and the web server sends it in response to the browser. The browser passes the metafile to the media player, which then uses it to interact directly with the web server: the media player sends a request such as GET audio/video file to the web server, and in response the web server streams the audio/video file, which can be played as it arrives.

In this case the storage requirement is very small. Here we are essentially using two different types of file: the metafile, and the actual file, which is played in a streamed manner. The web server streams the file and sends it to the media player as it plays. The advantage is that the entire file need not be transferred before you can play it. In the previous approach we have seen that it may take minutes or hours before you can play the file; even if you have a very large storage space you still have to wait. Here that is not so: you can start playing after a very short time, and only a small portion is transferred from the web server to the media player at a time; the media player keeps on playing as it receives the rest of the file from the web server. So this is the advantage of this approach, and the client system need not have a very large storage space.

However, the limitation here is that both the browser and the media player use HTTP for downloading the metafile as well as the audio/video file, so all the steps require the use of HTTP, which runs over TCP. As we know, TCP is not really a very good protocol for the purpose of streaming. Why is TCP not very suitable? The reason is that TCP performs flow control and error control, and as a result if there is some error a retransmission will be required; retransmission is neither supported nor required for streaming, because of the high redundancy present in the files. So this is not really a very good approach; that is why a third approach is used, in which a separate media server is used for downloading the audio/video file.

(Refer Slide Time: 09:46)

So in the first step the client, through the browser, sends a GET request for the metafile; the web server sends the response, and the metafile information is passed by the browser to the media player. Then, in steps four and five, the media player communicates directly with the media server to perform the actual transfer. In such a case it avoids the use of TCP, which is unsuitable for downloading audio/video files, and instead uses UDP. UDP does not perform flow control, error control and so on, which are not really required here; however UDP alone cannot do the whole job, so it is used with other supporting protocols to perform the streaming. Hence this is a better approach than the previous two, and it is the one commonly used for audio/video streaming.
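The following Python pseudo-flow is a hedged sketch of this division of labour; the URL, the metafile contents, the port number and the request text are all hypothetical, and real media players add RTP and codec handling on top of what is shown here.

```python
# Hedged sketch of the third approach: the browser fetches only a small
# metafile over HTTP (TCP); the media player then contacts a separate media
# server and receives the stream over UDP (normally with RTP on top).

import socket
import urllib.request

def fetch_metafile(url):
    """Steps 1-3: the browser downloads the metafile and hands it to the player."""
    with urllib.request.urlopen(url) as response:
        return response.read().decode()       # e.g. a line pointing at the media server

def decode_and_display(packet):
    pass                                      # placeholder for the codec / renderer

def play_stream(media_host, media_port, chunks=100):
    """Steps 4-5: the media player pulls the stream from the media server over UDP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"GET lecture01", (media_host, media_port))   # illustrative request
    for _ in range(chunks):
        packet, _ = sock.recvfrom(2048)       # no flow/error control: lost or late
        decode_and_display(packet)            # packets are simply skipped

if __name__ == "__main__":
    meta = fetch_metafile("http://www.example.org/lecture01.meta")   # hypothetical URL
    print("media location described by the metafile:", meta)
```

The key design point is that only the small metafile travels over HTTP/TCP; the bulky media itself bypasses the web server and TCP entirely.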

Now let us consider an application of streaming stored audio/video. The most popular application is video on demand. We are all familiar with the stores that give out CDs of movies on a rental basis.

(Refer Slide Time: 11:15)

We take the CDs home, play them, and whenever we want we can pause, stop, or replay a part of the movie. We want similar facilities through the internet. Instead of rental stores we use audio/video servers to do the job, and the main requirement is a high speed network such as SONET or ATM, with links that are usually fiber optic. We have already discussed SONET and ATM.

The servers are connected through the high speed network to some switches, and through these switches there is a local distribution network; one can also have some kind of local spooling server. Sometimes the files are transferred to the local spooling server, and from there they can be broadcast or multicast to a number of users. Different types of local distribution systems can be used.

For example, the local distribution network can be a LAN in some cases, or some other type of network. So this is the requirement for a video on demand service: you require a high speed network over which the content is transferred to the switch, and then distribution is performed. Let me take up, as an example of video on demand, what is being done on the IIT Kharagpur campus, where video on demand is used for educational purposes.
(Refer Slide Time: 13:41)

We don't use it for commercial purposes but for educational purposes. Instead of movies we have a number of video courses; there are about a hundred video courses available as video on demand, and from anywhere on the campus the students, faculty members, indeed anybody, can access these courses. Each course has 30 to 40 lectures, each of one hour duration. That is what is provided within the IIT Kharagpur campus. For this purpose some infrastructural facilities are required; let us look at the infrastructure deployed in IIT Kharagpur.

First of all you require high speed LAN and for residences ADSL network. We require
media servers these are the hardware required, of course you will require some software
as well like operating system. There are several alternatives but here windows 2000/.NET
server is being used then you require encoding software, here also there are several
alternatives. Windows media encoder is being used here at the server end and at the user
end windows media player is being used.

These are being used because these are primarily free and they provide good quality
audio/video above 128 Kbps. Let us now focus on the distribution network in IIT
Kharagpur campus.
(Refer Slide Time: 15:49)

There is a Gigabit Ethernet based backbone network. The main switch, a layer three Gigabit Ethernet switch, acts as the backbone. This switch is installed in the Computer and Informatics Centre, and from there optical fiber links go to the various departments and all the hostels, where there are 100 Mbps Fast Ethernet switches; from these it goes to the different users or desktops.

However, for the residential area the requirement is different. The LAN facility is not extended to the residential area; instead, broadband services are provided there through DSL based broadband access, and the technology used is the DSLAM, the access equipment that really allows DSL to happen.

There is a box shown here, the DSLAM, which is essentially a multiplexer. The DSLAM takes connections from many customers: it goes to twelve residences, each having an ADSL modem. The signals from the twelve customers are aggregated and connected to a 100 Mbps Fast Ethernet LAN. On the other side, to provide the voice service, it is connected to the PABX exchange, where about 2000 PABX lines are available; with the help of this, both voice and internet service are provided.

The campus LAN is connected to the Gigabit Ethernet switch, which acts as the internet service provider for the residences. The connection then comes into the residence to the ADSL modem; one port of the ADSL modem goes to the desktop and another port, through a low pass filter, goes to the telephone. The telephone conversation, which is restricted to roughly 4 KHz, therefore does not disturb the data communication for internet access, so both can take place simultaneously: internet access as well as telephonic conversation. With the help of this, broadband service is available to the residences as well as to the institutional area, that is the hostels, departments and so on, and the video on demand service is available everywhere.

Now let us focus on the media servers that are necessary. Media servers have to be somewhat powerful in terms of main memory and hard disk storage. The media servers used here typically have Pentium 4 processors, each with a main memory of 1 GB, and hard disk storage organized as RAID-5, which means five hard disks, each of 147 Gigabyte capacity.
(Refer Slide Time: 19:15)

Although there are several alternatives like one can use tape, optical disk drive and
magnetic disk controller most of the course material is stored in hard disk in IIT
Kharagpur and others are not used but they can be used if necessary. Let us talk about
some important issues related to this video on demand service.
(Refer Slide Time: 20:05)

It has been observed that not all thousand movies are equally popular. Similarly if you
have hundred courses each of them may not be equally popular, some are very popular,
some are less popular, some are not at all used. So how do you provide better service for
the courses which are more widely used? This can be done in two ways. One possible
solution is to use memory hierarchy. As you know in computer the memory can be
hierarchically organized.

At the top layer you have got the random access memory which is the costliest. So cost is
higher for RAM, cost is lesser for this hard disk RAID then it is still lesser for optical
disk and still lesser for tape archive. On the other hand, for capacity it goes in the reverse
direction, RAM has lesser capacity than hard disk, optical disk has still higher capacity
than the hard disks and tape archive can have still higher capacity so what can be done is
some of the courses which are very popular can be stored in hard disk, lesser useful
courses can be kept in optical disks and courses which are very rarely used can be stored
in tapes. In this way one can hierarchically distribute different courses.
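As a toy illustration of this hierarchical placement (the tier names are from the hierarchy just described, but the request-rate thresholds and the example courses are arbitrary assumptions, not the policy actually used at IIT Kharagpur), one could decide a course's storage tier from its popularity as follows:

```python
# Hedged sketch: place courses on different storage tiers according to
# popularity. Thresholds and example figures are purely illustrative.

def choose_tier(requests_per_day):
    if requests_per_day >= 100:
        return "RAM cache"
    if requests_per_day >= 10:
        return "hard disk (RAID-5)"
    if requests_per_day >= 1:
        return "optical disk"
    return "tape archive"

courses = {"Data Communication": 120, "Queueing Theory": 15, "History of Computing": 0.2}
for name, popularity in courses.items():
    print(f"{name:25s} -> {choose_tier(popularity)}")
```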

However, with time the cost of hard disk drives has come down significantly; nowadays the hard disk drive cost is not very high, so this alternative is not used in IIT Kharagpur. Instead, rather than having one media server, three media servers are used, each of the same configuration and with the same IP address, so that it is transparent to the user.

Out of these three servers, one is loaded with a smaller number of courses, the most popular ones. The second server holds a few more courses, which are accessed regularly but not very frequently, and the third server holds a large number of courses which are not regularly used. In this way all the courses are kept on hard disk using RAID-5, and the use of RAID-5 also provides higher throughput, which is necessary for streaming purposes.

As we can see here, real-time output streams have to meet timing requirements. For example, the kind of encoding that is done here requires streaming at the rate of 766 Kbps to get a flicker-free display. Whenever you read data from disks it is read in terms of sectors, in discrete form; on the other hand, whenever it is displayed it has to be delivered in a streamed, continuous manner, so you have to use some kind of buffering. Data is read sector by sector: one sector is read and sent, then during the next interval another sector is read and sent, and so on. After buffering for some duration the playback starts, and from then on the play proceeds continuously. So, in spite of the fact that the data is received in discrete form, in terms of packets, the playout takes place continuously to give a flicker-free display. These are the important issues to be remembered in the context of the VOD (video on demand) service.
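A minimal sketch of this read-ahead buffering is given below; it uses the 766 Kbps playback figure mentioned above, but the sector size and start-up threshold are assumptions chosen purely for illustration.

```python
# Hedged sketch of buffered playback: data arrives from the disk in discrete
# sectors, but the display must consume it at a steady rate (flicker-free).
# Sector size and start-up buffer are illustrative assumptions.

from collections import deque

PLAYBACK_RATE = 766_000 / 8          # bytes consumed per second by the player
SECTOR_SIZE = 64_000                 # bytes delivered per disk read (assumed)
STARTUP_BUFFER = 4 * SECTOR_SIZE     # buffer this much before playback starts

buffer = deque()
buffered_bytes = 0
playing = False

def on_sector_read(sector_bytes):
    """Called each time the disk delivers a sector (discrete arrivals)."""
    global buffered_bytes, playing
    buffer.append(sector_bytes)
    buffered_bytes += len(sector_bytes)
    if not playing and buffered_bytes >= STARTUP_BUFFER:
        playing = True               # start the continuous playback only now

def on_playback_tick(dt):
    """Called periodically by the player; consumes data at a constant rate."""
    global buffered_bytes
    if not playing:
        return
    need = int(PLAYBACK_RATE * dt)
    while need > 0 and buffer:
        chunk = buffer.popleft()
        take = min(need, len(chunk))
        if take < len(chunk):
            buffer.appendleft(chunk[take:])   # put back the unconsumed part
        buffered_bytes -= take
        need -= take
    # if the buffer runs dry here, the display would stall (visible flicker)
```

The point of the sketch is simply that arrivals are bursty (sector by sector) while consumption is smooth, and the buffer bridges the two.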

I have discussed in a nutshell the requirements for this video on demand service: you require a broadband network, media servers, the operating system and encoder at the server end, and Windows Media Player at the receiving end. This is how video on demand is provided for educational purposes in IIT Kharagpur. Now let us come to the next application, that is streaming live audio/video.

(Refer Slide Time: 25:34)

We are very familiar with radio and TV stations which continuously transmit audio and video signals; with the help of a radio receiver or a TV set we can tune to one of the stations and listen to songs or news, or watch television. What we want here is to do the same through the internet. So, to provide a similar kind of audio and video service through the internet, we have to understand the requirements and see how they can be supported.

First of all, just like the conventional audio/video service from radio and TV stations, the service through the internet is sensitive to delay, and retransmission cannot be used. As I have mentioned, because of the large amount of redundancy, if one frame is discarded it does not matter much; but if that frame comes back after a retransmission we cannot play it any more, so retransmission has to be ruled out.

Also, we have to understand that in this case the communication is multicast and live; it is not unicast as in video on demand. Video on demand is essentially unicast from the media server to a particular user; of course the media server simultaneously serves a number of users at a time, but each stream is unicast in nature. In the case of streaming live audio/video, however, the content has to be multicast, and it is live, not stored.

One example of streaming live audio and video is through satellite network. As you know
the program material from studios are sent to uplink earth stations and through satellite
this goes to different networks.

(Refer Slide Time: 27:51)

You have the VSAT antennas where the signal is received, and from there you can have a cable TV network, or a transmitter, an LPT (low power transmitter) or HPT (high power transmitter), which broadcasts it; alternatively it can be distributed over the cable TV network.

As I mentioned earlier, the cable TV network can also be used for distribution of video on demand, but the use of cable TV for video on demand is not popular, particularly in India. So this is one kind of service: the signal is distributed through the network and then broadcast using an LPT or HPT, or distributed over the cable TV network.

Another new service has been introduced which is gradually becoming popular, that is the direct to home (DTH) service. Here there is no need for a cable TV provider, and no low power or high power transmitter is involved. Through the satellite link each home has a small antenna which receives the signal from the satellite, and in addition a set top box is required, with the help of which the signal can be played. The set top box is essentially a computer: it has a CPU, ROM and RAM, input/output ports going to the TV and the remote control, and in addition an MPEG decoder.

(Refer Slide Time: 29:51)

So the compressed signal received through the network, in this case a satellite network although it can be other types of network as well, is decoded by the MPEG decoder and then goes through the I/O to the TV; you can select different channels and configure the box in different ways with the remote control. So in a residence you require a small antenna and a set top box, with the help of which you can have this direct to home service, which provides streaming live audio/video.

Now let us consider the third service, that is real-time interactive audio/video. Here we first have to understand the requirements for real-time interactive audio/video; then we shall discuss various applications and see how these services are provided. First of all, let us consider how a client and a server communicate audio/video in a real-time and interactive manner.
(Refer Slide Time: 31:10)

Let us assume there are four files, each holding twenty seconds of audio/video information, and that they are transmitted through the internet, where we assume the internet takes about one second to carry a packet from one end to the other. So one second is the propagation time, and twenty seconds is the transmission time of each audio/video file. The server sends the audio/video file to the client, where it is played. Let us assume the server starts sending the first file at time 00:00:10, where the three fields are the hour, the minute and the second (Refer Slide Time: 32:40).

As you already know, this time information is provided by the encoder: the MPEG encoder provides a time stamp in each of these packets or files. With the help of this time stamp the starting time is known to the receiver. The file reaches one second later, and immediately after receiving it the client starts playing; it plays for 20 seconds, and after 20 seconds it starts receiving the second packet, or second message we can say, which also carries twenty seconds of information and arrives between 00:00:31 and 00:00:51; it takes twenty seconds to receive and is displayed as it arrives. That means the playback starts one second late and then the content is played continuously in real time. In this way the four files are transferred in real time from the server to the client and played there continuously.

Here we have assumed that the delay is constant at one second. This one-second network delay does not really matter, because the playback simply starts one second late and then the content is played continuously; the received and played times are as shown here. Unfortunately, however, the network does not have a constant delay, and as we know the video files have a variable bit rate, because compression reduces different frames by different amounts. This leads to variable bit rate traffic, and as a consequence the messages will reach the client end with different delays, that is, with some variation in delay, as shown in the next slide.

(Refer Slide Time: 35:16)

Here let us assume the first file has a delay of three seconds (Refer Slide Time: 35:30), the second file reaches after a delay of about six seconds, the third after a delay of about ten seconds, and the fourth after a delay of about eight seconds. What can you do in such a case? Even though the receiver knows the intended starting time of each packet from its time stamp, since the packets are received with different delays they cannot simply be played one after the other; this variation in delay is known as jitter. How can you overcome this jitter? Apart from the use of the time stamps, the requirement is to use a buffer: before playing, the incoming data is buffered for a certain duration.

It is assumed that the maximum delay that can occur is eleven seconds. So what is done at the receiving end is that the first file is played starting at 00:00:21: the sender sends it at 00:00:10, but the play starts at 00:00:21, eleven seconds later.

To do that it is necessary to buffer about eight seconds of information: the first file arrives with a delay of three seconds, and since playing starts only after eleven seconds, eight seconds worth of information has to be buffered before the play starts. And as it plays, the next message comes in, and that too has to be stored.

Usually some kind of double buffering is used: the first file is stored in one buffer, the second file in another, the third in yet another, so that while the first file is being played from the first buffer the next file, arriving with its own variable delay, can fill the second buffer. So you can see that in spite of the variable delays of three, six, ten and eight seconds, the content can be played without any jitter by buffering it and starting the play after eleven seconds; at different instants of time you can see how much has to be buffered (Refer Slide Time: 38:49).

(Refer Slide Time: 38:50)

In the first case eight seconds of information has to be buffered, in the second case five seconds, in the third case one second, and so on. This shows that in addition to time stamping there is a need for buffering. However, these two are not enough; some kind of sequence number is also required. Consider what can happen: suppose you have received a message with time stamp 00:00:13, and the next one you receive has time stamp 00:01:00, because the packet in between has been lost on the way (Refer Slide Time: 39:40), damaged in such a way that the receiver has not been able to recognize it. What will happen in such a case? The receiver does not know that a particular message or packet has been lost.

Therefore, to overcome that situation it is necessary to use, in addition to these two, a sequence number. Each of the messages is given a sequence number: the first is given sequence number one, the next sequence number two, then three, then four.

So, if the message with sequence number two is not received, that is, if one is received and then three is received but two is not, the receiver will know that message two has not been received. This is how, by using time stamps, buffering and sequence numbers, the stream can be played continuously, one message after the other, without any jitter. And even if a frame arrives out of order, say this frame (Refer Slide Time: 41:04) reaches later than the one after it, that too is taken care of by the sequence number. These are the facilities required for transmission of real-time interactive audio/video, and a suitable protocol is to be used to support them.
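A compact sketch combining the three mechanisms just described (time stamps, a playout buffer and sequence numbers) is given below. The eleven-second playout delay and the message timings mirror the example above; the packet structure itself is hypothetical, not any particular protocol.

```python
# Hedged sketch: jitter removal using time stamps + a fixed playout delay,
# plus loss detection using sequence numbers. Values are from the example
# above; the packet format is illustrative only.

PLAYOUT_DELAY = 11          # seconds; chosen to exceed the worst-case network delay

received = {}               # seq -> (timestamp, payload)
expected_seq = 1

def on_packet(seq, timestamp, payload):
    """Store each arriving packet; detect gaps via the sequence number."""
    global expected_seq
    if seq > expected_seq:
        print(f"  packets {expected_seq}..{seq - 1} missing or late")
    expected_seq = max(expected_seq, seq + 1)
    received[seq] = (timestamp, payload)

def playout_time(timestamp):
    """Play each packet a fixed delay after its original time stamp."""
    return timestamp + PLAYOUT_DELAY

# Four 20-second messages stamped 10, 30, 50 and 70 s, arriving with variable
# delays of 3, 6, 10 and 8 seconds respectively (as in the example above).
for seq, ts, delay in [(1, 10, 3), (2, 30, 6), (3, 50, 10), (4, 70, 8)]:
    on_packet(seq, ts, b"...")
    print(f"packet {seq}: arrives at {ts + delay:3d}s, plays at {playout_time(ts):3d}s")
```

Running this shows that packet 1, sent at 10 s and arriving at 13 s, is played only at 21 s, exactly as in the buffering example, while any gap in the sequence numbers is reported immediately.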

(Refer Slide Time: 41:21)

One application of this real time interactive audio/video is video conferencing. Using a
network a camera and a head set people can interact as if they are talking face to face in a
room. Nowadays if you want to hold a meeting you have to go somewhere to attend the
meeting and possibly people from different parts of the country will come to attend the
meeting. But nowadays busy executives don’t have time to travel so in such a case one
can do videoconferencing, it can be done for holding meetings, it can be done for holding
interviews etc.

For example, a person sitting in Bangalore can take an interview of a person sitting in
Calcutta or Kharagpur talking to each other and they can see each other as if they are
talking face to face. These are the general applications; conducting interviews, holding
meetings, setting up meetings or giving lectures, this has become popular in many
universities.

The professor sitting in his room can broadcast a lecture and it can be viewed in different
lecture rooms, may be not only in one campus but in several campuses or in different
cities and it can be done in an interactive manner. This has become a reality nowadays.

Apart from this general application another very important application is in telemedicine.
A person in a rural place can take the help or guidance of an expert doctor in a city with
the help of this internet by using teleconferencing. So if teleconferencing is deployed in
some places from there an expert doctor can give advice to a general practitioner for a
particular disease or sometimes an expert surgeon can guide another surgeon from a
remote place with the help of this teleconferencing. These are the various important
applications emerging based on this teleconferencing.

Two types of video conferencing can be done. One is point to point conferencing which
is basically a communication link between any two locations; another is multipoint
conferencing which is the link between a variety of locations. So you will require not
only point-to-point which requires unicast communication but also multicast where
multiple users can talk to each other. Similarly video conferencing can be done
simultaneously at different places.

Nowadays, for example, when we hold conferences we also use this service. Suppose there are two programme committees, one in India and another in the USA; the programme committee members in the USA and those in India can interact using videoconferencing and hold meetings to decide which papers are to be selected for a particular conference, and so on. This has become very common and widely used. As I mentioned, to support real-time audio/video services such as video conferencing it is necessary to have multicasting.

(Refer Slide Time: 45:15)

Sometimes you will require translation, because all users may not be equally capable; in some cases the bandwidth of the signal has to be reduced so that some users can take part in the videoconference. In other situations mixing has to be done: signals from a number of places are mixed, aggregated and sent to another place. These facilities are necessary for video conferencing.

And, as I mentioned, TCP is not suitable for interactive traffic, so we have to use the multicast services of IP and the services of UDP along with RTP, which is another transport layer protocol. UDP along with RTP is commonly used to support videoconferencing.
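To make the role of RTP on top of UDP concrete, here is a hedged sketch that packs a simplified RTP-like header (version, payload type, sequence number, time stamp, synchronization source) in front of each media chunk before sending it over UDP. The layout follows the general idea of RTP but is simplified for illustration and should not be taken as a byte-exact implementation of the standard; the destination address and identifiers are made up.

```python
# Hedged sketch of an RTP-like packet: a small header carrying a sequence
# number and a time stamp is placed in front of the media payload and the
# result is sent over UDP (no flow or error control).

import socket
import struct

def make_packet(seq, timestamp, ssrc, payload, payload_type=96):
    header = struct.pack("!BBHII",
                         0x80,                  # version 2, no padding/extension
                         payload_type & 0x7F,   # marker bit cleared
                         seq & 0xFFFF,          # sequence number for loss detection
                         timestamp & 0xFFFFFFFF,# time stamp for playout scheduling
                         ssrc)                  # identifies the media source
    return header + payload

def send_media(chunks, dest=("192.0.2.10", 5004)):     # hypothetical receiver
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    timestamp = 0
    for seq, chunk in enumerate(chunks):
        sock.sendto(make_packet(seq, timestamp, ssrc=0x1234, payload=chunk), dest)
        timestamp += len(chunk)        # advances in media time units (assumed)
```

The receiver uses exactly the sequence number and time stamp carried here to do the buffering and loss detection discussed earlier, while control and quality reports travel separately over RTCP.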
(Refer Slide Time: 46:28)

The last application which we shall discuss is voice over IP, which is essentially internet telephony. You are very familiar with telephonic conversation through the Public Switched Telephone Network (PSTN), which is essentially a circuit switched network. A similar kind of service is now being provided through the internet, known as internet telephony, and this has been made possible by the increased deployment of high speed internet connectivity; as a result a growing number of individuals are using the internet for voice telephony.

I shall discuss two protocols which have been developed to support voice over IP: one is known as SIP, the Session Initiation Protocol, and the other is H.323. First let us consider SIP. SIP can send different types of messages, such as INVITE, ACK (acknowledgement), BYE, OPTIONS, CANCEL and REGISTER. Each message has a header and a body.
(Refer Slide Time: 47:36)

For example, to set up a connection, here is terminal one and here is terminal two (Refer Slide Time: 48:16); they want to talk to each other. The first terminal sends an INVITE message with its address and various options such as the bandwidth requirement; the other terminal responds with an OK message and also provides its address; then the first terminal sends the ACK message, and then they can exchange audio. After the audio conversation is over, the initiator sends a BYE message to terminate the communication. So INVITE and ACK are used to set up a connection, BYE is used to terminate a connection, OPTIONS is used to negotiate various options, and CANCEL is used to cancel a request that is in progress. We shall discuss REGISTER a little later.

There are different address formats possible with the SIP protocol. For example, one can use an IPv4 address, an e-mail style address, or a phone number, provided it is written in the SIP format. When an IP address is used the address takes the form sip: followed by the user name, then an @ and the IP address, for example sip:apal@144.16.192.110; alternatively the same format can carry an e-mail style address or a phone number. So addresses can be specified in a number of ways.
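For illustration, a minimal hypothetical INVITE is built and printed below; the addresses, identifiers and SDP body are made up, and a real INVITE carries several more header fields (Via, CSeq, Contact, Content-Length and so on). The point is only that each SIP message consists of a request line, header fields and an optional body describing the media session.

```python
# Hedged illustration of a minimal SIP INVITE message (made-up addresses).

invite = "\r\n".join([
    "INVITE sip:user@example.org SIP/2.0",        # request line: method + callee address
    "From: sip:apal@144.16.192.110",              # caller, in the SIP address format
    "To: sip:user@example.org",                   # callee
    "Call-ID: 1234@144.16.192.110",               # identifies this particular session
    "Content-Type: application/sdp",
    "",                                           # blank line separates header and body
    "v=0",                                        # SDP body describing the media session
    "m=audio 49170 RTP/AVP 0",                    # audio over RTP, payload type 0 (PCM u-law)
    "c=IN IP4 192.0.2.20",                        # where the caller wants to receive media
])
print(invite)
```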
(Refer Slide Time: 50:23)

I was talking about the use of REGISTER. In particular, whenever DHCP is used, a particular user, for example a callee, may not have a permanent IP address. In such a case, to track the callee, the registration mechanism is used. The caller sends the INVITE message to a proxy server using the e-mail style address, since it does not know the IP address; the proxy server, with the help of a lookup service, sends a query to the registrar, which is responsible for holding the information about each of the users, and the registrar replies with the IP address. The proxy server then sends the INVITE message to the callee at that IP address, and when the callee answers with OK that is communicated back to the caller; the proxy server and caller again exchange messages such as OK and ACK, and finally the ACK is sent by the proxy server to the callee. Then the exchange of information can take place, and when the exchange of information is over a BYE message can be sent by the callee.
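A toy sketch of this registrar lookup is given below; the identifiers and IP addresses are hypothetical, and a real registrar also handles expiry timers, authentication and multiple registrations per user.

```python
# Hedged sketch of the SIP registrar/proxy lookup (all data made up).

registrar = {}                             # identifier -> current IP address

def register(user, current_ip):
    """Sent by a user whenever it obtains a (possibly new) DHCP address."""
    registrar[user] = current_ip

def proxy_invite(callee):
    """The proxy resolves the callee's identifier before forwarding the INVITE."""
    ip = registrar.get(callee)
    if ip is None:
        return "404 Not Found"
    return f"forwarding INVITE to sip:{callee.split('@')[0]}@{ip}"

register("ram@example.org", "10.0.5.17")   # the callee registers its current address
print(proxy_invite("ram@example.org"))
```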

(Refer Slide Time: 52:01)


Another protocol available is H.323. This is used to allow telephones on the PSTN to talk to computers. In the previous case the communication was between two computers; here an ordinary telephone has to communicate with a person on a computer. The Public Switched Telephone Network (PSTN) is linked to the internet through a gateway, and a particular server known as a gatekeeper is required. To support this you require a number of protocols: compression codecs, RTP, RTCP and H.225, and, for control and signalling, Q.931 and H.245. This is how it is done: a terminal sends a message to the gatekeeper, which responds with the IP address; H.225 messages are used to negotiate bandwidth between the caller and the gatekeeper; Q.931 is used to set up the connection; H.245 is used to negotiate the compression method to be used; RTCP is used for management; and finally Q.931 is used again to terminate the connection. Thus you require a number of protocols, all of which are part of H.323.

(Refer Slide Time: 52:54)


Now let us see what Skype is. Skype is one of the most popular voice over IP software services in use nowadays.

(Refer Slide Time: 53:54)

Skype is a peer-to-peer VoIP client introduced in 2003, developed by the creators of KaZaA. It has become very popular, possibly having the largest user base. With its help two people can speak with each other using handsets and microphones connected directly to their computers. It is free between any two computers.

However, whenever one wants to talk between a computer and a PSTN line, one of course has to pay. The Skype client can be installed very easily, within a few minutes, and used straight away. Skype uses astonishingly good voice compression, providing very good quality audio, and in addition to voice it also supports instant messaging, search and file transfer. Moreover, it uses encryption, so communication through Skype is quite secure.

Now it is time to give you the review questions.

(Refer Slide Time: 55:05)

1) Why is compressed video more sensitive to errors than uncompressed video?
2) Why is a media server used for streamed audio/video services?
3) Distinguish between streaming of stored audio/video and streaming of live audio/video.
4) Why is TCP unsuitable for interactive traffic?
5) What are the key features of the SIP protocol?

Answers to the questions of Lecture 37.

(Refer Slide Time: 55:34)


1) Why do you need data compression?

As I have mentioned, data compression provides the following benefits: reduced storage space, reduced bandwidth requirement on the network, reduced communication cost, and the emergence of new applications, as I have discussed today.

(Refer Slide Time: 55:58)

2) Distinguish between frequency masking and temporal masking?

In frequency masking, a loud sound in one frequency range partially or fully masks another sound in a nearby frequency range. In temporal masking, on the other hand, a loud sound can numb our ears for a short duration even after the sound has stopped. These two effects are exploited for audio compression in MP3.

(Refer Slide Time: 56:30)

3) Compare the importance of luminance compared to chrominance in the context of


compression.

As our eyes are much more sensitive to the luminance signal than to the chrominance signals, the latter need not be transmitted as accurately, so different precision is used for luminance and chrominance, as you have seen. Better resolution is used for luminance than for chrominance because our eyes are more sensitive to luminance, and this facilitates better compression.

(Refer Slide Time: 57:08)


4) Distinguish between spatial and temporal redundancy?

Spatial redundancy, which exists within each frame, is exploited for compression using JPEG. Temporal redundancy, on the other hand, exists across a set of frames and is exploited by taking advantage of the fact that consecutive frames are often almost identical; this is what MPEG uses.

(Refer Slide Time: 57:37)

5) What is motion compensation?

Differences between consecutive frames of a movie arise as a result of motion in the scene, movement of the camera, or both, and these differences are usually very small. This feature can be exploited to obtain compression by a technique known as motion compensation, as is done in MPEG.

With this we have come to the end of today's lecture and also we complete our discussion
on multimedia communication.

Thank you.
Data Communication
Prof. A. Pal
Department of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture - 39
Secured Communication - I

Hello and welcome to today's lecture on secured communication. Nowadays the computer network, or internet, is used for many sensitive applications such as shopping, banking, reservation of railway and airline tickets and, in general, e-commerce. As a consequence, communicating data securely over the network is looming large on the horizon as a massive problem.

In this lecture I shall try to discuss how secure communication can be achieved.
Obviously the subject is very vast. Usually a full course can be covered on secured
communication. But I shall try to give an overview of various aspects of secured
communication in two lectures. Here is the outline of today's talk. I shall give a brief
introduction where I shall explain the need for secured communication.

(Refer Slide Time: 01:57)

What is really meant by secured communication?

As we shall see cryptography will play a very important role in secured communication.
In fact cryptography can be considered as a panacea to this problem. We shall see there
are several cryptographic approaches. Basically it can be divided into two broad
categories. One is known as symmetric-key cryptography where there are several
variations like traditional ciphers using monoalphabetic substitution, polyalphabetic
substitution and transpositional cipher. Then there are block ciphers where different
transformations are used. We shall explain the different transformations and explain one
important cryptographic technique known as Data Encryption Standard DES which is
widely used. Then we shall discuss about another cryptography technique known as
public-key cryptography. We shall also see one application of public-key cryptography
that is the RSA algorithm.

(Refer Slide Time: 03:20)

On completion of this lecture the students will be able to state the need for secured communication, explain the requirements for secured communication, and explain various cryptographic algorithms such as private-key and public-key cryptography.

Now let us focus on why you need secured communication for applications like
ecommerce. The basic objective is to communicate securely over an insecure medium.
(Refer Slide Time: 03:41)

Essentially what we are trying to do is to communicate privately over a public network. Whenever we use a public network we encounter what is known as an insecure medium. What do we really mean by an insecure medium?

As the message travels through the links or communication media, an eavesdropper can listen in and try to tamper with it; there are people who may try to get hold of the message, make use of it for their own benefit, or modify it.

Moreover, as we have seen, in a public communication network the information has to be stored in switches and routers before being forwarded to the next node, and these switches can be programmed to listen to and modify data in transit. These are the two problems we encounter while communicating through a public medium. So we have to find a solution, and the solution is to develop techniques that allow us to communicate securely over an insecure medium. This involves four different aspects or components.
(Refer Slide Time: 05:17)

First one is privacy. A sender should be able to send a message to the receiver privately.
What do you really mean by privately? By that we mean obviously this message will be
exposed to other people but they will not be able to understand it. So it should be
intelligible un-intelligible to others but it should be intelligible only to the person to
which it is being sent.

Second is authentication. After the message is received by the receiver he should be sure
that the message has been sent by nobody else but the sender. So by authentication we
mean that the message is coming from the intended sender and not from somebody else whom
we do not know. This is known as authentication. So the security service
has to provide this privacy and authentication.

The third parameter or third component is integrity. The receiver should be sure that the
message has not been tampered with in transit. So by looking at the received message the
receiver should be able to be sure that the message has not been modified or tampered with in transit.

Finally the fourth important component is non-repudiation. The receiver should be able to
prove that the message was indeed received from the actual sender. That means,
suppose a subscriber tells a bank over phone or over an electronic medium to pay some money
to a particular customer, and after some time he comes to the bank and says he
did not send it; then it is the responsibility of the bank to prove that the person actually
gave the instruction to pay. That is the function of non-repudiation. These are the
services to be provided by the secured communication network. And as we shall see, a
possible solution is to use cryptography.
To achieve all these services we have to use cryptography. So in this lecture I shall try to
give an overview of the various techniques used in cryptography.

First let us have a look at the cryptography model. To tell the story of cryptography we
require three characters: one is the sender, another is the receiver, and in between there
may be an intruder.

(Refer Slide Time: 8:45)

Usually we give them three names. If you look at some textbooks you will find Alice, Bob and
Eve; these names are conventionally taken. But I shall borrow the characters from the
Ramayana. We shall consider the sender to be Sita, the receiver to be Ram, and the
intruder we shall consider to be Ravana. So we shall use the three characters Sita, Ram
and Ravana to tell the story of cryptography.

Now, the message sent by the sender Sita is known as plain text, and it is encrypted; for the
encryption purpose a key is used which is secret, not known to Ravana, the intruder or
impostor in between. With the help of that key encryption is performed, and for the
encryption we have different algorithms. Actually the encryption can be done either by
hardware or by software or a combination of both. And after doing the encryption,
suppose this is the message (Refer Slide Time: 10:30) and encryption is done by Sita using
the key, we can say that the key generates a modified message C = E(K, P), where P is the
plain text.

So a plain text P, which is the message, is encrypted to produce a modified message
known as cipher text. The cipher text is transmitted through the medium and reaches
the other end. At the other end the receiver, Ram, performs the decryption with the
help of the same key or a different key; so decryption is performed on C to get back P,
that is P = D(K, C). This is how the entire cryptography works. A plain text generated by
the sender, Sita, is encrypted with the help of a key; after encryption the message is known
as cipher text; that is communicated through the insecure medium, reaches the other end,
and is decrypted with the help of the key to get back the plain text. This is the model for
cryptography.

(Refer Slide Time: 12:15)

Now let us see what the various alternatives are. There are two basic approaches. One is
known as symmetric-key cryptography and the other is public-key cryptography. We shall
consider these two basic approaches one after the other in detail.

In case of symmetric-key cryptography a single key, known as a shared key, is used by
both the parties. That means both Sita and Ram, who are at the two ends, use the same
secret key; let us name it KSR, shared by Sita and Ram.
(Refer Slide Time: 15:36)

This particular key is known only to Ram and Sita and not to anybody else. After doing
the encryption using this key KSR, the message is transmitted through the medium in the
form of cipher text, and as it reaches the other end the same key is used for decryption by
Ram to recover the plain text. So this is how symmetric-key cryptography works, and as I
have said it uses the same key in both directions. This is commonly used for long
messages: when we have to send a long message, say some text of a few pages or
something like that, we can use this symmetric-key cryptography.

It has got several problems. First of all it requires a large number of unique
keys. For example, if you have got two users you will require one key. So if you have 5
users how many keys will be required? It is 5 into 4 divided by 2, so you will require 10 keys;
in general, if you have got n users you will require n into (n minus 1) by two keys. That means if
you have got say one million users then you will require about 500 billion keys, so that is the
problem of this symmetric-key cryptography. The total number of keys is very, very large.
Particularly as the number of users increases the number of keys also increases; this is
known as the n-square problem.
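Just to check these figures, here is a tiny illustrative calculation in Python (the function
name is mine, not part of the lecture):

# number of unique pairwise keys needed in symmetric-key cryptography
def symmetric_keys(n):
    return n * (n - 1) // 2        # one key per pair of users

print(symmetric_keys(5))           # 10 keys for 5 users
print(symmetric_keys(1000000))     # 499999500000, i.e. about 500 billion keys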

Moreover, another problem is the distribution of the keys: you have to distribute so many
keys secretly to all the users. That means this key KSR should be known only to Sita and
Ram and nobody else. So each of the n into (n minus 1) by 2 keys is to be known only to
one pair of users, and this makes the distribution of keys very difficult. Moreover, each
user has to store his keys in some storage medium. For example, when you have got one
million users, each user has to store about one million keys (actually one million minus 1)
if he wants to be able to communicate with all the other people. These are the problems
we have to tackle. In the next lecture we shall discuss how they can be handled.
First I shall discuss the traditional ciphers where characters are used as the unit for
encryption and decryption. As you know, an ASCII character can be represented by 7 bits,
and of course if you use one parity bit that makes it 8 bits, so each character is of 8 bits.
Traditional ciphers perform the encryption character by character, so the unit of
encryption and decryption is the character.

(Refer Slide Time: 19:10)

The unit of encryption and decryption is in terms of characters. The first one is
monoalphabetic substitution. Here the relationship between a character in the plain text
and a character in the cipher text is always one-to-one. That means for each character
there is a unique character in the cipher text. For example, here the key that is used is 3,
key is equal to 3. By that we mean in place of 'a' we shall use a character which is three
characters away from 'a', that is 'd'. This was used by Julius Caesar, that's why it is
known as the Caesar cipher. So Julius Caesar was not only a good soldier and fighter, but he
also developed a cryptographic technique for secret communication, and this was the
technique that was used.

So, if you are sending 'a', in place of 'a' 'd' will be used, in place of 'b' 'e' will be used,
in place of 'c' 'f' will be used and so on; in this way, when you use the key it has to be
done in this manner. Usually the characters can be represented by numbers. For example,
'a' can be represented by 0, 'b' can be represented by 1, 'c' can be represented by 2 and so on.
Now if you add 3 to the number, that means 0 plus 3 will make it 3, so in place of 'a' we
shall be using 'd'. Similarly, when we do the decryption we have to subtract 3. So we find that
in case of this monoalphabetic substitution the encryption and decryption operations are
just the opposite of each other.

If we use addition for encryption we shall use subtraction for decryption; if we use
multiplication for encryption we shall use division for decryption. So, for encryption
three is added to generate the cipher text and three is subtracted from the cipher text to
get back the plain text. This is how the encryption and decryption are done. Obviously this is a
very simple technique, but the code can be attacked very easily because, from the
linguistic point of view, each and every character appears with a certain frequency, and the
frequency of appearance varies from character to character. For example, characters like
'e', 't', 'o' and 'a' appear more frequently than other characters. So based on that the key can
easily be found out; that's why it is not very popular.
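As a small illustrative sketch (my own, not from the slides), the Caesar cipher with key 3
can be written in a few lines of Python; each letter is mapped to a number 0 to 25, the key
is added modulo 26 for encryption and subtracted for decryption:

def caesar_encrypt(plain, key=3):
    # shift each lowercase letter forward by 'key' positions, modulo 26
    return ''.join(chr((ord(c) - ord('a') + key) % 26 + ord('a')) for c in plain)

def caesar_decrypt(cipher, key=3):
    # decryption is the opposite operation: shift backward by 'key'
    return ''.join(chr((ord(c) - ord('a') - key) % 26 + ord('a')) for c in cipher)

print(caesar_encrypt("sita"))                    # vlwd
print(caesar_decrypt(caesar_encrypt("sita")))    # sita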

Another approach that can be used is known as polyalphabetic substitution. In this
particular case the relationship between a character in the plain text and a character in the
cipher text is always one-to-many, not one-to-one.

(Refer Slide Time: 21:38)

For example, in the Vigenere cipher this polyalphabetic substitution is used. As you can
see here, a matrix is used, and this matrix provides the information for performing
the encryption. So the character 'a' is replaced by 'w' if it appears in the first row, the
character 'b' is replaced by 'r' if it appears in the first row; on the other hand, if 'a'
appears in the second row it is replaced by 'h', and if it appears in the third row it is
replaced by 'p'. It can also be represented mathematically. Suppose you perform a mod
operation on the row number, say mod ten. Then for the first row the shift is zero, so all the
characters remain as they are; in the second row the shift is one, so 'a' is replaced by 'b', 'b' is
replaced by 'c', 'c' is replaced by 'd' and so on; and in this way it goes on up to ten rows, so in
the tenth row the shift is nine.

When it comes to row eleven the cycle of shifts starts again and the substitution is
performed in the same way. So in this way we can perform polyalphabetic substitution; each
character is replaced by different characters depending on the row in which it appears.
Obviously it is more complex than the Caesar cipher and the code is harder to attack
successfully. Therefore this gives you better security compared to the Caesar cipher.
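To make the idea of a position-dependent shift concrete, here is a small Python sketch (an
assumption of mine: a repeating cycle of shifts 0 to 9, as in the mod-ten description above;
the actual table on the slide may differ):

def poly_encrypt(plain, period=10):
    # the shift depends on the position (row) of the character, so the same
    # plain text letter can map to different cipher text letters
    return ''.join(chr((ord(c) - ord('a') + i % period) % 26 + ord('a'))
                   for i, c in enumerate(plain))

def poly_decrypt(cipher, period=10):
    return ''.join(chr((ord(c) - ord('a') - i % period) % 26 + ord('a'))
                   for i, c in enumerate(cipher))

print(poly_encrypt("attack"))   # auvdgp: the two a's and the two t's encrypt differently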

Another approach that can be used is known as transpositional cipher. In this case the
characters remain unchanged but change their position to create the cipher text.

(Refer Slide Time: 24:10)

In the previous case we have seen that the text was Sita and the letters s i t a are replaced
by some other characters either in polyalphabetic or monoalphabetic substitution. On the
other hand, in case of this transpositional cipher the same characters are used however
their positions are changed. This is explained here.

For example, here this is the plain text P, and it has to be encrypted; the key is represented
by this box, which performs the transposition. You can see here that the character in
column one is now placed in position three in the cipher text, the character in position two
is placed in position six in the cipher text, and so on; this particular diagram represents
that position 1 goes to position 3, 2 to 6, 3 to 4 and so on. This plain text (they alone live
who live for others) gets transformed into this: y t e h n e a o i l and so on. As you can see,
t, h, u, i are all still present here but their positions have changed, so you are doing
encryption by changing the positions, and you can do the decryption by repositioning in the
opposite manner, and you will get back the plain text. So here you get the cipher text from
the plain text by using this transpositional cipher, and you can get back the plain text from
the cipher text. As you can see, it is not very secure; the attacker can find the plain text by
trial and error based on the frequency of occurrence of characters. So the positions can be
recovered by trial and error and the message can be deciphered; that's why it is not
very popular.
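A minimal sketch of a transpositional cipher on six-character blocks (the mapping 1 to 3,
2 to 6 and 3 to 4 follows the lecture; the remaining positions are filled in arbitrarily here
just to complete the example):

KEY = [3, 6, 4, 1, 5, 2]             # destination position of characters 1..6

def transpose_encrypt(block):
    out = [''] * len(block)
    for i, c in enumerate(block):
        out[KEY[i] - 1] = c          # characters keep their values, only positions change
    return ''.join(out)

def transpose_decrypt(block):
    # undo the transposition by reading characters back from their destinations
    return ''.join(block[KEY[i] - 1] for i in range(len(block)))

c = transpose_encrypt("theyal")
print(c, transpose_decrypt(c))       # ylteah theyal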

Now we shall focus on block ciphers. So far we have seen that in traditional ciphers characters
were used as the unit for encryption or decryption. Instead of that, in block ciphers a block
of bits is used as the unit of encryption. The block can be 64 bits, 128 bits or
256 bits; that can be different for different encryption techniques, but the unit is always
a block of bits.

(Refer Slide Time: 26:05)

So to encrypt a 64-bit block one has to take each of the 2 to the power 64 input values
and map it on to 2 to the power 64 output values so here the mapping should be one-to-
one as it is shown here. This is the plain text P which is encrypted by this key to get this
cipher text C. Now you may be asking what is really meant by a key. What is a key? Key
is essentially a number which is secret and it is known only to the sender and receiver or
Ram and Sita in case of this symmetric-key cryptography.

So, by using this secret number encryption is done to generate the cipher text. This
cipher text can be decrypted using the same key to get back the original plain
text. Thus the mapping from plain text to cipher text has to be one-to-one, and this mapping can be
realised by two transformations. One is known as permutation, which can be performed
either by hardware or by software and which is represented by a P-box.
(Refer Slide Time: 28:15)

As you can see here, this is the 8-bit input, and the positions of these 8 bits are permuted to
generate a different number. So 1 0 1 0 1 1 0 1 gets transformed into 1 0 0 0 1 1 1 1 in the
cipher text when it passes through the permutation box. To represent this permutation
operation you require k log2 k bits: if you have got k bits, then for each position you will
require log2 k bits to represent where it goes, so you require a total of k log2 k bits to
represent the permutation operation. On the other hand, another transformation can be
done, which is known as substitution and which can be done with the help of an S-box as shown
here.

Here, in the S-box, what is being done is that first, with the help of a decoder, the input is
converted into another form. For example, here you have got a 3-bit input; that 3-bit
input is applied to a 3-to-8 decoder which has got 8 outputs; this output is applied to a
P-box which does the permutation; and then it is applied to an 8-to-3 encoder
which performs the reverse operation to get back 3 bits. The encoder has an 8-bit input
and a 3-bit output (Refer Slide Time: 27:57). So here obviously the output number is completely
different from the input, and the substitution operation requires k into 2 to the power k bits to represent
a substitution operation on k bits.
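A minimal sketch of the two transformations in Python (the particular permutation and
substitution tables below are my own choices, only to show the mechanism; they are not
the tables used in any standard):

P_TABLE = [2, 5, 0, 7, 1, 3, 6, 4]        # output bit i is taken from input bit P_TABLE[i]

def p_box(bits):                           # bits: list of 8 zeros and ones
    return [bits[P_TABLE[i]] for i in range(8)]

# the 3-bit S-box (decoder + P-box + encoder) collapses in software to a lookup table
S_TABLE = [5, 2, 7, 0, 3, 6, 1, 4]         # maps each 3-bit value to another 3-bit value

def s_box(value):                          # value: integer in 0..7
    return S_TABLE[value]

print(p_box([1, 0, 1, 0, 1, 1, 0, 1]))
print(s_box(0b101))                        # 6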

So by using these two transformations you can perform encryption. I shall explain how an
encryption technique can be implemented with the help of permutation and
substitution. Here we see how it has been done for a 64-bit input.
(Refer Slide Time: 30:12)

First the input is divided into 8-bit pieces. Hence, in step one the 64-bit number is divided into
eight 8-bit pieces. Then each 8-bit piece is substituted based on functions derived from the
key. So here you perform substitution based on a key, and obviously you get a different
8-bit number for each of these 8-bit pieces. After this is done, the eight 8-bit numbers are
combined to get 64 bits of intermediate data, and this is permuted based on another
key; so permutation is performed, again with the help of a key, to get the 64-bit output. Now
you can perform several rounds of this looping to improve the encryption technique, to
increase the security of the encryption.

Obviously the question arises: how many rounds should you do? It has been found that as the
number of rounds increases, the effectiveness of encryption increases; however it should
not be too large, so you have to find an optimal number of rounds for the
encryption. Let us see a standard technique that is used, known as the Data
Encryption Standard or DES.
(Refer Slide Time: 33:10)

This was developed by IBM and subsequently it was adopted by the US government for
use in encryption of non-classified data. How it works is explained here with the help of
this diagram.

This is again, in effect, a monoalphabetic substitution cipher using 64-bit characters. So the input is a
64-bit block; this is the plain text (Refer Slide Time: 30:51) and the first step is
permutation. As you can see, an initial permutation is performed on this 64-bit input.

Here it is explained how it is being done.

(Refer Slide Time: 31:10)


It has got sixteen rounds and the sixteen iterations are performed in this way. Although it
has got a 56-bit key, subsets of its bits are actually used to generate the subkeys K1, K2, ...,
K16 for the different iterations. So with the help of a key processor these keys k1 to k16
are generated as the information passes through each iteration, and the
operation performed in each iteration is explained here.

Here (Refer Slide Time: 31:54) the 64 bits are divided into two parts, say a left part and a
right part. The right part is swapped over to become the new left part, and as you can see the
right part is also combined with the 48-bit key: a function of the right 32 bits and the key is
computed, and this is exclusive-ORed with the left 32 bits to produce the new right 32 bits. So we
get a new right 32 bits and a new left 32 bits, and this goes to the next iteration,
where the same operation is done again but with a different key. In this way, after
passing through sixteen such iterations, the two 32-bit halves are swapped and the inverse
permutation, which is the opposite of the initial permutation, is performed. So altogether, as
you can see, it has got nineteen distinct stages: the initial permutation,
sixteen iterations, then the swap and then the inverse permutation; altogether
nineteen distinct stages to generate the cipher text.
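The operation in each iteration can be summarised as a Feistel round: the new left half is
the old right half, and the new right half is the old left half XORed with a function of the
old right half and the round subkey. A small sketch (the round function f below is only a
stand-in; the real DES round function involves expansion, S-boxes and a permutation):

def feistel_round(left, right, subkey):
    def f(r, k):                                   # stand-in round function, NOT the real DES f
        return (r * 2654435761 + k) & 0xFFFFFFFF
    return right, left ^ f(right, subkey)          # swap halves and mix

L, R = 0x01234567, 0x89ABCDEF
for k in range(1, 17):                             # sixteen rounds with sixteen subkeys
    L, R = feistel_round(L, R, k)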

These are the typical features of this data encryption standard.

(Refer Slide Time: 33:18)

Although the key input for the Data Encryption Standard is 64 bits long, the actual key used
by DES is only 56 bits; the other bits are parity bits. The parity bits are essentially used
for error checking; they do not take part in generating the subkeys. The decryption can be
done with the same key, and the stages must then be carried out in reverse order. So,
to perform the decryption operation you just do the reverse: here the
inverse permutation has to be done (Refer Slide Time: 34:01), then the swapping, then it is
passed through the reverse sequence of iterations using the same set of keys to get back the plain
text.

DES has sixteen rounds, meaning the main algorithm is repeated sixteen times to produce
the cipher text. As the number of rounds increases, the security of the algorithm increases
exponentially, but there is some optimal number, and in case of DES the number
selected was 16. Once key scheduling and the plain text preparation have been completed,
the actual encryption and decryption is performed by the main DES algorithm.

Although DES works only on 64-bit blocks of data, there is a need for encrypting larger
messages and sometimes smaller messages. How can this be done using DES?

(Refer Slide Time: 35:45)

That can be done by using the different modes of operation of DES. These are:

- Electronic Code Book (ECB)
- Cipher Block Chaining (CBC)
- Cipher Feedback Mode (CFB)
- Output Feedback Mode (OFB)

Let me discuss these four modes of operation of DES. The first one is known as
Electronic Code Book, ECB. This is the regular DES algorithm. The data is divided into
64-bit blocks: since the message is long it is divided into 64-bit blocks, each
block is encrypted one at a time, and the separate encryptions of different blocks are totally
independent of each other, as shown in this diagram.
(Refer Slide Time: 37:37)

So each plain text block is 64 bits and generates a cipher text block which is also 64 bits. Thus
here you perform a kind of monoalphabetic substitution: a 64-bit number is encrypted to
get another 64-bit number. As you can see, for P1 we get C1, for P2 we get C2 and for Pn
we get Cn; so here you have got n 64-bit input blocks and you get n 64-bit output blocks,
which form the cipher text.

Here, as you can see, if Pi is the same as Pj then Ci will be the same as Cj; this is the drawback of
this approach, because it may give a clue to the intruder, the Ravana. For
example, suppose there are ten persons who are getting the same salary; the intruder can find
out those ten persons by looking at the cipher text, because the cipher text also will be the
same for those ten persons; the salary part will be the same.

In other words it can provide some information to the eavesdropper or intruder. So if the
message contains two identical 64-bit blocks, the cipher text corresponding to these
blocks will also be identical. This is not a very good approach.
(Refer Slide Time: 38:55)

Another problem is that someone can modify or rearrange blocks to his own advantage. For
example, since the blocks in the cipher text are independent, C2 and C3 can be
interchanged, or C4 and C5 can be interchanged, and at the other end the receiver will not
know that this has been done; this can be exploited for different purposes. Suppose
somebody wants to increase his salary: maybe it was 16000 and he can make it 61000. This
kind of thing can be done, and as a result this ECB or Electronic Code Book method is not
very safe; because of these flaws ECB is rarely used. Thus, to
overcome this problem the second approach can be used, which is known as Cipher Block
Chaining, CBC.

(Refer Slide Time: 41:25)


In this mode of operation each plain text block to be encrypted is first XORed with the
previous ciphertext block. Initially a random number is chosen, which is the initialization
vector or IV. So a 64-bit IV is generated randomly and is XORed with P1, and then that
result is encrypted to get C1; then C1 is XORed with P2 to get a number which is encrypted to get C2. So here,
depending on the initialization vector, C1 will be different in different cases; C1
is exclusive-ORed with P2 and the result is then encrypted using the Data Encryption Standard. So by
using DES the encryption is done, C2 is used to perform exclusive OR with P3,
and so on. Thus Pn is XORed with Cn-1 and the result is then encrypted to
generate Cn.

In this case, even if Pi and Pj are the same, Ci and Cj will not be the same; they will be different,
and as a consequence the problem that we encountered in the previous case, the
Electronic Code Book, does not appear in case of this Cipher Block Chaining (CBC)
method. So here (Refer Slide Time: 40:54) this makes all the blocks dependent on the
previous blocks, and the only extra thing that has to be done is that the initialization vector has
to be sent along with the data. That means apart from sending the cipher text C1, C2, ..., Cn
you have to send the initialization vector IV to the receiving end so that the decryption
can be done using the same initialization vector, which is a random number. New
random numbers are generated for different situations.
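In equation form, Ci = EK(Pi XOR Ci-1) with C0 = IV. A minimal sketch of the chaining (a
toy XOR function stands in for DES here, just to show the data flow; it is not a real cipher):

def toy_encrypt(block, key):                            # stand-in for DES, NOT secure
    return bytes(b ^ k for b, k in zip(block, key))

def cbc_encrypt(blocks, key, iv):
    prev, out = iv, []
    for p in blocks:
        mixed = bytes(a ^ b for a, b in zip(p, prev))   # XOR with previous cipher block
        c = toy_encrypt(mixed, key)
        out.append(c)
        prev = c                                        # chain: feeds the next block
    return out

# two identical plain text blocks now give two different cipher text blocks
print(cbc_encrypt([b"ABCDEFGH", b"ABCDEFGH"], b"secrets!", b"initvect"))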

There are situations where the size of the plain text is smaller than 64 bits. In the previous
two cases we have seen that we cannot do encryption unless each plain text block is of
64 bits.

(Refer Slide Time: 44:18)

Now suppose data is coming from an interactive terminal at the rate of one character,
that is 8 bits, at a time; in such a case how do you use DES? That can be done by using the Cipher
Feedback Mode or CFB. It can receive and send k bits, say k equal to 8, at a time in a
streamed manner, so the cipher text is also produced 8 bits at a time. How does it work? Again, in this case also we use an
initialization vector; that initialization vector is loaded into a 64-bit shift register and is
initially encrypted using DES, and after encryption only the leftmost byte of the result
is used to perform the exclusive OR with the character available at that moment.

Therefore, as you see, that byte is exclusive-ORed with the plain text character to get the cipher text character. In this
way it is done character by character, and each cipher text character is shifted into the
register. As you can see here, after ten such characters have been encrypted this is the
snapshot (Refer Slide Time: 43:14): C3, C4, C5, C6, C7, C8, C9 and C10 are already in the register,
this 64-bit number is encrypted with the key, and the leftmost byte (8 bits)
of the resultant number is used to get the cipher text by
exclusive OR with the plain text character. So this is how it is done.

On the other hand, at the receiving end (Refer Slide Time: 43:45) the same exclusive OR is
performed on the cipher text to recover the plain text, and of course as long as the contents of the
shift registers at the two ends are the same you will get back the plain
text P11. This is also a snapshot after decrypting ten characters, when the eleventh
character is being decrypted. So, in a streamed manner, character-by-character encryption
can be done, again by using DES.
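A sketch of this byte-wide cipher feedback idea (again with a stand-in for DES; the 64-bit
shift register is kept here as an 8-byte value and only the leftmost byte of its encryption is
used per character):

def toy_encrypt(block, key):                       # stand-in for DES, NOT secure
    return bytes(b ^ k for b, k in zip(block, key))

def cfb_encrypt(message, key, iv):
    reg, out = bytes(iv), bytearray()
    for p in message:                              # one character (8 bits) at a time
        keystream = toy_encrypt(reg, key)[0]       # leftmost byte of E_K(shift register)
        c = p ^ keystream
        out.append(c)
        reg = reg[1:] + bytes([c])                 # shift the cipher text byte into the register
    return bytes(out)

print(cfb_encrypt(b"hello", b"secrets!", b"initvect"))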

Now let us consider the output feedback mode. This is also a stream cipher, and
encryption is performed by XORing the message with a one-time pad.

(Refer Slide Time: 46:08)

Here the entire process is somewhat similar to the previous case. However, as you can see,
after encrypting the shift register contents the leftmost 8 bits are exclusive-ORed with the plain
text to get the cipher text, and it is this encrypted 8-bit output, rather than the cipher text, that is fed
back into the shift register. As a consequence the keystream is not dependent on the plain text; that means
this one-time pad can be generated beforehand, and then the exclusive OR can be performed
as the bits are received, bit by bit, to generate the cipher text. So a bit-by-bit exclusive OR
operation can be done with the plain text.

In this particular case, if some bits of the cipher text get garbled, only the corresponding bits of the
plain text get garbled, because a bit-by-bit operation is performed. In this
case another advantage is that the message can be of arbitrary length; it need not even be a
multiple of 8 bits but can be 6 bits, 7 bits, 2 bits or any number, because you are
performing a bit-by-bit operation. At the receiving end also the one-time pad can be
generated and then the exclusive OR operation can be performed with the cipher text to get back
the plain text. However, this is a less secure method than the other modes of operation
that we have discussed; as a consequence this output feedback mode is rarely used unless
it is absolutely necessary.

One limitation of this Data Encryption Standard is the small size of the key. We have
seen the key size is only 56 bits and as a consequence it is not very secure; that was the
limitation pointed out by the critics of the Data Encryption Standard.

(Refer Slide Time: 48:05)

So what was done to improve the effectiveness of this encryption was to increase the
effective key length, not by directly changing the key length or the size of the plain text,
but by developing the triple DES (3DES) technique. What is done here is that each block of
plain text is subjected to encryption followed by decryption followed by encryption,
using two keys K1 and K2 and again K1. This effectively increases the key length (with
two distinct keys the effective key length becomes 112 bits) without changing the basic
algorithm used by the Data Encryption Standard: you perform the
standard DES algorithm, which operates on 64-bit plain text with 56-bit keys, but
you do it in a chain, one after the other, that is encryption, decryption, encryption, to generate
the 64-bit cipher text.
Then again decryption can be done in the reverse order: decryption, encryption, decryption,
to get back the plain text. This particular approach gives you much better security;
there is no known practical method for breaking this code. You can use CBC, Cipher Block
Chaining, whenever the data is long; CBC is used to turn the block
encryption scheme into a stream-like encryption scheme when you have got messages longer than
64 bits.
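In symbols, the triple operation is C = EK1(DK2(EK1(P))) and the plain text is recovered as
P = DK1(EK2(DK1(C))). A short sketch, assuming single-DES encrypt and decrypt functions
are available and passed in as parameters:

def triple_des_encrypt(block, k1, k2, des_encrypt, des_decrypt):
    # encrypt-decrypt-encrypt with keys K1, K2, K1
    return des_encrypt(des_decrypt(des_encrypt(block, k1), k2), k1)

def triple_des_decrypt(block, k1, k2, des_encrypt, des_decrypt):
    # the reverse chain: decrypt-encrypt-decrypt
    return des_decrypt(des_encrypt(des_decrypt(block, k1), k2), k1)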

We have discussed the symmetric-key cryptography technique, where we have seen that
every pair of users requires a separate key, which becomes a problem when we have a large
number of users. To overcome this problem there is a more recent method known as
public-key cryptography.

In this particular case two keys are used for encryption and decryption. The two keys are
known as the public key and the private key. Here, for example, Sita is trying to send a plain text,
so Sita does the encryption with the help of the public key of Ram, who is the receiver
(Refer Slide Time: 49:30). So by using the public key of Ram encryption is
performed to generate the cipher text, then it is sent over the medium to the other end, and
Ram receives it and does the decryption by using his private key.

(Refer Slide Time: 52:18)

As you can see, in this particular case the public key is known to the rest of the world and the
private key is the secret key kept by the receiver. So the same pair of
keys can be used with any other entity. That means, instead of Sita, if somebody else wants to
send another message, they can use the same public key to perform the encryption and send it
to Ram. As long as the private key is not disclosed and is known only to Ram, Ram can
decrypt a message sent by any person, not just Sita, when it is
encrypted with the public key of Ram. So in this particular case the number of keys
required is very small: if you have got n users you require only 2n keys. So, if
you have got one million users you will require two million keys in case of this public-key
cryptography.

The main difference, as we can notice here, is that instead of using the same key for encryption
and decryption you are using two different keys, a public key and a private
key. That is the basic difference between public-key cryptography and symmetric-key
cryptography.

Now the question arises: what are the advantages and disadvantages of this technique? The
advantage, as you have noticed, is that the number of keys required is small, and the main
disadvantage is that it is not efficient for long messages because the encryption technique is
computationally quite expensive. However, another problem present here is that the association
between an entity and its public key has to be verified. The public key may be received
through the network, so whether it is really coming from Ram or from somebody else, an
impostor, has to be verified; otherwise the scheme will fail. That means the verification of
this public key has to be performed before using it for encryption.

One popular approach for public-key cryptography is the RSA technique. It was
invented by Rivest, Shamir and Adleman. This public-key algorithm, which performs
encryption as well as decryption, is based on number theory.

(Refer Slide Time: 53:30)

This RSA algorithm is based on number theory and it allows a variable key length:
long for enhanced security and short for efficiency. That means the size of the key is not
fixed here; it can be long or short depending on the security level that you want or the size
of the message. Typically the size of the key is 512 bits, and a variable block size can be
used; however, the block has to be smaller than the key length, so if the key is 512 bits the
block size has to be smaller than that. The private key is a pair of numbers (d, n) and the
public key is also a pair of numbers (e, n).

So encryption is done using (e, n) and decryption is done using (d, n). The algorithm chooses
two large primes p and q, typically around 256 bits each, then computes n = p into q and z = (p
minus 1) into (q minus 1), and the third step is to choose a number d which is relatively prime to z,
that is to (p minus 1) into (q minus 1).

(Refer Slide Time: 55:25)

Then find another number e such that e into d mod (p minus 1) into (q minus 1) is equal to 1.
Then for encryption you use C = P to the power e mod n, and for
decryption you use P = C to the power d mod n. This is illustrated with an
example.

As you can see, (5, 119) is the public key used for encryption. The
message or plain text sent by Sita is 6, so 6 to the power 5 mod
119 is 41; that is the cipher text C, and C is sent to Ram. The decryption is performed
using C to the power d, that is 41 to the power 77 (because here d is 77) mod 119, to
get back the plain text 6; this is the message received by Ram. This is how encryption and
decryption can be done, and here you can see (Refer Slide Time: 55:02) the strength of the
scheme: even though e and n are publicly known, d cannot be derived from them without
factoring n, so in spite of that the cipher
cannot be broken by the Ravanas.
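The numbers in this example can be checked directly. Here is a small Python sketch
reproducing the computation (taking p = 7 and q = 17 as the primes, an assumption
consistent with n = 119 and z = 96 above):

p, q = 7, 17
n = p * q                        # 119
z = (p - 1) * (q - 1)            # 96
e, d = 5, 77                     # e*d = 385 = 4*96 + 1, so e*d mod z == 1

plain = 6
cipher = pow(plain, e, n)        # 6^5 mod 119 = 41, the cipher text sent to Ram
recovered = pow(cipher, d, n)    # 41^77 mod 119 = 6, the plain text recovered by Ram
print(cipher, recovered)         # 41 6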

Now it is time to give you the review questions.


(Refer Slide Time: 55:28)

1) What do you mean by encryption and decryption?


2) What are the two approaches to encryption/decryption?
3) For n users, how many keys are needed if we use private-key and public-key
cryptography schemes respectively?
4) How does triple DES enhance security compared to the original DES?
5) Explain how RSA works.

Here are the answers to the questions of Lecture 38.

(Refer Slide Time: 56:02)


1) Why is compressed video more sensitive to error than uncompressed video?

The main reason is that if an error occurs in compressed video, the recovered video is more
distorted, because each error bit affects a larger number of bits compared to
uncompressed video. That means a small number of bit errors affects a large number of bits
after decompression; that's why it is more sensitive.

(Refer Slide Time: 56:35)

2) Why is a media server used for streamed audio/video service?

We have seen that streamed audio/video service cannot be obtained directly from web
servers, which use TCP; TCP cannot be used because the error control techniques
used in TCP are unsuitable for downloading audio/video files, so the use of a media server
avoids the use of TCP. The media player uses the URL in the metafile to access
the media server and download using UDP; UDP does not perform error control and
retransmission, and as a consequence it is very suitable for streamed audio/video service. That is
why a media server is needed: it does not use TCP, instead it uses UDP.
(Refer Slide Time: 57:31)

3) Distinguish between streaming of stored audio/video and streaming of live audio/video.

We have seen that although both services are sensitive to delay and neither can accept
retransmissions, there are differences between the two. In the first application the
communication is unicast and on demand: a person is requesting some stored content, so it is
unicast and usually on demand.

On the other hand, when we are performing streaming of live audio/video, the
communication is multicast in nature and it is live. These are the key differences between
the two, although both are sensitive to delay.
(Refer Slide Time: 58:28)

4) Why is TCP unsuitable for interactive traffic?

TCP is unsuitable for interactive traffic for several reasons: it has no
provision for timestamping and it does not support multicasting. These are limitations
of TCP, whereas for interactive traffic these two services are needed.
Moreover, interactive traffic cannot tolerate the retransmission of packets which TCP performs
in case of error.

As we know, whenever an error occurs TCP does retransmission; that cannot be done in case of
interactive traffic, and that's why TCP is unsuitable for interactive traffic.
(Refer Slide Time: 59:10)

5) What are the key features of SIP protocol?

SIP is an application layer protocol that establishes, manages and terminates a media
session. It can be used to create two-party, multiparty and multicast sessions. It is
independent of the underlying transport layer protocol; it can be either UDP or TCP.

In this lecture we have discussed the cryptographic techniques. In the next lecture we
shall discuss how these techniques can be used to provide the four services I have
mentioned.

Thank you.
Data Communication
Prof .A.Pal
Dept of Computer Science & Engineering
Indian Institute of Technology, Kharagpur
Lecture - 40
Secured Communication - II

Hello and welcome to today's lecture on secured communication. This is the second
lecture on this topic. In the last lecture we considered various issues of
cryptography, which has been found to be the panacea for the problem of secured
communication. In this lecture we shall cover various topics which are essentially
applications of cryptography. We shall see how cryptography can be used to achieve
secured communication, particularly to provide the different types of services required in secured
communication.

Here is the outline of today's talk. First I shall give a brief introduction, then I shall
discuss the applications of cryptography. As I mentioned, these will lead
to privacy, integrity, authentication and nonrepudiation, the four services to be
provided for secured communication.

(Refer Slide Time: 01:38)

Then, apart from message privacy, message integrity, message authentication and
nonrepudiation, it is necessary to provide user authentication, which is important in the
context of key management. Then we shall discuss how key management can be done by
using suitable techniques, then we shall consider the application layer protocol Pretty Good
Privacy (PGP) which can be used for secured communication of emails. And I shall
conclude my lecture after discussing the virtual private network, which is essentially a
network used for secured communication.
(Refer Slide Time: 02:49)

On completion of this lecture the students will be able to state the various services
needed for secured communication; they will be able to explain how the four services,
namely privacy, authentication, integrity and nonrepudiation, are achieved using
cryptography; they will be able to state how user authentication is performed; they will be
able to explain how the PGP (Pretty Good Privacy) protocol works; and they will also be able
to explain how a virtual private network works. As I mentioned in the last lecture, these
are the four important services required for secured communication.

(Refer Slide Time: 03:27)


In cryptography we normally use three characters: the sender, the receiver and the person
in the middle. The sender here is the character Sita, the receiver is Ram, and the person in
the middle, who is essentially the intruder or eavesdropper trying to impersonate, is
Ravana. By privacy we mean that a person, say Sita, should be able to send a
message to another person, Ram, privately. We also mean that even if the message is
received by somebody other than Ram, it will not be intelligible to him or her; it should be
intelligible only to Ram and not to anybody else. That is the basic requirement of privacy.

Second is authentication. After the message is received by Ram he should be sure that the
message has been sent by nobody else but Sita. That means the identity of the sender has
to be authenticated.

Then we have integrity. Ram should be sure that the message has not been tampered with in
transit: when the message is passing through an unreliable network it may be tampered with
by the intruder Ravana, who may try to modify it to his advantage or simply
corrupt it. Ram should be able to identify if a message has been
tampered with in transit.

Finally there is nonrepudiation. Ram should be able to prove at a later stage that the
message was indeed received from Sita. There may be a situation where Sita says at
a later stage that a particular message was not sent by her to Ram; Ram should be able
to prove that it was indeed sent by Sita. That is the service requirement in the
context of nonrepudiation.

First we shall focus on privacy, and how privacy can be achieved by using cryptography. It
can be achieved by using the shared-key or symmetric-key approach. Here Sita converts
the plain text into cipher text by encrypting it with the help of a shared key.
(Refer Slide Time: 06:22)

So the encrypted message is sent through the medium, which is not safe. In the
middle Ravana may try to get access to it, but it will be unintelligible to Ravana.
After it reaches Ram, he is able to decrypt it with the
help of the same shared key and get back the message. So privacy is satisfied here,
because even if Ravana gets the message he will not be able to understand it.
The basic issue here is that the key is shared between Ram and Sita, and in the
shared-key approach we require a very large number of keys; that is one important
problem. Later on we shall discuss how it can be resolved.

Then comes privacy using public-key cryptography. Here what is done is that
Sita performs the encryption: Sita converts the plain text into cipher text with the
help of the public key of Ram.
(Refer Slide Time: 07:37)

So the public key of Ram is used to do the encryption, the encrypted message
traverses the insecure medium, and as it reaches Ram the decryption can be done
with the help of the private key of Ram. So Ram can do the decryption by using his
private key to get back the plain text from the cipher text. This is how the privacy of the
message is preserved by this technique.

In this case the problem is that Sita is using the public key of Ram, so it is necessary to verify
that the public key indeed belongs to Ram and not to some impostor. If an impostor
sends a public key to Sita and she uses that public key for sending messages through the medium,
then the whole purpose will be lost. Hence this has to be verified whenever you use
public-key cryptography.

So you have seen how privacy can be maintained by using shared-key cryptography and
by public-key cryptography. Let us now focus on the other three aspects or services
required, that is authentication, integrity and nonrepudiation. These three services can be
achieved by using a technique known as the digital signature. What is a digital signature? Let
us try to understand. The digital signature is very similar to signing a document.
(Refer Slide Time: 09:28)

Normally whenever we send a document, particularly if it is a legal document or
something similar, a signature is given on each page, and that signature authenticates
the document. A somewhat similar technique is used here; however, here by
signature we mean that some kind of encryption is done. How it is done is explained
here by using public-key cryptography.

Here Sita does the encryption by using the private key of Sita. So the private key of Sita is
used for doing the encryption of the plain text, the cipher text passes
through the network, and as it reaches the receiver Ram he does the decryption by using the
public key of Sita. So here you see the public key of Sita is used to do the decryption and the
private key of Sita is used to do the encryption. By this process Ram is able to
achieve all the three services required. For example, authentication is achieved because
Ram is doing the decryption by using the public key of Sita: if the message had not been sent
by Sita, the decryption by using the public key of Sita would not have been
possible. Integrity is also verified, because the entire message was encrypted: if the message
had been tampered with in transit, it would not be possible to get back a meaningful plain
text after decryption, so the integrity is also achieved by using this
approach.

Finally comes the question of nonrepudiation. Since the decryption is done by using the
public key of Sita, later on if somebody asks Ram, "can you prove that the message you
received was indeed received from Sita?", Ram can produce the cipher text he received:
decrypting it with the public key of Sita yields the plain text, and only the private key of
Sita could have produced such a cipher text. That proves that the message was indeed
received from Sita, because it was possible to do the decryption with the public key of
Sita. So all the three aspects are satisfied by using this digital signature technique.

However, this digital signature can be produced in two ways: the first is signing
the entire document and the second is signing a digest of it. Signing the entire
document is possibly better, but it is not very efficient because the message
can be very large, so the encryption will take a lot of time.

(Refer Slide Time: 12:48)

In particular, in public-key cryptography the size of the key is quite large, more than 256 bits,
and as a consequence the encryption takes a long time; if it is done on the entire message it
will take a very long time, so it is not very efficient. That is why another approach can be used,
which is based on the message digest, essentially a miniature version of the original
message. This miniature version of the original message is used for the purpose of
authentication, integrity and nonrepudiation verification. However, this approach by itself
will not provide privacy, as we shall see.

Now let us see what we really mean by this message digest and how it can be created. The
message digest is created by using a technique known as hashing.
(Refer Slide Time: 17:42)

What the hashing does is this: the message is applied to a hashing function to generate a
digest. Effectively it converts a variable-length message into a fixed-length digest. It is
somewhat similar to generating a CRC or checksum. We have seen that for the purpose of
error detection checksums or cyclic redundancy codes are used; those checksums and cyclic
redundancy codes are either 16 or 32 bits long and are generated from a long message. Here
also, in a somewhat similar manner, a fixed-length digest is created from the variable-length
message, and some standard techniques have been developed to perform the hashing.

The most common hashing functions are MD5, the Message Digest 5 approach, which
produces a 128-bit digest; that means a fixed-length 128-bit digest is generated by using
this approach, which is very popular and widely used.

There is another approach, the Secure Hash Algorithm SHA-1, which produces a 160-bit digest,
and obviously SHA-1 is more secure than MD5 because it uses 160 bits instead
of 128 bits. However, SHA-1 is less popular than MD5.

Now you must understand two very important properties of this hashing operation. First
of all it is one-to-one. What do we really mean by one-to-one? For each message the digest
has to be unique; that means the digest that is created has to be unique for a given
message, and in practice that requirement is considered satisfied if the digest is about 128 bits
or more. The second condition that has to be satisfied is that it is one way. That means from
the message you can generate the digest, but you cannot generate the message from
the digest; the reverse process is not possible. A message cannot be
generated from the digest, and that's why it is one way. So these two properties are to be
satisfied for the message digest to be useful.
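A quick illustration using Python's standard hashlib library (the message here is my own
example):

import hashlib

msg = b"they alone live who live for others"
print(hashlib.md5(msg).hexdigest())                  # 128-bit digest
print(hashlib.sha1(msg).hexdigest())                 # 160-bit digest
print(len(hashlib.md5(b"x" * 100000).digest()) * 8)  # still 128 bits for a long message
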
Let us see how the signing is done on the digest. Here again we are using the private-key/
public-key approach to do the signing on the digest.

(Refer Slide Time: 19:09)

Here Sita generates a message, then a hashing operation, say MD5, is performed on it to
generate a 128-bit digest. This 128-bit digest is encrypted by using Sita's private key, and
then that signed digest is sent along with the message to Ram. So not only the message is
sent to Ram, but the signed digest is also sent. Thus here you see the signed digest is the
encrypted version of the digest; the message itself, on the other hand, is not encrypted, and as a
result this does not give you privacy. Privacy is not provided by this approach; however, it
provides the other three functions that I mentioned, namely authentication, integrity and
nonrepudiation. Now let us see what is done at the other end.
Now let us see what is done at the other end.
(Refer Slide Time: 20:18)

At the other end Ram receives the message along with the signed digest; the signed
digest is decrypted by using Sita's public key. Since the encryption was done by using
Sita's private key, the decryption can be done by using Sita's public key. So after
doing the decryption the digest is recovered. The message, meanwhile, is received in its
original form, that is in plain text form.

This plain text message is again used to generate a digest by using the hashing
operation; that is, MD5 is again applied and a digest is created. The two digests are then
compared.

If they are the same, then authentication, integrity and nonrepudiation are satisfied: the digest
created from the received message is compared with the decrypted digest. In this way the
message digest is used to provide the three services.
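Putting the pieces together, here is a minimal sketch of signing and verifying a digest,
reusing the toy RSA numbers from the previous lecture and MD5 (real systems sign with
keys of 1024 bits or more; splitting the digest into 4-bit pieces here is only so that each
piece is smaller than the toy modulus):

import hashlib

n, e, d = 119, 5, 77                        # toy RSA pair standing in for Sita's keys

def nibbles(message):
    digest = hashlib.md5(message).digest()  # 128-bit digest of the message
    return [x for b in digest for x in (b >> 4, b & 0x0F)]

def sign(message):
    return [pow(x, d, n) for x in nibbles(message)]      # encrypt digest pieces with the private key

def verify(message, signature):
    return nibbles(message) == [pow(s, e, n) for s in signature]   # decrypt with the public key

msg = b"pay 10000 rupees"
print(verify(msg, sign(msg)))                    # True
print(verify(b"pay 61000 rupees", sign(msg)))    # False: tampering is detected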

Now, apart from checking the authentication and integrity of the message, it is necessary to do
user authentication, and this is essentially related to key management. How user
authentication can be done is explained here; it can be done in various ways.
(Refer Slide Time: 23:15)

The first approach is based on symmetric-key cryptography. Here what is done is that
Sita sends her identity and password in an encrypted form by using the shared key KSR. KSR
is the shared key between Sita and Ram (SR stands for Sita and Ram). So the shared key is
used to do the encryption, and the identity of Sita along with the password is sent to Ram. Ram
now knows that the message is indeed coming from Sita, so the user authentication is
performed and then the message communication takes place.

Sita can send messages using encryption based on that shared key KSR, and in this way
message communication can go on. However, this particular approach has some
problems. The first problem is that the intruder can cause damage without actually reading the
messages: whenever a message is going through the network, the intruder Ravana can get
hold of it, modify it and send it on, so he can cause damage without being able to decrypt it.
In such a case Ram will not be able to authenticate Sita.

Secondly, what the intruder can do is replay a particular message, so that it is delivered
twice.

Suppose Sita is sending a message to a banker to make some payment. One
message is sent to pay, let's assume, 10000 rupees; the same message can be resent by the
impostor or intruder, and as a consequence two payments are made. Although Sita
has sent only one message for making the payment, the banker makes two payments
instead of one, because the same message was resent by the intruder. This
is known as a replay attack. You have to overcome this replay attack; how it can be done is
explained below. This problem can be overcome by using an approach known as a nonce.
explained later. This problem can be overcome by using an approach known as nonce.
(Refer Slide Time: 25:00)

A nonce is a random number used only once, as a challenge, to do the authentication. What is
done is that after Sita sends her identity and Ram receives it, Ram sends a
nonce: a random number RR generated by Ram, which is sent to Sita as a challenge. After
Sita receives it she responds by sending the encrypted form of
this random number RR, and after receiving that Ram is happy because now he is
confident that the message is indeed coming from Sita. This particular approach is known
as a challenge-response protocol. By using this challenge-response protocol the above
problem is overcome, because a fresh nonce is used for each session and a replayed
message will not match it. However, in this situation Ram is able to authenticate the user Sita
but Sita is not able to authenticate Ram. To do that, one can use two-way
authentication.
(Refer Slide Time: 26:10)

In this particular case Sita sends her identity to Ram; Ram sends the challenge by sending
the random number RR to Sita; Sita sends the encrypted version of RR back to
Ram.

Ram is now confident that the message is coming from Sita. Then Sita also sends a random
number RS to challenge Ram, and after receiving RS Ram responds
with the encrypted version of RS, using the symmetric key shared by
both of them. Then they can communicate under the cover of KSR. In this way two-way
verification is done. However, this particular approach requires a larger number of
message exchanges, and this can be reduced as shown in the next approach.
(Refer Slide Time: 27:10)

Here, instead of five messages, only three messages suffice; that's why it is a shortened
two-way authentication approach. What is done here is that Sita sends not only her
identity but also her random number RS in a single message to Ram. Ram in turn
does the encryption of that random number and sends the encrypted version along with another
random number (nonce) RR generated by Ram, and that is sent to Sita. Sita in turn
encrypts RR by using the shared key and sends it to Ram. In this way, by exchanging
three messages, the two-party authentication is performed. This is how user
authentication can be done by using symmetric-key cryptography; user
authentication can also be done by using public-key cryptography, as explained here.
(Refer Slide Time: 28:35)

Here again Sita sends a message containing her identity and the random number RS,
encrypted by using the public key of Ram, which is known to Sita. After receiving that,
Ram responds with the encrypted version of RS, RR and KS.

KS is the key generated for that session: a new session key is generated and is
sent in encrypted form by using the public key of Sita, which is known
to Ram. After receiving that, Sita knows that it has been sent by Ram; she does
the decryption by using her private key and gets back RS, RR and KS. Then she sends RR in
encrypted form by using the session key KS to Ram. So in this way, by using public-key
cryptography, the user authentication can be done.
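A minimal sketch of the shortened three-message challenge-response exchange described
above (a keyed one-way function stands in here for encryption with the shared key, just to
show the message flow; real systems would use a proper cipher):

import os, hashlib

K_SR = b"shared-secret-of-sita-and-ram"

def enc(key, value):                         # stand-in for encryption with the shared key
    return hashlib.sha256(key + value).digest()

R_S = os.urandom(8)                          # message 1: Sita -> Ram: identity + nonce R_S
R_R = os.urandom(8)
msg2 = (enc(K_SR, R_S), R_R)                 # message 2: Ram -> Sita: response to R_S + nonce R_R
print(msg2[0] == enc(K_SR, R_S))             # Sita checks Ram's response: True
msg3 = enc(K_SR, R_R)                        # message 3: Sita -> Ram: response to R_R
print(msg3 == enc(K_SR, R_R))                # Ram checks Sita's response: True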

Now, as I mentioned in the last class, symmetric-key cryptography is not very efficient
because of the large number of keys to be generated.
(Refer Slide Time: 29:36)

When the number of users is large, say one million people interested in
communication, you will require about 500 billion keys to be generated, which is
very difficult to do. That's why a simpler and more efficient technique has to be used for the
purpose of key management, and that can be done with a one-time session key for the two parties.
That means a key is generated only for a particular session and is used by both the
parties for that session; in the next session another new key is generated. This can be
done by using a protocol known as the Diffie-Hellman key exchange protocol.

(Refer Slide Time: 33:42)


This is used to establish a shared secret key on a per-session basis. The prerequisite is that in
the beginning both parties should know two large numbers N and G, where these two
numbers have certain properties: N is a large prime number such that (N minus 1)
by 2 is also a prime number, and G is also a prime number. As an example, for simplicity,
let's assume G is equal to 7 and N is equal to 23.

In practice the numbers are much larger; to explain the approach I have taken small
numbers. After N and G are known to both parties, Sita chooses a large random
number x and calculates R1 = G to the power x mod N. Let's assume x is
equal to 3. So she computes R1 = 7 to the power 3 mod 23, which is 21, and this 21 is sent to
Ram. After receiving this number, Ram raises it to his own secret number y (here y
is equal to 6): 21 to the power 6 mod 23 is equal to 18, so 18 is the key. This is how the key
is generated at Ram's end, and it can be used as a shared key. That means 18 can be used
as the shared key. Now let's see what Ram does.
as a shared key. Now let’s see what Ram does.

Ram calculates R2 = G to the power y mod N. What is G to the power y mod N? As you
already know, G is 7, and 7 to the power 6 (6 being the value of y) mod 23 is equal
to 4. This 4 is sent by Ram to Sita, and after receiving 4 Sita generates the key:
key = 4 to the power 3 mod 23, that is, (R2) to the power x mod N, and again she
gets 18.

Therefore both parties have got the shared secret key. This works essentially on the
basis of number theory. I have illustrated it with small numbers, but when large numbers
are used it is quite safe, and it serves the purpose of generating a shared key on a
per-session basis. So this is how key management can be done.
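The whole exchange can be reproduced directly in Python with the toy numbers used above (G = 7, N = 23, x = 3, y = 6); only the choice of such small numbers is illustrative, since real deployments use much larger primes.

# Diffie-Hellman key exchange with the lecture's toy numbers
G, N = 7, 23

x = 3                       # Sita's secret random number
R1 = pow(G, x, N)           # 7**3 mod 23 = 21, sent to Ram

y = 6                       # Ram's secret random number
R2 = pow(G, y, N)           # 7**6 mod 23 = 4, sent to Sita

key_at_ram  = pow(R1, y, N)   # 21**6 mod 23 = 18
key_at_sita = pow(R2, x, N)   # 4**3  mod 23 = 18

assert key_at_ram == key_at_sita == 18
print("shared session key:", key_at_sita)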

Apart from this approach there are other approaches. Another important one is based on
what is known as a key distribution centre (KDC). A key distribution centre is
essentially a trusted third party which maintains a database of keys; it maintains the
keys and delivers them to the different parties, assigning a symmetric key to both
communicating parties. That is the role of the trusted third party, the KDC.
(Refer Slide Time: 35:20)

However, this approach is not foolproof, and a popular authentication protocol called
Kerberos has been developed. It uses the concept of a key distribution centre but does
more: in addition to the key distribution centre it uses an authentication server (AS)
and a ticket granting server (TGS) apart from the real data server. So it involves three
servers, where one of them is the authentication server, the second one is the ticket
granting server and the third one is the real data server, which in our example can be
Ram's server, and by using these three servers the authentication is done. Let us see
how this is done with the help of this particular diagram.

(Refer Slide Time: 38:40)


Here, as you can see, Sita wants to send a message, and before the actual messages are
sent a six-way protocol is performed. Sita sends her name to the authentication server
(AS), and after receiving Sita's identity the authentication server generates a session
key for Sita to use with the TGS, together with a TGS ticket. After receiving this, Sita
simply types her password to recover that session key, and using it she encrypts a
timestamp, tells the ticket granting server with whom she wants to communicate, namely
Ram, and also forwards the ticket, which is in encrypted form under KTG, the key of the
ticket granting server. This is sent to the ticket granting server, and since the ticket
is encrypted under KTG the server accepts the request and generates two items: the TGS
responds by creating a session key KSR for Sita to use with Ram, and a ticket for Ram
containing Sita's identity and KSR. These two items are generated, where KSR is the
session key to be used by Sita for her communication with Ram.

These are sent to Sita, and after receiving them Sita sends a message to Ram carrying the
ticket (Refer Slide Time: 37:46) together with a timestamp encrypted using the session
key. After receiving this, Ram replies by encrypting the timestamp incremented by one
using KSR, the shared session key to be used by both of them for communication. Once
this is done, Sita and Ram have not only authenticated each other's identity but have
also obtained a shared key, KSR, and under the cover of this shared symmetric key both
of them can exchange messages. This is how Kerberos works, and it is very popular
nowadays.
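The following highly simplified Python model traces who can open what in the six-message exchange just described; the seal/open_ helpers merely tag data with the key that protects it, timestamps are omitted, and all key names are assumptions made for illustration, so this is a sketch of the idea rather than real Kerberos.

import secrets

def seal(key, data):             # stand-in for symmetric encryption
    return ("sealed", key, data)

def open_(key, box):             # stand-in for symmetric decryption
    tag, k, data = box
    assert tag == "sealed" and k == key, "wrong key"
    return data

K_SITA_AS = b"derived-from-sita-password"   # long-term key: Sita <-> AS
K_AS_TGS  = b"key-shared-by-AS-and-TGS"     # long-term key: AS  <-> TGS
K_TGS_RAM = b"key-shared-by-TGS-and-Ram"    # long-term key: TGS <-> Ram

# Messages 1-2: Sita -> AS : "Sita";  AS -> Sita : session key + TGS ticket
K_S_TGS  = secrets.token_bytes(16)
reply_as = (seal(K_SITA_AS, K_S_TGS),            # only Sita can open this part
            seal(K_AS_TGS, ("Sita", K_S_TGS)))   # the ticket meant for the TGS
sess_tgs = open_(K_SITA_AS, reply_as[0])         # Sita recovers K_S_TGS

# Messages 3-4: Sita -> TGS : ticket + "Ram";  TGS -> Sita : K_SR + ticket for Ram
K_SR      = secrets.token_bytes(16)
reply_tgs = (seal(sess_tgs, K_SR),               # session key for Sita
             seal(K_TGS_RAM, ("Sita", K_SR)))    # the ticket meant for Ram
k_sr_sita = open_(sess_tgs, reply_tgs[0])        # Sita recovers K_SR

# Messages 5-6: Sita -> Ram : ticket; Ram recovers K_SR (timestamps omitted)
_, k_sr_ram = open_(K_TGS_RAM, reply_tgs[1])
assert k_sr_sita == k_sr_ram
print("Sita and Ram now share the session key K_SR")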

Now, security measures can be implemented by using cryptographic techniques at different
layers, particularly in the network layer, the transport layer and the application layer.
Implementing them in the network layer and the transport layer is quite involved because
there are many protocols, and for each of them you have to develop a suitable security
mechanism. It has been found that a protocol at the application layer is most efficient
and simple, and that is the reason why I shall consider an application layer protocol
which is used for sending emails in secured form.
(Refer Slide Time: 40:45)

This protocol, PGP (Pretty Good Privacy), was invented by Phil Zimmermann, and it
provides all four aspects of security in sending an email message. Let us see how it
works.

Sita is sending an email. First, hashing is done by using a popular approach, say MD5,
and a digest is created. The digest is encrypted using Sita's private key, and that
encrypted digest is sent along with the email; then the entire thing is encrypted using
a one-time secret key that is generated for this message, and not only that, the
one-time secret key itself is encrypted using Ram's public key. All three items are then
sent, that is, the secret key in encrypted form and the message and digest in encrypted
form, to Ram at the other end. After receiving the encrypted secret key and the encrypted
message plus digest, the decryption is done in the following manner.
(Refer Slide Time: 42:28)

Ram performs the decryption. As you can see here, the secret key is decrypted using
Ram's private key, because it was encrypted using Ram's public key and can therefore be
opened only with Ram's private key; the one-time secret key is thus recovered. The
message plus digest, which was encrypted with that one-time secret key (a symmetric key),
is now decrypted with it to get back the message digest along with the email. The message
digest was encrypted using Sita's private key, so now it is decrypted with Sita's public
key to recover the digest; the email is also passed through the same hashing function to
create a digest, and the two digests are compared to check whether they are the same. If
they are the same, then all four services, namely privacy, authentication, integrity and
nonrepudiation, are satisfied, which means the message has been communicated in secured
form. If they do not match, however, then there is some problem, for example the email
message has been modified or corrupted.
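The complete send-and-receive flow can be sketched in Python as follows; hashlib provides a real MD5 digest, but the asym and sym helpers are placeholders that only mark which key protects which item, and each toy key pair is modelled as a single token so that the signing and verification lines pair up, so this is an illustration of the structure rather than real PGP.

import hashlib, secrets

def asym(key, data):   return ("asym", key, data)    # stand-in for public-key encryption
def sym(key, data):    return ("sym",  key, data)    # stand-in for symmetric encryption
def unwrap(kind, key, box):
    t, k, d = box
    assert t == kind and k == key, "wrong key"
    return d

SITA_PRIV = SITA_PUB = "sita-key"   # toy key pair: one token plays both roles
RAM_PRIV  = RAM_PUB  = "ram-key"    # so that sign/verify and seal/open pair up

email = b"Meet me at five."

# --- Sita's side -----------------------------------------------------
digest        = hashlib.md5(email).digest()           # hashing step (MD5)
signed_digest = asym(SITA_PRIV, digest)               # digest under her private key
one_time_key  = secrets.token_bytes(16)
body          = sym(one_time_key, (email, signed_digest))
wrapped_key   = asym(RAM_PUB, one_time_key)           # key under Ram's public key

# --- Ram's side ------------------------------------------------------
k        = unwrap("asym", RAM_PRIV, wrapped_key)      # his private key opens it
msg, sig = unwrap("sym", k, body)
recovered = unwrap("asym", SITA_PUB, sig)             # verify with her public key
assert recovered == hashlib.md5(msg).digest()
print("digests match: privacy, authentication, integrity, nonrepudiation hold")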

Now we shall discuss an approach by which a secured communication network, or a secured
link, can be created, known as a virtual private network; it is very popular, and
without it our discussion on secured communication would not be complete.
(Refer Slide Time: 42:36)

One possible approach for realizing a private network for the communication of
intra-organization messages is to connect the various sites by using leased lines.

(Refer Slide Time: 44:58)

If the organization is located in a single building or in a single campus then there is
no problem: a LAN (local area network) can be created and the machines can communicate
with each other. But in practice that is not so. Organizations like the National
Insurance Company, banks such as the State Bank of India, and similar organizations have
offices throughout the country; they may have thousands of offices, and in each place
they have a local area network. These are connected with the help of leased lines for
the communication of their private messages. Only two sites are shown, site A and
site B, but in practice there can be thousands of sites. In this case the
intra-organization messages can be communicated through this private wide area network
created with the help of the leased lines. This particular approach is simple and it
also ensures privacy.

The advantage here is that one can use private IP addresses, because the IP addresses
used on routers R1 and R2 need not be global IPs; there is no need to take permission
from the global addressing authority, and the organization itself can assign its own IPs
under this approach. But it has limited applicability, because these organizations, as I
mentioned, banks, insurance companies and so on, have to serve many people, and for that
there must be some global communication, where they have to communicate through the
internet. So a second approach can be used, based on a hybrid network.

(Refer Slide Time: 46:19)

In this case the intra-organization communications are carried over the leased line
using the routers R1 and R2. On the other hand, the global internet communications are
performed through the internet using another set of routers, R3 and R4, so a separate
path exists for the communication of the global messages. This is hybrid because both a
private network and the public network are being used, as shown here (Refer Slide Time:
45:58). But the main problem in this case, as well as in the previous case, is that the
private wide area network is created by using leased lines, satellite links and various
other communication facilities, which makes it very costly; private wide area networks
are very expensive. To overcome that, a technique known as the virtual private network
can be used.
(Refer Slide Time: 47:24)

A virtual private network is used for both private and public communication. Not only
the private messages or data but also the public communications are carried over the
internet, so the global internet is used for both. You may be asking how security is
achieved in this case. Here a pair of routers, R1 and R2, is used between two sites, and
in fact all the sites are connected to the global internet: another site is connected to
the internet, and yet another site is connected to it as well (Refer Slide Time: 47:10).
So there is no need to deploy a private wide area network, because the internet is
available throughout the world and each organization only has to link itself to it.
However, the question arises how security can be achieved, and that is done by using the
concept of the virtual private network, or VPN.
(Refer Slide Time: 48:50)

VPN uses IPsec, a network layer protocol known as IP security, in the tunnel mode. It
uses two sets of addressing. As you can see here, this is site one and this is site two,
connected through routers. Apart from the IP header and the packet used by station 150
to send the message to station 250, when the packet is forwarded by R1 two more headers
are added: first an IPsec header, and in front of that a new IP header used for the
communication between R1 and R2 through the internet; the original packet is then
carried as a kind of payload. In other words, the original IP addresses are not used for
routing across the internet but travel as payload inside the fields of this outer
message. Here it is given in more detail; in particular, there are two protocols.

(Refer Slide Time: 51:15)


The second one is known as the Encapsulating Security Payload, or ESP. ESP provides
source authentication, integrity and privacy. This is very useful because it provides
all three of the required services, and here is the frame format. You can see that it
adds an ESP header and an ESP trailer; this part, the original IP header and the rest of
the packet (Refer Slide Time: 49:44), is now sent as payload, and a new IP header is
generated and used. The ESP header has two important fields, the security parameter
index and the sequence number, and the ESP trailer carries the padding, the pad length
and the next header field. By using this, a connection-oriented service is created.

Then you have the authentication data, which means a kind of digest is created and sent
along with the packet, and this is communicated through the internet. So you can see
that the packet is in the tunnel mode: the message along with the original IP header is
sent as payload, and additional fields are used to achieve authentication, integrity and
privacy over the virtual private network.
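As a rough picture of this tunnel-mode encapsulation, the Python sketch below builds the outer packet from the original one; the field values, the HMAC used for the authentication data and the omission of payload encryption are all simplifying assumptions, not the actual ESP wire format.

import hmac, hashlib

def esp_tunnel_encapsulate(original_packet, spi, seq, auth_key, r1_addr, r2_addr):
    esp_header  = {"spi": spi, "sequence_number": seq}
    pad_len     = (-len(original_packet)) % 4           # pad to a 4-byte multiple
    esp_trailer = {"padding": b"\x00" * pad_len,
                   "pad_length": pad_len,
                   "next_header": 4}                     # 4 = IP-in-IP
    # A keyed digest over header, payload and trailer plays the role of the
    # authentication data field (payload encryption is omitted in this sketch).
    auth_data = hmac.new(auth_key,
                         repr((esp_header, original_packet, esp_trailer)).encode(),
                         hashlib.sha256).digest()
    return {"new_ip_header": {"src": r1_addr, "dst": r2_addr},
            "esp_header": esp_header,
            "payload": original_packet,                  # original header + data
            "esp_trailer": esp_trailer,
            "auth_data": auth_data}

inner  = b"original IP header and data from host 150 to host 250"
packet = esp_tunnel_encapsulate(inner, spi=1001, seq=1,
                                auth_key=b"sa-key", r1_addr="R1", r2_addr="R2")
print(list(packet.keys()))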

It is virtual because you are not really creating a private network, yet you are
achieving the benefits of a private network; that is why it is called a virtual private
network. In other words, by using the public network you are creating a virtual private
network for secured communication. Now it is time to give you the answers to the
questions of lecture 37.

(Refer Slide Time: 51:25)


1) What do you mean by encryption and decryption?

As we have already discussed on many occasions, encryption transforms a message, the
plain text, into a form called cipher text which is unintelligible to an unauthorized
person, while an authorized person can get back the plain text by performing decryption.
Decryption, on the other hand, transforms the unintelligible message back into
meaningful information for an authorized person.
(Refer Slide Time: 52:05)

2) What are the two approaches of encryption and decryption techniques?

There are basically two approaches, as follows. The first one is known as the one-key
technique, also called symmetric key encryption; in this case the same key is used for
encryption as well as decryption. The second approach is known as the public key
approach, also called asymmetric key encryption; in this case the transmitting-end key
is known as the public key whereas the receiving-end key is known as the secret key or
private key. We have made use of both approaches in these lectures, as you have seen.

(Refer Slide Time: 54:22)

3) For n number of users, how many keys are needed if we use private and public key
cryptography schemes?

As we have seen, for private key cryptography, or symmetric key cryptography, we require
n (n minus 1) by 2 keys. For example, suppose there are four nodes A, B, C and D; then
node A will require three keys for communication with B, C and D, B will require two
additional keys for communication with C and D, and C will require one more key for
communication with D. The number of edges in this fully connected graph is essentially
the number of keys required, so in the case of 4 users it is 4 into 3 by 2, that is, 6
edges, and as you can see there are six edges. In this way the total number of keys,
n (n minus 1) by 2, becomes quite large.

On the other hand, in the case of public key cryptography, or asymmetric key
cryptography, you require only 2n keys. The reason is that each user requires one
private key and one public key, the latter being used by the rest of the world, that is,
by all the other users; so the total number of keys required is 2n.
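The two formulas can be checked with a few lines of Python, which also reproduce the figure of about 500 billion keys for one million users mentioned earlier.

# Number of keys needed for n users under each scheme
def symmetric_keys(n):
    return n * (n - 1) // 2          # one key per pair of users

def public_keys(n):
    return 2 * n                     # one private + one public key per user

for n in (4, 1_000_000):
    print(n, symmetric_keys(n), public_keys(n))
# 4         -> 6 keys versus 8 keys
# 1,000,000 -> 499,999,500,000 (about 500 billion) versus 2,000,000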
(Refer Slide Time: 55:47)

4) How does triple DES enhance security compared to the original DES?

One common complaint is that DES uses a very small key. We have seen that the key size
was 56 bits, and obviously with a 56-bit key it is not possible to achieve very good
security, so we have to see how to improve the security without discarding DES. Triple
DES was devised to make DES more secure, and it effectively increases the key length:
two keys are used to perform encryption followed by decryption followed by encryption,
and by doing that the effective key size becomes 56 plus 56, that is, 112 bits, which is
considered to give quite reasonable security. That is why triple DES was invented, and
it is quite popular nowadays.
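The encrypt-decrypt-encrypt (EDE) structure can be illustrated with the short Python sketch below; the XOR-based toy_des stages are stand-ins used only so the example runs without a cryptographic library, and they are of course not real DES.

def toy_des_encrypt(key, block):
    # Stand-in for a DES encryption stage (simple XOR with the key).
    return bytes(b ^ k for b, k in zip(block, key))

def toy_des_decrypt(key, block):
    return toy_des_encrypt(key, block)       # XOR is its own inverse

def triple_des_ede_encrypt(k1, k2, block):
    # Encrypt with k1, decrypt with k2, encrypt with k1 again.
    return toy_des_encrypt(k1, toy_des_decrypt(k2, toy_des_encrypt(k1, block)))

def triple_des_ede_decrypt(k1, k2, block):
    return toy_des_decrypt(k1, toy_des_encrypt(k2, toy_des_decrypt(k1, block)))

k1, k2 = b"8bytekey", b"otherkey"             # two keys, 2 x 56 useful bits in real DES
plain  = b"CONFIDEN"                          # one 64-bit block
cipher = triple_des_ede_encrypt(k1, k2, plain)
assert triple_des_ede_decrypt(k1, k2, cipher) == plain
# With k1 equal to k2 the middle stage cancels the first one, so triple DES
# falls back to single DES; this keeps it compatible with older equipment.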
(Refer Slide Time: 56:35)

5) Explain how RSA works.

The steps for RSA are as follows:

It chooses two large prime numbers p and q, typically around 256 bits each.
It computes n = p into q and z = (p minus 1) into (q minus 1).
It chooses a number d which is relatively prime to z, that is to (p minus 1) into
(q minus 1), and then finds e such that e into d mod z is equal to 1. Then, for
encryption, C = P to the power e mod n is used, and for decryption, to get back the
plain text, P = C to the power d mod n is used.
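These steps can be verified with tiny textbook numbers in Python; the primes, the choice of d and the plaintext below are illustrative assumptions, since real keys use primes of hundreds of bits.

p, q = 11, 13
n    = p * q                         # 143
z    = (p - 1) * (q - 1)             # 120

d = 7                                # chosen relatively prime to z
e = pow(d, -1, z)                    # e*d mod z == 1, so e = 103 (needs Python 3.8+)

P = 9                                # plaintext block, must be smaller than n
C = pow(P, e, n)                     # encryption: C = P^e mod n
assert pow(C, d, n) == P             # decryption: P = C^d mod n
print("cipher text:", C)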

This is how RSA works as I have explained in the last lecture. Now it is time to give you
references to the various books that I have used in this lecture.
(Refer Slide Time: 56:43)

So far as textbooks are concerned, the main textbook that I have used is Data
Communications and Networking by Behrouz A. Forouzan, and apart from this I have used
two more books: on the left side is the book by William Stallings, Data and Computer
Communications, published by Prentice Hall of India, and the third book is Computer
Networks by Andrew S. Tanenbaum, again published by Prentice Hall of India.

Apart from these textbooks I have used several reference books. For example, when we
discussed internetworking, the book TCP/IP by Douglas E. Comer was very useful.
(Refer Slide Time: 57:21)

Then, for the lectures on wireless communication, I used Wireless Communications:
Principles and Practice by Theodore S. Rappaport, published by Pearson; there is also a
book on satellite communication published by Wiley; and two more reference books were
used: when I discussed data compression, the book by David Salomon, and on network
security, the book by Charlie Kaufman and two more authors.

(Refer Slide Time: 57:56)


These are the reference books I have used during my lectures. So with this we have come
to the end of not only today's lecture but of the entire forty-lecture series on data
communication.
