Unit 1 Introduction To Layer Functionality and Design Issues
1.0 Introduction
1.1 Objectives
1.2 Services of the Network Layer
1.3 Packet Switching
1.3.1 Virtual Circuit Approach (Connection-oriented Service)
1.3.2 Datagram Approach (Connection-less Service)
1.3.3 Comparison of Virtual Circuit and Datagram Approach
1.3.4 A view of some Network Service models
1.4 Network Addressing
1.4.1 IP Address
1.4.2 Hierarchy in Addressing
1.4.3 Getting an IP Address
1.5 Congestion
1.6 Routing
1.6.1 Classification of Routing Algorithms
1.7 Delay in Packet Switched Networks
1.7.1 Types of delay
1.7.2 Computation of delay
1.7.3 Numerical
1.8 Summary
1.9 Solutions to the problems
1.10 Further Readings
1.0 INTRODUCTION
This unit discusses the network layer, which is the third layer of the OSI model. The job of this layer is to deliver packets from a source to a destination. This layer responds to the service requests of the transport layer and takes services from the data link layer. The unit starts with an overview of the services of the network layer. Switching is the backbone of network architecture, and the important concept of packet switching is elaborated with diagrams in section 1.3. How an address is assigned to a host and the different concepts of addressing are discussed in a further section. This is followed by congestion and routing concepts. Delay is an important concept in packet switched networks; the various types of delay are discussed, and the computation of delay in different scenarios is illustrated with various examples.
1.1 OBJECTIVES
After completing this unit, a student will be able to
elaborate and utilize the concepts of addressing;
define congestion and the policies to overcome congestion in the network layer;
explain the concept of routing;
calculate the delay in a given network scenario.
Let us understand the clear distinction between routing and forwarding with an example. Suppose we are planning a drive from JIIT, Noida to IGNOU. There are various possible paths: one is via GT Road, another is via Indirapuram, and so on. Which path is the best one as per the time taken or road conditions? This decision process is routing, and here the metric used to decide the best route could be anything, such as traffic conditions on the road, the infrastructure of the road, etc. Suppose the selected route is via Indirapuram, and the person starts the journey. At one of the intermediate junctions, there are various directions. Choosing which direction to take at the junction is analogous to the forwarding decision taken by a router.
Let us understand the concept of packet switching more clearly with the following scenario. Consider two banks, where bank 1 requires you to book an appointment before coming to the bank. If you arrive directly, you will not be entertained; if you have already booked an appointment, your waiting time is negligible. There is no such requirement for the 2nd bank. As soon as you reach that bank, you will be served based on the number of people already waiting. The services will be provided to you without any hassle if no one is there. If a large number of people are already waiting, then your waiting time will be long, or in some situations the bank will say it is already full, kindly come the next day. But on the other hand, there is no hassle of calling before leaving home. The scenario of the 2nd bank describes how packets are handled during packet switching.
The network layer receives data from the transport layer and divides it into manageable units known as packets. Based on the different forwarding mechanisms used by connected devices to forward packets from a given source to a particular destination, packet switched networks are further divided into two categories: the virtual circuit approach and the datagram approach.
Before going into the details of the virtual circuit approach, let us first understand the meaning of connection oriented service. A connection oriented service is one in which an end to end logical connection is established between the source machine and the destination machine. All the data between a source destination pair is sent through the same connection. After sending the data, the connection is terminated.
A connection oriented service has the following properties:
a) All data is sent in order and without any error to the destination machine.
b) All the received data is acknowledged by the destination machine.
c) The underlying service guarantees the in-order delivery of packets without any loss or duplication of packets.
d) There is a retransmission policy which handles lost packets.
Due to all these properties, a connection oriented service is also known as a reliable service. A connection oriented service is a three step process, described as follows.
A network layer packet contains the source and destination addresses as part of its header information, because the network layer provides logical communication among machines. In the virtual circuit approach, along with the source and destination addresses, the packet contains a VC-ID. It is necessary to mention the VC-ID in the packet, as all packets have to follow the same virtual connection. When a packet reaches a router, the router consults its forwarding table on the basis of the VC-ID mentioned in the packet and decides the output port.
The virtual circuit approach involves three phases: a) setup, b) data transfer, and c) connection termination. These are explained as follows.
Figure 1: Virtual circuit setup. Machine A is connected to destination B through routers R1, R2, R3 and R4; each router keeps a forwarding table with incoming port, incoming VC-ID, outgoing port and outgoing VC-ID.
As shown in figure 1, machine A wants to send data to machine B. Let the chosen path between machines A and B be A-R1-R2-R4-B. Thus, a virtual connection needs to be established between A and B, involving all the intermediate routers. The process is as follows:
1) Machine A will choose a VC-ID from its available list of VC-IDs and send the request packet to R1. As shown in figure 1, the VC-ID chosen by A is 5.
2) As soon as router R1 receives this request packet, it creates an entry for this virtual circuit in its forwarding table, as shown in figure 1. In this entry, router R1 notes that the packet came in on incoming port 1 with incoming VC-ID 5. The outgoing port is 2, and the outgoing VC-ID is left blank for now.
3) Now, R1 forwards this request packet to R2. In a similar manner, R2 creates an entry for this virtual circuit request in its forwarding table. Suppose the VC-ID chosen by R1 is 15; then the values of incoming port, incoming VC-ID, outgoing port and outgoing VC-ID are 1, 15, 3 and blank respectively.
4) R2 forwards the packet to R4. R4 completes the three entries of its forwarding table in the same manner, as shown in figure 1.
5) R4 sends the packet further to machine B. Machine B will choose a VC-ID; let this value be 60. In future communications, the VC-ID 60 is an indication for B that the packet comes from machine A.
All these five steps show the forwarding of the request packet for setting up the virtual connection from source machine A to destination machine B. But this forwarding completes only three entries in each forwarding table. To complete the 4th entry of the forwarding tables, B sends an acknowledgement packet back to A via the same path, that is B-R4-R2-R1-A. The process can be visualized in figure 2 and is explained as follows.
1) Destination machine B sends an acknowledgement packet carrying VC-ID 60 to R4. By knowing this value, router R4 completes the 4th column, i.e. the outgoing VC-ID, of its forwarding table as shown in figure 2.
2) Router R4 forwards this acknowledgement packet to router R2. The packet carries the incoming VC-ID 35, which is copied into the outgoing VC-ID column in the table of R2.
3) A similar process happens at R1. The acknowledgement packet forwarded by R2 carries the incoming VC-ID 15 from R2's table, and this value is copied into the outgoing VC-ID column in the table of R1.
4) Finally, R1 forwards the acknowledgement packet to machine A, carrying incoming VC-ID 5. This VC-ID was chosen by A itself in the initial process; machine A now knows that this VC-ID is to be used for communication to B.
Figure 2: Completion of the forwarding tables. The acknowledgement packet travels back from B to A along B-R4-R2-R1-A, filling in the outgoing VC-ID at each router (60 at R4, 35 at R2 and 15 at R1), while A keeps using VC-ID 5.
As discussed at the beginning of the setup phase, virtual circuit establishment involves three tasks: deciding the path, assigning a VC-ID to each link, and updating the forwarding tables. As explained in figures 1 and 2, all three tasks have now been completed.
b) Data transfer: All the packets between A and B are sent through the same established virtual circuit between them, and as a result they all reach the destination in order. Each intermediate router changes the value of the VC-ID by consulting its forwarding table, as shown in figure 3. As soon as a packet reaches a router, the router looks at the VC-ID of the packet; in this example it is 5. Thus, R1 looks in its forwarding table for VC-ID 5 and incoming port 1. As can be visualized from figure 3, for these values as an index the outgoing port is 2 and the outgoing VC-ID is 15. R1 changes the VC-ID value in the packet and forwards it further. A similar process is followed at the other routers, and finally the packet is delivered to the destination machine B via the established virtual connection. Figure 3 shows the process for one packet; the same process is followed by all the packets.
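To make the forwarding-table operation concrete, here is a minimal Python sketch (with illustrative values taken from the figures above) of how a router such as R1 could map an (incoming port, incoming VC-ID) pair to an (outgoing port, outgoing VC-ID) pair and relabel the packet; the function and variable names are assumptions, not part of any real router software.

    # Forwarding table of R1: (incoming port, incoming VC-ID) -> (outgoing port, outgoing VC-ID)
    r1_table = {
        (1, 5): (2, 15),
    }

    def forward(table, in_port, packet):
        # Look up (in_port, VC-ID), rewrite the VC-ID and return the outgoing port.
        vc_id, payload = packet
        out_port, out_vc = table[(in_port, vc_id)]
        return out_port, (out_vc, payload)

    # A packet "PKT" arrives at R1 on port 1 carrying VC-ID 5.
    out_port, relabelled = forward(r1_table, 1, (5, "PKT"))
    print(out_port, relabelled)   # 2 (15, 'PKT')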
Figure 3: Data transfer through the established virtual circuit. The packet leaves A with VC-ID 5 and is relabelled 15 by R1, 35 by R2 and 60 by R4 before reaching B.
c) Connection termination: Once A has sent all the packets to B, machine A sends a termination request packet to B and, in return, B sends an acknowledgement of the same. As a result, all the routers delete the corresponding entry from their forwarding tables.
The datagram approach is used in today's Internet. This approach follows the concept of connection-less service, so let us first understand the basic concepts of connection-less service.
In a connection-less service, as its name implies, no virtual connection is made between the source and the destination; the two entities do not perform any handshaking. When a machine wants to send packets to another machine, it simply starts sending. The message is divided into manageable units called packets, where each packet is treated individually. Each packet may follow the same or a different path, so packets may reach out of order or may be lost in between. The sender machine does not have any clue regarding the loss of packets, as there is no provision for acknowledgement of packets. Due to all these properties, connection-less service is an unreliable service.
Although this service is unreliable, it is required in some situations, for example where we want immediate transfer of data, where the loss of some packets does not affect the overall quality of the message, or where less overhead is required. The overhead of handshaking or sending acknowledgements is not present in connection-less service. The User Datagram Protocol (UDP) and the Internet Protocol (IP) are examples of connection-less protocols, which work at the transport layer and the network layer respectively.
As the datagram approach is a connection-less service, all packets, whether belonging to the same source destination pair or to different ones, are treated individually. Here, a packet is called a datagram. A datagram contains the source and destination addresses. The forwarding decision is taken individually for each datagram on the basis of the destination address. Each router looks in its forwarding table for the destination address mentioned in the datagram, and the matching entry gives the output interface on which the datagram is forwarded further. If more than one entry matches, then the output interface is selected based on the principle of longest prefix matching.
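The following short Python sketch illustrates longest prefix matching; the forwarding-table entries are invented for illustration and are not taken from the text.

    import ipaddress

    forwarding_table = {
        "192.24.0.0/13": "interface 0",   # illustrative entries only
        "192.24.8.0/22": "interface 1",
        "0.0.0.0/0":     "interface 2",   # default route
    }

    def lookup(destination):
        dest = ipaddress.ip_address(destination)
        best = None
        for prefix, iface in forwarding_table.items():
            net = ipaddress.ip_network(prefix)
            if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
                best = (net, iface)
        return best[1] if best else None

    print(lookup("192.24.9.5"))   # both prefixes match; the longer /22 wins -> interface 1
    print(lookup("10.1.1.1"))     # only the default route matches -> interface 2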
Both approaches have their own advantages and disadvantages and can be compared on the following points. Table 1 provides a brief overview of the differences between the virtual circuit and datagram approaches.
Virtual circuit approach                                  Datagram approach
The route is decided once for all packets of a            The route is decided for each packet individually
conversation between S and D
Overload may block connection setup and                   Overload increases packet delay
increase packet delay
Connection setup delay along with packet                  Only packet transmission delay
transmission delay
Forwarding decision based on VC-ID                        Forwarding decision based on destination address
Congestion avoidance is easy                              Congestion avoidance is difficult
So far, we have discussed an overview of network layer services. This section discusses some network architectures to give an idea of their services. The Internet is the most widely used network architecture. The Internet's network layer provides a best effort service. Best effort service implies that the layer will try but does not guarantee anything. Therefore, as can be seen from table 2, the Internet network service model does not give guarantees on any issue such as ordering of packets, packet loss, bandwidth, etc. It does not even preserve the timing difference among packets when they reach the receiver side. But there are other network architectures which provide more than best effort service. Table 2 compares three network service architectures on the basis of their services. For more details please refer to [1].
ATM CBR (constant bit rate) works on the principle of a virtual pipe between
source and destination. Thus, it is able to provide some of the services like
ordering of packets, guaranteed bandwidth to each user, etc. No packet would
be lost and there are no chances of congestion as the resources are reserved
while establishing the connection. ATM ABR (available bit rate) provides a
minimum amount of bandwidth guarantee and delivers packets in order. But it
does not provide any guarantee about the loss of packets and jitter among
packets. Thus, ATM ABR is a little bit better than best effort service model of
Internet.
a) IP
b) TCP
c) UDP
a) Internet
b) Telephone networks
Q4. Compare Virtual circuit approach with the datagram approach. Provide at least
two differences.
...................................................................................................
....................................................................................
....................................................................................................
The network layer provides end to end communication, i.e. it delivers packets from the source machine to the destination machine. This communication could be at a global level; thus, a unique identifier for every machine is required. This identifier is the logical address of the machine, also known as the Internet address or IP address. Strictly speaking, this address is not associated with the machine; it is associated with an interface. The boundary between a machine and a link is called an interface. Generally, a host is connected to a single network through one link, so it has one interface and thus one IP address. On the other hand, a router is connected to many networks or hosts, so it has multiple interfaces and each interface has an IP address.
1.4.1 IP address
An IP address is generally written in dotted decimal notation (base 256). The other two notations are binary (base 2) and hexadecimal (base 16). As shown in figure 4, binary notation is just the writing of all 32 bits in binary form. However, to increase readability, the bits are written in groups of 8 bits (one byte) with some space between bytes. If we write the decimal value of each byte and put a dot to separate the bytes, the result is referred to as dotted decimal notation; this is the most commonly used notation. If we write a hexadecimal digit for each group of 4 bits, the notation is called hexadecimal notation.
Figure 4: IP address notations. For example, the address C020030D in hexadecimal corresponds to 11000000 00100000 00000011 00001101 in binary and 192.32.3.13 in dotted decimal notation.
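As a quick check of the three notations, the small Python sketch below converts the dotted decimal address 192.32.3.13 (which corresponds to the hexadecimal value C020030D shown in figure 4) into binary and hexadecimal form.

    dotted = "192.32.3.13"
    octets = [int(x) for x in dotted.split(".")]

    binary = " ".join(f"{o:08b}" for o in octets)   # binary notation, one byte per group
    hexa = "".join(f"{o:02X}" for o in octets)      # hexadecimal notation

    print(binary)   # 11000000 00100000 00000011 00001101
    print(hexa)     # C020030D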
1.4.2 Hierarchy in addressing
In a similar manner, an IP address is divided into two parts, where the first part signifies the network portion and the second part is the host address. The network portion can be fixed or variable in length. If the network portion is of fixed length, we have classful addressing, which was widely used in earlier days. Nowadays, the concept of a variable-length network portion, referred to as classless addressing, is used instead. The next unit discusses classful and classless addressing in complete detail. Suppose 𝑏 bits are used to denote the network address; then the remaining (32 − 𝑏) bits are used to denote the host address.
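As a small illustration of this split (the value b = 20 below is only an example, not taken from the text):

    b = 20                      # bits used for the network portion
    host_bits = 32 - b          # bits left for the host portion
    print(host_bits)            # 12
    print(2 ** host_bits)       # 4096 possible host addresses in such a network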
Q2. If the host portion is 8 bits, then how many bits denote the network portion?
a) 32 bits
b) 24 bits
c) 16 bits
a) Dotted decimal
b) Binary
c) Hexadecimal
...................................................................................................
....................................................................................................
....................................................................................................
1.5 CONGESTION
If packets arrive at a faster rate than the handling capacity of the network, the resulting situation is known as congestion. Initially, when the packet arrival rate starts getting higher than the packet processing rate, the queue starts filling up; as a result, packet delivery time increases. If the same situation continues, the queue becomes full and packets start being dropped. In this situation, the source does not receive acknowledgements and, for a large number of packets, the timer expires, which leads to unnecessary retransmissions. Sometimes the situation becomes worse, reaches a deadlock point, and the whole system collapses.
To understand the situation of congestion, let us see the behaviour of two important
performance metrics i.e. delay and throughput with respect to the capacity of the
network. Figure 6 shows the delay and throughput as a function of load.
Figure 6: Packet delay and throughput as a function of network load. In the no-congestion area both behave well; in the congested area delay grows rapidly and throughput falls.
Initially, when there are few packets, they are delivered without any delay and throughput is good. As the system load increases, packets experience queuing delay, which in turn affects throughput as well. When the system load reaches the capacity, the queue becomes full and some packets get discarded. As a result, the delay approaches an infinite value and the throughput starts decreasing.
The issue of congestion is not handled only at the network layer; it is also handled at the transport layer. The main idea behind congestion control is to try to avoid the situation, i.e. take preventive measures before reaching a threshold, and if the situation does occur, try to come out of it. The first part is known as the congestion avoidance phase and the second as congestion removal. The policies used for the congestion avoidance phase are called open loop congestion control policies; closed loop congestion control policies are used for the congestion removal phase.
d) Discard policy: If a router implements a good discarding policy, it can also prevent congestion to some extent. A good discarding policy is one which does not impact the overall quality of transmission. For example, in a multimedia transmission, if some low priority packets are discarded when there are chances of congestion, the overall quality of the transmission is not affected.
1.6 ROUTING
The job of the network layer is to send datagrams from a source end system to a destination end system. Data may travel through different paths or through multiple hops to reach the destination. The process of deciding the path to reach the destination is known as routing. Routing protocols (or routing algorithms) help in constructing the forwarding table. The forwarding table, or routing table, is stored at every end system and router. Whenever a router receives a packet, it consults its routing table to decide the output interface. Looking into the routing table and choosing the output interface is the process known as forwarding. Filling up the routing tables, their maintenance and regular updating are done at continuous intervals by the routing protocols or routing algorithms. Routing algorithms are a part of the network layer software.
There are various desirable properties of a routing algorithm. These are as follows:
(Figure: an example network with nodes A, B, C, D and E; the link costs implied by the table below are A-B = 3, A-C = 2, B-C = 2, B-D = 4, C-E = 7 and D-E = 4.)

Path        Cost
A-C-E       2 + 7 = 9
A-B-C-E     3 + 2 + 7 = 12
A-B-D-E     3 + 4 + 4 = 11
The cost of a path is the sum of the costs of all edges traversed from A to E. On the basis of lowest cost, the routing algorithm chooses the path A-C-E. Suppose instead that all the edges have the same unit cost; then the path with the fewest hops is chosen. For this scenario, the path A-C-E would again be selected because it has fewer hops than the other paths.
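The following small Python sketch reproduces this comparison, assuming the link costs implied by the table above (A-B = 3, A-C = 2, B-C = 2, B-D = 4, C-E = 7, D-E = 4).

    cost = {("A", "B"): 3, ("A", "C"): 2, ("B", "C"): 2,
            ("B", "D"): 4, ("C", "E"): 7, ("D", "E"): 4}

    def path_cost(path):
        # sum the cost of every edge traversed along the path
        return sum(cost[(u, v)] for u, v in zip(path, path[1:]))

    candidates = [["A", "C", "E"], ["A", "B", "C", "E"], ["A", "B", "D", "E"]]
    print([(p, path_cost(p)) for p in candidates])   # costs 9, 12 and 11
    print(min(candidates, key=path_cost))            # ['A', 'C', 'E'] is the least cost path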
Now, replay this scenario of finding the paths in your mind: how did you choose the path? Did you try all the combinations? Probably not; you tried just two or three paths and convinced yourself that one of them is the least cost path. All this work of your mind is actually done by a routing algorithm, but to do this job the routing algorithm needs complete knowledge about the network.
a) Global routing algorithms: These routing algorithms compute the best path (the least cost or shortest one) by gathering complete knowledge about the network. They are also known as centralized routing algorithms. Such an algorithm can be run at one central location, or a replica can be run at multiple locations. All the information about the nodes and links is collected by the central algorithm, which then computes the routes. The computed routing tables are then distributed to all nodes. In this way, globally optimal routes are computed and distributed to all, which reduces the burden on each node. Link state algorithms are a kind of global routing algorithm.
... do not respond to failures automatically; that is why these algorithms are also known as static algorithms.
...................................................................................................
....................................................................................................
Q4. What are the various policies that can be used to avoid congestion?
...................................................................................................
....................................................................................................
....................................................................................................
The packet transmission process starts at a source and ends at the desired destination. In this process, the packet travels through a number of intermediate routers and links. Thus, a packet does not reach the destination immediately; rather, it experiences a number of delays.
b) Propagation delay: As soon as a bit is put on the link, it has to travel through a number of intermediate links. For a single link, the propagation delay is calculated as the length of the link divided by the propagation speed of the link. The speed depends on the physical type of the link; generally, it is taken as 3 × 10^8 m/s, which is the propagation speed of light in vacuum.
d) Queuing delay: As its name suggests, this is the amount of time a packet waits for its turn to be transmitted. Each router has an input queue for the incoming port and an output queue for the outgoing port; the sum of both waiting times is the queuing delay. Queuing delay mainly depends on the packets already waiting for their turn. If there is no packet in the queue, the queuing delay is zero.
The total delay is the summation of all the types of delay defined in the above subsections. The following notations represent the four delays:
𝑑𝑡 − 𝑇𝑟𝑎𝑛𝑠𝑚𝑖𝑠𝑠𝑖𝑜𝑛 𝑑𝑒𝑙𝑎𝑦
𝑑𝑝 − 𝑃𝑟𝑜𝑝𝑎𝑔𝑎𝑡𝑖𝑜𝑛 𝑑𝑒𝑙𝑎𝑦
𝑑𝑝𝑟𝑜𝑐 − 𝑃𝑟𝑜𝑐𝑒𝑠𝑠𝑖𝑛𝑔 𝑑𝑒𝑙𝑎𝑦
𝑑𝑞 − 𝑄𝑢𝑒𝑢𝑖𝑛𝑔 𝑑𝑒𝑙𝑎𝑦
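Putting these together (a standard relation, written here with d_total introduced only as a convenient name for the overall delay), the total delay experienced by a packet at a single node is

    d_total = d_proc + d_q + d_t + d_p

where the transmission delay d_t = L / R (packet length L in bits divided by the link transmission rate R in bits per second) and the propagation delay d_p = d / s (link length d divided by the propagation speed s of the link).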
Q1. Suppose two hosts Y and Z are directly connected by a link. The length of this link is 10,000 km and its transmission rate is 1 Mbps. The propagation speed on the link is 2.5 × 10^8 m/s. Based on this information, answer the following parts:
a) Y sends a file of 400K bits to Z. How long does it take to send the file assuming
it is sent continuously?
b) Suppose now the file is broken up into 10 packets with each packet containing
40K bits. Z sends an ACK for each packet and Y cannot send a packet until the
preceding one is acknowledged. Transmission time of an ACK packet is
negligible. How long does it take to send the file?
Answer:
a) File size = 400,000 bits, transmission rate = 1 Mbps
Transmission delay = 400,000 bits / 1 Mbps = 0.4 s
Propagation delay = 10,000 km / (2.5 × 10^8 m/s) = 0.04 s
In this scenario, there is no processing delay or queuing delay. Therefore, the total delay is 0.4 + 0.04 = 0.44 s.
b) i. The second packet is sent only when Y receives the acknowledgement of the first packet, so the propagation delay is incurred twice per packet (once for the packet and once for the acknowledgement).
ii. The acknowledgement is sent only when a packet has been received completely at the receiver, and the transmission time of the acknowledgement itself is negligible.
Therefore, the total delay for one packet = transmission delay + 2 × propagation delay = 0.04 + 2 × 0.04 = 0.12 s,
and the total delay for all 10 packets = 10 × (transmission delay + 2 × propagation delay) = 1.2 s.
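The same arithmetic can be reproduced with a few lines of Python (a sketch of the computation above, nothing more):

    link_length = 10_000 * 1000          # 10,000 km in metres
    rate = 1_000_000                     # 1 Mbps
    prop_speed = 2.5e8                   # m/s

    prop_delay = link_length / prop_speed            # 0.04 s

    # a) the whole 400 Kbit file sent continuously
    total_a = 400_000 / rate + prop_delay            # 0.4 + 0.04 = 0.44 s

    # b) 10 packets of 40 Kbit each, stop-and-wait, ACK transmission time negligible
    trans_b = 40_000 / rate                          # 0.04 s per packet
    total_b = 10 * (trans_b + 2 * prop_delay)        # 10 * 0.12 = 1.2 s

    print(total_a, total_b)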
Q2. Compute the end to end delay for circuit switching and for packet switching in a network that has 5 hops to switch a message of 1200 bits, where all the links have a data rate of 4800 bps. The packet size is 1024 bits, including a header of 32 bits. In the case of circuit switching, consider 0.5 s as the call setup time. The hop-to-hop delay is 0.02 s. Assume zero processing delay.
Answer: This answer is divided into two parts a) Computation of delay in circuit
switching scenario b) Computation of delay in packet switching scenario
The given packet size is 1024 bits, which implies 32 bits of header and 992 bits of data. Thus, to send the total message of 1200 bits, two packets are required.
First packet: 1024 bits (992 bits of data and 32 bits of header).
Second packet: 240 bits (1200 − 992 = 208 bits of data and 32 bits of header).
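The remainder of this solution is not reproduced in this excerpt, so the sketch below only illustrates one common set of conventions for finishing the computation: "5 hops" is taken to mean 5 links, packet switching is assumed to be store-and-forward with the second packet sent immediately behind the first, and the hop-to-hop delay of 0.02 s is applied on every link. The exact conventions intended by the original solution may differ.

    rate = 4800          # bps on every link
    links = 5            # "5 hops" interpreted as 5 links
    hop_delay = 0.02     # per-hop delay in seconds

    # Circuit switching: call setup + one transmission of the whole message + per-hop delays
    circuit = 0.5 + 1200 / rate + links * hop_delay        # 0.5 + 0.25 + 0.10 = 0.85 s

    # Packet switching (store-and-forward, packets of 1024 and 240 bits)
    t1, t2 = 1024 / rate, 240 / rate
    # the first packet crosses every link; the second follows one transmission time behind
    packet = links * (t1 + hop_delay) + t2                 # about 1.22 s

    print(circuit, packet)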
Q1. If there is no buffer at the router, each incoming packet directly forwarded further
onto the outgoing port. In this situation which kind of delay is negligible?
a) Processing delay
b) Queuing delay
c) Transmission delay
d) Propagation delay
Q2. Host X is connected to Y via switch S. The link bandwidth is 10Mbps and
propagation delay on each link is 20μs. S is a store and forward switch, it begins
retransmitting a received packet 35μs after it has finished receiving it. Calculate the
total time required to transmit 10,000 bits from X to Y.
a) As a single packet
b) As two 5,000 bit packets sent one right after the other
...................................................................................................
....................................................................................................
....................................................................................................
....................................................................................
1.8 SUMMARY
In this unit, we understood the concepts of packet switching. Network layer follows
the concept of packet switching as packet is the basic data unit used at this layer.
There are two types of packet switching techniques, virtual circuit and datagram
approach. In virtual circuit approach, before sending any data between a source
destination pair, end to end logical connection needs to be established between them.
Datagram approach is a connection less service. There is no handshaking between
source and destination and each packet follows its own route.
1.9 SOLUTIONS/ANSWERS
2) c
3) a
4) The virtual circuit approach decides the output port on the basis of the VC-ID of a packet, whereas the datagram approach decides the output port on the basis of the destination address mentioned in the packet. If a router fails, only the packets waiting in the queue of that router are lost in the datagram approach; however, in the virtual circuit approach, all the connections passing through that router (whose state information is maintained in that router) are lost.
2) b
3) a
2) b
4) There are policies used to avoid congestion, known as open loop congestion control policies. But if congestion has occurred, other policies are used to remove the congestion.
The following open loop congestion control policies can be used to avoid congestion. The receiver's acknowledgement policy can control congestion to some extent: for example, if the receiver sends an acknowledgement packet only after receiving several packets, this slows the sender down and avoids the burden of sending many acknowledgement packets. Another policy, implemented by routers, is that a router can foresee the possibility of congestion and, if there are chances of congestion, reject new virtual connection requests. The sender can implement a retransmission policy to avoid adding to the problem of congestion: how long the sender has to wait, or after how many lost packets a packet needs to be retransmitted, should be designed in such a way that retransmissions do not add more congestion to the network. Sometimes, even after taking all the preventive measures, congestion occurs. In this situation, closed loop congestion control policies are used to avoid getting stuck in a deadlock.
These policies are mainly about informing everyone about the congestion situation. One of the policies is sending a choke packet: a choke packet is a control packet sent by the router to the source node, informing the sender that congestion has occurred. Another is signalling, in which a signal is sent by the congested node to inform the sender about congestion; rather than sending an explicit packet like a choke packet, here a signal is carried within the existing packets carrying data.
Check Your Progress 4
1) b
UNIT 2 ROUTING ALGORITHMS
2.0 Introduction
2.1. Objectives
2.2. Flooding
2.3. Shortest Path Routing Algorithm
2.4. Distance Vector Routing
2.4.1. Comparison
2.4.2. The Count-to-Infinity Problem
2.5. Link State Routing
2.6. Hierarchical Routing
2.7. The Internet Protocol (IP)
2.7.1. IPV4 addressing
2.7.2. Datagram Format
2.7.3. IP Datagram Fragmentation
2.7.4. IP V6
2.7.5. Internet control message protocol
2.7.6. Dynamic host configuration protocol
2.7.7. IP Security
2.8. Routing with Internet
2.8.1. Intra Autonomous System Routing in the Internet: RIP & OSPF
2.8.2. Inter Autonomous System Routing BGP
2.9. Multicast Routing
2.10. Mobile IP
2.11. Summary
2.12. Solution/Answers
2.13. Further Readings
2.0 INTRODUCTION
The network layer is responsible for finding the optimal route from a source to a destination. Multiple paths may exist between a pair of source and destination; the path with minimum cost is considered the optimal route. Routing algorithms construct and maintain a table called the routing table, which is consulted when looking for a route. A routing algorithm is responsible for selecting the most appropriate route in the network between source and destination. The router is the network layer device responsible for performing routing in the network. A cost is associated with each path in the form of bandwidth, delay, congestion, security, etc. The router performs routing and selects the minimum cost path for all remote networks. A router implements a number of algorithms to find optimal routes. Based on the criteria for the optimal path and the requirements of the network traffic, an appropriate routing algorithm can be chosen.
In this unit, section 2.2 is about flooding, which uses broadcasting. In section 2.3 the shortest path routing algorithm, i.e. Dijkstra's algorithm, is discussed. In section 2.4 the distance vector routing algorithm (the Bellman-Ford algorithm) is discussed; the comparison between Dijkstra's algorithm and the Bellman-Ford algorithm and the count-to-infinity problem are also covered in this section. Section 2.5 covers the link state routing protocol and its working. In section 2.6 hierarchical routing is discussed. Section 2.7 deals with the Internet Protocol (IP); in this section IPv4 and IPv6 along with ICMP, DHCP and IP security are covered. Section 2.8 discusses routing in the Internet and the protocols RIP, OSPF and BGP. Section 2.9 is about multicast routing. In section 2.10 Mobile IP is introduced. Section 2.11 summarizes the unit, section 2.12 provides solutions to the review questions, and section 2.13 lists further readings.
2.1 OBJECTIVES
2.2 FLOODING
Figure 1: Packet Flooding (a) without spanning tree (b) With spanning tree
Construction
Consider figure 1(a). Suppose a change in topology is observed by node R1. R1 will send notification packets to R2 and R3. R2 will send the packet to R4 and R5, but will not send it back to R1 (as it received the packet from R1). In a similar way, node R3 will send the packet to R4 and R6. Node R4 receives the same notification packet, originated by the same origin with the same sequence number, from both R2 and R3; R4 will forward the packet that arrives first and discard the later one. Similarly, R6 will discard the later one and forward the first received packet to R7. Node R5 receives packets from both R2 and R4, and in the same way the packet that arrives first is forwarded and the later one is discarded. R7 will receive the packet from both R5 and R6.
Another way to reduce the redundancy of packets in the network and to avoid cyclic forwarding of packets is to construct a logical spanning tree of the topology.
Considering L as the number of bi-directional links in the network, for a packet to be broadcast the total number of packet transmissions lies between L and 2L. Arrows on the links show packet transmissions, with the time of transmission (assumed to be 1 unit for each packet) indicated. In figure 1, flooding is shown with both methods, without and with spanning tree construction. In flooding without a spanning tree, the number of packets transmitted is in general much larger than in the case with a spanning tree. Also, in both cases, the broadcast packet reaches all nodes within the same time. For a graph, many spanning trees are possible, hence the flooding time depends on the spanning tree constructed.
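A compact sketch of controlled flooding with duplicate suppression is given below; the adjacency list is adapted from the description of figure 1 and the names are illustrative. Each node remembers the (origin, sequence number) pairs it has already processed, forwards a new packet on every link except the one it arrived on, and silently discards duplicates.

    from collections import defaultdict

    links = {                      # adjacency list of the example topology
        "R1": ["R2", "R3"], "R2": ["R1", "R4", "R5"], "R3": ["R1", "R4", "R6"],
        "R4": ["R2", "R3", "R5", "R6"], "R5": ["R2", "R4", "R7"],
        "R6": ["R3", "R4", "R7"], "R7": ["R5", "R6"],
    }
    seen = defaultdict(set)        # node -> set of (origin, sequence) already processed
    transmissions = 0

    def flood(node, came_from, origin, seq):
        global transmissions
        if (origin, seq) in seen[node]:        # duplicate: discard
            return
        seen[node].add((origin, seq))
        for neighbour in links[node]:
            if neighbour != came_from:         # never send back on the incoming link
                transmissions += 1
                flood(neighbour, node, origin, seq)

    flood("R1", None, "R1", seq=1)             # R1 notices a topology change
    print(transmissions)                       # total number of packet transmissions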
Bellman-Ford Algorithm
Each router also constructs and maintains a routing table, known as the distance vector table, storing the path cost (in terms of distance or hop count) to reach all feasible destination nodes from itself. The path cost is calculated using the information received in the neighbours' distance vectors.
A router maintains following information for Distance Vector table -
Router ID (each router has an ID)
Link cost for each link connected to a router
Intermediate hops
Initially the distance vector table is initialized as follows:
Cost to itself = 0
Cost to all other routers = infinity (∞)
(Figure: a three-node topology with nodes X, Y and Z; link costs c(X,Y) = 3, c(Y,Z) = 1 and c(X,Z) = 5.)
Step 1: Each node knows the distance to its direct neighbour nodes. The distance to itself is 0, and the distance to nodes not discovered yet is considered to be ∞.
Initially, the DV/routing table of each node contains only what it knows directly:
Dx = (X: 0, Y: 3, Z: 5)
Dy = (X: 3, Y: 0, Z: 1)
Dz = (X: 5, Y: 1, Z: 0)
The nodes then exchange these vectors with their neighbours and apply the Bellman-Ford update. For example, X recomputes its cost to Z as
dx(z) = min { [c(x,y) + dy(z)], [c(x,z) + dz(z)] } = min { [3 + 1], [5 + 0] } = 4
(The original figures showed the distance tables at X, Y and Z being updated step by step after each exchange of distance vectors: X updates its cost to Z from 5 to 4, going via Y, and Z likewise updates its cost to X from 5 to 4 via Y, while Y's vector (3, 0, 1) remains unchanged.)
Finally, the converged routing table, identical at every node, is:

        X    Y    Z
X       0    3    4
Y       3    0    1
Z       4    1    0
At the end of the convergence process, the DV information of all the nodes is the same, and remains so until a new change in the topology occurs.
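The converged values above can be reproduced with a small Python sketch of the Bellman-Ford update Dx(dest) = min over neighbours v of [c(x,v) + Dv(dest)], run here as a simple centralized loop over the three-node example (a real distance vector protocol would instead run this in a distributed fashion, each node using only the vectors advertised by its neighbours).

    INF = float("inf")
    cost = {"X": {"Y": 3, "Z": 5}, "Y": {"X": 3, "Z": 1}, "Z": {"X": 5, "Y": 1}}
    nodes = ["X", "Y", "Z"]

    # initial vectors: 0 to itself, direct link cost to neighbours, infinity otherwise
    D = {u: {v: (0 if u == v else cost[u].get(v, INF)) for v in nodes} for u in nodes}

    changed = True
    while changed:
        changed = False
        for u in nodes:
            for dest in nodes:
                if dest == u:
                    continue
                best = min(cost[u][v] + D[v][dest] for v in cost[u])
                if best < D[u][dest]:
                    D[u][dest] = best
                    changed = True

    print(D["X"])   # {'X': 0, 'Y': 3, 'Z': 4} after convergence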
2.4.1 Comparison
The comparison of the two routing algorithm approaches should be based on the processing time for a new path and on the traffic generated by each during the routing information convergence process. The evaluation of an algorithm also depends on the implementation approach and the specific implementation.
The discussed routing algorithms can be compared on following points:
1. Message complexity
• Link State algorithm: the link state algorithm sends on the order of O(nE) messages, with n nodes and E links.
• Distance Vector algorithm: messages are exchanged between directly connected neighbours only.
2. Speed of Convergence
• Link State algorithm: it takes on the order of O(n^2) to converge, where n is the number of routing nodes.
• Distance Vector algorithm: convergence time is not fixed and varies due to situations like:
– possible routing loops
– the count-to-infinity problem
3. Robustness: Robustness is the confidence of getting the correct result under
any circumstance.
• Link State algorithm:Link State algorithm can face issues like:
• node can advertise incorrect link cost
• each node computes only its own table
• Distance Vector algorithm:Distance Vector algorithm can face issues
like:
• Distance Vector node can advertise incorrect path cost
9
Routing Algorithms
• each node’s table used by others, so if a node shares incorrect
path it could be propagated further to the network on a large
scale.
(Figure: a linear topology R1 — R2 — R3 — R4.)
The routing table for above topology can be (considering the distance of each
link is to be 1 unit). Each cell shows the pair (distance, predecessor node):
R1 R2 R3 R4
R1 0, - 1, R1 2, R2 3, R3
R2 1, R2 0, - 1, R2 2,R3
R3 2, R2 1, R3 0, - 1, R3
R4 3, R2 2, R3 1, R4 0, -
From the above table it can be seen that, in such a situation, the network will never be able to converge. The root cause of this issue is the sharing of routing information with the node (R2) from which the node (R3) first discovered the destination (R1).
As discussed above, one possible resolution of this problem is route poisoning, and another is split horizon.
The split horizon rule says that information about the path to a destination (say R1) is never sent back in the direction from which it was received, i.e. R3 discovered node R1 through R2, so R3 will not send back to R2 the path information about R1 that it received from R2.
2.5 LINK STATE ROUTING
The distance vector routing algorithm is driven by the sharing of a node's own routing information with its neighbours, which leads to routing challenges such as the count-to-infinity problem: in the DV routing algorithm, incorrect information ("rumours") can spread quickly and gain strength. For these reasons, a new routing algorithm was introduced, namely the link state routing algorithm, also known as shortest path first.
The link state routing approach is inspired by a road navigation map. In contrast to the DV approach, in LS each router has a complete view of the network topology. In a link state protocol, each router shares information about itself, its directly connected links, and the state of those links. Instead of sharing routing information (a routing table containing costs) as in DV, in link state the information shared concerns the status of the links. On receiving information shared by other routers, each router keeps a copy of it and passes it on further without any change. Each router then independently computes the best route (the route with minimum cost) to every possible destination in the topology. That is, after convergence each router has the map (topology) of the entire network. In link state routing each router has the same routing information. When there is a change in the topology, the directly affected router sends the changed information to all routers in the topology.
Link state routing protocol maintains three tables namely: neighbor table,
topology table and actual routing table to perform routing.
Link state routing protocols are the most widely used protocols in the Internet.
Some of the widely used link state protocols are: Open Shortest Path First
(OSPF) and Intermediate System to Intermediate System (IS-IS).
Working of the Link State Routing protocol:
Link state routing protocol can be divided into five parts as written by
Tanenbaum. Each router of the topology uses link state routing protocol
performs following actions:
1. Building of Neighbour table:
In link state algorithm a special type message/packet namely: HELLO is
used to discover neighbour nodes in the network. A router sends HELLO
message on each of its connected link. Neighbor routers reply with their
network addresses. The router uses this information and the port on which
it received this information to build up its neighbor table.
2. Path cost measurement to neighbour nodes:
The path cost may be a composite metric with factors like the end-to-end delay, throughput, or a combination of these.
3. Once the node has discovered its neighbours and their path costs, it constructs a packet called a link state packet (LSP) including the link cost to these neighbours. The structure of the LSP is shown in the table below. This packet is broadcast in the network.
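The LSP table itself is not reproduced in this excerpt, so the following is only a hypothetical sketch of the fields such a packet typically carries according to the description above: the originating router, a sequence number, an age (lifetime) value, and the list of neighbours with their link costs. All names are illustrative.

    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class LinkStatePacket:
        origin: str                    # ID of the advertising router
        sequence: int                  # used to recognise old or duplicate LSPs
        age: int                       # remaining lifetime of this LSP, in seconds
        neighbours: Dict[str, int] = field(default_factory=dict)  # neighbour -> link cost

    lsp = LinkStatePacket(origin="R1", sequence=42, age=60,
                          neighbours={"R2": 4, "R3": 1})
    print(lsp)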
One of the major drawbacks to the link state routing protocols is that the
CPU overhead to recalculate the route due to change in topology is very high.
Another drawback is the amount of memory required to store the routing
information i.e. the neighbor tables, routing table and the full map of the
topology.
If a node advertises wrong neighbor information, the error is propagated to the
whole topology.
As discussed in link state and distance vector routing algorithms, each router
has to store routing information in the form of routing table. In the routing
table router stores information of path for remote networks i.e. the path cost,
the exit interface.
The amount of routing information to be stored is directly proportional to the number of routers in the network. For a small network with a few routers, the routing information to be stored can be handled easily, whereas for a network with a large number of routers, the amount of routing information to be stored is huge. The purpose of routers is to route the packet (find the path) to the destination. As a result, the routing tables become big in size and consume more space on the router, as well as more bandwidth in the network when they are shared.
To overcome this problem, instead of a flat structure the network can be
designed as a hierarchical structure.
Consider the following example with a distance vector routing algorithm: node A has to store 21 entries in its routing table (considering each path to have a cost of 1 unit). The exit interface is the interface through which the destination is reached (here I1 and I2 are the interfaces of A).
(Figure: an example flat network of routers A to U; router A has two interfaces, I1 and I2.)
From table and figure above, it is visible that the traffic due to exchange of
these routing tables will be high.
One possible solution is to divide the routers into small groups (called regions), so that each router has to store routing information only for the routers of the region it belongs to; for any other region, a router stores just one entry collectively for all routers of that region.
In the example discussed here, the complete network can be divided into 6 regions, as shown below:
(Figure: the same network divided into six regions, Region 1 to Region 6; router A, with interfaces I1 and I2, belongs to Region 1.)
Let’s again construct the routing table of router A with hierarchical routing
approach:
A’s Routing table for Hierarchical routing
Destination Exit Interface Cost
A --- ---
B I1 1
C I2 1
Region 2 I1 2
Region 3 I1 3
Region 4 I1 4
Region 5 I2 3
Region 6 I2 2
If A wants to send packets to any router in region 3 (E, F or G), it sends them
to the interface I1. From the above table it is clear that the routing table size is
reduced leading to improved efficiency due to less overhead of the traffic in
the network.
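A small sketch of how A's hierarchical table could be consulted is shown below: look for an exact router entry first, otherwise fall back to the entry for the destination's region. The table values come from A's routing table above; the router-to-region mapping is assumed for illustration.

    table = {   # destination -> (exit interface, cost)
        "B": ("I1", 1), "C": ("I2", 1),
        "Region 2": ("I1", 2), "Region 3": ("I1", 3), "Region 4": ("I1", 4),
        "Region 5": ("I2", 3), "Region 6": ("I2", 2),
    }
    region_of = {"E": "Region 3", "F": "Region 3", "G": "Region 3"}   # assumed mapping

    def route(dest):
        if dest in table:                  # a directly known router
            return table[dest]
        return table[region_of[dest]]      # otherwise use the region entry

    print(route("B"))   # ('I1', 1)
    print(route("F"))   # ('I1', 3) -- resolved via Region 3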
Hierarchical routing further can be classified into levels. In the example
discussed above a two-level hierarchical routing is implemented. The level of
hierarchical routing is chosen according to the size (number of routers) of the
network. A three or four level hierarchical routing can also be used. In a three-
level hierarchical routing, the network is divided into a number of clusters, where each cluster contains a number of regions, and each region contains a number of routers. In the Internet, hierarchical routing is commonly used on a wide scale.
Address representations
IPv4 addresses are commonly written in dot-decimal notation, in which the 32 bits are divided into 4 octets separated by periods (.), and each octet is written in decimal format. For example, 172.16.1.32 is an address in dot-decimal notation. For computational purposes, it is sometimes convenient to use the binary notation of IPv4 addresses.
The 32 bits of the IPv4 addresses are divided into two parts: network portion
and host portion. Five classes of IPv4 addresses are defined
Private networks
Private IP addresses are used in private networks managed by a single authority. Private IP addresses are reusable among private networks, i.e. a private IP address used in one private network can be reused in another private network. Private IP addresses are not routable in the public Internet; that is, they are not recognized by public routers. Therefore, hosts with private IP addresses cannot communicate with public networks directly; a network address translation (NAT) system is needed for this purpose.
IPv4 address classes:
IPv4 address range is classified into five classes; A through E. These classes
are identified by the first octet of the IP address. The details are as shown in
table below:
Class A: 1st octet 0 to 127 in decimal (00000000 to 01111111 in binary); the most significant bit of the 1st octet is always 0. The 2nd, 3rd and 4th octets can take any value between 0 and 255.
Class B: 1st octet 128 to 191 in decimal (10000000 to 10111111 in binary); the first two bits of the 1st octet are always 10. The remaining octets can take any value between 0 and 255.
Class C: 1st octet 192 to 223 in decimal (11000000 to 11011111 in binary); the first three bits of the 1st octet are always 110. The remaining octets can take any value between 0 and 255.
Class D: 1st octet 224 to 239 in decimal (11100000 to 11101111 in binary); the first four bits of the 1st octet are always 1110. The remaining octets can take any value between 0 and 255.
Class E: 1st octet 240 to 255 in decimal (11110000 to 11111111 in binary); the first four bits of the 1st octet are always 1111. The remaining octets can take any value between 0 and 255.
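A small Python sketch that classifies an address by its first octet, following the ranges in the table above:

    def ipv4_class(address):
        first = int(address.split(".")[0])
        if first <= 127:
            return "A"
        if first <= 191:
            return "B"
        if first <= 223:
            return "C"
        if first <= 239:
            return "D"
        return "E"

    print(ipv4_class("10.0.0.1"))      # A
    print(ipv4_class("172.16.1.32"))   # B
    print(ipv4_class("224.0.0.5"))     # D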
The data unit of IP is called a packet. For each IP packet, control information is added which is used by intermediate nodes to deliver it successfully to the destination and by the end nodes to confirm the correctness of the message. This control information, added at the start of the content, is called the header of the packet. An IP packet has two sections: a header section (with the IP control information) and a data section (the payload handed over by the upper layer).
Header of IP packet
The header of the IPv4 packet consists of 14 fields, of which the first thirteen are mandatory and the 14th field (Options) is optional. The header of the IP packet is laid out in big-endian format (most significant byte first), and the most significant bit is numbered 0. The version of the IP protocol is the first field of the header (the four most significant bits of the 1st byte). The structure of the IP header is shown in the figure below.
Version
The first field of the IP header contains the protocol version of the Internet Protocol (IP). This field is four bits long. For IPv4, as it is the 4th version of IP, the version field contains 4 (0100).
Internet Header Length (IHL)
As the header of the IPv4 packet is not of fixed size, due to the 14th field (Options), it is necessary to include the size of the header so that the receiver is able to separate the header fields from the payload of the packet. This field is 4 bits long. The minimum value of IHL is 5 and the maximum is 15. The header length is obtained by multiplying the IHL value by 32 bits, i.e. an IHL value of 5 means 5 × 32 bits = 160 bits = 20 bytes. Since the maximum value of IHL is 15 (it is a 4-bit field), the maximum size of the IPv4 header is 15 × 32 bits = 480 bits = 60 bytes.
Differentiated Services Code Point (DSCP)
This field is used to specify the type of service (ToS) for the packet in
transmission. At present this field specifies differentiated services (DiffServ).
This field is commonly used by the real-time data streaming applications. An
example is Voice over IP (VoIP), which is used for interactive voice services.
Explicit Congestion Notification (ECN)
This field is used to provide end-to-end congestion control mechanism to avoid
dropping of packets.
Total Length
This field is of the size 16-bits. This field defines the size of the entire packet
in bytes, that is header and data. This field can take a value between 20 bytes
(only header with no data) and 65,535 bytes.
Identification
This field is used to uniquely identify the group of fragments (created when a packet is broken into smaller pieces due to constraints of routers or network links) belonging to a single IP datagram.
Flags
A total of 3 flags are defined each with 1 bit. The purpose of these flag values
is to control or identify fragments. These flags are as follows:
If the DF (Do not Fragment) flag is set and the packet is larger than the MTU (Maximum Transmission Unit) value, that is, fragmentation would be required, the packet is dropped.
The MF field denotes that there are more fragments available after this one of
the original packet. MF flag is 0 (zero) for unfragmented packets, and the last
fragment of a packet. The last fragment of a packet has a non-zero Fragment
Offset field, differentiating it from an unfragmented packet.
Fragment Offset
This field is 13 bits in size. The fragment offset is measured in units of 8-byte blocks, and it represents the position of a particular fragment with respect to the beginning of the original (unfragmented) IP packet. The first fragment has an offset of zero. The maximum value of this field corresponds to (2^13 − 1) × 8 = 65,528 bytes, which would exceed the maximum IP packet length of 65,535 bytes once the header length is included (65,528 + 20 = 65,548 bytes).
Time To Live (TTL)
Under some circumstances packets get stuck in loops in the network and consume network bandwidth unnecessarily. This field, which is 8 bits in size, prevents packets from living for an infinitely long time in the Internet. The value is nominally in seconds, but time intervals of less than 1 second are rounded up to 1; in practice the field is set to a number known as the maximum hop count limit. When the packet arrives at a router, the TTL value is decremented by one, and the router drops the packet if it finds the TTL value to be 0 (zero).
Protocol
This field contains the protocol used at the upper layer (transport layer) in the
data portion of the IP datagram.
Header Checksum
IP supports error-checking of the header. On arrival of the packet at a router,
the checksum is calculated again of the header and compared with the
checksum field of the header. If both values do not match, the router discards
the packet. At each intermediate router the TTL field value is decreased by one, so the router must recalculate the checksum value of the header.
Source address
The IPv4 address of the sender of the packet is included in this field, which is 32 bits in size. If the sender belongs to a private network (i.e. has a private IP address), this address has to be changed in transit by a network address translation (NAT) device.
Destination address
The IPv4 address of the receiver of the packet is included in this field. If the receiver belongs to a private network (i.e. has a private IP address), this address has to be changed in transit by a network address translation (NAT) device.
Options
In general this field is not used while forming IP packets.
When a packet arrives at a router, its destination address is examined to determine the outgoing link on which it has to be forwarded. Once the outgoing link is identified, its MTU is determined. If the packet size is more than the MTU value, and the 'Do not Fragment (DF)' flag in the IP packet header is 0 (zero), then the router fragments the packet into smaller parts which are sent one by one on the link. The maximum allowed size of any fragment is the MTU value minus the IP header size (which ranges from 20 bytes to 60 bytes).
Consider the following example: the MTU of the exit link is 1500 bytes, and a datagram of size 4000 bytes, with identification number 777 and the DF flag bit set to 0, is received (this size includes the 20-byte IP header). Here, the MTU is 1500 bytes, i.e. the maximum packet allowed on the link is a 20-byte header plus 1480 bytes of payload.
So, for the 1st fragment:
– Payload/ data in the packet =1480 bytes
– offset = 0 (data of this packet should be inserted at byte 0 at the
time of reassembling)
– identification number = 777
– MF flag value= 1 (there are more fragments after this fragment
of the original packet)
2nd fragment:
– Payload/ data in the packet =1480 bytes
– offset = 1480 (data of this packet should be inserted after byte
1480 at the time of reassembling)
– identification number = 777
– MF flag value= 1 (there are more fragments after this fragment
of the original packet)
3rd fragment:
– Payload/data in the packet = 1020 bytes (= 3980 − 1480 − 1480)
– offset = 2,960 (data of this packet should be inserted after byte
2960 at the time of reassembling)
– identification number = 777
– MF flag = 0 (this is the last fragments of the original packet)
Reassembly of the fragmented parts of the packet is performed only at
the end system, routers are not allowed to reassemble the fragments in
transit.
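The arithmetic of this example can be reproduced with the small Python sketch below (offsets are shown in bytes, as in the example above; the value actually stored in the Fragment Offset field would be the byte offset divided by 8).

    def fragment(total_len, mtu, header=20, ident=777):
        data_len = total_len - header
        max_data = (mtu - header) // 8 * 8     # data per fragment, a multiple of 8 bytes
        fragments, offset = [], 0
        while data_len > 0:
            size = min(max_data, data_len)
            data_len -= size
            fragments.append({"id": ident, "data_bytes": size,
                              "offset_bytes": offset, "MF": 1 if data_len > 0 else 0})
            offset += size
        return fragments

    for f in fragment(4000, 1500):
        print(f)
    # {'id': 777, 'data_bytes': 1480, 'offset_bytes': 0,    'MF': 1}
    # {'id': 777, 'data_bytes': 1480, 'offset_bytes': 1480, 'MF': 1}
    # {'id': 777, 'data_bytes': 1020, 'offset_bytes': 2960, 'MF': 0}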
2.7.4 IPv6
Due to the increase in the number of Internet users, IPv4 addresses have been exhausted. Hence, the size of the IP address had to be increased to accommodate all the users of the Internet. IPv6 is the 6th version of the Internet Protocol, and the size of an IPv6 address is 128 bits. Interoperability between IPv4 and IPv6 is not provided, and thus shifting to IPv6 was not easy at all; there was a need for an intermediate system which sits between these two protocols and acts as a converter for both. Several transition mechanisms have been proposed to make communication between these two protocols possible.
IPv6 not only provides a large addressing space but also permits hierarchical
address allocation methods that facilitate route aggregation across the Internet,
and thus limit the size of routing tables even in a very large network.
Address Representation
IPv6 addresses are represented in hexadecimal format. The 128 bits of the address are divided into eight groups of 16 bits each, separated by colons, with each group written as 4 hexadecimal digits. An example of an IPv6 address is
2001:0db8:0000:0000:0000:8a2e:0370:7334. Further, the IPv6 address can be
shortened by omitting continuous 0(zero) groups and placing double colon (::)
instead. The leading 0 (zeros) in a group can also be omitted. For example,
2001:0db8:0000:0000:0000:8a2e:0370:7334 address can be written as -
2001:db8::8a2e:370:7334.
The host (interface identifier) portion of an IPv6 address is fixed at 64 bits, leaving the remaining 64 bits for the network/subnet prefix.
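Python's standard ipaddress module can be used to check these shortening rules on the example address from the text:

    import ipaddress

    addr = ipaddress.ip_address("2001:0db8:0000:0000:0000:8a2e:0370:7334")
    print(addr.compressed)   # 2001:db8::8a2e:370:7334
    print(addr.exploded)     # 2001:0db8:0000:0000:0000:8a2e:0370:7334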
IPv6 packets
The header consists of a fixed portion and a variable/optional portion. The
fixed portion is 40 octets in size and consists of eight compulsory fields to
support the minimal functionality required for all packets; optional extension
headers implement special features.
The Version field defines the version of the IP protocol; here its value will
always be 0110 (version 6). The Traffic Class and Flow Label fields are used to
provide traffic-specific QoS. As the fixed header has a constant length, a
header-length field is not necessary here. But there is a need to identify where
the packet ends, so the Payload Length field is added, which gives the size of
everything that follows the fixed header (extension headers plus payload). After the
fixed header, either the optional extension headers or the payload is inserted. The Next Header
field helps the receiver interpret the data that follows the header.
The "Next Header" field of the last option points to the upper-layer protocol
that is carried in the packet's payload. Without options, at most 64 KB of
payload can be inserted.
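As a rough illustration of the fixed header just described, the sketch below unpacks a 40-byte IPv6 fixed header into its eight fields. The field layout follows RFC 8200; the function name is our own.

import struct
import ipaddress

def parse_ipv6_fixed_header(raw):
    # Version(4) | Traffic Class(8) | Flow Label(20), Payload Length(16),
    # Next Header(8), Hop Limit(8), Source(128), Destination(128) = 40 bytes.
    first_word, payload_len, next_header, hop_limit, src, dst = struct.unpack(
        "!IHBB16s16s", raw[:40])
    return {
        "version": first_word >> 28,
        "traffic_class": (first_word >> 20) & 0xFF,
        "flow_label": first_word & 0xFFFFF,
        "payload_length": payload_len,      # bytes following the fixed header
        "next_header": next_header,         # e.g. 6 = TCP, 17 = UDP
        "hop_limit": hop_limit,
        "source": ipaddress.IPv6Address(src),
        "destination": ipaddress.IPv6Address(dst),
    }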
2.7.7 IP Security
2.8.1 Intra Autonomous System Routing in the Internet: RIP & OSPF
Intra-autonomous system routing protocols are responsible for providing
routing capabilities to routers within an autonomous system (an autonomous
system (AS) is a very large network or group of networks with a single routing
policy). Intra-AS routing protocols are also known as interior gateway
protocols. RIP (the Routing Information Protocol) and OSPF (Open Shortest
Path First) are the most widely used intra-AS routing protocols.
1. RIP
RIP (Routing Information Protocol) is based on the distance vector routing
algorithm and uses the Bellman-Ford approach. RIP is an open standard protocol
supported by routers of most vendors, including Cisco. RIPv2 is
capable of preventing routing loops in the network; a maximum hop count
value of 15 is used for this purpose. RIPv2 also implements mechanisms like
split horizon, route poisoning and hold-down to prevent the spreading of rumours
and routing loops. RIP is suitable for a small-size network. RIP uses UDP as its
transport layer protocol, making it a lightweight protocol.
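The distance-vector update that RIP is built on can be sketched in a few lines. This is only an illustration of the Bellman-Ford relaxation with a hop-count 'infinity' of 16, not of the RIP message format; the router and destination names are invented.

INFINITY = 16    # RIP treats a metric of 16 as unreachable (maximum hop count 15)

def dv_update(my_table, neighbour, link_cost, advertised):
    # my_table: destination -> (cost, next hop); advertised: destination -> cost.
    changed = False
    for dest, cost in advertised.items():
        new_cost = min(link_cost + cost, INFINITY)
        old_cost, _ = my_table.get(dest, (INFINITY, None))
        if new_cost < old_cost:
            my_table[dest] = (new_cost, neighbour)
            changed = True
    return changed

table = {"A": (0, None)}   # router A's own entry
dv_update(table, neighbour="B", link_cost=1, advertised={"A": 1, "C": 2, "D": 15})
print(table)   # C becomes reachable via B at cost 3; D is not added because it would hit the limit of 16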
2. Open Shortest Path First (OSPF):
OSPF is a link-state intra-AS routing protocol based on Dijkstra's shortest path algorithm. The table below contrasts RIP and OSPF:

SR.NO. | RIP | OSPF
2. | Routing Information Protocol is based on the Bellman-Ford algorithm. | Open Shortest Path First protocol is based on Dijkstra's algorithm.
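For comparison, the link-state side can be illustrated with a minimal Dijkstra computation over a hypothetical three-router topology (the graph and link costs below are invented for the example).

import heapq

def dijkstra(graph, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

graph = {
    "R1": {"R2": 1, "R3": 4},
    "R2": {"R1": 1, "R3": 2},
    "R3": {"R1": 4, "R2": 2},
}
print(dijkstra(graph, "R1"))   # {'R1': 0, 'R2': 1, 'R3': 3}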
BGP can connect ASs irrespective of the topology used. The only
requirement to connect many ASs together is that each AS should have
at least one router running BGP. BGP is responsible for exchanging
network reachability information with other BGP systems. BGP constructs
a graph of ASs based on the information exchanged between BGP routers.
(Figure: spanning tree rooted at router R1, with routers R2 and R3 marked with the multicast groups (1, 2) they have members of and links labelled with their costs.)
Figure below shows the pruned spanning tree for group 1 for the spanning tree
constructed for R1 above.
(Figure: pruned spanning tree of R1 for group 1.)
Similarly, the pruned spanning tree for the group 2 of spanning tree of router
R1 is as shown in figure below:
(Figure: pruned spanning tree of R1 for group 2.)
After pruning is completed, the multicast packets are forwarded only along the
corresponding spanning tree. Since the basic requirement of this algorithm is to
store a separate pruned spanning tree for every group, this method is not
suitable for large networks.
2.10 MOBILE IP
The increasing number of mobile devices with Internet access led to the invention
of a new, modified protocol for these devices, namely Mobile IP. This
protocol is designed by extending the standard Internet Protocol. It is designed
keeping in mind the mobility of devices, and it gives users the ability
to switch to another network with the same IP address
without dropping the connection.
This protocol allows location-independent routing of IP packets throughout the
Internet. A mobile device is always recognised by its assigned home address,
irrespective of its current location in the Internet.
2.11 SUMMARY
In this unit we have learnt about the routing of a packet, that is, how a packet
reaches the destination from the source following the best route. Shortest path
routing is a simple-to-understand and easy-to-implement hop-count based routing
approach. It is static in nature, meaning a graph is constructed by the source node
for all the destination nodes in the network. Dijkstra's algorithm is one of the most
widely used shortest-path based routing algorithms; it is also known
as a greedy-approach based routing algorithm. Distance Vector (DV) routing
is another solution for routing in the network. The DV approach is
dynamic in nature: each node builds its routing table from the distance vectors
advertised by its direct neighbours, rather than from a complete map of the network
topology. Each node constructs the routing table with a cost and a next-hop (vector)
component for each remote network. The distance vector routing
algorithm is based on the Bellman-Ford equation. The distance-vector based
approach faces the count-to-infinity problem, which is addressed by
implementing split horizon and route poisoning together. Both of the above methods
of routing are not suitable in a large network due to the huge traffic generated by
routers to exchange routing information. Hierarchical routing is one of the
possible solutions to perform routing in large networks with a huge number of
routers: the complete network is divided into smaller sub-networks arranged in a
hierarchy. Further, we have learnt about the Internet Protocol and its
two versions: IPv4 and IPv6. The address space of IPv4 is 32 bits in size
and that of IPv6 is 128 bits. IPv6 also includes optional header fields; one
of the features provided by IPv6 as an optional header is IPSec. ICMP and
DHCP are major network management protocols. ICMP is used for many
services like congestion control, flow control, network diagnosis, etc. DHCP is
responsible for assigning IP address, subnet mask, gateway and DNS
information to the clients in a network. It is very difficult to manage this
information manually, so it is managed efficiently by deploying a DHCP server.
2.12 SOLUTIONS/ANSWERS
Review Questions:
Solution:
1)
2) Distance vector routing algorithm faces the count to infinity problem. The
convergence is slow. Routing information is exchanged among direct neighbours
only. There is always a chance of rumours (false routing information) spreading in
distance vector routing, due to which a packet may enter a routing loop.
3) A spanning tree is a tree with no cycles, constructed such that all the
vertices are covered with the minimum possible number of edges.
UNIT 3 CONGESTION CONTROL
ALGORITHMS
3.0 Introduction
3.1 Objectives
3.2 Reasons For Congestion In The Network
3.3 Congestion Control Vs. Flow Control
3.4 Congestion Prevention Mechanism
3.5 General Principles Of Congestion Control
3.6 Open Loop Control
3.6.1 Admission Control
3.6.2 Traffic Policing And Its Implementation
3.6.3 Traffic Shaping And Its Implementation
3.6.3.1 Leaky Bucket Shaper
3.6.3.2 Token Bucket Shaper
3.6.4 Difference Between Leaky Bucket Traffic Shaper And Token Bucket Traffic Shaper
3.7 Congestion Control In Packet-Switched Networks
3.8 Summary
3.9 Solutions/Answers
3.10 Further Readings
3.0 INTRODUCTION
In the Internet, nodes acting as transmitters insert packets into the
Internet and nodes acting as receivers consume packets from the Internet.
The Internet has a certain capacity to handle the traffic load (packets). When the rate of
insertion of packets into the Internet is higher than the rate of consumption of packets
from the Internet, eventually the Internet is unable to handle the traffic and the performance
of its resources is degraded. This situation is termed congestion.
Hence, the goal of congestion control algorithms is to refrain the transmitter from
inserting more packets into the network than the handling capacity of the Internet.
In this unit, section 3.2 discusses the reasons for congestion in the network.
Section 3.3 differentiates congestion control from flow control. Congestion prevention
mechanisms are covered in section 3.4. In section 3.5, general principles of congestion
control are elaborated. Further, in section 3.6, the open loop congestion control
technique is discussed. Section 3.7 is about congestion control in packet-
switched networks. Section 3.8 summarizes the unit. Problems and their solutions
covering the entire unit are discussed in section 3.9. Section 3.10 enlists further
readings.
3.1 OBJECTIVES
In the Internet there can be several reasons for congestion to occur. When many
transmitters insert data packets onto input lines at the same time and these packets are to
be sent on the same output line, and the capacity of the output line is much less than
the rate at which packets are received, a long queue will build up for that output line. In this
situation, if the buffer memory is not big enough to hold all these packets, the extra
packets will be dropped. To stop the dropping of packets, even if the memory available is
made infinitely large, the congestion may be reduced but the overall quality
of service of the traffic will become worse, because by the time packets reach the
output line to be dispatched, their TTL (time to live) value has expired and
duplicate packets have already been inserted into the network. If all these packets were carried to
the final destination, the duplicate packets would only increase the traffic load in the
Internet and would be discarded due to time-out. So, it is better for the Internet to
drop these packets as soon as their TTL value expires.
Another reason for congestion in the Internet is the sluggish performance of the
processors of intermediate devices. If any intermediate router's CPU is
performing slower than the expected speed, its jobs (i.e. queuing buffers, routing
packets, updating tables, reporting exceptions, etc.) will be slowed down. The
arrival rate of packets at the input line then becomes greater than the rate of processing and
removal of packets from the output line. This again creates a situation of congestion.
Another point of issue is low bandwidth. Due to the low bandwidth capacity of the
lines, the amount of traffic queued in the network increases, causing congestion.
Resolution of any one of the issues discussed above will not handle the
congestion; instead it will just shift the bottleneck to some other point. The root cause
of the real problem is the mismatch of the capacity (computing or carrying) of
various components of the system. Once congestion happens in the network, the
routers respond to overloading by simply dropping the packets.
The bursty nature of traffic is one of the major causes of congestion. This could be
controlled by restricting the insertion of traffic to a uniform rate.
Congestion control and flow control are two different things, which are mixed up at
times. As discussed earlier, congestion control is a network-wide issue, whereas
flow control is about regulating the transmission of data between two devices on the
connection/link between them, not about what is happening in the devices between them.
Flow control is about point-to-point traffic control between sender and receiver for a
specific transmission, to avoid packet drops at the receiver. If the incoming traffic rate is
higher than the processing rate at the receiver, the receiver is flooded and packets will be
dropped. To overcome this situation, the receiver should send some kind of feedback
to the sender to inform it about the dropped packets and to ask it to slow down the sending
speed. This is called flow control between sender and receiver and is handled at the
transport layer, which is responsible for end-to-end data delivery. Congestion is a situation
when the traffic in the network is higher than the handling capacity of the network.
Congestion control is to restrict the traffic load below the handling capacity of the
network. As shown in the figure below, the congestion control techniques can be broadly
classified into two categories:
• Open loop: Methods to prevent or avoid congestion are classified as open loop
techniques. Open loop methods try to ensure that a congestion state never arises in the
network. Open loop policies are applied in the network to prevent congestion before it
happens. These congestion control policies are applied either at the source or at the
destination.
• Closed loop: These methods act to treat or alleviate the congestion once it happens.
Once the system enters a congestion state, closed loop techniques are used to detect it,
and then take action to bring the system out of it.
Open loop solutions are static in nature. These policies are not adaptive and
do not change according to the present state of the system. These methods take
decisions about when to accept packets, when to drop them, etc., without taking the
present state of the system into consideration. The open
loop congestion control methods are further classified on the basis of whether they
are applied at the source or at the destination.
Closed loop congestion control techniques are based on the concept of feedback. These
techniques are dynamic in nature and actions are taken during transmission. Some
system parameters are continuously measured in the network and, whenever a
congestion state is observed, the feedback system is used to take action to reduce the
congestion. Closed loop techniques work as per the following three steps:
Step 1: Monitor the system to detect when and where congestion occurs.
Step 2: Pass this information to the points where corrective action can be taken.
Step 3: Take the necessary actions to remove the located congestion.
Congestion in the network can be measured in terms of various metrics like the
average queue length, timed-out packets, delay, packets dropped due to unavailability
of buffer space, etc.
As discussed in the previous section, open loop methods are preventive measures for
congestion control.
Some of the open loop policies for congestion control are discussed
here:
Retransmission Policy:
A packet transmitted with a reliable data-delivery protocol gets dropped if its TTL value
expires before it reaches the destination. The sender has to retransmit such
packets until they are delivered successfully. More congestion in the network leads to
more packet drops, which leads to retransmission of these packets, which in turn adds
more traffic to the network and worsens the congestion.
To prevent congestion due to this issue, the value of the timers used by the retransmission
policy must be set such that a congestion state in the network is prevented while the
efficiency of the network is still optimised.
Window Policy:
Discarding Policy:
Routers have to adopt a good discarding policy such that congestion in the network is
prevented; at the same time a router must attempt to discard corrupted packets, or packets of
unreliable services (i.e. UDP), while maintaining the quality of the messages and keeping
the number of retransmissions of dropped packets low.
Packets transmitted with UDP services may be discarded before the packets
transmitted with reliable services, i.e. TCP. Video streaming over the Internet may
tolerate some loss of packets, while text messages may not tolerate the loss of any packet.
Acknowledgment Policy:
An acknowledgment is sent by the receiver to notify the sender whether a packet has been
received or not. Even though acknowledgement packets are small in comparison to the
data packets, they still add traffic load to the network. In order to reduce the
number of acknowledgement packets sent, the receiver should wait for the next
incoming packet and, if it is in sequence with the previous packet, send a single
cumulative acknowledgement for both packets instead of acknowledging each packet
individually. That is, sending a cumulative acknowledgement for packets received
in sequence can save the bandwidth of the network.
Admission Policy:
The policies discussed above can be used to prevent congestion before it happens in
the network.
Admission Control Methods:
Admission control methods are generally classified into two categories: parameter-based
and measurement-based admission control. Parameter-based methods rely on the traffic
characteristics declared for a flow rather than on the traffic actually present in the network,
and hence are not optimal. Measurement-based methods consider real-time network
conditions when serving new incoming traffic, and hence a higher network utilisation may be
achieved.
Each admission control method follows the principle that the available bandwidth is
allocated to incoming traffic flows only if the capacity of the line is not exceeded.
For a node to implement an admission control policy, it should have
access to QoS parameters, e.g. delay, packet drop rate, etc. By doing so the traffic can
achieve the desired QoS, and this should be independent of the type of traffic
carried.
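A minimal sketch of such a check is shown below. The function name, the utilisation target and the rates are assumptions chosen only to illustrate the principle of admitting a flow while staying within the line capacity.

def admit(requested_rate_mbps, measured_load_mbps, line_capacity_mbps,
          utilisation_target=0.9):
    # Accept the new flow only if the measured load plus its requested rate
    # stays within the configured fraction of the line capacity.
    return (measured_load_mbps + requested_rate_mbps
            <= utilisation_target * line_capacity_mbps)

print(admit(2.0, measured_load_mbps=5.0, line_capacity_mbps=10.0))   # True  (7.0 <= 9.0)
print(admit(5.0, measured_load_mbps=5.0, line_capacity_mbps=10.0))   # False (10.0 > 9.0)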
Similarly, in the case of virtual circuit subnets, no new virtual circuits are accepted
once a congestion state in the network is identified.
3.6.2 Traffic Policing and its Implementation
Traffic policing is to monitor the traffic flow in the network. If the traffic flow rate is
greater than the specified rate, traffic policing methods simply discard the overflowing
packets. Traffic policing can be used to control both inbound and outbound traffic.
Traffic policing methods maintain a constant (pre-defined) flow of traffic.
Traffic policing does not hold packets received above the allowed flow rate and hence
does not require a buffer. It is easier to implement traffic policing than traffic
shaping, as it does not require maintaining packet buffers. Traffic policing does not
cause delay or queuing; rather, it simply discards the packets.
The components of a traffic policing system are as follows:
Meter: this component measures the traffic and provides the measurement
result to the next component (the marker) for further action.
Marker: the marker assigns a colour (green, yellow, or red) to each packet based
on the measurement result provided by the meter. The marker provides this
colouring information to the next component, namely Action.
Action: this component performs actions based on the packet colouring results
received from the marker. It performs the following actions in
accordance with the pre-defined rules:
Pass: a packet is forwarded further if it meets the network
requirements.
Re-mark + pass: the local priority of packets not meeting the network
requirements is lowered and the packets are forwarded.
Discard: packets not meeting the network requirements are dropped.
If the rate of traffic is below the threshold value, packets are marked with the green or
yellow colour and forwarded, whereas if the rate of traffic exceeds the threshold value,
packets are either marked yellow (their priority is lowered) and forwarded, or marked
with the red colour and dropped, according to the traffic policing configuration.
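The meter-and-marker behaviour described above can be sketched as follows. The committed and peak rates and the colour-to-action mapping are illustrative assumptions, not a particular vendor's policing configuration.

def mark(measured_rate_mbps, committed_rate_mbps, peak_rate_mbps):
    # Meter result -> colour: conforming, exceeding, or violating traffic.
    if measured_rate_mbps <= committed_rate_mbps:
        return "green"
    if measured_rate_mbps <= peak_rate_mbps:
        return "yellow"
    return "red"

ACTIONS = {"green": "pass", "yellow": "re-mark + pass", "red": "discard"}

for rate in (2.0, 4.5, 7.0):
    colour = mark(rate, committed_rate_mbps=3.0, peak_rate_mbps=5.0)
    print(rate, colour, ACTIONS[colour])   # 2.0 green pass / 4.5 yellow re-mark + pass / 7.0 red discard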
3.6.3 Traffic Shaping and its Implementation
In contrast to traffic policing, traffic shaping tries to adjust the rate of outgoing traffic,
instead of dropping packets, to ensure an even transmission rate. Traffic shaping
makes use of a buffer to hold bursty traffic for a while in order to control the traffic. Packets
are delayed if the system is unable to forward all of them at once, and are
forwarded as soon as the link becomes free. It is a congestion control technique which delays
some packets to avoid a congestion state. Traffic shaping is not practically
applicable to the traffic of real-time applications. Traffic shaping can be used to control
outbound traffic only.
Further, traffic shapers can be classified into two categories based on their
capabilities: simple traffic shapers and advanced (more sophisticated) traffic shapers.
Simple traffic shapers shape all traffic uniformly, whereas an advanced traffic shaper
can classify the traffic and can be used as a technique to provide Quality of Service
(QoS) to one traffic category by delaying other categories of traffic to bring them into
compliance with a desired traffic profile.
Two of the widely known traffic-shaping algorithms are the leaky bucket and the token
bucket, discussed in the next section in detail.
The leaky bucket shaper, as its name says, is based on the way a leaky bucket functions.
It sends out the traffic at a fixed rate even if the incoming traffic is bursty in nature.
Bursty traffic that cannot be sent out at once is stored in a buffer (called the
leaky bucket) and is sent out once the outgoing line is free.
In the figure above, it is considered that the capacity of the network to carry the
traffic is 3 Mbps. The leaky bucket traffic shaper will not send traffic above 3
Mbps into the network. Here, the host inserts a burst of data at a rate of 8 Mbps for 2
sec, sending 16 Mbits of data. Further, it does not send any data for the next 2 sec and
then sends data at a rate of 2 Mbps for 4 sec, sending 8 Mbits of data. The host
thus inserts a total of 24 Mbits in a duration of 8 sec. After applying the leaky
bucket traffic shaping policy, the traffic is sent out at a rate of 3 Mbps for the
duration of 8 sec. Here, the traffic shaping policy smooths the traffic in the network.
Without shaping, there is no data in the interval from 2 to 4 sec and a burst of data
during the interval 0 to 2 sec, forcing the network towards a congestion state. The leaky
bucket policy can transmit this whole data in a smooth manner without any congestion
in the network.
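The numbers in this example can be reproduced with a small simulation. The sketch below drains a buffer (the 'bucket') at 3 Mbit per second against the arrival pattern described above; it assumes the buffer is large enough to hold the burst.

OUTPUT_RATE = 3.0                       # Mbit drained from the bucket per second
arrivals = [8, 8, 0, 0, 2, 2, 2, 2]     # Mbit arriving in each one-second slot

bucket = 0.0
for second, arriving in enumerate(arrivals):
    bucket += arriving                  # the burst enters the leaky-bucket buffer
    sent = min(OUTPUT_RATE, bucket)     # at most 3 Mbit leaves per second
    bucket -= sent
    print("t=%ds in=%s Mbit out=%s Mbit buffered=%s Mbit" % (second, arriving, sent, bucket))
# The output is a steady 3 Mbit in every second, 24 Mbit in total over 8 seconds.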
The leaky bucket traffic shaping policy discussed in the previous section does not
consider the input traffic pattern; it shapes the traffic at a fixed, pre-defined rate.
The token bucket traffic shaping policy considers the input traffic bursts and also allows
sending traffic at a higher rate to prevent the dropping of packets.
The token bucket policy uses a bucket which holds tokens generated at regular
intervals (one policy for adding tokens is to generate a token per clock tick). The token bucket
policy works as follows:
Tokens are generated at regular intervals and placed into the bucket.
The bucket has a maximum capacity for holding tokens.
A packet can be sent to the output line only if a token is available in
the bucket.
Once a packet is sent on the output line, a token is removed from the bucket.
As many tokens are removed from the bucket as the number of packets sent.
If there is no token available in the bucket, the packet cannot be sent.
The token bucket policy shapes bursty traffic by allowing bursts on the output line,
but only up to the limit of the available number of tokens. The figure below shows the
working of the token bucket traffic shaping mechanism. In the figure, host A sends 3
packets while there are only 2 tokens available in the bucket; hence only 2 of these
packets are transmitted on the output line, and 1 is held back to be sent once a new
token is placed in the bucket.
(Figure: token bucket shaping: the unregulated flow from Host A arrives while 2 tokens are available in the bucket; two packets pass into the regulated flow, the tokens are consumed, and one packet is left waiting until a new token is added.)
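A minimal token-bucket sketch corresponding to this figure is given below; the bucket capacity, the initial token count and the one-token-per-packet rule are assumptions made for illustration.

class TokenBucket:
    def __init__(self, capacity, tokens=0):
        self.capacity = capacity
        self.tokens = tokens

    def add_token(self):                       # called once per tick / interval
        self.tokens = min(self.capacity, self.tokens + 1)

    def try_send(self):
        if self.tokens > 0:                    # one token is consumed per packet sent
            self.tokens -= 1
            return True
        return False                           # no token: the packet must wait

bucket = TokenBucket(capacity=4, tokens=2)
print([bucket.try_send() for _ in range(3)])   # [True, True, False] -> one packet held back
bucket.add_token()
print(bucket.try_send())                       # True once a new token has been added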
3.6.4 Difference between Leaky Bucket Traffic Shaper and Token Bucket Traffic Shaper
3.7 CONGESTION CONTROL IN PACKET-SWITCHED NETWORKS
(Figure: Source 1 on a 10-Mbps Ethernet and Source 2 on a 100-Mbps FDDI ring feed a router whose outgoing link to the destination is a 1.5-Mbps T1 line.)
In the figure above, sources 1 and 2 insert traffic at rates of 10 and 100 Mbps
respectively. The router can transmit the traffic on an output link limited to 1.5 Mbps.
Packets will start to be dropped at the router once its buffer is full, and this state is known as
the congestion state. Congestion control mechanisms in a packet-switched network can
be applied at either the transport layer or the network layer. A flow is a sequence of packets
flowing between a source/destination pair and following the same route through the
network. TCP provides a connection-oriented reliable service at the transport layer, while the
Internet Protocol (IP) provides a connectionless packet delivery service. Routers do
not maintain any state of the flow for the connectionless service, whereas the state of the flow
is maintained at routers for the connection-oriented service. The Internet
Protocol (IP) provides the basis for packet delivery as a best-effort delivery mechanism.
Best-effort delivery is a basic packet delivery service without any guarantee of
delivery: the best effort is made to deliver packets to the destination, but there
is no mechanism to recover lost packets. At the transport layer, a TCP window is used to
control the transmission rate according to the feedback received from the subnetwork. As
congestion is a network layer issue and happens inside the network, the routers play a
crucial role in handling the congestion state. Each router is provided with
buffers to hold incoming packets that cannot be sent at the moment due to
congestion. Various policies are applied to these incoming packets to handle them in the
buffer queues. Some of the possible choices of queuing algorithms are: FIFO (also
called Drop-Tail), Fair Queuing (FQ), Weighted Fair Queuing (WFQ), Random Early
Detection (RED), etc. Routers also send a special type of packet, namely the choke packet,
for the purpose of congestion handling: routers monitor the utilisation of their output
lines and send choke packets back to hosts using output lines whose utilisation has
exceeded some warning level. Another solution frequently used to control the
congestion state is Explicit Congestion Notification (ECN), used by routers at the network
layer to notify the sender about the congestion state. An ECN-aware router sets a field
in the IP header, instead of dropping a packet, to signal congestion.
The receiver of the packet notifies the sender about the congestion, and the sender reduces
its transmission rate.
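To make the drop-tail and ECN ideas concrete, the sketch below queues packets up to a buffer limit and marks the ECN bit once the queue length crosses a threshold. The capacity and threshold values are arbitrary illustrations, not a specific RED/ECN parameterisation.

from collections import deque

class RouterQueue:
    def __init__(self, capacity=10, ecn_threshold=6):
        self.buffer = deque()
        self.capacity = capacity
        self.ecn_threshold = ecn_threshold

    def enqueue(self, packet):
        if len(self.buffer) >= self.capacity:
            return "dropped"                   # drop-tail: the buffer is full
        if len(self.buffer) >= self.ecn_threshold:
            packet["ecn"] = 1                  # congestion signalled in the IP header
        self.buffer.append(packet)
        return "marked" if packet.get("ecn") else "queued"

q = RouterQueue()
print([q.enqueue({"seq": i}) for i in range(12)])
# first 6 packets queued, the next 4 ECN-marked, the last 2 dropped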
3.8 SUMMARY
In this unit we have discussed the congestion state in the network. A
network is congested when the traffic in the network is more than its capacity to handle it,
i.e. congestion occurs when the number of packets inserted into the network exceeds its
handling capacity. The bursty nature of traffic is the root cause of congestion. When
a part of the network can no longer cope with a sudden increase of traffic, congestion
builds up. Other factors, such as lack of bandwidth, ill-configuration and slow
routers, can also bring about congestion.
Flow control is a point-to-point issue between a sender and a receiver, whereas congestion
control is an issue of the network as a whole, handled at the network layer. Flow control is
meant to prevent a fast sender from overwhelming a slow receiver. Flow control can be
helpful in reducing congestion, but it cannot really solve the congestion problem. Many
congestion control techniques are applied in the network to avoid the congestion state.
Open loop and closed loop congestion control techniques are the broad categories of
congestion control algorithms. Traffic policing and traffic shaping are the main techniques
of open loop congestion control. Traffic policing discards (or re-marks) packets that exceed
a specified rate, irrespective of the incoming traffic pattern. In contrast to traffic policing,
traffic shaping tries to adjust the rate of outgoing traffic, instead of dropping packets, to
ensure an even transmission rate.
3.9 SOLUTIONS/ANSWERS
Ans: In the Internet, nodes acting as transmitters insert packets into the
Internet and nodes acting as receivers consume packets from the Internet.
The Internet has a certain capacity to handle the traffic load (packets). When the rate of
insertion of packets into the Internet is higher than the rate of consumption of packets
from the Internet, eventually the Internet is unable to handle the traffic and the performance
of its resources is degraded. This situation is termed congestion.
Ans: Congestion in the network can be addressed in two ways: the preventive method and
the recovery method. In the preventive method, actions are taken such that congestion doesn't
occur, while the recovery method allows the system to enter the congestion state and then
tries to remove it.
Ans: In the leaky bucket algorithm, packets are inserted into the bucket. In case of bucket
overflow, packets are dropped. Packets exit the bucket at a constant
rate, so bursty incoming traffic is released into the network at a constant rate.
Q4. In what way is the token bucket algorithm superior to the leaky bucket algorithm?
Ans: The leaky bucket algorithm is very conservative in the sense that it is
not adaptive to the incoming traffic. The token bucket algorithm is made sensitive
to the incoming traffic. The output rate is not bound to a predefined upper
limit; rather, it depends on the availability of tokens in the bucket. In the beginning, if
tokens are available in sufficient quantity, the rate can be higher, and once no
tokens are left, the output is limited by the rate of token generation.
S.NO. | Traffic Policing | Traffic Shaping
2. | Packets with rates greater than the traffic policing rate are discarded. | It buffers the packets with rates greater than the traffic shaping rate.
4. | The token values are calculated in bytes per second. | The token values are calculated in bits per second.
Computer Networks, A. S. Tanenbaum, 4th edition, Prentice Hall of India, New Delhi, 2002.
Data Networks, Dimitri Bertsekas and Robert Gallager, 2nd edition, Prentice Hall of
India, New Delhi, 1997.
Data and Computer Communications, William Stallings, 2nd edition, Pearson Education,
Delhi.
UNIT 4 EMERGING NETWORKING
TECHNOLOGY
Today, we live in a digital age and are constantly surrounded by digital devices, such
as laptops, mobiles, cameras, music players, etc. These devices are connected through
various communication networks like WiFi, Bluetooth, Cellular Networks etc. In the
earlier times, communication infrastructure had a fixed wired backbone with
stationary cell towers and access points, as shown in figure 1. Such infrastructure
networks, for example, cellular networks, were suitable for locations where access
points were easy to install.
(Figure 1: an infrastructure network with fixed access points (APs) connected to a wired backbone.)
However, today, the nodes (laptops, mobiles, etc.) are not always fixed at one point
but can move within the system. As shown in figure 2, they connect to the fixed
communication infrastructure of cell towers and communication access points
(routers) further connected to the Internet backbone. However, networks can be
configured without APs too. A collection of two or more electronic devices
equipped with wireless communication and networking capability forming
infrastructure-less networks without base stations is called a mobile ad-hoc network
(MANET). MANET is a self-organizing and self-configuring multi-hop wireless
network with a dynamically changing topology. The devices (nodes) in the network
not only act as hosts, receiving data but also as routers that send data to other nodes in
the network. MANET supports peer-to-peer communications and peer-to-remote
communications with significantly reduced administrative costs. Therefore, the
MANET can be defined as an infrastructure-less, fully distributed network in which
each node acts as a transmitter, receiver, and router. The way communication is
accomplished between the components present in the network is the main difference
between the wireless and wired networks.
The term "Ad Hoc" network means a network that is temporary or implemented
immediately when needed for a particular purpose. With the technological
advancements in recent times, it is possible to use mobile nodes and mobile routers (a
node can act as a router) to create a communication network where everything is
mobile and nothing is fixed. In such a network, the topology of the network changes
continuously as the neighbours of any given node are likely to change frequently. A
few nodes will join the network at any point in time, while a few will disengage from
the network, making the environment ad-hoc. With minimal or no dependence on
infrastructure, a MANET is easy to deploy to support communication and computing
anywhere, anytime.
Figure 2: Ad-hoc and homogeneous network
In heterogeneous networks, the devices have varying capabilities, while all the nodes
have identical capabilities and responsibilities in homogenous networks.
MANET has a dynamic topology in which nodes can freely move in and out
of the network arbitrarily. This causes the network topology to change rapidly
and unpredictably over time. For nodes to communicate in such dynamic
topologies, alternative paths are automatically discovered and data packets are
forwarded across these multi-hop paths. Various route discovery
mechanisms are used in MANETs to accomplish this.
A MANET is a network of nodes that are often hand-held devices with small
battery backups. The battery-powered devices run out of battery support quite
frequently. Thus MANETs are characterised by energy-constrained operations
in which power conservation is essential. As a power conservation method,
the network uses nodes in a manner so that they radiate power as little as
possible and transmit data only when essentially required. Nodes can be
configured to go into sleep mode.
The present and future need for dynamic ad-hoc networking technology is
tremendous. This highly adaptive networking technology, however, still faces various
limitations.
Delay Time
The time taken by a data packet to arrive at its destination after leaving the source
node is referred to as delay. Nodes are kept busy transmitting and receiving
packets to increase the network's throughput. Because of this, the queue at each node is
rarely empty or small, which can lead to a longer delay.
The first generation of ad hoc network, called Packet Radio Network (PRNET), can
be traced back to the 1970s when the Defence Advanced Research Project Agency
(DARPA) launched packet-switched radio communication to provide reliable
communication. PRNet provided an efficient means of sharing broadcast radio
channels among many radios. PRNET, with low throughput (2 kbps per subscriber
approximately), was not entirely infrastructure-less as it required a static station for
routing. The PRNET then evolved into the smaller, cheaper, energy-efficient and
cyber-resilient Survivable Adaptive Radio Network (SURAN) in the early 1980s. The
United States Department of Defence (DOD) developed Globe Mobile Information
System (GloMo) and Near Term Digital Radio (NTDR),providing a self-organizing
and self-healing network. The NTDR, which uses clustering and link-state routing,
was a widely used ad hoc network. Later, the Internet Engineering Task Force (IETF)
formed the MANET working group to standardise routing protocols for ad hoc networks,
while IEEE 802.11 became the common wireless link technology used by mobile
devices like PDAs, palmtops, notebooks, etc.
The applications of MANET are not limited to a single area. There are many
applications of MANET in industry, defence, medical and environmental
sciences. A few of the applications include:
BAN coverage extends to about 2 m and is usually deployed using wearable
devices. When the communication requirement is of the order of 10-15m, PAN
is used. MANETs using LAN easily cover an area ranging up to 500m and are
widely deployed for communicating within residential colonies, markets, or a
cluster of buildings. WAN can also be used for even larger areas, although
large networks have limitations in addressing, routing, and security
management.
Link Layer:
The joint design of the link and physical layer can significantly enhance a MANET's
efficiency and reliability. The link layer identifies priority packets and schedules
packet delivery according to priority levels. It is accomplished through a Media access
control (MAC) layer with a MAC discriminator and priority classifier. MAC is a
protocol that enforces a methodology to allow multiple devices access to a shared
media network.
The queue management section schedules the packets according to the priority levels.
Data for real-time applications like live movies, video conferencing, etc., get higher
priority than the packets for applications such as FTP and Email. When the network is
congested, some packets may be dropped. Assigning higher priority to real-time
packets ensures that they are not dropped in case of network congestion. Queue
management thus helps in the timely delivery of data packets in real-time applications
and improves the packet delivery ratio.
The MAC discriminator in the layer differentiates data packets that arrive from the
wireless channel and sends them to the network layer. The address resolution protocol
(ARP) packets go to the queue directly. The MAC packets used in IEEE 802.11 stay
in the MAC layer. The bandwidth estimation control packets are sent to the bandwidth
estimation module for use in the routing layer's adaptive scheme.
The priority classifier classifies the data packets that arrive from the packet queue,
whether required for real-time or non-real-time applications. Post classification, they
are sent to the packet scheduler to schedule the packet delivery according to priority
levels. This helps the distributed ad hoc network offer both real-time and non-real-
time applications by granting real-time packets higher priority to capture the channel.
Transport layer: Transmission control protocol (TCP) and user datagram protocol
(UDP) are two transport layer protocols widely used in wired networks. In TCP, a
connection must be established between communicating devices before the
transmission of data. After transmitting the data, the connection must close. On the
other hand, in UDP, there are no overheads for opening, maintaining and terminating
a connection. In TCP, the delivery of data at the destination is guaranteed and it has
extensive error checking mechanisms. UDP has only a basic error checking mechanism,
and the delivery of data to the destination is not guaranteed. UDP is faster, simpler
and preferred for broadcast and has no congestion control mechanism to react to
network congestion. Due to this, the applications using UDP as the transport protocol
to transmit packets in MANETs can easily overwhelm the network with data. Such a
scenario may result in considerable power wastage and limited use of available
bandwidth in transmitting packets. TCP is comparatively slower and doesn't support
broadcasting but has an inherent congestion control mechanism (not clogging the
network and overloading the capacity in the routers). The standard TCP assumes that
the end-to-end congestion is due to increased flow in the transport pipe and/or reduced
available bandwidth. In a wireless situation, however, congestion may be signalled to TCP
due to frequent breaks in the end-to-end TCP connection. Hence, the standard TCP needs to
be modified and such modifications are available.
The nodes in an ad hoc network are initially unaware of the network's topology. They
eventually have to discover the topology. During the discovery process, every node
will learn about its neighbor nodes and the distance between them. This way, it also
lets the other nodes know about its existence in the network. For efficient routing,
routing tables are initialized and maintained by the routers using routing protocols and
these tables are stored in their memory. The routers use the routing protocols to decide
the path of the packet from the source node to the destination. Finding stable routes,
which decreases the route-related overhead, and finding the shortest routes are the main
goals of the routing protocols.
Fig. depicts a peer-to-peer multi-hop ad hoc network
Mobile node A communicates directly with B (single hop) when a channel is
available
If a channel is not available, then multi-hop communication is necessary, e.g.,
A->D->B
For multi-hop communication to work, the intermediate nodes should route
the packet i.e., they should act as a router
Example: For communication between A and C, node B (or nodes D and E) should
serve as routers
(Figure: a peer-to-peer multi-hop ad hoc network with nodes A, B, C, D and E.)
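As an illustration of multi-hop route discovery in this small topology, the sketch below finds a shortest-hop path with a breadth-first search. Real MANET routing protocols (for example AODV) discover routes with control packets, but the shortest-hop idea is the same; the link list mirrors the figure.

from collections import deque

def find_route(links, source, destination):
    parent = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == destination:                # rebuild the path back to the source
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        for neighbour in links.get(node, []):
            if neighbour not in parent:
                parent[neighbour] = node
                queue.append(neighbour)
    return None

links = {"A": ["B", "D"], "B": ["A", "C", "D"], "C": ["B", "E"],
         "D": ["A", "B", "E"], "E": ["C", "D"]}
print(find_route(links, "A", "C"))   # ['A', 'B', 'C'] -> B forwards packets for A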
Regardless of the variety of mobile ad hoc network applications, there are still some
issues and design challenges that we have to overcome.
With these problems, there are some other challenges and complexities:
Bandwidth Constraints: The bandwidth of wireless links is much lower than that of their
wired counterparts. For example, while several Gbps are possible in a wired LAN, the
wireless LANs typically work around 2 Mbps (nowadays up to 50 Mbps).
Energy constraints: The power of the batteries in the devices is the limiting factor
that defines the operative time for the nodes.
High Latency: Nodes sleep or are idle when not in use to conserve energy. In data
exchanges involving sleeping nodes, the delay might be higher if the routing
algorithm needs to wake them up.
Transmission Errors: Attenuation and interferences are the effects of the wireless
link that increase the error rate.
Scalability Concerns: Scalability is a crucial aspect of MANET, mainly when used in
military communications. When a MANET is expanded according to need, each
node must be capable of handling the expansion or intensification of the network.
Fault Detection and Management: MANET is an infrastructure-less network with
decentralised administration. Faults become challenging to detect and manage as
every node can communicate with every other node. In addition, dynamic topology,
mobility and ever-changing routes add to the number of faults and data packet losses.
Security concerns: MANETs have more significant threats than the wired networks.
Limited physical security: The physical sizes of the devices involved in these
networks are so small that they can easily be victims of theft.
Cyber security: They are vulnerable to cyber-attacks and can be compromised
easily.
Each node acts as a router and a receiver in the network, so each node plays
two roles becoming significantly more vulnerable.
1.2 What is the type of network in which the topology changes from time to time?
--------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------
4. Differentiate between cellular and ad hoc networks in terms of cost.
As the name suggests ‘wireless sensor network’ is a network of sensor nodes. Now a
question may come to your mind ‘what are sensors’? Any hardware equipment which
has the capability of sensing the physical characteristics of its surroundings may be
called a sensor. To understand sensors, let’s compare these with our body sensors,
through which we sense our surroundings, smell by our nose, feel heat or cold by our
skin, taste, etc. Similarly, if these senses are measured by any hardware device, we
can say it is a sensor.
You must have seen the Fire Alarms installed in the buildings or in houses. Fire
alarms have a sensor that continuously measures smoke or temperature around them
and if the surrounding smoke or temperature increases by a certain limit, it starts
beeping/ alarming. Generally, these fire alarms have wired connections, or they can
have a wireless connection. So, can we put this fire alarm into the category of wireless
sensor network? The answer is ‘No’ because generally, the fire alarms are a
standalone system of sensors. It is not a network of sensors, and sensors are not
talking to each other. However, we can create a wireless sensor network for the
application of fire alarms in forests.
In the last decade, wireless sensor networks have gained popularity because of their
applications and benefits. Some of the popular applications of wireless sensor
networks are in Military, Agriculture, Security, and Healthcare. Let’s discuss some
examples of how wireless sensor networks have been changing the world around us.
There are various applications of wireless sensor networks that we can see happening
around us and many applications which are coming soon. In this section we will
discuss some of these applications.
Agriculture: The deployment of wireless sensor networks in the field of agriculture
has generated a lot of enthusiasm among farmers. In agriculture, these networks are
highly useful in monitoring and controlling temperature, humidity, soil
parameters, etc., so that farmers get better information and can improve
their production. In livestock management, wireless sensor networks have been of
great relief to farmers in monitoring the location and controlling the livestock as well as
their biological parameters.
Military: At present, we can see many wireless sensor network applications in the
military. One of the most popular applications which many countries are currently
using is protecting their border from any illegal trespassing or any intrusion attack. To
do so, a network of sensors continuously observes the vibrations, sound, visuals,
positions and many other physical characteristics in the border areas. If these sensors
detect any suspicious activity, they can give this information to the nearby station or
to the military officers. Similarly, the military can do effective surveillance, intrusion
detection, and collect intelligence with minimum risk on the battlefield in a better way
by using wireless sensor networks.
Environment: There are a large variety of applications of wireless sensor networks in
controlling and managing our environment.
● Air or Water Pollution: One application is monitoring the air pollution or
monitoring the water pollution. We can have a network of sensors monitoring
the pollution level in the air. If certain chemicals/air particles are increased
more than a limit in the air it can provide us the information that pollution has
increased in a particular location. Similarly, pollution management can be
effectively handled in the rivers and oceans by sensing appropriate
parameters.
● Flood and Landslide Warning: wireless sensor networks can be deployed on
the bank side of rivers or in possible flood areas. These networks can give us
a warning whenever the water level increases to a certain limit or water
crosses into certain areas. Based on these warnings residents can be informed
in a timely manner to evacuate the place. Similarly in the hilly areas, landslide
warnings can be issued so that we can help residents to take the appropriate
safety measures.
● Forest fire: As we discussed in the beginning regarding the fire alarm, we can
create a wireless sensor network in a forest that continuously measures the
temperature and the smoke level. Whenever there is a fire in some forest area,
the forest authorities can be informed about the fire, and they can take the
appropriate measures to stop the spread of the fire.
Health: Wireless sensor networks are also useful in the health sector for effectively
monitoring the patients’ clinical data. We can provide various kinds of sensors to the
patients to continuously measure their health parameters like a heartbeat, ECG,
temperature, blood pressure, etc. These medical sensors keep recording the
information from the patient’s bodies and provide this information to a server. Server
can analyse the information and assist doctors to make suitable decisions. These
wireless sensor networks can be installed within the hospital where patients are
admitted, or wearable sensors can be used so that patients can do their daily routine
things and their clinical data can be recorded continuously.
Traffic: Another application of wireless sensor networks is in controlling and
managing traffic. Various kinds of sensors can be installed in vehicles that
continuously provide information to the base station so that the authorities can
observe the number of vehicles in a particular area. Whenever there is a possibility of
congestion in some location other vehicles can be informed or diverted to the other
routes to avoid congestion. These sensor networks are also used to manage the
parking lots and whenever there is any free parking area the vehicles can be directed
to that location. Similarly, these sensors can also communicate from vehicle to vehicle
so that any possibility of accidents is avoided.
Industries: Wireless sensor networks can be used within the industries to monitor the
functioning of machines and if there is any change in the parameters such as
temperature, pressure, or increase of chemicals then appropriate decisions can be
taken for the effective working of machines.
There are applications of WSN in structural monitoring, like monitoring bridges and
dams, to reduce human supervision, reduce the cost of monitoring, and
improve the efficiency of the processes. As wireless sensor networks grow, we
will be able to see many more applications in the future.
4.10 STRUCTURE OF WSN
In this section, we will discuss a typical structure of wireless sensor networks. This
structure will be helpful for you to understand the overall picture of wireless sensor
networks and their components. You will be able to understand how a wireless sensor
network is interconnected with the Internet or with the local network and how they are
able to perform some desired actions.
distance. Hence, each sensor node also works as a relay to pass on the data to other
nodes. Let us see the structure inside a sensor in the following diagram.
There are different types of sensors available at present which can easily sense
physical characteristics around it like temperature, pressure, vibrations, sound, flow,
humidity, radiations, motion, position, light, etc. Also, many other types of sensors are
under development for various purposes which will be available for the desired
implementations.
As depicted in the above diagram, a sensor has a sensing unit which transfers its data
to the processing unit, this data is processed and stored by the processing unit and
further it can be communicated to the base station and/or to the nearby sensors. The
power unit is generally a battery-based power unit which provides the required power
to all these above units of a sensor.
4.12 WSN TOPOLOGIES
In this section we will discuss different topologies of wireless sensor networks, which
means how these radio frequency sensors and base station(s) can be arranged as a
wireless network.
Star Topology WSN
This is one of the most common kinds of wireless sensor network, where all the
sensors are connected to a base station and this base station is commanding all the
sensors and receiving the data provided by the sensors.
Also, mesh topology can be integrated with star topology to form a hybrid topology,
which has the advantages of both topologies in a wireless sensor network.
IoT is a network of physical objects, called “Things”, embedded with hardware like -
sensors or actuators or software, for exchanging data with other devices over the
internet. With the help of this technology, it is possible to connect any kind of device,
from simple household objects, for example kitchen appliances, baby monitors, ACs,
TVs, etc., to other objects like cars, traffic lights, web cameras, etc. Connecting these
objects to the internet through embedded devices allows seamless communication
between things, processes or people.
Some of the applications of IoT devices are – smart home voice assistant Alexa, smart
traffic light system. IoT devices when connected to cloud platforms can provide a
huge and wide variety of industrial or business applications. As the number of IoT
devices are increasing, the problem of storing, accessing and processing is also
emerging. IoT when used with Cloud technology provides solutions to these problems
due to huge infrastructure provided by the cloud providers.
The participating devices in IoT are required to have a few characteristics like
communication, power, sensing and actuating (not necessarily), and data processing (to
some extent). Many of the devices also have some processing and decision-making
capabilities, depending on the requirement of the system. These are all
characterised under four fundamental classes of characteristics, namely
heterogeneity, dynamic behaviour, interconnection and scaling.
As we are now aware about the IoT as an intelligent connected network of smart
things/devices, we can see the following application of the IoT in our real life.
Apart from the above mentioned scenarios, there are several other application of IoT
where a smart device with some power, sensing capability and communication
capability is required.
Now you have a basic understanding of the characteristics of the IoT devices and their
purpose of use. Let us now discuss few of the use cases where IoT can help users in
many ways. We have a number of use cases in various sectors of which a few are
mentioned here as examples.
A typical home intrusion detection system makes use of smart cameras with night
vision and motion detection capabilities. Some of the intrusion detection systems also
use Passive Infrared (PIR) and door sensors to detect the motion and inform the users.
Many of such systems also send short clips or images of the intruders to the users in
very short/ real time. These systems can also be controlled using cloud based
platforms and also possess on-system memory to store some vital information. The
below given figure 9 represents the typical setup of the intrusion detection system for
a house.
Here in the figure, the door sensors or motion detection sensors are connected to the
gateway wirelessly. Sensors come with wireless connectivity like Bluetooth, WiFi or
ZigBee. The smart camera system often comes either with WiFi or Ethernet
connectivity. The gateway shares the collected information to the cloud based
analytical engine which processes the real time information and makes relevant
decisions. The users get appropriate notifications/ alerts about the decision made by
the analytical engine. These engines can be trained/ configured for the user specific
requirements and hence provide security to the home.
Lighting in homes, buildings, roads and parks is one of the major areas of energy
consumption. Worldwide, electricity is produced mainly from non-renewable energy
sources, including petroleum products like oil and gas, coal and other
resources. These resources are not in abundance and also contribute to environmental
pollution in many ways. The greater the demand for electricity, the greater the consumption
of these resources. Apart from some essential sectors, many sectors can contribute to an
energy-saving strategy to save electricity. IoT-enabled devices and smart monitoring
of consumption help in saving electricity. Today, we are focused on developing more
energy-efficient green buildings to save our environment and lower the carbon footprint.
The concept of a green building itself incorporates an energy-efficient and environment-
friendly ecosystem. IoT-enabled devices help in scheduling the lighting and
controlling light source intensities depending upon the ambient conditions and
requirements, and hence reduce energy wastage in these buildings. These devices can
also be controlled from remote locations by the authorised operators/persons. Sensors
on these devices also gather environmental data and device parameters. This helps in
better monitoring of the working of devices in these environments and improves
device life. These devices also monitor the building's structural health and the indoor
environment's health. Smart street lighting, park lighting and condition-
dependent lighting (like in fog or in a storm) are some examples of IoT use cases for
smart lighting.
Small size of devices and sensors, along with minimal battery/power requirements, are the key advantages of any air-quality monitoring device. These can be placed with ease at the desired places and can monitor the air quality. A typical air-pollution monitoring IoT device consists of a few sensors for monitoring CO, CO2, NO, SO2 and other poisonous gas levels in the environment. These devices are also equipped with a GPS (Global Positioning System) module for location and wireless transmission of data to the cloud. In the cloud, the received information is analysed and the outcomes are reported to the users/authorities. Every year during the winter season, many Indian states face severe air-pollution conditions. Various factors contribute towards it, and regulatory policies are implemented. IoT devices have contributed a lot to air-pollution monitoring. Using IoT devices, the government can collect all the required data from different places in one go and issue guidelines for the citizens accordingly. The constituents of the pollution can also indicate the industry sector which is responsible for the majority of the pollution at a particular place. The authorities can thus take better and faster decisions to tackle the situation. On the other hand, various air purifiers are being made to address the issue in closed indoor environments. This contribution of IoT devices lets authorities swiftly issue guidelines for residents if higher levels of air pollutants are reported in the nearby areas.
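The sketch below shows how such a device might bundle its gas readings and GPS location into one report and flag an alert when a reading crosses a limit. The field names and the limit values are illustrative assumptions for the example, not regulatory standards.

    # Minimal sketch of an air-quality node: bundle gas readings (in ppm) and GPS
    # coordinates into one report and flag readings above assumed limits.
    # The limits below are made up for illustration, not regulatory values.
    ASSUMED_LIMITS_PPM = {"CO": 9.0, "NO": 0.1, "SO2": 0.075}

    def build_report(readings_ppm: dict, latitude: float, longitude: float) -> dict:
        """Return a report dictionary ready to be serialised and sent to the cloud."""
        exceeded = [gas for gas, value in readings_ppm.items()
                    if value > ASSUMED_LIMITS_PPM.get(gas, float("inf"))]
        return {
            "location": {"lat": latitude, "lon": longitude},
            "readings_ppm": readings_ppm,
            "alert": bool(exceeded),
            "exceeded_gases": exceeded,
        }

    # Example reading from a node near an industrial area (values are invented)
    report = build_report({"CO": 12.5, "NO": 0.02, "SO2": 0.01}, 28.61, 77.20)
    print(report["alert"], report["exceeded_gases"])   # -> True ['CO']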
Health monitoring is a major area where IoT is contributing. Many smart gadgets are now available in the market which monitor vital body parameters and log them on personal devices. Using IoT devices we are recording our pulse rate, oxygen saturation, ECG, temperature and many other body parameters, mostly in a non-invasive manner. These devices are small and battery powered. They collect body parameters and store the data either in local storage or in the cloud, where Artificial Intelligence based algorithms extract useful information which can be shared with doctors for seeking medical advice. Various health devices are now capable of making emergency communications to family members or health officials/doctors in an emergency situation. These devices are also useful in tracking school-going children or elderly people who require extra safety and attention.
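As a toy example of such an emergency rule, the sketch below checks a pulse-rate reading against an assumed normal range and decides whether an alert should be raised. Both the range and the contact string are placeholders invented for the illustration.

    # Toy sketch of an emergency rule on a wearable device: raise an alert when
    # the pulse rate leaves an assumed normal range. The range and the contact
    # string are illustrative placeholders only.
    from typing import Optional

    NORMAL_PULSE_RANGE = (50, 120)        # beats per minute, assumed bounds
    EMERGENCY_CONTACT = "family-member"   # placeholder identifier, not a real contact

    def check_pulse(pulse_bpm: int) -> Optional[str]:
        """Return an alert message if the reading is out of range, else None."""
        low, high = NORMAL_PULSE_RANGE
        if pulse_bpm < low or pulse_bpm > high:
            return (f"ALERT: pulse {pulse_bpm} bpm is outside {low}-{high} bpm; "
                    f"notify {EMERGENCY_CONTACT}")
        return None

    print(check_pulse(135))   # -> alert message
    print(check_pulse(72))    # -> None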
IoT connects physical things to the network, and hence various network devices are involved. The physical things are made smart by placing sensors on them to sense their state, orientation or environment. These sensors require power modules to operate and communication modules to send the sensed data. Communication can be either in wired mode or in wireless mode; as an example, here we consider the wireless mode only. This includes WiFi, Bluetooth, ZigBee, LoRa and many other technologies. A typical system architecture of IoT for a physical-world network is given below in figure 10.
The figure is self-explanatory, and you can make out the communication of data from the flow of information signals shown in it. The figure also shows communication via a gateway, communication without a gateway, and direct device-to-device communication.
There are many protocols that IoT devices use for communication and message sharing on the network. A few of the IoT protocols are shown in figure 11 below. These protocols work at different layers of the protocol stack, namely the Application, Transport, Network and Link layers. The devices which connect real-world things to the network are mostly small and battery powered. These devices communicate with each other or with other networking infrastructure for data exchange using various communication protocols. Figure 11 shows the various protocols used by IoT devices at the different layers.
Figure 11: Protocols used by IoT devices at different layers of the stack
    Application layer        : MQTT, XMPP, AMQP, HTTP, CoAP, DDS, NTP
    Transport layer          : TCP, UDP, TLS/DTLS
    Network layer            : IPSec, ICMP, 6LoWPAN, RPL, DSR, MPL, OLSR, AODV
    Physical/Data Link layer : IEEE 802.15.4, ZigBee, BLE, RFID/NFC, Wi-Fi
NOTE: The details of these protocols are given in a later course, MCS-221. You are advised to refer to MCS-221 for further details.
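Although the protocol details are deferred to MCS-221, a tiny sketch can show what using one of these application-layer protocols looks like in practice. The snippet below issues a single CoAP GET request, assuming the open-source aiocoap Python library and a hypothetical sensor URI; it only illustrates the request/response style of CoAP and is not taken from the MCS-221 material.

    # Minimal sketch of a CoAP client reading a resource from a constrained device.
    # Assumes the open-source aiocoap library (pip install aiocoap); the sensor
    # URI below is hypothetical.
    import asyncio

    from aiocoap import Context, Message
    from aiocoap.numbers.codes import Code

    async def read_temperature() -> None:
        protocol = await Context.create_client_context()
        request = Message(code=Code.GET, uri="coap://sensor.example.local/temperature")
        response = await protocol.request(request).response
        print("Result:", response.code, response.payload.decode())

    if __name__ == "__main__":
        asyncio.run(read_temperature())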
4.16 Summary
Solutions to Problems
Answers to CYP-1
1.1 Answer: A
1.2 Answer: D
Criteria             Cellular Network                 Ad Hoc Network
Cost Effectiveness   High network maintenance cost    Low network maintenance cost due
                                                      to self-organisation in the network
Bandwidth Usage      Easy bandwidth reservation;      Access needs a complex access control
                     bandwidth can be guaranteed      protocol; shared radio channel, hence
                                                      variable bandwidth
Answers to CYP-2
4. Different topologies in WSNs are Mesh, Star, Ring, Circular, Bus and Grid topologies.
Answers to CYP-3
1. Things in IoT are any physical objects which are of interest or use to the user. This may include a chair, a toaster, a water bottle, etc. These can be converted into smart objects by placing sensors on them, which enables them to communicate and share data with other devices. The characteristics of IoT are heterogeneity, dynamic nature, interconnection and scalability.
ZigBee
It is a wireless technology based on IEEE 802.15.4, used to address the needs of low-power and low-cost IoT devices. It is used to create low-cost, low-power, low-data-rate wireless ad hoc networks. It is resistant to unauthorized reading and communication errors but provides low throughput. It is easy to install and implement, and supports a large number of connected nodes. It can be used for short-range communications only.
NFC
Near Field Communication (NFC) is a protocol used for short-distance communication between devices. It is based on RFID technology but has a lower transmission range (of about 10 cm). It is used for the identification of documents or objects. It allows contactless transmission of data. It has a shorter setup time than Bluetooth and provides better security.
Bluetooth
It is one of the widely used wireless PAN technologies for short-range transmission of data. It makes use of short-range radio frequency. It provides a data rate of approximately 2.1 Mbps and operates at 2.45 GHz. It is capable of low-cost and low-power transmission over short distances. Its initial version 1.0 supported speeds of up to 732 kbps. Its latest version is 5.2, which can work up to a 400 m range with a 2 Mbps data rate.
***