
UNIT-3 Network Layer

Network Layer Services

• The network layer is the third layer in the OSI model of computer
networks. Its main function is to transfer network packets from the source
to the destination. At the source, it accepts a packet from the transport
layer, encapsulates it in a datagram, and then delivers the datagram to the
data link layer so that it can be sent on towards the receiver.
• At the destination, the datagram is decapsulated, and the packet is
extracted and delivered to the corresponding transport layer.

Features of Network Layer


• The main responsibility of the network layer is to carry the data packets
from the source to the destination without modifying them or making use of
their contents.
• If the packets are too large for delivery, they are fragmented i.e., broken
down into smaller packets.
• It decides the route to be taken by the packets to travel from the source to
the destination among the multiple routes available in a network (also
called routing).
• The source and destination addresses are added to the data packets inside
the network layer.

Services Offered by Network Layer


The services which are offered by the network layer protocol are as follows:

1. Packetizing
2. Logical Addressing
3. Routing
4. Forwarding

1. Packetizing

• Packetizing refers to the process of encapsulating data received from the
upper layer of the network at the source, and then decapsulating it at the
destination.
• The host adds a header that includes the source and destination addresses,
along with other relevant information required in the process of packetizing.
• The receiving host receives the network layer packet from the data link
layer, decapsulates it, and sends the payload to the upper layer protocol.
The routers cannot change the header or the address.

2. Logical Addressing:

• The data link layer implements the physical addressing and network layer
implements the logical addressing. Logical addressing is also known as
Internet Address or Network Address.
• The network layer adds a header to the packet which includes the logical
addresses of both the sender and the receiver.

3. Routing:

• Routing is the process of moving data from one device to another device. In
a network, there are a number of routes available from the source to the
destination.
• The network layer specifies strategies for finding the best possible
route. This process is referred to as routing. A number of routing
protocols are used in this process.

4. Forwarding
• Forwarding is the action applied by each router when a packet arrives at
one of its interfaces. When a router receives a packet on an interface, it
consults its forwarding table to decide which attached network (or next
router) the packet should be sent out on.
• Routers thus forward packets from the local network towards the remote
network, so the process of routing involves packet forwarding, as sketched
in the example below.
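The forwarding decision can be pictured as a longest-prefix-match lookup in a forwarding table. Below is a minimal Python sketch; the table entries and next-hop labels are invented for illustration and are not part of these notes.

```python
# Minimal sketch of a forwarding-table lookup using longest-prefix match.
# The table entries and the next-hop names are illustrative examples only.
import ipaddress

# Each entry maps a destination prefix to a next hop (or a directly attached interface).
forwarding_table = [
    (ipaddress.ip_network("192.168.1.0/24"), "eth0 (local network)"),
    (ipaddress.ip_network("10.0.0.0/8"),     "router B"),
    (ipaddress.ip_network("0.0.0.0/0"),      "default gateway"),
]

def forward(destination: str) -> str:
    """Return the next hop for a destination IP: pick the longest matching prefix."""
    dest = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in forwarding_table if dest in net]
    best_net, best_hop = max(matches, key=lambda entry: entry[0].prefixlen)
    return best_hop

print(forward("192.168.1.25"))  # -> eth0 (local network)
print(forward("172.16.4.9"))    # -> default gateway (matches only the 0.0.0.0/0 entry)
```

Running the example forwards 192.168.1.25 onto the directly attached network and sends an address that matches no specific prefix to the default gateway.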

Packet Switching :

• Packet switching is a method of transferring data across a network in the
form of packets. In order to transfer a file quickly and efficiently over the
network and to minimize transmission latency, the data is broken into small
pieces of variable length, called packets.
• At the destination, all the packets belonging to the same file have to be
reassembled. Each packet contains a header, which includes information about
the packet's source and destination, as well as the data payload. All packets
are transmitted independently across the network.
• One of the main advantages of packet switching is that it allows multiple
packets to be transmitted simultaneously across the network, which makes
more efficient use of network resources than circuit switching. However,
packet switching can also introduce delays into the transmission process,
which can impact the performance of network applications.

Here are some of the types of delays that can occur in packet switching:

1. Transmission delay: This is the time it takes to push all the bits of a
packet onto a link.
2. Propagation delay: This is the time it takes for a bit to travel from the
source to the destination. It depends on the distance and the propagation
speed of the medium.
3. Processing delay: This is the time it takes for a packet to be processed by a
node, such as a router or switch.
4. Queuing delay: This is the time a packet spends waiting in a queue before
it can be transmitted.
While packet switching can introduce delays in the transmission process, it is
more efficient than circuit switching and can support a wider range of
applications. To minimize delays, various techniques can be used, such as
optimizing routing algorithms, increasing link bandwidth, and using quality of
service (QoS) mechanisms to prioritize certain types of traffic.
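For a single hop, the four delay components simply add up. The short sketch below works through the arithmetic with assumed link parameters (packet size, bandwidth, distance); the numbers are illustrative only.

```python
# Rough per-hop delay estimate for one packet (illustrative numbers, not from the notes).
packet_size_bits   = 1500 * 8        # a 1500-byte packet
link_bandwidth_bps = 10 * 10**6      # a 10 Mbps link
distance_m         = 2000 * 10**3    # 2000 km of cable
propagation_speed  = 2 * 10**8       # roughly 2/3 the speed of light in copper/fibre (m/s)
processing_s       = 0.00005         # assumed router processing time
queuing_s          = 0.001           # assumed time spent waiting in the output queue

transmission_s = packet_size_bits / link_bandwidth_bps   # time to push the bits onto the link
propagation_s  = distance_m / propagation_speed          # time for the bits to travel the link

total_s = transmission_s + propagation_s + processing_s + queuing_s
print(f"per-hop delay = {total_s * 1000:.2f} ms")   # 1.2 + 10 + 0.05 + 1 = 12.25 ms here
```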

Advantages of Packet Switching over Circuit Switching:


• More efficient in terms of bandwidth, since the concept of reserving a
circuit is not there.
• Minimal transmission latency.
• More reliable, as the destination can detect missing packets.
• More fault tolerant because packets may follow a different path if any
link goes down, unlike circuit switching.
• Cost-effective and comparatively cheaper to implement.
Disadvantage of Packet Switching over Circuit Switching:
• Packet Switching doesn’t give packets in order, whereas Circuit Switching
provides ordered delivery of packets because all the packets follow the
same path.
• Since the packets are unordered, we need to provide sequence numbers for
each packet.
• Complexity is more at each node because of the facility to follow multiple
paths.
• Transmission delay is more because of rerouting.
• Packet Switching is beneficial only for small messages, but for bursty data
(large messages) Circuit Switching is better.
Modes of Packet Switching:

1. Connection-oriented Packet Switching (Virtual Circuit):


Before starting the transmission, a logical path or virtual connection is
established between sender and receiver, and all packets belonging to this
flow follow this predefined route.
2. Connectionless Packet Switching (Datagram):
In datagram packet switching, each packet is treated independently. Packets
belonging to one flow may take different routes because routing decisions are
made dynamically, so packets may arrive at the destination out of order. There
is no connection setup and teardown phase, unlike virtual circuits.
(Diagrams Previous Units)

Internet protocol (IP)

• IP stands for Internet Protocol. It is a protocol defined in the TCP/IP
model and is used for sending packets from source to destination.
• The main task of IP is to deliver packets from the source to the
destination based on the IP addresses carried in the packet headers.
• IP provides a connectionless service and is normally accompanied by one of
two transport protocols, TCP or UDP, which is why the protocol suite is
commonly referred to as TCP/IP or UDP/IP.
• The first version of IP was IPv4. After IPv4, IPv6 came into the market and
has been increasingly used on the public internet since 2006. Development of
the protocol started in 1974 with Bob Kahn and Vint Cerf.
• The main functions of the Internet Protocol are to provide addressing to the
hosts, to encapsulate the data into a packet structure, and to route the data
from source to destination across one or more IP networks.
• In order to achieve these functionalities, the Internet Protocol defines two
major things, which are given below.

• Format of IP packet
• IP Addressing system

Format of IP packet
Before an IP packet is sent over the network, two major components are added
to it: a header and a payload.
Version: The first IP header field is a 4-bit version indicator.
Internet Header Length: The Internet Header Length, known as IHL, is 4 bits in
size and gives the header length in 32-bit words.
Type of Service: The Type of Service field is also called Differentiated
Services Code Point or DSCP.
Total Length: The total length of the datagram is measured in bytes. The
minimum size of an IP datagram is 20 bytes and the maximum is 65535 bytes.
Identification: This 16-bit field is used to identify the fragments of an IP
datagram uniquely.
IP Flags: Flags is a 3-bit field that helps to control and identify fragments.
Fragment Offset: The fragment offset represents the number of data bytes ahead
of the particular fragment in the original datagram.
Time to Live: It is an 8-bit field that indicates the maximum time the datagram
may live in the internet system; when the value of TTL reaches zero, the
datagram is discarded.
Protocol: This field indicates which upper-layer protocol (for example, TCP or
UDP) is carried in the data portion of the datagram.
Header Checksum: The next component is a 16-bit header checksum field, which is
used to check the header for errors. The checksum is recomputed and compared
with the value stored in the header; when they do not match, the packet is
discarded.
Source Address: The source address is the 32-bit address of the sender of the
IPv4 packet.
Destination Address: The destination address is also 32 bits in size and stores
the address of the receiver.
IP Options: This optional field contains values and settings related to
security, record route, timestamp, etc. The list of options ends with an End of
Options (EOL) marker in most cases.
Data: This field carries the data handed down from the upper-layer protocol.
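The fixed 20-byte part of the header described above can be unpacked field by field. The following Python sketch builds a sample header with made-up values and parses it; it ignores the optional IP Options field.

```python
# Sketch: unpack the fixed 20-byte IPv4 header described above (no options).
import socket
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    # '!BBHHHBBH4s4s' = version/IHL, DSCP/ECN, total length, identification,
    # flags + fragment offset, TTL, protocol, header checksum, source, destination
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version":         ver_ihl >> 4,
        "ihl_words":       ver_ihl & 0x0F,        # header length in 32-bit words
        "total_length":    total_len,
        "identification":  ident,
        "flags":           flags_frag >> 13,
        "fragment_offset": flags_frag & 0x1FFF,
        "ttl":             ttl,
        "protocol":        proto,                 # 6 = TCP, 17 = UDP
        "source":          socket.inet_ntoa(src),
        "destination":     socket.inet_ntoa(dst),
    }

# A hand-made example header (illustrative values only).
header = struct.pack("!BBHHHBBH4s4s", (4 << 4) | 5, 0, 40, 1234, 0, 64, 6, 0,
                     socket.inet_aton("192.168.1.2"), socket.inet_aton("10.0.0.7"))
print(parse_ipv4_header(header))
```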
IP Routing

• IP routing is the process of determining the path for data to travel from the
source to the destination. The data is divided into multiple packets, and each
packet passes through a series of routers until it reaches the final
destination.
• The path that a data packet follows is determined by the routing algorithm.
The routing algorithm considers various factors, such as the cost of each link,
to determine an efficient route for the data from the source to the
destination.
• When a data packet reaches a router, the destination address in its header is
looked up in the routing table to determine the next hop's address.
• This process continues until the packet reaches the destination. Because the
data is divided into multiple packets, all the packets travel individually to
reach the destination.
IP Addressing

• An IP address is a unique identifier assigned to a computer that is connected
to the internet. Each IP address consists of a series of numbers, like
192.168.1.2.
• Users cannot easily remember or type these numbers for every website, so DNS
resolvers are used to convert human-readable domain names into the
corresponding IP addresses.
• Each IP packet contains two addresses: the IP address of the device that is
sending the packet, and the IP address of the device that is receiving the
packet.
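This name-to-address mapping can be observed with Python's standard socket module, which simply asks the operating system's DNS resolver (the domain used here is just an example).

```python
# A DNS resolver maps a human-readable domain name to an IP address.
# This sketch uses the operating system's resolver via Python's standard library.
import socket

print(socket.gethostbyname("example.com"))   # prints one IPv4 address for the name

# getaddrinfo also returns IPv6 results where they are available:
for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", None):
    print(family.name, sockaddr[0])
```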
IP V4 Address:
• The IPv4 address or the Internet Protocol Address is the fourth version of the
Internet Protocol. IPv4 addresses are 32-bit addresses that are unique to every
host or device on the internet.
• The IP address allows the host to be connected to other devices on the
internet to communicate with them.
• IP addresses are of two types: IPv4 and IPv6. IPv4 addresses are 32 bits long
and IPv6 addresses are 128 bits long. The 'v' in IPv4 and IPv6 stands for
version. The IPv4 address, or Internet Protocol version 4 address, is the
fourth version of the Internet Protocol. It is a unique address.
• The IPv4 address is divided into two parts: the network part and the host part
(also referred to as netid and hostid).
• IPv4 addresses are 32-bit addresses divided into 4 octets (1 octet = 8 bits).
They are usually represented in dotted decimal notation, but binary
representation is another way to represent them.

The IPv4 address works on the network layer which is responsible for the
transmission of data in the form of packets. It is a connectionless protocol.

Characteristics of IPv4 Address

• IPv4 addresses are 32 bits long.
• They are represented in binary, dotted-decimal, or hexadecimal notation. The
most common form is the dotted decimal notation.
• IPv4 uses classful addressing, where the address space is divided into five
classes: Class A, Class B, Class C, Class D, and Class E.
• IPv4 addresses are unique, so two devices on a network can never have the
same IP address.
• An IPv4 address consists of two parts: the network part and the host part.
• The IPv4 base header is 20 bytes long and contains 12 header fields.
• IPv4 is a connectionless protocol.
• IPv4 has 3 modes of addressing- unicast, broadcast, and multicast.
• IPv4 can be assigned manually or by a protocol known as DHCP (Dynamic
Host Configuration Protocol). IPv4 can be unreliable while transmitting
packets.
IP address Classes

An IP address is 32-bit long. An IP address is divided into sub-classes:

• Class A
• Class B
• Class C
• Class D
• Class E

Components of IP Address

There are two components to an IP address:


• Network ID: This part identifies the network to which the host belongs.
• Host ID: This part identifies the particular host (device) within that network.

• In Class A, of the 32-bit address (divided into four sections of 8 bits
each), the leading 8 bits are used to represent the network and the trailing
24 bits are used to represent the hosts on that network.
• For example, 125.16.32.64 is a Class A address, and the range of the first
octet for Class A is 0 to 127.
• In Class B, the leading 16 bits are used to represent the network and the
trailing 16 bits are used to represent the hosts. For example, 136.192.168.64
is a Class B address, and the range of the first octet is 128 to 191.
• In Class C, the leading 24 bits are used to represent the network and the
trailing 8 bits are used to represent the hosts. For example, 193.201.198.23
is a Class C address, and the range of the first octet is 192 to 223.
• In Class D, there is no division into network and host parts. For example,
225.108.162.1 is a Class D address, and the range of the first octet is 224 to
239. These addresses are reserved for multicast groups.
• Class E addresses are similar to Class D addresses in that they are not
divided into network and host parts. The first four bits of the first octet of
a Class E address are always 1111. These addresses are reserved for
experimental and research use.
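The class of an address follows directly from the value of its first octet, as the ranges above show. The short Python sketch below classifies the example addresses used in this list.

```python
# Determine the class of an IPv4 address from its first octet (classful addressing).
def address_class(ip: str) -> str:
    first_octet = int(ip.split(".")[0])
    if 0 <= first_octet <= 127:
        return "A"   # leading bit  0
    elif first_octet <= 191:
        return "B"   # leading bits 10
    elif first_octet <= 223:
        return "C"   # leading bits 110
    elif first_octet <= 239:
        return "D"   # leading bits 1110 (multicast)
    else:
        return "E"   # leading bits 1111 (reserved)

for ip in ["125.16.32.64", "136.192.168.64", "193.201.198.23", "225.108.162.1"]:
    print(ip, "-> Class", address_class(ip))
```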
IPv6 Protocol and Addressing
IPv6 was developed by the Internet Engineering Task Force (IETF) to deal with
the problem of IPv4 address exhaustion. An IPv6 address is 128 bits long,
giving an address space of 2^128, which is far bigger than that of IPv4. IPv6
addresses are written in hexadecimal, with groups separated by colons (:).
Components of the address format:
There are 8 groups, and each group represents 2 bytes (16 bits). Each hex digit
represents 4 bits (1 nibble), and the groups are separated by a delimiter, the
colon (:).
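Python's standard ipaddress module can expand or compress this colon-hexadecimal notation, which is a convenient way to see the 8 groups of 16 bits (the address below is from the documentation range and is only an example).

```python
# Show the 8 groups of 16 bits in an IPv6 address (standard-library ipaddress module).
import ipaddress

addr = ipaddress.IPv6Address("2001:db8::8a2e:370:7334")   # a documentation-range address
print(addr.exploded)      # 2001:0db8:0000:0000:0000:8a2e:0370:7334  (all 8 groups written out)
print(addr.compressed)    # 2001:db8::8a2e:370:7334  (runs of zero groups abbreviated with ::)
print(addr.packed.hex())  # the raw 128 bits as 32 hexadecimal digits
```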

Features of IPV6

There are various features of IPV6, which are as follows−

• Larger address space: An IPv6 address is 128 bits long, compared with the
32-bit address of IPv4. It allows roughly 3.4 x 10^38 unique IP addresses,
whereas IPv4 allows about 4.3 x 10^9 unique addresses.
• Better header format: A new header format has been designed to reduce
overhead. This is done by moving both non-essential fields and optional fields
to extension headers that are placed after the IPv6 base header.
• More functionality: It is designed with more options, such as the priority of
a packet for congestion control, authentication, etc.
• Allowance for extension: It is designed to allow the extension of the
protocol if required by new technologies.
• Support for resource allocation: In IPv6, the type-of-service field has been
removed, but a new mechanism (the flow label) has been added to support traffic
control for flows such as real-time audio and video.
IPV6 Packet Format

The IPv6 packet consists of a compulsory base header followed by the payload.
The payload includes two parts: optional extension headers and the data from
the upper layer.

The base header occupies 40 bytes, while the extension headers and the data
from the upper layer usually contain up to 65,535 bytes of data.
The base header has 8 fields, which are as follows:

• Version: It is a four-bit field that defines the version number of the IP.
The value is 6 for IPv6 and 4 for IPv4.
• Priority: It is a 4-bit field that defines the priority of the packet with
respect to traffic congestion, i.e., whether the packet may be discarded or
not.
• Flow Label: It is a three-byte (24-bit) field designed to provide special
handling for a particular flow of data, to speed up forwarding along an
already established flow path.
• Payload Length: It is a two-byte field that defines the total length of the
IP datagram excluding the base header.
• Next Header: It is an 8-bit field that defines the header that follows the
base header in the datagram, either an extension header or the upper-layer
protocol. In IPv4, this field is called the protocol field.
• Source Address: This is a 16-byte internet address that specifies the
original source of the datagram.
• Destination Address: This is a 16-byte internet address that usually
identifies the final destination of the datagram.

• The new-generation IP address, IPv6, was created primarily to get over IPv4's
limits and exhaustion.
• A 128-bit IPv6 address is made up of eight groups of four hexadecimal
characters each, separated by colons.

Types of IPv6 Addresses

There are three addressing methods available in IPv6 representation:


1. Unicast Address – A unicast address identifies a single network interface;
a packet sent to a unicast address is delivered to that interface only.
2. Multicast Address – A multicast address identifies a group of hosts; a
packet sent to a multicast address is delivered to every member of the group.
3. Anycast Address – An anycast address is assigned to a group of interfaces;
a packet sent to an anycast address is delivered to the nearest member of the
group.
Advantages of IPv6:
1. Real-time data transmission: Real-time data transmission refers to
transmitting data very quickly, with minimal delay. Example: live streaming
services such as cricket matches or other live events.
2. IPv6 supports authentication: Verifying that the data received by the
receiver is exactly what the sender sent, and that it came from the sender only
and not from any third party. Example: matching the hash values of the messages
for verification.
3. IPv6 supports encryption: IPv6 can encrypt the message at the network layer.
4. Faster processing at routers: Routers are able to process IPv6 data packets
much faster because of the simplified base header.

Transition from IPV4 to IPV6


A complete transition from IPv4 to IPv6 might not be possible because IPv6 is
not backward compatible. Various organizations are still working with IPv4
technology, so we cannot switch from IPv4 to IPv6 directly. Transition
therefore means not replacing IPv4 but letting both versions co-exist.

• A request cannot be sent directly from an IPv4 address to an IPv6 address,
because the two versions are not compatible with each other.
• A few technologies can be used to ensure a slow and smooth transition from
IPv4 to IPv6. These technologies are Dual-Stack Routers, Tunneling, and
NAT Protocol Translation.
1. Dual-Stack Routers:
In a dual-stack router, a router's interface is configured with both IPv4 and
IPv6 addresses in order to support the transition from IPv4 to IPv6.

In the diagram above, a server configured with both IPv4 and IPv6 addresses can
communicate with all IPv4 and IPv6 hosts via the dual-stack router (DSR). The
dual-stack router provides a path for all the hosts to communicate with the
server without changing their IP addresses.
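On an end host, the same dual-stack idea appears as a socket that can accept both IPv4 and IPv6 connections. The following is a minimal sketch using Python's standard socket module; the port number is arbitrary, and clearing IPV6_V6ONLY behaves slightly differently across operating systems.

```python
# Sketch: a dual-stack server socket that accepts both IPv4 and IPv6 clients.
# IPv4 clients appear as IPv4-mapped IPv6 addresses (::ffff:a.b.c.d).
import socket

server = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
# Clearing IPV6_V6ONLY lets the IPv6 socket also accept IPv4 connections
# (supported on most operating systems; exact behaviour can vary).
server.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
server.bind(("::", 8080))      # "::" = all IPv6 (and mapped IPv4) interfaces
server.listen()
print("dual-stack server listening on port 8080")
```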
2. Tunneling:
Tunneling is used when the transit (intermediate) network runs a different IP
version from the networks at the two ends.

In the diagram above, both IP versions, IPv4 and IPv6, are present. The IPv4
networks can communicate across the transit or intermediate IPv6 network with
the help of a tunnel. It is also possible for IPv6 networks to communicate
across an IPv4 network with the help of a tunnel.
3. NAT Protocol Translation:
With the help of the NAT Protocol Translation (NAT-PT) technique, IPv4 and IPv6
networks can communicate with each other even though neither understands the
addresses of the other IP version.

In the diagram above, an IPv4 host communicates with an IPv6 host via a NAT-PT
device that translates between the two versions. In this situation, the IPv6
host believes that the request was sent by a host of the same IP version (IPv6)
and responds accordingly.
Mobile IP

Mobile IP is a communication protocol (created by extending the Internet
Protocol, IP) that allows users to move from one network to another while
keeping the same IP address. It ensures that communication continues without
the user's sessions or connections being dropped.
Terminologies:
1. Mobile Node (MN) is the hand-held communication device that the user
carries e.g. Cell phone.
2. Home Network is a network to which the mobile node originally belongs
as per its assigned IP address (home address).
3. Home Agent (HA) is a router in the home network to which the mobile node
was originally connected.
4. Home Address is the permanent IP address assigned to the mobile node
(within its home network).
5. Foreign Network is the network that the mobile node is currently visiting
(away from its home network).
6. Foreign Agent (FA) is a router in a foreign network to which the mobile
node is currently connected. The packets from the home agent are sent to
the foreign agent which delivers them to the mobile node.
7. Correspondent Node (CN) is a device on the internet communicating to
the mobile node.
8. Care-of Address (COA) is the temporary address used by a mobile node
while it is moving away from its home network.
9. Foreign agent COA, the COA could be located at the FA, i.e., the COA is
an IP address of the FA. The FA is the tunnel end-point and forwards
packets to the MN. Many MNs using the same FA can share this COA as a
common COA.
10. Co-located COA, the COA is co-located if the MN temporarily acquired
an additional IP address which acts as COA. This address is now
topologically correct, and the tunnel endpoint is at the MN. Co-located
addresses can be acquired using services such as DHCP.
Working of Mobile IP

The working of Mobile IP can be described in 3 phases:


Agent Discovery: In the agent discovery phase, the mobile node discovers its
foreign and home agents. The home agent and foreign agent advertise their
services on the network using the ICMP Router Discovery Protocol (IRDP).

Registration: In the registration phase, the mobile node informs the home agent
(via the foreign agent) of its current location, i.e., its care-of address, so
that packets can be forwarded correctly.

Tunneling: This phase establishes a virtual connection that acts as a pipe for
moving data packets between a tunnel entry point (the home agent) and a tunnel
endpoint (the care-of address).
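The registration and tunneling phases can be pictured with a toy simulation in which a home agent records the registered care-of address and encapsulates packets towards it. All class names, addresses, and packet fields below are invented for illustration.

```python
# Toy illustration of Mobile IP forwarding: registration followed by tunnelling.
# Addresses and field names are invented for the example.

class HomeAgent:
    def __init__(self):
        self.bindings = {}                      # home address -> care-of address

    def register(self, home_address, care_of_address):
        """Registration phase: the mobile node reports its current COA."""
        self.bindings[home_address] = care_of_address

    def deliver(self, packet):
        """Tunnelling phase: wrap the packet and send it to the COA if the node is away."""
        coa = self.bindings.get(packet["dst"])
        if coa is None:
            return packet                            # node is at home: deliver normally
        return {"outer_dst": coa, "inner": packet}   # IP-in-IP style encapsulation

ha = HomeAgent()
ha.register("192.168.10.5", "172.16.99.20")          # MN's home address -> foreign agent COA
print(ha.deliver({"src": "203.0.113.9", "dst": "192.168.10.5", "data": "hello"}))
```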

Applications of Mobile IP

• Mobile IP technology is used in many applications where sudden changes in
network connectivity and IP address would otherwise cause problems. It was
designed to support seamless and continuous Internet connectivity.
• It is used in many wired and wireless environments where users have to carry
their mobile devices across multiple LAN subnets.
• Although Mobile IP is not required within cellular systems such as 3G, it is
often used in 3G systems to provide seamless IP mobility between different
packet data serving node (PDSN) domains.

Link State Routing


Unicast Routing
Unicast routing is a type of network routing in which a packet is sent from one
source to one specific destination. It is a one-to-one communication model.
Various types of protocols are used in unicast routing. Among them are:
1. Distance-vector routing protocols: These protocols determine the best
path to a destination by using a metric (such as hop count) learned from
their neighbours.
2. Link state routing protocols: These protocols distribute information about
each router and the links between them. This allows the routing devices to
calculate the best path to a destination based on the most up-to-date
information.
3. Path-vector routing protocols: These protocols are similar to
distance-vector protocols, but they advertise the entire path to each
destination.
4. Hybrid routing protocols: These protocols combine features of both
distance-vector and link state routing protocols.
Here we discuss only the link state and distance vector routing techniques.
Link state routing is a technique in which each router shares the knowledge of
its neighbourhood with every other router in the internetwork.
The link state routing algorithm is an interior protocol used by every router
to share information about the rest of the routers on the network.
The three key points to understand the link state routing algorithm are:
o Knowledge about the neighbourhood: Instead of sending its entire routing
table, a router sends information about its neighbourhood only.
o Flooding: Each router sends the information to every other router on the
internetwork, not only to its neighbours. This process is known as flooding.
o Information sharing: A router sends the information to every other router
only when a change occurs in the information.
Link state routing has two phases:
1. Reliable Flooding: As discussed, a router shares its information using the
flooding technique. In this first phase, the information about neighbours is
gathered and transmitted. It is described in terms of two states: the initial
state and the final state.
o Initial State: In the initial state of reliable flooding, each router gets to
know the cost of connection of its neighbours.
o Final State: In the final state of the reliable flooding, the information
about the entire router network (graph) is known by each router.
2. Route Calculation: In the second phase, i.e., the route calculation, every
router uses the shortest path computation algorithm like Dijkstra's
algorithm to calculate the cheapest i.e., most optimal routes to every router.

Features of Link State Routing Protocols


• Link State Packet: A small packet that contains routing information.
• Link-State Database: A collection of information gathered from the link-
state packet.
• Shortest Path First Algorithm (Dijkstra Algorithm): A calculation
performed on the database results in the shortest path.
• Routing Table: A list of known paths and interfaces.
Calculation of Shortest Path
To find the shortest path, each node runs the well-known Dijkstra algorithm.
Let us understand how we can find the shortest path using an example.
Illustration
Input: src = 0; the graph is shown in the accompanying figure.

(The step-by-step figures show the shortest-path tree being built: at each step
only the vertices with finite distance values are drawn, and the vertices
already included in the SPT are shown in green.)
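In code, the route calculation is Dijkstra's algorithm run over the link-state database. A compact Python sketch follows; the example graph and its link costs are illustrative and are not the graph shown in the figure.

```python
# Dijkstra's shortest-path-first computation on a link-state database.
# The example graph is illustrative; edge weights are link costs.
import heapq

def dijkstra(graph, src):
    dist = {node: float("inf") for node in graph}
    dist[src] = 0
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                        # stale heap entry, skip it
        for v, cost in graph[u].items():
            if d + cost < dist[v]:
                dist[v] = d + cost
                heapq.heappush(heap, (dist[v], v))
    return dist

graph = {
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 3, "D": 1},
    "C": {"A": 5, "B": 3, "D": 7},
    "D": {"B": 1, "C": 7},
}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 2, 'C': 5, 'D': 3}
```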

Protocols of Link State Routing


1. Open Shortest Path First (OSPF)
2. Intermediate System to Intermediate System (IS-IS)
Open Shortest Path First (OSPF): Open Shortest Path First (OSPF) is a unicast
routing protocol developed by a working group of the Internet Engineering Task
Force (IETF). It is an intra-domain routing protocol based on an open standard.
Like the Routing Information Protocol (RIP), it is used for routing within a
single autonomous system.
Intermediate System to Intermediate System (IS-IS): Intermediate System
to Intermediate System is a standardized link-state protocol that was developed
as the definitive routing protocol for the OSI Model. IS-IS doesn’t require IP
connectivity between the routers as updates are sent via CLNS instead of IP.
Hierarchical Routing
Because of the global nature of the Internet, it becomes difficult to
centralize system management and operation. For this reason, the system must be
hierarchical, i.e., organized into multiple levels, with several groups of
networks connected with one another at each level. Therefore, hierarchical
routing is commonly used for such a system.
1. A set of networks interconnected by routers within a specific area using the
same routing protocol is called a domain.
2. Two or more domains may be further combined to form a higher-order domain.
3. A router within a specific domain is called an intra-domain router. A router
connecting domains is called an inter-domain router.
4. A network composed of inter-domain routers is called a backbone.
Each domain, also called an operation domain, corresponds to one of the
organizations in charge of operating part of the system. Domains are determined
according to the territory occupied by each organization.

Routing protocol in such an Internet system can be broadly divided into two
types:

1. Intra-domain routing
2. Inter-domain routing
• Each of these protocols is hierarchically organized. For communication
within a domain, only the former routing is used. However, both of them
are used for communication between two or more domains.
• In the following pages, we will look at descriptions of the Routing
Information Protocol (RIP), Open Shortest Path First (OSPF), and IS-IS,
which are intra-domain protocols. RIP and OSPF will be covered later in
detail.
• Two algorithms, Distance-Vector Protocol and Link-State Protocol, are
available to update contents of routing tables.

Distance Vector Routing Algorithm


In distance-vector routing (DVR), each router is required to inform its
neighbouring routers of topology changes periodically. Historically it is
known as the old ARPANET routing algorithm or the Bellman-Ford algorithm.
• In DVR, each router maintains a routing table containing one entry for each
possible destination router. Each entry contains two parts: the preferred
outgoing line to use for that destination and an estimate of the distance
(e.g., delay) to it. Tables are updated by exchanging information with the
neighbouring nodes.
• Each router knows the delay in reaching its neighbours (for example, by
sending an echo request). Routers periodically exchange routing tables with
each of their neighbours.
• A router compares the delay in its local table with the delay advertised in a
neighbour's table plus the cost of reaching that neighbour. If the path via the
neighbour has a lower cost, the router updates its local table to forward
packets to that neighbour.

Example − Distance Vector Router Protocol

In the network shown below, there are three routers, A, B, and C, with the
following weights − AB =2, BC =3 and CA =5.

Step 1 − In this DVR network, each router shares its routing table with every
neighbor. For example, A will share its routing table with neighbors B and C
and neighbors B and C will share their routing table with A.
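One round of the distance-vector update can be written out directly: router A combines each neighbour's advertised table with the cost of reaching that neighbour and keeps the cheaper estimate. The sketch below uses the three-router example above (AB = 2, BC = 3, CA = 5).

```python
# One distance-vector (Bellman-Ford) update at router A for the example above.
# A's direct link costs: A-B = 2, A-C = 5.
link_cost = {"B": 2, "C": 5}

# A's current estimates and the tables advertised by its neighbours.
table_A = {"A": 0, "B": 2, "C": 5}
neighbour_tables = {
    "B": {"A": 2, "B": 0, "C": 3},
    "C": {"A": 5, "B": 3, "C": 0},
}

for neighbour, advertised in neighbour_tables.items():
    for destination, cost in advertised.items():
        candidate = link_cost[neighbour] + cost        # cost of going via this neighbour
        if candidate < table_A.get(destination, float("inf")):
            table_A[destination] = candidate

print(table_A)   # {'A': 0, 'B': 2, 'C': 5} -- A->C stays 5, since 2 + 3 = 5 is not cheaper
```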
Congestion Control: Approaches to Congestion Control
Congestion control refers to the techniques used to control or prevent
congestion. Congestion control techniques can be broadly classified into two
categories:

Open Loop Congestion Control


Open loop congestion control policies are applied to prevent congestion before
it happens. The congestion control is handled either by the source or the
destination.

Policies adopted by open loop congestion control –

1. Retransmission Policy :
This policy governs the retransmission of packets. If the sender feels that a
sent packet is lost or corrupted, the packet needs to be retransmitted.
Retransmission may increase the congestion in the network, so retransmission
timers must be designed to prevent congestion while still optimizing
efficiency.

2. Window Policy :
The type of window used at the sender's side may also affect congestion. With
a Go-Back-N window, several packets are re-sent even though some of them may
have been received successfully at the receiver side. This duplication may
increase the congestion in the network and make it worse. Therefore, a
Selective Repeat window should be adopted, as it resends only the specific
packets that may have been lost.

3. Discarding Policy :
A good discarding policy allows the routers to prevent congestion by partially
discarding corrupted or less sensitive packets while still maintaining the
quality of the message. In the case of audio file transmission, for example,
routers can discard less sensitive packets to prevent congestion while
maintaining the quality of the audio file.

4. Acknowledgment Policy :
Since acknowledgements are also part of the load in the network, the
acknowledgment policy imposed by the receiver may also affect congestion.
Several approaches can be used to prevent congestion related to
acknowledgments: the receiver may send a cumulative acknowledgement for N
packets rather than acknowledging every single packet, or it may send an
acknowledgment only when it has a packet to send or a timer expires.

5. Admission Policy :
In the admission policy, a mechanism is used to prevent congestion before it
occurs. Switches in a flow should first check the resource requirements of a
network flow before transmitting it further. If there is a chance of
congestion, or there is already congestion in the network, the router should
refuse to establish the virtual circuit connection in order to prevent further
congestion.
All the above policies are adopted to prevent congestion before it happens in
the network.
Closed Loop Congestion Control
Closed loop congestion control techniques are used to treat or alleviate
congestion after it happens. Several techniques are used by different protocols;
some of them are:
1. Backpressure :
Backpressure is a technique in which a congested node stops receiving packets
from its upstream node. This may cause the upstream node or nodes to become
congested and in turn reject data from the nodes above them. Backpressure is a
node-to-node congestion control technique that propagates in the direction
opposite to the flow of data. The backpressure technique can be applied only to
virtual circuits, where each node knows its upstream node.
In the diagram, the 3rd node is congested and stops receiving packets; as a
result, the 2nd node may become congested because its output data flow slows
down. Similarly, the 1st node may get congested and inform the source to slow
down.

2. Choke Packet Technique :

A choke packet is a packet sent by a node to the source to inform it of
congestion. Each router monitors its resources and the utilization of each of
its output lines. Whenever the resource utilization exceeds a threshold value
set by the administrator, the router sends a choke packet directly to the
source, giving it feedback to reduce the traffic. The intermediate nodes
through which the packet has travelled are not warned about the congestion. A
simple sketch of this threshold check follows below.
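One way to picture the router's decision is a smoothed estimate of each output line's utilisation compared with the administrator's threshold, as in the sketch below; the smoothing factor, threshold, and sample values are assumptions, not taken from these notes.

```python
# Sketch: deciding when to send a choke packet back to the source.
# The router keeps a smoothed estimate of an output line's utilisation.
ALPHA = 0.5          # weight given to the past estimate (assumed value)
THRESHOLD = 0.75     # utilisation above which a choke packet is sent (set by the administrator)

def update_utilisation(previous_estimate: float, instant_sample: float) -> float:
    """Exponentially weighted average of the line utilisation."""
    return ALPHA * previous_estimate + (1 - ALPHA) * instant_sample

estimate = 0.5
for sample in [0.6, 0.9, 0.95, 0.97]:          # instantaneous utilisation samples
    estimate = update_utilisation(estimate, sample)
    if estimate > THRESHOLD:
        print(f"utilisation {estimate:.2f} > {THRESHOLD}: send choke packet to the source")
    else:
        print(f"utilisation {estimate:.2f}: no action")
```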

3. Implicit Signaling :
In implicit signaling, there is no communication between the congested node(s)
and the source. The source guesses that there is congestion in the network. For
example, when a sender sends several packets and receives no acknowledgment for
a while, one assumption is that the network is congested.
4. Explicit Signaling :
In explicit signaling, if a node experiences congestion, it can explicitly send
a signal to the source or destination to inform it about the congestion. The
difference between the choke packet technique and explicit signaling is that in
explicit signaling the signal is included in the packets that carry data,
rather than being carried in a separate packet as in the choke packet
technique.
Explicit signaling can occur in either the forward or the backward direction.
• Forward Signaling : In forward signaling, a signal is sent in the direction
of the congestion. The destination is warned about the congestion, and the
receiver adopts policies to prevent further congestion.
• Backward Signaling : In backward signaling, a signal is sent in the direction
opposite to the congestion. The source is warned about the congestion and needs
to slow down.
Traffic Aware Routing
Whenever there is congestion in the network, one strategy is network-wide
traffic-aware routing. Congestion can be avoided by designing a network that is
well suited to the traffic it transports; congestion develops when more traffic
is directed at a link than its bandwidth can carry.
Traffic-aware routing's main objective is to choose the optimum routes by
taking the load into account. It does this by setting the link weight to be a
function of the fixed link bandwidth and propagation delay as well as the
variable measured load or average queuing delay.
Traffic-aware routing is represented diagrammatically as follows.

Step 1 − Consider a network which is divided into two parts, East and West,
connected by the links CF and EI.

Step 2 − Suppose most of the traffic between East and West uses link CF; as a
result, the CF link is heavily loaded and has long delays. Including the
queueing delay in the weight used for the shortest path calculation will make
EI more attractive.
Step 3 − After the new routing tables are installed, most of the East-West
traffic will go over the EI link. As a result, in the next update the CF link
will appear to be the shortest path.
Step 4 − Consequently, the routing tables may oscillate widely, leading to
erratic routing and many potential problems.
Step 5 − If we consider only bandwidth and propagation delay and ignore the
load, this problem does not occur. Attempts to include the load but change the
weights only within a narrow range merely slow down the routing oscillations.
Step 6 − Two techniques can contribute to a successful solution, which are as
follows −

• Multipath routing
• Shifting the traffic across routes slowly within the routing scheme.

The features of traffic aware routing are as follows −


• It is one of the congestion control techniques.
• To make the most of the existing network capacity, routes can be tailored to
traffic patterns that change during the day as network users wake and sleep in
different time zones.
• Routes can be changed to shift traffic away from heavily used paths.
• Network traffic can be split across multiple paths.
What is Traffic Throttling?
• Traffic throttling is an approach used to avoid congestion. In networks
and the internet, senders try to send as much traffic as the network can
readily deliver.
• When congestion is approaching, the network should tell the senders of
packets to slow down.
• Traffic throttling can be used in both virtual-circuit networks and datagram
networks. Various approaches are used for throttling traffic, and each
approach must solve two problems:
• Problem 1: The router must be able to determine when congestion is
approaching; ideally it must identify congestion before it has fully arrived.
• Problem 2: The router must deliver timely feedback to the senders that are
creating the congestion.
Feedback Mechanisms

1. Choke Packets
Choke packets are a mechanism in which the router sends a choke packet directly
back to the sender (host) of the traffic.

2. Explicit Congestion Notification
In the explicit congestion notification (ECN) approach, the router does not
send extra packets to the host; instead, it sets a bit in the header of a
packet it forwards to signal that the network is approaching congestion, and
the destination echoes this signal back to the sender.

3. Hop-by-Hop Backpressure
After congestion has been signalled, many packets sent from distant senders may
still arrive, because the feedback signal takes time to reach them. The main
aim of the hop-by-hop backpressure technique is to provide faster relief at the
point of congestion by having each hop along the path slow down as the signal
propagates back towards the source.
Traffic Shaping
Traffic shaping is used to control the bandwidth of the network to ensure
quality of service for business-critical applications. It can be applied at:
1. Port group level
2. Virtual or distributed virtual switch
This technique uses three parameters to shape the flow of network traffic :
1. Burst size
2. Average bandwidth
3. Peak bandwidth
These are explained below.
1. Burst Size :
When the workload is greater than the average bandwidth, it is known as a
burst. The maximum number of bytes permitted to move in a burst is defined by
the burst size:
Burst Size = Time * Bandwidth
The bandwidth can increase up to the peak bandwidth. For a given burst size,
the available bandwidth and the time a burst can last are inversely
proportional: the greater the bandwidth, the shorter the burst can last, and
vice versa. If a particular burst is greater than the configured burst size,
the remaining frames are queued for later transmission; the frames are
discarded if the queue is full.
2. Average Bandwidth :
It is configured to set the permitted bits per second across a port group or a
virtual/distributed virtual switch, averaged over time. It is the rate of data
transfer permitted over time.
3. Peak Bandwidth :
It decides the maximum number of bits per second permitted across a port group
or a virtual/distributed virtual switch without discarding or queuing the
frames.
Peak Bandwidth > Average Bandwidth

Traffic Shaping : A network traffic management technique.


Example :
Suppose we have Burst Size = 3 Kb, Average Bandwidth = 1 Kbps and Peak
Bandwidth = 4 Kbps.
Then a burst transmitted at a data rate of 3 Kbps can last for 1 second
(3 Kb / 3 Kbps = 1 s).
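The burst-size relationship can be checked in a couple of lines, and the same limits are what a token-bucket shaper enforces in practice. The token bucket is a standard traffic-shaping mechanism that these notes do not spell out, so the sketch below is only illustrative.

```python
# Check the Burst Size = Time x Bandwidth relationship from the example above.
burst_size_kb   = 3      # Kb
burst_rate_kbps = 3      # Kbps
print("burst can last", burst_size_kb / burst_rate_kbps, "second(s)")   # 1.0

# A token-bucket shaper enforces the same limits: tokens accumulate at the
# average bandwidth, and the bucket depth plays the role of the burst size.
class TokenBucket:
    def __init__(self, average_kbps: float, burst_kb: float):
        self.rate = average_kbps          # token refill rate (average bandwidth)
        self.capacity = burst_kb          # bucket depth (burst size)
        self.tokens = burst_kb            # start with a full bucket

    def allow(self, frame_kb: float, elapsed_s: float) -> bool:
        """Refill for the elapsed time, then send the frame only if enough tokens remain."""
        self.tokens = min(self.capacity, self.tokens + self.rate * elapsed_s)
        if frame_kb <= self.tokens:
            self.tokens -= frame_kb
            return True
        return False                      # queue (or discard) the frame instead

bucket = TokenBucket(average_kbps=1, burst_kb=3)
print([bucket.allow(1, elapsed_s=0.1) for _ in range(5)])  # early frames pass, then shaping kicks in
```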
What is Load Shedding in Computer Networks
• A network consists of various devices and resources, each assigned to perform
a particular task. Sometimes the available resources exceed the load to be
handled, and sometimes it is the complete opposite.
• Load shedding is one of the techniques used for congestion control. A network
router contains a buffer that is used to store packets before routing them to
their destination.
• Load shedding is an approach in which packets are discarded when the buffer
is full, according to the strategy implemented at the router. The selection of
which packets to discard is an important task; often packets of lower
importance or older packets are discarded.
Selection of Packets to be Discarded
In the process of load shedding, packets need to be discarded in order to avoid
congestion; deciding which packets to discard is therefore an important
question. Below are the approaches used to select packets for discarding.
1. Random selection of packets
When the router buffer is full, packets are selected randomly for discarding.
The discarded packets may be old, new, important, priority-based, or less
important, so random selection can lead to various disadvantages and problems.
2. Selection of packets based on application
Depending on the application, either new packets or old packets are discarded
by the router. When the application is file transfer, new packets are
discarded; when the application is multimedia, the old packets are discarded.
3. Selection of packets based on priority
The source can mark packets with a priority stating how important each packet
is. Depending on the priority provided by the sender, a packet is either kept
or discarded. The priority can be assigned according to price, the algorithm
and methods used, the functions the packet will perform, and the effect its
loss would have on other tasks.
4. Random early detection
Random early detection (RED) is an approach in which packets are discarded
before the buffer space becomes completely full, so congestion is controlled
earlier. In this approach, the router maintains an average queue length for
each outgoing line. When this average exceeds a set threshold, the router
treats it as a warning of congestion and starts discarding packets, as sketched
below.
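The RED decision can be sketched as a drop probability that rises as the average queue length moves between a lower and an upper threshold. The thresholds, weight, and queue samples below are assumed values for illustration.

```python
# Sketch of a RED-style early-drop decision (threshold values are illustrative).
import random

MIN_TH, MAX_TH = 20, 60     # queue-length thresholds (packets)
MAX_DROP_P = 0.1            # drop probability when the average reaches MAX_TH
WEIGHT = 0.2                # weight for the moving average of the queue length

avg_queue = 0.0

def on_packet_arrival(current_queue_len: int) -> bool:
    """Return True if the arriving packet should be dropped."""
    global avg_queue
    avg_queue = (1 - WEIGHT) * avg_queue + WEIGHT * current_queue_len
    if avg_queue < MIN_TH:
        return False                                  # no congestion expected: accept
    if avg_queue >= MAX_TH:
        return True                                   # treat the queue as full: always drop
    drop_p = MAX_DROP_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < drop_p                   # drop with increasing probability

for qlen in [5, 15, 30, 50, 70, 90]:
    print(f"queue={qlen:3d}  drop={on_packet_arrival(qlen)}")
```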
Advantages of Load Shedding
• Using the load shedding technique can help to recover from congestion.
• Load shedding technique reduces the flow of network traffic.
• It discards the packet from the network before congestion occurs
• Load shedding maintains a synchronized flow of packets in the network.
Disadvantages of Load Shedding
• If the buffer size is very small, more packets are discarded.
• It is an overhead for the router to continuously check whether the buffer has
become full.
• Load shedding can sometimes discard important packets, for example when they
are treated as old packets.
• Load shedding cannot completely guarantee the avoidance of congestion.
