5 - Congestion Control Algorithms

Congestion Control Algorithms: Approaches to Congestion Control - Traffic-Aware Routing, Admission Control, Traffic Throttling, Load Shedding; IP Addressing, Classless and Classful Addressing, Subnetting.
Application Layer: The Domain Name System - The DNS Name Space, Resource Records, Name Servers; Electronic Mail - Architecture and Services, The User Agent, Message Formats, Message Transfer, Final Delivery.

Congestion Control
Congestion Control in Computer Networks
What is congestion?
Congestion is a state occurring in the network layer when the message
traffic is so heavy that it slows down network response time.

Effects of Congestion
 As delay increases, performance decreases.
 If delay increases, retransmissions occur, making the situation
worse.

Congestion control algorithms


 Congestion Control is a mechanism that controls the entry of
data packets into the network, enabling a better use of a shared
network infrastructure and avoiding congestive collapse.
 Congestion-avoidance algorithms (CAA) are implemented at the
TCP layer as the mechanism to avoid congestive collapse in a
network.
 There are two congestion control algorithms, which are as follows:

 Leaky Bucket Algorithm


 The leaky bucket algorithm finds its use in the context of
network traffic shaping or rate-limiting.
 Leaky bucket and token bucket implementations are the two
predominantly used traffic shaping algorithms.
 This algorithm is used to control the rate at which traffic is sent
to the network and shape the burst traffic to a steady traffic
stream.
 A disadvantage of the leaky bucket algorithm is inefficient use
of available network resources: because the output rate is fixed,
large amounts of bandwidth may go unused even when they are
available.

Let us consider an example to understand this.

Imagine a bucket with a small hole in the bottom. No matter at what
rate water enters the bucket, the outflow is at a constant rate. When the
bucket is full, any additional water entering spills over the sides
and is lost.

Similarly, each network interface contains a leaky bucket and the


following steps are involved in leaky bucket algorithm:
1. When a host wants to send a packet, the packet is thrown into
the bucket.
2. The bucket leaks at a constant rate, meaning the network
interface transmits packets at a constant rate.
3. Bursty traffic is converted to uniform traffic by the leaky
bucket.
4. In practice the bucket is a finite queue that outputs at a
finite rate.
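The four steps above can be sketched as a small simulation. This is an illustrative toy model only (the function name, tick-based timing, and parameter values are all assumptions, not part of any standard API): packets arrive in bursts, the finite queue holds them up to its capacity, and the output drains at a constant rate, discarding any overflow.

```python
from collections import deque

def leaky_bucket(arrivals, capacity, out_rate):
    """Simulate a leaky bucket.

    arrivals: number of packets arriving at each tick.
    capacity: maximum packets the bucket (finite queue) can hold.
    out_rate: packets transmitted per tick (the constant leak rate).
    Returns (packets sent per tick, total packets dropped).
    """
    queue = deque()
    sent_per_tick = []
    dropped = 0
    for n in arrivals:
        for _ in range(n):                 # packets thrown into the bucket
            if len(queue) < capacity:
                queue.append(1)
            else:
                dropped += 1               # bucket full: water spills over
        sent = 0
        while queue and sent < out_rate:   # leak at a constant rate
            queue.popleft()
            sent += 1
        sent_per_tick.append(sent)
    return sent_per_tick, dropped

# A burst of 5 packets is smoothed into a steady stream of 2 per tick;
# one packet overflows the 4-packet bucket and is lost.
sent, dropped = leaky_bucket([5, 0, 0], capacity=4, out_rate=2)
```

Note how the burst is flattened: regardless of the arrival pattern, the output never exceeds `out_rate` per tick, which is exactly the constant-outflow behaviour of the water-bucket analogy.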
 Token bucket Algorithm
 The leaky bucket algorithm has a rigid output design at an
average rate independent of the bursty traffic.
 In some applications, when large bursts arrive, the output is
allowed to speed up. This calls for a more flexible algorithm,
preferably one that never loses information. Therefore, a token
bucket algorithm finds its uses in network traffic shaping or
rate-limiting.
 It is a control algorithm that indicates when traffic should be
sent, based on the presence of tokens in the bucket.
 The bucket contains tokens. Each token represents a packet of a
predetermined size. A token is removed from the bucket when a
packet is sent.

 When tokens are present in the bucket, a flow may transmit
traffic.
 If there are no tokens, no flow can send its packets. Hence, a flow
can transfer traffic up to its peak burst rate only while tokens
remain in the bucket.

Need of token bucket Algorithm:-

The leaky bucket algorithm enforces an output pattern at the average
rate, no matter how bursty the traffic is. So, in order to deal with
bursty traffic, we need a more flexible algorithm, preferably one that
does not lose data. One such algorithm is the token bucket algorithm.

Steps of this algorithm can be described as follows:

1. At regular intervals, tokens are thrown into the bucket.
2. The bucket has a maximum capacity.
3. If there is a ready packet, a token is removed from the bucket,
and the packet is sent.
4. If there is no token in the bucket, the packet cannot be sent.

Let's understand with an example.

In figure (A) we see a bucket holding three tokens, with five packets
waiting to be transmitted. For a packet to be transmitted, it must
capture and destroy one token. In figure (B) we see that three of the
five packets have gotten through, but the other two are stuck waiting
for more tokens to be generated.
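The token bucket steps can be sketched in the same toy style as before (an illustrative model; names and parameters are assumptions). Tokens accumulate at a fixed rate up to the bucket capacity, and each packet must capture one token, so saved-up tokens let a burst go out at once:

```python
def token_bucket(arrivals, capacity, token_rate):
    """Simulate a token bucket.

    arrivals: packets arriving at each tick.
    capacity: maximum number of tokens the bucket can hold.
    token_rate: tokens thrown into the bucket per tick.
    Returns packets sent at each tick (unsent packets wait in line).
    """
    tokens = capacity          # start with a full bucket of tokens
    waiting = 0
    sent_per_tick = []
    for n in arrivals:
        tokens = min(capacity, tokens + token_rate)  # tokens arrive
        waiting += n
        sent = min(waiting, tokens)  # each packet captures one token
        tokens -= sent
        waiting -= sent
        sent_per_tick.append(sent)
    return sent_per_tick

# Unlike the leaky bucket, an initial burst of 3 packets goes out
# immediately; later packets wait until tokens have accumulated again.
burst = token_bucket([3, 0, 0, 3], capacity=3, token_rate=1)
```

With `token_rate=0` and three tokens, five waiting packets reproduce the figure: three get through and two are stuck waiting.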

Ways in which token bucket is superior to leaky bucket: The
leaky bucket algorithm controls the rate at which packets are
introduced into the network, but it is very conservative in nature. Some
flexibility is introduced in the token bucket algorithm. In the token
bucket algorithm, tokens are generated at each tick (up to a certain
limit). For an incoming packet to be transmitted, it must capture a
token. Hence, bursts of packets can be transmitted back-to-back as
long as tokens are available, which introduces some amount of
flexibility into the system.

How to correct the Congestion Problem:


Congestion control refers to techniques and mechanisms that can
either prevent congestion before it happens, or remove congestion
after it has happened. Congestion control mechanisms are divided
into two categories: one category prevents the congestion from
happening, and the other removes congestion after it has
taken place.

These two categories are:


1. Open loop
2. Closed loop
Open Loop Congestion Control
• In this method, policies are used to prevent the congestion before
it happens.
• Congestion control is handled either by the source or by the
destination.
• The various methods used for open loop congestion control are:
Retransmission Policy
• The sender retransmits a packet, if it feels that the packet it has
sent is lost or corrupted.
• However, retransmission in general may increase congestion in the
network, so a good retransmission policy is needed to prevent
congestion.
• The retransmission policy and the retransmission timers need to be
designed to optimize efficiency and at the same time prevent the
congestion.
Window Policy
• To implement window policy, selective reject window method is used
for congestion control.
• Selective Reject method is preferred over Go-back-n window as in
Go-back-n method, when timer for a packet times out, several
packets are resent, although some may have arrived safely at the
receiver. Thus, this duplication may make congestion worse.
• Selective reject method sends only the specific lost or damaged
packets.

Acknowledgement Policy
• The acknowledgement policy imposed by the receiver may also affect
congestion.

• If the receiver does not acknowledge every packet it receives it may
slow down the sender and help prevent congestion.
• Acknowledgments also add to the traffic load on the network. Thus,
by sending fewer acknowledgements we can reduce load on the
network.
• To implement it, several approaches can be used:
1. A receiver may send an acknowledgement only if it has a packet
to be sent.
2. A receiver may send an acknowledgement when a timer expires.
3. A receiver may also decide to acknowledge only N packets at a time.
Discarding Policy
• A router may discard less sensitive packets when congestion is
likely to happen.
• Such a discarding policy may prevent congestion and at the
same time may not harm the integrity of the transmission.
Admission Policy
• An admission policy, which is a quality-of-service mechanism,
can also prevent congestion in virtual circuit networks.
• Switches in a flow first check the resource requirement of a flow
before admitting it to the network.
• A router can deny establishing a virtual circuit connection if there
is congestion in the network or if there is a possibility of future
congestion.
Closed Loop Congestion Control
• Closed loop congestion control mechanisms try to remove the
congestion after it happens.
• The various methods used for closed loop congestion control are:
Backpressure
• Back pressure is a node-to-node congestion control that starts with
a node and propagates, in the opposite direction of data flow.

• The backpressure technique can be applied only to virtual circuit
networks. In such networks, each node knows the upstream node
from which a data flow is coming.
• In this method of congestion control, the congested node stops
receiving data from the immediate upstream node or nodes.
• This may cause the upstream node or nodes to become congested,
and they, in turn, reject data from their upstream node or nodes.
• As shown in the figure, node 3 is congested: it stops receiving packets
and informs its upstream node 2 to slow down. Node 2, in turn, may
become congested and inform node 1 to slow down. Node 1 may then
become congested and inform the source node to slow down. In this
way the congestion is alleviated: the pressure on node 3 is
moved backward to the source to remove the congestion.
Choke Packet
• In this method of congestion control, congested router or node sends a
special type of packet called choke packet to the source to inform it
about the congestion.
• Here, congested node does not inform its upstream node about the
congestion as in backpressure method.
• In the choke packet method, the congested node sends a warning
directly to the source station; the intermediate nodes through which
the packet has traveled are not warned.

Implicit Signaling
• In implicit signaling, there is no communication between the
congested node or nodes and the source.
• The source guesses that there is congestion somewhere in the
network when it does not receive any acknowledgment. Therefore the
delay in receiving an acknowledgment is interpreted as congestion in
the network.
• On sensing this congestion, the source slows down.
• This type of congestion control policy is used by TCP.
Explicit Signaling
• In this method, the congested nodes explicitly send a signal to the
source or destination to inform about the congestion.
• Explicit signaling is different from the choke packet method. In the
choke packet method, a separate packet is used for this purpose,
whereas in explicit signaling the signal is included in the
packets that carry data.

• Explicit signaling can occur in either the forward direction or the
backward direction.
• In backward signaling, a bit is set in a packet moving in the
direction opposite to the congestion. This bit warns the source about
the congestion and informs the source to slow down.
• In forward signaling, a bit is set in a packet moving in the direction
of congestion. This bit warns the destination about the congestion.
The receiver in this case uses policies such as
slowing down the acknowledgements to remove the congestion.

What are different approaches to congestion control?


The presence of congestion means the load is greater than the
resources available in the network to handle it. The obvious ways to
reduce congestion are to increase the resources or decrease the load,
but neither is always practical.
Approaches to Congestion Control
There are some approaches for congestion control over a network
which are usually applied on different time scales to either prevent
congestion or react to it once it has occurred.

Let us understand these approaches step wise as mentioned below −
Step 1 − The basic way to avoid congestion is to build a network that is
well matched to the traffic that it carries. If more traffic is directed but a
low-bandwidth link is available, definitely congestion occurs.
Step 2 − Sometimes resources such as routers and links can be added
dynamically when there is serious congestion. This is called
provisioning, and it happens on a timescale of months, driven
by long-term trends.
Step 3 − To make the most of existing network capacity, routes can be
tailored to traffic patterns that change during the day as network
users wake and sleep in different time zones.
Step 4 − Some local radio stations have helicopters flying around
their cities to report on road congestion to make it possible for their
mobile listeners to route their packets (cars) around hotspots. This is
called traffic aware routing.
Step 5 − Sometimes it is not possible to increase capacity. The only
way to reduce the congestion is to decrease the load. In a virtual
circuit network, new connections can be refused if they would cause
the network to become congested. This is called admission control.
Step 6 − Routers can monitor the average load, queueing delay, or
packet loss. In all these cases, the rising number indicates growing
congestion. The network is forced to discard packets that it cannot
deliver. The general name for this is load shedding. A good
technique for choosing which packets to discard can help to prevent
congestion collapse.
Traffic Aware Routing
The first approach we examine is traffic-aware routing. Earlier routing
schemes adapted to changes in topology, but not to changes in load.
The goal is to take load into account: to make the most of existing
network capacity, routes can be tailored to traffic patterns that
change during the day as network users wake and sleep in different
time zones.
Example :
Routes may be changed to shift traffic away from heavily used paths
by changing the shortest-path weights. Some radio stations have
helicopters flying around their cities to report on road congestion,
making it possible for their mobile listeners to route their packets
(cars) around hotspots. This is called traffic-aware routing. Splitting
traffic across multiple paths is also helpful. Routing in the early
Internet followed this model.

Description of diagram:
Consider the network of the figure −
 The network is divided into two parts, East and West, connected
by two links, CF and EL.
 Suppose most of the traffic between East and West uses link CF;
as a result, this link is heavily loaded, with long delays.
 Including queueing delay in the weight used for the shortest-path
calculation makes EL more attractive.
 After the new routing tables have been installed, most of the
east-west traffic will go over EL, loading that link instead, so CF
will appear to be the shortest path.
As a result, the routes may oscillate widely, leading to erratic routing
and many potential problems. If load is ignored and only bandwidth
and propagation delay are considered, this problem does not occur.
Attempts to include load while changing weights only within a narrow
range merely slow down the routing oscillations. Two techniques can
contribute to a successful solution. The first is multipath routing, in
which there can be multiple paths from source to destination.
Diagram :

Features
The features of traffic-aware routing are as follows −
 It is one of the congestion control techniques.
 Routes can be altered according to traffic patterns, because these
patterns change during the day as network users wake and sleep
in different time zones.
 Routes can be changed to shift traffic away from heavily used
paths.
 Traffic can be split across multiple paths.


What is an admission control approach for congestion control?


The presence of congestion means the load is greater than the
resources available in the network to handle it. The obvious ways to
reduce congestion are to increase the resources or decrease the load,
but neither is always practical.
There are some approaches for congestion control over a network
which are usually applied on different time scales to either prevent
congestion or react to it once it has occurred.

Admission Control
It is one of the techniques widely used in virtual-circuit networks
to keep congestion at bay. The idea is simple: do not set up a new
virtual circuit unless the network can carry the added traffic without
becoming congested.
Admission control can also be combined with traffic aware routing
by considering routes around traffic hotspots as part of the setup
procedure.

Example
Consider two figures: (a) a congested network, and (b) the portion of
the network that is not congested. A virtual circuit from A to B is also
shown below −

Explanation
Step 1 − Suppose a host attached to router A wants to set up a
connection to a host attached to router B. Normally this connection
passes through one of the congested routers.
Step 2 − To avoid this situation, we can redraw the network as shown in
figure (b), removing the congested routers and all of their lines.
Step 3 − The dashed line indicates a possible route for the virtual
circuit that avoids the congested routers.
What is Traffic Throttling in computer networks?
Traffic throttling is one of the approaches to congestion control. In
the Internet and other computer networks, senders adjust their
transmissions to send as much traffic as the network can readily
deliver. In this setting, the network aims to operate just before the
onset of congestion.
There are some approaches to throttling traffic that can be used in
both datagram and virtual-circuit networks.
Each approach has to solve two problems −
First
Routers have to determine when congestion is approaching, ideally
before it has arrived. To do so, each router can continuously monitor
the resources it is using.
There are three possibilities, which are as follows −
 Utilisation of output links.
 Buffering of queued packets inside the router.
 The number of packets lost due to insufficient buffering.
Second

Averages of utilization do not directly account for the burstiness of
most traffic; queueing delay inside routers, however, directly captures
any congestion experienced by packets.
To maintain a good estimate of the queueing delay d, a sample of the
instantaneous queue length s can be taken periodically and d updated
according to

d_new = α · d_old + (1 − α) · s

where the constant α determines how fast the router forgets recent
history. This is called an EWMA (Exponentially Weighted Moving
Average). It smooths out fluctuations and is equivalent to a low-pass
filter. Whenever d moves above the threshold, the router notes the
onset of congestion.
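The EWMA update rule translates directly into code. Below is a minimal sketch (the function name and the sample values are illustrative; α = 0.5 is an arbitrary choice to make the arithmetic easy to follow):

```python
def ewma(samples, alpha=0.5, d0=0.0):
    """Exponentially weighted moving average of queue-length samples.

    Applies d_new = alpha * d_old + (1 - alpha) * s for each sample s.
    A larger alpha means the router forgets recent history more slowly.
    Returns the sequence of smoothed estimates.
    """
    d = d0
    history = []
    for s in samples:
        d = alpha * d + (1 - alpha) * s
        history.append(d)
    return history

# A sudden spike in queue length is smoothed out and decays gradually,
# which is the low-pass-filter behaviour described above.
smoothed = ewma([0, 10, 0, 0], alpha=0.5)
```

A router would compare each smoothed value against its congestion threshold rather than reacting to every instantaneous spike.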
Routers must deliver timely feedback to the senders that are causing
the congestion. Routers must also identify the appropriate senders. It
must then warn carefully, without sending many more packets into
an already congested network.
There are many feedback mechanisms one of them is as follows −
Explicit Congestion Notification (ECN)
The Explicit Congestion Notification (ECN) is diagrammatically
represented as follows −

Explanation of ECN
Step 1 − Instead of generating additional packets to warn of congestion,
a router can tag any packet it forwards by setting a bit in the packet
header to signal that it is experiencing congestion.
Step 2 − When the network delivers the packet, the destination can
note that there is congestion and inform the sender when it sends a
reply packet.
Step 3 − The sender can then throttle its transmissions as before.
Step 4 − This design is called explicit congestion notification and is
mostly used on the Internet.
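The four ECN steps can be sketched as a toy end-to-end loop. This is a simplified illustration, not the real protocol: actual ECN uses two bits in the IP header and TCP-level feedback, while the dictionary fields, threshold, and rate-halving policy here are all assumptions made for the sketch:

```python
QUEUE_THRESHOLD = 5  # router marks packets once its queue exceeds this

def router_forward(packet, queue_len):
    """Step 1: instead of generating an extra packet, the router tags
    the packet it is already forwarding."""
    if queue_len > QUEUE_THRESHOLD:
        packet["ce"] = True  # Congestion Experienced mark
    return packet

def receiver_reply(packet):
    """Step 2: the destination notes the mark and echoes it back to
    the sender in its reply."""
    return {"echo_congestion": packet.get("ce", False)}

def sender_adjust(rate, reply):
    """Step 3: the sender throttles its transmissions on congestion,
    otherwise it keeps its current rate."""
    return rate // 2 if reply["echo_congestion"] else rate

# Congested path: the mark travels forward, the echo travels back,
# and the sender halves its rate.
pkt = router_forward({"data": "x"}, queue_len=9)
new_rate = sender_adjust(10, receiver_reply(pkt))
```

The key property shown here is that no additional packets are injected into an already congested network; the warning rides on traffic that was being sent anyway.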

What is network throttling?
Definition
Have you ever experienced sudden buffering on a video
you were streaming online or a sudden drop in your
download speeds? Chances are you may have experienced
network or bandwidth throttling. Network throttling is
when your Internet Service Provider (ISP) intentionally
limits your internet speeds to imitate low bandwidth
conditions.
Why would an ISP throttle your network?
There are several reasons why your ISP may throttle your
network. Some of these reasons are:
 An ISP may throttle bandwidth to regulate network
traffic and protect users from malicious online
content.
 They may have different pricing packages so users who
pay more get better speeds while the remaining
experience throttling.
 They may limit bandwidth if a user has reached his or
her monthly data cap.
 They may throttle to relieve the network in order to
minimize network congestion.
How to identify network throttling
Network throttling can be spotted in two ways:
1. Most browsers come equipped with tools that allow
users to inspect network activity. These tools can help
confirm if your network is being throttled.
2. Record your internet speed using an internet speed
test and then record again while using a VPN service.
If the VPN speeds are significantly higher than your
original speeds, your network is likely being
throttled.

What is load shedding in computer networks?


The presence of congestion means the load is greater than the
resources available in the network to handle it. The obvious ways to
reduce congestion are to increase the resources or decrease the load,
but neither is always practical.
There are some approaches for congestion control over a
network which are usually applied on different time scales
to either prevent congestion or react to it once it has
occurred.

Now let us see one of the techniques for congestion control


which is termed as load shedding −
Load Shedding
It is one of the approaches to congestion control. A router
contains a buffer to store packets and route them to their
destinations. When the buffer is full, it simply discards some
packets. The packet to discard is chosen according to a policy;
this is called load shedding.
Load shedding may drop old packets in preference to new ones
to avoid congestion; for compressed video, however, dropping
new packets is preferable, because future packets depend on
the earlier full frame.
To implement an intelligent discard policy, applications
must mark their packets to indicate to the network how
important they are. When packets have to be discarded,
routers can first drop packets from the least important
class, then the next most important class, and so on.
Advantages
The advantages of load shedding are given below −
 It can be used in detection of congestion.
 It can recover from congestion.
 It reduces the network traffic flow.
 Synchronised flow of packets across a network.
 Removes packets before congestion occurs.
Disadvantages
The disadvantages of load shedding are given below −
 Packets get lost because of discarding by the router.
 If buffer size is less it results in more
packets to get discarded.
 Cannot ensure congestion avoidance.
 Overhead for the router to always keep on checking
whether the buffer is full.

Congestion Control in Datagram Subnets

Some congestion control approaches that can be used in the
datagram subnet (and also in virtual circuit subnets) are
given below.
1. Choke packets
2. Load shedding
3. Jitter control.
Approach-1: Choke Packets :
 This approach can be used in virtual circuits as well
as in the datagram sub-nets. In this technique, each
router associates a real variable with each of its
output lines.
 This real variable, say u, has a value between 0 and
1, and it indicates the percentage utilization of that
line. If the value of the variable goes above a
threshold, the output line enters a warning state.
 The router will check each newly arriving packet to
see if its output line is in the warning state. if it is
in the warning state then the router will send back
choke packets. Several variations on the congestion
control algorithm have been proposed depending on
the value of thresholds.
 Depending upon the threshold value, the choke
packets can contain a mild warning, a stern warning,
or an ultimatum. Another variation is to use queue
lengths or buffer utilization, instead of line
utilization, as the deciding factor.
Drawback −
The problem with the choke packet technique is that the action
to be taken by the source host on receiving a choke packet is voluntary
and not compulsory.
Approach-2: Load Shedding :
 Admission control, choke packets, and fair queuing
are the techniques suitable for congestion control. But
if these techniques cannot make the congestion
disappear, then the load-shedding technique is to be
used.
 The principle of load shedding states that when the
router is inundated by packets that it cannot handle,
it should simply throw packets away.
 A router flooded with packets due to congestion can
drop any packet at random. But there are better ways
of doing this.
 The policy for dropping a packet depends on the type
of packet. For a file transfer, an old packet is more
important than a new packet. In contrast, for
multimedia, a new packet is more important than an
old one. So, the policy for file transfer is called wine
(old is better than new), and that for multimedia is
called milk (new is better than old).
 An intelligent discard policy can be decided
depending on the applications. To implement such an
intelligent discard policy, cooperation from the
sender is essential.
 The application should mark their packets in priority
classes to indicate how important they are.
 If this is done, then when packets are to be
discarded, the routers can first drop packets from the
lowest class (i.e., the packets that are least
important), then from the next lowest class, and so
on. One or more header bits are needed to carry a
packet's priority class. In every ATM cell, 1 bit is
reserved in the header for marking the priority: every
ATM cell is labeled either as low priority or high
priority.
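An intelligent discard policy of this kind can be sketched as follows (illustrative only; the tuple representation and the convention that a higher number means more important are assumptions of this sketch, loosely mirroring the 1-bit priority in ATM cells):

```python
def shed_load(buffer, capacity):
    """Drop lowest-priority packets first until the buffer fits.

    Each packet is a (priority, payload) tuple; a higher priority
    number means more important. Arrival order is preserved among
    the surviving packets.
    """
    overflow = len(buffer) - capacity
    if overflow <= 0:
        return list(buffer)          # nothing needs to be discarded
    # Indices of packets, least important first (stable for ties).
    by_importance = sorted(range(len(buffer)), key=lambda i: buffer[i][0])
    to_drop = set(by_importance[:overflow])
    return [p for i, p in enumerate(buffer) if i not in to_drop]

# Two packets must go: both priority-0 packets are discarded first,
# and the more important packets survive in their original order.
packets = [(1, "a"), (0, "b"), (2, "c"), (0, "d")]
kept = shed_load(packets, capacity=2)
```

A real router would apply this per output queue and mark classes via header bits, but the drop order (lowest class first, then the next, and so on) is the same idea.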
Approach-3: Jitter control :
 Jitter may be defined as the variation in delay for packets
belonging to the same flow. Real-time audio and video cannot
tolerate jitter; on the other hand, jitter does not matter if the
packets are carrying information contained in a file.
 For audio and video transmission, it does not matter whether
packets take 20 ms or 30 ms to reach the destination, provided
the delay remains constant.
 The quality of sound and video is hampered when different
packets experience different delays. Therefore, in practice we
might require that 99% of packets be delivered with a delay
ranging from 24.5 ms to 25.5 ms.
 When a packet arrives at a router, the router checks
whether the packet is behind or ahead of schedule, and
by how much.
 This information is stored in the packet and updated
at every hop. If a packet is ahead of schedule, the
router holds it slightly longer; if it is behind schedule,
the router tries to send it out as quickly as possible.
This helps keep the average delay per packet constant
and avoids jitter.
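The per-hop behaviour just described can be sketched as a toy model (the fixed 25 ms per-hop target and the function name are assumptions made for illustration):

```python
def jitter_hold(arrival_delays, target=25.0):
    """Compute departure delays after jitter control at one hop.

    arrival_delays: per-packet delay accumulated so far, in ms.
    A packet ahead of schedule (delay < target) is held until it is
    back on schedule; a packet behind schedule is forwarded
    immediately (hold time of zero).
    """
    departures = []
    for d in arrival_delays:
        hold = max(0.0, target - d)   # ahead of schedule: wait it out
        departures.append(d + hold)
    return departures

# Packets arriving early are delayed to the 25 ms target; the late
# packet cannot be sped up, but the spread in delays shrinks.
out = jitter_hold([20.0, 25.0, 30.0])
```

Holding early packets narrows the delay distribution, which is exactly what keeps the playback of audio and video smooth.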

What is an IP Address − Definition and Explanation

IP address definition
An IP address is a unique address that identifies a device on
the internet or a local network. IP stands for "Internet
Protocol," which is the set of rules governing the format of
data sent via the internet or local network.
In essence, IP addresses are the identifier that allows
information to be sent between devices on a network: they
contain location information and make devices accessible
for communication. The internet needs a way to
differentiate between different computers, routers, and
websites. IP addresses provide a way of doing so and form
an essential part of how the internet works.
What is an IP Address?
An IP address is a string of numbers separated by periods.
IP addresses are expressed as a set of four numbers — an
example address might be 192.158.1.38. Each number in
the set can range from 0 to 255. So, the full IP addressing
range goes from 0.0.0.0 to 255.255.255.255.
IP addresses are not random. They are mathematically
produced and allocated by the Internet Assigned Numbers
Authority (IANA), a division of the Internet Corporation
for Assigned Names and Numbers (ICANN). ICANN is a
non-profit organization that was established in the United
States in 1998 to help maintain the security of the internet
and allow it to be usable by all. Each time anyone registers
a domain on the internet, they go through a domain name
registrar, who pays a small fee to ICANN to register the
domain.
How do IP addresses work
If you want to understand why a particular device is not
connecting in the way you would expect, or you want to
troubleshoot why your network may not be working, it helps
to understand how IP addresses work.
Internet Protocol works the same way as any other
language, by communicating using set guidelines to pass
information. All devices find, send, and exchange
information with other connected devices using this
protocol. By speaking the same language, any computer in
any location can talk to one another.
The use of IP addresses typically happens behind the
scenes. The process works like this:
1. Your device indirectly connects to the internet by
connecting at first to a network connected to the
internet, which then grants your device access to the
internet.
2. When you are at home, that network will probably be
your Internet Service Provider (ISP). At work, it will be
your company network.
3. Your IP address is assigned to your device by your ISP.
4. Your internet activity goes through the ISP, and they
route it back to you, using your IP address. Since they
are giving you access to the internet, it is their role to
assign an IP address to your device.
5. However, your IP address can change. For example,
turning your modem or router on or off can change it.

Or you can contact your ISP, and they can change it
for you.
6. When you are out and about – for example,
traveling – and you take your device with you, your
home IP address does not come with you. This is
because you will be using another network (Wi-Fi at a
hotel, airport, or coffee shop, etc.) to access the
internet and will be using a different (and temporary) IP
address, assigned to you by the ISP of the hotel,
airport or coffee shop.
As the process implies, there are different types of IP
addresses, which we explore below.
Types of IP addresses
There are different categories of IP addresses, and within each
category, different types.
Consumer IP addresses
Every individual or business with an internet
service plan will have two types of IP addresses: their
private IP addresses and their public IP address. The terms
public and private relate to the network location — that is,
a private IP address is used inside a network, while a
public one is used outside a network.
Private IP addresses
Every device that connects to your internet network has a
private IP address. This includes computers, smartphones,
and tablets but also any Bluetooth-enabled devices like
speakers, printers, or smart TVs. With the growing internet
of things, the number of private IP addresses you have at
home is probably growing. Your router needs a way to
identify these items separately, and many items need a way
to recognize each other. Therefore, your router generates
private IP addresses that are unique identifiers for each
device that differentiate them on the network.
Public IP addresses
A public IP address is the primary address associated with
your whole network. While each connected device has its
own IP address, they are also included within the main IP
address for your network. As described above, your public
IP address is provided to your router by your ISP.
Typically, ISPs have a large pool of IP addresses that they
distribute to their customers. Your public IP address is the
address that all the devices outside your internet network
will use to recognize your network.
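The private/public distinction described above can be checked programmatically. Below is a minimal sketch using Python's standard ipaddress module; the sample addresses are arbitrary examples, not addresses from this text.

```python
import ipaddress

def describe(addr: str) -> str:
    """Classify an IPv4 address as private (used inside a network)
    or public (used outside a network)."""
    ip = ipaddress.ip_address(addr)
    return "private" if ip.is_private else "public"

# A typical router-assigned address falls in the reserved private ranges,
# while an ISP-assigned public address does not.
print(describe("192.168.0.10"))  # private
print(describe("8.8.8.8"))       # public
```

Note that is_private also covers ranges reserved for documentation and loop-back, so it is a convenient superset check for "not routable on the public internet."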
Public IP addresses
Public IP addresses come in two forms – dynamic and static.
Dynamic IP addresses
Dynamic IP addresses change automatically and regularly.
ISPs buy a large pool of IP addresses and assign them
automatically to their customers. Periodically, they re-
assign them and put the older IP addresses back into the
pool to be used for other customers. The rationale for this
approach is to generate cost savings for the ISP.
Automating the regular movement of IP addresses means
they don't have to carry out specific actions to re-establish
a customer's IP address if they move home, for example.
There are security benefits, too, because a changing IP
address makes it harder for criminals to hack into your
network interface.
Static IP addresses
In contrast to dynamic IP addresses, static addresses
remain consistent. Once the network assigns an IP
address, it remains the same. Most individuals and
businesses do not need a static IP address, but for
businesses that plan to host their own server, it is crucial
to have one. This is because a static IP address ensures
that websites and email addresses tied to it will have a
consistent IP address — vital if you want other devices to
be able to find them consistently on the web.
This leads to the next point – which is the two types of
website IP addresses.
There are two types of website IP addresses
For website owners who don't host their own server, and instead rely
on a web hosting package – which is the case for most websites –
there are two types of website IP addresses. These are shared and
dedicated.
Shared IP addresses
Websites that rely on shared hosting plans from web
hosting providers will typically be one of many websites
hosted on the same server. This tends to be the case for
individual websites or SME websites, where traffic
volumes are manageable, and the sites themselves are
limited in terms of the number of pages, etc. Websites
hosted in this way will have shared IP addresses.
Dedicated IP addresses
Some web hosting plans have the option to purchase a
dedicated IP address (or addresses). This can make obtaining
an SSL certificate easier and allows you to run your own File
Transfer Protocol (FTP) server. This makes it easier to share
and transfer files with multiple people within an organization
and allow anonymous FTP sharing options. A dedicated IP
address also allows you to access your website using the IP
address alone rather than the domain name — useful if you
want to build and test it before registering your domain.
Classful Vs Classless Addressing
Introduction of Classful IP Addressing
An IP address is an address having information about how to
reach a specific host, especially outside the LAN. An IPv4
address is a 32-bit unique address, giving an address
space of 2^32. Generally, there are two notations in
which an IP address is written: dotted decimal notation and
hexadecimal notation.
Dotted Decimal Notation: for example, 193.32.216.9.
Hexadecimal Notation: the same address written in hexadecimal is C1.20.D8.09.
Some points to be noted about dotted decimal notation:
1. The value of any segment (byte) is between 0 and
255 (both included).
2. There are no zeroes preceding the value in any
segment (054 is wrong, 54 is correct).
Classful Addressing
The 32 bit IP address is divided into five sub-classes. These are:
 Class A
 Class B
 Class C
 Class D
 Class E
Each of these classes has a valid range of IP addresses.
Classes D and E are reserved for multicast and experimental
purposes respectively. The order of bits in the first octet
determines the class of the IP address. An IPv4 address is
divided into two parts:
 Network ID
 Host ID
The class of an IP address is used to determine the bits used
for the network ID and host ID and the number of total
networks and hosts possible in that particular class. Each
ISP or network administrator assigns an IP address to each
device that is connected to its network.
Note: IP addresses are globally managed by the Internet
Assigned Numbers Authority (IANA) and the regional Internet
registries (RIRs).
Note: While finding the total number of host IP addresses,
2 IP addresses are subtracted from the total count, because
the first IP address of any network is the network number
and the last IP address is reserved as the broadcast IP.
Class A:
IP addresses belonging to class A are assigned to networks
that contain a large number of hosts.
 The network ID is 8 bits long.
 The host ID is 24 bits long.
The higher order bit of the first octet in class A is always
set to 0. The remaining 7 bits in the first octet are used to
determine the network ID. The 24 bits of host ID are used to
determine the host in any network. The default subnet
mask for class A is 255.x.x.x. Therefore, class A has a
total of:
 2^7 - 2 = 126 network IDs (2 addresses are subtracted
because 0.x.x.x and 127.x.x.x are special addresses)
 2^24 - 2 = 16,777,214 host IDs per network
IP addresses belonging to class A range from 1.x.x.x – 126.x.x.x.
Class B:
IP addresses belonging to class B are assigned to networks
that range from medium-sized to large-sized networks.
 The network ID is 16 bits long.
 The host ID is 16 bits long.
The higher order bits of the first octet of IP addresses of
class B are always set to 10. The remaining 14 bits are used
to determine the network ID. The 16 bits of host ID are used
to determine the host in any network. The default subnet
mask for class B is 255.255.x.x. Class B has a total of:
 2^14 = 16,384 network addresses
 2^16 - 2 = 65,534 host addresses
IP addresses belonging to class B range from 128.0.x.x –
191.255.x.x.
Class C:
IP addresses belonging to class C are assigned to small-sized
networks.
 The network ID is 24 bits long.
 The host ID is 8 bits long.
The higher order bits of the first octet of IP addresses of
class C are always set to 110. The remaining 21 bits are
used to determine the network ID. The 8 bits of host ID are used
to determine the host in any network. The default subnet
mask for class C is 255.255.255.x. Class C has a total of:
 2^21 = 2,097,152 network addresses
 2^8 - 2 = 254 host addresses
IP addresses belonging to class C range from
192.0.0.x – 223.255.255.x.
Class D:
IP addresses belonging to class D are reserved for
multicasting. The higher order bits of the first octet of IP
addresses belonging to class D are always set to 1110. The
remaining bits are for the addresses that interested hosts
recognize.
Class D does not possess any subnet mask. IP addresses
belonging to class D range from 224.0.0.0 –
239.255.255.255.
Class E:
IP addresses belonging to class E are reserved for
experimental and research purposes. IP addresses of
class E range from 240.0.0.0 – 255.255.255.254. This class
doesn't have any subnet mask. The higher order bits of the
first octet of class E are always set to 1111.
Range of special IP addresses:
169.254.0.0/16 (169.254.0.0 – 169.254.255.255): link-local addresses
127.0.0.0/8 (127.0.0.0 – 127.255.255.255): loop-back addresses
0.0.0.0/8 (0.0.0.0 – 0.255.255.255): used to communicate within the
current network.
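The class ranges above depend only on the first octet of the address, so they can be sketched as a small classifier. This is an illustrative sketch; the function name ipv4_class is our own.

```python
def ipv4_class(addr: str) -> str:
    """Classify an IPv4 address into classes A-E by its first octet,
    treating 0.x.x.x and 127.x.x.x as special (reserved) addresses."""
    first = int(addr.split(".")[0])
    if first == 0 or first == 127:
        return "special"   # reserved: current network / loop-back
    if first <= 126:
        return "A"
    if first <= 191:
        return "B"
    if first <= 223:
        return "C"
    if first <= 239:
        return "D"
    return "E"

print(ipv4_class("10.1.2.3"))    # A
print(ipv4_class("172.16.0.1"))  # B
print(ipv4_class("224.0.0.5"))   # D
```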
Rules for assigning Host ID:
Host IDs are used to identify a host within a network.
Host IDs are assigned based on the following rules:
 Within any network, the host ID must be unique to
that network.
 A host ID in which all bits are set to 0 cannot
be assigned, because this host ID is used
to represent the network ID of the IP
address.
 A host ID in which all bits are set to 1
cannot be assigned, because this host ID is
reserved as a broadcast address to send
packets to all the hosts present on that
particular network.
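The two reserved host IDs described above (all bits 0 and all bits 1) can be seen directly with Python's standard ipaddress module; 192.168.10.0/24 is an arbitrary example network.

```python
import ipaddress

# The all-zeros host ID is the network address and the all-ones host ID
# is the broadcast address, so neither can be assigned to a host.
net = ipaddress.ip_network("192.168.10.0/24")
print(net.network_address)    # 192.168.10.0   (host bits all 0)
print(net.broadcast_address)  # 192.168.10.255 (host bits all 1)
print(net.num_addresses - 2)  # 254 assignable host IDs
```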
Rules for assigning Network ID:
Hosts that are located on the same physical network are
identified by the network ID, as all hosts on the same physical
network are assigned the same network ID. The network ID is
assigned based on the following rules:
 The network ID cannot start with 127
because 127 belongs to class A address
and is reserved for internal loop-back
functions.
 All bits of network ID set to 1 are reserved
for use as an IP broadcast address and
therefore, cannot be used.
 All bits of network ID set to 0 are used
to denote a specific host on the local
network and are not routed and therefore,
aren‘t used.
Summary of Classful addressing:
Class  Leading bits  Default mask   Address range
A      0             255.0.0.0      1.x.x.x – 126.x.x.x
B      10            255.255.0.0    128.0.x.x – 191.255.x.x
C      110           255.255.255.0  192.0.0.x – 223.255.255.x
D      1110          none           224.0.0.0 – 239.255.255.255
E      1111          none           240.0.0.0 – 255.255.255.254
Problems with Classful Addressing:
The problem with this classful addressing method is that
millions of class A addresses are wasted, many class B
addresses are wasted, and the number of addresses available
in class C is so small that it cannot cater to the needs of
organizations. Class D addresses are used for multicast
routing and are therefore available as a single block only.
Class E addresses are reserved.
Disadvantage of Classful Addressing:
1. Class A with a mask of 255.0.0.0 can support 128
networks, 16,777,216 addresses per network, and a
total of 2,147,483,648 addresses.
2. Class B with a mask of 255.255.0.0 can support
16,384 networks, 65,536 addresses per network, and a
total of 1,073,741,824 addresses.
3. Class C with a mask of 255.255.255.0 can support
2,097,152 networks, 256 addresses per network, and a
total of 536,870,912 addresses.
But what if someone requires 2000 addresses? One way to
address this situation would be to provide the person with a
class B network, but that would result in a waste of many
addresses. Another possible way is to provide multiple class C
networks, but that too can cause a problem, as there would be
too many networks to handle.
To resolve problems like the one mentioned above, CIDR was introduced.
Classless Inter-Domain Routing (CIDR):
CIDR, or Classless Inter-Domain Routing, was introduced in
1993 to replace classful addressing. It allows the use of
VLSM, or Variable Length Subnet Masks.
CIDR notation:
In CIDR, subnet masks are denoted by /X. For example, a
subnet mask of 255.255.255.0 would be denoted by /24. To work
out the CIDR prefix of a subnet mask, we first convert each octet
into its binary value and count the 1 bits. For example, for the
subnet mask 255.255.255.0:
First octet – 255 has 8 binary 1's when converted to binary
Second octet – 255 has 8 binary 1's when converted to binary
Third octet – 255 has 8 binary 1's when converted to binary
Fourth octet – 0 has 0 binary 1's when converted to binary
Therefore, there are 24 binary 1's in total, so the subnet mask is
/24.
While creating a network in CIDR, one has to make sure
that the mask bits are contiguous, i.e. a subnet mask like
10111111.X.X.X can't exist.
With CIDR, we can create Variable Length Subnet Masks,
leading to less wastage of IP addresses. It is not necessary
that the divider between the network and the host portions
falls at an octet boundary. For example, in CIDR a subnet
mask like 255.224.0.0, or
11111111.11100000.00000000.00000000, can exist.
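The bit-counting and contiguity rules above can be sketched as a small converter from dotted-decimal masks to /X prefix lengths; the function name mask_to_prefix is our own.

```python
def mask_to_prefix(mask: str) -> int:
    """Convert a dotted-decimal subnet mask to its /X prefix length,
    rejecting non-contiguous masks such as 10111111.X.X.X."""
    bits = "".join(f"{int(octet):08b}" for octet in mask.split("."))
    ones = bits.count("1")
    # A valid mask is a run of 1s followed by a run of 0s.
    if bits != "1" * ones + "0" * (32 - ones):
        raise ValueError("non-contiguous mask: " + mask)
    return ones

print(mask_to_prefix("255.255.255.0"))  # 24
print(mask_to_prefix("255.224.0.0"))    # 11 (the VLSM example above)
```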
Difference between Classful Addressing and Classless
Addressing
1. Basics: In classful addressing, IP addresses are allocated
according to the classes A to E. Classless addressing came to
replace classful addressing and to handle the rapid
exhaustion of IP addresses.
2. Practicality: Classful addressing is less practical;
classless addressing is more practical.
3. Network ID and Host ID: In classful addressing, the split
between Network ID and Host ID depends on the class. In
classless addressing there is no such restriction of class.
4. VLSM: Classful addressing does not support the Variable
Length Subnet Mask (VLSM); classless addressing supports it.
5. Bandwidth: Classful addressing requires more bandwidth,
making it slower and more expensive than classless
addressing, which requires less bandwidth.
6. CIDR: Classful addressing does not support Classless
Inter-Domain Routing (CIDR); classless addressing supports it.
7. Updates: Classful addressing uses regular or periodic
updates; classless addressing uses triggered updates.
8. Troubleshooting and problem detection: These are easier in
classful addressing than in classless addressing because of
the division of network, host and subnet parts in the address.
9. Division of address: Classful – network, host and subnet;
classless – host and subnet.
What is a subnet?
A subnet, or subnetwork, is a network inside a network. Subnets
make networks more efficient. Through subnetting, network traffic
can travel a shorter distance without passing through
unnecessary routers to reach its destination.
Imagine Alice puts a letter in the mail that is addressed to Bob,
who lives in the town right next to hers. For the letter to reach Bob
as quickly as possible, it should be delivered right from Alice's
post office to the post office in Bob's town, and then to Bob. If the
letter is first sent to a post office hundreds of miles away, Alice's
letter could take a lot longer to reach Bob.
Like the postal service, networks are more efficient when
messages travel as directly as possible. When a network receives
data packets from another network, it will sort and route those
packets by subnet so that the packets do not take an inefficient
route to their destination.
What is an IP address?
In order to understand subnets, we must quickly define IP
addresses. Every device that connects to the Internet is assigned a
unique IP (Internet Protocol) address, enabling data sent over the
Internet to reach the right device out of the billions of devices
connected to the Internet. While computers read IP addresses as
binary code (a series of 1s and 0s), IP addresses are usually
written as a series of alphanumeric characters.
What do the different parts of an IP address mean?
This section focuses on IPv4 addresses, which are presented in the
form of four decimal numbers separated by periods, like
203.0.113.112. (IPv6 addresses are longer and use letters as well
as numbers.)
Every IP address has two parts. The first part indicates which
network the address belongs to. The second part specifies the device
within that network. However, the length of the "first part" changes
depending on the network's class.
Networks are categorized into different classes, labeled A through
E. Class A networks can connect millions of devices. Class B networks and
Class C networks are progressively smaller in size. (Class D and Class E
networks are not commonly used.)
Let's break down how these classes affect IP address construction:
Class A network: Everything before the first period indicates the
network, and everything after it specifies the device within that
network. Using 203.0.113.112 as an example, the network is
indicated by "203" and the device by "0.113.112."
Class B network: Everything before the second period indicates
the network. Again using 203.0.113.112 as an example, "203.0"
indicates the network and "113.112" indicates the device within
that network.
Class C network: For Class C networks, everything before the
third period indicates the network. Using the same example,
"203.0.113" indicates the Class C network, and "112" indicates
the device.
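The class-based split described above can be sketched as a short helper that cuts the address at the octet boundary implied by its class; the function name split_classful is our own.

```python
def split_classful(addr: str):
    """Split an IPv4 address into (network, device) parts at the octet
    boundary implied by its class, as described above."""
    octets = addr.split(".")
    first = int(octets[0])
    if first <= 126:
        n = 1   # class A: network is the first octet
    elif first <= 191:
        n = 2   # class B: network is the first two octets
    else:
        n = 3   # class C: network is the first three octets
    return ".".join(octets[:n]), ".".join(octets[n:])

# The example from the text, 203.0.113.112, is a class C address.
print(split_classful("203.0.113.112"))  # ('203.0.113', '112')
```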
Why is subnetting necessary?
As the previous example illustrates, the way IP addresses are
constructed makes it relatively simple for Internet routers to find
the right network to route data into. However, in a Class A
network (for instance), there could be millions of connected
devices, and it could take some time for the data to find the right
device. This is why subnetting comes in handy: subnetting
narrows down the IP address to usage within a range of devices.
Because an IP address is limited to indicating the network and the
device address, IP addresses cannot be used to indicate which
subnet an IP packet should go to. Routers within a network use
something called a subnet mask to sort data into subnetworks.
What is a subnet mask?
A subnet mask is like an IP address, but for only internal usage
within a network. Routers use subnet masks to route data packets
to the right place. Subnet masks are not indicated within data
packets traversing the Internet — those packets only indicate the
destination IP address, which a router will match with a subnet.
Suppose Bob answers Alice's letter, but he sends his reply to
Alice's place of employment rather than her home. Alice's office is
quite large with many different departments. To ensure employees
receive their correspondence quickly, the administrative team at
Alice's workplace sorts mail by department rather than by individual
employee. After receiving Bob's letter, they look up Alice's
department and see she works in Customer Support. They send the
letter to the Customer Support department instead of to Alice, and
the customer support department gives it to Alice.
In this analogy, "Alice" is like an IP address and "Customer
Support" is like a subnet mask. By matching Alice to her
department, Bob's letter was quickly sorted into the right group of
potential recipients. Without this step, office administrators would
have to spend time laboriously looking for the exact location of
Alice's desk, which could be anywhere in the building.
For a real-world example, suppose an IP packet is addressed to the
IP address 192.0.2.15. This IP address is in a Class C network, so
the network is identified by "192.0.2" (or to be technically precise,
192.0.2.0/24). Network routers forward the packet to a host on the
network indicated by "192.0.2."
Once the packet arrives at that network, a router within the
network consults its routing table. It does some binary
mathematics using its subnet mask of 255.255.255.0, sees the
device address "15" (the rest of the IP address indicates the
network), and calculates which subnet the packet should go
to. It forwards the packet to the router or switch
responsible for delivering packets within that subnet, and the
packet arrives at IP address 192.0.2.15 (learn
more about routers and switches).
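The "binary mathematics" a router performs with its subnet mask is a bitwise AND of the destination address with the mask. A minimal sketch, using the example address from the text (the helper name network_of is our own):

```python
def network_of(addr: str, mask: str) -> str:
    """Bitwise-AND an IPv4 address with a subnet mask to obtain
    the network address, as a router does when sorting packets."""
    to_int = lambda s: int.from_bytes(bytes(int(o) for o in s.split(".")), "big")
    net = to_int(addr) & to_int(mask)
    return ".".join(str(b) for b in net.to_bytes(4, "big"))

# 192.0.2.15 masked with 255.255.255.0 leaves the network 192.0.2.0;
# the remaining "15" identifies the device within that network.
print(network_of("192.0.2.15", "255.255.255.0"))  # 192.0.2.0
```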
Network Application Architecture
Application architecture is different from the network architecture.
The network architecture is fixed and provides a set of services to
applications. The application architecture, on the other hand, is
designed by the application developer and defines how the
application should be structured over the various end systems.
Application architecture is of two types:
o Client-server architecture: An application program
running on the local machine that sends a request to another
application program is known as a client, and a program
that serves a request is known as a server. For example,
when a web server receives a request from the client host, it
responds to the request of the client host.
Characteristics Of Client-server architecture:
o In Client-server architecture, clients do not directly
communicate with each other. For example, in a web
application, two browsers do not directly communicate
with each other.
o A server has a fixed, well-known address known as an IP
address. Because the server is always on, a client can always
contact the server by sending a packet to the server's IP
address.
Disadvantage Of Client-server architecture:
It is a single-server based architecture which is incapable of
holding all the requests from the clients. For example, a social
networking site can become overwhelmed when only one
server exists.
o P2P (peer-to-peer) architecture: It has no dedicated server
in a data center. The peers are the computers which are not
owned by the service provider. Most of the peers reside in
the homes, offices, schools, and universities. The peers
communicate with each other without passing the
information through a dedicated server; this architecture is
known as peer-to-peer architecture. The applications based
on P2P architecture include file sharing and internet
telephony.
Features of P2P architecture
o Self-scalability: In a file sharing system, although each peer
generates a workload by requesting files, each peer also
adds service capacity by distributing files to other peers.
o Cost-effective: It is cost-effective as it does not require
significant server infrastructure and server bandwidth.
Client and Server processes
o A network application consists of a pair of processes that
send the messages to each other over a network.
o In P2P file-sharing system, a file is transferred from a
process in one peer to a process in another peer. We label
one of the two processes as the client and another process
as the server.
o With P2P file sharing, the peer which is downloading the file is
known as the client, and the peer which is uploading the file is
known as the server. However, in some applications such as P2P
file sharing, a process can act as both a client and a server;
that is, a process can both download and upload files.
Introduction:
The Application Layer is the topmost layer in the Open System
Interconnection (OSI) model. This layer provides several ways of
manipulating data (information) which enable any type of user
to access the network with ease. This layer also makes requests
to the layer below it, the Presentation Layer, to receive
various types of information from it. The Application Layer
interface interacts directly with the application and provides
common web application services. This layer is the highest level
of the open system, providing services directly for the
application process.
The seven OSI layers, from top to bottom:
Application Layer
Presentation Layer
Session Layer
Transport Layer
Network Layer
Data Link Layer
Physical Layer
Functions of Application Layer :
The Application Layer, as discussed above, being the topmost
layer in the OSI model, performs several kinds of functions
which are required in any kind of application or communication
process. Following is a list of functions which are performed by
the Application Layer of the OSI Model –
Data from User <=> Application layer <=> Data from Presentation Layer
 Application Layer provides a facility by which users can
forward several emails and it also provides a storage facility.
 This layer allows users to access, retrieve and manage
files in a remote computer.
 It allows users to log on as a remote host.
 This layer provides access to global information about various
services.
 This layer provides services which include: e-mail,
transferring files, distributing results to the user, directory
services, network resources and so on.
 It provides protocols that allow software to send and receive
information and present meaningful data to users.
 It handles issues such as network transparency, resource
allocation and so on.
 This layer serves as a window for users and application
processes to access network services.
 Application Layer is basically not a function, but it performs
application layer functions.
 The application layer is actually an abstraction layer that
specifies the shared protocols and interface methods used by
hosts in a communication network.
 The Application Layer helps us to identify communication
partners and synchronize communication.
 This layer allows users to interact with other software applications.
 In this layer, data is in visual form, which makes users
truly understand data rather than remembering or visualizing
the data in binary format (0's or 1's).
 This application layer basically interacts with Operating
System (OS) and thus further preserves the data in a
suitable manner.
 This layer also receives and preserves data from its previous
layer, which is the Presentation Layer (which carries in itself
the syntax and semantics of the information transmitted).
 The protocols which are used in this application layer depend
upon what information users wish to send or receive.
 This application layer, in general, performs host initialization followed
by remote login to hosts.
Working of Application Layer in the OSI model
The application layer in the OSI model generally acts only as the
interface which is responsible for communicating with host-based and
user-facing applications. This is in contrast with the TCP/IP model,
wherein the two OSI layers just below the Application Layer (the
Session Layer and the Presentation Layer) are clubbed together with
it into a single application layer, which is responsible for functions
including controlling the dialogues between computers, establishing,
maintaining and ending a particular session, and providing data
compression and data encryption.
At first, the client sends a command to the server, and when the
server receives that command, it allocates a port number to the
client. Thereafter, the client sends an initiation connection
request to the server, and when the server receives the request,
it gives an acknowledgement (ACK) to the client. The client has
now successfully established a connection with the server and
therefore has access to the server, through which it may either
ask the server to send any types of files or other documents, or
upload some files or documents to the server itself.
Features provided by Application Layer Protocols :
To ensure smooth communication, application layer protocols are
implemented the same way on both the source host and the
destination host. The following are some of the features which
are provided by Application Layer protocols –
 The Application Layer protocol defines process for both
parties which are involved in communication.
 These protocols define the type of message being sent or
received from any side (either source host or destination
host).
 These protocols also define basic syntax of the message being
forwarded or retrieved.
 These protocols define the way to send a message and the
expected response.
 These protocols also define interaction with the next level.
Application Layer Protocols: The application layer provides
several protocols which allow any software to easily send
and receive information and present meaningful data to its
users. The following are some of the protocols which are
provided by the application layer
 TELNET: Telnet stands for Telecommunications Network.
This protocol is used for accessing and managing devices
remotely over a network. It allows Telnet clients to access
the resources of a Telnet server. Telnet uses port number 23.
 DNS: DNS stands for Domain Name System. The DNS
service translates the domain name (selected by user) into
the corresponding IP address. For example- If you choose
the domain name as www.abcd.com, then DNS must
translate it as 192.36.20.8 (random IP address written just
for understanding purposes). DNS protocol uses the port
number 53.
 DHCP: DHCP stands for Dynamic Host Configuration
Protocol. It provides IP addresses to hosts. Whenever a host
tries to register for an IP address with the DHCP server,
DHCP server provides lots of information to the
corresponding host. DHCP uses port numbers 67 and 68.
 FTP: FTP stands for File Transfer Protocol. This protocol
helps to transfer files from one device to another. FTP
promotes sharing of files via remote computer devices with
reliable, efficient data transfer. FTP uses port number 20 for
data transfer and port number 21 for control.
 SMTP: SMTP stands for Simple Mail Transfer Protocol. It is
used to transfer electronic mail from one user to another
user. SMTP is used by end users to send emails with ease.
SMTP uses port numbers 25 and 587.
 HTTP: HTTP stands for Hyper Text Transfer Protocol. It is
the foundation of the World Wide Web (WWW). HTTP
works on the client server model. This protocol is used for
transmitting hypermedia documents like HTML. This
protocol was designed particularly for the communications
between the web browsers and web servers, but this protocol
can also be used for several other purposes.
 NFS: NFS stands for Network File System. This protocol
allows remote hosts to mount files over a network and interact
with those file systems as though they are mounted locally.
NFS uses the port number 2049.
 SNMP: SNMP stands for Simple Network Management
Protocol. This protocol gathers data by polling the devices
on the network from the management station at fixed or
random intervals.
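As a compact reference, the port numbers mentioned in the list above can be collected into a small lookup table. This is our own summary, not part of the original list; HTTP's port 80 is added as a well-known value the text does not state.

```python
# Well-known ports for the application-layer protocols listed above.
WELL_KNOWN_PORTS = {
    "TELNET": 23,
    "DNS": 53,
    "DHCP": (67, 68),
    "FTP (data)": 20,
    "FTP (control)": 21,
    "SMTP": (25, 587),
    "HTTP": 80,          # assumed; not stated in the text above
    "NFS": 2049,
}

for proto, port in WELL_KNOWN_PORTS.items():
    print(proto, port)
```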