
Network Devices: Network devices, also known as networking hardware, are physical devices that allow hardware on a computer network to communicate and interact with one another. Examples include the repeater, hub, bridge, switch, router, gateway, brouter, and NIC.
1. Repeater – A repeater operates at the physical layer. Its job is to regenerate the signal over the same network before it becomes too weak or corrupted, so as to extend the length over which the signal can be transmitted on that network. An important point to note about repeaters is that they do not merely amplify the signal; they regenerate it. When the signal becomes weak, they copy it bit by bit and regenerate it at its original strength. A repeater is a 2-port device.
2. Hub – A hub is basically a multi-port repeater. A hub connects multiple wires
coming from different branches, for example, the connector in star topology which
connects different stations. Hubs cannot filter data, so data packets are sent to all
connected devices. In other words, the collision domain of all hosts connected
through Hub remains one. Also, they do not have the intelligence to find out the best
path for data packets which leads to inefficiencies and wastage.
Types of Hub
 Active Hub:- These are hubs that have their own power supply and can clean, boost, and relay the signal along the network. An active hub serves both as a repeater and as a wiring center. These are used to extend the maximum distance between nodes.
 Passive Hub:- These are hubs that collect wiring from nodes and draw power from an active hub. They relay signals onto the network without cleaning or boosting them and cannot be used to extend the distance between nodes.
 Intelligent Hub:- It works like an active hub and includes remote
management capabilities. They also provide flexible data rates to
network devices. It also enables an administrator to monitor the traffic
passing through the hub and to configure each port in the hub.
3. Bridge – A bridge operates at the data link layer. A bridge is a repeater with the added functionality of filtering content by reading the MAC addresses of the source and destination. It is also used for interconnecting two LANs working on the same protocol. It has a single input and single output port, thus making it a 2-port device.
Types of Bridges
 Transparent Bridges:- These are bridges in which the stations are completely unaware of the bridge's existence, i.e. whether or not a bridge is added to or deleted from the network, reconfiguration of the stations is unnecessary. These bridges make use of two processes: bridge forwarding and bridge learning.
 Source Routing Bridges:- In these bridges, the routing operation is performed by the source station and the frame specifies which route to follow. The host can discover the route by sending a special frame called a discovery frame, which spreads through the entire network using all possible paths to the destination.
4. Switch – A switch is a multiport bridge with a buffer and a design that can boost its efficiency (a large number of ports imply less traffic) and performance. A switch is a data link layer device. The switch can perform error checking before forwarding data, which makes it very efficient: it does not forward packets that have errors and forwards good packets selectively to the correct port only. In other words, the switch divides the collision domain of hosts, but the broadcast domain remains the same.
Types of Switch
1. Unmanaged switches: These switches have a simple plug-and-play
design and do not offer advanced configuration options. They are
suitable for small networks or for use as an expansion to a larger
network.
2. Managed switches: These switches offer advanced configuration options
such as VLANs, QoS, and link aggregation. They are suitable for larger,
more complex networks and allow for centralized management.
3. Smart switches: These switches have features similar to managed
switches but are typically easier to set up and manage. They are suitable
for small- to medium-sized networks.
4. Layer 2 switches: These switches operate at the Data Link layer of the
OSI model and are responsible for forwarding data between devices on
the same network segment.
5. Layer 3 switches: These switches operate at the Network layer of the OSI
model and can route data between different network segments. They are
more advanced than Layer 2 switches and are often used in larger, more
complex networks.
6. PoE switches: These switches have Power over Ethernet capabilities,
which allows them to supply power to network devices over the same
cable that carries data.
7. Gigabit switches: These switches support Gigabit Ethernet speeds, which
are faster than traditional Ethernet speeds.
8. Rack-mounted switches: These switches are designed to be mounted in
a server rack and are suitable for use in data centers or other large
networks.
9. Desktop switches: These switches are designed for use on a desktop or
in a small office environment and are typically smaller in size than rack-
mounted switches.
10.Modular switches: These switches have a modular design, which allows for easy expansion or customization. They are suitable for large networks and data centers.
5. Routers – A router is a device like a switch that routes data packets
based on their IP addresses. The router is mainly a Network Layer device.
Routers normally connect LANs and WANs and have a dynamically
updating routing table based on which they make decisions on routing
the data packets. The router divides the broadcast domains of hosts
connected through it.
6. Gateway – A gateway, as the name suggests, is a passage to connect two networks that may work upon different networking models. Gateways work as messenger agents that take data from one system, interpret it, and transfer it to another system. Gateways are also called protocol converters and can operate at any layer of the networking model. They are generally more complex than switches or routers.
7. Brouter – Also known as a bridging router, this is a device that combines the features of both a bridge and a router. It can work either at the data link layer or at the network layer. Working as a router, it is capable of routing packets across networks; working as a bridge, it is capable of filtering local area network traffic.
8. NIC – A NIC, or network interface card, is a network adapter used to connect a computer to a network. It is installed in the computer to establish a LAN. It has a unique ID written on a chip and a connector to attach the cable to it. The cable acts as an interface between the computer and the router or modem. The NIC works on both the physical and data link layers of the network model.

Spanning Tree Protocol (STP):


It is a communication protocol operating at the data link layer of the OSI model to prevent bridge loops and the resulting broadcast storms. It creates a loop-free topology for Ethernet networks.

Working Principle
A bridge loop is created when there is more than one path between two nodes in a given network. When a message is sent, particularly when a broadcast is done, the
bridges repeatedly rebroadcast the same message flooding the network. Since a
data link layer frame does not have a time-to-live field in the header, the broadcast
frame may loop forever, thus swamping the channels.
Spanning tree protocol creates a spanning tree by disabling all links that form a loop
or cycle in the network. This leaves exactly one active path between any two nodes
of the network. So when a message is broadcast, there is no way that the same
message can be received from an alternate path. The bridges that participate in
spanning tree protocol are often called spanning tree bridges.
To construct a spanning tree, the bridges broadcast their configuration information. Then
they execute a distributed algorithm for finding out the minimal spanning tree in the
network, i.e. the spanning tree with minimal cost. The links not included in this tree
are disabled but not removed.
In case a particular active link fails, the algorithm is executed again to find the
minimal spanning tree without the failed link. The communication continues through
the newly formed spanning tree. When a failed link is restored, the algorithm is re-
run including the newly restored link.
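
To make the working principle concrete, the Python sketch below centrally computes a loop-free subset of links from a bridge/link graph with a Kruskal-style union-find. This is only an illustration under simplifying assumptions: real STP is a distributed protocol negotiated through BPDU exchange, and the bridge names and link costs here are made up.

# Centralized sketch of choosing a loop-free subset of links.
# Real STP is distributed and driven by BPDU exchange; names and costs
# below are hypothetical.
def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]      # path compression
        x = parent[x]
    return x

def spanning_links(bridges, links):
    """links: list of (cost, bridge_a, bridge_b); returns the enabled links."""
    parent = {b: b for b in bridges}
    enabled = []
    for cost, a, b in sorted(links):       # prefer cheaper links first
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:                       # link does not close a loop
            parent[ra] = rb
            enabled.append((a, b))
        # links that would close a loop stay disabled, not removed
    return enabled

bridges = ["B1", "B2", "B3", "B4"]
links = [(1, "B1", "B2"), (1, "B1", "B3"), (1, "B2", "B3"), (1, "B3", "B4")]
print(spanning_links(bridges, links))      # [('B1', 'B2'), ('B1', 'B3'), ('B3', 'B4')]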

Example
Let us consider a physical topology, as shown in the diagram, for an Ethernet network that comprises six interconnected bridges. The bridges are named {B1,
B2, B3, B4, B5, B6} and several nodes are connected to each bridge. The links
between two bridges are named {L1, L2, L3, L4, L5, L6, L7, L8, L9}, where L1
connects B1 and B2, L2 connects B1 and B3 and so on. It is assumed that all links
are of uniform costs.
From the diagram we can see that there are multiple paths from a bridge to any
other bridge in the network, forming several bridge loops that makes the topology
susceptible to broadcast storms.

According to the spanning tree protocol, links that form a cycle are disabled. Thus, we get a logical topology in which there is exactly one route between any two bridges. One possible logical topology, shown in the diagram below, contains links {L1, L2, L3, L4, L5}.
In the above logical configuration, suppose link L4 fails. The spanning tree is then reconstituted leaving out L4. A possible logical reconfiguration containing links {L1, L2, L3, L5, L9} is as follows −

What is Multiplexing and what are its types?


Multiplexing is the process of combining multiple signals into one signal, over a shared
medium. If analog signals are multiplexed, it is Analog Multiplexing and if digital signals are
multiplexed, that process is Digital Multiplexing.

The process of multiplexing divides a communication channel into several logical channels, allotting each one to a different message signal or data stream to be transferred. The device that does the multiplexing is simply called a MUX, while the one that reverses the process (demultiplexing) is called a DEMUX.
Types of Multiplexers
There are mainly two types of multiplexers, namely analog and digital. They are
further divided into FDM, WDM, and TDM.

Analog Multiplexing
The analog multiplexing techniques involve signals which are analog in nature. The
analog signals are multiplexed according to their frequency (FDM) or wavelength
(WDM).
Frequency Division Multiplexing (FDM)
In analog multiplexing, the most used technique is Frequency Division Multiplexing (FDM). This technique uses various frequencies to combine streams of data for sending them on a communication medium as a single signal.
Example: A traditional television transmitter, which sends a number of channels
through a single cable, uses FDM.
Wavelength Division Multiplexing (WDM)
Wavelength Division Multiplexing is an analog technique, in which many data
streams of different wavelengths are transmitted in the light spectrum. If the
wavelength increases, the frequency of the signal decreases.
Example: Optical fibre communications use the WDM technique to merge different wavelengths into a single light beam for communication.

Digital Multiplexing
The term digital represents the discrete bits of information. Hence the available data
is in the form of frames or packets, which are discrete.
Time Division Multiplexing (TDM)
In TDM, the time frame is divided into slots. This technique is used to transmit a signal over a single communication channel by allotting one slot for each message. Of all the types of TDM, the main ones are Synchronous and Asynchronous TDM.
Synchronous TDM
In Synchronous TDM, each input is connected to a frame. If there are 'n' input connections, then the frame is divided into 'n' time slots. One slot is allocated for
each input line. In this technique, the sampling rate is common to all signals and
hence same clock input is given. The mux allocates the same slot to each device at
all times.
Asynchronous TDM
In Asynchronous TDM, the sampling rate is different for each of the signals and the
clock signal is also not in common. If the allotted device, for a time-slot, transmits
nothing and sits idle, then that slot is allotted to another device, unlike synchronous.
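
As a rough illustration, the following Python sketch interleaves several made-up input streams into fixed frames the way synchronous TDM does, inserting a filler symbol when a line has nothing to send (exactly the kind of idle slot an asynchronous multiplexer would hand to another device).

# Synchronous TDM sketch: each of the n inputs gets one slot per frame,
# whether or not it has data to send (idle slots carry a filler symbol).
# The input streams below are made-up examples.
def synchronous_tdm(inputs, idle="-"):
    frames = []
    longest = max(len(s) for s in inputs)
    for t in range(longest):                          # one frame per tick
        frame = [s[t] if t < len(s) else idle for s in inputs]
        frames.append("".join(frame))
    return frames

streams = ["AAAA", "BB", "CCC"]                       # three input lines
print(synchronous_tdm(streams))                       # ['ABC', 'ABC', 'A-C', 'A--']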

Network Layer
o The Network Layer is the third layer of the OSI model.
o It handles the service requests from the transport layer and further forwards the
service request to the data link layer.
o The network layer translates the logical addresses into physical addresses
o It determines the route from the source to the destination and also manages the
traffic problems such as switching, routing and controls the congestion of data
packets.
o The main role of the network layer is to move the packets from sending host to the
receiving host.

The main functions performed by the network layer are:


o Routing: When a packet reaches the router's input link, the router moves the packet to the router's output link. For example, a packet travelling from source S1 to destination S2 that arrives at router R1 must be forwarded to the next router on the path to S2.
o Logical Addressing: The data link layer implements the physical addressing and
network layer implements the logical addressing. Logical addressing is also used to
distinguish between source and destination system. The network layer adds a header
to the packet which includes the logical addresses of both the sender and the
receiver.
o Internetworking: This is the main role of the network layer that it provides the logical
connection between different types of networks.
o Fragmentation: The fragmentation is a process of breaking the packets into the
smallest individual data units that travel through different networks.

Forwarding & Routing


In Network layer, a router is used to forward the packets. Every router has a
forwarding table. A router forwards a packet by examining a packet's header field
and then using the header field value to index into the forwarding table. The value
stored in the forwarding table corresponding to the header field value indicates the
router's outgoing interface link to which the packet is to be forwarded.

For example, suppose a packet with a header field value of 0111 arrives at a router; the router indexes this header value into the forwarding table and determines that the output link interface is 2. The router then forwards the packet to interface 2. The routing
algorithm determines the values that are inserted in the forwarding table. The routing
algorithm can be centralized or decentralized.
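
A minimal sketch of that lookup is shown below. The 0111 entry mirrors the example above; the other table entries are assumptions added purely for illustration.

# Forwarding sketch: index the packet's header value into the forwarding
# table to pick the outgoing interface (0111 -> interface 2 as in the text;
# the remaining entries are made up).
forwarding_table = {
    "0100": 1,
    "0111": 2,
    "1001": 3,
}

def forward(header_value):
    interface = forwarding_table.get(header_value)
    if interface is None:
        return "drop packet (no matching entry)"
    return f"send out interface {interface}"

print(forward("0111"))   # send out interface 2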

Services Provided by the Network Layer


o Guaranteed delivery: This layer provides the service which guarantees that the
packet will arrive at its destination.
o Guaranteed delivery with bounded delay: This service guarantees that the packet will
be delivered within a specified host-to-host delay bound.
o In-Order packets: This service ensures that the packet arrives at the destination in
the order in which they are sent.
o Guaranteed max jitter: This service ensures that the amount of time taken between
two successive transmissions at the sender is equal to the time between their receipt
at the destination.
o Security services: The network layer provides security by using a session key
between the source and destination host. The network layer in the source host
encrypts the payloads of datagrams being sent to the destination host. The network
layer in the destination host would then decrypt the payload. In such a way, the
network layer maintains the data integrity and source authentication services.
Switching Modes
o The layer 2 switches are used for transmitting the data on the data link layer, and it
also performs error checking on transmitted and received frames.
o The layer 2 switches forward the packets with the help of MAC address.
o Different modes are used for forwarding the packets known as Switching modes.
o In switching mode, different parts of a frame are recognized. The frame consists of several parts such as the preamble, destination MAC address, source MAC address, user's data, and FCS.

Store-and-forward

o Store-and-forward is a technique in which the intermediate nodes store the


received frame and then check for errors before forwarding the packets to the
next node.
o The layer 2 switch waits until the entire frame has been received. On receiving the entire frame, the switch stores the frame in its buffer memory. This process is known as storing the frame.
o When the frame is stored, it is checked for errors. If any error is found, the frame is discarded; otherwise it is forwarded to the next node. This process is known as forwarding the frame.
o CRC (Cyclic Redundancy Check) technique is implemented that uses a
number of bits to check for the errors on the received frame.
o The store-and-forward technique ensures a high level of security as the
destination network will not be affected by the corrupted frames.
o Store-and-forward switches are highly reliable as it does not forward the
collided frames.
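
The Python sketch below illustrates the store-and-forward idea: buffer the whole frame, verify its checksum, and forward only error-free frames. CRC-32 from the standard library stands in for the frame check sequence computed by real switching hardware, so this is an illustration rather than an exact Ethernet implementation.

import zlib

# Store-and-forward sketch: store the whole frame, verify the CRC,
# forward only if it is error-free.
def make_frame(payload):
    fcs = zlib.crc32(payload).to_bytes(4, "big")
    return payload + fcs                       # frame = data + 4-byte FCS

def store_and_forward(frame):
    payload, fcs = frame[:-4], frame[-4:]      # the frame is fully buffered first
    if zlib.crc32(payload).to_bytes(4, "big") != fcs:
        return None                            # error found: discard the frame
    return payload                             # good frame: forward to next node

frame = make_frame(b"hello")
print(store_and_forward(frame))                # b'hello'
corrupted = bytes([frame[0] ^ 0xFF]) + frame[1:]
print(store_and_forward(corrupted))            # None (discarded)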
Difference between Connection-Oriented and Connectionless
Service
Data communication is the exchange of data between two or more computers over the same or different networks. There are two ways to establish a connection before sending data from one device to another: Connection-Oriented and Connectionless Service. Connection-oriented service involves the creation and termination of a connection for sending the data between two or more devices. In contrast, connectionless service does not require establishing any connection or termination process for transferring data over a network.

Connection-Oriented Service
A connection-oriented service is a network service that was designed and developed
after the telephone system. A connection-oriented service is used to create an end to
end connection between the sender and the receiver before transmitting the data
over the same or different networks. In connection-oriented service, packets are
transmitted to the receiver in the same order the sender has sent them. It uses a
handshake method that creates a connection between the user and sender for
transmitting the data over the network. Hence it is also known as a reliable network
service.

Suppose a sender wants to send data to a receiver. First, the sender sends a request to the receiver in the form of a SYN packet. The receiver then responds to the sender's request with a SYN-ACK packet, confirming that the receiver is ready to start communication. Now the sender can send the message or data to the receiver.

Similarly, a receiver can respond or send the data to the sender in the form of
packets. After successfully exchanging or transmitting data, a sender can terminate
the connection by sending a signal to the receiver. In this way, we can say that it is a
reliable network service.
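
The contrast can be seen directly with Python sockets: a TCP socket performs the handshake described above inside connect(), while a UDP socket simply sends datagrams with no connection at all. The endpoint used here is a placeholder chosen for illustration.

import socket

HOST, PORT = "example.com", 80               # placeholder endpoint

# Connection-oriented (TCP): connect() performs the SYN / SYN-ACK / ACK
# handshake before any data is exchanged; close() terminates the connection.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect((HOST, PORT))
tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(tcp.recv(64))                          # bytes arrive in the order sent
tcp.close()

# Connectionless (UDP): no handshake and no termination; datagrams are just sent.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"ping", (HOST, 9))               # port 9 (discard) as an example
udp.close()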
Comparison of Connection-oriented and Connectionless Service

1. Related system: Connection-oriented service is designed and developed based on the telephone system, whereas connectionless service is based on the postal system.
2. Definition: Connection-oriented service is used to create an end-to-end connection between the sender and the receiver before transmitting data over the same or a different network. Connectionless service is used to transfer data packets between sender and receiver without creating any connection.
3. Virtual path: Connection-oriented service creates a virtual path between the sender and the receiver. Connectionless service does not create any virtual connection or path between the sender and the receiver.
4. Authentication: Connection-oriented service requires authentication before transmitting the data packets to the receiver. Connectionless service does not require authentication before transferring data packets.
5. Data packet order: In connection-oriented service, all data packets are received in the same order as they were sent by the sender. In connectionless service, not all data packets are received in the same order as sent.
6. Bandwidth requirement: Connection-oriented service requires higher bandwidth to transfer the data packets. Connectionless service requires low bandwidth.
7. Data reliability: Connection-oriented service is more reliable because it guarantees the transfer of data packets from one end to the other over an established connection. Connectionless service is not reliable because it does not guarantee the transfer of data packets and establishes no connection.
8. Congestion: In connection-oriented service there is no congestion, as an end-to-end connection is established between sender and receiver before data is transmitted. In connectionless service there may be congestion, because no end-to-end connection is provided between the source and receiver for the transmission of data packets.
9. Examples: Transmission Control Protocol (TCP) is an example of a connection-oriented service. User Datagram Protocol (UDP), Internet Protocol (IP), and Internet Control Message Protocol (ICMP) are examples of connectionless services.

What is Circuit Switching?

Circuit switching is a communication method where a dedicated communication


path, or circuit, is established between two devices before data transmission
begins. The circuit remains dedicated to the communication for the duration of the
session, and no other devices can use it while the session is in progress. Circuit
switching is commonly used in voice communication and some types of data
communication.
Advantages of Circuit Switching:
 Guaranteed bandwidth: Circuit switching provides a dedicated path for
communication, ensuring that bandwidth is guaranteed for the duration
of the call.
 Low latency: Circuit switching provides low latency because the path is
predetermined, and there is no need to establish a connection for each
packet.
 Predictable performance: Circuit switching provides predictable
performance because the bandwidth is reserved, and there is no
competition for resources.
 Suitable for real-time communication: Circuit switching is suitable for
real-time communication, such as voice and video, because it provides
low latency and predictable performance.
Disadvantages of Circuit Switching:
 Inefficient use of bandwidth: Circuit switching is inefficient because the
bandwidth is reserved for the entire duration of the call, even when no
data is being transmitted.
 Limited scalability: Circuit switching is limited in its scalability because
the number of circuits that can be established is finite, which can limit
the number of simultaneous calls that can be made.
 High cost: Circuit switching is expensive because it requires dedicated
resources, such as hardware and bandwidth, for the duration of the call.
What is Packet Switching?
Packet switching is a communication method where data is divided into smaller
units called packets and transmitted over the network. Each packet contains the
source and destination addresses, as well as other information needed for routing.
The packets may take different paths to reach their destination, and they may be
transmitted out of order or delayed due to network congestion.
Advantages of Packet Switching:
 Efficient use of bandwidth: Packet switching is efficient because
bandwidth is shared among multiple users, and resources are allocated
only when data needs to be transmitted.
Multicasting:
 Multicast is a method of group communication where the sender sends data to multiple receivers or nodes present in the network simultaneously. Multicasting is a type of one-to-many and many-to-many communication, as it allows a sender or senders to send data packets to multiple receivers at once across LANs or WANs. This process helps in minimizing the data traffic of the network, because the data can be received by multiple nodes at once.
 Multicasting is considered a special case of broadcasting, as it works similarly to broadcasting, but in multicasting the information is sent only to the targeted or specific members of the network. This task could be accomplished by transmitting individual copies to each user or node present in the network, but sending individual copies to each user is inefficient and might increase network latency.
1. Broadcast :
Broadcast is a one-to-all transfer technique and can be classified into two types: limited broadcasting and direct broadcasting. In broadcasting mode, transmission happens from one host to all the other hosts connected on the LAN. Devices such as bridges use this mode. Protocols like ARP implement it in order to learn the MAC address for the corresponding IP address of a host machine: ARP does IP-address-to-MAC-address translation, and RARP does the reverse.
Circuit Switching vs Packet Switching

1. Phases: Circuit switching has three phases: (i) connection establishment, (ii) data transfer, (iii) connection release. In packet switching, data transfer takes place directly.
2. Path knowledge: In circuit switching, each data unit knows the entire path address, which is provided by the source. In packet switching, each data unit just knows the final destination address; the intermediate path is decided by the routers.
3. Processing: In circuit switching, data is processed at the source system only. In packet switching, data is processed at all intermediate nodes including the source system.
4. Delay: The delay between data units in circuit switching is uniform. The delay between data units in packet switching is not uniform.
5. Resource reservation: Resource reservation is a feature of circuit switching because the path is fixed for data transmission. In packet switching there is no resource reservation because bandwidth is shared among users.
6. Reliability: Circuit switching is more reliable. Packet switching is less reliable.
7. Resource wastage: Wastage of resources is more in circuit switching; packet switching wastes fewer resources.
8. Store and forward: Circuit switching is not a store-and-forward technique and does not support store-and-forward transmission. Packet switching is a store-and-forward technique and supports it.
9. Transmission: In circuit switching, transmission of the data is done by the source. In packet switching, transmission is done not only by the source but also by the intermediate routers.
10. Congestion: In circuit switching, congestion can occur during the connection establishment phase, when a request is made for a channel that is already occupied. In packet switching, congestion can occur during the data transfer phase, when a large number of packets arrive in a short time.
11. Bilateral traffic: Circuit switching is not convenient for handling bilateral traffic. Packet switching is suitable for handling bilateral traffic.
12. Charging: In circuit switching, the charge depends on time and distance, not on traffic in the network. In packet switching, the charge is based on the number of bytes and connection time.
13. Recording of packets: Recording of packets is never possible in circuit switching. Recording of packets is possible in packet switching.
14. Physical path: In circuit switching, there is a physical path between the source and the destination. In packet switching, there is no dedicated physical path between the source and the destination.
15. Call setup: Call setup is required in circuit switching. No call setup is required in packet switching.
16. Route: In circuit switching, each packet follows the same route. In packet switching, packets can follow any route.
17. Layer: The circuit switching network is implemented at the physical layer. Packet switching is implemented at the data link layer and network layer.
18. Protocols: Circuit switching requires simple protocols for delivery. Packet switching requires complex protocols for delivery.
 Flexible: Packet switching is flexible and can handle a wide range of
data rates and packet sizes.
 Scalable: Packet switching is highly scalable and can handle large
amounts of traffic on a network.
 Lower cost: Packet switching is less expensive than circuit
switching because resources are shared among multiple users.
Disadvantages of Packet Switching:
 Higher latency: Packet switching has higher latency than circuit
switching because packets must be routed through multiple nodes,
which can cause delay.
 Limited QoS: Packet switching provides limited QoS guarantees,
meaning that different types of traffic may be treated equally.
 Packet loss: Packet switching can result in packet loss due to
congestion on the network or errors in transmission.
 Unsuitable for real-time communication: Packet switching is not
suitable for real-time communication, such as voice and video,
because of the potential for latency and packet loss.
Similarities:
 Both methods involve the transmission of data over a network.
 Both methods use a physical layer of the OSI model for
transmission of data.
 Both methods can be used to transmit voice, video, and data.
 Both methods can be used in the same network infrastructure.
 Both methods can be used for both wired and wireless networks.
Flooding in Computer Network:-
Flooding is a non-adaptive routing technique following this simple method:
when a data packet arrives at a router, it is sent to all the outgoing links
except the one it has arrived on.
For example, let us consider the network in the figure, having six routers that
are connected through transmission lines.
Types of Flooding
Flooding may be of three types −
 Uncontrolled flooding − Here, each router unconditionally transmits
the incoming data packets to all its neighbours.
 Controlled flooding − These use some method to control the transmission of packets to the neighbouring nodes. The two popular algorithms for controlled flooding are Sequence Number Controlled Flooding (SNCF) and Reverse Path Forwarding (RPF); a small sketch of SNCF follows this list.
 Selective flooding − Here, the routers transmit the incoming packets only along those paths which are heading approximately in the right direction, instead of along every available path.
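
Below is a small Python sketch of sequence-number controlled flooding (SNCF) on a made-up four-router topology: each router forwards an incoming packet on all links except the one it arrived on, and remembered sequence numbers stop duplicates from circulating forever.

from collections import deque

# SNCF sketch: forward on every link except the incoming one, but remember
# sequence numbers already seen so duplicates are dropped. The topology is
# a made-up example.
topology = {
    "R1": ["R2", "R3"],
    "R2": ["R1", "R4"],
    "R3": ["R1", "R4"],
    "R4": ["R2", "R3"],
}

def flood(source, seq_no):
    seen = {source: {seq_no}}                 # per-router memory of sequence numbers
    queue = deque((nbr, source) for nbr in topology[source])
    deliveries = []
    while queue:
        router, came_from = queue.popleft()
        if seq_no in seen.setdefault(router, set()):
            continue                          # duplicate: drop it
        seen[router].add(seq_no)
        deliveries.append(router)
        for nbr in topology[router]:
            if nbr != came_from:              # all outgoing links except the incoming one
                queue.append((nbr, router))
    return deliveries

print(flood("R1", seq_no=1))                  # ['R2', 'R3', 'R4']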
Advantages of Flooding
 It is very simple to set up and implement, since a router may know only its neighbours.
 It is extremely robust. Even if a large number of routers malfunction, the packets find a way to reach the destination.
 All nodes which are directly or indirectly connected are visited, so there is no chance of any node being left out. This is a main criterion for broadcast messages.
 Flooding always finds the shortest path, since every possible path is tried.
Link State Routing:-
Link state routing is the second family of routing protocols. While distance-
vector routers use a distributed algorithm to compute their routing tables,
link-state routing uses link-state routers to exchange messages that allow
each router to learn the entire network topology. Based on this learned
topology, each router is then able to compute its routing table by using the
shortest path computation.
Features of Link State Routing Protocols
 Link State Packet: A small packet that contains routing information.
 Link-State Database: A collection of information gathered from the link-
state packet.
 Shortest Path First Algorithm (Dijkstra algorithm): A calculation
performed on the database results in the shortest path
 Routing Table: A list of known paths and interfaces.

STEP 1: The set sptSet is initially empty and distances assigned to vertices
are {0, INF, INF, INF, INF, INF, INF, INF} where INF indicates infinite. Now pick
the vertex with a minimum distance value. The vertex 0 is picked and
included in sptSet. So sptSet becomes {0}. After including 0 to sptSet, update
the distance values of its adjacent vertices. Adjacent vertices of 0 are 1 and
7. The distance values of 1 and 7 are updated as 4 and 8.
The following subgraph shows vertices and their distance values. Vertices included in the SPT are shown in green.

Shortest Path Calculation – Step 2


STEP 2: Pick the vertex with minimum distance value and not already
included in SPT (not in sptSET). The vertex 1 is picked and added to sptSet.
So sptSet now becomes {0, 1}. Update the distance values of adjacent
vertices of 1. The distance value of vertex 2 becomes 12.
Shortest Path Calculation – Step 3
STEP 3: Pick the vertex with minimum distance value and not already included in SPT (not in sptSET). Vertex 7 is picked. So sptSet now becomes {0, 1, 7}. Update the distance values of adjacent vertices of 7. The distance values of vertices 6 and 8 become finite (15 and 9 respectively).

Shortest Path Calculation – Step 4


STEP 4: Pick the vertex with minimum distance value and not already
included in SPT (not in sptSET). Vertex 6 is picked. So sptSet now becomes
{0, 1, 7, 6}. Update the distance values of adjacent vertices of 6. The distance
value of vertex 5 and 8 are updated.

Shortest Path Calculation – Step 5


We repeat the above steps until sptSet includes all vertices of the given
graph. Finally, we get the following Shortest Path Tree (SPT).

Shortest Path Calculation – Step 6
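
The steps above can be written compactly as a standard Dijkstra implementation with a priority queue, as sketched below. The graph literal is a small made-up example rather than the exact graph from the figures.

import heapq

# Dijkstra's shortest path first algorithm, the computation a link-state
# router runs over its link-state database. The graph is a made-up example.
def dijkstra(graph, source):
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    pq = [(0, source)]                        # (distance, vertex) min-heap
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                          # stale entry: u already settled
        for v, weight in graph[u]:            # relax every edge out of u
            if d + weight < dist[v]:
                dist[v] = d + weight
                heapq.heappush(pq, (dist[v], v))
    return dist

graph = {
    0: [(1, 4), (7, 8)],
    1: [(0, 4), (2, 8), (7, 11)],
    2: [(1, 8), (3, 7)],
    3: [(2, 7)],
    5: [(6, 2)],
    6: [(7, 1), (5, 2)],
    7: [(0, 8), (1, 11), (6, 1)],
}
print(dijkstra(graph, source=0))              # shortest distance to every vertex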


Characteristics of Link State Protocol
 It requires a large amount of memory.
 Shortest path computations require many CPU cycles.
 It uses little network bandwidth and reacts quickly to topology changes.
 All items in the database must be sent to neighbors to form link-state
packets.
 All neighbors must be trusted in the topology.
 Authentication mechanisms can be used to avoid undesired adjacency
and problems.
 No split horizon techniques are possible in the link-state routing.
 OSPF is an example of a link state routing protocol.

Routing Information Protocol:-


(RIP) is a dynamic routing protocol that uses hop count as a routing metric
to find the best path between the source and the destination network. It is a
distance-vector routing protocol that has an AD value of 120 and works on
the Network layer of the OSI model. RIP uses port number 520.

Hop Count
Hop count is the number of routers occurring between the source and destination network. The path with the lowest hop count is considered the best route to reach a network and is therefore placed in the routing table. RIP prevents routing loops by limiting the number of hops allowed in a path from source to destination. The maximum hop count allowed for RIP is 15, and a hop count of 16 is considered network unreachable.
RIP v1 is known as Classful Routing Protocol because it doesn’t send
information of subnet mask in its routing update.
RIP v2 is known as Classless Routing Protocol because it sends information
of subnet mask in its routing update.

>> Use debug command to get the details :


# debug ip rip
>> Use this command to show all routes configured in router, say for router
R1 :
R1# show ip route
>> Use this command to show all protocols configured in router, say for
router R1 :
R1# show ip protocols
Configuration :

Consider the above-given topology which has 3-routers R1, R2, R3. R1 has IP
address 172.16.10.6/30 on s0/0/1, 192.168.20.1/24 on fa0/0. R2 has IP
address 172.16.10.2/30 on s0/0/0, 192.168.10.1/24 on fa0/0. R3 has IP
address 172.16.10.5/30 on s0/1, 172.16.10.1/30 on s0/0, 10.10.10.1/24 on
fa0/0.
Configure RIP for R1 :
R1(config)# router rip
R1(config-router)# network 192.168.20.0
R1(config-router)# network 172.16.10.4
R1(config-router)# version 2
R1(config-router)# no auto-summary
Note: no auto-summary command disables the auto-summarisation. If we
don’t select any auto-summary, then the subnet mask will be considered as
classful in Version 1.
Configuring RIP for R2:
R2(config)# router rip
R2(config-router)# network 192.168.10.0
R2(config-router)# network 172.16.10.0
R2(config-router)# version 2
R2(config-router)# no auto-summary
Similarly, Configure RIP for R3 :
R3(config)# router rip
R3(config-router)# network 10.10.10.0
R3(config-router)# network 172.16.10.4
R3(config-router)# network 172.16.10.0
R3(config-router)# version 2
R3(config-router)# no auto-summary

1. Distance Vector Routing Protocol :


These protocols select the best path on the basis of hop count to reach a destination network in a particular direction. A dynamic protocol like RIP is an example of a distance vector routing protocol. The hop count is the number of routers that occur between the source and the destination network. The path with the least hop count will be chosen as the best path.
Features –
 Updates of the network are exchanged periodically.
 Updates (routing information) are not broadcast but shared with neighbouring nodes only.
 Full routing tables are not sent in updates; only the distance vector is shared.
 Routers always trust routing information received from neighbour routers. This is also known as routing on rumors.
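
To illustrate the update rule, here is a minimal Python sketch of how a router might merge a neighbour's advertised distance vector into its own table (a Bellman-Ford style relaxation). The destination names and costs are hypothetical, and the 16-hop infinity follows RIP's convention; this is not a full RIP implementation.

INFINITY = 16                                 # RIP treats 16 hops as unreachable

# Merge the distance vector advertised by a neighbour into our own table.
# my_table maps dest -> (cost, next_hop); neighbour_vector maps dest -> cost.
def merge_vector(my_table, neighbour, neighbour_vector, link_cost=1):
    for dest, cost in neighbour_vector.items():
        new_cost = min(cost + link_cost, INFINITY)
        current = my_table.get(dest, (INFINITY, None))
        # adopt the route if cheaper, or refresh a route already via this neighbour
        if new_cost < current[0] or current[1] == neighbour:
            my_table[dest] = (new_cost, neighbour)
    return my_table

table = {"Net-A": (1, "direct")}
advert = {"Net-A": 3, "Net-B": 2, "Net-C": 14}   # vector received from router R2
print(merge_vector(table, "R2", advert))
# {'Net-A': (1, 'direct'), 'Net-B': (3, 'R2'), 'Net-C': (15, 'R2')}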

2. Link State Routing Protocol :


These protocols know more about Internetwork than any other distance
vector routing protocol. These are also known as SPF (Shortest Path First)
protocol. OSPF is an example of link-state routing protocol.
Features –
 Hello messages, also known as keep-alive messages, are used for neighbour discovery and recovery.
 The concept of triggered updates is used, i.e. updates are triggered only when there is a topology change.
 Only those updates requested by the neighbour router are exchanged.

Link state routing protocol maintains three tables namely:


1. Neighbor table- the table which contains information only about the neighbours of the router, i.e. those with which an adjacency has been formed.
2. Topology table- This table contains information about the whole topology, i.e. it contains both best and backup routes to particular advertised networks.
3. Routing table- This table contains all the best routes to the advertised
network.

Advantages –
 As it maintains separate tables for both the best route and the backup
routes ( whole topology) therefore it has more knowledge of the
internetwork than any other distance vector routing protocol.
 Concept of triggered updates is used therefore no more unnecessary
bandwidth consumption is seen like in distance vector routing
protocol.
 Partial updates are triggered when there is a topology change, not a
full update like distance vector routing protocol where the whole
routing table is exchanged.

3. Advanced Distance vector routing protocol :


It is also known as hybrid routing protocol which uses the concept of both
distance vector and link-state routing protocol. Enhanced Interior Gateway
Routing Protocol (EIGRP) is an example of this class of routing protocol.
EIGRP acts as a link-state routing protocol as it uses the concept of Hello
protocol for neighbor discovery and forming an adjacency. Also, partial
updates are triggered when a change occurs. EIGRP acts as a distance-vector routing protocol as it learns routes from directly connected neighbors.

Logical Address is generated by the CPU while a program is running. The logical address is a virtual address, as it does not exist physically; therefore, it is also known as a Virtual Address. This address is used by the CPU as a reference to access the physical memory location. The term Logical Address Space is used for the set of all logical addresses generated from a program's perspective.
The hardware device called Memory-Management Unit is used for mapping
logical address to its corresponding physical address.
Physical Address identifies a physical location of required data in memory. The user never directly deals with the physical address but can access it through its corresponding logical address. The user program generates the logical
address and thinks that the program is running in this logical address but the
program needs physical memory for its execution, therefore, the logical
address must be mapped to the physical address by MMU before they are
used. The term Physical Address Space is used for all physical addresses
corresponding to the logical addresses in a Logical address space.
Mapping virtual-address to physical-addresses
Differences Between Logical and Physical Address in Operating System
1. The basic difference between Logical and physical address is that Logical
address is generated by CPU in perspective of a program whereas the
physical address is a location that exists in the memory unit.
2. Logical Address Space is the set of all logical addresses generated by
CPU for a program whereas the set of all physical address mapped to
corresponding logical addresses is called Physical Address Space.
3. The logical address does not exist physically in the memory whereas
physical address is a location in the memory that can be accessed
physically.
4. Identical logical addresses are generated by compile-time and load-time address binding methods, whereas they differ from each other in the run-time address binding method.
5. The logical address is generated by the CPU while the program is running
whereas the physical address is computed by the Memory Management
Unit (MMU).
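
As a toy illustration of the MMU's role, the sketch below translates CPU-generated logical addresses into physical addresses with a simple base-and-limit (relocation) scheme. Real MMUs use paging or segmentation, and all numbers here are hypothetical.

# Base-and-limit translation sketch: physical = base + logical, with a
# bounds check. All values are hypothetical.
class SimpleMMU:
    def __init__(self, base, limit):
        self.base = base        # start of the program's region in physical memory
        self.limit = limit      # size of the logical address space

    def translate(self, logical_address):
        if not 0 <= logical_address < self.limit:
            raise MemoryError("logical address out of range")
        return self.base + logical_address

mmu = SimpleMMU(base=0x4000, limit=0x1000)
print(hex(mmu.translate(0x02A)))             # 0x402a
print(hex(mmu.translate(0x0FFF)))            # 0x4fff, last valid logical address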
Comparison Chart:

Basic: A logical address is generated by the CPU; a physical address is a location in a memory unit.
Address Space: Logical Address Space is the set of all logical addresses generated by the CPU in reference to a program; Physical Address Space is the set of all physical addresses mapped to the corresponding logical addresses.
Visibility: The user can view the logical address of a program; the user can never view the physical address of a program.
Generation: The logical address is generated by the CPU; the physical address is computed by the MMU.
Access: The user can use the logical address to access the physical address; the physical address can only be accessed indirectly, never directly.
Editable: A logical address can be changed; a physical address will not change.
Also called: The logical address is also called a virtual address; the physical address is also called a real address.

IP address:-
An IP address is the identifier that enables your device to send or receive data
packets across the internet. It holds information related to your location and
therefore making devices available for two-way communication. The internet
requires a process to distinguish between different networks, routers, and websites.
Therefore, IP addresses provide the mechanism of doing so, and it forms an
indispensable part in the working of the internet. You will notice that most IP addresses are essentially numerical; still, with the colossal growth of network users, network developers have had to add letters to some addresses as internet usage grows.

An IP address is represented by a series of numbers separated by periods (.). They are expressed as four sets of numbers; an example address might be 255.255.255.255, wherein each set can range from 0 to 255.

How do IP addresses work?


Sometimes your device doesn't connect to your network the way you expect it to be,
or you wish to troubleshoot why your network is not operating correctly. To answer
the above questions, it is vital to learn the process with which IP addresses work.

Internet Protocol or IP runs the same manner as other languages, i.e., applying the
set guidelines to communicate the information. All devices obtain, send, and pass
information with other associated devices with the help of this protocol only. By
using the same language, the computers placed anywhere can communicate with
one another.

The process of IP address works in the following way:


1. Your computer, smartphone, or any other Wi-Fi-enabled device firstly connects to a
network that is further connected to the internet. The network is responsible for
giving your device access to the internet.
2. While working from home, your device would be probably using that network
provided by your Internet Service Provider (ISP). In a professional environment, your
device uses your company network.
3. Your ISP is responsible for assigning the IP address to your device.
4. Your internet requests go through the ISP, and the ISP routes the requested data back to your device using your IP address. Since they provide you access to the internet, ISPs are responsible for allocating an IP address to your computer or respective device.
5. Your IP address is not fixed and can change if anything in its internal environment changes. For instance, turning your modem or router on or off can change your IP address. The user can also contact the ISP to change their IP address.
6. When you are out of your home or office, mainly if you travel and carry your device
with you, your computer won't be accessing your home IP address anymore. This is
because you will be accessing the different networks (your phone hotspot, Wi-Fi at a
cafe, resort, or airport, etc.) to connect the device with the internet. Therefore, your
device will be allocated a different (temporary) IP address by the ISP of the hotel or
cafe.

Types of IP addresses
There are various classifications of IP addresses, and each category further contains
some types.

Consumer IP addresses
Every individual or firm with an active internet service system pursues two types of
IP addresses, i.e., Private IP (Internet Protocol) addresses and public IP (Internet
Protocol) addresses. The terms public and private relate to the network location: a private IP address is used inside a network, whereas a public IP address is used outside a network.

1. Private IP addresses

All the devices that are linked with your internet network are allocated a private IP
address. It holds computers, desktops, laptops, smartphones, tablets, or even Wi-Fi-
enabled gadgets such as speakers, printers, or smart Televisions. With the
expansion of IoT (internet of things), the demand for private IP addresses at
individual homes is also seemingly growing. However, the router requires a method
to identify these things distinctly. Therefore, your router produces unique private IP
addresses that act as an identifier for every device using your internet network. Thus,
differentiating them from one another on the network.

2. Public IP addresses

A public IP address, or primary address, represents the whole network of devices associated with it. Every device within your network has its own private IP address, while they all share the one primary (public) address. Your ISP provides your public IP address to your router. Typically, ISPs hold a bulk stock of IP addresses that they dispense to their clients. Your public IP address is used by every device outside your internet network to identify your network.

Public IP addresses are further classified into two categories- dynamic and static.

o Dynamic IP addresses
As the name suggests, dynamic IP addresses change automatically and frequently. With this type of IP address, ISPs purchase a bulk stock of IP addresses and allocate them in some order to their customers. Periodically, they re-allocate the IP addresses and place the used ones back into the IP address pool so they can be used later for another client. The rationale for this method is the cost savings it generates for the ISP.
o Static IP addresses
In comparison to dynamic IP addresses, static addresses are constant in nature. The network assigns the IP address to the device only once, and it remains consistent. Though most firms or individuals do not need a static IP address, it is essential for an organization that wants to host its own network server, as it keeps the websites and email addresses linked to it on a constant IP address.

Types of website IP addresses


The following classification is segregated into the two types of website IP addresses
i.e., shared and dedicated.

1. Shared IP addresses

Many startups or individual website makers or various SME websites who don't want
to invest initially in dedicated IP addresses can opt for shared hosting plans. Various
web hosting providers are there in the market providing shared hosting services
where two or more websites are hosted on the same server. Shared hosting is only
feasible for websites that receive average traffic, the volumes are manageable, and
the websites themselves are confined in terms of the webpages, etc.

2. Dedicated IP addresses

Web hosting providers also provide the option to acquire a dedicated IP address.
Undoubtedly dedicated IP addresses are more secure, and they permit the users to
run their File Transfer Protocol (FTP) server. Therefore, it is easier to share and
transfer data with many people within a business, and it also provides the option of
anonymous FTP sharing. Another advantage of a dedicated IP address is that the user can easily access the website using the IP address rather than typing the full domain name.

Classless Inter Domain Routing (CIDR)

Classless Inter-Domain Routing (CIDR) is a method of IP address allocation and IP


routing that allows for more efficient use of IP addresses. CIDR is based on the idea
that IP addresses can be allocated and routed based on their network prefix rather
than their class, which was the traditional way of IP address allocation.

CIDR addresses are represented using a slash notation, which specifies the number
of bits in the network prefix. For example, an IP address of 192.168.1.0 with a prefix
length of 24 would be represented as 192.168.1.0/24. This notation indicates that
the first 24 bits of the IP address are the network prefix and the remaining 8 bits are
the host identifier.
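
For instance, Python's ipaddress module can be used to inspect the same 192.168.1.0/24 example and confirm how the prefix splits the network bits from the host bits.

import ipaddress

# /24 means a 24-bit network prefix and an 8-bit host identifier.
net = ipaddress.ip_network("192.168.1.0/24")
print(net.netmask)            # 255.255.255.0  (twenty-four one-bits)
print(net.num_addresses)      # 256, i.e. 2^8 host-identifier values
print(net.network_address)    # 192.168.1.0
print(net.broadcast_address)  # 192.168.1.255
print(ipaddress.ip_address("192.168.1.42") in net)   # True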

CIDR has several advantages over the traditional class-based


addressing system, including:

1. Efficient use of IP addresses: CIDR allows for more efficient use of IP addresses by allowing the allocation of IP addresses based on their network prefix rather than their class.
2. Flexibility: CIDR allows for more flexible IP address allocation, as it allows for the allocation of arbitrary-sized blocks of IP addresses.
3. Better routing: CIDR allows for better routing of IP traffic, as it allows routers to aggregate IP addresses based on their network prefix, reducing the size of routing tables.
4. Reduced administrative overhead: CIDR reduces administrative overhead by allowing for the allocation and routing of IP addresses in a more efficient and flexible way.
In summary, CIDR is a method of IP address allocation and routing that allows for more efficient use of IP addresses and better routing of IP traffic. It has several advantages over the traditional class-based addressing system, including greater flexibility, better routing, and reduced administrative overhead.

Advantages:
1. Efficient use of IP addresses: CIDR allows for more efficient use of IP
addresses, which is important as the pool of available IPv4 addresses
continues to shrink.
2. Flexibility: CIDR allows for more flexible allocation of IP addresses, which can
be important for organizations with complex network requirements.
3. Better routing: CIDR allows for more efficient routing of IP traffic, which can lead to better network performance.
4. Reduced administrative overhead: CIDR reduces administrative overhead by allowing for easier management of IP addresses and routing.

Disadvantages:

1. Complexity: CIDR can be more complex to implement and manage than


traditional class-based addressing, which can require additional training and
expertise.
2. Compatibility issues: Some older network devices may not be compatible with
CIDR, which can make it difficult to transition to a CIDR-based network.
3. Security concerns: CIDR can make it more difficult to implement security
measures such as firewall rules and access control lists, which can increase
security risks.
Overall, CIDR is a useful and efficient method of IP address allocation and routing, but it may not be suitable for all organizations or networks. It is important to weigh the advantages and disadvantages of CIDR and consider the specific needs and requirements of your network before implementing CIDR.

IP stands for Internet Protocol and v4 stands for Version Four (IPv4). IPv4 was the first version deployed for production in the ARPANET in 1983.
IPv4 addresses are 32-bit integers which are expressed in dotted decimal notation.
Example: 192.0.2.126 is an IPv4 address.
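
A quick way to see the 32-bit integer behind the dotted decimal form is the Python snippet below, using the standard ipaddress module with the same example address.

import ipaddress

# The dotted decimal form is just a human-readable rendering of 32 bits,
# eight bits per group.
addr = ipaddress.IPv4Address("192.0.2.126")
print(int(addr))                           # 3221226110, the underlying 32-bit integer
print(bin(int(addr)))                      # 0b11000000000000000000001001111110
print(ipaddress.IPv4Address(3221226110))   # back to 192.0.2.126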

Parts of IPv4

 Network part:
The network part indicates the distinctive number that is assigned to the network. The network part also identifies the class of the network that is assigned.
 Host part:
The host part uniquely identifies the machine on your network. This part of the IPv4 address is assigned to every host.
For each host on the network, the network part is the same; however, the host part must vary.
 Subnet number:
This is the optional part of IPv4. Local networks that have large numbers of hosts are divided into subnets, and subnet numbers are assigned to them.
Characteristics of IPv4

 IPv4 is a 32-bit IP address.
 IPv4 is a numeric address, and its bits are separated by dots.
 The number of header fields is twelve and the minimum header length is twenty bytes.
 It has unicast, broadcast, and multicast types of addresses.
 IPv4 supports VLSM (Variable Length Subnet Mask).
 IPv4 uses the Address Resolution Protocol (ARP) to map to the MAC address.
 RIP is a routing protocol supported by the routed daemon.
 Networks must be configured either manually or with DHCP.
 Packet fragmentation is permitted at routers and at the sending host.

Advantages of IPv4

 IPv4 security permits encryption to maintain privacy and security.
 IPv4 network allocation is significant and presently has more than 85000 practical routers.
 It becomes easy to connect multiple devices across a large network without NAT.
 It is a model of communication that provides quality of service as well as economical data transfer.
 IPv4 addresses are redefined and permit flawless encoding.
 Routing is more scalable and economical because addressing is aggregated more effectively.
 Data communication across the network becomes more specific in multicast organizations.

Disadvantages of IPv4

 It limits internet growth for existing users and hinders the use of the internet for new users.
 Internet routing is inefficient in IPv4.
 IPv4 has high system management costs, and it is labour-intensive, complex, slow and error-prone.
 Security features are optional.
 It is difficult to add support for future needs, because adding it imposes extremely high overhead, which hinders the ability to connect everything over IP.

Limitations of IPv4

 IP relies on network layer addresses to identify end-points on a network, and each network has a unique IP address.
 The world's supply of unique IP addresses is dwindling, and they might eventually run out.
 If there are multiple hosts, we need IP addresses of the next class.
 Complex host and routing configuration, non-hierarchical addressing, difficulty in renumbering addresses, large routing tables, non-trivial implementations in providing security, QoS (Quality of Service), mobility, multi-homing, multicasting, etc. are the big limitations of IPv4; that is why IPv6 came into the picture.
IPv6

Internet Protocol version 6 (IPv6) is the replacement for version 4 (IPv4). The phenomenal development of the Internet has begun to push IPv4 to its limits. IPv6 provides a larger address space, and it contains a simpler header as compared to IPv4.

Features of IPV6
There are various features of IPV6, which are as follows−

 Larger address space: An IPv6 address is 128 bits long, compared with the 32-bit address of IPv4. It allows for up to 3.4 x 10^38 unique IP addresses, whereas IPv4 allows up to about 4.3 x 10^9 unique addresses.
 Better Header format: New header form has been designed to reduce overhead. It is
done by moving both non-essential fields and optional fields to extension field header
that are placed after the IPV6 header.
 More Functionality: It is designed with more options like priority of packet for control
of congestion, Authentication etc.
 Allowance for Extension: It is designed to allow the extension of the protocol if
required by new technologies.
 Support of resource allocation: In IPV6, the type of service fields has been removed,
but a new mechanism has been added to support traffic control or flow labels like
real-time audio and video.
IPV6 Packet Format
An IPv6 packet consists of a compulsory base header followed by the payload. The payload includes two parts: (1) optional extension headers and (2) data from the upper layer.

The base header occupies 40 bytes, while the extension headers and data from the upper layer usually contain up to 65,535 bytes of data.
Base Header has 8 fields which are as follows−

 Version: It is a 4-bit field that defines the version number of IP. The value is 6 for IPv6 and 4 for IPv4.
 Priority: It is a 4-bit field that defines the priority of the packet with respect to traffic congestion, i.e., whether the packet may be discarded when congestion occurs.
 Flow Label: It is a 3-byte (24-bit) field designed to provide special handling for a particular flow of data, so that packets belonging to an already established flow can be processed faster.
 Payload Length: It is a 2-byte field that defines the total length of the IP datagram excluding the base header.
 Next Header: It is an 8-bit field that defines the header that follows the base header in the datagram (in IPv4 this field is called Protocol). Some of the values of this field are:
Code   Next Header
0      Hop-by-Hop Option
2      ICMP
6      TCP
17     UDP
43     Source Routing
44     Fragmentation
50     Encrypted Security Payload
51     Authentication
59     Null (no next header)
60     Destination Option

 Source Address: This 16-byte field specifies the original source of the datagram.
 Destination Address: This 16-byte Internet address usually identifies the final destination of the datagram.
 Priority: IPv6 divides traffic into two broad categories, which are as follows:
Congestion-Controlled Traffic: Traffic whose source adapts itself to a slowdown when there is congestion, as TCP does. Congestion-controlled data is assigned priority 0 to 7, where 0 is the lowest and 7 the highest priority.

Priority   Meaning
0          No specific traffic
1          Background data
2          Unattended data traffic
3          Reserved
4          Attended bulk data traffic
5          Reserved
6          Interactive traffic
7          Control traffic

Non-Congestion-Controlled Traffic: In this type of traffic, packets are expected to arrive at the receiver with minimum delay. Discarding packets is not desirable, because the source does not adapt itself to congestion and retransmission is usually impossible.
An example of non-congestion-controlled traffic is real-time audio and video. Such packets are assigned priority 8 to 15: priority 8 means data with the most redundancy and priority 15 means data with the least redundancy.
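To make the field layout above concrete, here is a minimal Python sketch that unpacks a 40-byte base header laid out as described (4-bit version, 4-bit priority, 24-bit flow label, 16-bit payload length, 8-bit next header, 8-bit hop limit, and two 16-byte addresses). The position of the hop-limit field is assumed from the standard layout, since it is not listed above.

import struct

def parse_ipv6_base_header(raw: bytes) -> dict:
    """Unpack the 40-byte IPv6 base header described above."""
    if len(raw) < 40:
        raise ValueError("IPv6 base header is always 40 bytes")
    first_word, payload_len, next_header, hop_limit = struct.unpack("!IHBB", raw[:8])
    return {
        "version": first_word >> 28,                  # 4 bits, should be 6
        "priority": (first_word >> 24) & 0x0F,        # 4 bits
        "flow_label": first_word & 0x00FFFFFF,        # 24 bits
        "payload_length": payload_len,                # bytes after the base header
        "next_header": next_header,                   # e.g. 6 = TCP, 17 = UDP
        "hop_limit": hop_limit,
        "source": raw[8:24].hex(),                    # 16-byte source address
        "destination": raw[24:40].hex(),              # 16-byte destination address
    }

# A minimal hand-built header: version 6, next header 59 (null), hop limit 64.
raw = struct.pack("!IHBB", 6 << 28, 0, 59, 64) + bytes(32)
print(parse_ipv6_base_header(raw)["version"], parse_ipv6_base_header(raw)["next_header"])   # 6 59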

Address mapping

Address mapping is the process of determining the logical address of a device when its physical address is known, and determining the physical address when its logical address is known. Address mapping is required when a packet is routed from a source host to a destination host in the same or a different network.

The physical address is unique on the local network but not in a universal network such as the Internet, whereas the logical address is universally unique. So why do we require both addresses, rather than using a single type of address to identify a host or router in the network?

The physical address and the logical address both are different identifiers
and we require both of them as the physical address defines the physical
connection between source host to destination host whereas the logical
address defines routable connection from source host to the destination
host and from network to network.

So as both physical and logical addresses are essential to route a packet


from the source host to the destination host, we require an address
mapping mechanism to relate a physical address of the device to its logical
address and vice versa.
Types of Address Mapping

There are two kinds of address mapping, static address mapping, and dynamic
address mapping. In the section ahead we will discuss both of them in detail.

1. Static Mapping

In static mapping, each device connected to the network maintains a table


i.e., routing table which has a list of all the routes from that device to a particular
network or hosts. It maintains the network/next hop association i.e., the logical
address of next-hop and its corresponding physical address.

A source host knows the logical address of the host to which it wants to deliver
the packet so it can refer to the routing table to recognize the physical address of
the destined host. But the static address mapping has some constraint over the
physical address of the device as it changes in certain conditions such as:

1. If a device changes its Network Interface Card (NIC), the physical address of
the device also changes. As the physical address is hardcoded on the NIC
card at the time of its manufacturing.
2. Some local networks such as LocalTalk compel the connected device to
change its physical address each time the device turns on.
3. Nowadays there are some third-party apps through which users can change
their physical address.

Even the logical address of the device also changes under some circumstances
such as:

1. If the host switches the network, this changes the logical address of the host.
2. If you reset your modem, it also results in a change of logical address.
3. If the host connects to the network via a VPN (Virtual Private Network), its apparent logical address changes as well.

In such a scenario, if we use static address mapping, more time will be wasted in
updating the routing table at each connected device and this will generate
overhead on the connected devices which will also affect the performance of the
network. A solution to this is dynamic mapping.

2. Dynamic Mapping

In dynamic mapping usually, the source host knows the logical address of the
destination host but to deliver the packet to the destined host its physical
address is required as at the physical level the device is identified by its physical
address.

So, the source host uses the protocols to identify the physical address of the
destination host. Two protocols are designed for dynamic mapping ARP
(Address Resolution Protocol) and RARP (Reverse Address Resolution Protocol).
Internet Control Message Protocol (ICMP)

Internet Control Message Protocol (ICMP) is a network layer protocol used to


diagnose communication errors by performing an error-control mechanism. Since IP does not have an inbuilt mechanism for sending error and control messages, it depends on the Internet Control Message Protocol (ICMP) to provide error control.

ICMP is used for reporting errors and management queries. It is a supporting


protocol and is used by network devices like routers for sending error messages
and operations information. For example, the requested service is not available or
a host or router could not be reached.

Uses of ICMP

ICMP is used for error reporting: if two devices communicate over the Internet and some error occurs, the router sends an ICMP error message to the source informing it about the error. For example, whenever a device sends a message that is too large for the receiver, the receiver drops the message and replies with an ICMP message to the source.

Another important use of the ICMP protocol is to perform network diagnosis by making use of the traceroute and ping utilities. We will discuss them one by one.

Traceroute: The traceroute utility is used to discover the route between two devices connected over the Internet. It records the journey from one router to the next, and a traceroute is typically performed to check for network issues before data transfer.

Ping: Ping is a simpler utility based on the ICMP echo-request message. It is used to measure the time taken by data to reach the destination and return to the source; the replies are known as echo-reply messages.

How Does ICMP Work?

ICMP is a primary and important protocol of the IP suite, but it is not associated with any transport layer protocol (TCP or UDP): it is connectionless, so it does not need to establish a connection with the destination device before sending a message.

The working of ICMP contrasts with TCP: TCP is a connection-oriented protocol, whereas ICMP is connectionless. With TCP, before any message is sent, a connection must be established and both devices must be ready through a TCP handshake; ICMP requires no such setup.

ICMP packets are transmitted in the form of datagrams that contain an IP header
with ICMP data. ICMP datagram is similar to a packet, which is an independent
data entity.

ICMP Packet Format


ICMP header comes after IPv4 and IPv6 packet header.

ICMPv4 Packet Format

In the ICMP packet format, the first 32 bits of the packet contain three fields:

Type (8-bit): The initial 8 bits of the packet give the message type. They provide a brief description of the message so that the receiving device knows what kind of message it is receiving and how to respond to it. Some common message types are as follows:

 Type 0 – Echo reply


 Type 3 – Destination unreachable
 Type 5 – Redirect Message
 Type 8 – Echo Request
 Type 11 – Time Exceeded
 Type 12 – Parameter problem

Code (8-bit): Code is the next 8 bits of the ICMP packet format, this field carries
some additional information about the error message and type.

Checksum (16-bit): The last 16 bits of the first word are the checksum field in the ICMP packet header. The checksum is computed over the complete ICMP message and lets the receiver verify that the message was delivered intact.

The next 32 bits of the ICMP header form the extended header (rest of header), whose job is to point out the problem in the IP message. For example, a pointer identifies the byte location that caused the problem, and the receiving device looks there to locate the error.

The last part of the ICMP packet is the data or payload, which has a variable length. The ICMP packet is kept small enough to fit in the minimum datagram size every host must accept: 576 bytes in IPv4 and 1280 bytes in IPv6.
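The format described above can be sketched in a few lines of Python. The helper below builds an ICMP Echo Request (type 8, code 0) and computes the standard 16-bit one's-complement checksum; actually sending it would require a raw socket (and usually administrator privileges), which is omitted here.

import struct

def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement checksum used by ICMP (and by IP, UDP and TCP)."""
    if len(data) % 2:                                # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)     # fold the carry back in
    return ~total & 0xFFFF

def build_echo_request(identifier: int, sequence: int, payload: bytes = b"ping") -> bytes:
    """Build an ICMP Echo Request: type 8, code 0, checksum, identifier, sequence."""
    header = struct.pack("!BBHHH", 8, 0, 0, identifier, sequence)
    csum = internet_checksum(header + payload)
    header = struct.pack("!BBHHH", 8, 0, csum, identifier, sequence)
    return header + payload

print(build_echo_request(identifier=1, sequence=1).hex())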

ICMP in DDoS Attacks


In Distributed DOS (DDoS) attacks, attackers provide so much extra traffic to the
target, so that it cannot provide service to users. There are so many ways through
which an attacker executes these attacks, which are described below.

Ping of Death Attack

Whenever an attacker sends a ping whose size is greater than the maximum allowable size, the oversized packet is broken into smaller fragments. When the receiver reassembles them, the total size exceeds the limit, which causes a buffer overflow and makes the machine freeze. This is simply called a Ping of Death attack. Newer devices have protection from this attack, but older devices did not.

ICMP Flood Attack

Whenever an attacker sends so many pings that the targeted device cannot handle the echo requests, the attack is called an ICMP flood attack, also known as a ping flood attack. It exhausts the target computer's resources and causes a denial of service for the target computer.

Network Layer Protocols

TCP/IP supports the following protocols:

ARP

o ARP stands for Address Resolution Protocol.


o It is used to associate an IP address with the MAC address.
o Each device on the network is recognized by the MAC address imprinted on
the NIC. Therefore, we can say that devices need the MAC address for
communication on a local area network. MAC address can be changed easily.
For example, if the NIC on a particular machine fails, the MAC address
changes but IP address does not change. ARP is used to find the MAC
address of the node when an internet address is known.

How ARP works

If a host wants to know the physical address of another host on its network, it sends an ARP query packet that includes the IP address of the target and broadcasts it over the network. Every host on the network receives and processes the ARP packet, but only the intended recipient recognizes its own IP address and sends back its physical address. The host holding the datagram adds this physical address to its cache memory and to the datagram header, and then sends the datagram on its way.
Steps taken by ARP protocol

If a device wants to communicate with another device, the following steps are
taken by the device:

o The device will first look at its internal list, called the ARP cache, to check whether the IP address already has a matching MAC address. The ARP cache can be checked at the command prompt by using the command arp -a.

o If the ARP cache has no matching entry, the device broadcasts a message to the entire network asking each device for a matching MAC address.
o The device that has the matching IP address will then respond back to the
sender with its MAC address
o Once the MAC address is received by the device, then the communication can
take place between two devices.
o If the device receives the MAC address, then the MAC address gets stored in
the ARP cache. We can check the ARP cache in command prompt by using a
command arp -a.
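As an illustration of what such a broadcast query carries, here is a minimal Python sketch that packs the 28-byte ARP request payload for Ethernet/IPv4 (hardware type 1, protocol type 0x0800, opcode 1). The MAC and IP values in the usage lines are made-up placeholders.

import struct

def build_arp_request(sender_mac: bytes, sender_ip: bytes, target_ip: bytes) -> bytes:
    """Build the 28-byte ARP request payload for Ethernet/IPv4.

    sender_mac is 6 bytes; sender_ip and target_ip are 4 bytes each.
    The target MAC is all zeros, because that is exactly what we are asking for.
    """
    htype, ptype = 1, 0x0800              # hardware = Ethernet, protocol = IPv4
    hlen, plen = 6, 4                      # MAC length, IPv4 address length
    opcode = 1                             # 1 = request, 2 = reply
    target_mac = b"\x00" * 6
    return struct.pack("!HHBBH6s4s6s4s",
                       htype, ptype, hlen, plen, opcode,
                       sender_mac, sender_ip,
                       target_mac, target_ip)

# Placeholder values, for illustration only.
frame_payload = build_arp_request(bytes.fromhex("aabbccddeeff"),
                                  bytes([192, 168, 1, 10]),
                                  bytes([192, 168, 1, 1]))
print(len(frame_payload))                  # 28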
There are two types of ARP entries:

o Dynamic entry: It is an entry which is created automatically when the sender


broadcast its message to the entire network. Dynamic entries are not
permanent, and they are removed periodically.
o Static entry: It is an entry where someone manually enters the IP to MAC
address association by using the ARP command utility.

RARP

o RARP stands for Reverse Address Resolution Protocol.


o If a host wants to know its own IP address, it broadcasts a RARP query packet that contains its physical address to the entire network. A RARP server on the network recognizes the RARP packet and responds with the host's IP address.
o The protocol which is used to obtain the IP address from a server is known
as Reverse Address Resolution Protocol.
o The message format of the RARP protocol is similar to the ARP protocol.
o Like ARP frame, RARP frame is sent from one machine to another
encapsulated in the data portion of a frame.

ICMP
o ICMP stands for Internet Control Message Protocol.
o The ICMP is a network layer protocol used by hosts and routers to send the
notifications of IP datagram problems back to the sender.
o ICMP uses echo test/reply to check whether the destination is reachable and
responding.
o ICMP handles both control and error messages, but its main function is to
report the error but not to correct them.
o An IP datagram contains the addresses of both source and destination, but it
does not know the address of the previous router through which it has been
passed. Due to this reason, ICMP can only send the messages to the source,
but not to the immediate routers.
o ICMP protocol communicates the error messages to the sender. ICMP
messages cause the errors to be returned back to the user processes.
o ICMP messages are transmitted within IP datagram.

The Format of an ICMP message

o The first field specifies the type of the message.


o The second field specifies the reason for a particular message type.
o The checksum field covers the entire ICMP message.

Error Reporting

ICMP protocol reports the error messages to the sender.

Five types of errors are handled by the ICMP protocol:

o Destination unreachable
o Source Quench
o Time Exceeded
o Parameter problems
o Redirection

o Destination unreachable: The message of "Destination Unreachable" is sent


from receiver to the sender when destination cannot be reached, or packet is
discarded when the destination is not reachable.
o Source Quench: The purpose of the source-quench message is congestion control. The message is sent from the congested router to the source host to request a reduction in the transmission rate. ICMP takes the IP address from the discarded packet and sends a source-quench message to inform the source host to reduce its transmission rate, so that the router can recover from congestion.
o Time Exceeded: The Time Exceeded message is tied to the Time-To-Live (TTL) field, a parameter that defines how long (how many hops) a packet may live before it is discarded.

There are two ways when Time Exceeded message can be generated:

Sometimes a packet loops because of a bad routing implementation, which causes network congestion. As the packet loops, its TTL value keeps decrementing, and when it reaches zero the router discards the datagram. When the datagram is discarded, the router sends a Time Exceeded message to the source host.

When the destination host does not receive all the fragments of a datagram within a certain time limit, the fragments already received are discarded, and the destination host sends a Time Exceeded message to the source host.

o Parameter problems: When a router or host discovers any missing value in


the IP datagram, the router discards the datagram, and the "parameter
problem" message is sent back to the source host.
o Redirection: A Redirection message is generated when a host has only a small routing table. Because the host has a limited number of entries, it may send a datagram to the wrong router. The router that receives such a datagram forwards it to the correct router and also sends a Redirection message to the host so that the host can update its routing table.

IGMP

o IGMP stands for Internet Group Message Protocol.


o The IP protocol supports two types of communication:
o Unicasting: It is a communication between one sender and one
receiver. Therefore, we can say that it is one-to-one communication.
o Multicasting: Sometimes the sender wants to send the same message
to a large number of receivers simultaneously. This process is known
as multicasting which has one-to-many communication.
o The IGMP protocol is used by the hosts and router to support multicasting.
o The IGMP protocol is used by the hosts and router to identify the hosts in a
LAN that are the members of a group.
o IGMP is a part of the IP layer, and IGMP has a fixed-size message.
o The IGMP message is encapsulated within an IP datagram.

The Format of IGMP message

Where,

Type: It determines the type of IGMP message. There are three types of IGMP
message: Membership Query, Membership Report and Leave Report.

Maximum Response Time: This field is used only by the Membership Query message. It determines the maximum time within which a host may send its Membership Report message in response to the Membership Query message.

Checksum: It is used for error detection and is calculated over the IGMP message encapsulated in the IP datagram.

Group Address: The behavior of this field depends on the type of the message
sent.

o For Membership Query, the group address is set to zero for General Query
and set to multicast group address for a specific query.
o For Membership Report, the group address is set to the multicast group
address.
o For Leave Group, it is set to the multicast group address.
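Since the IGMP message has a fixed size, it can be packed directly. The sketch below assembles the four fields described above into an 8-byte, IGMPv2-style message; the numeric type values are the conventional IGMPv2 codes, assumed here for illustration, and the checksum is the same one's-complement checksum shown in the ICMP sketch earlier.

import socket
import struct

def igmp_checksum(data: bytes) -> int:
    """Same one's-complement checksum used by ICMP, computed over the 8-byte message."""
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_igmp_message(msg_type: int, max_resp_time: int, group: str) -> bytes:
    """Pack the fixed-size IGMP message: type, max response time, checksum, group address.

    Conventional IGMPv2 type values (assumed here for illustration):
    0x11 Membership Query, 0x16 Membership Report, 0x17 Leave Group.
    """
    group_addr = socket.inet_aton(group)                       # 4-byte group address
    body = struct.pack("!BBH4s", msg_type, max_resp_time, 0, group_addr)
    csum = igmp_checksum(body)
    return struct.pack("!BBH4s", msg_type, max_resp_time, csum, group_addr)

# A Membership Report for an illustrative multicast group.
print(build_igmp_message(0x16, 0, "224.0.0.251").hex())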

IGMP Messages
o Membership Query message
o This message is sent by a router to all hosts on a local area network to
determine the set of all the multicast groups that have been joined by
the host.
o It also determines whether a specific multicast group has been joined by the hosts on an attached interface.
o The group address in the query is zero since the router expects one
response from a host for every group that contains one or more
members on that host.
o Membership Report message
o The host responds to the membership query message with a
membership report message.
o Membership report messages can also be generated by the host when
a host wants to join the multicast group without waiting for a
membership query message from the router.
o Membership report messages are received by a router as well as all the
hosts on an attached interface.
o Each membership report message includes the multicast address of a
single group that the host wants to join.
o IGMP protocol does not care which host has joined the group or how
many hosts are present in a single group. It only cares whether one or
more attached hosts belong to a single multicast group.
o The Membership Query message sent by a router also includes a "Maximum Response Time". After receiving a Membership Query message and before sending its Membership Report message, the host waits for a random amount of time between 0 and the maximum response time. If the host observes that some other attached host has already sent the "Membership Report message", it discards its own "Membership Report message", as it knows that the attached router already knows that one or more hosts have joined that multicast group. This process is known as feedback suppression. It provides a performance optimization by avoiding the unnecessary transmission of "Membership Report messages".

Dynamic Host Configuration Protocol (DHCP)


DHCP stands for Dynamic Host Configuration Protocol. It is a critical feature that lets the users of an enterprise network communicate. DHCP helps enterprises to smoothly manage the allocation of IP addresses to end-user client devices such as desktops, laptops, cellphones, etc. DHCP is an application layer protocol that is used to provide configuration parameters such as the IP address, subnet mask, default gateway (router) address, and DNS server address.
Why Use DHCP?
DHCP helps in managing the entire process automatically and centrally.
DHCP helps in maintaining a unique IP Address for a host using the server.
DHCP servers maintain information on TCP/IP configuration and provide
configuration of address to DHCP-enabled clients in the form of a lease offer.

Components of DHCP

The main components of DHCP include:

 DHCP Server: DHCP Server is basically a server that holds IP Addresses and
other information related to configuration.
 DHCP Client: It is basically a device that receives configuration information
from the server. It can be a mobile, laptop, computer, or any other electronic
device that requires a connection.
 DHCP Relay: DHCP relays basically work as a communication channel
between DHCP Client and Server.
 IP Address Pool: It is the pool or container of IP Addresses possessed by the
DHCP Server. It has a range of addresses that can be allocated to devices.
 Subnets: Subnets are smaller portions of the IP network partitioned to keep
networks under control.
 Lease: It is simply the length of time for which the information received from the server is valid; when the lease expires, the client must renew it.
 DNS Servers: DHCP servers can also provide DNS (Domain Name System)
server information to DHCP clients, allowing them to resolve domain names
to IP addresses.
 Default Gateway: DHCP servers can also provide information about the
default gateway, which is the device that packets are sent to when the
destination is outside the local network.
 Options: DHCP servers can provide additional configuration options to clients,
such as the subnet mask, domain name, and time server information.
 Renewal: DHCP clients can request to renew their lease before it expires to
ensure that they continue to have a valid IP address and configuration
information.
 Failover: DHCP servers can be configured for failover, where two servers work
together to provide redundancy and ensure that clients can always obtain an
IP address and configuration information, even if one server goes down.
 Dynamic Updates: DHCP servers can also be configured to dynamically
update DNS records with the IP address of DHCP clients, allowing for easier
management of network resources.
 Audit Logging: DHCP servers can keep audit logs of all DHCP transactions,
providing administrators with visibility into which devices are using which IP
addresses and when leases are being assigned or renewed.

Working of DHCP

The working of DHCP is as follows:


DHCP works at the application layer of the TCP/IP protocol suite. The main task of DHCP is to dynamically assign IP addresses to clients and to allocate TCP/IP configuration information to them.

The DHCP port number for the server is 67 and for the client is 68. It is a client-server protocol that uses UDP services. An IP address is assigned from a pool of addresses. In DHCP, the client and the server mainly exchange 4 DHCP messages (Discover, Offer, Request, Acknowledgement) in order to make a connection, also called the DORA process, although there are 8 DHCP message types defined in total.
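To give a feel for the first step of the DORA exchange, the sketch below builds a minimal DHCPDISCOVER message in the standard BOOTP layout (236-byte fixed part, magic cookie, then option 53 = message type). Actually transmitting it would mean broadcasting it from UDP port 68 to port 67, which is not shown; the MAC address in the usage line is a placeholder.

import os
import struct

def build_dhcp_discover(client_mac: bytes) -> bytes:
    """Build a minimal DHCPDISCOVER message (the D in DORA).

    client_mac is the client's 6-byte hardware address. The message
    would be broadcast from UDP port 68 to UDP port 67 (not shown).
    """
    xid = os.urandom(4)                           # random transaction id
    fixed = struct.pack("!BBBB4sHH4s4s4s4s16s64s128s",
                        1, 1, 6, 0,               # op=BOOTREQUEST, htype=Ethernet, hlen=6, hops=0
                        xid, 0, 0x8000,           # secs=0, flags: request a broadcast reply
                        b"\x00" * 4,              # ciaddr: client has no address yet
                        b"\x00" * 4,              # yiaddr
                        b"\x00" * 4,              # siaddr
                        b"\x00" * 4,              # giaddr
                        client_mac.ljust(16, b"\x00"),
                        b"\x00" * 64,             # sname (unused)
                        b"\x00" * 128)            # file (unused)
    magic_cookie = b"\x63\x82\x53\x63"
    options = bytes([53, 1, 1, 255])              # option 53 (message type) = 1 DISCOVER; 255 = end
    return fixed + magic_cookie + options

print(len(build_dhcp_discover(bytes.fromhex("aabbccddeeff"))))   # 244 bytes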

Transport Layer protocols

o The transport layer is represented by two protocols: TCP and UDP.


o The IP protocol in the network layer delivers a datagram from a source host to
the destination host.
o Nowadays, operating systems support multiuser and multiprocessing environments; an executing program is called a process. When a host sends a message to another host, it really means that a source process is sending a message to a destination process. The transport layer protocols define connections to individual ports, known as protocol ports.
o An IP protocol is a host-to-host protocol used to deliver a packet from source
host to the destination host while transport layer protocols are port-to-port
protocols that work on the top of the IP protocols to deliver the packet from
the originating port to the IP services, and from IP services to the destination
port.
o Each port is defined by a positive integer address, and it is of 16 bits.

UDP

o UDP stands for User Datagram Protocol.


o UDP is a simple protocol and it provides nonsequenced transport functionality.
o UDP is a connectionless protocol.
o This type of protocol is used when reliability and security are less important
than speed and size.
o UDP is an end-to-end transport level protocol that adds transport-level
addresses, checksum error control, and length information to the data from
the upper layer.
o The packet produced by the UDP protocol is known as a user datagram.

User Datagram Format

The user datagram has an 8-byte header which is shown below:

Where,

o Source port address: It defines the address of the application process that has delivered the message. The source port address is a 16-bit field.
o Destination port address: It defines the address of the application process that will receive the message. The destination port address is a 16-bit field.
o Total length: It defines the total length of the user datagram (header plus data) in bytes. It is a 16-bit field.
o Checksum: The checksum is a 16-bit field which is used in error detection.
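The four 16-bit fields above are easy to see in code. Here is a minimal sketch, assuming the IPv4 convention that a zero checksum means "no checksum computed".

import struct

def build_udp_datagram(src_port: int, dst_port: int, data: bytes) -> bytes:
    """Pack the four 16-bit UDP header fields in front of the payload."""
    length = 8 + len(data)                                         # 8-byte header plus data
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)   # checksum 0 = not computed (IPv4)
    return header + data

print(build_udp_datagram(40000, 53, b"hello").hex())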

Disadvantages of UDP protocol

o UDP provides only the basic functions needed for end-to-end delivery of a transmission.
o It does not provide any sequencing or reordering functions and does not
specify the damaged packet when reporting an error.
o UDP can discover that an error has occurred, but it does not specify which
packet has been lost as it does not contain an ID or sequencing number of a
particular data segment.

TCP

o TCP stands for Transmission Control Protocol.


o It provides full transport layer services to applications.
o It is a connection-oriented protocol, meaning that a connection is established between both ends of the transmission. To create the connection, TCP generates a virtual circuit between sender and receiver for the duration of the transmission.

Features Of TCP protocol


o Stream data transfer: TCP transfers data as a contiguous stream of bytes. TCP groups the bytes into TCP segments and then passes them to the IP layer for transmission to the destination; TCP itself segments the data and forwards it to IP.
o Reliability: TCP assigns a sequence number to each byte transmitted and
expects a positive acknowledgement from the receiving TCP. If ACK is not
received within a timeout interval, then the data is retransmitted to the
destination.
The receiving TCP uses the sequence number to reassemble the segments if
they arrive out of order or to eliminate the duplicate segments.
o Flow Control: The receiving TCP sends an acknowledgement back to the sender indicating the number of bytes it can receive without overflowing its internal buffer. This number is conveyed in the ACK as the highest sequence number it can accept without a problem. This mechanism is also referred to as the window mechanism.
o Multiplexing: Multiplexing is the process of accepting data from different applications and forwarding it over the same network connection to applications on other computers. At the receiving end, the data is forwarded to the correct application; this process is known as demultiplexing. TCP delivers a packet to the correct application by using logical channels known as ports.
o Logical Connections: The combination of sockets, sequence numbers, and
window sizes, is called a logical connection. Each connection is identified by
the pair of sockets used by sending and receiving processes.
o Full Duplex: TCP provides Full Duplex service, i.e., the data flow in both the
directions at the same time. To achieve Full Duplex service, each TCP should
have sending and receiving buffers so that the segments can flow in both the
directions. TCP is a connection-oriented protocol. Suppose the process A
wants to send and receive the data from process B. The following steps occur:
o Establish a connection between two TCPs.
o Data is exchanged in both the directions.
o The Connection is terminated.

TCP Segment Format


Where,

o Source port address: It is used to define the address of the application


program in a source computer. It is a 16-bit field.
o Destination port address: It is used to define the address of the application
program in a destination computer. It is a 16-bit field.
o Sequence number: A stream of data is divided into two or more TCP
segments. The 32-bit sequence number field represents the position of the
data in an original data stream.
o Acknowledgement number: A 32-bit acknowledgement number field acknowledges data received from the other communicating device. If the ACK flag is set to 1, this field holds the sequence number that the receiver expects to receive next.
o Header Length (HLEN): It specifies the size of the TCP header in 32-bit words.
The minimum size of the header is 5 words, and the maximum size of the
header is 15 words. Therefore, the maximum size of the TCP header is 60
bytes, and the minimum size of the TCP header is 20 bytes.
o Reserved: It is a six-bit field which is reserved for future use.
o Control bits: Each bit of a control field functions individually and
independently. A control bit defines the use of a segment or serves as a
validity check for other fields.

There are total six types of flags in control field:

o URG: The URG field indicates that the data in a segment is urgent.
o ACK: When ACK field is set, then it validates the acknowledgement number.
o PSH: The PSH (push) field asks the receiving TCP to deliver the buffered data to the application immediately rather than waiting for more data to arrive.
o RST: The reset bit is used to reset the TCP connection when any confusion occurs in the sequence numbers.
o SYN: The SYN field is used to synchronize the sequence numbers in three
types of segments: connection request, connection confirmation ( with the
ACK bit set ), and confirmation acknowledgement.
o FIN: The FIN field is used to inform the receiving TCP module that the sender
has finished sending data. It is used in connection termination in three types
of segments: termination request, termination confirmation, and
acknowledgement of termination confirmation.
o Window Size: The window is a 16-bit field that defines the size of the
window.
o Checksum: The checksum is a 16-bit field used in error detection.
o Urgent pointer: If the URG flag is set to 1, then this 16-bit field is an offset from the sequence number indicating the last urgent data byte.
o Options and padding: It defines the optional fields that convey the
additional information to the receiver.

Differences b/w TCP & UDP

Definition: TCP establishes a virtual circuit before transmitting the data. UDP transmits the data directly to the destination computer without verifying whether the receiver is ready to receive or not.

Connection type: TCP is a connection-oriented protocol; UDP is a connectionless protocol.

Speed: TCP is slow; UDP is fast.

Reliability: TCP is a reliable protocol; UDP is an unreliable protocol.

Header size: The TCP header is 20 bytes; the UDP header is 8 bytes.

Acknowledgement: TCP waits for the acknowledgement of data and can resend lost packets; UDP neither takes an acknowledgement nor retransmits a damaged frame.

Services and Segment structure in TCP


The Transmission Control Protocol is the most common transport layer protocol.
It works together with IP and provides a reliable transport service between
processes using the network layer service provided by the IP protocol.

1. Process-to-Process Communication –
TCP provides process-to-process communication, i.e., the transfer of data between individual processes executing on end systems. This is done using port numbers or port addresses. Port numbers are 16 bits long and help identify which process is sending or receiving data on a host.

2. Stream oriented –
This means that the data is sent and received as a stream of bytes(unlike UDP
or IP that divides the bits into datagrams or packets). However, the network
layer, that provides service for the TCP, sends packets of information not
streams of bytes. Hence, TCP groups a number of bytes together into
a segment and adds a header to each of these segments and then delivers
these segments to the network layer. At the network layer, each of these
segments is encapsulated in an IP packet for transmission. The TCP header
has information that is required for control purposes which will be discussed
along with the segment structure.

3. Full-duplex service –
This means that the communication can take place in both directions at the
same time.

4. Connection-oriented service –
Unlike UDP, TCP provides a connection-oriented service. It defines 3 different
phases:
 Connection establishment
 Data transfer
 Connection termination
5. Reliability –
TCP is reliable as it uses checksum for error detection, attempts to recover
lost or corrupted packets by re-transmission, acknowledgement policy and
timers. It uses features like byte number and sequence number and
acknowledgement number so as to ensure reliability. Also, it uses congestion
control mechanisms.

6. Multiplexing –
TCP does multiplexing and de-multiplexing at the sender and receiver ends
respectively as a number of logical connections can be established between
port numbers over a physical connection.

Byte number, Sequence number and Acknowledgement number:


All the data bytes that are to be transmitted are numbered and the beginning of
this numbering is arbitrary. Sequence numbers are given to the segments so as
to reassemble the bytes at the receiver end even if they arrive in a different order.
The sequence number of a segment is the byte number of the first byte that is
being sent. The acknowledgement number is required since TCP provides full-
duplex service. The acknowledgement number is the next byte number that the
receiver expects to receive which also provides acknowledgement for receiving
the previous bytes.
Example:
In this example we see that A sends acknowledgement number1001, which
means that it has received data bytes till byte number 1000 and expects to
receive 1001 next, hence B next sends data bytes starting from 1001.
Similarly, since B has received data bytes till byte number 13001 after the
first data transfer from A to B, therefore B sends acknowledgement number
13002, the byte number that it expects to receive from A next.
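The arithmetic behind these numbers can be verified directly; the values below are hypothetical and simply mirror the "next expected byte" rule described above.

# Illustrative (hypothetical) numbers only: a segment carrying bytes
# 1001..2000 has sequence number 1001, and the receiver acknowledges
# it with 2001, the next byte it expects.
first_byte, segment_len = 1001, 1000

sequence_number = first_byte
ack_number = first_byte + segment_len            # next expected byte

print(sequence_number, ack_number)               # 1001 2001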
TCP Segment structure –
A TCP segment consists of data bytes to be sent and a header that is added
to the data by TCP as shown:

The header of a TCP segment can range from 20 to 60 bytes; up to 40 bytes are for options. If there are no options, the header is 20 bytes; otherwise it can be at most 60 bytes.
Header fields:

 Source Port Address –


A 16-bit field that holds the port address of the application that is sending
the data segment.

 Destination Port Address –


A 16-bit field that holds the port address of the application in the host that
is receiving the data segment.

 Sequence Number –
A 32-bit field that holds the sequence number, i.e, the byte number of the
first byte that is sent in that particular segment. It is used to reassemble
the message at the receiving end of the segments that are received out of
order.

 Acknowledgement Number –
A 32-bit field that holds the acknowledgement number, i.e, the byte
number that the receiver expects to receive next. It is an
acknowledgement for the previous bytes being received successfully.

 Header Length (HLEN) –


This is a 4-bit field that indicates the length of the TCP header as a number of 4-byte words. If the header is 20 bytes (the minimum length of a TCP header), this field holds 5 (because 5 x 4 = 20); at the maximum length of 60 bytes it holds 15 (because 15 x 4 = 60). Hence, the value of this field is always between 5 and 15.

 Control flags –
These are 6 1-bit control bits that control connection establishment,
connection termination, connection abortion, flow control, mode of
transfer etc. Their function is:
 URG: Urgent pointer is valid
 ACK: Acknowledgement number is valid( used in case of
cumulative acknowledgement)
 PSH: Request for push
 RST: Reset the connection
 SYN: Synchronize sequence numbers
 FIN: Terminate the connection
 Window size –
This field tells the window size of the sending TCP in bytes.

 Checksum –
This field holds the checksum for error control. It is mandatory in TCP as
opposed to UDP.

 Urgent pointer –
This field (valid only if the URG control flag is set) points to data that is urgently required and must reach the receiving process as early as possible. Its value is added to the sequence number to get the byte number of the last urgent byte.
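A small Python sketch makes the fixed 20-byte part of this layout concrete; it unpacks the fields in the order given above and converts HLEN from 4-byte words into bytes. Option handling beyond slicing the option bytes out is omitted.

import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Unpack the fixed 20-byte part of the TCP header described above."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])

    header_len = (offset_flags >> 12) * 4          # HLEN is in 4-byte words (5..15)
    flags = offset_flags & 0x3F                    # URG, ACK, PSH, RST, SYN, FIN
    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack,
        "header_len": header_len,
        "URG": bool(flags & 0x20), "ACK": bool(flags & 0x10),
        "PSH": bool(flags & 0x08), "RST": bool(flags & 0x04),
        "SYN": bool(flags & 0x02), "FIN": bool(flags & 0x01),
        "window": window, "checksum": checksum, "urgent_ptr": urgent,
        "options": segment[20:header_len],         # present only if HLEN > 5
    }

# A hand-built 20-byte header (illustrative values only): HLEN = 5, ACK and PSH set.
hdr = struct.pack("!HHIIHHHH", 12345, 80, 1001, 2001, (5 << 12) | 0x18, 65535, 0, 0)
print(parse_tcp_header(hdr)["ACK"], parse_tcp_header(hdr)["PSH"])    # True True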
Prerequisite to use the Sliding window technique
The sliding window technique applies in a specific scenario: the size of the window over which the computation is done must stay fixed throughout the traversal. Only then can the time complexity be reduced.
How to use Sliding Window Technique?
The general use of the Sliding window technique can be demonstrated as
follows:
1. Find the size of the window required
2. Compute the result for 1st window, i.e. from the start of the data structure
3. Then use a loop to slide the window by 1, and keep computing the result
window by window.
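As a generic illustration of these three steps (not specific to TCP), here is a minimal sketch that computes the maximum sum over any window of k consecutive values in O(n) time, instead of recomputing each window from scratch.

def max_window_sum(values, k):
    """Sliding-window version of "maximum sum of any k consecutive items"."""
    if k > len(values):
        raise ValueError("window larger than the data")
    window = sum(values[:k])                       # result for the 1st window
    best = window
    for i in range(k, len(values)):
        window += values[i] - values[i - k]        # slide the window by 1
        best = max(best, window)
    return best

print(max_window_sum([2, 1, 5, 1, 3, 2], 3))       # 9  (window 5 + 1 + 3)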

TCP Congestion Control

TCP congestion control is a method used by the TCP protocol to manage data
flow over a network and prevent congestion. TCP uses a congestion window and
congestion policy that avoids congestion. Previously, we assumed that only the
receiver could dictate the sender’s window size. We ignored another entity here,
the network. If the network cannot deliver the data as fast as it is created by the
sender, it must tell the sender to slow down. In other words, in addition to the
receiver, the network is a second entity that determines the size of the sender’s
window

Congestion Policy in TCP

1. Slow Start Phase: Transmission starts slowly, and the congestion window grows exponentially until it reaches the threshold.
2. Congestion Avoidance Phase: After reaching the threshold, the congestion window grows by 1 per round trip.
3. Congestion Detection Phase: When congestion is detected, the sender goes back to the Slow Start phase or the Congestion Avoidance phase.

Slow Start Phase

Exponential increment: In this phase, the congestion window size doubles after every RTT, i.e., it grows exponentially.

Example:- If the initial congestion window size is 1 segment, and the first
segment is successfully acknowledged, the congestion window size becomes 2
segments. If the next transmission is also acknowledged, the congestion window
size doubles to 4 segments. This exponential growth continues as long as all
segments are successfully acknowledged.

Initially cwnd = 1

After 1 RTT, cwnd = 2^(1) = 2

2 RTT, cwnd = 2^(2) = 4

3 RTT, cwnd = 2^(3) = 8

Congestion Avoidance Phase

Additive increment: This phase starts after the window reaches the threshold value, also denoted ssthresh. The size of cwnd (the congestion window) then increases additively: after each RTT, cwnd = cwnd + 1.

Example:- if the congestion window size is 20 segments and all 20 segments are
successfully acknowledged within an RTT, the congestion window size would be
increased to 21 segments in the next RTT. If all 21 segments are again
successfully acknowledged, the congestion window size would be increased to
22 segments, and so on.

Initially cwnd = i

After 1 RTT, cwnd = i+1

2 RTT, cwnd = i+2

3 RTT, cwnd = i+3

Congestion Detection Phase

Multiplicative decrement: If congestion occurs, the congestion window size is


decreased. The only way a sender can guess that congestion has happened is
the need to retransmit a segment. Retransmission is needed to recover a missing
packet that is assumed to have been dropped by a router due to congestion.
Retransmission can occur in one of two cases: when the RTO timer times out or
when three duplicate ACKs are received.

Case 1: Retransmission due to Timeout – In this case, the congestion possibility


is high.
(a) ssthresh is reduced to half of the current window size.

(b) set cwnd = 1

(c) start with the slow start phase again.

Case 2: Retransmission due to 3 Acknowledgement Duplicates – The congestion


possibility is less.

(a) ssthresh value reduces to half of the current window size.

(b) set cwnd= ssthresh

(c) start with congestion avoidance phase
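The three phases and the two retransmission cases can be condensed into one small state-update function. The sketch below is a simplified model (window sizes in whole segments, one event per RTT), not a faithful TCP implementation.

def next_cwnd(cwnd, ssthresh, event):
    """One-step sketch of the congestion policy described above.

    event is "ack" (a full window acknowledged in one RTT),
    "timeout", or "3dupack". Sizes are in segments.
    """
    if event == "ack":
        if cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)    # slow start: exponential growth
        else:
            cwnd = cwnd + 1                   # congestion avoidance: additive growth
    elif event == "timeout":                  # heavy congestion assumed
        ssthresh = max(cwnd // 2, 1)
        cwnd = 1                              # restart slow start
    elif event == "3dupack":                  # mild congestion assumed
        ssthresh = max(cwnd // 2, 1)
        cwnd = ssthresh                       # continue in congestion avoidance
    return cwnd, ssthresh

cwnd, ssthresh = 1, 8
for event in ["ack", "ack", "ack", "ack", "timeout", "ack", "ack", "3dupack"]:
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, event)
    print(event, "-> cwnd =", cwnd, "ssthresh =", ssthresh)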

What is congestion?

Congestion is a state occurring in the network layer when the message traffic is so heavy that it slows down the network's response time.

Effects of Congestion

 As delay increases, performance decreases.


 If delay increases, retransmission occurs, making situation worse.

Congestion control algorithms

 Congestion Control is a mechanism that controls the entry of data packets


into the network, enabling a better use of a shared network infrastructure and
avoiding congestive collapse.
 Congestive-Avoidance Algorithms (CAA) are implemented at the TCP layer as
the mechanism to avoid congestive collapse in a network.
 There are two congestion control algorithm which are as follows:

 Leaky Bucket Algorithm

 The leaky bucket algorithm finds its use in the context of network traffic shaping or rate limiting.
 Leaky bucket and token bucket implementations are the two approaches predominantly used for traffic-shaping algorithms.
 This algorithm is used to control the rate at which traffic is sent to the network and to shape bursty traffic into a steady traffic stream.
 A disadvantage of the leaky-bucket algorithm is the inefficient use of available network resources.
 A large share of network resources, such as bandwidth, may not be used effectively.

Let us consider an example to understand


Imagine a bucket with a small hole in the bottom. No matter at what rate water enters the bucket, the outflow is at a constant rate. When the bucket is full, additional water entering it spills over the sides and is lost.

Similarly, each network interface contains a leaky bucket and the


following steps are involved in leaky bucket algorithm:

1. When the host wants to send a packet, the packet is thrown into the bucket.
2. The bucket leaks at a constant rate, meaning the network interface transmits
packets at a constant rate.
3. Bursty traffic is converted to a uniform traffic by the leaky bucket.
4. In practice the bucket is a finite queue that outputs at a finite rate.
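These four steps translate directly into a tiny discrete-time simulation; the numbers in the usage line are illustrative only.

from collections import deque

def leaky_bucket(arrivals, capacity, leak_rate):
    """Simulate the leaky-bucket steps above, one tick at a time.

    arrivals[t] is the number of packets that show up at tick t,
    capacity is the bucket (queue) size, and leak_rate is the constant
    number of packets transmitted per tick. Returns the output per tick.
    """
    bucket = deque()
    sent_per_tick = []
    for incoming in arrivals:
        for _ in range(incoming):
            if len(bucket) < capacity:
                bucket.append(1)                  # packet queued
            # else: bucket full, packet is lost ("spills over")
        sent = min(leak_rate, len(bucket))        # constant-rate output
        for _ in range(sent):
            bucket.popleft()
        sent_per_tick.append(sent)
    return sent_per_tick

# A burst of 5 packets followed by silence leaves the output at a steady 2 per tick.
print(leaky_bucket([5, 0, 0, 0], capacity=4, leak_rate=2))    # [2, 2, 0, 0]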

 Token bucket Algorithm

 The leaky bucket algorithm has a rigid output design at an average rate
independent of the bursty traffic.
 In some applications, when large bursts arrive, the output is allowed to speed
up. This calls for a more flexible algorithm, preferably one that never loses
information. Therefore, a token bucket algorithm finds its uses in network
traffic shaping or rate-limiting.
 It is a control algorithm that indicates when traffic should be sent. This decision is based on the presence of tokens in the bucket.
 The bucket contains tokens. Each token corresponds to a packet of predetermined size. Tokens are removed from the bucket whenever a packet is sent.
 When tokens are present, a flow is allowed to transmit traffic.
 If there are no tokens, no flow may send its packets. Hence, a flow can transfer traffic up to its peak burst rate only if enough tokens are available in the bucket.

Need of token bucket Algorithm:-

The leaky bucket algorithm enforces output pattern at the average rate, no matter
how bursty the traffic is. So in order to deal with the bursty traffic we need a
flexible algorithm so that the data is not lost. One such algorithm is token bucket
algorithm.

Steps of this algorithm can be described as follows:


1. At regular intervals, tokens are thrown into the bucket.
2. The bucket has a maximum capacity.
3. If there is a ready packet, a token is removed from the bucket, and the packet
is sent.
4. If there is no token in the bucket, the packet cannot be sent.
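A discrete-time sketch of these steps, mirroring the leaky-bucket simulation above, shows how idle ticks let tokens accumulate so that a later burst can go out immediately; the numbers in the usage line are illustrative only.

def token_bucket(arrivals, bucket_size, token_rate):
    """Simulate the token-bucket steps above, one tick at a time.

    arrivals[t] is the number of packets ready at tick t, bucket_size
    caps how many tokens can accumulate, token_rate tokens are added
    per tick, and each transmitted packet consumes one token.
    """
    tokens = 0
    waiting = 0
    sent_per_tick = []
    for incoming in arrivals:
        tokens = min(bucket_size, tokens + token_rate)    # tokens dripped in
        waiting += incoming
        sent = min(waiting, tokens)                       # one token per packet
        tokens -= sent
        waiting -= sent
        sent_per_tick.append(sent)
    return sent_per_tick

# Two idle ticks build up 3 tokens, so the burst at tick 2 is sent at once.
print(token_bucket([0, 0, 5, 0], bucket_size=3, token_rate=1))    # [0, 0, 3, 1]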

Let’s understand with an example, In figure (A) we see a bucket holding three
tokens, with five packets waiting to be transmitted. For a packet to be
transmitted, it must capture and destroy one token. In figure (B) We see that
three of the five packets have gotten through, but the other two are stuck waiting
for more tokens to be generated.

Ways in which the token bucket is superior to the leaky bucket: The leaky bucket algorithm controls the rate at which packets are introduced into the network, but it is very conservative in nature. Some flexibility is introduced in the token bucket algorithm. In the token bucket algorithm, tokens are generated at each tick (up to a certain limit). For an incoming packet to be transmitted, it must capture a token, and the transmission then takes place at the same rate. Hence some of the bursty packets are transmitted at that rate if tokens are available, which introduces some flexibility into the system.

Formula: M * s = C + ρ * s, where s is the burst length (time taken), M is the maximum output rate, ρ is the token arrival rate, and C is the capacity of the token bucket in bytes. Solving for s gives s = C / (M − ρ).

Let's understand this with an example.
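The numbers below are illustrative only and simply plug into the formula above.

# Illustrative numbers only: bucket capacity C = 1 Mb of tokens,
# token arrival rate rho = 2 Mbps, maximum output rate M = 10 Mbps.
C, rho, M = 1.0, 2.0, 10.0            # megabits and megabits per second

burst_length = C / (M - rho)          # s = C / (M - rho)
print(burst_length)                   # 0.125 s: the burst can last 125 ms at the full rate M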

Application Layer

The Application Layer, being the topmost layer in the OSI model, performs several kinds of functions that are required in any kind of application or communication process.
The following is a list of functions performed by the Application Layer of the OSI model:

Data from User <=> Application layer <=> Data from Presentation Layer
 Application Layer provides a facility by which users can forward several
emails and it also provides a storage facility.
 This layer allows users to access, retrieve and manage files in a remote
computer.
 It allows users to log on as a remote host.
 This layer provides access to global information about various services.
 This layer provides services which include: e-mail, transferring files,
distributing results to the user, directory services, network resources and so
on.
 It provides protocols that allow software to send and receive information and
present meaningful data to users.
 It handles issues such as network transparency, resource allocation and so on.
 This layer serves as a window for users and application processes to access
network services.
 The Application Layer is not itself a single function; rather, it is the layer at which application-level functions are performed.
 The application layer is actually an abstraction layer that specifies the shared
protocols and interface methods used by hosts in a communication network.
 Application Layer helps us to identify communication partners, and
synchronizing communication.
 This layer allows users to interact with other software applications.
 In this layer, data is presented in a form users can readily understand, rather than having to remember or visualize the data in binary format (0s and 1s).
 This application layer interacts with the Operating System (OS) and preserves the data in a suitable manner.
 This layer also receives and preserves data from its previous layer, the Presentation Layer (which carries the syntax and semantics of the information transmitted).
 The protocols which are used in this application layer depend upon what
information users wish to send or receive.
 This application layer, in general, performs host initialization followed by
remote login to hosts.

Application Layer Protocols: The application layer provides several protocols


which allow any software to easily send and receive information and present
meaningful data to its users.
The following are some of the protocols which are provided by the
application layer.
 TELNET: Telnet stands for Telecommunications Network. This protocol provides remote terminal access over the Internet: it allows a Telnet client to log in to a Telnet server and use its resources. Telnet uses port number 23.
 DNS: DNS stands for Domain Name System. The DNS service translates
the domain name (selected by user) into the corresponding IP address.
For example- If you choose the domain name as www.abcd.com, then
DNS must translate it as 192.36.20.8 (random IP address written just for
understanding purposes). DNS protocol uses the port number 53.
 DHCP: DHCP stands for Dynamic Host Configuration Protocol. It provides
IP addresses to hosts. Whenever a host tries to register for an IP address
with the DHCP server, DHCP server provides lots of information to the
corresponding host. DHCP uses port numbers 67 and 68.
 FTP: FTP stands for File Transfer Protocol. This protocol helps to transfer
different files from one device to another. FTP promotes sharing of files
via remote computer devices with reliable, efficient data transfer. FTP
uses port number 20 for data access and port number 21 for data control.
 SMTP: SMTP stands for Simple Mail Transfer Protocol. It is used to
transfer electronic mail from one user to another user. SMTP is used by
end users to send emails with ease. SMTP uses port numbers 25 and 587.
 HTTP: HTTP stands for Hyper Text Transfer Protocol. It is the foundation
of the World Wide Web (WWW). HTTP works on the client server model.
This protocol is used for transmitting hypermedia documents like HTML.
This protocol was designed particularly for the communications between
the web browsers and web servers, but this protocol can also be used for
several other purposes. HTTP is a stateless protocol (the client sends a request, the server sends back a response, and no session state is retained between requests), which means the server is not responsible for maintaining previous client requests. HTTP uses port number 80.
 NFS: NFS stands for Network File System. This protocol allows remote
hosts to mount files over a network and interact with those file systems
as though they are mounted locally. NFS uses the port number 2049.
 SNMP: SNMP stands for Simple Network Management Protocol. This protocol gathers data by polling the devices on the network from the management station at fixed or random intervals, requiring them to disclose certain information. SNMP typically uses UDP port 161 for queries to agents and UDP port 162 for traps sent back to the management station.
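As a concrete taste of two of these protocols working together, the short Python sketch below first resolves a hostname (the resolver speaks DNS, normally on port 53, behind the scenes) and then sends a minimal HTTP GET over TCP port 80. The hostname is a placeholder and the snippet assumes Internet access.

import socket

# 1. DNS: ask the resolver for the IP address of a (placeholder) domain name.
hostname = "www.example.com"
ip = socket.gethostbyname(hostname)
print("DNS:", hostname, "->", ip)

# 2. HTTP: open a TCP connection to port 80 and send a minimal GET request.
with socket.create_connection((ip, 80), timeout=5) as s:
    request = f"GET / HTTP/1.1\r\nHost: {hostname}\r\nConnection: close\r\n\r\n"
    s.sendall(request.encode("ascii"))
    status_line = s.recv(1024).split(b"\r\n", 1)[0]
    print("HTTP:", status_line.decode())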
The Application layer includes the following functions:
o Identifying communication partners: The application layer identifies the availability
of communication partners for an application with data to transmit.
o Determining resource availability: The application layer determines whether
sufficient network resources are available for the requested communication.
o Synchronizing communication: All communication between applications requires cooperation, which is managed by the application layer.

Services of Application Layers


o Network Virtual terminal: An application layer allows a user to log on to a remote
host. To do so, the application creates a software emulation of a terminal at the
remote host. The user's computer talks to the software terminal, which in turn, talks
to the host. The remote host thinks that it is communicating with one of its own
terminals, so it allows the user to log on.
o File Transfer, Access, and Management (FTAM): An application allows a user to
access files in a remote computer, to retrieve files from a computer and to manage
files in a remote computer. FTAM defines a hierarchical virtual file in terms of file
structure, file attributes and the kind of operations performed on the files and their
attributes.
o Addressing: To obtain communication between client and server, addressing is needed. When a client makes a request to the server, the request contains the server's address and the client's own address. The server's response to the client's request contains the destination address, i.e., the client's address. To achieve this kind of addressing, DNS is used.
o Mail Services: An application layer provides Email forwarding and storage.
o Directory Services: An application contains a distributed database that provides
access for global information about various objects and services.

Authentication: It authenticates the sender or receiver's message or both.

Network Application Architecture


Application architecture is different from the network architecture. The network
architecture is fixed and provides a set of services to applications. The application
architecture, on the other hand, is designed by the application developer and defines
how the application should be structured over the various end systems.

Application architecture is of two types:

o Client-server architecture: An application program running on the local machine that sends a request to another application program is known as a client, and a program that serves the request is known as a server. For example, when a web server receives a request from a client host, it responds to the request from the client host.

Characteristics Of Client-server architecture:

o In Client-server architecture, clients do not directly communicate with each other. For
example, in a web application, two browsers do not directly communicate with each
other.
o A server has a fixed, well-known address (its IP address) and is always on, so a client can contact the server at any time by sending a packet to the server's IP address.

Disadvantage Of Client-server architecture:


It is a single-server-based architecture, which may be incapable of handling all the requests from clients. For example, a social networking site can become overwhelmed when only one server exists.
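Here is a minimal sketch of this request/response pattern, using a local TCP echo server and a client in one Python script; 127.0.0.1 and port 5050 are arbitrary placeholder values.

import socket
import threading
import time

def run_server(host="127.0.0.1", port=5050):
    """The server: a fixed, well-known address that is always listening."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        conn, _ = srv.accept()            # handle a single client, for brevity
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"echo: " + data)

def run_client(host="127.0.0.1", port=5050):
    """The client: initiates contact by sending a request to the server's address."""
    with socket.create_connection((host, port)) as c:
        c.sendall(b"hello")
        print(c.recv(1024).decode())      # prints: echo: hello

threading.Thread(target=run_server, daemon=True).start()
time.sleep(0.2)                           # give the server a moment to start listening
run_client()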

o P2P (peer-to-peer) architecture: It has no dedicated server in a data center. The


peers are computers that are not owned by the service provider; most of them reside in homes, offices, schools, and universities. The peers communicate with each other without passing the information through a dedicated server; this architecture is known as peer-to-peer architecture. Applications based on the P2P architecture include file sharing and Internet telephony.

Features of P2P architecture


o Self-scalability: In a file-sharing system, although each peer generates a workload by requesting files, each peer also adds service capacity by distributing files to other peers.
o Cost-effective: It is cost-effective as it does not require significant server
infrastructure and server bandwidth.

Client and Server processes


o A network application consists of a pair of processes that send the messages to
each other over a network.
o In P2P file-sharing system, a file is transferred from a process in one peer to a
process in another peer. We label one of the two processes as the client and another
process as the server.
o With P2P file sharing, the peer that is downloading the file is known as the client, and the peer that is uploading the file is known as the server. However, in some applications such as P2P file sharing, a process can act as both a client and a server. Therefore, we can say that a process can both download and upload files.
