CN Notes

CN PYQs

Q1.
[ ] a. Explain the principal differences between connectionless and
connection-oriented communication.
[ ] b. What is the channel allocation problem?
[ ] c. Find the error, if any, in the following IPv4 addresses:
i. 221.24.7.8.20 ii. 75.45.351.14
[ ] d. Differentiate between TCP and UDP
[ ] e. Write short note on SMTP
Q2.
[ ] a. Describe OSI reference model with a neat diagram.
[ ] b. Explain different framing methods.
Q3.
[ ] a. Explain different types of guided transmission media in detail.
[ ] b. Explain sliding window protocol using selective repeat technique.
Q4.
[ ] a. Explain Link State Routing with a suitable example.
[ ] b. What is the need of DNS and explain how DNS works?
Q5.
[ ] a. Explain IPv4 header format in detail.
[ ] b. Explain Three Way Handshake Technique in TCP
Q6.
[ ] a. Explain leaky bucket algorithm and compare it with token bucket
algorithm.
[ ] b. Write short notes on:
[ ] i. TCP Timers
[ ] ii. HTTP
Q1. Solve any Four out of Five
[ ] a. Explain the need for layering in reference models for communication and
networking.
[ ] b. Explain one bit sliding window protocol.
[ ] c. Explain IPv4 header format with diagram.
[ ] d. Differentiate between TCP and UDP. [COMMON]
[ ] e. What is the need of DNS? Explain DNS Name Space. [COMMON]
Q2. Attempt the following
[ ] a. Explain the following transmission media: Twisted Pair, Coaxial Cable
(baseband and broadband), Fiber Optic.
[ ] b. What is the channel allocation problem? Explain CSMA/CD protocol.
[ ] Consider building a CSMA/CD network running at 1Gbps over a 1-km cable with
no repeaters. The signal speed of the cable is 200,000 km/sec. What is the
minimum frame size?
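For a quick sanity check: in CSMA/CD the minimum frame must take at least one round-trip time to transmit, so the sender is still transmitting when a collision at the far end propagates back. A short Python sketch of the arithmetic:

```python
bandwidth = 1_000_000_000       # 1 Gbps, in bits per second
cable_km = 1
speed_km_s = 200_000            # signal propagation speed of the cable

prop_delay = cable_km / speed_km_s   # one-way delay: 5 microseconds
rtt = 2 * prop_delay                 # worst-case collision detection window
min_frame_bits = bandwidth * rtt     # frame must outlast the RTT
print(min_frame_bits)                # 10000.0 bits = 1250 bytes
```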
Q3. Attempt the following
[ ] a. Explain Classful and Classless IPv4 addressing.
[ ] b. Explain TCP connection establishment and TCP connection release.
Q4. Attempt the following
[ ] a. Explain Selective Repeat Protocol for flow control.
[ ] b. Explain shortest path (Dijkstra's Algorithm) routing algorithm.
Q5. Attempt the following
[ ] a. A large number of consecutive IP addresses are available starting at
198.16.0.0. Suppose that four organizations, A, B, C, and D, request 4000, 2000,
4000, and 8000 addresses, respectively, and in that order. For each of these,
give the first IP address assigned, the last IP address assigned, and the mask in
the w.x.y.z/s notation.
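A sketch of one standard solution using Python's `ipaddress` module: each request is rounded up to a power of two and aligned to its own block size, as CIDR requires, with addresses handed out in request order. The `allocate` helper is illustrative, not part of the question.

```python
import ipaddress

def allocate(start, requests):
    # Round each request up to a power of two and align each block
    # to its size; addresses are assigned in request order.
    next_free = int(ipaddress.IPv4Address(start))
    result = []
    for name, need in requests:
        size = 1 << (need - 1).bit_length()          # round up to power of 2
        if next_free % size:
            next_free += size - next_free % size     # align to block size
        prefix = 32 - size.bit_length() + 1
        first = str(ipaddress.IPv4Address(next_free))
        last = str(ipaddress.IPv4Address(next_free + size - 1))
        result.append((name, f"{first}/{prefix}", last))
        next_free += size
    return result

for row in allocate("198.16.0.0",
                    [("A", 4000), ("B", 2000), ("C", 4000), ("D", 8000)]):
    print(row)
```

This yields A: 198.16.0.0/20, B: 198.16.16.0/21, C: 198.16.32.0/20, D: 198.16.64.0/19; note the gaps at 198.16.24.0 and 198.16.48.0 forced by the alignment rule.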
[ ] b. Explain DHCP
[ ] c. Explain ARP protocol in detail.
Q6. Attempt the following
[ ] a. Explain IP address message format and its operation in detail.
[ ] b. Explain ARP protocol in detail.
Q1. Attempt any four of the following
[ ] a. What is subnetting? Compare subnetting and supernetting
[ ] b. What are three reasons for using layered protocols? What are two
possible disadvantages of using layered protocols?
[ ] c. Explain the count to infinity problem in detail.
[ ] d. List two ways in which the OSI reference model and the TCP/IP reference
model are the same. Now list two ways in which they differ. [COMMON]
[ ] e. A 4-bit data word with binary value 1010 is to be encoded using an
even-parity Hamming code. What is the binary value after encoding?
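A minimal sketch of standard even-parity Hamming(7,4) encoding, assuming parity bits at positions 1, 2, and 4 (the usual convention; the helper name is illustrative):

```python
def hamming74_even(d):
    # d = [d1, d2, d3, d4]; with even parity, each parity bit is the
    # XOR of the data bits in the positions it covers.
    p1 = d[0] ^ d[1] ^ d[3]   # covers positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]   # covers positions 2, 3, 6, 7
    p3 = d[1] ^ d[2] ^ d[3]   # covers positions 4, 5, 6, 7
    # codeword layout: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

code = hamming74_even([1, 0, 1, 0])
print("".join(map(str, code)))   # 1011010
```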
Q2. Attempt the following
[ ] a. Define guided transmission media. Illustrate with a diagram the details
of coaxial cable. State any 5 comparative characteristics of coaxial cable
versus fiber-optic and twisted-pair cables.
[ ] b. Explain how collisions are handled in CSMA/CD. A 5 km long broadcast LAN
uses CSMA/CD. [COMMON]
[ ] The signal travels along the wire at 5 x 10^8 m/s. What is the minimum
packet size that can be used on this network?
Q3. Attempt the following
[ ] a. An organization has been granted a block of addresses starting with
105.8.71.0/24. The organization wants to distribute this block to 11 subnets as
follows:
1. First Group has 3 medium size businesses, each need 16 addresses
2. The second Group has 4 medium size businesses, each need 32 addresses.
3. The third Group has 4 households, each need 4 addresses.
Design the sub blocks and give slash notation for each subblock. Find how many
addresses have been left after this allocation.
[ ] b. Explain the classful IP addressing scheme in detail. List the advantages
and disadvantages of the classless IP addressing scheme.
Q4.
[ ] a) Explain the open loop congestion control and closed loop congestion
control policies in detail
[ ] b) Explain the TCP connection establishment and Connection release.
Q5.
[ ] a) Explain the concept of the sliding window protocol. Explain the
selective repeat protocol with an example. Compare the performance of Selective
Repeat and Go-Back-N protocols.
[ ] b) Explain the link state routing algorithm with example?
Q6. Write a short note on following
[ ] a) ARP & RARP
[ ] b) DNS [COMMON]
Q1.
[ ] a) State and explain the design issues of OSI layers. [COMMON]
[ ] b) Compare the performance characteristics of coaxial, twisted pair and
fiber optic transmission media.
[ ] c) List the types of Error Detection and Correction techniques with the help
of example.
[ ] d) Compare the Network layer protocols IPv4 and IPv6.
Q2.
[ ] a) Illustrate TCP protocol for establishing a connection using 3-way
handshake technique in the transport layer. [COMMON]
[ ] b) Explain ISO-OSI reference model with diagram. [COMMON]
Q3.
[ ] a) What is the throughput of the system in both Pure ALOHA and Slotted
ALOHA, if the network transmits 200-bit frames on a shared channel of 200 kbps
and the system produces: a) 1000 frames per second b) 500 frames per second?
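Using the standard throughput formulas S = G·e^(-2G) for pure ALOHA and S = G·e^(-G) for slotted ALOHA, where G is the offered load in frames per frame time, the arithmetic can be checked in Python:

```python
from math import exp

bandwidth = 200_000                  # 200 kbps shared channel
frame_bits = 200
frame_time = frame_bits / bandwidth  # 1 ms per frame

results = {}
for fps in (1000, 500):
    G = fps * frame_time             # offered load per frame time
    results[fps] = (G * exp(-2 * G), G * exp(-G))  # (pure, slotted)
    print(fps, results[fps])
```

For 1000 frames/s, G = 1: pure ALOHA gives S ≈ 0.135 (about 135 frames/s) and slotted ALOHA S ≈ 0.368. For 500 frames/s, G = 0.5: S ≈ 0.184 and S ≈ 0.303 respectively.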
[ ] b) Analyze the steps involved in Token and Leaky bucket algorithm by quoting
the need and benefit in the network layer with suitable diagrams.
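A tick-based leaky-bucket simulation sketch (the parameter values are illustrative): bursty arrivals are queued, anything beyond the bucket capacity is dropped, and output is smoothed to a constant rate. A token bucket would instead accumulate tokens at the constant rate and let bursts through while tokens last.

```python
def leaky_bucket(arrivals, rate, capacity):
    # arrivals[i] = packets arriving in tick i; output is smoothed to `rate`.
    queue, out = 0, []
    for a in arrivals:
        queue = min(queue + a, capacity)   # overflow beyond capacity is dropped
        sent = min(queue, rate)
        out.append(sent)
        queue -= sent
    return out

print(leaky_bucket([10, 0, 0, 4, 0], rate=3, capacity=8))   # [3, 3, 2, 3, 1]
```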
Q4.
[ ] a) Explain Link State Routing with the help of an example.
[ ] b) An ISP is granted a block of addresses starting with 190.100.0.0/16
(65,536 addresses). The ISP needs to distribute these addresses to three groups
of customers as follows:
a. The first group has 64 customers; each needs 256 addresses.
b. The second group has 128 customers; each needs 128 addresses.
c. The third group has 128 customers; each needs 64 addresses.
[ ] Design the subblocks and find out how many addresses are still available
after these allocations.
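A quick check of the totals (the per-customer prefixes are /24, /25, and /26 for 256, 128, and 64 addresses respectively):

```python
# (customers, addresses per customer) for the three groups
groups = [(64, 256), (128, 128), (128, 64)]
used = sum(n * size for n, size in groups)   # 16384 + 16384 + 8192
total = 2 ** (32 - 16)                       # a /16 holds 65,536 addresses
print(used, total - used)                    # 40960 24576
```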
Q5.
[ ] a) What is Congestion control? Explain Open loop and Closed loop Congestion
control.
[ ] b) Draw and summarize the structure of HTTP request and response.
Q6.
Write Short Note on (Any Two)
[ ] a) Address Resolution Protocol (ARP)
[ ] b) Classful and Classless Addressing
[ ] c) Distance Vector Routing (DVR)
Q1.
[ ] A. Explain design issues of layers in OSI reference model in computer
networks. Explain ISO OSI Reference model with diagram. [COMMON]
[ ] B. Explain CSMA/CA protocols. Explain how collisions are handled in CSMA/CD.
[COMMON]
[ ] C. Explain different framing methods. What are the advantages of
variable-length frames over fixed-length frames?
Q3. Solve any Two.
[ ] A. Explain IPv4 header format with diagram.
[ ] B. Explain different TCP Congestion Control policies.
[ ] C. Explain TCP flow control.
Q4. Solve any Two.
[ ] A. Explain ARP and RARP protocols in detail.
[ ] B. Explain the need for DNS (Domain Name System) and describe its
functioning. [COMMON]
[ ] C. Explain working of DHCP protocol.
LMT Video
MODULE 1
[ ] OSI Reference Model
The OSI (Open Systems Interconnection) Model is a set of rules that explains how
different computer systems communicate over a network. OSI Model was developed by
the International Organization for Standardization (ISO). The OSI Model consists
of 7 layers and each layer has specific functions and responsibilities.
This layered approach makes it easier for different devices and technologies to
work together. OSI Model provides a clear structure for data transmission and
managing network issues. The OSI Model is widely used as a reference to
understand how network systems function.
Layer 1 – Physical Layer
The lowest layer of the OSI reference model is the Physical Layer. It is
responsible for the actual physical connection between the devices. The physical
layer contains information in the form of bits. Physical Layer is responsible for
transmitting individual bits from one node to the next. When receiving data, this
layer will get the signal received and convert it into 0s and 1s and send them to
the Data Link layer, which will put the frame back together. Common physical
layer devices are Hub, Repeater, Modem, and Cables.
Functions of the Physical Layer
 Bit Synchronization: The physical layer provides the synchronization of the
bits by providing a clock. This clock controls both sender and receiver
thus providing synchronization at the bit level.
 Bit Rate Control: The Physical layer also defines the transmission rate
i.e. the number of bits sent per second.
 Physical Topologies: The physical layer specifies how the different
devices/nodes are arranged in a network, i.e., bus topology, star topology,
or mesh topology.
 Transmission Mode: Physical layer also defines how the data flows between
the two connected devices. The various transmission modes possible
are Simplex, half-duplex and full-duplex.
Layer 2 – Data Link Layer (DLL)
The data link layer is responsible for the node-to-node delivery of the message.
The main function of this layer is to make sure data transfer is error-free from
one node to another, over the physical layer. When a packet arrives in a network,
it is the responsibility of the DLL to transmit it to the Host using its MAC
address. Packet in the Data Link layer is referred to as Frame. Switches and
Bridges are common Data Link Layer devices.
The Data Link Layer is divided into two sublayers:
 Logical Link Control (LLC)
 Media Access Control (MAC)
The packet received from the Network layer is further divided into frames
depending on the frame size of the NIC(Network Interface Card). DLL also
encapsulates Sender and Receiver’s MAC address in the header.
The Receiver’s MAC address is obtained by placing an ARP(Address Resolution
Protocol) request onto the wire asking “Who has that IP address?” and the
destination host will reply with its MAC address.
Functions of the Data Link Layer
 Framing: Framing is a function of the data link layer. It provides a way
for a sender to transmit a set of bits that are meaningful to the receiver.
This can be accomplished by attaching special bit patterns to the beginning
and end of the frame.
 Physical Addressing: After creating frames, the Data link layer adds
physical addresses (MAC addresses) of the sender and/or receiver in the
header of each frame.
 Error Control: The data link layer provides the mechanism of error control
in which it detects and retransmits damaged or lost frames.
 Flow Control: The data rate must be constant on both sides else the data
may get corrupted thus, flow control coordinates the amount of data that
can be sent before receiving an acknowledgment.
 Access Control: When a single communication channel is shared by multiple
devices, the MAC sub-layer of the data link layer helps to determine which
device has control over the channel at a given time.
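As an illustration of the framing function, a bit-stuffing sketch in the style of HDLC: after every run of five consecutive 1s in the payload, a 0 is inserted so the flag pattern 01111110 can never appear inside the frame. The function name is illustrative.

```python
FLAG = "01111110"

def bit_stuff(bits: str) -> str:
    # Insert a 0 after every five consecutive 1s so the payload
    # can never accidentally contain the frame flag.
    out, run = [], 0
    for b in bits:
        out.append(b)
        if b == "1":
            run += 1
            if run == 5:
                out.append("0")
                run = 0
        else:
            run = 0
    return "".join(out)

frame = FLAG + bit_stuff("0111111111100") + FLAG
print(frame)
```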
Layer 3 – Network Layer
The network layer works for the transmission of data from one host to the other
located in different networks. It also takes care of packet routing i.e.
selection of the shortest path to transmit the packet, from the number of routes
available. The sender's and receiver's IP addresses are placed in the header by
the network layer. A segment in the Network layer is referred to as a Packet.
The network layer is implemented by networking devices such as routers.
Functions of the Network Layer
 Routing: The network layer protocols determine which route is suitable from
source to destination. This function of the network layer is known as
routing.
 Logical Addressing: To identify each device inter-network uniquely, the
network layer defines an addressing scheme. The sender and receiver’s IP
addresses are placed in the header by the network layer. Such an address
distinguishes each device uniquely and universally.
Layer 4 – Transport Layer
The transport layer provides services to the application layer and takes services
from the network layer. The data in the transport layer is referred to
as Segments. It is responsible for the end-to-end delivery of the complete
message. The transport layer also provides the acknowledgment of the successful
data transmission and re-transmits the data if an error is found. Protocols
used in the Transport Layer are TCP, UDP, and SCTP.
At the sender’s side, the transport layer receives the formatted data from the
upper layers, performs Segmentation, and also implements Flow and error
control to ensure proper data transmission. It also adds Source and
Destination port number in its header and forwards the segmented data to the
Network Layer.
 Generally, this destination port number is configured, either by default or
manually. For example, when a web application requests a web server, it
typically uses port number 80, because this is the default port assigned to
web applications. Many applications have default ports assigned.
At the Receiver’s side, Transport Layer reads the port number from its header and
forwards the Data which it has received to the respective application. It also
performs sequencing and reassembling of the segmented data.
Functions of the Transport Layer
 Segmentation and Reassembly: This layer accepts the message from the
(session) layer, and breaks the message into smaller units. Each of the
segments produced has a header associated with it. The transport layer at
the destination station reassembles the message.
 Service Point Addressing: To deliver the message to the correct process,
the transport layer header includes a type of address called service point
address or port address. Thus by specifying this address, the transport
layer makes sure that the message is delivered to the correct process.
Services Provided by Transport Layer
 Connection-Oriented Service
 Connectionless Service
Layer 5 – Session Layer
The Session Layer in the OSI Model is responsible for the establishment,
management, and termination of sessions between two devices. It also provides
authentication and security. Protocols used in the Session Layer are NetBIOS,
PPTP.
Functions of the Session Layer
 Session Establishment, Maintenance, and Termination: The layer allows the
two processes to establish, use, and terminate a connection.
 Synchronization: This layer allows a process to add checkpoints that are
considered synchronization points in the data. These synchronization points
help to identify the error so that the data is re-synchronized properly,
and ends of the messages are not cut prematurely and data loss is avoided.
 Dialog Controller: The session layer allows two systems to start
communication with each other in half-duplex or full-duplex.
Example
Let us consider a scenario where a user wants to send a message through some
Messenger application running in their browser. The “Messenger” here acts as the
application layer which provides the user with an interface to create the data.
This message or so-called Data is compressed, optionally encrypted (if the data
is sensitive), and converted into bits (0’s and 1’s) so that it can be
transmitted.
Layer 6 – Presentation Layer
The presentation layer is also called the Translation layer. The data from the
application layer is extracted here and manipulated as per the required format to
transmit over the network. Protocols used in the Presentation Layer
are JPEG, MPEG, GIF, TLS/SSL, etc.
Functions of the Presentation Layer
 Translation: For example, ASCII to EBCDIC.
 Encryption/ Decryption: Data encryption translates the data into another
form or code. The encrypted data is known as the ciphertext and the
decrypted data is known as plain text. A key value is used for encrypting
as well as decrypting data.
 Compression: Reduces the number of bits that need to be transmitted on the
network.
Layer 7 – Application Layer
At the very top of the OSI Reference Model stack of layers, we find the
Application layer which is implemented by the network applications. These
applications produce the data to be transferred over the network. This layer also
serves as a window for the application services to access the network and for
displaying the received information to the user. Protocols used in the
Application layer are SMTP, FTP, DNS, etc.
Functions of the Application Layer
The main functions of the application layer are given below.
 Network Virtual Terminal(NVT): It allows a user to log on to a remote host.
 File Transfer Access and Management(FTAM): This application allows a user
to access files in a remote host, retrieve files in a remote host, and
manage or control files from a remote computer.
 Mail Services: Provide email service.
 Directory Services: This application provides distributed database sources
and access for global information about various objects and services.
Data flows through the OSI model in a step-by-step process:
 Application Layer: Applications create the data.
 Presentation Layer: Data is formatted and encrypted.
 Session Layer: Connections are established and managed.
 Transport Layer: Data is broken into segments for reliable delivery.
 Network Layer: Segments are packaged into packets and routed.
 Data Link Layer: Packets are framed and sent to the next device.
 Physical Layer: Frames are converted into bits and transmitted physically.
Each layer adds specific information to ensure the data reaches its destination
correctly, and these steps are reversed upon arrival.
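The top-down flow above can be sketched as successive encapsulation, each layer wrapping the payload from the layer above with its own header. The header strings here are purely illustrative placeholders, not real header formats:

```python
def encapsulate(data: bytes) -> bytes:
    segment = b"TCP|" + data             # Transport: ports, sequencing
    packet = b"IP|" + segment            # Network: source/destination addresses
    frame = b"ETH|" + packet + b"|FCS"   # Data link: MAC header plus checksum
    return frame                         # Physical layer then sends raw bits

wire = encapsulate(b"HELLO")
print(wire)   # b'ETH|IP|TCP|HELLO|FCS'
```

On the receiving side the same wrapping is peeled off in reverse order, layer by layer.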
We can understand how data flows through OSI Model with the help of an example
mentioned below.
Let us suppose, Person A sends an e-mail to his friend Person B.
Step 1: Person A interacts with an e-mail application like Gmail or Outlook and
writes the email to be sent. (This happens at the Application Layer.)
Step 2: At Presentation Layer, Mail application prepares for data transmission
like encrypting data and formatting it for transmission.
Step 3: At Session Layer, There is a connection established between the sender
and receiver on the internet.
Step 4: At Transport Layer, Email data is broken into smaller segments. It adds
sequence number and error-checking information to maintain the reliability of the
information.
Step 5: At Network Layer, Addressing of packets is done in order to find the best
route for transfer.
Step 6: At Data Link Layer, data packets are encapsulated into frames, then MAC
address is added for local devices and then it checks for error using error
detection.
Step 7: At Physical Layer, Frames are transmitted in the form of electrical/
optical signals over a physical network medium like ethernet cable or WiFi.
After the email reaches the receiver, i.e., Person B, the process is reversed
and the e-mail content is decrypted. Finally, the email is shown in Person B's
email client.
Protocols Used in the OSI Layers
 1 – Physical Layer: Establishes physical connections between devices.
Data unit: Bits. Protocols: USB, SONET/SDH, etc.
 2 – Data Link Layer: Node-to-node delivery of the message.
Data unit: Frames. Protocols: Ethernet, PPP, etc.
 3 – Network Layer: Transmission of data from one host to another, located
in different networks. Data unit: Packets. Protocols: IP, ICMP, IGMP, OSPF, etc.
 4 – Transport Layer: Takes services from the Network Layer and provides
them to the Application Layer. Data unit: Segments (for TCP) or Datagrams (for
UDP). Protocols: TCP, UDP, SCTP, etc.
 5 – Session Layer: Establishes and maintains connections; ensures
authentication and security. Data unit: Data. Protocols: NetBIOS, RPC, PPTP,
etc.
 6 – Presentation Layer: Data from the application layer is extracted and
manipulated into the required format for transmission. Data unit: Data.
Protocols: TLS/SSL, MIME, JPEG, PNG, ASCII, etc.
 7 – Application Layer: Helps in identifying the client and synchronizing
communication. Data unit: Data. Protocols: FTP, SMTP, DNS, DHCP, etc.

Advantages of OSI Model


The OSI Model defines the communication of a computing system into 7 different
layers. Its advantages include:
 It divides network communication into 7 layers which makes it easier to
understand and troubleshoot.
 It standardizes network communications, as each layer has fixed functions
and protocols.
 Diagnosing network problems is easier with the OSI model.
 It is easier to improve with advancements as each layer can get updates
separately.
Disadvantages of OSI Model
 The OSI Model has seven layers, which can be complicated and hard to
understand for beginners.
 In real-life networking, most systems use a simpler model called the
Internet protocol suite (TCP/IP), so the OSI Model is not always directly
applicable.
 Each layer in the OSI Model adds its own set of rules and operations, which
can make the process more time-consuming and less efficient.
 The OSI Model is more of a theoretical framework, meaning it’s great for
understanding concepts but not always practical for implementation.

[ ] Layered Architectures
Every network consists of a specific number of functions, layers, and tasks to
perform. Layered architecture in a computer network is a model in which the
whole network process is divided into smaller subtasks, each assigned to a
specific layer that performs only that dedicated task. To run applications and
provide services to clients, each lower layer offers its services to the layer
above it. Layered architecture therefore defines the interactions between the
subsystems, and a modification made in one layer does not affect the other
layers.

Layered Architecture
As shown in the diagram above, there are five different layers, so it is a
five-layered architecture. Each layer performs a dedicated task. Data from a
lower layer, for example layer 1, is transferred to layer 2. Below all the
layers lies the physical medium, which is responsible for the actual
communication. Layered architecture provides a simple interface for data
transfer and communication.
Features of Layered Architecture
 Layered architecture in a computer network provides modularity and
distinct interfaces.
 Layered architecture ensures independence between layers by offering
services from lower layers to higher layers without specifying how these
services are implemented.
 Layered architecture segments a large, unmanageable design into smaller
subtasks.
 In layered architecture, every network can have a different number of
functions and layers.
 In layered architecture, the physical medium, which lies below layer 1,
provides the actual communication.
 In layered architecture, the implementation of one layer can be modified
without affecting the other layers.
Elements of Layered Architecture
There are three different types of elements of a layered architecture. They are
described below:
 Service: A service is a set of functions and tasks provided by a lower
layer to a higher layer. Each layer performs a different type of task, so the
services provided by each layer differ.
 Protocol: A protocol is a set of rules used by a layer for exchanging and
transmitting data with its peer entities. These rules can include details about
the type of content and the order in which it is passed from one layer to
another.
 Interface: An interface is the channel through which messages are
transmitted from one layer to another.
Significance of Layered Architecture
 Divide and Conquer Approach: Layered architecture supports divide and
conquer approach. The unmanageable and complex task is further divided into
smaller sub tasks. Each sub task is then carried out by the different
layer. Therefore, using this approach reduces the complexity of the problem
or design process.
 Easy to Modify: The layers are independent of each other in layered
architecture. If the implementation of one layer must change, it can be changed
without affecting the working of the other layers involved in the task. This
makes layered architectures easy to update.
 Modularity: Layered architecture is more modular than other architectural
models in computer networks. Modularity provides more independence between the
layers and makes them easier to understand.
 Easy to Test: Each layer in a layered architecture performs a different,
dedicated task. Therefore, each layer can be analyzed and tested individually,
which helps in isolating and solving problems more efficiently than tackling
all problems at once.
 Scalability: As networks grow in size and complexity, additional layers or
protocols may be added to meet new requirements while maintaining existing
functionality.
 Security: The layered technique enables security measures to be
implemented at different levels, protecting the network from a variety of
threats.
 Efficiency: Each layer focuses on a specific aspect of communication,
optimizing resource allocation and performance.
Benefits of Layered Architecture
 Modularity
 Interoperability
 Flexibility
 Reusability
 Scalability
 Security
Challenges in Layered Architecture
 Performance Overhead
 Complexity in Implementation
 Resource Utilization
 Debugging and Troubleshooting
 Protocol Overhead

[ ] Design issues in OSI Layers


A number of design issues exist for the layered approach of computer networks.
Some of the main design issues are as follows:
Reliability
Network channels and components may be unreliable, resulting in loss of bits
while data transfer. So, an important design issue is to make sure that the
information transferred is not distorted.
Scalability
Networks are continuously evolving. The sizes are continually increasing leading
to congestion. Also, when new technologies are applied to the added components,
it may lead to incompatibility issues. Hence, the design should be done so that
the networks are scalable and can accommodate such additions and alterations.


Addressing
At a particular time, innumerable messages are being transferred between large
numbers of computers. So, a naming or addressing system should exist so that each
layer can identify the sender and receivers of each message.
Error Control
Unreliable channels introduce a number of errors in the data streams that are
communicated. So, the layers need to agree upon common error detection and error
correction methods so as to protect data packets while they are transferred.
Flow Control
If the rate at which data is produced by the sender is higher than the rate at
which data is received by the receiver, there are chances of overflowing the
receiver. So, a proper flow control mechanism needs to be implemented.
Resource Allocation
Computer networks provide services in the form of network resources to the end
users. The main design issue is to allocate and deallocate resources to
processes. The allocation/deallocation should occur so that minimal interference
among the hosts occurs and there is optimal usage of the resources.
Statistical Multiplexing
It is not feasible to allocate a dedicated path for each message while it is
being transferred from the source to the destination. So, the data channel needs
to be multiplexed, so as to allocate a fraction of the bandwidth or time to each
host.
Routing
There may be multiple paths from the source to the destination. Routing involves
choosing an optimal path among all possible paths, in terms of cost and time.
There are several routing algorithms that are used in network systems.
Security
A major factor of data communication is to defend it against threats like
eavesdropping and surreptitious alteration of messages. So, there should be
adequate mechanisms to prevent unauthorized access to data through authentication
and cryptography.

[ ] Network Topologies (Bus, Ring, Mesh, etc)


Point to Point Topology
Point-to-point topology is the simplest topology: it directly links two nodes,
one acting as the sender and the other as the receiver. Point-to-point links
provide high bandwidth.

Point to Point Topology


Mesh Topology
In a mesh topology, every device is connected to every other device via a
dedicated channel. These channels are known as links. In Mesh Topology,
protocols such as AHCP (Ad Hoc Configuration Protocol) and DHCP (Dynamic Host
Configuration Protocol) are used.
Mesh Topology
 Suppose N devices are connected to each other in a mesh topology; then
each device requires N-1 ports. In Figure 1, there are 5 devices connected to
each other, hence each device requires 4 ports. The total number of ports
required = N * (N-1).
 Suppose N devices are connected to each other in a mesh topology; then the
total number of dedicated links required to connect them is NC2, i.e.,
N(N-1)/2. In Figure 1, there are 5 devices connected to each other, hence the
total number of links required is 5*4/2 = 10.
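The two formulas above can be checked for the 5-device example:

```python
N = 5
ports_per_device = N - 1        # each device links to every other device
total_ports = N * (N - 1)       # ports summed over all devices
links = N * (N - 1) // 2        # each link is shared by two devices
print(ports_per_device, total_ports, links)   # 4 20 10
```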
Advantages of Mesh Topology
 Communication is very fast between the nodes.
 Mesh Topology is robust.
 The fault is diagnosed easily. Data is reliable because data is transferred
among the devices through dedicated channels or links.
 Provides security and privacy.
Disadvantages of Mesh Topology
 Installation and configuration are difficult.
 The cost of cables is high as bulk wiring is required, hence suitable for
less number of devices.
 The cost of maintenance is high.
A common example of mesh topology is the internet backbone, where various
internet service providers are connected to each other via dedicated channels.
This topology is also used in military communication systems and aircraft
navigation systems.
Star Topology
In Star Topology, all the devices are connected to a single hub through a
cable. This hub is the central node and all other nodes are connected to it.
The hub can be passive, i.e., a non-intelligent broadcasting device, or active,
i.e., an intelligent hub containing repeaters. Coaxial or RJ-45 cables are used
to connect the computers. In Star Topology, popular Ethernet LAN protocols such
as CSMA/CD (Carrier Sense Multiple Access with Collision Detection) are used.

Star Topology
Advantages of Star Topology
 If N devices are connected to each other in a star topology, then the
number of cables required to connect them is N. So, it is easy to set up.
 Each device requires only 1 port, i.e., to connect to the hub, so the
total number of ports required is N.
 It is robust: if one link fails, only that link is affected; the rest of
the network keeps working.
 Fault identification and fault isolation are easy.
 Star topology is cost-effective as it uses inexpensive cable.
Disadvantages of Star Topology
 If the concentrator (hub) on which the whole topology relies fails, the
whole system will crash down.
 The cost of installation is high.
 Performance is based on the single concentrator i.e. hub.
A common example of star topology is a local area network (LAN) in an office
where all computers are connected to a central hub. This topology is also used in
wireless networks where all devices are connected to a wireless access point.
Bus Topology
Bus Topology is a network type in which every computer and network device is
connected to a single cable. It is bi-directional. It is a multi-point connection
and a non-robust topology, because if the backbone fails the whole topology
crashes. In Bus Topology, various MAC (Media Access Control) protocols are used
by LAN Ethernet connections, such as TDMA, Pure Aloha, Slotted Aloha, CSMA/CD, etc.
Bus Topology
Advantages of Bus Topology
 If N devices are connected to each other in a bus topology, then the number
of cables required to connect them is 1, known as backbone cable, and N
drop lines are required.
 Coaxial or twisted pair cables are mainly used in bus-based networks that
support up to 10 Mbps.
 The cost of the cable is less compared to other topologies, but it is used
to build small networks.
 Bus topology is familiar technology as installation and troubleshooting
techniques are well known.
 CSMA is the most common method for this type of topology.
Disadvantages of Bus Topology
 Although a bus topology is simple, it still requires a lot of cabling.
 If the common cable fails, then the whole system will crash down.
 If the network traffic is heavy, it increases collisions in the network. To
avoid this, various protocols are used in the MAC layer known as Pure
Aloha, Slotted Aloha, CSMA/CD, etc.
 Adding new devices to the network slows it down.
 Security is very low.
A common example of bus topology is the Ethernet LAN, where all devices are
connected to a single coaxial cable or twisted pair cable. This topology is also
used in cable television networks.
Ring Topology
In a Ring Topology, devices are connected in a closed loop, with each device
having exactly two neighbors. Repeaters are used in rings with a large number of
nodes: if someone wants to send data to the last node in a ring of 100 nodes,
the data has to pass through 99 nodes to reach the 100th. Repeaters are placed
in the network to prevent data loss along the way.
The data flows in one direction, i.e. it is unidirectional, but it can be made
bidirectional by having two connections between each network node; this is
called Dual Ring Topology. In a ring topology, a token-passing protocol is used
by the workstations to transmit data.
Ring Topology
The most common access method of ring topology is token passing.
 Token passing: It is a network access method in which a token is passed
from one node to another node.
 Token: It is a frame that circulates around the network.
Operations of Ring Topology
 One station is known as a monitor station which takes all the
responsibility for performing the operations.
 To transmit the data, the station has to hold the token. After the
transmission is done, the token is to be released for other stations to
use.
 When no station is transmitting the data, then the token will circulate in
the ring.
 There are two types of token release techniques: Early token
release releases the token just after transmitting the data and Delayed
token release releases the token after the acknowledgment is received from
the receiver.
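The token-passing operation above can be sketched as a toy simulation (station names and messages are made up, and a real Token Ring network, with its monitor station and frame format, is far more involved):

```python
# Toy sketch of token passing on a ring: a station may transmit only while
# it holds the token, then releases it to the next station (early release).
stations = ["A", "B", "C", "D"]          # ring order
pending = {"B": "hello", "D": "world"}   # stations with data to send

log = []
token = 0                                # index of the current token holder
for _ in range(len(stations)):           # one full circulation of the token
    holder = stations[token]
    if holder in pending:                # hold the token, transmit, release
        log.append(f"{holder} sends {pending.pop(holder)!r}")
    token = (token + 1) % len(stations)  # token passes to the next neighbour

print(log)
```

Because only the token holder may transmit, no two stations ever send at once, which is why collisions are minimal in a ring.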
Advantages of Ring Topology
 The data transmission is high-speed.
 The possibility of collision is minimum in this type of topology.
 Cheap to install and expand.
 It is less costly than a star topology.
Disadvantages of Ring Topology
 The failure of a single node in the network can cause the entire network to
fail.
 Troubleshooting is difficult in this topology.
 The addition of stations in between or the removal of stations can disturb
the whole topology.
 Less secure.
Tree Topology
Tree topology is a variation of the Star topology. This topology has a
hierarchical flow of data. In Tree Topology, protocols such as DHCP are used
for automatic host configuration.

Tree Topology
In tree topology, the various secondary hubs are connected to the central hub,
which contains the repeater. Data flows from top to bottom, i.e. from the
central hub to the secondary hubs and then to the devices, or from bottom to
top, i.e. from the devices to a secondary hub and then to the central hub. It
is a multi-point connection and a non-robust topology, because if the backbone
fails the topology crashes.
Advantages of Tree Topology
 It allows more devices to be attached to a single central hub, which
decreases the distance the signal must travel to reach the devices.
 It allows parts of the network to be isolated and prioritized separately.
 We can add new devices to the existing network.
 Error detection and error correction are very easy in a tree topology.
Disadvantages of Tree Topology
 If the central hub fails, the entire system fails.
 The cost is high because of the cabling.
 If new devices are added, it becomes difficult to reconfigure.
A common example of a tree topology is the hierarchy in a large organization. At
the top of the tree is the CEO, who is connected to the different departments or
divisions (child nodes) of the company. Each department has its own hierarchy,
with managers overseeing different teams (grandchild nodes). The team members
(leaf nodes) are at the bottom of the hierarchy, connected to their respective
managers and departments.
Hybrid Topology
Hybrid Topology is the combination of all the various types of topologies we have
studied above. Hybrid Topology is used when the nodes are free to take any form.
It means these can be individuals such as Ring or Star topology or can be a
combination of various types of topologies seen above. Each individual topology
uses the protocol that has been discussed earlier.
Hybrid Topology
The above figure shows the structure of the Hybrid topology. As seen it contains
a combination of all different types of networks.
Advantages of Hybrid Topology
 This topology is very flexible.
 The size of the network can be easily expanded by adding new devices.
Disadvantages of Hybrid Topology
 It is challenging to design the architecture of the Hybrid Network.
 Hubs used in this topology are very expensive.
The infrastructure cost is very high, as a hybrid network requires a lot of
cabling and network devices.
A common example of a hybrid topology is a university campus network. The network
may have a backbone of a star topology, with each building connected to the
backbone through a switch or router. Within each building, there may be a bus or
ring topology connecting the different rooms and offices. The wireless access
points also create a mesh topology for wireless devices. This hybrid topology
allows for efficient communication between different buildings while providing
flexibility and redundancy within each building.

[ ] Interconnecting Devices (Hub, Routers, etc)


Access Point
An access point in networking is a device that allows wireless devices, like
smartphones and laptops, to connect to a wired network. It creates a Wi-Fi
network that lets wireless devices communicate with the internet or other devices
on the network. Access points are used to extend the range of a network or
provide Wi-Fi in areas that do not have it. They are commonly found in homes,
offices, and public places to provide wireless internet access.
Modems
A modem, short for modulator/demodulator, is a network device that converts
digital signals into analog signals of different frequencies and transmits them
to a modem at the receiving location. These converted signals can be carried
over cable systems, telephone lines, and other communication media. A modem
also converts the analog signal back into a digital signal. Modems are
generally used by customers of an Internet Service Provider (ISP) to access
the internet.
Types of Modems
There are four main types of modems:
 DSL Modem: Uses regular phone lines to connect to the internet but it is
slower compared to other types.
 Cable Modem: Sends data through TV cables, providing faster internet
than DSL.
 Wireless Modem: Connects devices to the internet using Wi-Fi relying on
nearby Wi-Fi signals.
 Cellular Modem: Connects to the internet using mobile data from a cellular
network not Wi-Fi or fixed cables.
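What modulation and demodulation mean can be illustrated with a toy amplitude-shift-keying sketch (the carrier frequency, samples-per-bit, amplitudes, and threshold are arbitrary choices for illustration, not how any real modem standard works):

```python
import math

# Toy modulator: map each bit onto an analog sine carrier by choosing its
# amplitude (amplitude-shift keying), sampled a few times per bit.
def modulate(bits, samples_per_bit=4, carrier_freq=1.0):
    signal = []
    for i, bit in enumerate(bits):
        amp = 1.0 if bit else 0.3        # bit value selects the amplitude
        for s in range(samples_per_bit):
            t = (i * samples_per_bit + s) / samples_per_bit
            signal.append(amp * math.sin(2 * math.pi * carrier_freq * t))
    return signal

# Toy demodulator: recover each bit from the energy of its carrier samples.
def demodulate(signal, samples_per_bit=4):
    bits = []
    for i in range(0, len(signal), samples_per_bit):
        chunk = signal[i:i + samples_per_bit]
        energy = sum(x * x for x in chunk)   # high energy => bit was 1
        bits.append(1 if energy > 0.5 else 0)
    return bits

bits = [1, 0, 1, 1]
assert demodulate(modulate(bits)) == bits
```

A real modem layers far more on top (error correction, line negotiation, denser constellations), but the modulate/demodulate round trip is the core idea.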
Firewalls
A firewall is a network security device that monitors and controls the flow of
data between your computer or network and the internet. It acts as a barrier,
blocking unauthorized access while allowing trusted data to pass through.
Firewalls help protect your network from hackers, viruses, and other
online threats by filtering traffic based on security rules. Firewalls can be
physical devices (hardware), programs (software), or even cloud-based services,
which can be offered as SaaS, through public clouds, or private virtual clouds.
Repeater
A repeater operates at the physical layer. Its job is to amplify (i.e.,
regenerate) the signal over the same network before the signal becomes too weak
or corrupted, thereby extending the distance over which the signal can be
transmitted. When the signal becomes weak, a repeater copies it bit by bit and
regenerates it at its original strength. It is a 2-port device.
Hub
A hub is a multi-port repeater. A hub connects multiple wires coming from
different branches, for example, the connector in star topology which connects
different stations. Hubs cannot filter data, so data packets are sent to all
connected devices. In other words, the collision domain of all hosts connected
through Hub remains one. Also, they do not have the intelligence to find out the
best path for data packets which leads to inefficiencies and wastage.
Types of Hub
 Active Hub: These are the hubs that have their power supply and can clean,
boost, and relay the signal along with the network. It serves both as a
repeater as well as a wiring center. These are used to extend the maximum
distance between nodes.
 Passive Hub: These are the hubs that collect wiring from nodes and power
supply from the active hub. These hubs relay signals onto the network
without cleaning and boosting them and can’t be used to extend the distance
between nodes.
 Intelligent Hub: It works like an active hub and includes remote management
capabilities. They also provide flexible data rates to network devices. It
also enables an administrator to monitor the traffic passing through the
hub and to configure each port in the hub.
Bridge
A bridge operates at the data link layer. It is a repeater with the added
functionality of filtering content by reading the MAC addresses of the source
and destination. It is also used for interconnecting two LANs working on the
same protocol. It has a single input and a single output port, making it a
2-port device.
Types of Bridges
 Transparent Bridges: These are bridges in which the stations are
completely unaware of the bridge's existence, i.e. whether a bridge is added
to or removed from the network, reconfiguration of the stations is
unnecessary. These bridges make use of two processes: bridge forwarding and
bridge learning.
 Source Routing Bridges: In these bridges, routing operation is performed by
the source station and the frame specifies which route to follow. The host
can discover the frame by sending a special frame called the discovery
frame, which spreads through the entire network using all possible paths to
the destination.
Switch
A switch is a multiport bridge with a buffer, and its design boosts efficiency
(a large number of ports implies less traffic per port) and performance. A
switch is a data link layer device. The switch can perform error checking
before forwarding data, which makes it very efficient: it does not forward
packets that contain errors, and it forwards good packets selectively to the
correct port only. In other words, the switch divides the collision domain of
hosts, but the broadcast domain remains the same.
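The error checking mentioned above is typically a CRC over the frame. A minimal sketch using CRC-32 (Ethernet really does use CRC-32 as its Frame Check Sequence, though the frame layout here is simplified to payload + FCS only):

```python
import zlib

# Append an Ethernet-style 4-byte CRC-32 frame check sequence (FCS).
def make_frame(payload: bytes) -> bytes:
    fcs = zlib.crc32(payload).to_bytes(4, "big")
    return payload + fcs

# What a switch checks before forwarding: recompute the CRC and compare.
def frame_ok(frame: bytes) -> bool:
    payload, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == fcs

good = make_frame(b"hello switch")
bad = bytes([good[0] ^ 0xFF]) + good[1:]   # simulate corruption in transit
print(frame_ok(good), frame_ok(bad))       # True False
```

A frame whose recomputed CRC does not match its FCS is silently dropped rather than forwarded.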
Types of Switch
 Unmanaged Switches: These switches have a simple plug-and-play design and
do not offer advanced configuration options. They are suitable for small
networks or for use as an expansion to a larger network.
 Managed Switches: These switches offer advanced configuration options such
as VLANs, QoS, and link aggregation. They are suitable for larger, more
complex networks and allow for centralized management.
 Smart Switches: These switches have features similar to managed switches
but are typically easier to set up and manage. They are suitable for small-
to medium-sized networks.
 Layer 2 Switches: These switches operate at the Data Link layer of the OSI
model and are responsible for forwarding data between devices on the same
network segment.
 Layer 3 switches: These switches operate at the Network layer of the OSI
model and can route data between different network segments. They are more
advanced than Layer 2 switches and are often used in larger, more complex
networks.
 PoE Switches: These switches have Power over Ethernet capabilities, which
allows them to supply power to network devices over the same cable that
carries data.
 Gigabit switches: These switches support Gigabit Ethernet speeds, which are
faster than traditional Ethernet speeds.
 Rack-Mounted Switches: These switches are designed to be mounted in a
server rack and are suitable for use in data centers or other large
networks.
 Desktop Switches: These switches are designed for use on a desktop or in a
small office environment and are typically smaller in size than rack-
mounted switches.
 Modular Switches: These switches have modular design, which allows for easy
expansion or customization. They are suitable for large networks and data
centers.
Router
A router is a device like a switch that routes data packets based on their IP
addresses. The router is mainly a Network Layer device. Routers normally connect
LANs and WANs and have a dynamically updating routing table based on which they
make decisions on routing the data packets. The router divides the broadcast
domains of hosts connected through it.
Gateway
A gateway, as the name suggests, is a passage to connect two networks that may
work upon different networking models. They work as messenger agents that take
data from one system, interpret it, and transfer it to another system. Gateways
are also called protocol converters and can operate at any network layer.
Gateways are generally more complex than switches or routers.
Brouter
A brouter, also known as a bridging router, is a device that combines features
of both a bridge and a router. It can work at either the data link layer or the
network layer. Working as a router, it is capable of routing packets across
networks; working as a bridge, it is capable of filtering local area network
traffic.
NIC
A NIC, or network interface card, is a network adapter used to connect a
computer to a network. It is installed in the computer to establish a LAN. It
has a unique ID written on its chip and a connector for attaching the cable,
which acts as the interface between the computer and the router or modem. A NIC
works at both the physical and data link layers of the network model.
[ ] Connection-less v/s Connection-oriented Services

Connection-oriented Service vs Connection-less Service
 Analogy: connection-oriented service is related to the telephone system;
connection-less service is related to the postal system.
 Traffic pattern: connection-oriented service is preferred for long, steady
communication; connection-less service is preferred for bursty communication.
 Connection setup: in connection-oriented service, a connection must be
established before data transfer; in connection-less service, no prior setup
is needed.
 Congestion: in connection-oriented service, congestion is unlikely because
the flow is regulated; in connection-less service, congestion is possible.
 Reliability: connection-oriented service gives a guarantee of reliability;
connection-less service does not.
 Routing: in connection-oriented service, all packets follow the same route;
in connection-less service, packets may follow different routes.
 Bandwidth: connection-oriented services require higher bandwidth;
connection-less services work with lower bandwidth.
 Example: TCP (Transmission Control Protocol) is connection-oriented;
UDP (User Datagram Protocol) is connection-less.
 Authentication: connection-oriented service requires authentication;
connection-less service does not.

[ ] TCP-IP Model and comparison with OSI


The main work of TCP/IP is to transfer the data of a computer from one device to
another. The main condition of this process is to make data reliable and accurate
so that the receiver will receive the same information which is sent by the
sender. To ensure that, each message reaches its final destination accurately,
the TCP/IP model divides its data into packets and combines them at the other
end, which helps in maintaining the accuracy of the data while transferring from
one end to another end. The TCP/IP model is used in the context of the real-world
internet, where a wide range of physical media and network technologies are in
use. Rather than specifying a particular Physical Layer, the TCP/IP model allows
for flexibility in adapting to different physical implementations.
Whenever we send something over the internet using the TCP/IP model, the data
is divided into packets at the sender's end, and the same packets are
recombined at the receiver's end to form the original data; this is how the
accuracy of the data is maintained. The TCP/IP model organizes this process
into four layers: the data passes down through the layers in order at the
sender and back up in reverse order at the receiver.
Layers of TCP/IP Model
 Application Layer
 Transport Layer(TCP/UDP)
 Network/Internet Layer(IP)
 Network Access Layer
The diagrammatic comparison of the TCP/IP and OSI model is as follows:

TCP/IP and OSI


1. Network Access Layer
This is the lowest layer of the TCP/IP model. It combines the functions of the
OSI Physical and Data Link layers and is responsible for placing packets onto
the physical medium at the sender and receiving them at the other end.
The packet's network protocol type, in this case TCP/IP, is identified by the
network access layer. Error prevention and "framing" are also provided by this
layer. Point-to-Point Protocol (PPP) framing and Ethernet IEEE 802.2 framing
are two examples of data-link layer protocols.
2. Internet or Network Layer
This layer parallels the functions of OSI’s Network layer. It defines the
protocols which are responsible for the logical transmission of data over the
entire network. The main protocols residing at this layer are as follows:
 IP: IP stands for Internet Protocol and it is responsible for delivering
packets from the source host to the destination host by looking at the IP
addresses in the packet headers. IP has 2 versions: IPv4 and IPv6. IPv4 is
the one that most websites are using currently. But IPv6 is growing as the
number of IPv4 addresses is limited in number when compared to the number
of users.
 ICMP: ICMP stands for Internet Control Message Protocol. It is encapsulated
within IP datagrams and is responsible for providing hosts with information
about network problems.
 ARP: ARP stands for Address Resolution Protocol. Its job is to find the
hardware address of a host from a known IP address. ARP has several types:
Reverse ARP, Proxy ARP, Gratuitous ARP, and Inverse ARP.
The Internet Layer is a layer in the Internet Protocol (IP) suite, which is the
set of protocols that define the Internet. The Internet Layer is responsible for
routing packets of data from one device to another across a network. It does this
by assigning each device a unique IP address, which is used to identify the
device and determine the route that packets should take to reach it.
Example: Imagine that you are using a computer to send an email to a friend. When
you click “send,” the email is broken down into smaller packets of data, which
are then sent to the Internet Layer for routing. The Internet Layer assigns an IP
address to each packet and uses routing tables to determine the best route for
the packet to take to reach its destination. The packet is then forwarded to the
next hop on its route until it reaches its destination. When all of the packets
have been delivered, your friend’s computer can reassemble them into the original
email message.
In this example, the Internet Layer plays a crucial role in delivering the email
from your computer to your friend’s computer. It uses IP addresses and routing
tables to determine the best route for the packets to take, and it ensures that
the packets are delivered to the correct destination. Without the Internet Layer,
it would not be possible to send data across the Internet.
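The routing-table lookup described above follows the longest-prefix-match rule: of all table entries that contain the destination address, the most specific one wins. A toy sketch with Python's ipaddress module (the table entries and interface names are invented):

```python
import ipaddress

# Hypothetical routing table: network prefix -> outgoing interface.
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"): "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"): "default-gw",
}

def next_hop(dst: str) -> str:
    ip = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if ip in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routing_table[best]

print(next_hop("10.1.2.3"), next_hop("8.8.8.8"))  # eth1 default-gw
```

10.1.2.3 matches all three entries, but the /16 is the most specific, so the packet leaves via eth1; 8.8.8.8 matches only the default route.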
3. Transport Layer
The TCP/IP transport layer protocols exchange acknowledgments of data receipt
and retransmit missing packets to ensure that packets arrive in order and
without error; this is referred to as end-to-end communication. Transmission
Control Protocol (TCP) and User Datagram Protocol (UDP) are the transport
layer protocols at this level.
 TCP: Applications can interact with one another using TCP as though they
were physically connected by a circuit. TCP transmits data in a way that
resembles character-by-character transmission rather than separate packets.
A starting point that establishes the connection, the whole transmission in
byte order, and an ending point that closes the connection make up this
transmission.
 UDP: The datagram delivery service is provided by UDP, the other transport
layer protocol. UDP does not verify connections between receiving and sending
hosts. Applications that transfer small amounts of data use UDP rather than
TCP because it eliminates the steps of establishing and validating
connections.
4. Application Layer
This layer is analogous to the combined Session, Presentation, and Application
layers of the OSI model. It provides network services directly to user
applications and shields them from the complexities of the lower layers. The
three main protocols present in this layer are:
 HTTP and HTTPS: HTTP stands for Hypertext transfer protocol. It is used by
the World Wide Web to manage communications between web browsers and
servers. HTTPS stands for HTTP-Secure. It is a combination of HTTP with
SSL(Secure Socket Layer). It is efficient in cases where the browser needs
to fill out forms, sign in, authenticate, and carry out bank transactions.
 SSH: SSH stands for Secure Shell. It is terminal emulation software
similar to Telnet. SSH is preferred over Telnet because it maintains an
encrypted connection. It sets up a secure session over a TCP/IP connection.
 NTP: NTP stands for Network Time Protocol. It is used to synchronize the
clocks on our computers to one standard time source. It is very useful in
situations like bank transactions. Consider the situation without NTP: you
carry out a transaction, and your computer reads the time as 2:30 PM while
the server records it as 2:28 PM. If the server's clock is out of sync, such
inconsistencies can cause serious errors.
The host-to-host layer is a layer in the TCP/IP model that is responsible for
providing communication between hosts (computers or other devices) on a
network. It is also known as the transport layer.
Some common use cases for the host-to-host layer include:
 Reliable Data Transfer: The host-to-host layer ensures that data is
transferred reliably between hosts by using techniques like error
correction and flow control. For example, if a packet of data is lost
during transmission, the host-to-host layer can request that the packet be
retransmitted to ensure that all data is received correctly.
 Segmentation and Reassembly: The host-to-host layer is responsible for
breaking up large blocks of data into smaller segments that can be
transmitted over the network, and then reassembling the data at the
destination. This allows data to be transmitted more efficiently and helps
to avoid overloading the network.
 Multiplexing and Demultiplexing: The host-to-host layer is responsible for
multiplexing data from multiple sources onto a single network connection,
and then demultiplexing the data at the destination. This allows multiple
devices to share the same network connection and helps to improve the
utilization of the network.
 End-to-End Communication: The host-to-host layer provides a connection-
oriented service that allows hosts to communicate with each other end-to-
end, without the need for intermediate devices to be involved in the
communication.
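Segmentation and reassembly from the list above can be sketched in a few lines: number each segment, let the network deliver them out of order, and sort by sequence number at the destination (a heavy simplification of what TCP actually does, which also involves byte-level sequence numbers, acknowledgments, and retransmission):

```python
# Split a message into (sequence number, chunk) segments of a fixed size.
def segment(data: bytes, size: int):
    return [(seq, data[i:i + size])
            for seq, i in enumerate(range(0, len(data), size))]

# Reassemble by sorting on sequence number, regardless of arrival order.
def reassemble(segments):
    return b"".join(chunk for _, chunk in sorted(segments))

msg = b"large block of application data"
segs = segment(msg, 8)
segs.reverse()                       # pretend the network reordered them
assert reassemble(segs) == msg
```

The sequence numbers are what make reassembly possible even when the network delivers segments out of order.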

Parameters: OSI Model vs TCP/IP Model
 Full form: OSI stands for Open Systems Interconnection; TCP/IP stands for
Transmission Control Protocol/Internet Protocol.
 Layers: OSI has 7 layers; TCP/IP has 4 layers.
 Usage: OSI is mainly used as a reference and teaching model; TCP/IP is the
model the real internet is built on.
 Approach: OSI takes a vertical approach with strict layer boundaries;
TCP/IP takes a horizontal, more flexible approach.
 Delivery guarantee: the OSI model specifies guaranteed delivery at its
transport layer; in TCP/IP, delivery is guaranteed only when TCP is used,
not UDP.
 Replacement: in OSI, tools and protocols at one layer can be replaced
easily; in TCP/IP, replacement is not as easy because the layers are more
tightly coupled.
 Reliability: OSI is a conceptual model and less proven in practice; TCP/IP
is considered more reliable, being the suite in actual use.
 Protocol examples: OSI is not tied to specific protocols, but examples
mapped to its layers include HTTP (Application), SSL/TLS (Presentation), TCP
(Transport), IP (Network), and Ethernet (Data Link); TCP/IP examples include
HTTP, FTP, TCP, UDP, IP, and Ethernet.
 Error handling: in OSI, error handling is built into the Data Link and
Transport layers; in TCP/IP, it is built into protocols like TCP.
 Connection orientation: OSI covers both connection-oriented and
connectionless protocols at the Transport layer; TCP/IP provides TCP
(connection-oriented) and UDP (connectionless).
MODULE 2
[ ] Guided Transmission Media
1. Guided Media
Guided Media is also referred to as Wired or Bounded transmission media. Signals
being transmitted are directed and confined in a narrow pathway by using physical
links.
Features:
 High Speed
 Secure
 Used for comparatively shorter distances
There are 3 major types of Guided Media:

[ ] Co-axial Cable
Coaxial Cable
A coaxial cable has a central copper conductor surrounded by an insulating
layer, a braided metallic shield, and an outer plastic covering made of PVC or
Teflon. The coaxial cable transmits information in two modes: baseband mode
(the entire cable bandwidth is dedicated to a single channel) and broadband
mode (the cable bandwidth is split into separate ranges). Cable TV and analog
television networks widely use coaxial cables.

Advantages of Coaxial Cable


 Coaxial cables have high bandwidth.
 They are easy to install.
 Coaxial cables are reliable and durable.
 They are less affected by noise, cross-talk, and electromagnetic
interference.
 Coaxial cables support multiple channels.
Disadvantages of Coaxial Cable
 Coaxial cables are expensive.
 The coaxial cable must be grounded in order to prevent any crosstalk.
 As a Coaxial cable has multiple layers it is very bulky.
 There is a chance of breaking the coaxial cable and attaching a “t-joint”
by hackers, this compromises the security of the data.

[ ] Twisted-Pair Cable
Twisted Pair Cable
It consists of 2 separately insulated conductor wires wound about each other.
Generally, several such pairs are bundled together in a protective sheath. They
are the most widely used Transmission Media. Twisted Pair is of two types:
 Unshielded Twisted Pair (UTP): UTP consists of two insulated copper wires
twisted around one another. The twisting itself reduces interference, so this
type of cable does not depend on a physical shield. It is used for telephonic
applications.

Unshielded Twisted Pair


Advantages of Unshielded Twisted Pair
 Least expensive
 Easy to install
 High-speed capacity
Disadvantages of Unshielded Twisted Pair
 Lower capacity and performance in comparison to STP
 Short distance transmission due to attenuation

Shielded Twisted Pair


Shielded Twisted Pair (STP): Shielded Twisted Pair (STP) cable consists of a
special jacket (a copper braid covering or a foil shield) to block external
interference. It is used in fast-data-rate Ethernet and in voice and data
channels of telephone lines.
Advantages of Shielded Twisted Pair
 Better performance at a higher data rate in comparison to UTP
 Reduces crosstalk
 Comparatively faster
Disadvantages of Shielded Twisted Pair
 Comparatively difficult to install and manufacture
 More expensive
 Bulky

[ ] Fiber Optics Cable


Optical Fiber Cable
Optical fibre cable guides light by total internal reflection through a core
made of glass or plastic. The core is surrounded by a less dense glass or
plastic covering called the cladding. It is used for the transmission of large
volumes of data. The cable can be unidirectional or bidirectional, and WDM
(Wavelength Division Multiplexing) supports both unidirectional and
bidirectional modes.

Advantages of Optical Fibre Cable


 Increased capacity and bandwidth
 Lightweight
 Less signal attenuation
 Immunity to electromagnetic interference
 Resistance to corrosive materials
Disadvantages of Optical Fibre Cable
 Difficult to install and maintain
 High cost
Applications of Optical Fibre Cable
 Medical Purpose: Used in several types of medical instruments.
 Defence Purpose: Used in transmission of data in aerospace.
 For Communication: This is largely used in formation of internet cables.
 Industrial Purpose: Used for lighting purposes and safety measures in
designing the interior and exterior of automobiles.

[ ] Unguided Transmission Media


Unguided Media
It is also referred to as Wireless or Unbounded transmission media. No physical
medium is required for the transmission of electromagnetic signals.
Features of Unguided Media
 The signal is broadcasted through air
 Less Secure
 Used for larger distances
There are 3 types of Signals transmitted through unguided media:
Radio Waves
Radio waves are easy to generate and can penetrate through buildings. The
sending and receiving antennas need not be aligned. Frequency range: 3 KHz –
1 GHz. AM and FM radios and cordless phones use radio waves for transmission.

Radiowave
Microwaves
Microwave is a line-of-sight transmission, i.e. the sending and receiving
antennas need to be properly aligned with each other. The distance covered by
the signal is directly proportional to the height of the antenna. Frequency
range: 1 GHz – 300 GHz. Microwaves are mainly used for mobile phone
communication and television distribution.

Infrared
Infrared waves are used for very short distance communication. They cannot
penetrate through obstacles, which prevents interference between systems.
Frequency range: 300 GHz – 400 THz. Infrared is used in TV remotes, wireless
mice, keyboards, printers, etc.

Difference Between Radio Waves, Microwaves, and Infrared Waves

Radio waves:
 Omni-directional in nature.
 At low frequencies they can penetrate solid objects and walls; at high frequencies they bounce off obstacles.
 Frequency range: 3 kHz to 1 GHz.
 Offer poor security.
 Attenuation is high.
 Some frequencies require a government licence.
 Setup and usage cost is moderate.
 Used for long-distance communication.

Microwaves:
 Unidirectional in nature.
 At low frequencies they can penetrate solid objects and walls; at high frequencies they cannot.
 Frequency range: 1 GHz to 300 GHz.
 Offer medium security.
 Attenuation is variable.
 Some frequencies require a government licence.
 Setup and usage cost is high.
 Used for long-distance communication.

Infrared waves:
 Unidirectional in nature.
 Cannot penetrate any solid object or wall.
 Frequency range: 300 GHz to 400 THz.
 Offer high security.
 Attenuation is low.
 No government licence is needed.
 Usage cost is very low.
 Not used for long-distance communication.
MODULE 3
[ ] Framing Methods
Framing is a function of the Data Link Layer. It separates a message going from a
source to a destination from all other messages by adding a sender address and a
destination address. The destination address indicates where the packet is to go,
while the sender address helps the recipient acknowledge receipt.
Frames are the data units of the data link layer, transmitted between network
points. A frame carries the complete addressing and control information required
by the protocol. The physical layer merely accepts and transfers a stream of bits
without regard to their meaning or structure, so it is up to the data link layer
to create and recognize frame boundaries.
This can be achieved by attaching special bit patterns to the start and end of
the frame. Since these bit patterns might accidentally occur in the data, special
care must be taken to ensure they are not misinterpreted as frame delimiters.
In a point-to-point connection between two computers, data is transferred over
the wire as a stream of bits; framing divides this stream into discernible blocks
of information.

Methods of Framing :
There are basically four methods of framing as given below –
1. Character Count
2. Flag Byte with Character Stuffing
3. Starting and Ending Flags, with Bit Stuffing
4. Encoding Violations
These are explained as following below.
1. Character Count :
   This method, now rarely used, records the total number of characters in the
   frame in a field in the header. The character count tells the data link layer
   at the receiver how many characters follow, and hence where the frame ends.
The disadvantage of this method is that if the count is corrupted by an error
during transmission, the receiver loses synchronization and may be unable to
locate the beginning of the next frame.
2. Character Stuffing :
   Character stuffing, also known as byte stuffing or character-oriented framing,
   is analogous to bit stuffing but operates on bytes rather than bits. In byte
   stuffing, a special byte known as ESC (escape character), which has a
   predefined pattern, is inserted into the data section of the frame whenever a
   character in the data has the same pattern as the flag byte.
The receiver removes the ESC bytes and keeps the data part. In short, character
stuffing is the addition of one extra byte whenever an ESC or flag byte appears
in the text.
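As a sketch of the idea (the FLAG and ESC values below are illustrative, not taken from any particular protocol), byte stuffing and unstuffing can be written as:

```python
FLAG = 0x7E  # frame delimiter byte (illustrative value)
ESC = 0x7D   # escape character (illustrative value)

def byte_stuff(payload: bytes) -> bytes:
    """Insert an ESC before every FLAG or ESC byte inside the payload."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)  # stuffed byte
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Remove the stuffed ESC bytes, restoring the original payload."""
    out = bytearray()
    escaped = False
    for b in stuffed:
        if escaped or b != ESC:
            out.append(b)
            escaped = False
        else:
            escaped = True
    return bytes(out)
```

For example, byte_stuff(b"\x7eAB") yields b"\x7d\x7eAB", and unstuffing restores the original data.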

3. Bit Stuffing :
   Bit stuffing, also known as bit-oriented framing, inserts extra bits into the
   transmission unit. Whenever the data contains a bit pattern that could be
   mistaken for the frame delimiter, an extra bit is stuffed in so that the
   pattern never appears inside the data; the receiver removes these stuffed
   bits to recover the original message.
It is a form of protocol management that breaks up bit patterns which could
otherwise throw the transmission out of synchronization. Bit stuffing is an
essential part of transmission in many network and communication protocols, and
is also required in USB.
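A minimal sketch of HDLC-style bit stuffing (the flag is 01111110, so a 0 is inserted after any run of five consecutive 1s in the data):

```python
def bit_stuff(bits: str) -> str:
    """Insert a '0' after every run of five consecutive '1's."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')  # stuffed bit
            run = 0
    return ''.join(out)

def bit_unstuff(bits: str) -> str:
    """Drop the '0' that follows every run of five consecutive '1's."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == '1' else 0
        i += 1
        if run == 5:
            i += 1  # skip the stuffed '0'
            run = 0
    return ''.join(out)
```

For example, bit_stuff("0111110") gives "01111100", and unstuffing is the exact inverse.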
4. Physical Layer Coding Violations :
   This method is used only in networks where the encoding on the physical
   medium contains some redundancy, i.e., more than one signal element is used
   to represent each bit of data. Signal patterns that are invalid under the
   encoding (coding violations) can then be used to mark frame boundaries.


[ ] CRC Checksum Error Detection Numerical
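Since this heading is a placeholder for numericals, here is a worked modulo-2 division sketch. With a hypothetical generator 1011 (a 3-bit CRC) and dataword 1001, the remainder works out to 110, so the transmitted codeword is 1001110:

```python
def crc_remainder(data: str, divisor: str) -> str:
    """Modulo-2 (XOR) long division: remainder has len(divisor)-1 bits."""
    n = len(divisor) - 1
    bits = list(data + '0' * n)      # append n zero bits to the dataword
    for i in range(len(data)):
        if bits[i] == '1':           # divisor "goes into" the current window
            for j, d in enumerate(divisor):
                bits[i + j] = '0' if bits[i + j] == d else '1'  # XOR step
    return ''.join(bits[-n:])

def crc_check(frame: str, divisor: str) -> bool:
    """Receiver side: a frame is accepted iff it divides evenly (remainder 0)."""
    n = len(divisor) - 1
    bits = list(frame)
    for i in range(len(frame) - n):
        if bits[i] == '1':
            for j, d in enumerate(divisor):
                bits[i + j] = '0' if bits[i + j] == d else '1'
    return all(b == '0' for b in bits[-n:])

print(crc_remainder("1001", "1011"))  # 110 -> transmit "1001110"
```

A flipped bit in the received frame makes crc_check return False, which is how the error is detected.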

[ ] Sliding Window - Selective Repeat & Go-Back-N


Sliding Window Technique is a method used to efficiently solve problems that
involve defining a window or range in the input data (arrays or strings) and then
moving that window across the data to perform some operation within the window.
This technique is commonly used in algorithms like finding subarrays with a
specific sum, finding the longest substring with unique characters, or solving
problems that require a fixed-size window to process elements efficiently.
Let's take an example to understand this properly: say we have an array of
size N and an integer K, and we must find the maximum sum of a subarray of size
exactly K. How should we approach this problem?
One way is to take each subarray of size K from the array and compute its sum,
using nested loops; this results in O(N^2) time complexity.
But can we optimize this approach?
The answer is yes: instead of summing each K-sized subarray from scratch, take
the first window (indices 0 to K-1) and compute its sum once, then shift the
range one position per iteration, updating the running sum by adding the element
that enters the window and subtracting the one that leaves, as shown in the
image below:
Sliding Window Technique
Now follow this method for each iteration till we reach the end of the array:

Sliding Window Technique


So, instead of recalculating the sum of each K-sized subarray, we reuse the
previous window's result: we shift the window right by moving the left and right
pointers and update the sum. This is optimal because shifting the range takes
O(1) time instead of recalculating the whole sum. This approach of shifting the
pointers and updating the result accordingly is known as the Sliding Window
Technique.

How to use Sliding Window Technique?


There are basically two types of sliding window:
1. Fixed Size Sliding Window:
General steps to solve these problems:
 Find the required window size, say K.
 Compute the result for the 1st window, i.e. the first K elements of the
   data structure.
 Then use a loop to slide the window by 1 and keep computing the result
   window by window.
2. Variable Size Sliding Window:
General steps to solve these problems:
 Increase the right pointer one step at a time as long as the condition
   holds.
 If at any step the condition fails, shrink the window by increasing the
   left pointer.
 When the condition is satisfied again, resume increasing the right pointer
   as in step 1.
 Follow these steps until the end of the array is reached.
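The fixed-size steps above can be sketched as follows (maximum sum of a K-sized subarray in O(N)):

```python
def max_sum_subarray(arr, k):
    """Fixed-size sliding window: compute the first window's sum once,
    then slide right, adding the entering element and dropping the leaving one."""
    if len(arr) < k:
        return None
    window = sum(arr[:k])   # result for the 1st window
    best = window
    for right in range(k, len(arr)):
        window += arr[right] - arr[right - k]  # O(1) shift instead of re-summing
        best = max(best, window)
    return best

print(max_sum_subarray([1, 4, 2, 10, 2, 3, 1, 0, 20], 4))  # 24 (window 3+1+0+20)
```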

Go-Back-N Protocol vs Selective Repeat Protocol

 In Go-Back-N, if a transmitted frame is found suspect (lost or damaged), all
   frames from the lost packet to the last packet transmitted are
   re-transmitted. In Selective Repeat, only the frames found suspect are
   re-transmitted.
 Sender window size is N in both protocols.
 Receiver window size is 1 in Go-Back-N and N in Selective Repeat.
 Go-Back-N is less complex; Selective Repeat is more complex.
 In Go-Back-N, neither the sender nor the receiver needs sorting; in
   Selective Repeat, the receiver needs sorting to sort the frames.
 Acknowledgement is cumulative in Go-Back-N and individual in Selective
   Repeat.
 In Go-Back-N, out-of-order packets are NOT accepted (discarded) and the
   entire window is re-transmitted; in Selective Repeat, out-of-order packets
   are accepted.
 On receiving a corrupt packet, Go-Back-N re-transmits the entire window;
   Selective Repeat immediately sends a negative acknowledgement, so only the
   affected packet is retransmitted.
 The efficiency of both protocols is N/(1 + 2*a), where a = Tp/Tt.

[ ] CSMA/CD
CSMA/CD (Carrier Sense Multiple Access / Collision Detection) is a media access
control method that was widely used in early Ethernet LANs, where stations
shared a bus topology and each node (computer) was connected by coaxial cable.
Nowadays Ethernet is full duplex and the topology is either star (connected via
a switch or router) or point-to-point (direct connection), so CSMA/CD is no
longer needed, though it is still supported.
Consider a scenario where there are 'n' stations on a link, all waiting to
transfer data through that channel. The problem arises when more than one
station transmits at the same moment: the data from the different stations
collide.
CSMA/CD is one such technique where different stations that follow this protocol
agree on some terms and collision detection measures for effective transmission.
This protocol decides which station will transmit when so that data reaches the
destination without corruption.
How Does CSMA/CD Work?
 Step 1: Check if the sender is ready to transmit data packets.
 Step 2: Check if the transmission link is idle.
The sender has to keep on checking if the transmission link/medium is idle.
For this, it continuously senses transmissions from other nodes. The sender
sends dummy data on the link. If it does not receive any collision signal,
this means the link is idle at the moment. If it senses that the carrier is
free and there are no collisions, it sends the data. Otherwise, it refrains
from sending data.
 Step 3: Transmit the data & check for collisions.
The sender transmits its data on the link. CSMA/CD does not use an
‘acknowledgment’ system. It checks for successful and unsuccessful
transmissions through collision signals. During transmission, if a
collision signal is received by the node, transmission is stopped. The
station then transmits a jam signal onto the link and waits for random time
intervals before it resends the frame. After some random time, it again
attempts to transfer the data and repeats the above process.
 Step 4: If no collision was detected in propagation, the sender completes
its frame transmission and resets the counters.
How Does a Station Know if Its Data Collide?

Consider the above situation. Two stations, A & B.


Propagation Time: Tp = 1 hr ( Signal takes 1 hr to go from A to B)
At time t=0, A transmits its data.
t= 30 mins : Collision occurs.
After the collision occurs, a collision signal is generated and sent to both A &
B to inform the stations about the collision. Since the collision happened
midway, the collision signal also takes 30 minutes to reach A & B.
Therefore, t=1 hr: A & B receive collision signals.
This collision signal is received by all the stations on that link.
How to Ensure that it is our Station’s Data that Collided?
For this, Transmission time (Tt) > Propagation Time (Tp) [Rough bound]
This is because we want that before we transmit the last bit of our data from our
station, we should at least be sure that some of the bits have already reached
their destination. This ensures that the link is not busy and collisions will not
occur.
But, above is a loose bound. We have not taken the time taken by the collision
signal to travel back to us. For this consider the worst-case scenario.
Consider the above system again.
At time t=0, A transmits its data.
t= 59:59 mins : Collision occurs
This collision occurs just before the data reaches B. Now the collision signal
takes 59:59 minutes again to reach A. Hence, A receives the collision information
approximately after 2 hours, that is, after 2 * Tp.
Hence, to ensure a tighter bound, to detect the collision completely,
Tt >= 2 * Tp
This is the maximum collision time that a system can take to detect if the
collision was of its own data.
What should be the Minimum length of the Packet to be Transmitted?
Transmission Time = Tt = Length of the packet/ Bandwidth of the link
[Number of bits transmitted by sender per second]
Substituting above, we get,
Length of the packet/ Bandwidth of the link>= 2 * Tp
Length of the packet >= 2 * Tp * Bandwidth of the link
Padding helps in cases where we do not have such long packets. We can pad extra
characters to the end of our data to satisfy the above condition.
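The condition above can be checked numerically. The figures below use the classic 10 Mbps Ethernet parameters as an assumed example:

```python
def min_frame_length_bits(bandwidth_bps: float, prop_delay_s: float) -> float:
    """From Tt >= 2*Tp and Tt = L/B, the minimum frame length is L >= 2 * Tp * B."""
    return 2 * prop_delay_s * bandwidth_bps

# Classic 10 Mbps Ethernet with a worst-case one-way propagation delay of
# 25.6 microseconds gives the familiar 512-bit (64-byte) minimum frame.
L = min_frame_length_bits(10_000_000, 25.6e-6)
print(round(L))  # 512
```

Any frame shorter than this must be padded, as described above.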
Features of Collision Detection in CSMA/CD
 Carrier Sense: Before transmitting data, a device listens to the network to
check if the transmission medium is free. If the medium is busy, the device
waits until it becomes free before transmitting data.
 Multiple Access: In a CSMA/CD network, multiple devices share the same
transmission medium. Each device has equal access to the medium, and any
device can transmit data when the medium is free.
 Collision Detection: If two or more devices transmit data simultaneously, a
collision occurs. When a device detects a collision, it immediately stops
transmitting and sends a jam signal to inform all other devices on the
network of the collision. The devices then wait for a random time before
attempting to transmit again, to reduce the chances of another collision.
 Backoff Algorithm: In CSMA/CD, a backoff algorithm is used to determine
when a device can retransmit data after a collision. The algorithm uses a
random delay before a device retransmits data, to reduce the likelihood of
another collision occurring.
 Minimum Frame Size: CSMA/CD requires a minimum frame size to ensure that
all devices have enough time to detect a collision before the transmission
ends. If a frame is too short, a device may not detect a collision and
continue transmitting, leading to data corruption on the network.
Advantages of CSMA/CD
 Simple and widely used: CSMA/CD is a widely used protocol for Ethernet
networks, and its simplicity makes it easy to implement and use.
 Fairness: In a CSMA/CD network, all devices have equal access to the
transmission medium, which ensures fairness in data transmission.
 Efficiency: CSMA/CD allows for efficient use of the transmission medium by
preventing unnecessary collisions and reducing network congestion.
Disadvantages of CSMA/CD
 Limited Scalability: CSMA/CD has limitations in terms of scalability, and
it may not be suitable for large networks with a high number of devices.
 Vulnerability to Collisions: While CSMA/CD can detect collisions, it cannot
prevent them from occurring. Collisions can lead to data corruption,
retransmission delays, and reduced network performance.
 Inefficient Use of Bandwidth: CSMA/CD uses a random backoff algorithm that
can result in inefficient use of network bandwidth if a device continually
experiences collisions.
 Susceptibility to Security Attacks: CSMA/CD does not provide any security
features, and the protocol is vulnerable to security attacks such as packet
sniffing and spoofing.

[ ] CSMA/CA
The basic idea behind CSMA/CA is that the station should be able to receive while
transmitting to detect a collision from different stations. In wired networks, if
a collision has occurred then the energy of the received signal almost doubles,
and the station can sense the possibility of collision. In the case of wireless
networks, most of the energy is used for transmission, and the energy of the
received signal increases by only 5-10% if a collision occurs. It can’t be used
by the station to sense collision. Therefore CSMA/CA has been specially designed
for wireless networks.
These are three types of strategies:
1. InterFrame Space (IFS): When a station finds the channel busy it senses the
channel again, when the station finds a channel to be idle it waits for a
period of time called IFS time. IFS can also be used to define the priority
of a station or a frame. Higher the IFS lower is the priority.
2. Contention Window: It is the amount of time divided into slots. A station
that is ready to send frames chooses a random number of slots as wait time.
3. Acknowledgments: The positive acknowledgments and time-out timer can help
guarantee a successful transmission of the frame.
Characteristics of CSMA/CA
1. Carrier Sense: The device listens to the channel before transmitting, to
ensure that it is not currently in use by another device.
2. Multiple Access: Multiple devices share the same channel and can transmit
simultaneously.
3. Collision Avoidance: If two or more devices attempt to transmit at the same
time, a collision occurs. CSMA/CA uses random backoff time intervals to
avoid collisions.
4. Acknowledgment (ACK): After successful transmission, the receiving device
sends an ACK to confirm receipt.
5. Fairness: The protocol ensures that all devices have equal access to the
channel and no single device monopolizes it.
6. Binary Exponential Backoff: If a collision occurs, the device waits for a
random period of time before attempting to retransmit. The backoff time
increases exponentially with each retransmission attempt.
7. Interframe Spacing: The protocol requires a minimum amount of time between
transmissions to allow the channel to be clear and reduce the likelihood of
collisions.
8. RTS/CTS Handshake: In some implementations, a Request-To-Send (RTS) and
Clear-To-Send (CTS) handshake is used to reserve the channel before
transmission. This reduces the chance of collisions and increases
efficiency.
9. Wireless Network Quality: The performance of CSMA/CA is greatly influenced
by the quality of the wireless network, such as the strength of the signal,
interference, and network congestion.
10. Adaptive Behavior: CSMA/CA can dynamically adjust its behavior in
response to changes in network conditions, ensuring the efficient use of
the channel and avoiding congestion.
Overall, CSMA/CA balances the need for efficient use of the shared channel with
the need to avoid collisions, leading to reliable and fair communication in a
wireless network.
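The binary exponential backoff mentioned above can be sketched as follows (the cap of 10 doublings mirrors classic Ethernet practice, but the exact constants here are illustrative):

```python
import random

def backoff_slots(collisions: int, max_doublings: int = 10) -> int:
    """After the n-th successive collision, wait a random number of slot
    times chosen uniformly from 0 .. 2**min(n, max_doublings) - 1."""
    k = min(collisions, max_doublings)
    return random.randint(0, 2**k - 1)
```

For example, after the 3rd collision a station waits between 0 and 7 slot times; the range doubles with each further collision, up to the cap.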
Types of CSMA Access Modes
There are 4 types of access modes available in CSMA. It is also referred as 4
different types of CSMA protocols which decide the time to start sending data
across shared media.
1. 1-Persistent: It senses the shared channel first and delivers the data
right away if the channel is idle. If not, it must wait
and continuously track for the channel to become idle and then broadcast
the frame without condition as soon as it does. It is an aggressive
transmission algorithm.
2. Non-Persistent: It first assesses the channel before transmitting data; if
the channel is idle, the node transmits data right away. If not, the
station must wait for an arbitrary amount of time (not continuously), and
when it discovers the channel is empty, it sends the frames.
3. P-Persistent: This combines the 1-Persistent and Non-Persistent modes.
   Each node senses the channel as in 1-Persistent mode, and if the channel
   is idle, it sends a frame with probability p. If the frame is not
   transmitted (probability q = 1 - p), the node waits a random period and
   restarts with the following time slot.
4. O-Persistent: A supervisory node gives each node a transmission order.
Nodes wait for their time slot according to their allocated transmission
sequence when the transmission medium is idle.
Advantages of CSMA
 Increased Efficiency: CSMA ensures that only one device communicates on the
network at a time, reducing collisions and improving network efficiency.
 Simplicity: CSMA is a simple protocol that is easy to implement and does
not require complex hardware or software.
 Flexibility: CSMA is a flexible protocol that can be used in a wide range
of network environments, including wired and wireless networks.
 Low cost: CSMA does not require expensive hardware or software, making it a
cost-effective solution for network communication.
Disadvantages of CSMA
 Limited Scalability: CSMA is not a scalable protocol and can become
inefficient as the number of devices on the network increases.
 Delay: In busy networks, the requirement to sense the medium and wait for
an available channel can result in delays and increased latency.
 Limited Reliability: CSMA can be affected by interference, noise, and other
factors, resulting in unreliable communication.
 Vulnerability to Attacks: CSMA can be vulnerable to certain types of
attacks, such as jamming and denial-of-service attacks, which can disrupt
network communication.

[ ] Channel allocation problem (ALOHA)


Channel allocation is the process of dividing a single channel and allotting it
to multiple users so that each can carry out its own tasks. The number of users
may vary each time the process takes place. If there are N users and the channel
is divided into N equal-sized subchannels, each user is assigned one portion. If
the number of users is small and does not vary over time, Frequency Division
Multiplexing (FDM) can be used, as it is a simple and efficient bandwidth
allocation technique.
The channel allocation problem can be solved by two schemes: Static Channel
Allocation in LANs and MANs, and Dynamic Channel Allocation.

These are explained as following below.


1. Static Channel Allocation in LANs and MANs:
This is the classical or traditional approach of allocating a single channel
among multiple competing users using Frequency Division Multiplexing (FDM). If
there are N users, the frequency band is divided into N equal-sized portions,
each user being assigned one portion. Since each user has a private frequency
band, there is no interference between users.
However, it is not suitable when there is a large number of users with variable
bandwidth requirements; dividing the channel into a fixed number of chunks is
inefficient.
For a single shared channel, the mean delay is:

T = 1/(U*C - L)

Splitting the channel into N static subchannels gives:

T(FDM) = 1/(U*(C/N) - L/N) = N/(U*C - L) = N*T

where

T = mean time delay,
C = capacity of the channel (bps),
L = arrival rate of frames,
1/U = bits per frame,
N = number of subchannels,
T(FDM) = mean delay under FDM.

So static FDM makes the mean delay N times worse than a single shared channel.
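A quick numerical check of the delay formulas above (the input values are arbitrary assumptions):

```python
def mean_delay(U, C, L):
    """Single shared channel: T = 1 / (U*C - L)."""
    return 1.0 / (U * C - L)

def fdm_mean_delay(U, C, L, N):
    """N static subchannels: T(FDM) = 1 / (U*(C/N) - L/N) = N*T."""
    return 1.0 / (U * (C / N) - L / N)

# Example: 1/U = 100 bits/frame, C = 1 Mbps, L = 5000 frames/s, N = 10.
T = mean_delay(1 / 100, 1_000_000, 5000)
T_fdm = fdm_mean_delay(1 / 100, 1_000_000, 5000, 10)
# T_fdm / T comes out as N: splitting the channel multiplies the delay by N.
```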
2. Dynamic Channel Allocation:
In dynamic channel allocation scheme, frequency bands are not permanently
assigned to the users. Instead channels are allotted to users dynamically as
needed, from a central pool. The allocation is done considering a number of
parameters so that transmission interference is minimized.
This allocation scheme optimises bandwidth usage and results in faster
transmissions.
Dynamic channel allocation is further divided into:
1. Centralised Allocation
2. Distributed Allocation
Possible assumptions include:

Station Model:
Each of the N stations independently produces frames. The probability of a
frame being generated in an interval of length Dt is L*Dt, where L is the
constant arrival rate of new frames.

Single Channel Assumption:
A single channel is available for all communication; all stations are
equivalent and can send and receive on that channel.

Collision Assumption:
If two frames overlap in time, that is a collision. Any collision is an error,
and both frames must be retransmitted. Collisions are the only possible errors.

Time can be divided into slots (slotted) or be continuous.

Stations may or may not be able to sense whether the channel is busy before
transmitting.

Protocol assumptions summarized:

 N independent stations.
 A station is blocked until its generated frame is transmitted.
 The probability of a frame being generated in a period of length Dt is
   L*Dt, where L is the arrival rate of frames.
 Only a single channel is available.
 Time can be either continuous or slotted.
 Carrier Sense: a station can sense whether the channel is already busy
   before transmission.
 No Carrier Sense: stations cannot sense the channel; timeouts are used to
   detect lost data.

[ ] ALOHA and slotted ALOHA numerical


ALOHA is a type of random access protocol developed at the University of Hawaii
in the early 1970s. It is a LAN-based protocol in which there is a high chance
of collisions during the transmission of data from a source to a destination.
ALOHA has two variants: Pure ALOHA and Slotted ALOHA.
Pure Aloha
Pure ALOHA is the original ALOHA. Each station sends a frame whenever one is
available; since there is only one channel for communication, this can lead to
collisions.
In Pure ALOHA, the sender transmits the frame and waits for the receiver to
acknowledge it. If no acknowledgment arrives, the sender assumes the frame was
not received and retransmits the frame.

Pure Aloha
For more, refer to Pure Aloha.
Slotted Aloha
Slotted Aloha is simply an advanced version of pure Aloha that helps in improving
the communication network. A station is required to wait for the beginning of the
next slot to transmit. The vulnerable period is halved as opposed to Pure Aloha.
Slotted Aloha reduces the number of collisions by utilizing the channel more
effectively, at the cost of some added delay for the users. In Slotted Aloha,
channel time is divided into discrete time slots.
Slotted Aloha
For more, refer to Slotted Aloha.
Differences Between Pure Aloha and Slotted Aloha
The difference between Pure and Slotted Aloha lies in their approach to handling
data collisions. While Pure Aloha sends data anytime, Slotted Aloha reduces
collisions by organizing time slots.

Pure Aloha:
 Any station can transmit data at any time.
 Time is continuous and not globally synchronized.
 Vulnerable time = 2 x Tt.
 Probability of successful transmission of a data packet = G x e^(-2G).
 Maximum efficiency = 18.4%.
 Does not reduce the number of collisions to half.

Slotted Aloha:
 A station can transmit data only at the beginning of a time slot.
 Time is discrete and globally synchronized.
 Vulnerable time = Tt.
 Probability of successful transmission of a data packet = G x e^(-G).
 Maximum efficiency = 36.8%.
 Reduces the number of collisions to half, doubling the efficiency of
   Pure Aloha.
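The throughput formulas in the comparison above can be evaluated directly:

```python
import math

def pure_aloha_throughput(G: float) -> float:
    """S = G * e^(-2G); maximized at G = 0.5."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G: float) -> float:
    """S = G * e^(-G); maximized at G = 1."""
    return G * math.exp(-G)

print(round(pure_aloha_throughput(0.5), 3))     # 0.184 (18.4%)
print(round(slotted_aloha_throughput(1.0), 3))  # 0.368 (36.8%)
```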
MODULE 4
[ ] IPv4 Header Format
 VERSION: Version of the IP protocol (4 bits), which is 4 for IPv4
 HLEN: IP header length (4 bits), which is the number of 32 bit words in the
header. The minimum value for this field is 5 and the maximum is 15.
 Type of service: Low Delay, High Throughput, Reliability (8 bits)
 Total Length: Length of header + Data (16 bits), which has a minimum value
20 bytes and the maximum is 65,535 bytes.
 Identification: Unique Packet Id for identifying the group of fragments of
a single IP datagram (16 bits)
 Flags: 3 flags of 1 bit each : reserved bit (must be zero), do not fragment
flag, more fragments flag (same order)
 Fragment Offset: Represents the number of data bytes ahead of this
   fragment in the datagram, specified in units of 8 bytes; the maximum
   value is 65,528 bytes.
 Time to live: Datagram's lifetime (8 bits). It prevents the datagram from
   looping through the network by restricting the number of hops a packet can
   take before reaching the destination.
 Protocol: Name of the protocol to which the data is to be passed (8 bits)
 Header Checksum: 16 bits header checksum for checking errors in the
datagram header
 Source IP address: 32 bits IP address of the sender
 Destination IP address: 32 bits IP address of the receiver
 Option: Optional information such as source route, record route. Used by
the Network administrator to check whether a path is working or not.

IPv4 Datagram Header


Due to the presence of options, the size of the datagram header can be of
variable length (20 bytes to 60 bytes).
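The fixed 20-byte part of the header can be unpacked field by field. The sketch below uses Python's struct module; the field values in the sample header are made up:

```python
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Unpack the fixed 20-byte IPv4 header (network byte order)."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack('!BBHHHBBH4s4s', raw[:20])
    return {
        'version': ver_ihl >> 4,               # upper 4 bits
        'hlen_words': ver_ihl & 0x0F,          # header length in 32-bit words
        'total_length': total_len,
        'identification': ident,
        'flags': flags_frag >> 13,             # reserved, DF, MF
        'fragment_offset': (flags_frag & 0x1FFF) * 8,  # stored in 8-byte units
        'ttl': ttl,
        'protocol': proto,                     # e.g. 6 = TCP, 17 = UDP
        'src': '.'.join(map(str, src)),
        'dst': '.'.join(map(str, dst)),
    }

# Build a sample header: version 4, HLEN 5, TTL 64, protocol 6 (TCP)
hdr = struct.pack('!BBHHHBBH4s4s', 0x45, 0, 40, 1, 0, 64, 6, 0,
                  bytes([192, 168, 0, 1]), bytes([10, 0, 0, 1]))
```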
[ ] ARP & RARP
The Address Resolution Protocol (ARP) fetches the receiver's MAC address: it
maps a 32-bit IP address to a 48-bit MAC address. The Reverse Address
Resolution Protocol (RARP) does the opposite: a device's 48-bit MAC address is
mapped to a 32-bit IP address, obtained from a server. This section covers the
difference between the ARP and RARP protocols in detail.

What is ARP?
The Address Resolution Protocol maps a 32-bit IP address to a 48-bit physical
MAC address, the hardware identifier of an interface. This is important in
local area networks, where devices need to know each other's MAC addresses to
communicate at the data link layer.
How Does ARP Work?
 When a device wants to communicate with another device on the local network
but only knows its IP address, it sends out an ARP request. This request is
broadcasted to all devices on the local network.
 The ARP request packet includes the senders IP address, senders MAC
address, and the IP address of the device whose MAC address is being
queried.
 The sender receives the ARP reply and updates its ARP table with the IP to
MAC address mapping, allowing it to send packets directly to the
destination device.
What is RARP?
Reverse Address Resolution Protocol is used to map a MAC address 48-bit to an IP
address 32-bit. This protocol is typically used by devices that know their Media
Access Control address but need to find their IP address.
How Does RARP Work?
 When a device with only its MAC address but not its IP address needs to
find its IP address, it sends out a RARP request. This request is
broadcasted to all devices on the local network.
 The RARP request packet includes the devices MAC address and requests an IP
address in return.
 The device receives the RARP reply and configures itself with the IP
address provided by the RARP server.

Difference between ARP and RARP


Let us see the differences between ARP and RARP, which are as follows:

ARP vs RARP:
 ARP stands for Address Resolution Protocol; RARP stands for Reverse Address
Resolution Protocol.
 ARP is used to map an IP address to a physical (MAC) address; RARP is used
to map a physical (MAC) address to an IP address.
 ARP obtains the MAC address of a network device when only its IP address is
known; RARP obtains the IP address of a network device when only its MAC
address is known.
 ARP works with IP addresses as input; RARP works with MAC addresses as
input.
 In ARP, a broadcast MAC address is used for the request; in RARP, a
broadcast IP address is used.
 The ARP table is managed or maintained by the local host; the RARP table is
managed or maintained by the RARP server.
 In ARP, the receiver's MAC address is fetched; in RARP, the device's own IP
address is fetched.
 ARP is used on the sender's side to map the receiver's MAC address; RARP is
used on the receiver's side to map the sender's IP.

[ ] IPv4 Subnetting Numerical

[ ] Dijkstra's Algorithm

[ ] Link State Routing


Link state routing is the second family of routing protocols. While distance-
vector routers use a distributed algorithm to compute their routing tables, link-
state routing uses link-state routers to exchange messages that allow each router
to learn the entire network topology. Based on this learned topology, each router
is then able to compute its routing table by using the shortest path
computation.
Link state routing is a popular algorithm used in unicast routing to determine
the shortest path in a network.
Link state routing is a technique in which each router shares the knowledge of
its neighborhood with every other router in the internetwork. There are three
keys to understanding the link state routing algorithm:
1. Knowledge about the neighborhood: Instead of sending its entire routing
table, a router sends information about its neighborhood only. A router
broadcasts its identity and the cost of its directly attached links to
other routers.
2. Flooding: Each router sends this information to every other router on the
internetwork through its neighbors. This process is known as flooding.
Every router that receives the packet sends copies to all of its neighbors
(except the one it arrived from). Finally, each and every router receives a
copy of the same information.
3. Information Sharing: A router sends the information to every other router
only when a change occurs in the information.
Link state routing has two phases:
1. Reliable Flooding: Initial state – each node knows the cost of its
neighbors. Final state – each node knows the entire graph.
2. Route Calculation: Each node runs Dijkstra's algorithm on the graph to
calculate the optimal routes to all nodes. For this reason the link state
routing algorithm is also associated with Dijkstra's algorithm, which finds
the shortest path from one node to every other node in the network.
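The route-calculation phase can be sketched with a small Dijkstra implementation; the four-router topology below is a made-up example:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path cost from source to every node (link-state route calculation)."""
    dist = {source: 0}
    pq = [(0, source)]                      # (cost so far, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                        # stale queue entry, skip it
        for v, w in graph[u].items():       # relax each directly attached link
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Hypothetical topology built from flooded link-state packets:
# router -> {neighbor: link cost}
topology = {
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 1},
    "D": {"B": 4, "C": 1},
}
print(dijkstra(topology, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```

Note that A reaches C at cost 3 via B (2 + 1), not via the direct cost-5 link.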
Features of Link State Routing Protocols
 Link State Packet: A small packet that contains routing information.
 Link-State Database: A collection of information gathered from the link-
state packet.
 Shortest Path First Algorithm (Dijkstra's algorithm): A calculation
performed on the database that results in the shortest-path tree.
 Routing Table: A list of known paths and interfaces.
Characteristics of Link State Protocol
 It requires a large amount of memory.
 Shortest path computations require many CPU cycles.
 It uses little bandwidth and reacts quickly to topology changes.
 All items in the database must be sent to neighbors to form link-state
packets.
 All neighbors must be trusted in the topology.
 Authentication mechanisms can be used to avoid undesired adjacencies and
problems.
 No split-horizon techniques are possible in link-state routing.
 OSPF Protocol

[ ] Distance Vector Routing


o The Distance vector algorithm is iterative, asynchronous and distributed.
o Distributed: It is distributed in that each node receives information
from one or more of its directly attached neighbors, performs
calculation and then distributes the result back to its neighbors.
o Iterative: It is iterative in that its process continues until no more
information is available to be exchanged between neighbors.
o Asynchronous: It does not require that all of its nodes operate in
lockstep with each other.
o The Distance vector algorithm is a dynamic algorithm.
o It was mainly used in the ARPANET, and is used in RIP.
o Each router maintains a distance table known as a vector.
Three Keys to understand the working of Distance Vector Routing Algorithm:
o Knowledge about the whole network: Each router shares its knowledge of the
entire network; it sends its collected knowledge about the network to its
neighbors.
o Routing only to neighbors: The router sends its knowledge about the network
only to those routers to which it has direct links, sending whatever it has
about the network through its ports. The receiving router uses this
information to update its own routing table.
o Information sharing at regular intervals: Every 30 seconds, the router
sends the information to the neighboring routers.
Distance Vector Routing Algorithm
Let dx(y) be the cost of the least-cost path from node x to node y. The least
costs are related by the Bellman-Ford equation:
dx(y) = min_v { c(x,v) + dv(y) }
where the minimum is taken over all neighbors v of x. After traveling from x
to v, if we take the least-cost path from v to y, the path cost will be
c(x,v) + dv(y). The least cost from x to y is the minimum of c(x,v) + dv(y)
taken over all neighbors.
With the Distance Vector Routing algorithm, the node x contains the following
routing information:
o For each neighbor v, the cost c(x,v) is the path cost from x to directly
attached neighbor, v.
o The distance vector x, i.e., Dx = [ Dx(y) : y in N ], containing its cost to
all destinations, y, in N.
o The distance vector of each of its neighbors, i.e., Dv = [ Dv(y) : y in N ]
for each neighbor v of x.
Distance vector routing is an asynchronous algorithm in which node x sends a
copy of its distance vector to all its neighbors. When node x receives a new
distance vector from one of its neighbors, v, it saves the distance vector of v
and uses the Bellman-Ford equation to update its own distance vector.
The equation is given below:
dx(y) = min_v { c(x,v) + dv(y) } for each node y in N
The node x has updated its own distance vector table by using the above equation
and sends its updated table to all its neighbors so that they can update their
own distance vectors.
Algorithm
At each node x:

Initialization:
  for all destinations y in N:
    Dx(y) = c(x,y)                  // if y is not a neighbor, then c(x,y) = ∞
  for each neighbor w:
    Dw(y) = ? for all destinations y in N
  for each neighbor w:
    send distance vector Dx = [ Dx(y) : y in N ] to w

loop:
  wait (until I receive a distance vector from some neighbor w)
  for each y in N:
    Dx(y) = min_v { c(x,v) + Dv(y) }
  if Dx(y) changed for any destination y:
    send distance vector Dx = [ Dx(y) : y in N ] to all neighbors
forever
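A minimal synchronous sketch of this update rule in Python (real distance-vector routing exchanges vectors asynchronously; the three-node link costs below are hypothetical):

```python
INF = float("inf")

# Hypothetical link costs c(x, y); INF where two nodes are not neighbors.
cost = {
    "x": {"x": 0, "y": 2, "z": 7},
    "y": {"x": 2, "y": 0, "z": 1},
    "z": {"x": 7, "y": 1, "z": 0},
}
nodes = list(cost)

# Each node starts with its direct link costs as its distance vector Dx.
D = {u: dict(cost[u]) for u in nodes}

changed = True
while changed:                       # iterate until no vector changes
    changed = False
    for x in nodes:
        for y in nodes:
            # Bellman-Ford update: Dx(y) = min over v of c(x,v) + Dv(y)
            best = min(cost[x][v] + D[v][y] for v in nodes)
            if best < D[x][y]:
                D[x][y] = best
                changed = True

print(D["x"])  # {'x': 0, 'y': 2, 'z': 3}  -- x reaches z via y, not the cost-7 link
```
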

[ ] ICMP & IGMP


ICMP:
ICMP is used for reporting errors and management queries. It is a supporting
protocol and is used by network devices like routers for sending error messages
and operations information. For example, the requested service is not available
or a host or router could not be reached.
Since the IP protocol lacks an error-reporting or error-correcting mechanism,
such information is communicated via ICMP messages. For instance, a message
sent to its intended recipient may be dropped somewhere along the route.
Without error reporting, the sender would believe the communication reached its
destination. ICMP lets an intermediate router notify the sender about the
problem: if a message cannot reach its destination, if there is network
congestion, or if packets are lost, ICMP sends back feedback about these
issues. This feedback is essential for diagnosing and fixing network problems,
making sure that communication can be adjusted or rerouted to keep everything
running smoothly.
Uses of ICMP
ICMP is used for error reporting: when two devices communicate over the
internet and an error occurs, a router sends an ICMP error message to the
source informing it about the error. For example, whenever a device sends a
message too large for the receiver, the receiver drops the message and returns
an ICMP message to the source.
Another important use of ICMP is network diagnosis, by means of the traceroute
and ping utilities.
Traceroute: The traceroute utility is used to discover the route between two
devices connected over the internet. It traces the journey from one router to
the next, and a traceroute is often performed to check for network issues
before data transfer.
Ping: Ping is a simpler form of traceroute based on the echo-request message.
It is used to measure the time taken by data to reach the destination and
return to the source; the replies are known as echo-reply messages.
How Does ICMP Work?
ICMP is a primary and important protocol of the IP suite, but it isn't
associated with any transport-layer protocol (TCP or UDP): it is a
connectionless protocol, so it doesn't need to establish a connection with the
destination device before sending a message.
This contrasts with TCP, which is connection-oriented: before any message is
sent over TCP, a connection must be established and both devices must be
ready, via the TCP handshake.
ICMP packets are transmitted in the form of datagrams that contain an IP header
with ICMP data. ICMP datagram is similar to a packet, which is an independent
data entity.
ICMP Packet Format
The ICMP header comes after the IPv4 or IPv6 packet header.

ICMPv4 Packet Format


In the ICMP packet format, the first 32 bits of the packet contain three
fields:
Type (8 bits): The initial 8 bits of the packet give the message type, a brief
description of the message, so the receiving device knows what kind of message
it is receiving and how to respond to it. Some common message types are as
follows:
 Type 0 – Echo reply
 Type 3 – Destination unreachable
 Type 5 – Redirect Message
 Type 8 – Echo Request
 Type 11 – Time Exceeded
 Type 12 – Parameter problem
Code (8 bits): The code is the next 8 bits of the ICMP packet format; this
field carries additional information about the error message and type.
Checksum (16 bits): The last 16 bits of the first word are the checksum field
in the ICMP packet header. The checksum is used to verify the integrity of the
complete message, enabling the receiver to ensure that the data arrived
intact.
The next 32 bits of the ICMP header form the Extended Header, whose job is to
point out the problem in the IP message: a pointer identifies the byte
location that caused the problem, and the receiving device looks there to
locate the issue.
The last part of the ICMP packet is the Data, or payload, of variable length.
In IPv4 the message (including the IP header) is limited to 576 bytes; in
IPv6, to 1280 bytes.
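The checksum field above uses the standard Internet checksum (the one's-complement sum of 16-bit words, per RFC 1071). A minimal sketch, using a made-up echo-request header with the checksum field zeroed:

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum over an ICMP message (checksum field zeroed)."""
    if len(data) % 2:
        data += b"\x00"                              # pad to whole 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]        # add each 16-bit word
        total = (total & 0xFFFF) + (total >> 16)     # fold any carry back in
    return ~total & 0xFFFF                           # one's complement

# Hypothetical echo request: type 8, code 0, checksum 0, identifier 1, sequence 1.
header = bytes([8, 0, 0, 0, 0, 1, 0, 1])
print(hex(internet_checksum(header)))  # 0xf7fd
```

A receiver can validate a message by summing it with the checksum field filled in; a correct message checksums to 0.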
Advantages of ICMP
 Network devices use ICMP to send error messages, and administrators can use
the Ping and Tracert commands to debug the network.
 These alerts are used by administrators to identify issues with network
connectivity.
 A prime example is when a destination or gateway host notifies the source
host via an ICMP message if there is a problem or a change in network
connectivity that needs to be reported. Examples include when a destination
host or networking becomes unavailable, when a packet is lost during
transmission, etc.
 Furthermore, network performance and connection monitoring tools commonly
employ ICMP to identify the existence of issues that the network team has
to resolve.
 One quick and simple method to test connections and find the source is to
use the ICMP protocol, which consists of queries and answers.
Disadvantages of ICMP
 If a router drops a packet, it may be due to an error, but because of the
way IP (the Internet Protocol) is designed, there is no built-in way for
the sender to be notified of this problem.
 Suppose that while a data packet is being transmitted over the internet its
lifetime is over, i.e., the value of the time-to-live field has dropped to
zero. In this case, the data packet is destroyed.
 Although devices frequently need to interact with one another, the Internet
Protocol itself provides no standard method for them to do so. For
instance, a host may need to verify the destination's vital signs, to see
whether it is still operational, before transmitting data.

IGMP:
IGMP is an acronym for Internet Group Management Protocol. IGMP is a
communication protocol used by hosts and adjacent routers for multicasting
communication with IP networks and uses the resources efficiently to transmit the
message/data packets. Multicast communication can have single or multiple senders
and receivers and thus, IGMP can be used in streaming videos, gaming, or web
conferencing tools. This protocol is used on IPv4 networks and for using this on
IPv6, multicasting is managed by Multicast Listener Discovery (MLD).
Like other network protocols, IGMP is used on the network layer. MLDv1 is almost
the same in functioning as IGMPv2 and MLDv2 is almost similar to IGMPv3. The
communication protocol, IGMPv1 was developed in 1989 at Stanford University.
IGMPv1 was updated to IGMPv2 in the year 1997 and again updated to IGMPv3 in the
year 2002. The IGMP protocol is used by the hosts and router to identify the
hosts in a LAN that are the members of a group. IGMP is a part of the IP layer
and IGMP has a fixed-size message. The IGMP message is encapsulated within an IP
datagram.
The IP protocol supports two types of communication:
 Unicasting- It is a communication between one sender and one receiver.
Therefore, we can say that it is one-to-one communication.
 Multicasting: Sometimes the sender wants to send the same message to a
large number of receivers simultaneously. This process is known as
multicasting, and it is one-to-many communication.
Applications:
 Streaming – Multicast routing protocols are used for audio and video
streaming over the network, either one-to-many or many-to-many.
 Gaming – The Internet Group Management Protocol is often used in simulation
games that have multiple users over the network, such as online games.
 Web conferencing tools – Video conferencing is a convenient way to meet
people, and IGMP connects the users for conferencing and transfers the
message/data packets efficiently.
IGMP works on devices that are capable of handling multicast groups and dynamic
multicasting. These devices allow the host to join or leave the membership in the
multicast group. These devices also allow to add and remove clients from the
group. This communication protocol is operated between the host and the local
multicast router. When a multicast group is created, the multicast group address
is in the range of class D (224-239) IP addresses and is forwarded as the
destination IP address in the packet.

L2 (Level-2) devices such as switches are used between the host and the
multicast router for IGMP snooping. IGMP snooping is a process of listening to
the IGMP network traffic in a controlled manner. The switch receives a message
from a host and forwards the membership report to the local multicast router.
The multicast traffic is further forwarded from local multicast routers to
remote routers using PIM (Protocol Independent Multicast), so that clients can
receive the message/data packets. A client wishing to join a group sends a
join message in the query; the switch intercepts the message and adds the
client's port to its multicast routing table.
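The join/leave membership handling can be sketched as a snooping switch's table (a toy model; the group address and port numbers are hypothetical):

```python
# Minimal model of an IGMP-snooping switch's table: multicast group -> member ports.
multicast_table = {}

def igmp_join(group, port):
    """Host on `port` sends a membership report for `group`."""
    multicast_table.setdefault(group, set()).add(port)

def igmp_leave(group, port):
    """Host on `port` leaves `group`; forget the group once it has no members."""
    ports = multicast_table.get(group)
    if ports:
        ports.discard(port)
        if not ports:
            del multicast_table[group]

igmp_join("224.1.1.1", 3)   # 224.1.1.1 is a class D (multicast) group address
igmp_join("224.1.1.1", 5)
igmp_leave("224.1.1.1", 3)
print(multicast_table)       # {'224.1.1.1': {5}}
```
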
Advantages
 IGMP communication protocol efficiently transmits the multicast data to the
receivers and so, no junk packets are transmitted to the host which shows
optimized performance.
 Bandwidth is used efficiently, as the data travels only once over the
shared links.
 Hosts can leave a multicast group and join another.
Disadvantages
 It does not provide good efficiency in filtering and security.
 Due to lack of TCP, network congestion can occur.
 IGMP is vulnerable to some attacks such as DOS attack (Denial-Of-Service).

[ ] Classful & Classless IPv4 Addressing


An IPv4 address is a unique number assigned to every device that connects to the
internet or a computer network. It’s like a home address for your computer,
smartphone, or any other device, allowing it to communicate with other devices.
 Format: An IPv4 address is written as four numbers separated by periods,
like this: 192.168.1.1. Each number can range from 0 to 255.
 The IPv4 address is divided into two parts: Network ID and Host ID.
 Purpose: The main purpose of an IPv4 address is to identify devices on a
network and ensure that data sent from one device reaches the correct
destination.
 Example: When you type a website address into your browser, your device
uses the IPv4 address to find and connect to the server where the website
is hosted.
Think of an IPv4 address as a phone number for your device. Just as you dial a
specific number to reach a particular person, devices use IPv4 addresses to
connect and share information.
There are two notations in which the IP address is written, dotted decimal and
hexadecimal notation.
Dotted Decimal Notation
Some points to be noted about dotted decimal notation:
 The value of any segment (byte) is between 0 and 255 (both included).
 No leading zeroes precede the value in any segment (054 is wrong, 54 is
correct).

Dotted Decimal Notation


Hexadecimal Notation

Need For Classful Addressing


Initially, in the 1980s, the IP address was divided into two fixed parts:
NID (Network ID) = 8 bits and HID (Host ID) = 24 bits. So there were 2^8 = 256
total networks and 2^24 ≈ 16M hosts per network.
With only 256 networks available, even a small organization had to take a
block of 16M host addresses to obtain one network. That is why classful
addressing was needed.
Classful Addressing
The 32-bit IP address is divided into five sub-classes. These are given below:
 Class A
 Class B
 Class C
 Class D
 Class E
Each of these classes has a valid range of IP addresses. Classes D and E are
reserved for multicast and experimental purposes respectively. The order of bits
in the first octet determines the classes of the IP address.
The class of IP address is used to determine the bits used for network ID and
host ID and the number of total networks and hosts possible in that particular
class. Each ISP or network administrator assigns an IP address to each device
that is connected to its network.

Classful Addressing
Note:
 IP addresses are globally managed by Internet Assigned Numbers
Authority(IANA) and Regional Internet Registries(RIR).
 When counting the total number of host IP addresses, 2 IP addresses are
subtracted from the total because the first IP address of any network is
the network number and the last IP address is reserved as the broadcast IP.
Occupation of The Address Space In Classful Addressing
Class A
IP addresses belonging to class A are assigned to the networks that contain a
large number of hosts.
 The network ID is 8 bits long.
 The host ID is 24 bits long.
The higher-order bit of the first octet in class A is always set to 0. The
remaining 7 bits in the first octet are used to determine the network ID. The
24 bits of host ID are used to determine the host in any network. The default
subnet mask for class A is 255.0.0.0. Therefore, class A has a total of:
 2^24 – 2 = 16,777,214 host IDs
IP addresses belonging to class A range from 0.0.0.0 – 127.255.255.255.

Class A
Class B
IP address belonging to class B is assigned to networks that range from medium-
sized to large-sized networks.
 The network ID is 16 bits long.
 The host ID is 16 bits long.
The higher-order bits of the first octet of class B addresses are always set
to 10. The remaining 14 bits are used to determine the network ID. The 16 bits
of host ID are used to determine the host in any network. The default subnet
mask for class B is 255.255.0.0. Class B has a total of:
 2^14 = 16,384 network addresses
 2^16 – 2 = 65,534 host addresses
IP addresses belonging to class B range from 128.0.0.0 – 191.255.255.255.

Class B
Class C
IP addresses belonging to class C are assigned to small-sized networks.
 The network ID is 24 bits long.
 The host ID is 8 bits long.
The higher-order bits of the first octet of class C addresses are always set
to 110. The remaining 21 bits are used to determine the network ID. The 8 bits
of host ID are used to determine the host in any network. The default subnet
mask for class C is 255.255.255.0. Class C has a total of:
 2^21 = 2,097,152 network addresses
 2^8 – 2 = 254 host addresses
IP addresses belonging to class C range from 192.0.0.0 – 223.255.255.255.

Class C
Class D
IP addresses belonging to class D are reserved for multicasting. The
higher-order bits of the first octet of class D addresses are always set to
1110. The remaining bits form the address that interested hosts recognize.
Class D does not possess any subnet mask. IP addresses belonging to class D
range from 224.0.0.0 – 239.255.255.255.

Class D
Class E
IP addresses belonging to class E are reserved for experimental and research
purposes. IP addresses of class E range from 240.0.0.0 – 255.255.255.255. This
class doesn’t have any subnet mask. The higher-order bits of the first octet of
class E are always set to 1111.

Class E
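The leading-bit / first-octet rules above can be turned into a small classifier (a sketch; the sample addresses are arbitrary):

```python
def ipv4_class(address: str) -> str:
    """Classify an IPv4 address by its first octet (classful addressing)."""
    first = int(address.split(".")[0])
    if first <= 127:
        return "A"   # leading bit 0
    if first <= 191:
        return "B"   # leading bits 10
    if first <= 223:
        return "C"   # leading bits 110
    if first <= 239:
        return "D"   # leading bits 1110 (multicast)
    return "E"       # leading bits 1111 (experimental)

print(ipv4_class("10.0.0.1"))      # A
print(ipv4_class("172.16.5.4"))    # B
print(ipv4_class("192.168.1.1"))   # C
print(ipv4_class("224.0.0.5"))     # D
```
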
Range of Special IP Addresses
169.254.0.0 – 169.254.255.255 : Link-local addresses
127.0.0.0 – 127.255.255.255 : Loop-back addresses
0.0.0.0 – 0.255.255.255 : used to communicate within the current network.
Rules for Assigning Host ID
Host IDs are used to identify a host within a network. The host ID is assigned
based on the following rules:
 Within any network, the host ID must be unique to that network.
 A host ID in which all bits are set to 0 cannot be assigned because this
host ID is used to represent the network ID of the IP address.
 Host ID in which all bits are set to 1 cannot be assigned because this host
ID is reserved as a broadcast address to send packets to all the hosts
present on that particular network.
Rules for Assigning Network ID
Hosts that are located on the same physical network are identified by the
network ID, as all hosts on the same physical network are assigned the same
network ID. The network ID is assigned based on the following rules:
 The network ID cannot start with 127 because 127 belongs to the class A
address and is reserved for internal loopback functions.
 All bits of network ID set to 1 are reserved for use as an IP broadcast
address and therefore, cannot be used.
 All bits of network ID set to 0 are used to denote a specific host on the
local network and are not routed and therefore, aren’t used.
Summary of Classful Addressing

Class  Leading bits  NID / HID bits  Networks          Hosts per network  Range
A      0             8 / 24          2^7 = 128         2^24 – 2           0.0.0.0 – 127.255.255.255
B      10            16 / 16         2^14 = 16,384     2^16 – 2           128.0.0.0 – 191.255.255.255
C      110           24 / 8          2^21 = 2,097,152  2^8 – 2            192.0.0.0 – 223.255.255.255
D      1110          –               multicast         –                  224.0.0.0 – 239.255.255.255
E      1111          –               experimental      –                  240.0.0.0 – 255.255.255.255

In the above table, the number of networks for class A should be 127 (the
network ID with all 0s is not counted).
Problems With Classful Addressing
The problem with this classful addressing method is that millions of class A
addresses are wasted, many of the class B addresses are wasted, whereas, the
number of addresses available in class C is so small that it cannot cater to the
needs of organizations. Class D addresses are used for multicast routing and are
therefore available as a single block only. Class E addresses are reserved.
Because of these problems, classful networking was replaced by Classless
Inter-Domain Routing (CIDR) in 1993. Classless addressing is discussed below.
Classful and Classless Addressing
Here are the main differences between classful and classless addressing:

Parameter        Classful Addressing                 Classless Addressing
Basics           IP addresses are allocated          Introduced to replace classful
                 according to the classes A to E.    addressing and to handle the
                                                     rapid exhaustion of IP
                                                     addresses.
Practicality     It is less practical.               It is more practical.
Network ID and   The split between Network ID and    There is no such class-based
Host ID          Host ID depends on the class.       restriction.
VLSM             Does not support Variable Length    Supports Variable Length
                 Subnet Mask (VLSM).                 Subnet Mask (VLSM).
Bandwidth        Requires more bandwidth; as a       Requires less bandwidth; thus
                 result it is slower and more        faster and less expensive than
                 expensive than classless            classful addressing.
                 addressing.
CIDR             Does not support Classless          Supports Classless Inter-Domain
                 Inter-Domain Routing (CIDR).        Routing (CIDR).
Updates          Regular or periodic updates.        Triggered updates.
Troubleshooting  Troubleshooting and problem         Not as easy as in classful
and problem      detection are easier, because of    addressing.
detection        the division of the address into
                 network, host and subnet parts.
Division of      Network, Host, Subnet               Host, Subnet
address

Subnetting
Dividing a large block of addresses into several contiguous sub-blocks and
assigning these sub-blocks to different smaller networks is called subnetting. It
is a practice that is widely used when classless addressing is done.
A subnet or subnetwork is a network inside a network. Subnets make networks more
efficient. Through subnetting, network traffic can travel a shorter distance
without passing through unnecessary routers to reach its destination.
Classless Addressing
To reduce the wastage of IP addresses in a block, we use subnetting: host-ID
bits are used as net-ID bits of a classful IP address. The IP address is
written together with the number of mask bits, usually after a '/' symbol,
e.g., 192.168.1.1/28. The subnet mask is found by setting that many of the 32
bits to 1: here, 28 of the 32 bits are set to 1 and the rest to 0, so the
subnet mask is 255.255.255.240. Classless Inter-Domain Routing (CIDR, also
called supernetting) is an improved IP addressing system in which blocks of IP
addresses are assigned dynamically, based on prefix length rather than class;
supernetting, for example, combines two or more class C networks to create a
/23 or /22 supernet.
Some Values Calculated in Subnetting:
1. Number of subnets: 2^(given mask bits – number of bits in the default mask)
2. Subnet address: AND of the subnet mask and the given IP address
3. Broadcast address: put all host bits to 1, retaining the network bits as in
the IP address
4. Number of hosts per subnet: 2^(32 – given mask bits) – 2
5. First host ID: subnet address + 1 (adding one to the binary representation
of the subnet address)
6. Last host ID: subnet address + number of hosts (i.e., broadcast address – 1)
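These values can be checked with Python's ipaddress module; the /28 block below is an example address, not taken from any question:

```python
import ipaddress

net = ipaddress.ip_network("192.168.1.0/28")

print(net.netmask)             # 255.255.255.240 (28 one-bits)
print(net.num_addresses - 2)   # 14 usable hosts: 2^(32-28) - 2
print(net.network_address)     # 192.168.1.0  (subnet address: mask AND address)
print(net.broadcast_address)   # 192.168.1.15 (all host bits set to 1)
hosts = list(net.hosts())
print(hosts[0], hosts[-1])     # 192.168.1.1 192.168.1.14 (first and last host IDs)
```
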

[ ] Open-loop & closed-loop congestion methods


Congestion control refers to the techniques used to control or prevent
congestion. Congestion control techniques can be broadly classified into two
categories:

Open Loop Congestion Control


Open loop congestion control policies are applied to prevent congestion before it
happens. The congestion control is handled either by the source or the
destination.
Policies adopted by open loop congestion control –

1. Retransmission Policy:
This policy governs how retransmission of packets is handled. If the sender
believes that a sent packet is lost or corrupted, the packet needs to be
retransmitted, and this retransmission may increase congestion in the
network. To prevent congestion, retransmission timers must be designed both
to prevent congestion and to optimize efficiency.
2. Window Policy:
The type of window at the sender's side may also affect congestion. In a
Go-Back-N window, several packets are re-sent even though some of them may
have been received successfully at the receiver side; this duplication may
increase congestion in the network and make it worse. Therefore, the
Selective Repeat window should be adopted, as it resends only the specific
packet that may have been lost.
3. Discarding Policy:
A good discarding policy allows routers to prevent congestion by partially
discarding corrupted or less-sensitive packets while still maintaining the
quality of the message. In the case of audio file transmission, for example,
routers can discard less-sensitive packets to prevent congestion while
maintaining the quality of the audio file.
4. Acknowledgment Policy:
Since acknowledgments are also part of the load on the network, the
acknowledgment policy imposed by the receiver may also affect congestion.
Several approaches can be used to prevent acknowledgment-related congestion:
the receiver can send a single acknowledgment for N packets rather than one
per packet, or send an acknowledgment only when it has a packet to send or a
timer expires.
5. Admission Policy:
An admission mechanism can also be used to prevent congestion. Switches in a
flow should first check the resource requirements of a network flow before
transmitting it further. If there is congestion, or a risk of congestion, in
the network, the router should refuse to establish the virtual-circuit
connection, preventing further congestion.
All the above policies are adopted to prevent congestion before it happens in the
network.
Closed Loop Congestion Control
Closed loop congestion control techniques are used to treat or alleviate
congestion after it happens. Several techniques are used by different protocols;
some of them are:

1. Backpressure:
Backpressure is a technique in which a congested node stops receiving packets
from its upstream node. This may cause the upstream node or nodes to become
congested and in turn reject data from the nodes above them. Backpressure is
a node-to-node congestion control technique that propagates in the direction
opposite to the data flow. It can be applied only to virtual circuits, where
each node has information about its upstream node.
In the diagram above, the 3rd node is congested and stops receiving packets;
as a result, the 2nd node may become congested due to the slowing of its
output data flow. Similarly, the 1st node may become congested and inform the
source to slow down.

2. Choke Packet Technique :
The choke packet technique is applicable to both virtual-circuit networks and
datagram subnets. A choke packet is a packet sent by a node directly to the
source to inform it of congestion. Each router monitors its resources and the
utilization of each of its output lines; whenever the utilization exceeds a
threshold set by the administrator, the router sends a choke packet straight to
the source as feedback to reduce its traffic. The intermediate nodes through
which the packet has traveled are not warned about the congestion.

3. Implicit Signaling :
In implicit signaling, there is no communication between the congested nodes
and the source; the source infers that the network is congested. For example,
when a sender transmits several packets and receives no acknowledgment for a
while, it can assume there is congestion.

4. Explicit Signaling :
In explicit signaling, a node that experiences congestion explicitly sends a
signal to the source or destination to inform it about the congestion. The
difference from the choke packet technique is that the signal is included in
packets that already carry data, rather than in a separate packet.
Explicit signaling can occur in either the forward or backward direction.
 Forward Signaling : a signal is sent in the direction of the data flow,
warning the destination about congestion; the receiver then adopts policies
to prevent further congestion.
 Backward Signaling : a signal is sent in the direction opposite to the data
flow, warning the source that it needs to slow down.

S.No. | Open-Loop Control System | Closed-Loop Control System
1. | It is easier to build. | It is difficult to build.
2. | It can perform better if the calibration is properly done. | It can perform better because of the feedback.
3. | It is more stable. | It is comparatively less stable.
4. | Optimization for the desired output cannot be performed. | Optimization can be done very easily.
5. | It does not consist of a feedback mechanism. | A feedback mechanism is present.
6. | It requires less maintenance. | Maintenance is difficult.
7. | It is less reliable. | It is more reliable.
8. | It is comparatively slower. | It is faster.
9. | It can be easily installed and is economical. | Complicated installation is required and it is expensive.

[ ] Token Bucket & Leaky Bucket Algorithm


In the network layer, before the network can make Quality of service guarantees,
it must know what traffic is being guaranteed. One of the main causes of
congestion is that traffic is often bursty.
To understand this concept, we first need to know a little about traffic
shaping. Traffic shaping is a mechanism to control the amount and the rate of
traffic sent into the network; it is an approach to congestion management that
regulates the rate of data transmission and reduces congestion.
There are 2 types of traffic shaping algorithms:
1. Leaky Bucket
2. Token Bucket
Suppose we have a bucket in which we are pouring water, at random points in time,
but we have to get water at a fixed rate, to achieve this we will make a hole at
the bottom of the bucket. This will ensure that the water coming out is at some
fixed rate, and also if the bucket gets full, then we will stop pouring water
into it.
The input rate can vary, but the output rate remains constant. Similarly, in
networking, a technique called leaky bucket can smooth out bursty traffic. Bursty
chunks are stored in the bucket and sent out at an average rate.
Suppose the network has committed a bandwidth of 3 Mbps for a host. The leaky
bucket shapes the input traffic to conform to this commitment. Say the host
sends a burst of data at 12 Mbps for 2 s (24 Mbits in total), is silent for
5 s, and then sends at 2 Mbps for 3 s (6 Mbits in total). In all, the host has
sent 30 Mbits in 10 s. The leaky bucket smooths out the traffic by sending the
data at 3 Mbps throughout the same 10 s.
Without the leaky bucket, the beginning burst may have hurt the network by
consuming more bandwidth than is set aside for this host. We can also see that
the leaky bucket may prevent congestion.
A simple leaky bucket algorithm can be implemented using FIFO queue. A FIFO queue
holds the packets. If the traffic consists of fixed-size packets (e.g., cells in
ATM networks), the process removes a fixed number of packets from the queue at
each tick of the clock. If the traffic consists of variable-length packets, the
fixed output rate must be based on the number of bytes or bits.
The following is an algorithm for variable-length packets:
1. Initialize a counter to n at the tick of the clock.
2. Repeat until n is smaller than the packet size of the packet at the head of
the queue.
1. Pop a packet out of the head of the queue, say P.
2. Send the packet P, into the network
3. Decrement the counter by the size of packet P.
3. Reset the counter and go to step 1.
Note: In the below examples, the head of the queue is the rightmost position and
the tail of the queue is the leftmost position.
Example: Let n = 1000, and suppose the queue holds packets of sizes 200, 400,
and 450, with the 200-byte packet at the head.
Since n > size of the packet at the head of the Queue, i.e. n > 200
Therefore, n = 1000-200 = 800
Packet size of 200 is sent into the network.

Now, again n > size of the packet at the head of the Queue, i.e. n > 400
Therefore, n = 800-400 = 400
Packet size of 400 is sent into the network.

Since, n < size of the packet at the head of the Queue, i.e. n < 450
Therefore, the procedure is stopped.
Initialise n = 1000 on another tick of the clock.
This procedure is repeated until all the packets are sent into the network.
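The tick procedure above can be sketched in Python (an illustrative helper, not from the original text; for simplicity the head of the queue is the leftmost element here):

```python
from collections import deque

def leaky_bucket_tick(queue: deque, n: int) -> list:
    """One clock tick of the leaky bucket for variable-length packets.

    queue holds packet sizes in bytes (head = leftmost in this sketch);
    n is the number of bytes the bucket may release per tick.
    Returns the sizes of the packets sent during this tick.
    """
    sent = []
    counter = n                        # step 1: initialize the counter to n
    while queue and counter >= queue[0]:
        size = queue.popleft()         # step 2.1: pop the packet at the head
        sent.append(size)              # step 2.2: send it into the network
        counter -= size                # step 2.3: decrement by the packet size
    return sent                        # remaining packets wait for the next tick

queue = deque([200, 400, 450])
print(leaky_bucket_tick(queue, 1000))  # the 200- and 400-byte packets are sent
print(list(queue))                     # the 450-byte packet waits for the next tick
```

On the next tick the counter is reset to 1000 and the 450-byte packet is sent, matching the worked example above.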

Leaky Bucket | Token Bucket
When the host has to send a packet, the packet is thrown into the bucket. | The bucket holds tokens generated at regular intervals of time.
The bucket leaks at a constant rate. | The bucket has a maximum capacity.
Bursty traffic is converted into uniform traffic by the leaky bucket. | If there is a ready packet, a token is removed from the bucket and the packet is sent.
In practice the bucket is a finite queue that outputs at a finite rate. | If there is no token in the bucket, the packet cannot be sent.

Some advantages of Token Bucket over Leaky Bucket:
 If the bucket is full in Token Bucket, tokens are discarded, not packets;
in Leaky Bucket, packets are discarded.
 Token Bucket can send large bursts at a faster rate, while Leaky Bucket
always sends packets at a constant rate.
 Predictable Traffic Shaping: Token Bucket offers more predictable traffic
shaping compared to leaky bucket. With token bucket, the network
administrator can set the rate at which tokens are added to the bucket, and
the maximum number of tokens that the bucket can hold. This allows for
better control over the network traffic and can help prevent congestion.
 Better Quality of Service (QoS): Token Bucket provides better QoS compared
to leaky bucket. This is because token bucket can prioritize certain types
of traffic by assigning different token arrival rates to different classes
of packets. This ensures that important packets are sent first, while less
important packets are sent later, helping to ensure that the network runs
smoothly.
 More efficient use of network bandwidth: Token Bucket allows for more
efficient use of network bandwidth as it allows for larger bursts of data
to be sent at once. This can be useful for applications that require high-
speed data transfer or for streaming video content.
 More granular control: Token Bucket provides more granular control over
network traffic compared to leaky bucket. This is because it allows the
network administrator to set the token arrival rate and the maximum token
count, which can be adjusted according to the specific needs of the
network.
 Easier to implement: Token Bucket is generally considered easier to
implement compared to leaky bucket. This is because token bucket only
requires the addition and removal of tokens from a bucket, while leaky
bucket requires the use of timers and counters to determine when to release
packets.
Some disadvantages of Token Bucket over Leaky Bucket:
 Tokens may be wasted: In Token Bucket, tokens are generated at a fixed
rate, even if there is no traffic on the network. This means that if no
packets are sent, tokens will accumulate in the bucket, which could result
in wasted resources. In contrast, with leaky bucket, the network only
generates packets when there is traffic, which helps to conserve resources.
 Delay in packet delivery: Token Bucket may introduce delay in packet
delivery due to the accumulation of tokens. If the token bucket is empty,
packets may need to wait for the arrival of new tokens, which can lead to
increased latency and packet loss.
 Lack of flexibility: Token Bucket is less flexible compared to leaky bucket
in terms of shaping network traffic. This is because the token generation
rate is fixed, and cannot be changed easily to meet the changing needs of
the network. In contrast, leaky bucket can be adjusted more easily to adapt
to changes in network traffic.
 Complexity: Token Bucket can be more complex to implement compared to leaky
bucket, especially when different token generation rates are used for
different types of traffic. This can make it more difficult for network
administrators to configure and manage the network.
 Inefficient use of bandwidth: In some cases, Token Bucket may lead to
inefficient use of bandwidth. This is because Token Bucket allows for large
bursts of data to be sent at once, which can cause congestion and lead to
packet loss. In contrast, leaky bucket helps to prevent congestion by
limiting the amount of data that can be sent at any given time.
Working of Token Bucket Algorithm
It allows bursty traffic at a regulated maximum rate and lets idle hosts
accumulate credit for the future in the form of tokens. The system removes one
token for every cell of data sent, and on each tick of the clock it adds n
tokens to the bucket. If n is 100 and the host is idle for 100 ticks, the
bucket collects 10,000 tokens; the host can then consume these tokens in a
burst, sending up to 10,000 cells at once.
The token bucket can be implemented with a simple counter, initialized to
zero. Each time a token is added, the counter is incremented by 1; each time a
unit of data is sent, the counter is decremented by 1. When the counter is
zero, the host cannot send data.

Process depicting how token bucket algorithm works


Steps Involved in Token Bucket Algorithm
Step 1: Creation of the Bucket: An imaginary bucket is assigned a fixed
capacity, known as the "rate limit". It can hold up to a certain number of
tokens.
Step 2: Refill the Bucket: The bucket is dynamic; it gets periodically filled
with tokens. Tokens are added to the bucket at a fixed rate.
Step 3: Incoming Requests: Upon receiving a request, we verify the presence of
tokens in the bucket.
Step 4: Consume Tokens: If there are tokens in the bucket, we pick one token from
it. This means the request is allowed to proceed. The time of token consumption
is also recorded.
Step 5: Empty Bucket: If the bucket is depleted, meaning there are no tokens
remaining, the request is denied. This precautionary measure prevents server or
system overload, ensuring operation stays within predefined limits.
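The steps above can be sketched as a counter-based rate limiter in Python (a sketch; the class and method names are illustrative, and it refills continuously by elapsed time rather than on discrete clock ticks):

```python
import time

class TokenBucket:
    """Token bucket sketch: tokens accrue at `rate` per second up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate                # step 2: tokens added at a fixed rate
        self.capacity = capacity        # step 1: fixed capacity ("rate limit")
        self.tokens = capacity          # start full so an initial burst is allowed
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill: add tokens for the elapsed time, capped at the capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost         # step 4: consume a token, allow the request
            return True
        return False                    # step 5: empty bucket, request denied

bucket = TokenBucket(rate=5, capacity=3)
burst = [bucket.allow() for _ in range(4)]  # 4 back-to-back requests, 3 tokens
print(burst)                                # first 3 pass, the 4th is denied
```

Because the bucket starts full, a burst up to `capacity` is admitted immediately; afterwards requests are limited to `rate` per second on average.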

[ ] IPv6 header format + comparison with IPv4


IPv6 Header Representation
The IPv6 header is the first part of an IPv6 packet, containing essential
information for routing and delivering the packet across networks. The IPv6
header representation is a structured layout of fields in an IPv6 packet,
including source and destination addresses, traffic class, flow label, payload
length, next header, and hop limit. It ensures proper routing and delivery of
data across networks.
IPv6 introduces a more simplified header structure compared to IPv4, and its
ability to handle a larger address space is essential for the future of
networking.

IPv6 Fixed Header


The IPv6 fixed header is always 40 bytes long and carries the addressing and
handling information — where the packet should go and how it should get
there — that devices need to route and deliver it across the network.
Version (4-bits)
A 4-bit field indicating the version of the Internet Protocol, which is always
6 for IPv6, so the bit sequence is 0110.
Traffic Class(8-bit)
The Traffic Class field indicates class or priority of IPv6 packet which is
similar to Service Field in IPv4 packet. It helps routers to handle the traffic
based on the priority of the packet. If congestion occurs on the router then
packets with the least priority will be discarded.
As of now, only 4-bits are being used (and the remaining bits are under
research), in which 0 to 7 are assigned to Congestion controlled traffic and 8 to
15 are assigned to Uncontrolled traffic.
Uncontrolled data traffic is mainly used for Audio/Video data. So we give higher
priority to Uncontrolled data traffic.
The source node is allowed to set the priorities but on the way, routers can
change it. Therefore, the destination should not expect the same priority which
was set by the source node.
Flow Label (20-bits)
Flow Label field is used by a source to label the packets belonging to the same
flow in order to request special handling by intermediate IPv6 routers, such as
non-default quality-of-service or real-time service. In order to distinguish the
flow, an intermediate router can use the source address, a destination address,
and flow label of the packets. Between a source and destination, multiple flows
may exist because many processes might be running at the same time. Hosts or
routers that do not support the flow label functionality, or that want default
router handling, set this field to 0. While setting up a flow label, the
source is also supposed to specify the lifetime of the flow.
Payload Length (16-bits)
It is a 16-bit (unsigned integer) field, indicates the total size of
the payload which tells routers about the amount of information a particular
packet contains in its payload. The payload Length field includes extension
headers(if any) and an upper-layer packet. In case the length of the payload is
greater than 65,535 bytes (payload up to 65,535 bytes can be indicated with 16-
bits), then the payload length field will be set to 0 and the jumbo payload
option is used in the Hop-by-Hop options extension header.
Next Header (8-bits)
Next Header indicates the type of extension header (if present) immediately
following the IPv6 header. In other cases it indicates the protocol contained
in the upper-layer packet, such as TCP or UDP.
Hop Limit (8-bits)
Hop Limit field is the same as TTL in IPv4 packets. It indicates the maximum
number of intermediate nodes IPv6 packet is allowed to travel. Its value gets
decremented by one, by each node that forwards the packet and the packet is
discarded if the value decrements to 0. This is used to discard the packets that
are stuck in an infinite loop because of some routing error.
Source Address (128-bits)
Source Address is the 128-bit IPv6 address of the original source of the packet.
Destination Address (128-bits)
The destination Address field indicates the IPv6 address of the final
destination(in most cases). All the intermediate nodes can use this information
in order to correctly route the packet.
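As a sketch, the fixed-header layout described above can be unpacked with Python's standard struct module (the function and dictionary keys are illustrative, not from the original text):

```python
import struct

def parse_ipv6_header(packet: bytes) -> dict:
    """Parse the 40-byte IPv6 fixed header (illustrative sketch)."""
    if len(packet) < 40:
        raise ValueError("the IPv6 fixed header is always 40 bytes")
    # First 8 bytes: version/traffic class/flow label (4 bytes together),
    # then payload length (2 bytes), next header (1), hop limit (1)
    vtf, payload_len, next_header, hop_limit = struct.unpack("!IHBB", packet[:8])
    return {
        "version": vtf >> 28,                 # top 4 bits: always 6
        "traffic_class": (vtf >> 20) & 0xFF,  # next 8 bits: priority
        "flow_label": vtf & 0xFFFFF,          # low 20 bits
        "payload_length": payload_len,        # bytes following the fixed header
        "next_header": next_header,           # e.g. 6 = TCP, 17 = UDP
        "hop_limit": hop_limit,               # decremented by each forwarding node
        "source": packet[8:24],               # 128-bit source address
        "destination": packet[24:40],         # 128-bit destination address
    }
```

For example, a header built with version 6, flow label 1, payload length 20, next header 6 (TCP), and hop limit 64 parses back into exactly those field values.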

IPv4 | IPv6
IPv4 has a 32-bit address length. | IPv6 has a 128-bit address length.
It supports manual and DHCP address configuration. | It supports auto-configuration and renumbering of addresses.
End-to-end connection integrity is unachievable. | End-to-end connection integrity is achievable.
It can generate 4.29×10^9 addresses. | The address space is much larger: it can produce 3.4×10^38 addresses.
The security feature depends on the application. | IPSec is an inbuilt security feature of the IPv6 protocol.
Addresses are represented in decimal. | Addresses are represented in hexadecimal.
Fragmentation is performed by the sender and forwarding routers. | Fragmentation is performed only by the sender.
Packet flow identification is not available. | Flow identification is available via the Flow Label field in the header.
A checksum field is available. | There is no checksum field.
It has a broadcast message transmission scheme. | Multicast and anycast message transmission schemes are available.
Encryption and authentication facilities are not provided. | Encryption and authentication are provided.
The header is 20-60 bytes. | The header is a fixed 40 bytes.
IPv4 can be converted to IPv6. | Not all IPv6 addresses can be converted to IPv4.
An address consists of 4 fields separated by dots (.). | An address consists of 8 fields separated by colons (:).
Addresses are divided into five classes: A, B, C, D, E. | There are no classes of IP address.
IPv4 supports VLSM (Variable Length Subnet Mask). | IPv6 does not support VLSM.
Example: 66.94.29.13 | Example: 2001:0000:3238:DFE1:0063:0000:0000:FEFB
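The two address representations above can be checked with Python's standard ipaddress module; for IPv6, the compressed form drops leading zeros and collapses the longest run of zero fields into "::":

```python
import ipaddress

# The example addresses from the comparison above
v4 = ipaddress.ip_address("66.94.29.13")
v6 = ipaddress.ip_address("2001:0000:3238:DFE1:0063:0000:0000:FEFB")

print(v4.version, v6.version)   # protocol versions: 4 and 6
print(v6.compressed)            # canonical shortened form with "::"
print(v6.exploded)              # full 8-field, 4-hex-digit form
```

Note how the two consecutive zero fields (positions 6 and 7) collapse to "::" while the single zero field at position 2 stays as "0".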
MODULE 5
[ ] TCP v/s UDP

Type of Service: TCP is a connection-oriented protocol — the communicating devices establish a connection before transmitting data and close it after the transfer. UDP is a datagram-oriented protocol — there is no overhead for opening, maintaining, or terminating a connection, which makes UDP efficient for broadcast and multicast transmission.
Reliability: TCP is reliable, as it guarantees delivery of data to the destination. UDP cannot guarantee delivery of data to the destination.
Error checking: TCP provides extensive error-checking mechanisms through flow control and acknowledgment of data. UDP has only a basic checksum-based error check.
Acknowledgment: TCP uses acknowledgment segments; UDP has no acknowledgment segment.
Sequencing: Sequencing of data is a feature of TCP, so packets arrive in order at the receiver. UDP has no sequencing; if ordering is required, it must be managed by the application layer.
Speed: TCP is comparatively slower; UDP is faster, simpler, and more efficient.
Retransmission: Retransmission of lost packets is possible in TCP, but not in UDP.
Header length: TCP has a variable-length header of 20-60 bytes; UDP has a fixed 8-byte header.
Weight: TCP is heavy-weight; UDP is lightweight.
Handshaking: TCP uses handshakes such as SYN, SYN-ACK, and ACK; UDP is a connectionless protocol with no handshake.
Broadcasting: TCP does not support broadcasting; UDP does.
Protocols: TCP is used by HTTP, HTTPS, FTP, SMTP, and Telnet; UDP is used by DNS, DHCP, TFTP, SNMP, RIP, and VoIP.
Stream type: a TCP connection is a byte stream; UDP is message-oriented.
Overhead: TCP's overhead is low but higher than UDP's; UDP's overhead is very low.
Applications: TCP is used where a safe and trustworthy communication procedure is necessary, such as email, web browsing, and military services. UDP is used where quick communication matters more than dependability, such as VoIP, game streaming, and video and music streaming.
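The connectionless, message-oriented nature of UDP described above can be seen with a minimal socket sketch (a toy loopback example; the port is chosen by the OS and the messages are illustrative):

```python
import socket

# No handshake: each sendto/recvfrom carries one self-contained datagram
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))           # let the OS pick an ephemeral port
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", ("127.0.0.1", port))  # sent without any connection setup

payload, addr = server.recvfrom(1024)   # one whole message arrives at once
server.sendto(b"pong", addr)            # reply to whoever sent the datagram
reply, _ = client.recvfrom(1024)

client.close()
server.close()
```

There is no accept() and no connection state: delivery and ordering are not guaranteed, which is exactly the trade-off the comparison above describes.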

[ ] Slow start TCP


TCP slow start builds up the transmission rate to match the network's
capabilities without knowing those capabilities in advance and without
creating congestion. It is an algorithm that probes the available bandwidth
for packet transmission and balances the speed of a network connection: it
prevents congestion on a network whose capacity is initially unknown by slowly
increasing the volume of data sent until the network's maximum capacity is
found.
To implement TCP slow start, the congestion window (cwnd) sets an upper limit
on the amount of data a source can transmit over the network before receiving
an acknowledgment (ACK), while the slow start threshold (ssthresh) determines
when slow start is (de)activated. When a new connection is made, cwnd is
initialized to one segment; the sender transmits it and waits for an ACK. Each
ACK received causes the congestion window to be incremented, and this
continues until cwnd exceeds ssthresh. Slow start also terminates when
congestion is experienced.
Congestion control
Congestion is a state that occurs in the network layer when message traffic is
so heavy that it slows the network response time. The server sends data in TCP
packets, and the user's client confirms delivery by returning acknowledgments
(ACKs). The connection has a limited capacity depending on hardware and
network conditions. If the server sends too many packets too quickly, they
will be dropped and no acknowledgment returned; the server registers this as
missing ACKs. Congestion control algorithms use this flow of sent packets and
ACKs to determine a send rate.
TCP congestion control is a method used by the TCP protocol to manage data flow
over a network and prevent congestion. TCP uses a congestion window and
congestion policy that avoids congestion. Previously, we assumed that only the
receiver could dictate the sender’s window size. We ignored another entity here,
the network. If the network cannot deliver the data as fast as it is created by
the sender, it must tell the sender to slow down. In other words, in addition to
the receiver, the network is a second entity that determines the size of the
sender’s window.
Congestion Policy in TCP
 Slow Start Phase: starts slowly; the window grows exponentially until it
reaches the threshold.
 Congestion Avoidance Phase: after the threshold is reached, the window
grows by 1 per RTT.
 Congestion Detection Phase: on detecting congestion, the sender falls back
to the slow start phase or the congestion avoidance phase.
Slow Start Phase
Exponential Increment: In this phase after every RTT the congestion window size
increments exponentially.
Example: If the initial congestion window size is 1 segment, and the first
segment is successfully acknowledged, the congestion window size becomes 2
segments. If the next transmission is also acknowledged, the congestion window
size doubles to 4 segments. This exponential growth continues as long as all
segments are successfully acknowledged.
Initially cwnd = 1
After 1 RTT, cwnd = 2^(1) = 2
2 RTT, cwnd = 2^(2) = 4
3 RTT, cwnd = 2^(3) = 8
Congestion Avoidance Phase
Additive Increment: This phase starts after the threshold value also denoted
as ssthresh. The size of CWND (Congestion Window) increases additive. After each
RTT cwnd = cwnd + 1.
For example: if the congestion window size is 20 segments and all 20 segments are
successfully acknowledged within an RTT, the congestion window size would be
increased to 21 segments in the next RTT. If all 21 segments are again
successfully acknowledged, the congestion window size will be increased to 22
segments, and so on.
Initially cwnd = i
After 1 RTT, cwnd = i+1
2 RTT, cwnd = i+2
3 RTT, cwnd = i+3
Congestion Detection Phase
Multiplicative Decrement: If congestion occurs, the congestion window size is
decreased. The only way a sender can guess that congestion has happened is the
need to retransmit a segment. Retransmission is needed to recover a missing
packet that is assumed to have been dropped by a router due to congestion.
Retransmission can occur in one of two cases: when the RTO timer times out or
when three duplicate ACKs are received.
Case 1: Retransmission due to Timeout – In this case, the congestion possibility
is high.
(a) ssthresh is reduced to half of the current window size.
(b) set cwnd = 1
(c) start with the slow start phase again.
Case 2: Retransmission due to 3 Acknowledgement Duplicates – The congestion
possibility is less.
(a) ssthresh value reduces to half of the current window size.
(b) set cwnd= ssthresh
(c) start with congestion avoidance phase
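The three phases and the two retransmission cases can be sketched as a small simulation (a toy model, not real TCP: it counts abstract transmission rounds and segments rather than bytes and ACKs, and the function and event names are illustrative):

```python
def simulate_cwnd(rounds: int, ssthresh: int, events: dict) -> list:
    """Toy TCP congestion-window trace over `rounds` transmission rounds.

    events maps a round number to "timeout" (case 1) or
    "triple_dup_ack" (case 2). Returns cwnd at the start of each round.
    """
    cwnd = 1
    history = []
    for r in range(1, rounds + 1):
        history.append(cwnd)
        event = events.get(r)
        if event == "timeout":
            ssthresh = max(cwnd // 2, 1)  # (a) halve the threshold
            cwnd = 1                      # (b) reset window, (c) restart slow start
        elif event == "triple_dup_ack":
            ssthresh = max(cwnd // 2, 1)  # (a) halve the threshold
            cwnd = ssthresh               # (b)+(c) resume in congestion avoidance
        elif cwnd < ssthresh:
            cwnd *= 2                     # slow start: exponential growth per RTT
        else:
            cwnd += 1                     # congestion avoidance: additive growth
    return history

history = simulate_cwnd(8, 8, {})  # → [1, 2, 4, 8, 9, 10, 11, 12]
```

With ssthresh = 8 and no loss, cwnd doubles until it reaches the threshold, then grows by 1 per round, matching the phase descriptions above.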
Example
Assume a TCP connection exhibiting slow start behavior. At the 5th
transmission round, with a threshold (ssthresh) value of 32, it enters the
congestion avoidance phase and continues until the 10th transmission round. At
the 10th round the sender receives 3 duplicate ACKs and enters additive
increase mode. A timeout occurs at the 16th transmission round. Plot the
transmission round (time) vs. the congestion window size of the TCP segments.

Congestion Detection Phase

[ ] Berkeley Sockets

[ ] Three-way handshake
The TCP 3-Way Handshake is a fundamental process that establishes a reliable
connection between two devices over a TCP/IP network. It involves three steps:
SYN (Synchronize), SYN-ACK (Synchronize-Acknowledge), and ACK (Acknowledge).
During the handshake, the client and server exchange initial sequence numbers
and confirm the connection establishment.
What is the TCP 3-Way Handshake?
The TCP 3-Way Handshake is a fundamental process used in the Transmission Control
Protocol (TCP) to establish a reliable connection between a client and a server
before data transmission begins. This handshake ensures that both parties are
synchronized and ready for communication.
TCP Segment Structure
A TCP segment consists of data bytes to be sent and a header that is added to the
data by TCP as shown:
The header of a TCP segment can range from 20 to 60 bytes: 20 bytes of
mandatory fields plus up to 40 bytes of options. With no options the header is
20 bytes; with all options it can be at most 60 bytes. Header fields:
 Source Port Address: A 16-bit field that holds the port address of the
application that is sending the data segment.
 Destination Port Address: A 16-bit field that holds the port address of the
application in the host that is receiving the data segment.
 Sequence Number: A 32-bit field that holds the sequence number , i.e, the
byte number of the first byte that is sent in that particular segment. It
is used to reassemble the message at the receiving end of the segments that
are received out of order.
 Acknowledgement Number: A 32-bit field that holds the acknowledgement
number, i.e, the byte number that the receiver expects to receive next. It
is an acknowledgement for the previous bytes being received successfully.
 Header Length (HLEN): This is a 4-bit field that indicates the length of
the TCP header by a number of 4-byte words in the header, i.e if the header
is 20 bytes(min length of TCP header ), then this field will hold 5
(because 5 x 4 = 20) and the maximum length: 60 bytes, then it’ll hold the
value 15(because 15 x 4 = 60). Hence, the value of this field is always
between 5 and 15.
 Control flags: These are 6 1-bit control bits that control connection
establishment, connection termination, connection abortion, flow control,
mode of transfer etc. Their function is:
o URG: Urgent pointer is valid
o ACK: Acknowledgement number is valid( used in case of cumulative
acknowledgement)
o PSH: Request for push
o RST: Reset the connection
o SYN: Synchronize sequence numbers
o FIN: Terminate the connection
 Window size: This field tells the window size of the sending TCP in bytes.
 Checksum: This field holds the checksum for error control . It is mandatory
in TCP as opposed to UDP.
 Urgent pointer: This field (valid only if the URG control flag is set) is
used to point to data that is urgently required that needs to reach the
receiving process at the earliest. The value of this field is added to the
sequence number to get the byte number of the last urgent byte.
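As a sketch, the fixed 20-byte part of the header described above can be decoded with Python's struct module (the function and dictionary keys are illustrative, not from the original text):

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Parse the mandatory 20-byte part of a TCP header (illustrative sketch)."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH",
                                                             segment[:20])
    hlen_words = offset_flags >> 12   # HLEN: header length in 4-byte words (5..15)
    flags = offset_flags & 0x3F       # low 6 bits: URG ACK PSH RST SYN FIN
    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack,
        "header_bytes": hlen_words * 4,          # 20..60 bytes
        "URG": bool(flags & 0x20), "ACK": bool(flags & 0x10),
        "PSH": bool(flags & 0x08), "RST": bool(flags & 0x04),
        "SYN": bool(flags & 0x02), "FIN": bool(flags & 0x01),
        "window": window, "checksum": checksum, "urgent_ptr": urgent,
    }
```

For example, a 20-byte header with HLEN = 5 and only the SYN bit set decodes to `header_bytes == 20` with SYN true and all other flags false.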
TCP 3-way Handshake Process
The process of communication between devices over the internet follows the
current TCP/IP suite model (a stripped-down version of the OSI reference
model). The application layer sits at the top of the TCP/IP stack; from here,
network applications such as web browsers on the client side establish a
connection with the server. From the application layer, the information is
passed to the transport layer, where our topic comes into the picture. The two
important protocols of this layer are TCP and UDP (User Datagram Protocol), of
which TCP is prevalent, since it provides a reliable connection. UDP is used,
for example, to query a DNS server for the IP address corresponding to a
domain name.
TCP provides reliable communication with something called Positive
Acknowledgement with Re-transmission (PAR). The Protocol Data Unit (PDU) of the
transport layer is called a segment. A device using PAR resends the data unit
until it receives an acknowledgement. If the segment received at the receiver's
end is damaged (the receiver checks the data with the checksum functionality of
the transport layer, used for error detection), the receiver discards it. So
the sender has to resend any data unit for which a positive acknowledgement is
not received. From this mechanism you can see that three segments are exchanged
between the sender (client) and receiver (server) for a reliable TCP connection
to get established. Let us delve into how this mechanism works:
 Step 1 (SYN): The client wants to establish a connection with the server,
so it sends a segment with the SYN (Synchronize Sequence Number) flag set,
which informs the server that the client intends to start communication
and with what sequence number its segments will start.
 Step 2 (SYN + ACK): The server responds to the client request with the
SYN and ACK flag bits set. ACK signifies the acknowledgement of the
segment received, and SYN signifies with what sequence number the server's
own segments will start.
 Step 3 (ACK): Finally, the client acknowledges the response of the
server, and both establish a reliable connection over which the actual
data transfer will start.
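In ordinary socket programming the operating system carries out these three steps for us: connect() on the client returns only once the SYN / SYN+ACK / ACK exchange has completed, and accept() on the server hands back an already-established connection. A minimal local sketch:

```python
import socket
import threading

# The kernel performs the SYN / SYN+ACK / ACK exchange under the hood.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, addr = server.accept()       # handshake already completed here
    conn.sendall(b"hello")
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))    # SYN -> SYN+ACK -> ACK
data = client.recv(1024)
print(data)                            # b'hello'
client.close(); t.join(); server.close()
```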
[ ] TCP Timers
TCP uses several timers to ensure that excessive delays are not encountered
during communications. Several of these timers are elegant, handling problems
that are not immediately obvious at first analysis. Each of the timers used by
TCP is examined in the following sections, which reveal their roles in ensuring
data is properly sent from one connection to another.
TCP implementation uses four timers –
 Retransmission Timer – To retransmit lost segments, TCP uses a
retransmission timeout (RTO). When TCP sends a segment the timer starts,
and it stops when the acknowledgment is received. If the timer expires, a
timeout occurs and the segment is retransmitted. To calculate the
retransmission timeout we first need to calculate the RTT (round-trip
time).
RTT is of three types –
o Measured RTT(RTTm) – The measured round-trip time for a segment is the
time required for the segment to reach the destination and be
acknowledged, although the acknowledgement may include other segments.
o Smoothed RTT(RTTs) – It is the weighted average of RTTm. RTTm is
likely to change and its fluctuation is so high that a single
measurement cannot be used to calculate RTO.
Initially -> No value
After the first measurement -> RTTs=RTTm
After each measurement -> RTTs= (1-t)*RTTs + t*RTTm
Note: t=1/8 (default if not given)
o Deviated RTT(RTTd) – Most implementations do not use RTTs alone, so
the RTT deviation is also calculated to find out RTO.
Initially -> No value
After the first measurement -> RTTd=RTTm/2
After each measurement -> RTTd= (1-k)*RTTd + k*|RTTm-RTTs|
Note: k=1/4 (default if not given)
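The RTTs/RTTd formulas above can be combined into a small estimator. Note that the final combination, RTO = RTTs + 4*RTTd, follows the standard practice of RFC 6298 and is an addition not stated in the notes:

```python
class RTOEstimator:
    """Smoothed RTT / deviation estimator using the formulas above."""
    def __init__(self, t=1/8, k=1/4):       # default weights from the notes
        self.t, self.k = t, k
        self.rtts = None                    # smoothed RTT
        self.rttd = None                    # RTT deviation

    def update(self, rttm):
        if self.rtts is None:               # after the first measurement
            self.rtts = rttm
            self.rttd = rttm / 2
        else:                               # deviation uses the old RTTs
            self.rttd = (1 - self.k) * self.rttd + self.k * abs(rttm - self.rtts)
            self.rtts = (1 - self.t) * self.rtts + self.t * rttm
        return self.rtts + 4 * self.rttd    # RTO (per RFC 6298)

est = RTOEstimator()
print(est.update(100))                 # 300.0  (RTTs=100, RTTd=50)
print(round(est.update(120), 2))       # 272.5  (RTTs=102.5, RTTd=42.5)
```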
 Persistent Timer – To deal with a zero-window-size deadlock situation, TCP
uses a persistence timer. When the sending TCP receives an acknowledgment
with a window size of zero, it starts a persistence timer. When the
persistence timer goes off, the sending TCP sends a special segment called
a probe. This segment contains only 1 byte of new data. It has a sequence
number, but its sequence number is never acknowledged; it is even ignored
in calculating the sequence number for the rest of the data. The probe
causes the receiving TCP to resend the acknowledgment which was lost.
 Keep Alive Timer – A keepalive timer is used to prevent a long idle
connection between two TCPs. Suppose a client opens a TCP connection to a
server, transfers some data, and then becomes silent (perhaps the client
has crashed). In this case, the connection remains open forever. So a
keepalive timer is used. Each time the server hears from the client, it
resets this timer. The time-out is usually 2 hours. If the server does
not hear from the client after 2 hours, it sends a probe segment. If
there is no response after 10 probes, each of which is 75 s apart, it
assumes that the client is down and terminates the connection.
 Time Wait Timer – This timer is used during TCP connection termination.
It starts after sending the last ACK for the second FIN, before the
connection is finally closed.
After a TCP connection is closed, it is possible for datagrams that are still
making their way through the network to attempt to access the closed port. The
quiet timer is intended to prevent the just-closed port from reopening again
quickly and receiving these last datagrams.
The quiet timer is usually set to twice the maximum segment lifetime (MSL),
ensuring that all segments still heading for the port have been discarded.
[ ] TCP Flow Control (Sliding Window)
What is TCP Flow Control?
In a communication network, in order for two network hosts to communicate with
each other, one has to send a packet while another host has to receive it. It
might happen that both the hosts have
different hardware and software specifications and accordingly their processors
might differ. If the receiver host has a fast processor which can consume
messages sent at a higher rate by the sender then the communication works well
and no problem will occur. But have you ever wondered what would happen if the
receiver had a slower processor? Well, in this case, the incoming messages will
keep coming and will be added to the receiver’s queue. Once the receiver’s queue
is filled, the messages will start dropping, leading to wasted network
bandwidth. In order to overcome this issue of the slow receiver and fast sender,
the concept of flow control comes into the picture.
For the slow sender and fast receiver, no flow control is required. Whereas for
the fast sender and slow receiver, flow control is important.

In the diagram given, there is a fast sender and a slow receiver. Here are the
following points to understand how the message will overflow after a certain
interval of time.
 In the diagram, the receiver is receiving the message sent by the sender at
the rate of 5 messages per second while the sender is sending the messages
at the rate of 10 messages per second.
 When the sender sends the message to the receiver, it gets into the network
queue of the receiver.
 Once the user reads the message from the application, the message gets
cleared from the queue and the space becomes free.
 According to the mentioned speeds of the sender and receiver, the free
space in the receiver queue will shrink at the rate of 5 messages/second.
Since the receiver buffer can accommodate 200 messages, in 40 seconds the
receiver buffer will become full.
 So, after 40 seconds, the messages will start dropping as there will be no
space remaining for the incoming messages.
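The 40-second figure follows directly from the rates above; a quick sketch of the arithmetic:

```python
# Net fill rate = 10 msg/s sent - 5 msg/s consumed = 5 msg/s.
# With a 200-message buffer, it overflows after 200 / 5 = 40 s.
send_rate, consume_rate, buffer_size = 10, 5, 200

backlog, t = 0, 0
while backlog < buffer_size:
    t += 1
    backlog += send_rate - consume_rate
print(t)   # 40 -> messages start dropping after 40 seconds
```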
This is why flow control becomes important for TCP protocol for data transfer and
communication purposes.
How Does Flow Control in TCP Work?
When the data is sent on the network, this is what normally happens in the
network layer.
The sender writes the data to a socket and sends it to the transport layer which
is TCP in this case. The transport layer will then wrap this data and send it to
the network layer which will route it to the receiving node.
The TCP stores the data that needs to be sent in the send buffer and the data to
be received in the receive buffer. Flow control makes sure that no more packets
are sent by the sender once the receiver’s buffer is full as the messages will be
dropped and the receiver won’t be able to handle them. In order to control the
amount of data sent by the TCP, the receiver will create a buffer which is also
known as Receive
Window.

TCP sends an ACK every time it receives a data packet, acknowledging that the
packet was received successfully, and along with this ACK it sends the value of
the current receive window so that the sender knows how much data it can still
send.
The Sliding Window
The sliding window is used in TCP to control the number of bytes a channel can
accommodate. It is the number of bytes that were sent but not yet acknowledged.
This is done in TCP by using a window in which packets are sent and acknowledged
in sequence. When the sending host receives an ACK from the receiving host, the
window slides forward in order to allow new packets to be sent. There are
several techniques used with the sliding window, including go-back-n and
selective repeat, but the fundamentals of the communication remain the same.
Receive Window
TCP flow control is maintained by the receive window on the sender side. It
tracks the amount of space left vacant inside the buffer on the receiver side.
The figure below shows the receive window.
The formula for calculating the receive window is given in the figure. The
window is constantly updated and reported via the window field of the TCP
header. Some of the important terms for the TCP receive window
are receiveBuffer, receiveWindow, lastByteRead, and lastByteReceived.
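Using those terms, the receive window is simply the spare space in the buffer; a minimal sketch of the formula (the same one restated in the conclusion below):

```python
def receive_window(receive_buffer, last_byte_received, last_byte_read):
    """rwnd = ReceiveBuffer - (LastByteReceived - LastByteReadByApplication)"""
    return receive_buffer - (last_byte_received - last_byte_read)

# Example: 64 KB buffer; 50,000 bytes received, application has read 30,000
print(receive_window(65536, 50_000, 30_000))   # 45536 bytes still free
```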
Whenever the receive buffer is full, the receiver sends receiveWindow=0 to the
sender. The sender then stops transmitting; but once buffer space frees up at
the receiver, there may be no new data flowing back that could carry an updated
window, which creates a problem for the receiver's application.
To solve this problem, TCP has the sender periodically send a segment carrying
a single byte of data to the receiver. This minimizes strain on the network
while maintaining a constant check on the status of the buffer at the receiving
side. In this way, as soon as buffer space frees up, an ACK carrying the new
window is sent.
In this, a control mechanism is adopted to ensure that the rate of incoming data
is not greater than its consumption. This mechanism relies on the window field of
the TCP header and provides reliable data transport over a network.
The Persist Timer
From the above condition, there might be a possibility of deadlock. After the
receiver shows a zero window, if the ACK message is not sent by the receiver to
the sender, it will never know when to start sending the data. This situation in
which the sender is waiting for a message to start sending the data and the
receiver is waiting for more incoming data is called a deadlock condition in flow
control. In order to solve this problem, whenever the TCP receives the zero
window message, it starts a persistent timer that will send small packets to the
receiver periodically. This is also called WindowProbe.
Conclusion
 The flow control mechanism in TCP ensures that the sender does not send
more data than what the receiver can handle.
 With every ACK message at the receiver, it advertises the current receive
window.
 Receive window is spare space in the receiver buffer whose formula is given
by: rwnd = ReceiveBuffer – (LastByteReceived – LastByteReadByApplication);
 In TCP, a sliding window is used to ensure that no more bytes are
outstanding than what is advertised by the receiver.
 When the window size is zero, the persist timer starts and TCP will stop
the data transmission.
 WindowProbe message is sent in small packets of data periodically to the
receiver.
 When it receives a non-zero window size, it resumes its transmission.
MODULE 6
Short Notes :
[ ] DNS (Domain Name Server)
The Domain Name System (DNS) is like the internet’s phone book. It helps you find
websites by translating easy-to-remember names (like www.example.com) into the
numerical IP addresses (like 192.0.2.1) that computers use to locate each other
on the internet. Without DNS, you would have to remember long strings of numbers
to visit your favorite websites.
The Domain Name System (DNS) is a hostname-to-IP-address translation service.
DNS is a distributed database implemented in a hierarchy of name servers. It is
an application layer protocol for message exchange between clients and servers.
It is required for the functioning of the Internet.
What is the Need for DNS?
Every host is identified by an IP address, but remembering numbers is very
difficult for people; also, IP addresses are not static. Therefore a mapping is
required to change the domain name to the IP address. So DNS is used to convert
the domain names of websites to their numerical IP addresses.
Types of Domain
There are various kinds of domains:
 Generic Domains: .com(commercial), .edu(educational), .mil(military),
.org(nonprofit organization), .net(similar to commercial) all these are
generic domains.
 Country Domain: .in (India) .us .uk
 Inverse Domain: used when we want to find the domain name corresponding
to an IP address (IP to domain name mapping). DNS can provide both
mappings; for example, to find the IP address of geeksforgeeks.org we can
type nslookup www.geeksforgeeks.org

Types of DNS
Organization of Domain
It is very difficult to find out the IP address associated with a website,
because there are millions of websites and we should be able to obtain the IP
address immediately, without a lot of delay. For that to happen, the
organization of the database is very important.

Root DNS Server


 DNS Record: Holds the domain name, IP address, validity, time to live,
and all other information related to that domain name. These records are
stored in a tree-like structure.
 Namespace: Set of possible names, flat or hierarchical. The naming system
maintains a collection of bindings of names to values – given a name, a
resolution mechanism returns the corresponding value.
 Name Server: It is an implementation of the resolution mechanism.
DNS = Name service in Internet – A zone is an administrative unit, and a domain
is a subtree.
Name-to-Address Resolution
The host requests the DNS name server to resolve the domain name, and the name
server returns the IP address corresponding to that domain name to the host so
that the host can then connect to that IP address.

Name-to-Address Resolution
 Hierarchy of Name Servers – Root Name Servers: Contacted by name servers
that cannot resolve a name. A root server contacts the authoritative name
server if the name mapping is not known, gets the mapping, and returns
the IP address to the host.
 Top-level Domain (TLD) Server: It is responsible for com, org, edu, etc,
and all top-level country domains like uk, fr, ca, in, etc. They have info
about authoritative domain servers and know the names and IP addresses of
each authoritative name server for the second-level domains.
 Authoritative Name Servers are the organization’s DNS servers, providing
authoritative hostnames to IP mapping for organization servers. It can be
maintained by an organization or service provider. In order to reach
cse.dtu.in we have to ask the root DNS server, then it will point out to
the top-level domain server and then to the authoritative domain name
server which actually contains the IP address. So the authoritative domain
server will return the associative IP address.
Domain Name Server
The client machine sends a request to the local name server which, if it does
not find the address in its database, sends a request to the root name server,
which in turn will route the query to a top-level domain (TLD) or authoritative
name server. The root name server can also contain some hostname-to-IP-address
mappings. The top-level domain (TLD) server always knows who the authoritative
name server is. Finally, the IP address is returned to the local name server,
which in turn returns the IP address to the host.

Domain Name Server


How Does DNS Work?
The working of DNS starts with converting a hostname into an IP address. A
domain name serves as a distinctive identifier for a website; it is used in
place of an IP address to make it simpler for users to visit websites. The
Domain Name System works by querying a database that stores the names of hosts
available on the Internet. The top-level domain servers store address
information for top-level domains such as .com, .net, .org, and so on. When the
client sends a request, the DNS resolver sends a request to a DNS server to
fetch the IP address. If that server does not hold the IP address for the
hostname, it forwards the request to another DNS server. When the IP address
arrives at the resolver, it completes the request over the Internet Protocol.
For more, you can refer to Working of DNS Server .
How Does DNS Works?
Authoritative DNS Server Vs Recursive DNS Resolver
Parameters              | Authoritative DNS Server                     | Recursive DNS Resolver
------------------------|----------------------------------------------|----------------------------------------------
Function                | Holds the official DNS records for a domain  | Resolves DNS queries on behalf of clients
Role                    | Provides answers to specific DNS queries     | Actively looks up information for clients
Query Handling          | Responds with authoritative DNS data         | Queries other DNS servers for DNS data
Client Interaction      | Doesn't directly interact with end-users     | Serves end-users or client applications
Data Source             | Stores the DNS records for specific domains  | Looks up data from other DNS servers
Caching                 | Generally doesn't perform caching            | Caches DNS responses for faster lookups
Hierarchical Resolution | Does not participate in recursive resolution | Actively performs recursive name resolution
IP Address              | Has a fixed, known IP address                | IP address may vary depending on ISP
Zone Authority          | Manages a specific DNS zone (domain)         | Does not manage any specific DNS zone
What is DNS Lookup?
DNS Lookup, or DNS Resolution, is the process that allows devices and
applications to translate human-readable domain names into the corresponding IP
addresses used by computers for communicating over the web.
What Are The Steps in a DNS Lookup?
Often, DNS lookup information is stored temporarily either on your own computer
or within the DNS system itself. There are usually 8 steps involved in a DNS
lookup. If the information is already stored (cached), some of these steps can be
skipped, making the process faster. Here is an example of all 8 steps when
nothing is cached:
1. A user types “example.com” into a web browser.
2. The request goes to a DNS resolver.
3. The resolver asks a root server where to find the top-level domain (TLD)
server for .com.
4. The root server tells the resolver to contact the .com TLD server.
5. The resolver then asks the .com TLD server for the IP address of
“example.com.”
6. The .com TLD server gives the resolver the IP address of the domain’s
nameserver.
7. The resolver then asks the domain’s nameserver for the IP address of
“example.com.”
8. The domain’s nameserver returns the IP address to the resolver.
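From an application's point of view, all eight steps are hidden behind a single library call: the operating system's stub resolver (plus any caches along the way) does the recursive work and hands back the final answer. A minimal Python sketch:

```python
import socket

# gethostbyname asks the system's stub resolver, which performs (or has
# cached) the recursive lookup described in the steps above.
def resolve(hostname: str) -> str:
    return socket.gethostbyname(hostname)

print(resolve("localhost"))   # 127.0.0.1 on most systems
```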
DNS Servers Involved in Loading a Webpage
Upon loading the webpage, several DNS Servers are responsible for translating the
domain name into the corresponding IP Address of the web server hosting the
website. Here is the list of main DNS servers involved in loading a Webpage.
 Local DNS Resolver
 Root DNS Servers
 Top-Level Domain (TLD) DNS Servers
 Authoritative DNS Servers
 Web Server
This hierarchical system of DNS servers ensures that when you type a domain name
into your web browser, it can be translated into the correct IP address, allowing
you to access the desired webpage on the internet.
For more information you can refer DNS Look-Up article.
What is DNS Resolver?
A DNS Resolver, also called a DNS client, initiates the process of DNS Lookup
(DNS Resolution). Using the DNS Resolver, applications can access websites and
services on the Internet through user-friendly domain names, which also solves
the problem of having to remember IP addresses.
What Are The Types of DNS Queries?
There are basically three types of DNS Queries that occur in DNS Lookup. These
are stated below.
 Recursive Query: The DNS client requires that the DNS server respond with
a final answer – either the requested resource record or an error
message – resolving the name fully on the client's behalf.
 Iterative Query: The DNS client asks the DNS server for the best answer
it can give; if the server does not know the final answer, it returns a
referral to another DNS server for the client to query next.
 Non-Recursive Query: Occurs when a DNS resolver queries a DNS server for
a record that the server can answer immediately, either because it is
authoritative for it or because the record exists in its cache.
What is DNS Caching?
DNS Caching is the process by which DNS resolvers store previously resolved
information (domain names and their IP addresses) for some time. The main
purpose of DNS caching is to speed up future DNS lookups and reduce the overall
time of DNS resolution.
Conclusion
In conclusion, the Domain Name System (DNS) is an essential part of
the application layer in networking. It acts like the internet’s directory,
translating human-friendly domain names into numerical IP addresses that
computers use to communicate. Without DNS, navigating the internet would be much
more difficult, as we’d need to remember complex IP addresses for every website.
DNS makes the internet user-friendly and efficient, allowing us to easily access
websites and online services by using simple, memorable names.

[ ] HTTP (Hypertext Transfer Protocol)


HTTP (Hypertext Transfer Protocol) is a fundamental protocol of the Internet,
enabling the transfer of data between a client and a server. It is the foundation
of data communication for the World Wide Web.
HTTP provides a standard between a web browser and a web server to establish
communication. It is a set of rules for transferring data from one computer to
another. Data such as text, images, and other multimedia files are shared on the
World Wide Web. Whenever a web user opens their web browser, the user indirectly
uses HTTP. It is an application protocol that is used for distributed,
collaborative, hypermedia information systems.

History of HTTP
 Tim Berners-Lee and his team at CERN are indeed credited with inventing the
original HTTP protocol.
 HTTP version 0.9 was the initial version introduced in 1991.
 HTTP version 1.0 followed in 1996 with the introduction of RFC 1945.
 HTTP version 1.1 was introduced in January 1997 with RFC 2068, later
refined in RFC 2616 in June 1999.
 HTTP version 2.0 was specified in RFC 7540 and published on May 14, 2015.
 HTTP version 3.0, also known as HTTP/3, is based on the QUIC transport
protocol (originally developed by Google) and is designed to improve web
performance. It was specified by the IETF in RFC 9114.
Methods of HTTP
 GET: Used to retrieve data from a specified resource. It should have no
side effects and is commonly used for fetching web pages, images, etc.
 POST: Used to submit data to be processed by a specified resource. It is
suitable for form submissions, file uploads, and creating new resources.
 PUT: Used to update or create a resource on the server. It replaces the
entire resource with the data provided in the request body.
 PATCH: Similar to PUT but used for partial modifications to a resource. It
updates specific fields of a resource rather than replacing the entire
resource.
 DELETE: Used to remove a specified resource from the server.
 HEAD: Similar to GET but retrieves only the response headers, useful for
checking resource properties without transferring the full content.
 OPTIONS: Used to retrieve the communication options available for a
resource, including supported methods and headers.
 TRACE: Used for debugging purposes to echo the received request back to the
client, though it's rarely used due to security concerns.
 CONNECT: Used to establish a tunnel to the server through an HTTP proxy,
commonly used for SSL/TLS connections.
HTTP Request/Response:
HTTP is a request-response protocol, which means that for every request sent by a
client (typically a web browser), the server responds with a corresponding
response. The basic flow of an HTTP request-response cycle is as follows:
 Client sends an HTTP request: The client (usually a web browser) initiates
the process by sending an HTTP request to the server. This request includes
a request method (GET, POST, PUT, DELETE, etc.), the target URI (Uniform
Resource Identifier, e.g., a URL), headers, and an optional request body.
 Server processes the request: The server receives the request and processes
it based on the requested method and resource. This may involve retrieving
data from a database, executing server-side scripts, or performing other
operations.
 Server sends an HTTP response: After processing the request, the server
sends an HTTP response back to the client. The response includes a status
code (e.g., 200 OK, 404 Not Found), response headers, and an optional
response body containing the requested data or content.
 Client processes the response: The client receives the server's response
and processes it accordingly. For example, if the response contains an HTML
page, the browser will render and display it. If it's an image or other
media file, the browser will display or handle it appropriately.
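The request-response cycle above can be demonstrated end to end with Python's standard library. The tiny local server and its response body are, of course, illustrative stand-ins for a real web server:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import http.client
import threading

# A tiny local server so the request/response cycle is fully self-contained.
class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)                      # status line: 200 OK
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()                           # response headers
        self.wfile.write(body)                       # response body
    def log_message(self, *args):                    # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")                             # client sends the request
resp = conn.getresponse()                            # server's response arrives
status, body = resp.status, resp.read()
print(status, body)                                  # 200 b'hello'
conn.close(); server.shutdown()
```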
Features
 Stateless: Each request is independent, and the server doesn't retain
previous interactions' information.
 Text-Based: Messages are in plain text, making them readable and
debuggable.
 Client-Server Model: Follows a client-server architecture for requesting
and serving resources.
 Request-Response: Operates on a request-response cycle between clients and
servers.
 Request Methods: Supports various methods like GET, POST, PUT, DELETE for
different actions on resources.
Advantages
 Platform independence: Works on any operating system
 Compatibility: Compatible with various protocols and technologies
 Efficiency: Optimized for performance
 Security: Supports encryption for secure data transfer
Disadvantages
 Lack of security: Vulnerable to attacks like man in the middle
 Performance issues: Can be slow for large data transfers
 Statelessness: Requires additional mechanisms for maintaining state

[ ] SMTP (Simple Mail Transfer Protocol)


Simple Mail Transfer Protocol (SMTP) is a mechanism for exchanging email
messages between servers. It is an essential component of the email
communication process and operates at the application layer of the TCP/IP
protocol stack. SMTP is a protocol for transmitting and receiving email
messages.
What is Simple Mail Transfer Protocol?
SMTP is an application layer protocol. The client who wants to send the mail
opens a TCP connection to the SMTP server and then sends the mail across the
connection. The SMTP server is an always-on listening mode. As soon as it listens
for a TCP connection from any client, the SMTP process initiates a connection
through port 25. After successfully establishing a TCP connection the client
process sends the mail instantly.

SMTP
SMTP Protocol
The SMTP model is of two types:
 End-to-End Method
 Store-and-Forward Method
The end-to-end model is used to communicate between different organizations
whereas the store and forward method is used within an organization. An SMTP
client who wants to send the mail will contact the destination’s host SMTP
directly, to send the mail to the destination. The SMTP server will keep the mail
to itself until it is successfully copied to the receiver’s SMTP.
The client SMTP is the one that initiates the session so let us call it the
client-SMTP and the server SMTP is the one that responds to the session request
so let us call it receiver-SMTP. The client-SMTP will start the session and the
receiver SMTP will respond to the request.
Model of SMTP System
In the SMTP model the user deals with the user agent (UA), for example,
Microsoft Outlook, Netscape, Mozilla, etc. To exchange mail using TCP, a
Message Transfer Agent (MTA) is used. The
user sending the mail doesn’t have to deal with MTA as it is the responsibility
of the system admin to set up a local MTA. The MTA maintains a small queue of
mail so that it can schedule repeat delivery of mail in case the receiver is not
available. The MTA delivers the mail to the mailboxes and the information can
later be downloaded by the user agents.

SMTP Model
Components of SMTP
 Mail User Agent (MUA): It is a computer application that helps you in
sending and retrieving mail. It is responsible for creating email messages
for transfer to the mail transfer agent(MTA).
 Mail Submission Agent (MSA): It is a computer program that receives mail
from a Mail User Agent(MUA) and interacts with the Mail Transfer Agent(MTA)
for the transfer of the mail.
 Mail Transfer Agent (MTA): It is software that has the work to transfer
mail from one system to another with the help of SMTP.
 Mail Delivery Agent (MDA): A mail Delivery agent or Local Delivery Agent is
basically a system that helps in the delivery of mail to the local system.
How does SMTP Work?
 Communication between the sender and the receiver: The sender’s user agent
prepares the message and sends it to the MTA. The MTA’s responsibility is
to transfer the mail across the network to the receiver’s MTA. To send
mail, a system must have a client MTA, and to receive mail, a system must
have a server MTA.
 Sending Emails: Mail is sent by a series of request and response messages
between the client and the server. The message which is sent across
consists of a header and a body. A null line is used to terminate the mail
header and everything after the null line is considered the body of the
message, which is a sequence of ASCII characters. The message body contains
the actual information read by the recipient.
 Receiving Emails: The user agent on the server side checks the mailboxes
at particular time intervals. If any mail is received, it informs the
user. When the user tries to read the mail it displays a
list of emails with a short description of each mail in the mailbox. By
selecting any of the mail users can view its contents on the terminal.

Working of SMTP
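The flow above can be sketched with Python's smtplib and email modules: the user agent composes the message, then hands it to an MTA over a TCP connection on port 25. The host, port, and addresses below are placeholders, and send() is shown but not invoked, since it needs a reachable MTA:

```python
import smtplib
from email.message import EmailMessage

# Compose a message with the user agent (here: the email library).
msg = EmailMessage()
msg["From"] = "alice@example.com"       # placeholder sender
msg["To"] = "bob@example.com"           # placeholder recipient
msg["Subject"] = "Hello"
msg.set_content("Mail body: a sequence of ASCII characters.")

def send(message, host="smtp.example.com", port=25):
    """Open a TCP connection to the SMTP server (port 25) and send.
    host/port are placeholders; requires a real, reachable MTA."""
    with smtplib.SMTP(host, port) as mta:   # client-SMTP starts the session
        mta.send_message(message)           # receiver-SMTP responds

print(msg["Subject"], msg["To"])
```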
What is an SMTP Envelope?
 Purpose
o The SMTP envelope contains information that guides email
delivery between servers.
o It is distinct from the email headers and body and is not visible to
the email recipient.
 Contents of the SMTP Envelope
o Sender Address: Specifies where the email originates.
o Recipient Addresses: Indicates where the email should be delivered.
o Routing Information: Helps servers determine the path for email
delivery.
 Comparison to Regular Mail
o Think of the SMTP envelope as the address on a physical envelope for
regular mail.
o Just like an envelope guides postal delivery, the SMTP envelope
directs email servers on where to send the email.

[ ] DHCP (Dynamic Host Configuration Protocol)


Dynamic Host Configuration Protocol is a network protocol used to automate the
process of assigning IP addresses and other network configuration parameters to
devices (such as computers, smartphones, and printers) on a network. Instead of
manually configuring each device with an IP address, DHCP allows devices to
connect to a network and receive all necessary network information, like IP
address, subnet mask, default gateway, and DNS server addresses, automatically
from a DHCP server.
This makes it easier to manage and maintain large networks, ensuring devices can
communicate effectively without conflicts in their network settings. DHCP plays a
crucial role in modern networks by simplifying the process of connecting devices
and managing network resources efficiently.
What is DHCP?
DHCP stands for Dynamic Host Configuration Protocol. It is a critical service
through which the users of an enterprise network communicate. DHCP helps
enterprises smoothly manage the allocation of IP addresses to end-user client
devices such as desktops, laptops, and cellphones. It is an application layer
protocol that is used to provide:
Subnet Mask (Option 1 - e.g., 255.255.255.0)
Router Address (Option 3 - e.g., 192.168.1.1)
DNS Address (Option 6 - e.g., 8.8.8.8)
Vendor Class Identifier (Option 43 - e.g.,
'unifi' = 192.168.1.9 ##where unifi = controller)
DHCP is based on a client-server model and based on discovery, offer, request,
and ACK.
Why Do We Use DHCP?
DHCP helps in managing the entire process automatically and centrally. DHCP helps
in maintaining a unique IP Address for a host using the server. DHCP servers
maintain information on TCP/IP configuration and provide configuration of address
to DHCP-enabled clients in the form of a lease offer.
Components of DHCP
The main components of DHCP include:
 DHCP Server: DHCP Server is a server that holds IP Addresses and other
information related to configuration.
 DHCP Client: It is a device that receives configuration information from
the server. It can be a mobile, laptop, computer, or any other electronic
device that requires a connection.
 DHCP Relay: DHCP relays basically work as a communication channel between
DHCP Client and Server.
 IP Address Pool: It is the pool or container of IP Addresses possessed by
the DHCP Server. It has a range of addresses that can be allocated to
devices.
 Subnets: Subnets are smaller portions of the IP network partitioned to keep
networks under control.
 Lease: It is simply the length of time for which the information received
from the server is valid; when the lease expires, the client must renew
it.
 DNS Servers: DHCP servers can also provide DNS (Domain Name System) server
information to DHCP clients, allowing them to resolve domain names to IP
addresses.
 Default Gateway: DHCP servers can also provide information about the
default gateway, which is the device that packets are sent to when the
destination is outside the local network.
 Options: DHCP servers can provide additional configuration options to
clients, such as the subnet mask, domain name, and time server information.
 Renewal: DHCP clients can request to renew their lease before it expires to
ensure that they continue to have a valid IP address and configuration
information.
 Failover: DHCP servers can be configured for failover, where two servers
work together to provide redundancy and ensure that clients can always
obtain an IP address and configuration information, even if one server goes
down.
 Dynamic Updates: DHCP servers can also be configured to dynamically update
DNS records with the IP address of DHCP clients, allowing for easier
management of network resources.
 Audit Logging: DHCP servers can keep audit logs of all DHCP transactions,
providing administrators with visibility into which devices are using which
IP addresses and when leases are being assigned or renewed.
DHCP Packet Format
[Figure: DHCP Packet Format]
 Hardware Length: This is an 8-bit field defining the length of the physical
address in bytes, e.g., for Ethernet the value is 6.
 Hop count: This is an 8-bit field defining the maximum number of hops the
packet can travel.
 Transaction ID: This is a 4-byte field carrying an integer. The transaction
identification is set by the client and is used to match a reply with the
request. The server returns the same value in its reply.
 Number of Seconds: This is a 16-bit field that indicates the number of
seconds elapsed since the time the client started to boot.
 Flag: This is a 16-bit field in which only the leftmost bit is used; the
rest of the bits should be set to 0s. The leftmost bit, when set, specifies
a forced broadcast reply from the server. If the reply were unicast to the
client, the destination IP address of the IP packet would be the address
assigned to the client.
 Client IP Address: This is a 4-byte field that contains the client IP
address. If the client does not have this information, this field has a
value of 0.
 Your IP Address: This is a 4-byte field that contains the client IP
address. It is filled by the server at the request of the client.
 Server IP Address: This is a 4-byte field containing the server IP address.
It is filled by the server in a reply message.
 Gateway IP Address: This is a 4-byte field containing the IP address of a
router. It is filled by the server in a reply message.
 Client Hardware Address: This is the physical address of the client.
Although the server can retrieve this address from the frame sent by the
client, it is more efficient if the address is supplied explicitly by the
client in the request message.
 Server Name: This is a 64-byte field that is optionally filled by the
server in a reply packet. It contains a null-terminated string consisting
of the domain name of the server. If the server does not want to fill this
field with data, the server must fill it with all 0s.
 Boot Filename: This is a 128-byte field that can be optionally filled by
the server in a reply packet. It contains a null- terminated string
consisting of the full pathname of the boot file. The client can use this
path to retrieve other booting information. If the server does not want to
fill this field with data, the server must fill it with all 0s.
 Options: This is a 64-byte field with a dual purpose. It can carry either
additional information or some specific vendor information. The field is
used only in a reply message. The server uses a number, called a magic
cookie, in the format of an IP address with the value of 99.130.83.99. When
the client finishes reading the message, it looks for this magic cookie. If
present the next 60 bytes are options.
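As a rough illustration, the fixed-size fields listed above can be laid out with Python's struct module. Note that the full BOOTP/DHCP packet begins with operation-code and hardware-type bytes not detailed in the list above; all field values here are illustrative:

```python
import struct

# Fixed-size prefix of a BOOTP/DHCP packet, up to (but excluding) the
# client hardware address field.
header = struct.pack(
    "!BBBBIHH4s4s4s4s",
    1,            # op: 1 = request (precedes the fields listed above)
    1,            # hardware type: 1 = Ethernet
    6,            # hardware length: 6 bytes for an Ethernet MAC
    0,            # hop count
    0x12345678,   # transaction ID (illustrative value set by the client)
    0,            # number of seconds since the client started to boot
    0x8000,       # flags: leftmost bit set = force broadcast reply
    bytes(4),     # client IP address (0.0.0.0: not yet known)
    bytes(4),     # your IP address (filled in by the server)
    bytes(4),     # server IP address
    bytes(4),     # gateway IP address
)
print(len(header))  # 28 bytes before the client hardware address field
```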
Working of DHCP
DHCP works at the Application layer and uses the UDP protocol. The main task of
DHCP is to dynamically assign IP addresses to clients and to allocate TCP/IP
configuration information to them.
The DHCP port number for the server is 67 and for the client is 68. It is a
client-server protocol that uses UDP services. An IP address is assigned from a
pool of addresses. In DHCP, the client and the server exchange mainly 4 DHCP
messages in order to make a connection, also called the DORA process, but there
are 8 DHCP messages in the process.
[Figure: Working of DHCP]
The 8 DHCP Messages
1. DHCP Discover Message: This is the first message generated in the
communication process between the server and the client. This message is
generated by the Client host in order to discover if there is any DHCP
server/servers are present in a network or not. This message is broadcasted to
all devices present in a network to find the DHCP server. This message is 342 or
576 bytes long.
[Figure: DHCP Discover Message]
As shown in the figure, the source MAC address (client PC) is 08002B2EAF2A, the
destination MAC address(server) is FFFFFFFFFFFF, the source IP address is
0.0.0.0(because the PC has had no IP address till now) and the destination IP
address is 255.255.255.255 (the IP address used for broadcasting). As the
discover message is broadcast to find the DHCP server or servers in the
network, the broadcast IP address and MAC address are used.
2. DHCP Offer Message: The server will respond to the host in this message
specifying the unleased IP address and other TCP configuration information. This
message is broadcasted by the server. The size of the message is 342 bytes. If
there is more than one DHCP server present in the network then the client host
will accept the first DHCP OFFER message it receives. Also, a server ID is
specified in the packet in order to identify the server.
[Figure: DHCP Offer Message]
Now, for the offer message, the source IP address is 172.16.32.12 (the server’s
IP address in the example) and the destination IP address is 255.255.255.255
(the broadcast IP address); the source MAC address is 00AA00123456 (the
server’s MAC address) and the destination MAC address is 00:11:22:33:44:55 (the
client’s MAC address).
Also, the server has provided the offered IP address 192.16.32.51 and a lease
time of 72 hours(after this time the entry of the host will be erased from the
server automatically). Also, the client identifier is the PC MAC address
(08002B2EAF2A) for all the messages.
3. DHCP Request Message: When a client receives an offer message, it responds by
broadcasting a DHCP request message. The client will produce a gratuitous ARP in
order to find if there is any other host present in the network with the same IP
address. If there is no reply from another host, then there is no host with the
same TCP configuration in the network and the message is broadcasted to the
server showing the acceptance of the IP address. A Client ID is also added to
this message.
[Figure: DHCP Request Message]
Now, the request message is broadcast by the client PC, therefore the source IP
address is 0.0.0.0 (as the client has no IP address yet), the destination IP
address is 255.255.255.255 (the broadcast IP address), the source MAC address
is 08002B2EAF2A (the PC’s MAC address), and the destination MAC address is
FFFFFFFFFFFF.
Note – This message is broadcast after the ARP request broadcast by the PC to
find out whether any other host is using the offered IP. If there is no reply,
the client host broadcasts the DHCP request message to the server, showing
acceptance of the IP address and other TCP/IP configuration.
4. DHCP Acknowledgment Message: In response to the request message received, the
server will make an entry with a specified client ID and bind the IP address
offered with lease time. Now, the client will have the IP address provided by the
server.
Now the server will make an entry of the client host with the offered IP address
and lease time. This IP address will not be provided by the server to any other
host. The destination MAC address is 00:11:22:33:44:55 (client’s MAC address) and
the destination IP address is 255.255.255.255 and the source IP address is
172.16.32.12 and the source MAC address is 00AA00123456 (server MAC address).
5. DHCP Negative Acknowledgment Message: Whenever a DHCP server receives a
request for an IP address that is invalid according to the configured scopes,
it sends a DHCP NAK message to the client, e.g., when the server has no unused
IP address or the pool is empty.
6. DHCP Decline: If the DHCP client determines the offered configuration
parameters are different or invalid, it sends a DHCP decline message to the
server. When there is a reply to the gratuitous ARP by any host to the client,
the client sends a DHCP decline message to the server showing the offered IP
address is already in use.
7. DHCP Release: A DHCP client sends a DHCP release packet to the server to
release the IP address and cancel any remaining lease time.
8. DHCP Inform: If a client has obtained an IP address manually, it uses the
DHCP inform message to obtain other local configuration parameters, such as
the domain name. In reply to the DHCP inform message, the DHCP server
generates a DHCP ack message with a local configuration suitable for the
client, without allocating a new IP address. This DHCP ack message is unicast
to the client.
Note – All the messages can be unicast also by the DHCP relay agent if the server
is present in a different network.
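The DORA exchange above can be modelled with a toy in-memory server (the class, addresses, and scenario are illustrative, not a real DHCP implementation):

```python
class ToyDhcpServer:
    """Minimal model of DORA: Discover -> Offer, Request -> Ack,
    and Nak when the address pool is exhausted."""
    def __init__(self, pool):
        self.pool = list(pool)   # unleased IP addresses
        self.leases = {}         # client MAC -> leased IP

    def handle(self, message, client_mac):
        if message == "DHCPDISCOVER" and self.pool:
            return ("DHCPOFFER", self.pool[0])   # offer an unleased IP
        if message == "DHCPREQUEST" and self.pool:
            ip = self.pool.pop(0)
            self.leases[client_mac] = ip         # bind IP to the client ID
            return ("DHCPACK", ip)
        return ("DHCPNAK", None)                 # pool empty / invalid request

server = ToyDhcpServer(["192.16.32.51"])
mac = "08002B2EAF2A"
print(server.handle("DHCPDISCOVER", mac))      # ('DHCPOFFER', '192.16.32.51')
print(server.handle("DHCPREQUEST", mac))       # ('DHCPACK', '192.16.32.51')
print(server.handle("DHCPDISCOVER", "other"))  # ('DHCPNAK', None): pool empty
```

A real exchange is carried in UDP broadcasts (ports 67/68) with the packet format shown earlier; the sketch only captures the message sequence and lease bookkeeping.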
Security Considerations for Using DHCP
To make sure your DHCP servers are safe, consider these DHCP security issues:
 Limited IP Addresses: A DHCP server can only offer a set number of IP
addresses. This means attackers could flood the server with requests,
causing essential devices to lose their connection.
 Fake DHCP Servers: Attackers might set up fake DHCP servers to give out
fake IP addresses to devices on your network.
 DNS Access: When users get an IP address from DHCP, they also get DNS
server details. This could potentially allow them to access more data than
they should. It’s important to restrict network access, use firewalls, and
secure connections with VPNs to protect against this.
Protection Against DHCP Starvation Attack
A DHCP starvation attack happens when a hacker floods a DHCP server with requests
for IP addresses. This overwhelms the server, making it unable to assign
addresses to legitimate users. The hacker can then block access for authorized
users and potentially set up a fake DHCP server to intercept and manipulate
network traffic, which could lead to a man-in-the-middle attack.
Reasons Why Enterprises Must Automate DHCP?
Automating your DHCP system is crucial for businesses because it reduces the time
and effort your IT team spends on manual tasks. For instance, DHCP-related issues
like printers not connecting or subnets not working with the main network can be
avoided automatically.
Automated DHCP also allows your operations to grow smoothly. Instead of hiring
more staff to handle tasks that automation can manage, your team can focus on
other important areas of business growth.
Advantages
 Centralized management of IP addresses.
 Centralized and automated TCP/IP configuration.
 Ease of adding new clients to a network.
 Reuse of IP addresses reduces the total number of IP addresses that are
required.
 The efficient handling of IP address changes for clients that must be
updated frequently, such as those for portable devices that move to
different locations on a wireless network.
 Simple reconfiguration of the IP address space on the DHCP server without
needing to reconfigure each client.
 The DHCP protocol gives the network administrator a method to configure the
network from a centralized area.
 With the help of DHCP, easy handling of new users and the reuse of IP
addresses can be achieved.
Disadvantages
 IP conflict can occur.
 The problem with DHCP is that clients accept any server. Accordingly, when
another server is in the vicinity, the client may connect with it, and this
server may send invalid data to the client.
 The client is not able to access the network in absence of a DHCP Server.
 The name of the machine will not be changed when a new IP address is
assigned.
Conclusion
In conclusion, DHCP is a technology that simplifies network setup by
automatically assigning IP addresses and network configurations to devices. While
DHCP offers convenience, it’s important to manage its security carefully. Issues
such as IP address exhaustion, and potential data access through DNS settings
highlight the need for robust security measures like firewalls and VPNs to
protect networks from unauthorized access and disruptions. DHCP remains essential
for efficiently managing network connections while ensuring security against
potential risks.

[ ] FTP (File Transfer Protocol)


FTP, or File Transfer Protocol, is one of the earliest and most common means of
transferring files on the internet. Located in the application layer of the OSI
model, FTP is a basic system that helps in transferring files between a client
and a server. What makes FTP stand out is that it provides a reliable and
efficient means of transferring files from one system to another even if they
have different file structures and operating systems. Unlike protocols such as
HTTP, which cover hypertext and web resources in general, FTP is dedicated to
the management and transfer of text, binary, or image files.
What is File Transfer Protocol?
FTP is a standard communication protocol. There are various other protocols,
like HTTP, that can be used to transfer files between computers, but they are
general-purpose, whereas FTP is dedicated to file transfer. Moreover, the
systems involved in a connection may be heterogeneous, i.e., they may differ in
operating systems, directories, structures, character sets, etc.; FTP shields
the user from these differences and transfers data efficiently and reliably.
FTP can transfer ASCII, EBCDIC, or image files. ASCII is the default file-share
format; in it, each character is encoded in NVT ASCII. In ASCII or EBCDIC the
destination must be ready to accept files in this mode. The image file format
is the default format for transferring binary files.
Types of FTP
There are different ways through which a server and a client do a file transfer
using FTP. Some of them are mentioned below:
 Anonymous FTP: Anonymous FTP is enabled on some sites whose files are
available for public access. A user can access these files without having
any username or password. Instead, the username is set to anonymous and
the password is guest by default. Here, user access is very limited.
For example, the user can be allowed to copy the files but not to navigate
through directories.
 Password Protected FTP: This type of FTP is similar to the previous one,
but the change in it is the use of username and password.
 FTP Secure (FTPS): It is also called FTP over Secure Sockets Layer
(FTP-SSL). It is a more secure version of FTP data transfer: whenever an
FTP connection is established, Transport Layer Security (TLS) is enabled.
 FTP over Explicit SSL/TLS (FTPES): FTPES helps by upgrading FTP Connection
from port 21 to an encrypted connection.
 Secure FTP (SFTP): SFTP is not an FTP protocol; it is a subset of the
Secure Shell protocol, as it works on port 22.
What is FTP Useful For?
FTP is especially useful for:
 Transferring Large Files: FTP can transfer large files in one shot; thus
applicable when hosting websites, backing up servers, or sharing files in
large quantities.
 Remote File Management: Files on a remote server can be uploaded,
downloaded, deleted, renamed, and copied according to the users’ choices.
 Automating File Transfers: FTP works well for running file transfers from
predefined scripts and scheduled jobs.
 Accessing Public Files: Anonymous FTP allows anybody, irrespective of
identity, to download certain files with no permissions needed.
How to Use FTP?
To use FTP, follow these steps:
 Connect to the FTP Server: One can connect to the server using the address,
username and password through an FTP client or a command line interface.
Anonymous Information may not need a username and password.
 Navigate Directories: Some commands include ls that is used to list
directories and cd that is used to change directories.
 Transfer Files: File transfer may be done by using the commands such as get
for downloading files, and put for uploading files.
 Manage Files: Make operations like deletion (Delete), renaming (Rename) as
well as copying (Copy) of files.
 Close the Connection: Once file transfer has been accomplished, terminate
the connection by giving the bye or quit command.
How Does FTP Work?
FTP is a client-server protocol that has two communication channels: a command
channel for conversation control and a data channel for file content.
Here are the steps by which FTP works:
 A user has to log in to the FTP server first; there may be some servers
where you can access content without logging in, known as anonymous FTP.
 Client can start a conversation with server, upon requesting to download a
file.
 The user can start different functions like upload, delete, rename, copy
files, etc. on server.
FTP can work in different modes, namely Active and Passive.
Types of Connection in FTP
 Control Connection
 Data Connection
Control Connection
For sending control information like user identification, password, commands to
change the remote directory, commands to retrieve and store files, etc., FTP
makes use of a control connection. The control connection is initiated on port
number 21.
Data connection
For sending the actual file, FTP makes use of a data connection. A data
connection is initiated on port number 20.
FTP sends the control information out-of-band as it uses a separate control
connection. Some protocols send their request and response header lines and the
data in the same TCP connection. For this reason, they are said to send their
control information in-band. HTTP and SMTP are such examples.
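Python's standard-library ftplib client reflects this split: the FTP object holds the control connection (port 21 by default), and each transfer opens a separate data connection. A minimal sketch, where the host and file names are placeholders:

```python
from ftplib import FTP, FTP_PORT  # stdlib FTP client

def download(host, remote_name, local_name,
             user="anonymous", passwd="guest"):
    """Log in over the control connection, then fetch one file;
    retrbinary opens a separate data connection for the transfer."""
    with FTP(host) as ftp:           # control connection (port 21)
        ftp.login(user, passwd)      # anonymous FTP credentials by default
        with open(local_name, "wb") as f:
            ftp.retrbinary(f"RETR {remote_name}", f.write)

print(FTP_PORT)  # 21: the default control-connection port
```

A call such as `download("ftp.example.com", "readme.txt", "readme.txt")` would run one complete session: login, RETR over the data connection, then QUIT when the context manager closes.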

FTP Session
When an FTP session is started between a client and a server, the client
initiates a control TCP connection with the server side. The client sends control
information over this. When the server receives this, it initiates a data
connection to the client side. But the control connection remains active
throughout the user session. As we know HTTP is stateless . But FTP needs to
maintain a state about its user throughout the session.
FTP Clients
FTP works on a client-server model. The FTP client is a program that runs on the
user’s computer to enable the user to talk to and get files from remote
computers. It is a set of commands that establishes the connection between two
hosts, helps to transfer the files, and then closes the connection.
Some of the commands are:
get filename (retrieve a file from the server)
mget filenames (retrieve multiple files from the server)
ls (list files available in the current directory of the server)
There are also built-in FTP programs, which make it easier to transfer files
without requiring the user to remember the commands.
FTP Data Types
The data type of a file, which determines how the file is represented overall, is
the first piece of information that can be provided about it. The FTP standard
specifies the following four categories of data:
 ASCII: Describes an ASCII text file in which each line is indicated by the
previously mentioned type of end-of-line marker.
 EBCDIC: For files that use IBM’s EBCDIC character set, this type is
conceptually identical to ASCII.
 Image: This is the “black box” mode described earlier; the file has no
formal internal structure and is transferred one byte at a time without any
processing.
 Local: Files containing data in logical bytes with a bit count other than
eight can be handled by this data type.
Advantages of FTP
 File sharing is among the advantages of FTP: files can be shared between
two machines over the network.
 Speed is one of the main benefits of FTP.
 Since we don’t have to finish every operation to obtain the entire file, it
is more efficient.
 Using the username and password, we must log in to the FTP server. As a
result, FTP might be considered more secure.
 We can move files back and forth via FTP. For example, a firm manager can
distribute information to every employee, and all of them can reply on the
same server.
Disadvantages of FTP
 File size limit is a drawback of FTP: only files up to 2 GB in size can be
transferred.
 More than one receiver is not supported by FTP.
 FTP does not encrypt the data; this is one of its biggest drawbacks.
 FTP is insecure: login IDs and passwords are sent in plain text and can be
captured by attackers.
[ ] TelNet
TELNET stands for Teletype Network. It is a client/server application
protocol that provides access to virtual terminals of remote systems on local
area networks or the Internet. The local computer uses a telnet client program
and the remote computers use a telnet server program. In this article, we will
discuss every point about TELNET.
What is Telnet?
TELNET is a type of protocol that enables one computer to connect to a remote
computer. It is used as a standard TCP/IP protocol for virtual terminal service
which is provided by ISO. The computer which starts the connection is known as
the local computer. The computer which is being connected to i.e. which accepts
the connection known as the remote computer. During telnet operation, whatever is
being performed on the remote computer will be displayed by the local computer.
Telnet operates on a client/server principle.
How TELNET Works?
 Client-Server Interaction
o The Telnet client initiates the connection by sending requests to the
Telnet server.
o Once the connection is established, the client can send commands to
the server.
o The server processes these commands and responds accordingly.
 Character Flow
o When the user types on the local computer, the local operating system
accepts the characters.
o The Telnet client transforms these characters into a universal
character set called Network Virtual Terminal (NVT) characters.
o These NVT characters travel through the Internet to the remote
computer via the local TCP/IP protocol stack.
o The remote Telnet server converts these characters into a format
understandable by the remote computer.
o The remote operating system receives the characters from a pseudo-
terminal driver and passes them to the appropriate application
program.
 Network Virtual Terminal (NVT)
o NVT is a virtual terminal in Telnet that provides a common structure
shared by different types of real terminals.
o It ensures communication compatibility between various terminals with
different operating systems.
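At the byte level, Telnet endpoints negotiate terminal options using IAC command sequences. The constants below are the real values from RFC 854/857, but the two-line scenario is only an illustrative sketch:

```python
# Telnet option negotiation: IAC (Interpret As Command, byte 255) is
# followed by a verb (WILL=251, WONT=252, DO=253, DONT=254) and an
# option code (ECHO=1, from RFC 857).
IAC, WILL, DO, ECHO = 255, 251, 253, 1

def negotiate_echo():
    server_offer = bytes([IAC, WILL, ECHO])  # server: "I will echo"
    client_reply = bytes([IAC, DO, ECHO])    # client: "please do echo"
    return server_offer, client_reply

offer, reply = negotiate_echo()
print(offer.hex())  # fffb01
print(reply.hex())  # fffd01
```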
Uses of TELNET
 Remote Administration and Management
 Network Diagnostics
 Understanding Command-Line Interfaces
 Accessing Bulletin Board Systems (BBS)
 Automation and Scripting
Advantages of TELNET
 It provides remote access to someone’s computer system.
 Telnet allows the user for more access with fewer problems in data
transmission.
 Telnet saves a lot of time.
 Older systems can be connected to newer systems via telnet, even when they
have different operating systems.
Disadvantages of TELNET
 As it is somewhat complex, it can be difficult for beginners to
understand.
 Data is sent in the form of plain text; that is why it is not secure.
 Some capabilities are disabled because the remote and local devices are
not properly interlinked.

MODULE >       1     2     3     4     5     6

2024 May      15     5    15    75    10    10
2023 Dec      10    10    25    60    10    10
2023 May      15    10    25    35    15    20
2022 Dec       5    10    10    45    25    15
Last 4 Avg    15    10    25    55    15    15
2022 May      10    10    20    20    20    20
2019 Dec      15    10    35    25    15    20
2019 May      20    10    15    45    15    10
2018 Dec      30     5    15    70    20    10
Total        120    60   195   375   130   125

1. Introduction to Networking (10-15 marks)


1. Explain ISO OSI reference model with diagram.
2. State and explain the design issues of OSI Layer.
3. List two ways in which the OSI reference model and the TCP/IP reference model
are the same. Now list two ways in which they differ.
4. What are three reasons for using layered protocols? (OR Explain the need of
layering for communication and networking). What are two possible disadvantages
of using layered protocols?
5. Differentiate between connection oriented and connectionless services.
6. What is topology? Explain the types of topologies with diagram, advantages and
disadvantages.
7. Write a short note on Internetworking devices

2. Physical Layer (5-10 marks)


1. Explain different types of guided transmission media in detail.
2. Compare the performance characteristics of coaxial, twisted pair and fibre
optic transmission media.
3. Data Link Layer (15-25 marks)
1. What is channel allocation problem?
2. Explain CSMA Protocols. Explain how collisions are handled in CSMA/CD
3. Explain sliding window protocol using selective repeat technique.
4. Compare the performance of Selective repeat & Go-back-N protocol.
5. Explain the Go-back-N protocol.
6. Explain one-bit sliding window protocol (Stop and Wait).
7. Explain different framing methods. What are the advantages of variable-length
frames over fixed-length frames?
8. Explain design issues of data link layer.
9. List the types of Error detection and correction techniques with the help of
example.
10. 4 data bits with binary value 1010 are to be encoded using an even-parity
Hamming code. What is the binary value after encoding?
11. Numerical on CSMA/CD and ALOHA, Slotted ALOHA:
i. Consider building a CSMA/CD network running at 1Gbps over a 1 km cable with no
repeaters. The signal speed of the cable is 200,000 km/sec. What is the minimum
frame size?
ii. A network with CSMA/CD has 10 Mbps bandwidth and 25.6 μs maximum propagation
delay. What is the minimum frame size?
iii. What is the throughput of the system both in Pure ALOHA and Slotted ALOHA,
if the network transmits 200 bits frames on a shared channel of 200 Kbps and the
system produces:
a) 1000 frames per second
b) 500 frames per second
iv. A 5 km long broadcast LAN has 10^7 bps bandwidth and uses CSMA/CD. The
signal travels along the wire at 5 x 10^8 m/s. What is the minimum packet size
that can be used on this network?
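For the CSMA/CD numericals above, the governing relation is that the transmission time of the minimum frame must be at least one round-trip propagation delay. A quick sanity check of part (i):

```python
def min_frame_bits(bandwidth_bps, distance_m, speed_mps):
    """CSMA/CD: frame transmission time >= 2 * propagation delay,
    so minimum frame size = bandwidth * 2 * d / v (in bits)."""
    return bandwidth_bps * (2 * distance_m / speed_mps)

# (i) 1 Gbps over a 1 km cable, signal speed 200,000 km/s = 2e8 m/s:
print(min_frame_bits(1e9, 1_000, 2e8))  # 10000.0 bits
```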
4. Network Layer (45-75 marks)
1. Explain IPv4 header format in detail
2. Explain classful and classless IPv4 addressing
3. What is subnetting? Compare subnetting and super netting. What are the default
subnet masks? Explain the need for subnet mask in subnetting.
4. Explain Link State Routing with suitable example.
5. Explain Distance vector routing protocol. What is count to infinity problem.
How to overcome it?
6. Explain ARP and RARP protocols in detail.
7. Write a short note on ICMP protocol.
8. What is Congestion control? Explain Open loop and closed loop congestion
control.
9. What is traffic shaping? Explain leaky bucket algorithm and compare it with
token bucket algorithm
10. Numerical on subnetting: (Asked in 3 of the last 4 exams)
1) An ISP is granted a block of addresses starting with 190.100.0.0/16 (65,536).
The ISP needs to distribute these addresses to three groups of customers as
follows:
a) The first group has 64 customers; each needs 256 addresses
b) The second group has 128 customers; each needs 128 addresses.
c) The third group has 128 customers; each needs 64 addresses.
Design the subblocks and find out how many addresses are still available after
these allocations.
2) An organization has been granted a block of addresses starting with
105.8.71.0/24. The organization wants to distribute this block to 11 subnets as
follows:
a) First Group has 3 medium size businesses, each need 16 addresses
b) The second Group has 4 medium size businesses, each need 32 addresses.
c) The third Group has 4 households, each need 4 addresses.
Design the sub blocks and give slash notation for each subblock. Find how many
addresses have been left after this allocation.
3) A large number of consecutive IP address are available starting at 198.16.0.0.
Suppose that four organizations, A, B, C, and D, request 4000, 2000, 4000, and
8000 addresses, respectively, and in that order. For each of these, give the
first IP address assigned, the last IP address assigned, and the mask in the
w.x.y.z/s notation.
11. Compare the network layer protocols IPv4 and IPv6.
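The subnetting numericals above can be sanity-checked with Python's ipaddress module. For problem (1), assuming each customer receives a power-of-two aligned subblock (a /24, /25, or /26 respectively):

```python
import ipaddress

block = ipaddress.ip_network("190.100.0.0/16")

# Group sizes: 64 customers x 256 addresses (/24 each),
#              128 customers x 128 addresses (/25 each),
#              128 customers x 64 addresses (/26 each).
used = 64 * 256 + 128 * 128 + 128 * 64
remaining = block.num_addresses - used

print(block.num_addresses)  # 65536
print(used)                 # 40960
print(remaining)            # 24576 addresses still available
```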
5. Transport layer (10-25 marks)
1. Explain the TCP connection establishment(Three-Way Handshake technique) and
Connection release.
2. Differentiate between TCP and UDP.
3. Explain Slow-Start algorithm for TCP's congestion handling policy.
4. Write a shot note on TCP Timers.
5. Explain TCP flow control.
6. Application layer (10-20 marks)
1. What is need of DNS and explain how DNS works(functioning)? Explain DNS
namespace.
2. Write a short note on SMTP.
3. Explain HTTP. Draw and summarize the structure of HTTP request and response.
4. Explain working(operation) of DHCP protocol.
5. Explain DHCP message format in detail.
