CN Notes
Q1.
[ ] a. Explain the principal differences between connectionless and
connection-oriented communication.
[ ] b. What is channel allocation problem?
[ ] c. Find the error, if any, in the following IPv4 addresses:
i. 221.24.7.8.20 ii. 75.45.351.14
[ ] d. Differentiate between TCP and UDP
[ ] e. Write short note on SMTP
Q2.
[ ] a. Describe OSI reference model with a neat diagram.
[ ] b. Explain different framing methods.
Q3.
[ ] a. Explain different types of guided transmission media in detail.
[ ] b. Explain sliding window protocol using selective repeat technique.
Q4.
[ ] a. Explain Link State Routing with a suitable example.
[ ] b. What is the need for DNS? Explain how DNS works.
Q5.
[ ] a. Explain IPv4 header format in detail.
[ ] b. Explain Three Way Handshake Technique in TCP
Q6.
[ ] a. Explain leaky bucket algorithm and compare it with token bucket
algorithm.
[ ] b. Write short notes on:
[ ] i. TCP Timers
[ ] ii. HTTP
Q1. Solve any Four out of Five
[ ] a. Explain the need for layering in a reference model for communication and
networking.
[ ] b. Explain one bit sliding window protocol.
[ ] c. Explain IPv4 header format with diagram.
[ ] d. Differentiate between TCP and UDP. [COMMON]
[ ] e. What is the need for DNS? Explain DNS Name Space. [COMMON]
Q2. Attempt the following
[ ] a. Explain the following transmission media: Twisted Pair, Coaxial Cable
(baseband and broadband), Fiber Optic.
[ ] b. What is channel allocation problem? Explain CSMA/CD protocol.
[ ] Consider building a CSMA/CD network running at 1Gbps over a 1-km cable with
no repeaters. The signal speed of the cable is 200,000 km/sec. What is the
minimum frame size?
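A worked sketch of the numerical above, using the standard CSMA/CD requirement that a frame must take at least one full round-trip propagation time to transmit, so the sender can still detect a collision at the far end:

```python
def min_frame_bits(bandwidth_bps: float, distance_m: float, speed_mps: float) -> float:
    """Minimum frame size so the sender is still transmitting when a
    collision at the far end of the cable propagates back (L >= 2*B*d/v)."""
    round_trip_s = 2 * distance_m / speed_mps
    return bandwidth_bps * round_trip_s

# 1 Gbps over 1 km, signal speed 200,000 km/s = 2e8 m/s
bits = min_frame_bits(1e9, 1_000, 2e8)
print(bits)       # 10000.0 bits
print(bits / 8)   # 1250.0 bytes
```

The same helper answers the 5 km CSMA/CD question later in these notes once a bandwidth is assumed for it.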
Q3. Attempt the following
[ ] a. Explain Classful and Classless IPv4 addressing.
[ ] b. Explain TCP connection establishment and TCP connection release.
Q4. Attempt the following
[ ] a. Explain Selective Repeat Protocol for flow control.
[ ] b. Explain shortest path (Dijkstra's Algorithm) routing algorithm.
Q5. Attempt the following
[ ] a. A large number of consecutive IP addresses are available starting at
198.16.0.0. Suppose that four organizations, A, B, C, and D, request 4000, 2000,
4000, and 8000 addresses, respectively, and in that order. For each of these,
give the first IP address assigned, the last IP address assigned, and the mask in
the w.x.y.z/s notation.
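The allocation above (a classic Tanenbaum exercise) can be checked with Python's `ipaddress` module. The `allocate` helper below is illustrative, not a standard API: it rounds each request up to a power of two and aligns each block to a multiple of its own size, which is what CIDR requires:

```python
import ipaddress

def allocate(start, requests):
    """Greedy CIDR allocation in request order: each request is rounded up
    to a power of two and each block is aligned to its own size."""
    addr = int(ipaddress.IPv4Address(start))
    nets = []
    for n in requests:
        size = 1
        while size < n:          # round up to a power of two
            size *= 2
        if addr % size:          # align start to a multiple of the block size
            addr += size - (addr % size)
        prefix = 32 - (size.bit_length() - 1)
        nets.append(ipaddress.ip_network((addr, prefix)))
        addr += size
    return nets

for org, net in zip("ABCD", allocate("198.16.0.0", [4000, 2000, 4000, 8000])):
    print(org, net, "first:", net[0], "last:", net[-1])
```

Note how C cannot start at 198.16.24.0 (not a /20 boundary) and is pushed up to 198.16.32.0, and D is similarly pushed to 198.16.64.0/19.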
[ ] b. Explain DHCP
[ ] c. Explain ARP protocol in detail.
Q6. Attempt the following
[ ] a. Explain IP address message format and its operation in detail.
[ ] b. Explain ARP protocol in detail.
Q1. Attempt any four of the following
[ ] a. What is subnetting? Compare subnetting and supernetting
[ ] b. What are three reasons for using layered protocols? What are two possible
disadvantages of using layered protocols?
[ ] c. Explain the count to infinity problem in detail.
[ ] d. List two ways in which the OSI reference model and the TCP/IP reference
model are the same. Now list two ways in which they differ. [COMMON]
[ ] e. A 4-bit data word with binary value 1010 is to be encoded using an
even-parity Hamming code. What is the binary value after encoding?
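A short sketch for the Hamming question above, assuming the usual Hamming(7,4) layout: parity bits at positions 1, 2, 4 and data bits d1..d4 at positions 3, 5, 6, 7, with even parity:

```python
def hamming74_encode(d1, d2, d3, d4):
    """Even-parity Hamming(7,4): returns [p1, p2, d1, p4, d2, d3, d4]."""
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

code = hamming74_encode(1, 0, 1, 0)   # data 1010
print("".join(map(str, code)))        # 1011010
```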
Q2. Attempt the following
[ ] a. Define guided transmission media. Illustrate the details of coaxial cable
with a diagram. State any 5 comparative characteristics of coaxial cable versus
fiber optic and twisted pair cables.
[ ] b. Explain how collisions are handled in CSMA/CD. A 5 km long broadcast LAN
uses CSMA/CD. [COMMON]
[ ] The signal travels along the wire at 5 x 10^8 m/s. What is the minimum
packet size that can be used on this network?
Q3. Attempt the following
[ ] a. An organization has been granted a block of addresses starting with
105.8.71.0/24. The organization wants to distribute this block to 11 subnets as
follows:
1. The first group has 3 medium-size businesses, each needing 16 addresses.
2. The second group has 4 medium-size businesses, each needing 32 addresses.
3. The third group has 4 households, each needing 4 addresses.
Design the sub-blocks and give the slash notation for each sub-block. Find how
many addresses are left after this allocation.
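One possible design for the subnetting problem above, sketched with `ipaddress`. Allocating the largest blocks first keeps every sub-block aligned automatically; this particular ordering is a design choice, not the only valid answer:

```python
import ipaddress

base = ipaddress.ip_network("105.8.71.0/24")
# (prefix, count): 4 x /27 for the 32-address businesses, 3 x /28 for the
# 16-address businesses, 4 x /30 for the households -- largest blocks first
plan = [(27, 4), (28, 3), (30, 4)]

addr = int(base.network_address)
allocated = []
for prefix, count in plan:
    size = 2 ** (32 - prefix)
    for _ in range(count):
        allocated.append(ipaddress.ip_network((addr, prefix)))
        addr += size

used = sum(n.num_addresses for n in allocated)
print(*allocated, sep="\n")
print("addresses left:", base.num_addresses - used)   # 256 - 192 = 64
```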
[ ] b. Explain the classful IP addressing scheme in detail. List the advantages
and disadvantages of the classless IP addressing scheme.
Q4.
[ ] a) Explain the open loop congestion control and closed loop congestion
control policies in detail
[ ] b) Explain the TCP connection establishment and Connection release.
Q5.
[ ] a) Explain the concept of the sliding window protocol. Explain the
selective repeat protocol with an example. Compare the performance of Selective
Repeat and Go-Back-N protocols.
[ ] b) Explain the link state routing algorithm with an example.
Q6. Write a short note on following
[ ] a) ARP & RARP
[ ] b) DNS [COMMON]
Q1.
[ ] a) State and explain the design issues of OSI layers. [COMMON]
[ ] b) Compare the performance characteristics of coaxial, twisted pair and
fiber optic transmission media.
[ ] c) List the types of Error Detection and Correction techniques with the help
of example.
[ ] d) Compare the Network layer protocols IPv4 and IPv6.
Q2.
[ ] a) Illustrate TCP protocol for establishing a connection using 3-way
handshake technique in the transport layer. [COMMON]
[ ] b) Explain ISO-OSI reference model with diagram. [COMMON]
Q3.
[ ] a) What is the throughput of the system in both Pure ALOHA and Slotted
ALOHA, if the network transmits 200-bit frames on a shared channel of 200 Kbps
and the system produces: a) 1000 frames per second, b) 500 frames per second?
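The ALOHA question above uses the standard formulas S = G·e^(−2G) (pure) and S = G·e^(−G) (slotted), where G is the offered load in frames per frame transmission time. The sketch below follows the common textbook convention of reporting produced frames × S as the "throughput in frames":

```python
import math

def aloha_throughput(frames_per_s, frame_bits, channel_bps):
    """Return (S_pure, S_slotted), the ALOHA throughputs as fractions,
    from the offered load G in frames per frame transmission time."""
    T = frame_bits / channel_bps     # frame transmission time: 200/200000 = 1 ms
    G = frames_per_s * T             # offered load (frames per frame-time)
    return G * math.exp(-2 * G), G * math.exp(-G)

for rate in (1000, 500):
    pure, slotted = aloha_throughput(rate, 200, 200_000)
    # textbook-style "successful frames": produced frames x S
    print(rate, round(rate * pure), round(rate * slotted))
```

For 1000 frames/s, G = 1, giving the well-known maxima S ≈ 0.135 (pure) and S ≈ 0.368 (slotted).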
[ ] b) Analyze the steps involved in the Token Bucket and Leaky Bucket
algorithms, stating their need and benefits in the network layer, with suitable
diagrams.
Q4.
[ ] a) Explain Link State Routing with the help of an example.
[ ] b) An ISP is granted a block of addresses starting with 190.100.0.0/16
(65,536 addresses). The ISP needs to distribute these addresses to three groups
of customers as follows:
a. The first group has 64 customers; each needs 256 addresses.
b. The second group has 128 customers; each needs 128 addresses.
c. The third group has 128 customers; each needs 64 addresses.
[ ] Design the subblocks and find out how many addresses are still available
after these allocations.
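A sketch of the ISP allocation above. The block sizes follow directly from the request sizes (256 addresses → /24, 128 → /25, 64 → /26), and the counts are easy to verify in Python:

```python
import ipaddress

addr = int(ipaddress.IPv4Address("190.100.0.0"))
total = 65536          # the whole /16
used = 0
for customers, per_customer in [(64, 256), (128, 128), (128, 64)]:
    prefix = 32 - (per_customer.bit_length() - 1)     # 256->/24, 128->/25, 64->/26
    first = ipaddress.ip_network((addr, prefix))
    addr += customers * per_customer
    used += customers * per_customer
    last = ipaddress.ip_network((addr - per_customer, prefix))
    print(f"{customers} customers x /{prefix}: {first} ... {last}")
print("still available:", total - used)   # 65536 - 40960 = 24576
```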
Q5.
[ ] a) What is Congestion control? Explain Open loop and Close loop Congestion
control.
[ ] b) Draw and summarize the structure of HTTP request and response.
Q6.
Write Short Note on (Any Two)
[ ] a) Address Resolution Protocol (ARP)
[ ] b) Classful and Classless Addressing
[ ] c) Distance Vector Routing (DVR)
Q1.
[ ] A. Explain design issues of layers in OSI reference model in computer
networks. Explain ISO OSI Reference model with diagram. [COMMON]
[ ] B. Explain CSMA/CA protocols. Explain how collisions are handled in CSMA/CD.
[COMMON]
[ ] C. Explain different framing methods. What are the advantages of
variable-length frames over fixed-length frames?
Q3. Solve any Two.
[ ] A. Explain IPv4 header format with diagram.
[ ] B. Explain different TCP Congestion Control policies.
[ ] C. Explain TCP flow control.
Q4. Solve any Two.
[ ] A. Explain ARP and RARP protocols in detail.
[ ] B. Explain the need for DNS (Domain Name System) and describe its
functioning. [COMMON]
[ ] C. Explain working of DHCP protocol.
LMT Video
MODULE 1
[ ] OSI Reference Model
The OSI (Open Systems Interconnection) Model is a set of rules that explains how
different computer systems communicate over a network. OSI Model was developed by
the International Organization for Standardization (ISO). The OSI Model consists
of 7 layers and each layer has specific functions and responsibilities.
This layered approach makes it easier for different devices and technologies to
work together. OSI Model provides a clear structure for data transmission and
managing network issues. The OSI Model is widely used as a reference to
understand how network systems function.
Layer 1 – Physical Layer
The lowest layer of the OSI reference model is the Physical Layer. It is
responsible for the actual physical connection between the devices. The physical
layer contains information in the form of bits. Physical Layer is responsible for
transmitting individual bits from one node to the next. When receiving data, this
layer will get the signal received and convert it into 0s and 1s and send them to
the Data Link layer, which will put the frame back together. Common physical
layer devices are Hub, Repeater, Modem, and Cables.
Functions of the Physical Layer
Bit Synchronization: The physical layer provides the synchronization of the
bits by providing a clock. This clock controls both sender and receiver
thus providing synchronization at the bit level.
Bit Rate Control: The Physical layer also defines the transmission rate
i.e. the number of bits sent per second.
Physical Topologies: Physical layer specifies how the different,
devices/nodes are arranged in a network i.e. bus topology, star topology,
or mesh topology.
Transmission Mode: Physical layer also defines how the data flows between
the two connected devices. The various transmission modes possible
are Simplex, half-duplex and full-duplex.
Layer 2 – Data Link Layer (DLL)
The data link layer is responsible for the node-to-node delivery of the message.
The main function of this layer is to make sure data transfer is error-free from
one node to another, over the physical layer. When a packet arrives in a network,
it is the responsibility of the DLL to transmit it to the Host using its MAC
address. Packet in the Data Link layer is referred to as Frame. Switches and
Bridges are common Data Link Layer devices.
The Data Link Layer is divided into two sublayers:
Logical Link Control (LLC)
Media Access Control (MAC)
The packet received from the Network layer is further divided into frames
depending on the frame size of the NIC (Network Interface Card). The DLL also
encapsulates the Sender's and Receiver's MAC addresses in the header.
The Receiver's MAC address is obtained by placing an ARP (Address Resolution
Protocol) request onto the wire asking "Who has that IP address?", and the
destination host replies with its MAC address.
Functions of the Data Link Layer
Framing: Framing is a function of the data link layer. It provides a way
for a sender to transmit a set of bits that are meaningful to the receiver.
This can be accomplished by attaching special bit patterns to the beginning
and end of the frame.
Physical Addressing: After creating frames, the Data link layer adds
physical addresses (MAC addresses) of the sender and/or receiver in the
header of each frame.
Error Control: The data link layer provides the mechanism of error control
in which it detects and retransmits damaged or lost frames.
Flow Control: The data rate must be constant on both sides else the data
may get corrupted thus, flow control coordinates the amount of data that
can be sent before receiving an acknowledgment.
Access Control: When a single communication channel is shared by multiple
devices, the MAC sub-layer of the data link layer helps to determine which
device has control over the channel at a given time.
Layer 3 – Network Layer
The network layer works for the transmission of data from one host to the other
located in different networks. It also takes care of packet routing i.e.
selection of the shortest path to transmit the packet, from the number of routes
available. The sender and receiver’s IP address are placed in the header by the
network layer. Segment in the Network layer is referred to as Packet. The
network layer is implemented by networking devices such as routers.
Functions of the Network Layer
Routing: The network layer protocols determine which route is suitable from
source to destination. This function of the network layer is known as
routing.
Logical Addressing: To identify each device inter-network uniquely, the
network layer defines an addressing scheme. The sender and receiver’s IP
addresses are placed in the header by the network layer. Such an address
distinguishes each device uniquely and universally.
Layer 4 – Transport Layer
The transport layer provides services to the application layer and takes services
from the network layer. The data in the transport layer is referred to
as Segments. It is responsible for the end-to-end delivery of the complete
message. The transport layer also provides the acknowledgment of the successful
data transmission and re-transmits the data if an error is found. Protocols
used in the Transport Layer are TCP, UDP, and SCTP.
At the sender’s side, the transport layer receives the formatted data from the
upper layers, performs Segmentation, and also implements Flow and error
control to ensure proper data transmission. It also adds Source and
Destination port number in its header and forwards the segmented data to the
Network Layer.
Generally, this destination port number is configured, either by default or
manually. For example, when a web application requests a web server, it
typically uses port number 80, because this is the default port assigned to
web applications. Many applications have default ports assigned.
At the Receiver’s side, Transport Layer reads the port number from its header and
forwards the Data which it has received to the respective application. It also
performs sequencing and reassembling of the segmented data.
Functions of the Transport Layer
Segmentation and Reassembly: This layer accepts the message from the
(session) layer, and breaks the message into smaller units. Each of the
segments produced has a header associated with it. The transport layer at
the destination station reassembles the message.
Service Point Addressing: To deliver the message to the correct process,
the transport layer header includes a type of address called service point
address or port address. Thus by specifying this address, the transport
layer makes sure that the message is delivered to the correct process.
Services Provided by Transport Layer
Connection-Oriented Service
Connectionless Service
Layer 5 – Session Layer
The Session Layer in the OSI Model is responsible for the establishment,
management, and termination of sessions between two devices. It also provides
authentication and security. Protocols used in the Session Layer are NetBIOS
and PPTP.
Functions of the Session Layer
Session Establishment, Maintenance, and Termination: The layer allows the
two processes to establish, use, and terminate a connection.
Synchronization: This layer allows a process to add checkpoints that are
considered synchronization points in the data. These synchronization points
help to identify the error so that the data is re-synchronized properly,
and ends of the messages are not cut prematurely and data loss is avoided.
Dialog Controller: The session layer allows two systems to start
communication with each other in half-duplex or full-duplex.
Example
Let us consider a scenario where a user wants to send a message through some
Messenger application running in their browser. The “Messenger” here acts as the
application layer which provides the user with an interface to create the data.
This message or so-called Data is compressed, optionally encrypted (if the data
is sensitive), and converted into bits (0’s and 1’s) so that it can be
transmitted.
Layer 6 – Presentation Layer
The presentation layer is also called the Translation layer. The data from the
application layer is extracted here and manipulated as per the required format to
transmit over the network. Protocols used in the Presentation Layer
are JPEG, MPEG, GIF, TLS/SSL, etc.
Functions of the Presentation Layer
Translation: For example, ASCII to EBCDIC.
Encryption/ Decryption: Data encryption translates the data into another
form or code. The encrypted data is known as the ciphertext and the
decrypted data is known as plain text. A key value is used for encrypting
as well as decrypting data.
Compression: Reduces the number of bits that need to be transmitted on the
network.
Layer 7 – Application Layer
At the very top of the OSI Reference Model stack of layers, we find the
Application layer which is implemented by the network applications. These
applications produce the data to be transferred over the network. This layer also
serves as a window for the application services to access the network and for
displaying the received information to the user. Protocols used in the
Application layer are SMTP, FTP, DNS, etc.
Functions of the Application Layer
The main functions of the application layer are given below.
Network Virtual Terminal (NVT): It allows a user to log on to a remote host.
File Transfer Access and Management (FTAM): This application allows a user
to access files on a remote host, retrieve files from a remote host, and
manage or control files from a remote computer.
Mail Services: Provide email service.
Directory Services: This application provides distributed database sources
and access for global information about various objects and services.
Data flows through the OSI model in a step-by-step process:
Application Layer: Applications create the data.
Presentation Layer: Data is formatted and encrypted.
Session Layer: Connections are established and managed.
Transport Layer: Data is broken into segments for reliable delivery.
Network Layer: Segments are packaged into packets and routed.
Data Link Layer: Packets are framed and sent to the next device.
Physical Layer: Frames are converted into bits and transmitted physically.
Each layer adds specific information to ensure the data reaches its destination
correctly, and these steps are reversed upon arrival.
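The step-by-step flow above can be illustrated with a toy encapsulation sketch. The header strings here are purely hypothetical placeholders, not real packet formats; the point is only that each layer prepends its own header to the payload handed down from above:

```python
# Toy encapsulation sketch: each layer wraps the payload from the layer
# above with its own (hypothetical) header string.
layers = [
    ("Transport", "TCP[src=49152 dst=80]"),
    ("Network",   "IP[10.0.0.1 -> 93.184.216.34]"),
    ("Data Link", "ETH[aa:aa -> bb:bb]"),
]

pdu = "GET / HTTP/1.1"              # data produced by the upper layers
for name, header in layers:
    pdu = header + " | " + pdu      # each layer prepends its own header
    print(f"{name:10s}: {pdu}")
```

On arrival, the receiver strips the headers in the opposite order, mirroring the "steps are reversed" description above.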
We can understand how data flows through OSI Model with the help of an example
mentioned below.
Let us suppose, Person A sends an e-mail to his friend Person B.
Step 1: Person A interacts with an e-mail application like Gmail or Outlook and
writes the email to be sent. (This happens at the Application Layer.)
Step 2: At the Presentation Layer, the mail application prepares the data for
transmission, e.g. encrypting and formatting it.
Step 3: At the Session Layer, a connection is established between the sender
and receiver on the internet.
Step 4: At Transport Layer, Email data is broken into smaller segments. It adds
sequence number and error-checking information to maintain the reliability of the
information.
Step 5: At Network Layer, Addressing of packets is done in order to find the best
route for transfer.
Step 6: At Data Link Layer, data packets are encapsulated into frames, then MAC
address is added for local devices and then it checks for error using error
detection.
Step 7: At Physical Layer, Frames are transmitted in the form of electrical/
optical signals over a physical network medium like ethernet cable or WiFi.
After the email reaches the receiver, i.e. Person B, the process is reversed and
the e-mail content is decrypted. At last, the email is shown in Person B's
email client.
Protocols Used in the OSI Layers
Layer 1 – Physical: establishes physical connections between devices.
PDU: Bits. Protocols: USB, SONET/SDH, etc.
Layer 2 – Data Link: node-to-node delivery of the message.
PDU: Frames. Protocols: Ethernet, PPP, etc.
Layer 3 – Network: transmission of data from one host to another, located in
different networks. PDU: Packets. Protocols: IP, ICMP, IGMP, OSPF, etc.
Layer 4 – Transport: takes service from the Network layer and provides it to
the Application layer. PDU: Segments (for TCP) or Datagrams (for UDP).
Protocols: TCP, UDP, SCTP, etc.
Layer 5 – Session: establishes and maintains connections, ensures
authentication and security. PDU: Data. Protocols: NetBIOS, RPC, PPTP, etc.
Layer 6 – Presentation: data from the application layer is extracted and
manipulated into the required format for transmission. PDU: Data.
Protocols: TLS/SSL, MIME, JPEG, PNG, ASCII, etc.
Layer 7 – Application: helps in identifying the client and synchronizing
communication. PDU: Data. Protocols: FTP, SMTP, DNS, DHCP, etc.
[ ] Layered Architectures
Every network consists of a specific number of functions, layers, and tasks to
perform. Layered Architecture in a computer network is defined as a model where a
whole network process is divided into various smaller sub-tasks. These divided
sub-tasks are then assigned to a specific layer to perform only the dedicated
tasks. A single layer performs only a specific type of task. To run the
application and provide all types of services to clients a lower layer adds its
services to the higher layer present above it. Therefore layered architecture
provides interactions between the sub-systems. If any type of modification is
done in one layer it does not affect the next layer.
Layered Architecture
As shown in the above diagram, there are five different layers. Therefore, it is
a five-layered architecture. Each layer performs a dedicated task. Data from a
lower layer, for example layer 1, is transferred to layer 2. Below all the
layers, the Physical Medium is present. The physical medium is responsible for
the actual communication taking place. Layered architecture provides a simple
interface for data transfer and communication.
Features of Layered Architecture
Layered architecture provides modularity and distinct interfaces between
layers.
Layered architecture ensures independence between layers: lower layers
offer services to higher layers without specifying how those services are
implemented.
Layered architecture breaks a large, unmanageable design into smaller
sub-tasks.
Every network can have a different number of functions and layers, each
with its own content.
The physical medium, which lies below layer 1, provides the actual
communication path.
The implementation of one layer can be modified without affecting the
other layers.
Elements of Layered Architecture
There are three different types of elements of a layered architecture. They are
described below:
Service: Service is defined as a set of functions and tasks provided by a
lower layer to a higher layer. Each layer performs a different type of
task; therefore, the services provided by each layer are different.
Protocol: Protocol is defined as a set of rules used by a layer for
exchanging and transmitting data with its peer entities. These rules can
include details regarding the type of content and the order in which it is
passed from one layer to another.
Interface: Interface is defined as the channel through which messages are
transmitted from one layer to another.
Significance of Layered Architecture
Divide and Conquer Approach: Layered architecture supports a divide and
conquer approach. An unmanageable, complex task is divided into smaller
sub-tasks, each carried out by a different layer. This approach reduces
the complexity of the problem or design process.
Easy to Modify: The layers are independent of each other. If the
implementation of one layer changes, the change does not affect the
working of the other layers involved in the task, which makes updates
and changes straightforward.
Modularity: Layered architecture is more modular than other architecture
models in computer networks. Modularity gives the layers more
independence and makes them easier to understand.
Easy to Test: Each layer performs a different, dedicated task, so each
layer can be analyzed and tested individually. This helps to isolate
problems and solve them more efficiently than debugging everything at
once.
Scalability: As networks grow in size and complexity, additional layers or
protocols may be added to meet new requirements while maintaining existing
functionality.
Security: The layered approach enables security measures to be implemented
at different levels, protecting the network from a variety of threats.
Efficiency: Each layer focuses on a specific aspect of communication,
optimizing resource allocation and performance.
Benefits of Layered Architecture
Modularity
Interoperability
Flexibility
Reusability
Scalability
Security
Challenges in Layered Architecture
Performance Overhead
Complexity in Implementation
Resource Utilization
Debugging and Troubleshooting
Protocol Overhead
Addressing
At a particular time, innumerable messages are being transferred between large
numbers of computers. So, a naming or addressing system should exist so that each
layer can identify the sender and receivers of each message.
Error Control
Unreliable channels introduce a number of errors in the data streams that are
communicated. So, the layers need to agree upon common error detection and error
correction methods so as to protect data packets while they are transferred.
Flow Control
If the rate at which data is produced by the sender is higher than the rate at
which data is received by the receiver, there are chances of overflowing the
receiver. So, a proper flow control mechanism needs to be implemented.
Resource Allocation
Computer networks provide services in the form of network resources to the end
users. The main design issue is to allocate and deallocate resources to
processes. The allocation/deallocation should occur so that minimal interference
among the hosts occurs and there is optimal usage of the resources.
Statistical Multiplexing
It is not feasible to allocate a dedicated path for each message while it is
being transferred from the source to the destination. So, the data channel needs
to be multiplexed, so as to allocate a fraction of the bandwidth or time to each
host.
Routing
There may be multiple paths from the source to the destination. Routing involves
choosing an optimal path among all possible paths, in terms of cost and time.
There are several routing algorithms that are used in network systems.
Security
A major factor of data communication is to defend it against threats like
eavesdropping and surreptitious alteration of messages. So, there should be
adequate mechanisms to prevent unauthorized access to data through authentication
and cryptography.
Star Topology
Advantages of Star Topology
If N devices are connected to each other in a star topology, then the
number of cables required to connect them is N. So, it is easy to set up.
Each device requires only 1 port i.e. to connect to the hub, therefore the
total number of ports required is N.
It is robust: if one link fails, only that link is affected; the rest of
the network keeps working.
Fault identification and fault isolation are easy.
Star topology is cost-effective as it uses inexpensive cabling.
Disadvantages of Star Topology
If the concentrator (hub) on which the whole topology relies fails, the
whole system will crash down.
The cost of installation is high.
Performance is based on the single concentrator i.e. hub.
A common example of star topology is a local area network (LAN) in an office
where all computers are connected to a central hub. This topology is also used in
wireless networks where all devices are connected to a wireless access point.
Bus Topology
Bus Topology is a network type in which every computer and network device is
connected to a single cable. It is bi-directional. It is a multi-point connection
and a non-robust topology because if the backbone fails, the whole topology
crashes. In Bus Topology, LAN Ethernet connections follow various MAC (Media
Access Control) protocols such as Pure ALOHA, Slotted ALOHA, CSMA/CD, and TDMA.
Bus Topology
Advantages of Bus Topology
If N devices are connected to each other in a bus topology, then the number
of cables required to connect them is 1, known as backbone cable, and N
drop lines are required.
Coaxial or twisted pair cables are mainly used in bus-based networks that
support up to 10 Mbps.
The cost of the cable is less compared to other topologies, but it is used
to build small networks.
Bus topology is familiar technology as installation and troubleshooting
techniques are well known.
CSMA is the most common method for this type of topology.
Disadvantages of Bus Topology
A bus topology is quite simple, but it still requires a fair amount of cabling.
If the common cable fails, then the whole system will crash down.
If the network traffic is heavy, it increases collisions in the network. To
avoid this, various protocols are used in the MAC layer known as Pure
Aloha, Slotted Aloha, CSMA/CD, etc.
Adding new devices to the network would slow down networks.
Security is very low.
A common example of bus topology is the Ethernet LAN, where all devices are
connected to a single coaxial cable or twisted pair cable. This topology is also
used in cable television networks.
Ring Topology
In a Ring Topology, devices form a ring in which each device connects to
exactly two neighboring devices. A number of repeaters are used for a ring with a
large number of nodes, because if someone wants to send some data to the last
node in the ring topology with 100 nodes, then the data will have to pass through
99 nodes to reach the 100th node. Hence to prevent data loss repeaters are used
in the network.
The data flows in one direction, i.e. it is unidirectional, but it can be made
bidirectional by having 2 connections between each Network Node; this is
called Dual Ring Topology. In Ring Topology, the Token Passing protocol is
used by the workstations to transmit the data.
Ring Topology
The most common access method of ring topology is token passing.
Token passing: It is a network access method in which a token is passed
from one node to another node.
Token: It is a frame that circulates around the network.
Operations of Ring Topology
One station is known as a monitor station which takes all the
responsibility for performing the operations.
To transmit the data, the station has to hold the token. After the
transmission is done, the token is to be released for other stations to
use.
When no station is transmitting the data, then the token will circulate in
the ring.
There are two types of token release techniques: Early token
release releases the token just after transmitting the data and Delayed
token release releases the token after the acknowledgment is received from
the receiver.
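The token-passing operations above can be sketched as a tiny simulation. This is an illustrative toy, not a real Token Ring implementation: the token simply circulates around the ring, and a station transmits one queued frame whenever it holds the token, then releases it (early release):

```python
from collections import deque

def token_ring(stations, queued):
    """Simulate token passing: `stations` are names in ring order,
    `queued` maps each station to its list of waiting frames."""
    ring = deque(stations)
    order = []                                 # (station, frame) send log
    while any(queued.values()):
        holder = ring[0]                       # station currently holding the token
        if queued[holder]:                     # transmit only while holding the token
            order.append((holder, queued[holder].pop(0)))
        ring.rotate(-1)                        # release: token moves to next station
    return order

log = token_ring(["A", "B", "C"], {"A": ["a1"], "B": ["b1", "b2"], "C": []})
print(log)   # [('A', 'a1'), ('B', 'b1'), ('B', 'b2')]
```

Because a station only transmits while holding the token, no two stations ever send at once, which is why collisions are minimal in ring topologies.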
Advantages of Ring Topology
The data transmission is high-speed.
The possibility of collision is minimum in this type of topology.
Cheap to install and expand.
It is less costly than a star topology.
Disadvantages of Ring Topology
The failure of a single node in the network can cause the entire network to
fail.
Troubleshooting is difficult in this topology.
The addition of stations in between or the removal of stations can disturb
the whole topology.
Less secure.
Tree Topology
Tree topology is a variation of the Star topology. This topology has a
hierarchical flow of data. In Tree Topology, protocols like DHCP and SAC
(Standard Automatic Configuration) are used.
Tree Topology
In tree topology, the various secondary hubs are connected to the central hub,
which contains the repeater. Data flows from top to bottom, i.e. from the
central hub to the secondary hubs and then to the devices, or from bottom to
top, i.e. from the devices to the secondary hubs and then to the central hub.
It is a multi-point connection and a non-robust topology because if the
backbone fails, the topology crashes.
Advantages of Tree Topology
It allows more devices to be attached to a single central hub, thus
decreasing the distance the signal must travel to reach the devices.
It allows parts of the network to be isolated and prioritized
independently.
We can add new devices to the existing network.
Error detection and error correction are very easy in a tree topology.
Disadvantages of Tree Topology
If the central hub fails, the entire system fails.
The cost is high because of the cabling.
If new devices are added, it becomes difficult to reconfigure.
A common example of a tree topology is the hierarchy in a large organization. At
the top of the tree is the CEO, who is connected to the different departments or
divisions (child nodes) of the company. Each department has its own hierarchy,
with managers overseeing different teams (grandchild nodes). The team members
(leaf nodes) are at the bottom of the hierarchy, connected to their respective
managers and departments.
Hybrid Topology
Hybrid Topology is a combination of the various topologies described above. It
is used when the nodes are free to take any form: an individual topology such as
Ring or Star, or a combination of several of the topologies seen above. Each
constituent topology uses the protocols discussed earlier.
Advantages of Hybrid Topology
This topology is very flexible.
The size of the network can be easily expanded by adding new devices.
Disadvantages of Hybrid Topology
It is challenging to design the architecture of the Hybrid Network.
Hubs used in this topology are very expensive.
The infrastructure cost is very high, as a hybrid network requires a lot of
cabling and network devices.
A common example of a hybrid topology is a university campus network. The network
may have a backbone of a star topology, with each building connected to the
backbone through a switch or router. Within each building, there may be a bus or
ring topology connecting the different rooms and offices. The wireless access
points also create a mesh topology for wireless devices. This hybrid topology
allows for efficient communication between different buildings while providing
flexibility and redundancy within each building.
[ ] Co-axial Cable
Coaxial cable has an outer plastic covering, an insulating layer made of PVC or
Teflon, and two concentric conductors: a central core conductor and a
surrounding metallic shield, separated by insulation. Coaxial cable transmits
information in two modes: Baseband mode (the whole cable bandwidth is dedicated
to a single channel) and Broadband mode (the cable bandwidth is split into
separate frequency ranges). Cable TV and analog television networks widely use
coaxial cables.
[ ] Twisted-Pair Cable
It consists of 2 separately insulated conductor wires wound about each other.
Generally, several such pairs are bundled together in a protective sheath. They
are the most widely used Transmission Media. Twisted Pair is of two types:
Unshielded Twisted Pair (UTP): UTP consists of two insulated copper wires
twisted around one another. It relies on the twisting itself, rather than a
physical shield, to reduce interference. It is used for telephone
applications.
Shielded Twisted Pair (STP): STP wraps the twisted pairs in a metallic
shield (foil or braid) that blocks external interference. It is used for
voice and data channels and higher-rate Ethernet.
Radio Waves
Radio waves are easy to generate and can penetrate through buildings, so the
sending and receiving antennas need not be aligned. Frequency Range: 3 KHz –
1 GHz. AM and FM radio and cordless phones use radio waves.
Microwaves
Microwave transmission is line-of-sight, i.e., the sending and receiving
antennas need to be properly aligned with each other. The distance covered by
the signal is directly proportional to the height of the antenna. Frequency
Range: 1 GHz – 300 GHz. Microwaves are mainly used for mobile phone
communication and television distribution.
Infrared
Infrared waves are used for very short distance communication. They cannot
penetrate through obstacles. This prevents interference between systems.
Frequency Range: 300 GHz – 400 THz. It is used in TV remotes, wireless mice,
keyboards, printers, etc.
Methods of Framing :
There are basically four methods of framing as given below –
1. Character Count
2. Flag Byte with Character Stuffing
3. Starting and Ending Flags, with Bit Stuffing
4. Encoding Violations
These are explained as following below.
1. Character Count :
This method is rarely used. A field in the frame header records the total
number of characters in the frame, so the data link layer at the receiver or
destination knows how many characters follow and where the frame ends.
The disadvantage of this method is that if the character count is disturbed
or distorted by an error occurring during transmission, the destination or
receiver loses synchronization and may be unable to locate or identify the
beginning of the next frame.
2. Character Stuffing :
Character stuffing, also known as byte stuffing or character-oriented
framing, is analogous to bit stuffing, but it operates on bytes whereas bit
stuffing operates on bits. In byte stuffing, a special byte known as ESC
(Escape Character), with a predefined pattern, is inserted into the data
section of the frame before any byte that has the same pattern as the flag
byte (or as ESC itself).
The receiver removes each ESC and keeps the byte that follows as ordinary
data. In simple words, character stuffing is the addition of one extra byte
wherever an ESC or flag pattern occurs in the text.
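The stuffing and unstuffing rules above can be sketched as follows (the FLAG and
ESC values are illustrative, loosely following the PPP-style 0x7E/0x7D choices):

```python
FLAG = 0x7E  # assumed flag byte marking frame boundaries (value illustrative)
ESC = 0x7D   # assumed escape byte

def byte_stuff(payload: bytes) -> bytes:
    """Insert ESC before every payload byte that matches FLAG or ESC."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)   # escape the problematic byte
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Remove the ESC bytes inserted by byte_stuff."""
    out = bytearray()
    i = 0
    while i < len(stuffed):
        if stuffed[i] == ESC:
            i += 1            # skip the escape; keep the next byte literally
        out.append(stuffed[i])
        i += 1
    return bytes(out)
```

Round-tripping any payload through byte_stuff and byte_unstuff returns the
original data, while the stuffed stream never contains an unescaped FLAG.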
3. Bit Stuffing :
Bit stuffing is also known as bit-oriented framing or the bit-oriented
approach. Extra bits are inserted into the transmitted bit stream so that
the flag pattern never appears inside the data: after five consecutive 1s
in the data, the sender inserts a 0, and the receiver removes it. This
keeps the receiver from mistaking data for the frame delimiter and avoids
the appearance of unintended control sequences.
It is a form of protocol management that prevents a bit pattern from
throwing the transmission out of synchronization. Bit stuffing is an
essential part of the transmission process in network and communication
protocols, and is also used in USB.
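The standard rule (insert a 0 after five consecutive 1s, as in HDLC) can be
sketched as:

```python
def bit_stuff(bits: str) -> str:
    """After five consecutive 1s, insert a 0 so the data never mimics the flag."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')   # stuffed bit
            run = 0
    return ''.join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five 1s."""
    out, run = [], 0
    i = 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            i += 1            # skip the stuffed 0 that follows five 1s
            run = 0
        i += 1
    return ''.join(out)
```

For example, stuffing "01111110" yields "011111010", so the payload can no
longer be confused with the 01111110 flag.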
4. Physical Layer Coding Violations :
Encoding violation is a method used only for networks in which the encoding
on the physical medium includes some redundancy, i.e., more than one signal
element is used to represent one bit of data. A signal combination that
never occurs in valid data can then be used to mark frame boundaries.
[ ] CSMA/CD
CSMA/CD (Carrier Sense Multiple Access / Collision Detection) is a media access
control method that was widely used in early Ethernet LANs, when a shared bus
topology was common and each node (computer) was connected by coaxial cable.
Nowadays Ethernet is full duplex and the topology is either star (connected via
a switch or router) or point-to-point (direct connection), so CSMA/CD is no
longer needed, though it is still supported.
Consider a scenario where there are ‘n’ stations on a link and all are waiting to
transfer data through that channel. In this case, all ‘n’ stations would want to
access the link/channel to transfer their own data. The problem arises when more
than one station transmits the data at the moment. In this case, there will be
collisions in the data from different stations.
CSMA/CD is one such technique where different stations that follow this protocol
agree on some terms and collision detection measures for effective transmission.
This protocol decides which station will transmit when so that data reaches the
destination without corruption.
How Does CSMA/CD Work?
Step 1: Check if the sender is ready to transmit data packets.
Step 2: Check if the transmission link is idle.
The sender keeps checking whether the transmission link/medium is idle by
continuously sensing the carrier for transmissions from other nodes. If it
senses that the carrier is free and there are no collisions, it sends the
data. Otherwise, it refrains from sending data.
Step 3: Transmit the data & check for collisions.
The sender transmits its data on the link. CSMA/CD does not use an
‘acknowledgment’ system. It checks for successful and unsuccessful
transmissions through collision signals. During transmission, if a
collision signal is received by the node, transmission is stopped. The
station then transmits a jam signal onto the link and waits for random time
intervals before it resends the frame. After some random time, it again
attempts to transfer the data and repeats the above process.
Step 4: If no collision was detected in propagation, the sender completes
its frame transmission and resets the counters.
How Does a Station Know if Its Data Collided?
A station can detect a collision only while it is still transmitting. The
frame transmission time must therefore be at least twice the maximum
propagation delay (2 × Tp), which imposes a minimum frame size on the
network.
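This round-trip requirement is exactly what the Q2 exercise asks for (a 1 Gbps
CSMA/CD network over a 1-km cable with signal speed 200,000 km/sec); a quick
calculation:

```python
bandwidth_bps = 1_000_000_000    # 1 Gbps
cable_km = 1                     # cable length
signal_speed_km_s = 200_000      # propagation speed of the cable

# The frame must still be in transmission when a collision signal returns,
# so transmission time >= 2 * propagation delay (one full round trip).
round_trip_s = 2 * cable_km / signal_speed_km_s        # 10 microseconds
min_frame_bits = round(bandwidth_bps * round_trip_s)   # bits sent in that time
print(min_frame_bits)        # 10000 bits
print(min_frame_bits // 8)   # 1250 bytes
```

So the minimum frame size is 10,000 bits (1250 bytes).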
[ ] CSMA/CA
The basic idea behind CSMA/CA is that the station should be able to receive while
transmitting to detect a collision from different stations. In wired networks, if
a collision has occurred then the energy of the received signal almost doubles,
and the station can sense the possibility of collision. In the case of wireless
networks, most of the energy is used for transmission, and the energy of the
received signal increases by only 5-10% if a collision occurs. It can’t be used
by the station to sense collision. Therefore CSMA/CA has been specially designed
for wireless networks.
CSMA/CA uses three strategies:
1. InterFrame Space (IFS): When a station finds the channel busy, it keeps
   sensing the channel; when it finds the channel idle, it waits for a
   period of time called the IFS time before transmitting. IFS can also be
   used to define the priority of a station or a frame: the higher the IFS,
   the lower the priority.
2. Contention Window: It is the amount of time divided into slots. A station
that is ready to send frames chooses a random number of slots as wait time.
3. Acknowledgments: The positive acknowledgments and time-out timer can help
guarantee a successful transmission of the frame.
Characteristics of CSMA/CA
1. Carrier Sense: The device listens to the channel before transmitting, to
ensure that it is not currently in use by another device.
2. Multiple Access: Multiple devices share the same channel and can transmit
simultaneously.
3. Collision Avoidance: If two or more devices attempt to transmit at the same
time, a collision occurs. CSMA/CA uses random backoff time intervals to
avoid collisions.
4. Acknowledgment (ACK): After successful transmission, the receiving device
sends an ACK to confirm receipt.
5. Fairness: The protocol ensures that all devices have equal access to the
channel and no single device monopolizes it.
6. Binary Exponential Backoff: If a collision occurs, the device waits for a
random period of time before attempting to retransmit. The backoff time
increases exponentially with each retransmission attempt.
7. Interframe Spacing: The protocol requires a minimum amount of time between
transmissions to allow the channel to be clear and reduce the likelihood of
collisions.
8. RTS/CTS Handshake: In some implementations, a Request-To-Send (RTS) and
Clear-To-Send (CTS) handshake is used to reserve the channel before
transmission. This reduces the chance of collisions and increases
efficiency.
9. Wireless Network Quality: The performance of CSMA/CA is greatly influenced
by the quality of the wireless network, such as the strength of the signal,
interference, and network congestion.
10. Adaptive Behavior: CSMA/CA can dynamically adjust its behavior in
response to changes in network conditions, ensuring the efficient use of
the channel and avoiding congestion.
Overall, CSMA/CA balances the need for efficient use of the shared channel with
the need to avoid collisions, leading to reliable and fair communication in a
wireless network.
Types of CSMA Access Modes
There are 4 access modes available in CSMA, also referred to as 4 different
types of CSMA protocols, which decide when a station may start sending data
across the shared medium.
1. 1-Persistent: The station senses the shared channel first and delivers the
   data right away if the channel is idle. If not, it continuously monitors
   the channel and broadcasts the frame unconditionally as soon as the
   channel becomes idle. It is an aggressive transmission algorithm.
2. Non-Persistent: The station assesses the channel before transmitting; if
   the channel is idle, it transmits data right away. If not, it waits for a
   random amount of time (not sensing continuously), and when it then finds
   the channel idle, it sends the frame.
3. P-Persistent: This combines the 1-Persistent and Non-Persistent modes.
   Each node senses the channel as in 1-Persistent mode, and if the channel
   is idle it sends a frame with probability p. Otherwise (with probability
   q = 1 - p), it defers to the next time slot and repeats the process.
4. O-Persistent: A supervisory node gives each node a transmission order.
Nodes wait for their time slot according to their allocated transmission
sequence when the transmission medium is idle.
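The per-slot decision of p-persistent CSMA, for instance, can be sketched as a
small function (the return strings are just illustrative labels):

```python
import random

def p_persistent_step(channel_idle: bool, p: float) -> str:
    """One time slot of p-persistent CSMA: transmit with probability p when
    the channel is idle; otherwise defer to the next slot (probability 1 - p)."""
    if not channel_idle:
        return "keep sensing"          # busy channel: continue monitoring
    return "transmit" if random.random() < p else "defer"
```

With p = 1 this degenerates to 1-Persistent behavior; smaller p spreads
transmissions across slots and lowers the collision probability under load.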
Advantages of CSMA
Increased Efficiency: CSMA ensures that only one device communicates on the
network at a time, reducing collisions and improving network efficiency.
Simplicity: CSMA is a simple protocol that is easy to implement and does
not require complex hardware or software.
Flexibility: CSMA is a flexible protocol that can be used in a wide range
of network environments, including wired and wireless networks.
Low cost: CSMA does not require expensive hardware or software, making it a
cost-effective solution for network communication.
Disadvantages of CSMA
Limited Scalability: CSMA is not a scalable protocol and can become
inefficient as the number of devices on the network increases.
Delay: In busy networks, the requirement to sense the medium and wait for
an available channel can result in delays and increased latency.
Limited Reliability: CSMA can be affected by interference, noise, and other
factors, resulting in unreliable communication.
Vulnerability to Attacks: CSMA can be vulnerable to certain types of
attacks, such as jamming and denial-of-service attacks, which can disrupt
network communication.
T = 1/(μC – λ)
T(FDM) = 1/(μ(C/N) – λ/N) = N/(μC – λ) = N·T
Where C is the channel capacity (bps), 1/μ is the mean frame length (bits),
λ is the mean arrival rate (frames/sec), and N is the number of FDM
subchannels. Splitting the channel into N static subchannels thus makes the
mean delay N times worse.
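Plugging assumed numbers into the delay formula T = 1/(μC – λ) (C in bps, mean
frame length 1/μ bits, arrival rate λ frames/sec) gives a quick sanity check;
the figures below are illustrative:

```python
# Assumed figures: a 100 Mbps channel, 10,000-bit frames,
# 5,000 frames/sec arriving, split into N = 10 FDM subchannels.
C = 100_000_000           # channel capacity, bps
mean_frame_bits = 10_000  # 1/mu
lam = 5_000               # arrival rate, frames/sec

mu_C = C / mean_frame_bits            # service rate: 10,000 frames/sec
T = 1 / (mu_C - lam)                  # mean delay on one big channel: 0.2 ms
N = 10
T_fdm = 1 / ((mu_C / N) - (lam / N))  # = N * T: 2 ms with static FDM
```

The FDM delay comes out exactly N times larger, matching T(FDM) = N·T.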
Station Model:
Assumes that each of N stations independently produces frames. The
probability of a frame being generated in an interval of length Δt is λΔt,
where λ is the constant arrival rate of new frames.
Collision Assumption:
If two frames overlap in time, they collide. Any collision is an error, and
both frames must be retransmitted. Collisions are the only possible errors.
N independent stations.
A station is blocked until its generated frame is transmitted.
The probability of a frame being generated in a period of length Δt is λΔt,
where λ is the arrival rate of frames.
Only a single channel is available.
Time can be either continuous or slotted.
Carrier Sense: a station can sense whether the channel is busy before
transmitting.
No Carrier Sense: a timeout is used to detect lost data.
Pure Aloha
In Pure Aloha, a station transmits whenever it has data to send; if the frame
is destroyed by a collision, the station waits a random amount of time and
retransmits. The vulnerable period for a frame is twice the frame
transmission time.
Slotted Aloha
Slotted Aloha is an improved version of Pure Aloha. The channel time is
divided into discrete time slots, and a station must wait for the beginning
of the next slot to transmit, so the vulnerable period is halved compared
with Pure Aloha. Slotted Aloha thus reduces the number of collisions by
utilizing the channel better, at the cost of some added delay for the users.
Differences Between Pure Aloha and Slotted Aloha
The difference between Pure and Slotted Aloha lies in their approach to handling
data collisions. While Pure Aloha sends data anytime, Slotted Aloha reduces
collisions by organizing time into slots.
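The classical throughput formulas for the two protocols, S = G·e^(−2G) for Pure
Aloha and S = G·e^(−G) for Slotted Aloha (with G the offered load per frame
time), can be evaluated directly:

```python
import math

def pure_aloha_S(G: float) -> float:
    """Pure Aloha throughput: vulnerable period of 2 frame times."""
    return G * math.exp(-2 * G)

def slotted_aloha_S(G: float) -> float:
    """Slotted Aloha throughput: vulnerable period of 1 frame time."""
    return G * math.exp(-G)

# Maxima: Pure Aloha peaks at G = 0.5, Slotted Aloha at G = 1.0
print(round(pure_aloha_S(0.5), 4))    # 0.1839  (~18.4% utilization)
print(round(slotted_aloha_S(1.0), 4)) # 0.3679  (~36.8% utilization)
```

Halving the vulnerable period exactly doubles the peak achievable throughput.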
What is ARP?
Address Resolution Protocol (ARP) is a protocol used to map a 32-bit IP address
to a 48-bit physical MAC address. The MAC address is known as the hardware ID
number. This is important in local area networks, where devices need to know
each other's MAC addresses to communicate at the data link layer.
How Does ARP Work?
When a device wants to communicate with another device on the local network
but only knows its IP address, it sends out an ARP request. This request is
broadcasted to all devices on the local network.
The ARP request packet includes the sender's IP address, the sender's MAC
address, and the IP address of the device whose MAC address is being
queried.
The device whose IP address matches the queried one replies (unicast) with
its own MAC address.
The sender receives the ARP reply and updates its ARP table with the
IP-to-MAC address mapping, allowing it to send packets directly to the
destination device.
What is RARP?
Reverse Address Resolution Protocol (RARP) is used to map a 48-bit MAC address
to a 32-bit IP address. This protocol is typically used by devices that know
their Media Access Control address but need to find their IP address.
How Does RARP Work?
When a device knows only its MAC address and needs to find its IP address,
it sends out a RARP request. This request is broadcast to all devices on
the local network.
The RARP request packet includes the device's MAC address and requests an
IP address in return.
A RARP server on the network looks up the MAC address and replies with the
corresponding IP address.
The device receives the RARP reply and configures itself with the IP
address provided by the RARP server.
ARP vs RARP
ARP maps an IP address to a physical (MAC) address; RARP maps a physical
address to an IP address.
ARP is used to obtain the MAC address of a network device when only its IP
address is known; RARP is used to obtain the IP address when only the MAC
address is known.
ARP resolves IP addresses; RARP resolves MAC addresses.
ARP stands for Address Resolution Protocol, whereas RARP stands for Reverse
Address Resolution Protocol.
In ARP, the broadcast MAC address is used; in RARP, the broadcast IP
address is used.
The ARP table is managed or maintained by the local host, whereas the RARP
table is managed or maintained by the RARP server.
In ARP, the receiver's MAC address is fetched; in RARP, the device's IP
address is fetched.
ARP is used on the sender's side to map the receiver's MAC address; RARP is
used on the receiver's side to map the sender's IP.
[ ] Dijkstra's Algorithm
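Dijkstra's algorithm (marked above, and asked in Q4) computes single-source
shortest paths with a priority queue; a minimal sketch, with an assumed example
graph:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source; graph maps node -> {neighbor: weight}."""
    dist = {source: 0}
    pq = [(0, source)]          # min-heap of (distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue            # stale entry: a shorter path was already found
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd    # relax the edge (u, v)
                heapq.heappush(pq, (nd, v))
    return dist

# Example network (weights are illustrative):
g = {'A': {'B': 2, 'C': 5}, 'B': {'C': 1, 'D': 4}, 'C': {'D': 1}, 'D': {}}
print(dijkstra(g, 'A'))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```

Note the path A→B→C (cost 3) beats the direct edge A→C (cost 5), which is
exactly the relaxation step the algorithm performs.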
IGMP:
IGMP is an acronym for Internet Group Management Protocol. IGMP is a
communication protocol used by hosts and adjacent routers for multicast
communication on IP networks, and it uses resources efficiently to transmit
message/data packets. Multicast communication can have single or multiple
senders and receivers, so IGMP can be used for streaming video, gaming, or web
conferencing tools. This protocol is used on IPv4 networks; on IPv6,
multicasting is managed by Multicast Listener Discovery (MLD).
Like other network protocols, IGMP operates at the network layer. MLDv1 is
almost the same in functioning as IGMPv2, and MLDv2 is almost similar to
IGMPv3. IGMPv1 was developed in 1989 at Stanford University; it was updated to
IGMPv2 in 1997 and again to IGMPv3 in 2002. The IGMP protocol is used by hosts
and routers to identify the hosts in a LAN that are members of a group. IGMP is
a part of the IP layer and has a fixed-size message. The IGMP message is
encapsulated within an IP datagram.
The IP protocol supports two types of communication:
Unicasting- It is a communication between one sender and one receiver.
Therefore, we can say that it is one-to-one communication.
Multicasting: Sometimes the sender wants to send the same message to a
large number of receivers simultaneously. This process is known as
multicasting, which is one-to-many communication.
Applications:
Streaming – Multicast routing protocols are used for audio and video
streaming over the network i.e., either one-to-many or many-to-many.
Gaming – Internet Group Management Protocol is often used in simulation
games that have multiple users over the network, such as online games.
Web Conferencing tools – Video conferencing lets people meet from their own
locations; IGMP connects the users for conferencing and transfers the
message/data packets efficiently.
IGMP works on devices that are capable of handling multicast groups and dynamic
multicasting. These devices allow the host to join or leave the membership in the
multicast group. These devices also allow to add and remove clients from the
group. This communication protocol is operated between the host and the local
multicast router. When a multicast group is created, the multicast group address
is in the range of class D (224-239) IP addresses and is forwarded as the
destination IP address in the packet.
L2 (Level-2) devices such as switches are used between the host and the
multicast router for IGMP snooping. IGMP snooping is the process of listening
to IGMP network traffic in a controlled manner. The switch receives messages
from the hosts and forwards the membership reports to the local multicast
router. The multicast traffic is further forwarded from local multicast routers
to remote routers using PIM (Protocol Independent Multicast) so that clients
can receive the message/data packets. Clients wishing to join the network send
a join message in the query; the switch intercepts the message and adds the
clients' ports to its multicast routing table.
Advantages
The IGMP communication protocol efficiently transmits multicast data to the
receivers, so no junk packets are delivered to hosts, which gives optimized
performance.
Bandwidth is used efficiently, since traffic is carried only on links
leading to interested receivers.
Hosts can leave a multicast group and join another.
Disadvantages
It does not provide good efficiency in filtering and security.
Due to lack of TCP, network congestion can occur.
IGMP is vulnerable to some attacks such as DOS attack (Denial-Of-Service).
Classful Addressing
Note:
IP addresses are globally managed by Internet Assigned Numbers
Authority(IANA) and Regional Internet Registries(RIR).
When counting the total number of host IP addresses, 2 addresses are always
subtracted from the total, because the first IP address of any network is
the network number and the last IP address is reserved for broadcast.
Occupation of The Address Space In Classful Addressing
Class A
IP addresses belonging to class A are assigned to the networks that contain a
large number of hosts.
The network ID is 8 bits long.
The host ID is 24 bits long.
The higher-order bit of the first octet in class A is always set to 0. The
remaining 7 bits in the first octet are used to determine network ID. The 24 bits
of host ID are used to determine the host in any network. The default subnet mask
for Class A is 255.x.x.x. Therefore, class A has a total of:
2^24 – 2 = 16,777,214 host IDs per network
IP addresses belonging to class A range from 0.0.0.0 – 127.255.255.255.
Class B
IP address belonging to class B is assigned to networks that range from medium-
sized to large-sized networks.
The network ID is 16 bits long.
The host ID is 16 bits long.
The higher-order bits of the first octet of IP addresses of class B are always
set to 10. The remaining 14 bits are used to determine the network ID. The 16
bits of host ID are used to determine the host in any network. The default subnet
mask for class B is 255.255.x.x. Class B has a total of:
2^14 = 16,384 network addresses
2^16 – 2 = 65,534 host addresses
IP addresses belonging to class B range from 128.0.0.0 – 191.255.255.255.
Class C
IP addresses belonging to class C are assigned to small-sized networks.
The network ID is 24 bits long.
The host ID is 8 bits long.
The higher-order bits of the first octet of IP addresses of class C is always set
to 110. The remaining 21 bits are used to determine the network ID. The 8 bits of
host ID are used to determine the host in any network. The default subnet
mask for class C is 255.255.255.x. Class C has a total of:
2^21 = 2,097,152 network addresses
2^8 – 2 = 254 host addresses
IP addresses belonging to class C range from 192.0.0.0 – 223.255.255.255.
Class D
IP address belonging to class D is reserved for multi-casting. The higher-order
bits of the first octet of IP addresses belonging to class D is always set to
1110. The remaining bits are for the address that interested hosts recognize.
Class D does not possess any subnet mask. IP addresses belonging to class D range
from 224.0.0.0 – 239.255.255.255.
Class E
IP addresses belonging to class E are reserved for experimental and research
purposes. IP addresses of class E range from 240.0.0.0 – 255.255.255.255. This
class doesn’t have any subnet mask. The higher-order bits of the first octet of
class E are always set to 1111.
Range of Special IP Addresses
169.254.0.0 – 169.254.255.255 : Link-local addresses
127.0.0.0 – 127.255.255.255 : Loop-back addresses
0.0.0.0 – 0.255.255.255 : used to communicate within the current network.
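As a quick check of the class boundaries above, a small helper (the function
name is illustrative) classifies an address by its first octet:

```python
def ipv4_class(address: str) -> str:
    """Classify a dotted-quad IPv4 address by its first octet (classful rules)."""
    first = int(address.split('.')[0])
    if first <= 127:   # leading bit 0
        return 'A'
    if first <= 191:   # leading bits 10
        return 'B'
    if first <= 223:   # leading bits 110
        return 'C'
    if first <= 239:   # leading bits 1110 (multicast)
        return 'D'
    return 'E'         # leading bits 1111 (reserved)

print(ipv4_class('10.0.0.1'))     # A
print(ipv4_class('172.16.0.1'))   # B
print(ipv4_class('192.168.1.1'))  # C
print(ipv4_class('224.0.0.1'))    # D
```

The cut-offs mirror the leading-bit patterns of each class (0, 10, 110, 1110,
1111).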
Rules for Assigning Host ID
Host IDs are used to identify a host within a network. The host ID is assigned
based on the following rules:
Within any network, the host ID must be unique to that network.
A host ID in which all bits are set to 0 cannot be assigned because this
host ID is used to represent the network ID of the IP address.
Host ID in which all bits are set to 1 cannot be assigned because this host
ID is reserved as a broadcast address to send packets to all the hosts
present on that particular network.
Rules for Assigning Network ID
Hosts located on the same physical network are identified by the network ID,
as all hosts on the same physical network are assigned the same network ID.
The network ID is assigned based on the following rules:
The network ID cannot start with 127 because 127 belongs to the class A
address and is reserved for internal loopback functions.
All bits of network ID set to 1 are reserved for use as an IP broadcast
address and therefore, cannot be used.
All bits of network ID set to 0 are used to denote a specific host on the
local network and are not routed and therefore, aren’t used.
Summary of Classful Addressing
Note that the network ID with all 0s is not counted, and network 127 is
reserved for loopback, so the number of usable Class A networks is 126.
Problems With Classful Addressing
The problem with this classful addressing method is that millions of class A
addresses are wasted, many of the class B addresses are wasted, whereas, the
number of addresses available in class C is so small that it cannot cater to the
needs of organizations. Class D addresses are used for multicast routing and are
therefore available as a single block only. Class E addresses are reserved.
Because of these problems, classful addressing was replaced by Classless
Inter-Domain Routing (CIDR) in 1993. Classless addressing is discussed
below.
Classful and Classless Addressing
Here is the main difference between Classful and Classless Addressing:
classful addressing divides the address space into fixed classes (A–E) with
default masks, whereas classless addressing (CIDR) uses variable-length
prefixes written in slash notation, allocating address blocks of any suitable
size and reducing waste.
Subnetting
Dividing a large block of addresses into several contiguous sub-blocks and
assigning these sub-blocks to different smaller networks is called subnetting. It
is a practice that is widely used when classless addressing is done.
A subnet or subnetwork is a network inside a network. Subnets make networks more
efficient. Through subnetting, network traffic can travel a shorter distance
without passing through unnecessary routers to reach its destination.
Classless Addressing
To reduce the wastage of IP addresses in a block, we use subnetting: some
host-ID bits of a classful IP address are used as network-ID bits. The IP
address is given together with the number of mask bits (usually after a '/'
symbol), e.g., 192.168.1.1/28. The subnet mask is found by setting the given
number of the 32 bits to 1: for this address, we set 28 of the 32 bits to 1 and
the rest to 0, so the subnet mask is 255.255.255.240. Classless Inter-Domain
Routing (CIDR, also called supernetting) can likewise combine two or more class
C networks to create a /23 or a /22 supernet. CIDR is an improved IP addressing
system in which blocks of IP addresses are assigned dynamically based on
specific rules.
Some Values Calculated in Subnetting:
1. Number of subnets : 2^(given bits for mask – number of bits in default mask)
2. Subnet address : AND of the subnet mask and the given IP address
3. Broadcast address : set the host bits to 1 and retain the network bits of
   the IP address
4. Number of hosts per subnet : 2^(32 – given bits for mask) – 2
5. First Host ID : subnet address + 1 (add one to the binary representation of
   the subnet address)
6. Last Host ID : subnet address + number of hosts
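The 192.168.1.1/28 example above can be verified with Python's standard
ipaddress module:

```python
import ipaddress

# strict=False accepts a host address and derives its containing network
net = ipaddress.ip_network('192.168.1.1/28', strict=False)
print(net.netmask)            # 255.255.255.240
print(net.network_address)    # 192.168.1.0
print(net.broadcast_address)  # 192.168.1.15
print(net.num_addresses - 2)  # 14 usable hosts: 2^(32-28) - 2
first = net.network_address + 1       # first host ID
last = net.broadcast_address - 1      # last host ID
print(first, last)            # 192.168.1.1 192.168.1.14
```

Each printed value matches the hand calculation from the rules listed above.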
1. Retransmission Policy :
This policy governs the retransmission of packets. If the sender believes
that a sent packet is lost or corrupted, the packet needs to be
retransmitted, and this retransmission may increase congestion in the
network.
To prevent this, retransmission timers must be designed both to prevent
congestion and to optimize efficiency.
2. Window Policy :
The type of window at the sender’s side may also affect the congestion.
Several packets in the Go-back-n window are re-sent, although some packets
may be received successfully at the receiver side. This duplication may
increase the congestion in the network and make it worse.
Therefore, Selective repeat window should be adopted as it sends the
specific packet that may have been lost.
3. Discarding Policy :
A good discarding policy adopted by the routers is that the routers may
prevent congestion and at the same time partially discard the corrupted or
less sensitive packages and also be able to maintain the quality of a
message.
In case of audio file transmission, routers can discard less sensitive
packets to prevent congestion and also maintain the quality of the audio
file.
4. Acknowledgment Policy :
Since acknowledgements are also the part of the load in the network, the
acknowledgment policy imposed by the receiver may also affect congestion.
Several approaches can be used to prevent congestion related to
acknowledgment.
The receiver should send acknowledgement for N packets rather than sending
acknowledgement for a single packet. The receiver should send an
acknowledgment only if it has to send a packet or a timer expires.
5. Admission Policy :
In admission policy a mechanism should be used to prevent congestion.
Switches in a flow should first check the resource requirement of a network
flow before transmitting it further. If there is a chance of a congestion
or there is a congestion in the network, router should deny establishing a
virtual network connection to prevent further congestion.
All the above policies are adopted to prevent congestion before it happens in the
network.
Closed Loop Congestion Control
Closed loop congestion control techniques are used to treat or alleviate
congestion after it happens. Several techniques are used by different protocols;
some of them are:
1. Backpressure :
Backpressure is a technique in which a congested node stops receiving packets
from upstream node. This may cause the upstream node or nodes to become congested
and reject receiving data from above nodes. Backpressure is a node-to-node
congestion control technique that propagate in the opposite direction of data
flow. The backpressure technique can be applied only to virtual circuit where
each node has information of its above upstream node.
In the above diagram the 3rd node is congested and stops receiving packets; as a
result, the 2nd node may become congested due to the slowing of the output data
flow. Similarly, the 1st node may get congested and inform the source to slow down.
2. Choke Packet :
A choke packet is a packet sent by a congested node directly back to the source
to inform it of congestion. Unlike backpressure, the intermediate nodes through
which the packet travels are not warned; only the source is asked to slow down
(the ICMP source quench message is the classic example).
3. Implicit Signaling :
In implicit signaling, there is no communication between the congested nodes and
the source. The source guesses that there is congestion in the network. For
example, when the sender sends several packets and there is no acknowledgment
for a while, one assumption is that the network is congested.
4. Explicit Signaling :
In explicit signaling, if a node experiences congestion it can explicitly send a
packet to the source or destination to inform it about the congestion. The
difference between a choke packet and explicit signaling is that here the signal
is included in the packets that carry data, rather than in a separate packet as
in the choke packet technique.
Explicit signaling can occur in either forward or backward direction.
Forward Signaling : In forward signaling, a signal is sent in the direction
of the congestion. The destination is warned about congestion and adopts
policies to prevent further congestion.
Backward Signaling : In backward signaling, a signal is sent in the
opposite direction of the congestion. The source is warned about congestion
and it needs to slow down.
Since n > size of the packet at the head of the queue (n > 200),
n = 1000 − 200 = 800, and a packet of size 200 is sent into the network.
Again n > size of the packet at the head of the queue (n > 400), so
n = 800 − 400 = 400, and a packet of size 400 is sent into the network.
Now n < size of the packet at the head of the queue (n < 450), so the
procedure stops for this tick.
On the next tick of the clock, n is reinitialised to 1000.
This procedure is repeated until all the packets are sent into the network.
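The tick-by-tick procedure above can be sketched in Python (a minimal sketch; the packet sizes 200, 400, 450 and the per-tick counter n = 1000 follow the worked example):

```python
from collections import deque

def leaky_bucket(packet_sizes, rate):
    """Byte-counting leaky bucket: on every tick a counter n is set to `rate`,
    and packets leave the head of the queue while they fit in the counter."""
    queue = deque(packet_sizes)
    sent_per_tick = []
    while queue:
        n = rate                       # counter reinitialised each tick
        sent = []
        while queue and queue[0] <= n:
            pkt = queue.popleft()
            n -= pkt                   # e.g. n = 1000 - 200 = 800
            sent.append(pkt)
        if not sent:                   # packet larger than rate: cannot send
            break
        sent_per_tick.append(sent)
    return sent_per_tick

# Worked example from above: n = 1000 per tick, queue = [200, 400, 450]
print(leaky_bucket([200, 400, 450], 1000))   # [[200, 400], [450]]
```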
Leaky Bucket: When the host has to send a packet, the packet is thrown into the bucket.
Token Bucket: In this, the bucket holds tokens generated at regular intervals of time.
IPv4 vs IPv6:
IPv4 has a 32-bit address length; IPv6 has a 128-bit address length.
IPv4 can generate about 4.29×10^9 addresses; the address space of IPv6 is quite
large, as it can produce about 3.4×10^38 addresses.
IPv4 has a variable header of 20-60 bytes; IPv6 has a fixed 40-byte header.
IPv4 can be converted to IPv6, but not all IPv6 addresses can be converted to IPv4.
IPv4's IP addresses are divided into five classes (Class A, B, C, D, E); IPv6
does not have any classes of IP address.
Error-checking mechanism: TCP provides extensive error-checking mechanisms
because it provides flow control and acknowledgment of data; UDP has only the
basic error-checking mechanism using checksums.
Header length: TCP has a variable-length (20-60 byte) header; UDP has a fixed
8-byte header.
[ ] Berkeley Sockets
[ ] Three-way handshake
The TCP 3-Way Handshake is a fundamental process that establishes a reliable
connection between two devices over a TCP/IP network. It involves three steps:
SYN (Synchronize), SYN-ACK (Synchronize-Acknowledge), and ACK (Acknowledge).
During the handshake, the client and server exchange initial sequence numbers and
confirm the connection establishment.
What is the TCP 3-Way Handshake?
The TCP 3-Way Handshake is a fundamental process used in the Transmission Control
Protocol (TCP) to establish a reliable connection between a client and a server
before data transmission begins. This handshake ensures that both parties are
synchronized and ready for communication.
TCP Segment Structure
A TCP segment consists of data bytes to be sent and a header that is added to the
data by TCP as shown:
The header of a TCP segment can range from 20 to 60 bytes; up to 40 bytes are
reserved for options. If there are no options, the header is 20 bytes; otherwise
it can be at most 60 bytes. Header fields:
Source Port Address: A 16-bit field that holds the port address of the
application that is sending the data segment.
Destination Port Address: A 16-bit field that holds the port address of the
application in the host that is receiving the data segment.
Sequence Number: A 32-bit field that holds the sequence number , i.e, the
byte number of the first byte that is sent in that particular segment. It
is used to reassemble the message at the receiving end of the segments that
are received out of order.
Acknowledgement Number: A 32-bit field that holds the acknowledgement
number, i.e, the byte number that the receiver expects to receive next. It
is an acknowledgement for the previous bytes being received successfully.
Header Length (HLEN): This is a 4-bit field that indicates the length of
the TCP header by a number of 4-byte words in the header, i.e if the header
is 20 bytes(min length of TCP header ), then this field will hold 5
(because 5 x 4 = 20) and the maximum length: 60 bytes, then it’ll hold the
value 15(because 15 x 4 = 60). Hence, the value of this field is always
between 5 and 15.
Control flags: These are 6 1-bit control bits that control connection
establishment, connection termination, connection abortion, flow control,
mode of transfer etc. Their function is:
o URG: Urgent pointer is valid
o ACK: Acknowledgement number is valid( used in case of cumulative
acknowledgement)
o PSH: Request for push
o RST: Reset the connection
o SYN: Synchronize sequence numbers
o FIN: Terminate the connection
Window size: This field tells the window size of the sending TCP in bytes.
Checksum: This field holds the checksum for error control . It is mandatory
in TCP as opposed to UDP.
Urgent pointer: This field (valid only if the URG control flag is set) is
used to point to data that is urgently required that needs to reach the
receiving process at the earliest. The value of this field is added to the
sequence number to get the byte number of the last urgent byte.
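As a rough illustration of the field layout above, the fixed 20-byte part of a TCP header can be unpacked with Python's struct module (a sketch only; the port numbers and sequence values in the sample segment are made up):

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Unpack the fixed 20-byte portion of a TCP header (options ignored)."""
    (src_port, dst_port, seq, ack, offset_flags,
     window, checksum, urg_ptr) = struct.unpack('!HHIIHHHH', segment[:20])
    return {
        'src_port': src_port,
        'dst_port': dst_port,
        'seq': seq,
        'ack': ack,
        'hlen': (offset_flags >> 12) * 4,   # HLEN counts 4-byte words
        'syn': bool(offset_flags & 0x02),   # SYN control flag
        'ack_flag': bool(offset_flags & 0x10),
        'window': window,
    }

# Hypothetical SYN segment: HLEN = 5 words (20 bytes), SYN flag set
sample = struct.pack('!HHIIHHHH', 12345, 80, 1000, 0,
                     (5 << 12) | 0x02, 65535, 0, 0)
hdr = parse_tcp_header(sample)
print(hdr['hlen'], hdr['syn'])   # 20 True
```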
TCP 3-way Handshake Process
Communication between devices over the internet follows the TCP/IP suite model
(a stripped-down version of the OSI reference model). The application layer sits
at the top of the TCP/IP stack; network-facing applications such as web browsers
on the client side establish a connection with the server from here. From the
application layer, the information is passed to the transport layer, where our
topic comes into the picture. The two important protocols of this layer are TCP
and UDP (User Datagram Protocol), of which TCP is prevalent since it provides
reliability for the established connection. However, you can find an application
of UDP in querying the DNS server to get the IP address for a domain name.
Step 1 (SYN): In the first step, the client wants to establish a connection
with the server, so it sends a segment with the SYN (Synchronize Sequence
Number) flag set, which informs the server that the client intends to start
communication, and with what sequence number its segments will start.
Step 2 (SYN + ACK): The server responds to the client request with the SYN
and ACK bits set. The acknowledgement (ACK) signifies the response to the
segment it received, and SYN signifies with what sequence number the
server's segments will start.
Step 3 (ACK): In the final part, the client acknowledges the response of the
server, and both establish a reliable connection over which the actual data
transfer begins.
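The handshake itself is carried out by the operating system's TCP stack; the following sketch merely triggers it on the loopback interface using Python's standard socket module (the OS picks a free local port):

```python
import socket
import threading

# The kernel's TCP stack performs the SYN -> SYN-ACK -> ACK exchange when
# connect() is called; this sketch just triggers it on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 0))        # port 0 lets the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def accept_once():
    conn, _addr = server.accept()    # returns once the handshake completes
    conn.close()

t = threading.Thread(target=accept_once)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(('127.0.0.1', port))  # the three-way handshake happens here
ok = client.getpeername()[1] == port # connection is established
client.close()
t.join()
server.close()
print('connected:', ok)
```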
[ ] TCP Timers
TCP uses several timers to ensure that excessive delays are not encountered
during communications. Several of these timers are elegant, handling problems
that are not immediately obvious at first analysis. Each of the timers used by
TCP is examined in the following sections, which reveal its role in ensuring data
is properly sent from one connection to another.
TCP implementation uses four timers –
Retransmission Timer – To retransmit lost segments, TCP uses a
retransmission timeout (RTO). When TCP sends a segment the timer starts,
and it stops when the acknowledgment is received. If the timer expires, a
timeout occurs and the segment is retransmitted. To calculate the
retransmission timeout we first need to calculate the RTT (round-trip
time).
RTT is of three types –
o Measured RTT(RTTm) – The measured round-trip time for a segment is the
time required for the segment to reach the destination and be
acknowledged, although the acknowledgement may include other segments.
o Smoothed RTT(RTTs) – It is the weighted average of RTTm. RTTm is
likely to change and its fluctuation is so high that a single
measurement cannot be used to calculate RTO.
Initially -> no value
After the first measurement -> RTTs = RTTm
After each subsequent measurement -> RTTs = (1 − t)·RTTs + t·RTTm
Note: t = 1/8 (default if not given)
o Deviated RTT(RTTd) – Most implementations do not use RTTs alone, so the
RTT deviation is also calculated to find the RTO. After the first
measurement RTTd = RTTm/2; after each subsequent measurement
RTTd = (1 − k)·RTTd + k·|RTTs − RTTm|, with k = 1/4 by default. The
timeout is then RTO = RTTs + 4·RTTd.
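The RTT estimation steps above can be sketched as a small estimator (a sketch following the RFC 6298-style calculation; the sample RTT value is hypothetical, t is written as alpha, and the deviation weight k = 1/4 is assumed):

```python
def rto_estimator(samples, alpha=1/8, beta=1/4):
    """RTO from measured RTTs (RTTm), RFC 6298-style.

    RTTs: smoothed RTT; RTTd: RTT deviation; RTO = RTTs + 4*RTTd.
    """
    rtts = rttd = rto = None
    for rttm in samples:
        if rtts is None:                          # first measurement
            rtts, rttd = rttm, rttm / 2
        else:
            rttd = (1 - beta) * rttd + beta * abs(rtts - rttm)
            rtts = (1 - alpha) * rtts + alpha * rttm
        rto = rtts + 4 * rttd
    return rtts, rttd, rto

# Hypothetical single measurement of 1.0 s:
print(rto_estimator([1.0]))   # (1.0, 0.5, 3.0)
```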
In the diagram given, there is a fast sender and a slow receiver. The following
points explain how the messages will overflow after a certain interval of time.
In the diagram, the receiver is receiving the message sent by the sender at
the rate of 5 messages per second while the sender is sending the messages
at the rate of 10 messages per second.
When the sender sends the message to the receiver, it gets into the network
queue of the receiver.
Once the user reads the message from the application, the message gets
cleared from the queue and the space becomes free.
According to the mentioned speeds of the sender and receiver, the receiver
queue shortens and the free buffer space reduces at the rate of 5 messages
per second. Since the receiver buffer can accommodate 200 messages, the
receiver buffer becomes full in 40 seconds.
So, after 40 seconds, the messages will start dropping as there will be no
space remaining for the incoming messages.
This is why flow control becomes important for TCP protocol for data transfer and
communication purposes.
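The arithmetic behind the 40-second figure can be checked directly:

```python
# Numbers from the example above: the queue grows at the difference of rates.
send_rate = 10        # messages per second, sender
recv_rate = 5         # messages per second, receiver drains its buffer
buffer_size = 200     # receiver buffer capacity, in messages

fill_rate = send_rate - recv_rate            # net growth: 5 messages/second
seconds_until_full = buffer_size / fill_rate
print(seconds_until_full)                    # 40.0
```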
How Does Flow Control in TCP Work?
When the data is sent on the network, this is what normally happens in the
network layer.
The sender writes the data to a socket and sends it to the transport layer which
is TCP in this case. The transport layer will then wrap this data and send it to
the network layer which will route it to the receiving node.
The TCP stores the data that needs to be sent in the send buffer and the data to
be received in the receive buffer. Flow control makes sure that no more packets
are sent by the sender once the receiver’s buffer is full as the messages will be
dropped and the receiver won’t be able to handle them. In order to control the
amount of data sent by TCP, the receiver creates a buffer, which is also known
as the Receive Window.
TCP sends an ACK every time it receives a data packet, acknowledging that the
packet was received successfully, and along with this ACK it sends the current
receive window value so that the sender knows how much data it may send.
The Sliding Window
The sliding window is used in TCP to control the number of bytes a channel can
accommodate. It is the number of bytes that were sent but not acknowledged. This
is done in TCP by using a window in which packets in sequence are sent and
authorized. When the sending host receives the ACK from the receiving host about
the packets the window size is incremented in order to allow new packets to come
in. There are several techniques used by the receiver window including go-back-
n and selective repeat but the fundamental of the communication remains the same.
Receive Window
TCP flow control is maintained on the sender side through the receive window,
which tracks the amount of space left vacant inside the buffer on the receiver
side. The receive window is calculated as
rwnd = ReceiveBuffer − (LastByteReceived − LastByteReadByApplication).
The window is constantly updated and reported via the window field of the TCP
header. Some of the important terms for the TCP receive window are
receiveBuffer, receiveWindow, lastByteRead, and lastByteReceived.
Whenever the receive buffer is full, the receiver sends receiveWindow = 0 to the
sender. If the receiver later drains its buffer but has no data of its own to
send, there is nothing to carry a fresh window advertisement, which stalls the
sender's application. To solve this problem, TCP has the sender periodically
transmit a segment carrying a single byte of data to the receiver. This puts
minimal strain on the network while continually checking the status of the
buffer on the receiving side, so that as soon as buffer space frees up, an ACK
with the new window is returned.
In this, a control mechanism is adopted to ensure that the rate of incoming data
is not greater than its consumption. This mechanism relies on the window field of
the TCP header and provides reliable data transport over a network.
The Persist Timer
From the above condition, there might be a possibility of deadlock. After the
receiver shows a zero window, if the ACK message is not sent by the receiver to
the sender, it will never know when to start sending the data. This situation in
which the sender is waiting for a message to start sending the data and the
receiver is waiting for more incoming data is called a deadlock condition in flow
control. In order to solve this problem, whenever the TCP receives the zero
window message, it starts a persistent timer that will send small packets to the
receiver periodically. This is also called WindowProbe.
Conclusion
The flow control mechanism in TCP ensures that the sender does not send
more data than the receiver can handle.
With every ACK message at the receiver, it advertises the current receive
window.
Receive window is spare space in the receiver buffer whose formula is given
by: rwnd = ReceiveBuffer – (LastByteReceived – LastByteReadByApplication);
In TCP, a sliding window is used to ensure that no more bytes are
outstanding than the window advertised by the receiver.
When the window size is zero, the persist timer starts and TCP will stop
the data transmission.
WindowProbe message is sent in small packets of data periodically to the
receiver.
When it receives a non-zero window size, it resumes its transmission.
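The receive-window formula from the conclusion can be written out directly (the buffer size and byte counts below are hypothetical):

```python
def rwnd(receive_buffer, last_byte_received, last_byte_read):
    """rwnd = ReceiveBuffer - (LastByteReceived - LastByteReadByApplication)"""
    return receive_buffer - (last_byte_received - last_byte_read)

# Hypothetical values: 64 KiB buffer, 50,000 bytes received, 20,000 read
print(rwnd(65536, 50_000, 20_000))   # 35536
```

When the advertised value reaches 0, the sender must stop and fall back on the persist timer described above.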
MODULE 6
Short Notes :
[ ] DNS (Domain Name Server)
The Domain Name System (DNS) is like the internet’s phone book. It helps you find
websites by translating easy-to-remember names (like www.example.com) into the
numerical IP addresses (like 192.0.2.1) that computers use to locate each other
on the internet. Without DNS, you would have to remember long strings of numbers
to visit your favorite websites.
The Domain Name System (DNS) provides hostname-to-IP-address translation services.
DNS is a distributed database implemented in a hierarchy of name servers. It is
an application layer protocol for message exchange between clients and servers.
It is required for the functioning of the Internet.
What is the Need for DNS?
Every host is identified by an IP address, but remembering numbers is very
difficult for people; also, IP addresses are not static. Therefore, a mapping is
required from domain names to IP addresses. DNS is used to convert the domain
name of a website to its numerical IP address.
Types of Domain
There are various kinds of domains:
Generic Domains: .com(commercial), .edu(educational), .mil(military),
.org(nonprofit organization), .net(similar to commercial) all these are
generic domains.
Country Domain: .in (India) .us .uk
Inverse Domain: used when we want to find the domain name of a website from
its IP address (IP-to-domain-name mapping). DNS provides both mappings; for
example, to find the IP address of geeksforgeeks.org we can type
nslookup www.geeksforgeeks.org
Organization of Domain
It is very difficult to find the IP address associated with a website because
there are millions of websites, and we should be able to resolve any of them
immediately. There should not be long delays, so the organization of the
database is very important.
Name-to-Address Resolution
Hierarchy of Name Servers:
Root Name Servers: A root server is contacted by name servers that cannot
resolve a name. It contacts the authoritative name server if the name
mapping is not known, then gets the mapping and returns the IP address to
the host.
Top-level Domain (TLD) Server: It is responsible for com, org, edu, etc,
and all top-level country domains like uk, fr, ca, in, etc. They have info
about authoritative domain servers and know the names and IP addresses of
each authoritative name server for the second-level domains.
Authoritative Name Servers are the organization’s DNS servers, providing
authoritative hostnames to IP mapping for organization servers. It can be
maintained by an organization or service provider. In order to reach
cse.dtu.in we have to ask the root DNS server, then it will point out to
the top-level domain server and then to the authoritative domain name
server which actually contains the IP address. So the authoritative domain
server will return the associative IP address.
Domain Name Server
The client machine sends a request to the local name server; if the local server
does not find the address in its database, it sends a request to the root name
server, which in turn routes the query to a top-level domain (TLD) or
authoritative name server.
authoritative name server. The root name server can also contain some hostName to
IP address mappings. The Top-level domain (TLD) server always knows who the
authoritative name server is. So finally the IP address is returned to the local
name server which in turn returns the IP address to the host.
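The resolution path described above is normally hidden behind a single library call. A minimal sketch using Python's standard socket module (localhost is used so the lookup needs no network access):

```python
import socket

# getaddrinfo() drives the resolver chain described above (local name server,
# then root -> TLD -> authoritative servers). 'localhost' is used here so the
# sketch works without network access.
results = socket.getaddrinfo('localhost', None, family=socket.AF_INET)
addresses = {info[4][0] for info in results}
print(addresses)   # typically {'127.0.0.1'}
```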
History of HTTP
Tim Berners-Lee and his team at CERN are indeed credited with inventing the
original HTTP protocol.
HTTP version 0.9 was the initial version introduced in 1991.
HTTP version 1.0 followed in 1996 with the introduction of RFC 1945.
HTTP version 1.1 was introduced in January 1997 with RFC 2068, later
refined in RFC 2616 in June 1999.
HTTP version 2.0 was specified in RFC 7540 and published on May 14, 2015.
HTTP version 3.0, also known as HTTP/3, runs over the QUIC transport
protocol (originally developed at Google) and is designed to improve web
performance. It was standardized by the IETF in RFC 9114 (June 2022).
Methods of HTTP
GET: Used to retrieve data from a specified resource. It should have no
side effects and is commonly used for fetching web pages, images, etc.
POST: Used to submit data to be processed by a specified resource. It is
suitable for form submissions, file uploads, and creating new resources.
PUT: Used to update or create a resource on the server. It replaces the
entire resource with the data provided in the request body.
PATCH: Similar to PUT but used for partial modifications to a resource. It
updates specific fields of a resource rather than replacing the entire
resource.
DELETE: Used to remove a specified resource from the server.
HEAD: Similar to GET but retrieves only the response headers, useful for
checking resource properties without transferring the full content.
OPTIONS: Used to retrieve the communication options available for a
resource, including supported methods and headers.
TRACE: Used for debugging purposes to echo the received request back to the
client, though it's rarely used due to security concerns.
CONNECT: Used to establish a tunnel to the server through an HTTP proxy,
commonly used for SSL/TLS connections.
HTTP Request/Response:
HTTP is a request-response protocol, which means that for every request sent by a
client (typically a web browser), the server responds with a corresponding
response. The basic flow of an HTTP request-response cycle is as follows:
Client sends an HTTP request: The client (usually a web browser) initiates
the process by sending an HTTP request to the server. This request includes
a request method (GET, POST, PUT, DELETE, etc.), the target URI (Uniform
Resource Identifier, e.g., a URL), headers, and an optional request body.
Server processes the request: The server receives the request and processes
it based on the requested method and resource. This may involve retrieving
data from a database, executing server-side scripts, or performing other
operations.
Server sends an HTTP response: After processing the request, the server
sends an HTTP response back to the client. The response includes a status
code (e.g., 200 OK, 404 Not Found), response headers, and an optional
response body containing the requested data or content.
Client processes the response: The client receives the server's response
and processes it accordingly. For example, if the response contains an HTML
page, the browser will render and display it. If it's an image or other
media file, the browser will display or handle it appropriately.
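The request-response cycle can be made concrete by building a raw HTTP/1.1 request and parsing a canned response (a sketch; the host, path, and response body are made-up examples):

```python
# A raw HTTP/1.1 exchange in miniature: the request a client would send, and
# a canned response standing in for the server's reply.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Connection: close\r\n"
    "\r\n"                       # blank line ends the header section
)

canned_response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "Content-Length: 14\r\n"
    "\r\n"
    "<h1>Hello</h1>"
)

# Client-side parsing: split headers from body, then read the status line.
head, _, body = canned_response.partition("\r\n\r\n")
version, status, reason = head.split("\r\n")[0].split(" ", 2)
print(version, status, reason)   # HTTP/1.1 200 OK
```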
Features
Stateless: Each request is independent, and the server doesn't retain
previous interactions' information.
Text-Based: Messages are in plain text, making them readable and
debuggable.
Client-Server Model: Follows a client-server architecture for requesting
and serving resources.
Request-Response: Operates on a request-response cycle between clients and
servers.
Request Methods: Supports various methods like GET, POST, PUT, DELETE for
different actions on resources.
Advantages
Platform independence: Works on any operating system
Compatibility: Compatible with various protocols and technologies
Efficiency: Optimized for performance
Security: Supports encryption for secure data transfer
Disadvantages
Lack of security: Vulnerable to attacks like man-in-the-middle
Performance issues: Can be slow for large data transfers
Statelessness: Requires additional mechanisms for maintaining state
SMTP
SMTP Protocol
The SMTP model is of two types:
End-to-End Method
Store-and-Forward Method
The end-to-end model is used to communicate between different organizations
whereas the store-and-forward method is used within an organization. An SMTP
client that wants to send mail will contact the destination host's SMTP server
directly to deliver the mail. The SMTP server keeps the mail until it is
successfully copied to the receiver's SMTP.
The client SMTP is the one that initiates the session so let us call it the
client-SMTP and the server SMTP is the one that responds to the session request
so let us call it receiver-SMTP. The client-SMTP will start the session and the
receiver SMTP will respond to the request.
Model of SMTP System
In the SMTP model the user deals with a user agent (UA), for example Microsoft
Outlook, Netscape, Mozilla, etc. To exchange mail using TCP, an MTA is used. The
user sending the mail doesn’t have to deal with MTA as it is the responsibility
of the system admin to set up a local MTA. The MTA maintains a small queue of
mail so that it can schedule repeat delivery of mail in case the receiver is not
available. The MTA delivers the mail to the mailboxes and the information can
later be downloaded by the user agents.
Components of SMTP
Mail User Agent (MUA): It is a computer application that helps you in
sending and retrieving mail. It is responsible for creating email messages
for transfer to the mail transfer agent(MTA).
Mail Submission Agent (MSA): It is a computer program that receives mail
from a Mail User Agent(MUA) and interacts with the Mail Transfer Agent(MTA)
for the transfer of the mail.
Mail Transfer Agent (MTA): It is software that has the work to transfer
mail from one system to another with the help of SMTP.
Mail Delivery Agent (MDA): A mail Delivery agent or Local Delivery Agent is
basically a system that helps in the delivery of mail to the local system.
How does SMTP Work?
Communication between the sender and the receiver: The sender’s user agent
prepares the message and sends it to the MTA. The MTA’s responsibility is
to transfer the mail across the network to the receiver’s MTA. To send
mail, a system must have a client MTA, and to receive mail, a system must
have a server MTA.
Sending Emails: Mail is sent by a series of request and response messages
between the client and the server. The message which is sent across
consists of a header and a body. A null line is used to terminate the mail
header and everything after the null line is considered the body of the
message, which is a sequence of ASCII characters. The message body contains
the actual information read by the recipient.
Receiving Emails: The user agent on the server side checks the mailboxes at
particular time intervals. If any mail is received, it informs the user.
When the user tries to read the mail, it displays a list of emails with a
short description of each mail in the mailbox. By selecting any of the
mails, the user can view its contents on the terminal.
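The header/null-line/body structure described above can be sketched with Python's standard email package; the addresses, and the mail server in the commented-out submission step, are hypothetical:

```python
from email.message import EmailMessage

# Build an RFC 5322 message: header lines, a null (blank) line, then the body.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "CN notes"
msg.set_content("The body follows the null line.")

wire = msg.as_string()
header_part, _, body_part = wire.partition("\n\n")   # null line splits the two
print("Subject: CN notes" in header_part, body_part.strip())

# Submitting it would be one smtplib call against an MTA:
# import smtplib
# with smtplib.SMTP("mail.example.com", 587) as s:   # hypothetical MTA
#     s.send_message(msg)
```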
What is an SMTP Envelope?
Purpose
o The SMTP envelope contains information that guides email
delivery between servers.
o It is distinct from the email headers and body and is not visible to
the email recipient.
Contents of the SMTP Envelope
o Sender Address: Specifies where the email originates.
o Recipient Addresses: Indicates where the email should be delivered.
o Routing Information: Helps servers determine the path for email
delivery.
Comparison to Regular Mail
o Think of the SMTP envelope as the address on a physical envelope for
regular mail.
o Just like an envelope guides postal delivery, the SMTP envelope
directs email servers on where to send the email.
Working of DHCP
The 8 DHCP Messages
1. DHCP Discover Message: This is the first message generated in the
communication process between the server and the client. This message is
generated by the client host in order to discover whether any DHCP server is
present in the network. This message is broadcast to all devices in the network
to find the DHCP server. This message is 342 or 576 bytes long.
2. DHCP Offer Message: The server responds with an offer. Here the source IP
address is 172.16.32.12 (the server's IP
address in the example), the destination IP address is 255.255.255.255 (broadcast
IP address), the source MAC address is 00AA00123456, the destination MAC address
is 00:11:22:33:44:55 (client’s MAC address). Here, the offer message is broadcast
by the DHCP server therefore destination IP address is the broadcast IP address
and destination MAC address is 00:11:22:33:44:55 (client’s MAC address)and the
source IP address is the server IP address and the MAC address is the server MAC
address.
Also, the server has provided the offered IP address 192.16.32.51 and a lease
time of 72 hours(after this time the entry of the host will be erased from the
server automatically). Also, the client identifier is the PC MAC address
(08002B2EAF2A) for all the messages.
3. DHCP Request Message: When a client receives an offer message, it responds by
broadcasting a DHCP request message. The client will produce a gratuitous ARP in
order to find if there is any other host present in the network with the same IP
address. If there is no reply from another host, then there is no host with the
same TCP configuration in the network and the message is broadcasted to the
server showing the acceptance of the IP address. A Client ID is also added to
this message.
4. DHCP Acknowledgment Message: Now the server will make an entry of the client
host with the offered IP address
and lease time. This IP address will not be provided by the server to any other
host. The destination MAC address is 00:11:22:33:44:55 (client’s MAC address) and
the destination IP address is 255.255.255.255 and the source IP address is
172.16.32.12 and the source MAC address is 00AA00123456 (server MAC address).
5. DHCP Negative Acknowledgment Message: Whenever a DHCP server receives a
request for an IP address that is invalid according to the scopes that are
configured, it sends a DHCP Nak message to the client, e.g., when the server has
no unused IP address or the pool is empty.
6. DHCP Decline: If the DHCP client determines the offered configuration
parameters are different or invalid, it sends a DHCP decline message to the
server. When there is a reply to the gratuitous ARP by any host to the client,
the client sends a DHCP decline message to the server showing the offered IP
address is already in use.
7. DHCP Release: A DHCP client sends a DHCP release packet to the server to
release the IP address and cancel any remaining lease time.
8. DHCP Inform: If a client has obtained an IP address manually, the client uses
a DHCP inform message to obtain other local configuration parameters, such
as domain name. In reply to the DHCP inform message, the DHCP server generates a
DHCP ack message with a local configuration suitable for the client without
allocating a new IP address. This DHCP ack message is unicast to the client.
Note – All the messages can also be unicast by the DHCP relay agent if the
server is present in a different network.
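The Discover-Offer-Request-Ack (DORA) exchange can be sketched as a toy message trace (no real network traffic; the addresses reuse the example values above):

```python
# A toy trace of the four core DHCP messages (DORA). The addresses reuse the
# example values above; nothing is sent on a real network.
SERVER_IP, BROADCAST = "172.16.32.12", "255.255.255.255"

def dora(pool):
    """Return the message trace and the leased address (None if pool empty)."""
    trace = [("DHCPDISCOVER", "0.0.0.0", BROADCAST)]       # client broadcast
    if not pool:
        trace.append(("DHCPNAK", SERVER_IP, BROADCAST))    # pool exhausted
        return trace, None
    offered = pool.pop(0)
    trace.append(("DHCPOFFER", SERVER_IP, BROADCAST))      # server offers an IP
    trace.append(("DHCPREQUEST", "0.0.0.0", BROADCAST))    # client accepts it
    trace.append(("DHCPACK", SERVER_IP, BROADCAST))        # server confirms lease
    return trace, offered

trace, leased = dora(["192.16.32.51"])
print([m for m, *_ in trace], leased)
```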
Security Considerations for Using DHCP
To make sure your DHCP servers are safe, consider these DHCP security issues:
Limited IP Addresses : A DHCP server can only offer a set number of IP
addresses. This means attackers could flood the server with requests,
causing essential devices to lose their connection.
Fake DHCP Servers : Attackers might set up fake DHCP servers to give out
fake IP addresses to devices on your network.
DNS Access : When users get an IP address from DHCP, they also get DNS
server details. This could potentially allow them to access more data than
they should. It’s important to restrict network access, use firewalls, and
secure connections with VPNs to protect against this.
Protection Against DHCP Starvation Attack
A DHCP starvation attack happens when a hacker floods a DHCP server with requests
for IP addresses. This overwhelms the server, making it unable to assign
addresses to legitimate users. The hacker can then block access for authorized
users and potentially set up a fake DHCP server to intercept and manipulate
network traffic, which could lead to a man-in-the-middle attack.
Reasons Why Enterprises Must Automate DHCP
Automating your DHCP system is crucial for businesses because it reduces the time
and effort your IT team spends on manual tasks. For instance, DHCP-related issues
like printers not connecting or subnets not working with the main network can be
avoided automatically.
Automated DHCP also allows your operations to grow smoothly. Instead of hiring
more staff to handle tasks that automation can manage, your team can focus on
other important areas of business growth.
Advantages
Centralized management of IP addresses.
Centralized and automated TCP/IP configuration .
Ease of adding new clients to a network.
Reuse of IP addresses reduces the total number of IP addresses that are
required.
The efficient handling of IP address changes for clients that must be
updated frequently, such as those for portable devices that move to
different locations on a wireless network.
Simple reconfiguration of the IP address space on the DHCP server without
needing to reconfigure each client.
The DHCP protocol gives the network administrator a single, centralized
point from which to configure the network, making it easy to handle new
users and to reuse IP addresses.
Disadvantages
IP conflict can occur.
A client accepts an offer from any DHCP server. If a rogue server is in
the vicinity, the client may connect to it and receive invalid
configuration data.
Clients cannot access the network when no DHCP server is reachable.
The machine name does not change when a new IP address is assigned.
Conclusion
In conclusion, DHCP is a technology that simplifies network setup by
automatically assigning IP addresses and network configurations to devices. While
DHCP offers convenience, it’s important to manage its security carefully. Issues
such as IP address exhaustion and potential data access through DNS settings
highlight the need for robust security measures like firewalls and VPNs to
protect networks from unauthorized access and disruptions. DHCP remains essential
for efficiently managing network connections while ensuring security against
potential risks.
FTP Session
When an FTP session is started between a client and a server, the client
initiates a control TCP connection with the server side and sends control
information over this connection. When the server receives a transfer request,
it opens a separate data connection to the client side, while the control
connection remains active throughout the user session. Unlike HTTP, which is
stateless, FTP maintains state about its user throughout the session.
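The split between control and data connections shows up concretely in passive mode: the server's 227 reply to the PASV command encodes the host and port the client must dial for the data connection, with the port split into a high and a low byte. A minimal parser (the helper name `parse_pasv` is our own) might look like:

```python
import re

def parse_pasv(reply: str) -> tuple[str, int]:
    """Extract (host, port) from a 227 'Entering Passive Mode' reply."""
    # A 227 reply looks like: "227 Entering Passive Mode (192,168,1,2,19,137)."
    m = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    if not m:
        raise ValueError("not a PASV reply")
    nums = [int(x) for x in m.groups()]
    host = ".".join(map(str, nums[:4]))
    port = nums[4] * 256 + nums[5]   # high byte * 256 + low byte
    return host, port

# parse_pasv("227 Entering Passive Mode (192,168,1,2,19,137).")
# gives ("192.168.1.2", 5001), since 19*256 + 137 = 5001.
```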
FTP Clients
FTP works on a client-server model. The FTP client is a program that runs on the
user’s computer to enable the user to talk to and get files from remote
computers. It is a set of commands that establishes the connection between two
hosts, helps to transfer the files, and then closes the connection.
Some of the commands are:
get filename (retrieve a file from the server)
mget filenames (retrieve multiple files from the server)
ls (list the files in the current directory of the server)
There are also built-in FTP programs with graphical interfaces, which make it
easier to transfer files without remembering the commands.
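Python's standard-library ftplib client issues the same operations under the hood (the protocol verb RETR corresponds to get, and NLST to ls). A hedged sketch, assuming a reachable server and credentials of your own; the function name `fetch_file` and the host shown in the usage comment are illustrative only:

```python
import ftplib

def fetch_file(host: str, user: str, password: str,
               remote_name: str, local_name: str) -> None:
    """Download one file: the 'get' command from the notes."""
    with ftplib.FTP(host) as ftp:              # control connection on port 21
        ftp.login(user, password)
        print(ftp.nlst())                      # 'ls': list files on the server
        with open(local_name, "wb") as f:      # the data connection carries the file
            ftp.retrbinary("RETR " + remote_name, f.write)

# Usage (requires a reachable FTP server):
#   fetch_file("ftp.example.com", "user", "secret", "report.txt", "report.txt")
```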
FTP Data Types
The data type of a file, which determines how the file is represented overall, is
the first piece of information that can be provided about it. The FTP standard
specifies the following four categories of data:
ASCII: Describes an ASCII text file in which each line is indicated by the
previously mentioned type of end-of-line marker.
EBCDIC: For files that use IBM’s EBCDIC character set, this type is
conceptually identical to ASCII.
Image: The binary ("black box") mode; the file has no formal internal
structure and is transferred one byte at a time without any processing.
Local: Files containing data in logical bytes with a bit count other than
eight can be handled by this data type.
Advantages of FTP
File sharing is one advantage of FTP: files can be transferred between
two machines over the network.
Speed is one of the main benefits of FTP.
Interrupted transfers can be resumed rather than restarted, so the entire
file need not be fetched again, which makes FTP more efficient.
A username and password are required to log in to the FTP server, so FTP
might be considered somewhat more secure.
Files can be moved back and forth via FTP. For example, a firm manager can
distribute information to every employee, and they can all reply on the
same server.
Disadvantages of FTP
File size limit is a drawback of FTP: only files up to 2 GB can be
transferred.
Sending a file to more than one receiver at a time is not supported.
FTP does not encrypt the data; this is one of its biggest drawbacks.
Login IDs and passwords are sent in plain text, so they can be captured
by attackers.
[ ] TelNet
TELNET stands for Teletype Network. It is a client/server application
protocol that provides access to virtual terminals of remote systems on local
area networks or the Internet. The local computer uses a telnet client program
and the remote computers use a telnet server program.
What is Telnet?
TELNET is a protocol that enables a user on one computer to log in to another
computer. It is the standard TCP/IP protocol for virtual terminal service,
defined in RFC 854, and runs over TCP port 23. The computer which starts the
connection is known as the local computer; the computer which accepts the
connection is known as the remote computer. During telnet operation, whatever is
being performed on the remote computer will be displayed by the local computer.
Telnet operates on a client/server principle.
How TELNET Works?
Client-Server Interaction
o The Telnet client initiates the connection by sending requests to the
Telnet server.
o Once the connection is established, the client can send commands to
the server.
o The server processes these commands and responds accordingly.
Character Flow
o When the user types on the local computer, the local operating system
accepts the characters.
o The Telnet client transforms these characters into a universal
character set called Network Virtual Terminal (NVT) characters.
o These NVT characters travel through the Internet to the remote
computer via the local TCP/IP protocol stack.
o The remote Telnet server converts these characters into a format
understandable by the remote computer.
o The remote operating system receives the characters from a pseudo-
terminal driver and passes them to the appropriate application
program.
Network Virtual Terminal (NVT)
o NVT is a virtual terminal in Telnet that provides a common structure
shared by different types of real terminals.
o It ensures communication compatibility between various terminals with
different operating systems.
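Alongside the NVT data bytes, Telnet carries its control messages in-band, each introduced by the IAC byte (255, per RFC 854). A sketch of how a client could separate printable data from option negotiation; the function name `strip_telnet_commands` is our own:

```python
IAC, SB, SE = 255, 250, 240          # Interpret As Command, subnegotiation begin/end
NEGOTIATE = (251, 252, 253, 254)     # WILL, WONT, DO, DONT each carry one option byte

def strip_telnet_commands(data: bytes) -> bytes:
    """Remove IAC command sequences, keeping only NVT data bytes."""
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] != IAC:
            out.append(data[i]); i += 1
        elif i + 1 < len(data) and data[i + 1] == IAC:
            out.append(IAC); i += 2                    # IAC IAC = escaped literal 0xFF
        elif i + 1 < len(data) and data[i + 1] == SB:
            end = data.find(bytes([IAC, SE]), i + 2)   # skip whole subnegotiation
            i = len(data) if end == -1 else end + 2
        elif i + 1 < len(data) and data[i + 1] in NEGOTIATE:
            i += 3                                     # IAC + verb + option
        else:
            i += 2                                     # two-byte command (e.g. IAC NOP)
    return bytes(out)

# strip_telnet_commands(b"\xff\xfd\x18hello\xff\xf1!") drops the
# IAC DO TERMINAL-TYPE request and the IAC NOP, leaving b"hello!".
```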
Uses of TELNET
Remote Administration and Management
Network Diagnostics
Understanding Command-Line Interfaces
Accessing Bulletin Board Systems (BBS)
Automation and Scripting
Advantages of TELNET
It provides remote access to someone’s computer system.
Telnet gives the user full access to the remote system with few problems
in data transmission.
Telnet saves a lot of time.
Older systems can be connected to newer systems running different
operating systems.
Disadvantages of TELNET
As it is somewhat complex, beginners may find it difficult to understand.
Data is sent in plain text, so it is not secure.
Some capabilities are disabled when the remote and local devices are not
properly interlinked.
Marks per module in past papers:
MODULE       1    2    3    4    5    6
2024 May    15    5   15   75   10   10
2023 Dec    10   10   25   60   10   10
2023 May    15   10   25   35   15   20
2022 Dec     5   10   10   45   25   15
Last 4 Avg  15   10   25   55   15   15
2022 May    10   10   20   20   20   20
2019 Dec    15   10   35   25   15   20
2019 May    20   10   15   45   15   10
2018 Dec    30    5   15   70   20   10