
Unit – I

Introduction to Computer Networks


Computer Networks and distributed systems

Difference Between Computer Networks and Distributed Systems

A computer network is a group of interconnected computers that share resources and data. Distributed systems, while similar, consist of autonomous computers working together to perform tasks. These systems form the backbone of modern digital communication and processing. Yet, they serve different purposes and operate under different principles.

What are Computer Networks?


Computer networks connect multiple computers to share data, resources, and
communication efficiently. They enable devices to communicate regardless of their
physical or geographical locations. By linking computers through various mediums,
networks facilitate the flow of information across different platforms and users.
● Types of Networks: Networks vary by size and scope. Local Area Networks (LAN)
connect computers within a small area like an office building. Wide Area Networks
(WAN) cover broader geographic areas, such as cities or regions.
● Components: Essential components of computer networks include routers,
switches, hubs, and modems. These devices help direct data to appropriate
destinations, ensuring effective communication between computers.
● Protocols: Networks operate based on set rules or protocols. The most common
include TCP/IP, which guides how data is packaged and transmitted over the
network.
● Connectivity Media: Networks can be wired using cables like Ethernet or optical
fiber. They can also be wireless, using radio waves or infrared signals.
● Functionality: The primary function of computer networks is to enable resource
sharing, including files, printers, and internet connections. This sharing enhances
productivity and accessibility.

What are Distributed Systems?


Distributed systems are networks of independent computers that work together to
perform complex tasks. These systems appear as a single cohesive unit to users, even
though the processing is spread across multiple physical machines. This structure
allows distributed systems to handle large-scale computations efficiently and reliably.
Key elements of distributed systems include:
● Transparency: Distributed systems are designed to hide the complexity of
processes from the user. This makes the system appear as a single entity, despite
being a collection of independent components.
● Scalability: These systems can easily be scaled up by adding more machines.
Scalability improves performance and accommodates growth without disrupting
existing operations.
● Fault Tolerance: Distributed systems are resilient to failures. If one component fails,
the system can reroute tasks or replicate data to continue functioning without
significant downtime.
● Resource Sharing: Computers in a distributed system share resources such as
processing power and data storage. This sharing is managed seamlessly to
enhance overall system efficiency.
● Decentralization: Unlike traditional centralized systems, distributed systems do not
have a single central controller. Decisions and control are often spread across
various nodes, which enhances flexibility and resistance to attacks.


Differences between Computer Networks and Distributed Systems


Here are the key differences between computer networks and distributed systems:
● Purpose: The primary purpose of computer networks is to enable communication and resource sharing among devices. Distributed systems are designed to perform complex tasks by distributing the workload across multiple nodes.
● Control: Computer networks often have centralized devices for control, like routers or servers. Distributed systems operate with decentralized control, spreading functions across various nodes.
● Complexity: Computer networks are typically less complex, focusing mainly on connectivity and communication. Distributed systems are more complex, managing not just communication but also the computation process.
● Transparency: Transparency is not a primary concern in computer networks; users may be aware of the underlying network. Distributed systems provide transparency, making the distributed nature of the process invisible to the user.
● Scalability: While scalable, computer networks may require significant reconfiguration to handle growth. Distributed systems are inherently scalable, designed to easily add more resources without major changes.
● Fault Tolerance: Computer networks can be vulnerable to points of failure that might disrupt the entire network. Distributed systems are highly fault-tolerant, often designed to continue operation despite individual failures.
● Resource Utilization: In computer networks, resource sharing is limited to bandwidth, data storage, and peripheral devices. In distributed systems, resource sharing includes processing power and software, optimizing task execution.
Use Cases of Computer Networks
Below are the use cases of computer networks:
● Office Networks: Businesses use local area networks (LANs) to connect employee
computers and printers. This setup enhances collaboration and resource sharing
within the workplace.
● Internet Access: Home Wi-Fi networks provide Internet connectivity to various
devices. These networks allow multiple users to browse the web, stream videos, and
download files simultaneously.
● Data Centers: Data centers use networked servers to manage and store vast
amounts of data. These are crucial for hosting websites, backing up data, and cloud
storage services.
Use Cases of Distributed Systems
● Cloud Computing: Services like Amazon Web Services and Microsoft Azure use
distributed computing to offer scalable resources. Users can access computing
power and storage without managing physical servers.
● E-commerce Platforms: Websites like Amazon and eBay distribute their operations
across multiple servers and data centers. This distribution handles high traffic
volumes and transaction loads efficiently.
● Scientific Research: Projects like SETI@home use volunteer computers worldwide
to process large datasets for research. This collective processing power aids in
complex computations like space observations.

Challenges for Computer Networks and Distributed Systems


Both computer networks and distributed systems face a range of challenges that can
impact their efficiency, security, and scalability. These challenges are critical to address
to ensure that the networks and systems remain robust and capable of supporting user
needs effectively. Key challenges for computer networks and distributed systems are:
● Security Risks: Both systems are vulnerable to cyber threats. Securing vast
networks and distributed systems is complex.
● Complex Configuration: Setting up and maintaining these systems requires
sophisticated configuration and ongoing management.
● Scalability Issues: Although designed to scale, rapidly increasing demands can
strain both networks and systems.
● Interoperability: Ensuring different components work together seamlessly is often
challenging.
● Performance Bottlenecks: High traffic volumes can overwhelm network resources,
leading to performance issues.
● Cost: Expanding and upgrading infrastructure can be costly, especially for large-
scale deployments.
Classifications of computer networks

Types of Computer Networks

A computer network is a cluster of computers connected over a shared communication path that works to share resources from one computer to another, provided by or located on the network nodes.
What is a Computer Network?
A computer network is a system that connects many independent computers to share
information (data) and resources. The integration of computers and other different
devices allows users to communicate more easily. A computer network is a collection of
two or more computer systems that are linked together. A network connection can be
established using either cable or wireless media. Hardware and software are used to
connect computers and tools in any network.
Uses of Computer Networks
● Communicating using email, video, instant messaging, etc.
● Sharing devices such as printers, scanners, etc.
● Sharing files.
● Sharing software and operating programs on remote systems.
● Allowing network users to easily access and maintain information.
Types of Computer Networks
There are mainly five types of Computer Networks
1. Personal Area Network (PAN)
2. Local Area Network (LAN)
3. Campus Area Network (CAN)
4. Metropolitan Area Network (MAN)
5. Wide Area Network (WAN)
[Figure: Types of Computer Networks]
1. Personal Area Network (PAN)
PAN is the most basic type of computer network. It is a type of network designed to
connect devices within a short range, typically around one person. It allows your
personal devices, like smartphones, tablets, laptops, and wearables, to communicate
and share data with each other. A PAN offers a network range of 1 to 100 meters, providing communication from person to device. Its transmission speed is very high, with very easy maintenance and very low cost. It uses Bluetooth, IrDA, and Zigbee as technologies. Examples of PAN devices are USB peripherals, computers, phones, tablets, printers, PDAs, etc.
[Figure: Personal Area Network (PAN)]

Types of PAN

● Wireless Personal Area Networks: Wireless Personal Area Networks are created
by simply utilising wireless technologies such as WiFi and Bluetooth. It is a low-
range network.
● Wired Personal Area Network: A wired personal area network is constructed using
a USB.

Advantages of PAN

● PAN is relatively flexible and provides high efficiency for short network ranges.
● It is easy to set up and relatively low in cost.
● It does not require frequent installation and maintenance.
● It is simple and portable.
● It needs fewer technical skills to use.

Disadvantages of PAN

● Low network coverage area/range.


● Limited to relatively low data rates.
● Devices from different vendors may not be compatible with each other.
● Built-in WPAN devices can be somewhat costly.

Applications of PAN

● Home and Offices


● Organizations and the Business sector
● Medical and Hospital
● School and College Education
● Military and Defense
2. Local Area Network (LAN)
LAN is the most frequently used network. A LAN is a computer network that connects
computers through a common communication path, contained within a limited area, that
is, locally. A LAN encompasses two or more computers connected through a server. The two important technologies involved in this network are Ethernet and Wi-Fi. It ranges up to 2 km and its transmission speed is very high, with easy maintenance and low cost. Examples of LAN are networking in a home, school, library, laboratory, college, office, etc.

[Figure: Local Area Network (LAN)]
Advantages of a LAN

● Privacy: A LAN is a private network, thus no outside regulatory body controls it, giving it privacy.
● High Speed: A LAN offers a much higher speed (around 100 Mbps) and data transfer rate compared to a WAN.
● Supports different transmission media: A LAN supports a variety of communication transmission media, such as Ethernet cable (thin cable, thick cable, and twisted pair), fiber, and wireless transmission.
● Inexpensive and Simple: A LAN usually has a low cost of installation, expansion and maintenance, and LAN installation is relatively easy with good scalability.

Disadvantages of LAN

● The initial setup cost of installing a Local Area Network is high because special software is required to set up a server.
● Communication devices like Ethernet cables, switches, hubs, routers, and cabling are costly.
● The LAN administrator can see and check the personal data files as well as the Internet history of each and every LAN user. Hence, the privacy of the users can be violated.
● LANs are restricted in size and cover only a limited area.
● Since all the data is stored on a single server computer, unauthorized access to that server can cause a serious data security threat.
3. Campus Area Network (CAN)
A CAN is bigger than a LAN but smaller than a MAN. This is a type of computer network that is usually used in places like schools or colleges. This network covers a limited geographical area, that is, it spreads across several buildings within a campus. CANs mainly use Ethernet technology, with a range from 1 km to 5 km. Its transmission speed is very high, with a moderate maintenance cost and moderate cost. Examples of CAN are networks that cover schools, colleges, buildings, etc.
[Figure: Campus Area Network (CAN)]

Advantages of CAN

● Speed: Communication within a CAN takes place over a Local Area Network (LAN), so the data transfer rate between systems is somewhat faster than over the Internet.
● Security: The campus network administrators take care of the network by continuous monitoring, tracking and limiting access. To protect the network from unauthorized access, a firewall is placed between the network and the Internet.
● Cost-effective: With a little effort and maintenance, the network works well, providing a fast data transfer rate with multi-departmental network access. It can be enabled wirelessly, where wiring and cabling costs can be reduced. So working within a campus using a CAN is cost-effective from a performance point of view.
4. Metropolitan Area Network (MAN)
A MAN is larger than a LAN but smaller than a WAN. This is the type of computer
network that connects computers over a geographical distance through a shared
communication path over a city, town, or metropolitan area. This network mainly uses
FDDI, CDDI, and ATM as the technology with a range from 5km to 50km. Its
transmission speed is average. It is difficult to maintain and it comes with a high cost.
Examples of MAN are networking in towns, cities, a single large city, a large area within
multiple buildings, etc.
[Figure: Metropolitan Area Network (MAN)]

Advantages of MAN

● MAN offers high-speed connectivity in which the speed ranges from 10-100 Mbps.
● The security level in MAN is high and strict as compared to WAN.
● It supports transmitting data in both directions concurrently because of its dual bus architecture.
● MAN can serve multiple users at a time with the same high-speed internet to all the
users.
● MAN allows for centralized management and control of the network, making it easier
to monitor and manage network resources and security.

Disadvantages of MAN

● The architecture of a MAN is quite complicated, hence it is hard to design and maintain.
● This network is highly expensive because it requires a high cost to set up fiber optics.
● It provides less fault tolerance.
● The data transfer rate in a MAN is low when compared to a LAN.
5. Wide Area Network (WAN)
WAN is a type of computer network that connects computers over a large geographical
distance through a shared communication path. It is not restrained to a single location
but extends over many locations. WAN can also be defined as a group of local area
networks that communicate with each other with a range above 50km. Here we use
Leased-Line & Dial-up technology. Its transmission speed is very low and it comes with
very high maintenance and very high cost. The most common example of WAN is the
Internet.

[Figure: Wide Area Network (WAN)]

Advantages of WAN

● It covers a large geographical area, which enhances the reach of an organisation to transmit data quickly and cheaply.
● The data can be stored in a centralised manner because of the remote access to data provided by a WAN.
● The travel charges that would otherwise be needed to cover the geographical area of work can be minimised.
● A WAN enables a user or organisation to connect with the world very easily and allows them to exchange data and do business at a global level.

Disadvantages of WAN

● Traffic congestion in a Wide Area Network is very high.
● The fault tolerance of a WAN is very low.
● Noise and errors are present in large amounts due to the multiple connection points.
● The data transfer rate is slow in comparison to a LAN because of the large distances and the high number of connected systems within the network.
Comparison between Different Computer Networks
● PAN (Personal Area Network): Technology: Bluetooth, IrDA, Zigbee; Range: 1 to 100 m; Transmission Speed: Very High; Ownership: Private; Maintenance: Very Easy; Cost: Very Low.
● LAN (Local Area Network): Technology: Ethernet and Wi-Fi; Range: Up to 2 km; Transmission Speed: Very High; Ownership: Private; Maintenance: Easy; Cost: Low.
● CAN (Campus Area Network): Technology: Ethernet; Range: 1 to 5 km; Transmission Speed: High; Ownership: Private; Maintenance: Moderate; Cost: Moderate.
● MAN (Metropolitan Area Network): Technology: FDDI, CDDI, ATM; Range: 5 to 50 km; Transmission Speed: Average; Ownership: Private or Public; Maintenance: Difficult; Cost: High.
● WAN (Wide Area Network): Technology: Leased Line, Dial-Up; Range: Above 50 km; Transmission Speed: Low; Ownership: Private or Public; Maintenance: Very Difficult; Cost: Very High.

Other Types of Computer Networks


● Wireless Local Area Network (WLAN)
● Storage Area Network (SAN)
● System-Area Network (SAN)
● Passive Optical Local Area Network (POLAN)
● Enterprise Private Network (EPN)
● Virtual Private Network (VPN)
● Home Area Network (HAN)
1. Wireless Local Area Network (WLAN)

WLAN is a type of computer network that acts as a local area network but makes use of
wireless network technology like Wi-Fi. This network doesn’t allow devices to
communicate over physical cables like in LAN but allows devices to communicate
wirelessly. The most common example of WLAN is Wi-Fi.

[Figure: Wireless Local Area Network (WLAN)]

2. Storage Area Network (SAN)

A SAN is a high-speed computer network that connects groups of storage devices to several servers. This network does not depend on a LAN or WAN. Instead, a SAN moves the storage resources off the common network onto its own dedicated high-speed network. A SAN provides access to block-level data storage. An example of a SAN is a network of disks accessed by a network of servers.
[Figure: Storage Area Network (SAN)]

3. Passive Optical Local Area Network (POLAN)

A POLAN is a type of computer network that is an alternative to a LAN. A POLAN uses optical splitters to split an optical signal from a single strand of single-mode optical fiber into multiple signals that serve users and devices. In short, a POLAN is a point-to-multipoint LAN architecture.

[Figure: Passive Optical Local Area Network (POLAN)]
4. Enterprise Private Network (EPN)

EPN is a type of computer network mostly used by businesses that want a secure
connection over various locations to share computer resources.

[Figure: Enterprise Private Network (EPN)]

5. Virtual Private Network (VPN)

A VPN is a type of computer network that extends a private network across the internet
and lets the user send and receive data as if they were connected to a private network
even though they are not. Through a virtual point-to-point connection users can access
a private network remotely. VPN protects you from malicious sources by operating as a
medium that gives you a protected network connection.
[Figure: Virtual Private Network (VPN)]

6. Home Area Network (HAN)

Many houses have more than one computer. To interconnect those computers and other peripheral devices, a network similar to a local area network (LAN) can be established within the home. Such a network, which allows a user to interconnect multiple computers and other digital devices within the home, is referred to as a Home Area Network (HAN). A HAN encourages sharing of resources, files, and programs within the network. It supports both wired and wireless communication.
[Figure: Home Area Network (HAN)]
Internetwork
An internetwork is defined as two or more computer networks (LANs, WANs, or network segments) that are connected by devices and configured with a local addressing scheme. This method is known as internetworking. There are two types of internetwork.
● Intranet: An internal network within an organization that enables employees to share
data, collaborate, and access resources. Intranets are not accessible to the public
and use private IP addresses.
● Extranet: Extranets extend the intranet to authorized external users, such as
business partners or clients. They provide controlled access to specific resources
while maintaining security.
Advantages of Computer Network
● Central Storage of Data: Files are stored in a central storage database, which makes them easy to access and available to everyone.
● Connectivity: A single connection can be routed to connect multiple computing
devices.
● Sharing of Files: Files and data can be easily shared among multiple devices which
helps in easily communicating among the organization.
● Security through Authorization: Computer Networking provides additional security
and protection of information in the system.
Disadvantages of Computer Network
● Virus and Malware: A virus is a program that can infect other programs by
modifying them. Viruses and Malware can corrupt the whole network.
● High Cost of Setup: The initial setup of Computer Networking is expensive
because it consists of a lot of wires and cables along with the device.
● Loss of Information: In case of a system failure, some data may be lost.
● Management of Network: Management of a network is somewhat complex for a person; it requires training for its proper use.
Conclusion
In conclusion, computer networks are essential components that connect various
computer devices in order to efficiently share data and resources. PAN, LAN, CAN,
MAN, and WAN networks serve a wide range of applications and purposes, each with
its own set of advantages and drawbacks. Understanding these networks and their
applications improves connectivity, data exchange, and resource utilization in a variety
of applications from personal use to global communications.

Network applications
Application of Computer Network

There are a variety of fields in computer networks that are used in industries. Some of
them are as follows:

1. Internet and World Wide Web

Computer networks give us the global Internet, which carries the World Wide Web and offers various features like access to websites, online services, and retrieval of information. With the help of the World Wide Web, we can browse, search, and access web pages and multimedia content.

2. Communication

With the help of computer networks, communication is also easy because we can do
email, instant messaging, voice and video calls and video conferencing, which helps us
to communicate with each other effectively. People can use these features in their
businesses and organizations to stay connected with each other.

3. File Sharing and Data Transfer

Data transfer and file sharing are made possible by networks that connect different
devices. This covers file sharing within a business setting, file sharing between personal
devices, and downloading/uploading of content from the internet.

4. Online gaming

Multiplayer online games use computer networks to link players from all over the world,
enabling online competitions and real-time gaming experiences.

5. Remote Access and Control

Networks enable users to access and control systems and devices from a distance.
This is helpful when accessing home automation systems, managing servers, and
providing remote IT support.

6. Social media

With the help of a computer network, we can use social media sites like Facebook, Twitter and Instagram, which help people set up their profiles, connect with others, and share content.

7. Cloud Computing

The provision of on-demand access to computing resources and services hosted in distant data centres relies on networks. Some examples of cloud computing are software as a service (SaaS), platform as a service (PaaS) and infrastructure as a service (IaaS).
8. Online Banking and E-Commerce

Online banking and e-commerce platforms, where customers conduct financial transactions and make online purchases, require secure computer networks.

9. Enterprise Networks

In Computer networks, we have some networks that are only used in businesses and
organizations so they can store data and share files and resources like printers,
scanners, etc.

10. Healthcare

With the help of computer networks in the health industry, we can share patient records
and store the records in the form of data that is easy and secure compared to the file
method. Networks are also necessary for telemedicine and remote patient monitoring.

11. Education

Schools use networks to access online courses, virtual classrooms, and other online
learning materials. Campuses of colleges and universities frequently have extensive
computer networks.

12. Transportation and Logistics

The transportation sector uses Computer Networks to manage and track shipments,
plan the best routes, and coordinate logistics activities.

13. Internet of Things (IoT) and Smart Homes

Through the Internet of Things (IoT), smart homes use networks to connect to and
manage a variety of devices, including thermostats, security cameras, and smart
appliances.

14. Scientific Research

To share data, work together on projects, and access high-performance computing resources for data analysis and scientific simulations, researchers use networks.

15. Government and Defense

With the help of computer networks, we can communicate, share data, and advance
national defence. Government agencies and the military rely on secure networks.

These are just a few instances of the many areas of our lives where computer networks
are used. Computer networks are fundamental to facilitating communication, teamwork,
and the effective exchange of knowledge and resources globally.
Network Hardware

Basic Network Hardware

The basic computer hardware components that are needed to set up a network are as
follows −

Network Cables

Network cables are the transmission media to transfer data from one device to another.
A commonly used network cable is category 5 cable with an RJ-45 connector.

Routers

A router is a connecting device that transfers data packets between different computer
networks. Typically, they are used to connect a PC or an organization’s LAN to a
broadband internet connection. They contain RJ-45 ports so that computers and other
devices can connect with them using network cables.
Repeaters, Hubs, and Switches

Repeaters, hubs and switches connect network devices together so that they can
function as a single segment.

A repeater receives a signal and regenerates it before re-transmitting so that it can travel longer distances.

A hub is a multiport repeater having several input/output ports, so that input at any port
is available at every other port.

A switch receives data from a port, uses packet switching to resolve the destination device, and then forwards the data to that particular destination, rather than broadcasting it like a hub.
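As a rough illustration of the forwarding logic described above, the following Python sketch simulates a learning switch: it remembers which port each source MAC address was last seen on and forwards a frame only out the port associated with the destination MAC, flooding (hub-like behaviour) when the destination is unknown. The frame fields, port numbers, and 4-port assumption are simplifications for the example, not a real switch implementation.

```python
# Minimal sketch of a learning switch's forwarding decision (simplified, assumed model).
mac_table = {}  # maps MAC address -> port it was last seen on

def handle_frame(in_port, src_mac, dst_mac):
    # Learn: remember which port the source address lives on.
    mac_table[src_mac] = in_port
    # Forward: send only to the known port, otherwise flood like a hub.
    if dst_mac in mac_table:
        return [mac_table[dst_mac]]                      # unicast to the learned port
    return [p for p in range(1, 5) if p != in_port]      # flood all other ports (4-port switch assumed)

print(handle_frame(1, "AA:AA:AA:AA:AA:01", "BB:BB:BB:BB:BB:02"))  # unknown dst -> flood [2, 3, 4]
print(handle_frame(2, "BB:BB:BB:BB:BB:02", "AA:AA:AA:AA:AA:01"))  # learned dst -> [1]
```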
Bridges

A bridge connects two separate Ethernet network segments. It forwards packets from
the source network to the destined network.

Gateways

A gateway connects entirely different networks that work upon different protocols. It is
the entry and the exit point of a network and controls access to other networks.
Network Interface Cards

A NIC is a component of a computer that connects it to a network. Network cards are of two types: internal network cards and external network cards.

Network Software

Network software encompasses a broad range of software used for the design, implementation, operation and monitoring of computer networks. Traditional networks were hardware-based with embedded software. With the advent of Software-Defined Networking (SDN), the software is separated from the hardware, thus making it more adaptable to the ever-changing nature of the computer network.

Functions of Network Software

● Helps to set up and install computer networks


● Enables users to have access to network resources in a seamless manner
● Allows administrations to add or remove users from the network
● Helps to define locations of data storage and allows users to access that data
● Helps administrators and security system to protect the network from data
breaches, unauthorized access and attacks on a network
● Enables network virtualizations

SDN Framework

The Software-Defined Networking framework has three layers, described below:

APPLICATION LAYER − SDN applications reside in the Application Layer. The applications convey their needs for resources and services to the control layer through APIs.
CONTROL LAYER − The Network Control Software, bundled into the Network Operating System, lies in this layer. It provides an abstract view of the underlying network infrastructure. It receives the requirements of the SDN applications and relays them to the network components.
INFRASTRUCTURE LAYER − Also called the Data Plane Layer, this layer contains the actual network components. The network devices reside in this layer and expose their network capabilities through the control-to-data-plane interface.

Reference models: OSI, TCP/IP

In computer networks, reference models give a conceptual framework that standardizes communication between heterogeneous networks.

The two popular reference models are −

OSI Model
TCP/IP Protocol Suite

OSI Model

OSI or Open System Interconnection model was developed by the International Organization for Standardization (ISO). It gives a layered networking framework that conceptualizes how communication should be done between heterogeneous systems. It has seven interconnected layers.

The seven layers of the OSI Model are a physical layer, data link layer, network layer,
transport layer, session layer, presentation layer, and application layer. The hierarchy is
depicted in the following figure −
TCP / IP PROTOCOL SUITE

TCP stands for Transmission Control Protocol, while IP stands for Internet Protocol. It is
a suite of protocols for communication structured in four layers. It can be used for
communication over the internet as well as for private networks.

The four layers are application layer, transport layer, internet layer and network access
layer, as depicted in the following diagram −

What is OSI Model? – Layers of OSI Model

OSI stands for Open Systems Interconnection, where "open" means non-proprietary. It is a 7-layer architecture with each layer having specific functionality to perform. All these 7 layers work collaboratively to transmit data from one person to another across the globe. The OSI reference model was developed by the ISO, the International Organization for Standardization, in the year 1984.
The OSI model provides a theoretical foundation for understanding network
communication. However, it is usually not directly implemented in its entirety in real-
world networking hardware or software. Instead, specific protocols and
technologies are often designed based on the principles outlined in the OSI model to
facilitate efficient data transmission and networking operations.
What is OSI Model?
The OSI model, created in 1984 by ISO, is a reference framework that explains the
process of transmitting data between computers. It is divided into seven layers that
work together to carry out specialised network functions, allowing for a more
systematic approach to networking.
[Figure: OSI Model]
Data Flow In OSI Model
When we transfer information from one device to another, it travels through 7 layers of
OSI model. First data travels down through 7 layers from the sender’s end and then
climbs back 7 layers on the receiver’s end.
Data flows through the OSI model in a step-by-step process:
● Application Layer: Applications create the data.
● Presentation Layer: Data is formatted and encrypted.
● Session Layer: Connections are established and managed.
● Transport Layer: Data is broken into segments for reliable delivery.
● Network Layer: Segments are packaged into packets and routed.
● Data Link Layer: Packets are framed and sent to the next device.
● Physical Layer: Frames are converted into bits and transmitted physically.
Each layer adds specific information to ensure the data reaches its destination correctly,
and these steps are reversed upon arrival.

Let’s look at it with an Example:


Luffy sends an e-mail to his friend Zoro.
Step 1: Luffy interacts with e-mail application like Gmail, outlook, etc. Writes his email
to send. (This happens in Layer 7: Application layer)
Step 2: Mail application prepares for data transmission like encrypting data and
formatting it for transmission. (This happens in Layer 6: Presentation Layer)
Step 3: There is a connection established between the sender and receiver on the
internet. (This happens in Layer 5: Session Layer)
Step 4: Email data is broken into smaller segments. It adds sequence number and
error-checking information to maintain the reliability of the information. (This happens in
Layer 4: Transport Layer)
Step 5: Addressing of packets is done in order to find the best route for transfer. (This
happens in Layer 3: Network Layer)
Step 6: Data packets are encapsulated into frames, then MAC address is added for
local devices and then it checks for error using error detection. (This happens in Layer
2: Data Link Layer)
Step 7: Lastly, frames are transmitted in the form of electrical/optical signals over a physical network medium like an Ethernet cable or WiFi. (This happens in Layer 1: Physical Layer)
After the email reaches the receiver i.e. Zoro, the process will reverse and decrypt the
e-mail content. At last, the email will be shown on Zoro’s email client.
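The layered hand-off in this e-mail example can be mimicked with a toy encapsulation sketch in Python: each "layer" simply wraps the data with its own header on the way down and strips it on the way up. The header names, addresses, and port numbers are illustrative assumptions, not real protocol formats.

```python
# Toy model of OSI-style encapsulation and decapsulation (illustrative only).
def send(data):
    segment = {"src_port": 49152, "dst_port": 25, "payload": data}                  # Transport: ports
    packet  = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "payload": segment}      # Network: IP addresses
    frame   = {"src_mac": "AA:AA:AA:AA:AA:01",
               "dst_mac": "BB:BB:BB:BB:BB:02", "payload": packet}                   # Data link: MAC addresses
    return frame

def receive(frame):
    packet  = frame["payload"]      # strip the data-link header
    segment = packet["payload"]     # strip the network header
    return segment["payload"]       # strip the transport header -> original data

frame = send("Hello Zoro")
print(receive(frame))               # -> "Hello Zoro"
```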
What Are The 7 Layers of The OSI Model?
The OSI model consists of seven abstraction layers, listed here from the lowest (Layer 1) to the highest (Layer 7):
1. Physical Layer
2. Data Link Layer
3. Network Layer
4. Transport Layer
5. Session Layer
6. Presentation Layer
7. Application Layer
Physical Layer – Layer 1
The lowest layer of the OSI reference model is the physical layer. It is responsible for
the actual physical connection between the devices. The physical layer contains
information in the form of bits. It is responsible for transmitting individual bits from one
node to the next. When receiving data, this layer will get the signal received and convert
it into 0s and 1s and send them to the Data Link layer, which will put the frame back
together.
Functions of the Physical Layer

● Bit Synchronization: The physical layer provides the synchronization of the bits by
providing a clock. This clock controls both sender and receiver thus providing
synchronization at the bit level.
● Bit Rate Control: The physical layer also defines the transmission rate, i.e. the number of bits sent per second (see the short calculation after this list).
● Physical Topologies: The physical layer specifies how the different devices/nodes are arranged in a network, i.e. bus, star, or mesh topology.
● Transmission Mode: Physical layer also defines how the data flows between the
two connected devices. The various transmission modes possible are Simplex, half-
duplex and full-duplex.
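As a quick illustration of what the bit-rate figure means in practice, the short calculation below estimates how long it takes to push a 1 MB file onto a 100 Mbps link. The file size and link speed are assumed example values, and propagation delay and protocol overhead are ignored.

```python
# Transmission delay = data size / bit rate (propagation delay and overhead ignored).
file_size_bits = 1 * 10**6 * 8      # 1 MB expressed in bits
bit_rate_bps   = 100 * 10**6        # 100 Mbps link

delay_seconds = file_size_bits / bit_rate_bps
print(f"{delay_seconds:.2f} s")     # 0.08 s
```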
Note:
● Hub, Repeater, Modem, and Cables are Physical Layer devices.
● Network Layer, Data Link Layer, and Physical Layer are also known as Lower
Layers or Hardware Layers.
Data Link Layer (DLL) – Layer 2
The data link layer is responsible for the node-to-node delivery of the message. The
main function of this layer is to make sure data transfer is error-free from one node to
another, over the physical layer. When a packet arrives in a network, it is the
responsibility of the DLL to transmit it to the Host using its MAC address.
The Data Link Layer is divided into two sublayers:
● Logical Link Control (LLC)
● Media Access Control (MAC)
The packet received from the Network layer is further divided into frames depending on
the frame size of the NIC(Network Interface Card). DLL also encapsulates Sender and
Receiver’s MAC address in the header.
The Receiver’s MAC address is obtained by placing an ARP(Address Resolution
Protocol) request onto the wire asking “Who has that IP address?” and the destination
host will reply with its MAC address.
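The ARP exchange can be pictured as a small lookup table kept by each host: if the destination IP is already cached, the MAC is returned immediately; otherwise a broadcast request is (conceptually) sent and the reply is cached for next time. The sketch below is a simplified simulation with made-up addresses, not a real ARP implementation and no packets are actually sent.

```python
# Simplified simulation of an ARP cache lookup (made-up addresses, illustrative only).
arp_cache = {"192.168.1.1": "AA:BB:CC:DD:EE:01"}

def resolve(ip, network_hosts):
    if ip in arp_cache:                      # cache hit: no broadcast needed
        return arp_cache[ip]
    # Cache miss: broadcast "Who has <ip>?" and let the owner reply with its MAC.
    mac = network_hosts.get(ip)              # stands in for the broadcast request + reply
    if mac is not None:
        arp_cache[ip] = mac                  # remember the answer for next time
    return mac

hosts = {"192.168.1.7": "AA:BB:CC:DD:EE:07"}
print(resolve("192.168.1.7", hosts))         # miss -> "broadcast" -> AA:BB:CC:DD:EE:07
print(arp_cache)                             # the reply is now cached
```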
Functions of the Data Link Layer

● Framing: Framing is a function of the data link layer. It provides a way for a sender
to transmit a set of bits that are meaningful to the receiver. This can be
accomplished by attaching special bit patterns to the beginning and end of the
frame.
● Physical Addressing: After creating frames, the Data link layer adds physical
addresses (MAC addresses) of the sender and/or receiver in the header of each
frame.
● Error Control: The data link layer provides a mechanism of error control in which it detects and retransmits damaged or lost frames (see the checksum sketch after this list).
● Flow Control: The data rate must be matched on both sides, otherwise the data may get lost or corrupted; thus, flow control coordinates the amount of data that can be sent before receiving an acknowledgment.
● Access Control: When a single communication channel is shared by multiple
devices, the MAC sub-layer of the data link layer helps to determine which device
has control over the channel at a given time.
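One common way the error-control idea mentioned above is realised is with a checksum carried alongside the data; the sketch below uses the 16-bit ones'-complement "Internet checksum" style of calculation purely to illustrate how the receiver can detect corruption. Real data-link layers typically use a CRC instead, which is omitted here for brevity, and the payload is an arbitrary example.

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement checksum (the style used by IP/TCP/UDP headers)."""
    if len(data) % 2:               # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:              # fold any carry bits back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

payload = b"hello zoro"
checksum = internet_checksum(payload)

# Receiver side: recompute over payload plus the transmitted checksum;
# a result of 0 means "no error detected".
received = payload + checksum.to_bytes(2, "big")
print(internet_checksum(received) == 0)   # True when the data arrived intact
```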

Note:
● Packet in the Data Link layer is referred to as Frame.
● Data Link layer is handled by the NIC (Network Interface Card) and device drivers of
host machines.
● Switch & Bridge are Data Link Layer devices.
Network Layer – Layer 3
The network layer works for the transmission of data from one host to the other located
in different networks. It also takes care of packet routing i.e. selection of the shortest
path to transmit the packet, from the number of routes available. The sender &
receiver’s IP addresses are placed in the header by the network layer.

Functions of the Network Layer

● Routing: The network layer protocols determine which route is suitable from source to destination. This function of the network layer is known as routing (see the longest-prefix-match sketch after this list).
● Logical Addressing: To identify each device inter-network uniquely, the network
layer defines an addressing scheme. The sender & receiver’s IP addresses are
placed in the header by the network layer. Such an address distinguishes each
device uniquely and universally.
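The route-selection idea can be illustrated with Python's standard ipaddress module: a tiny routing table maps destination prefixes to next hops, and the most specific (longest) matching prefix wins. The prefixes and next-hop names below are made-up assumptions for the example, not a real router's configuration.

```python
import ipaddress

# Hypothetical routing table: destination prefix -> next hop.
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"):  "router-A",
    ipaddress.ip_network("10.1.0.0/16"): "router-B",
    ipaddress.ip_network("0.0.0.0/0"):   "default-gateway",
}

def next_hop(dst_ip: str) -> str:
    addr = ipaddress.ip_address(dst_ip)
    # Among all prefixes containing the address, pick the longest (most specific) one.
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(next_hop("10.1.2.3"))    # -> router-B (the /16 prefix beats the /8)
print(next_hop("8.8.8.8"))     # -> default-gateway
```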
Note:
● Segment in the Network layer is referred to as Packet.
● The network layer is implemented by networking devices such as routers (and layer-3 switches).
Transport Layer – Layer 4
The transport layer provides services to the application layer and takes services from
the network layer. The data in the transport layer is referred to as Segments. It is
responsible for the end-to-end delivery of the complete message. The transport layer
also provides the acknowledgment of the successful data transmission and re-transmits
the data if an error is found.
At the sender’s side: The transport layer receives the formatted data from the upper
layers, performs Segmentation, and also implements Flow and error control to
ensure proper data transmission. It also adds Source and Destination port numbers in
its header and forwards the segmented data to the Network Layer.
Note: The sender needs to know the port number associated with the receiver’s
application.
Generally, this destination port number is configured, either by default or manually. For
example, when a web application requests a web server, it typically uses port number
80, because this is the default port assigned to web applications. Many applications
have default ports assigned.
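The well-known-port idea can be seen with Python's socket module: the sketch below opens a TCP connection to port 80 of an example host and sends a minimal HTTP request, while the client's own source port is chosen automatically by the operating system. The host name is just an example and the snippet needs Internet access to run.

```python
import socket

# Connect to the well-known HTTP port (80); the OS picks our source port.
with socket.create_connection(("example.com", 80), timeout=5) as s:
    print("source port chosen by the OS:", s.getsockname()[1])
    s.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    print(s.recv(100))   # first bytes of the server's reply
```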
At the receiver’s side: Transport Layer reads the port number from its header and
forwards the Data which it has received to the respective application. It also performs
sequencing and reassembling of the segmented data.
Functions of the Transport Layer

● Segmentation and Reassembly: This layer accepts the message from the (session) layer and breaks the message into smaller units. Each of the segments produced has a header associated with it. The transport layer at the destination station reassembles the message (see the sketch after this list).
● Service Point Addressing: To deliver the message to the correct process, the
transport layer header includes a type of address called service point address or port
address. Thus by specifying this address, the transport layer makes sure that the
message is delivered to the correct process.
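A bare-bones picture of segmentation and reassembly: the message is cut into fixed-size pieces, each tagged with a sequence number, and the receiver sorts them back into order even if they arrive shuffled. The segment size and dictionary format are arbitrary choices for this sketch, not a real transport protocol.

```python
# Toy segmentation and reassembly using sequence numbers (illustrative only).
def segment(message: bytes, size: int = 8):
    return [{"seq": i, "data": message[i:i + size]}
            for i in range(0, len(message), size)]

def reassemble(segments):
    ordered = sorted(segments, key=lambda s: s["seq"])   # put segments back in order
    return b"".join(s["data"] for s in ordered)

msg = b"This message is longer than one segment."
segs = segment(msg)
segs.reverse()                        # pretend the network delivered them out of order
print(reassemble(segs) == msg)        # True: the original message is reconstructed
```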
Services Provided by Transport Layer
● Connection-Oriented Service
● Connectionless Service
1. Connection-Oriented Service: It is a three-phase process that includes:
● Connection Establishment
● Data Transfer
● Termination/disconnection
In this type of transmission, the receiving device sends an acknowledgment, back to the
source after a packet or group of packets is received. This type of transmission is
reliable and secure.
2. Connectionless service: It is a one-phase process and includes Data Transfer. In
this type of transmission, the receiver does not acknowledge receipt of a packet. This
approach allows for much faster communication between devices. Connection-oriented
service is more reliable than connectionless Service.
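The contrast can be made concrete with Python's socket module: the UDP sender below simply fires a datagram at the destination with no handshake and no acknowledgement, which is the connectionless behaviour described above. The address and port are placeholders, so nothing meaningful is received at the other end.

```python
import socket

# Connectionless (UDP) send: no handshake, no acknowledgement, just one datagram.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.sendto(b"hello", ("192.0.2.10", 9999))   # placeholder address and port
    # Nothing tells us whether the datagram arrived; reliability is up to the application.
```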
Note:
● Data in the Transport Layer is called Segments.
● The transport layer is operated by the operating system. It is a part of the OS and communicates with the Application Layer by making system calls.
● The transport layer is often called the heart of the OSI model.
● Device or Protocol Use: TCP, UDP, NetBIOS, PPTP.
Session Layer – Layer 5
This layer is responsible for the establishment of connection, maintenance of sessions,
and authentication, and also ensures security.

Functions of the Session Layer

● Session Establishment, Maintenance, and Termination: The layer allows the two
processes to establish, use, and terminate a connection.
● Synchronization: This layer allows a process to add checkpoints that are
considered synchronization points in the data. These synchronization points help to
identify the error so that the data is re-synchronized properly, and ends of the
messages are not cut prematurely and data loss is avoided.
● Dialog Controller: The session layer allows two systems to start communication
with each other in half-duplex or full-duplex.
Note:
● All the below 3 layers(including Session Layer) are integrated as a single layer in the
TCP/IP model as the “Application Layer”.
● Implementation of these 3 layers is done by the network application itself. These are
also known as Upper Layers or Software Layers.
● Device or Protocol Use : NetBIOS, PPTP.
Example
Let us consider a scenario where a user wants to send a message through some
Messenger application running in their browser. The “Messenger” here acts as the
application layer which provides the user with an interface to create the data. This
message or so-called Data is compressed, optionally encrypted (if the data is sensitive),
and converted into bits (0’s and 1’s) so that it can be transmitted.

[Figure: Communication in Session Layer]
Presentation Layer – Layer 6
The presentation layer is also called the Translation layer. The data from the
application layer is extracted here and manipulated as per the required format to
transmit over the network.

Functions of the Presentation Layer

● Translation: For example, ASCII to EBCDIC.


● Encryption/ Decryption: Data encryption translates the data into another form or
code. The encrypted data is known as the ciphertext and the decrypted data is
known as plain text. A key value is used for encrypting as well as decrypting data.
● Compression: Reduces the number of bits that need to be transmitted on the network (see the sketch below).
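The compression function can be demonstrated with Python's standard zlib module. The exact byte counts depend on the input, and real presentation-layer stacks may use different algorithms; this is only a sketch of the idea.

```python
import zlib

text = b"AAAA BBBB AAAA BBBB AAAA BBBB AAAA BBBB"   # repetitive data compresses well
compressed = zlib.compress(text)

print(len(text), "->", len(compressed), "bytes on the wire")
print(zlib.decompress(compressed) == text)           # True: the round trip is lossless
```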
Note: Device or Protocol Use: JPEG, MPEG, GIF.
Application Layer – Layer 7
At the very top of the OSI Reference Model stack of layers, we find the Application layer
which is implemented by the network applications. These applications produce the data
to be transferred over the network. This layer also serves as a window for the
application services to access the network and for displaying the received information to
the user.
Example: Application – Browsers, Skype Messenger, etc.
Note: The application Layer is also called Desktop Layer.
Device or Protocol Use : SMTP.

Functions of the Application Layer

The main functions of the application layer are given below.


● Network Virtual Terminal(NVT): It allows a user to log on to a remote host.
● File Transfer Access and Management(FTAM): This application allows a user to
access files in a remote host, retrieve files in a remote host, and manage or
control files from a remote computer.
● Mail Services: Provide email service.
● Directory Services: This application provides distributed database sources
and access for global information about various objects and services.
Note: The OSI model acts as a reference model and is not implemented on the Internet
because of its late invention. The current model being used is the TCP/IP model.
OSI Model – Layer Architecture
● Layer 7 – Application Layer: Responsibility – Helps in identifying the client and synchronizing communication; Data Unit – Message; Device or Protocol – SMTP.
● Layer 6 – Presentation Layer: Responsibility – Data from the application layer is extracted and manipulated in the required format for transmission; Data Unit – Message; Device or Protocol – JPEG, MPEG, GIF.
● Layer 5 – Session Layer: Responsibility – Establishes connections, maintains sessions, ensures authentication and security; Data Unit – Message (or encrypted message); Device or Protocol – Gateway.
● Layer 4 – Transport Layer: Responsibility – Takes service from the Network Layer and provides it to the Application Layer; Data Unit – Segment; Device or Protocol – Firewall.
● Layer 3 – Network Layer: Responsibility – Transmission of data from one host to another, located in different networks; Data Unit – Packet; Device or Protocol – Router.
● Layer 2 – Data Link Layer: Responsibility – Node-to-node delivery of the message; Data Unit – Frame; Device or Protocol – Switch, Bridge.
● Layer 1 – Physical Layer: Responsibility – Establishing physical connections between devices; Data Unit – Bits; Device or Protocol – Hub, Repeater, Modem, Cables.

OSI vs TCP/IP Model


The TCP/IP protocol suite (Transmission Control Protocol/Internet Protocol) was created by the U.S. Department of Defense's Advanced Research Projects Agency (ARPA) in the 1970s.
Some key differences between the OSI model and the TCP/IP Model are:
● The TCP/IP model consists of 4 layers but the OSI model has 7 layers. Layers 5, 6 and 7 of the OSI model are combined into the Application Layer of the TCP/IP model, and OSI layers 1 and 2 are combined into the Network Access Layer of the TCP/IP model.
● The TCP/IP model is older than the OSI model, hence it is a foundational protocol
that defines how should data be transferred online.
● Compared to the OSI model, the TCP/IP model has less strict layer boundaries.
● All layers of the TCP/IP model are needed for data transmission but in the OSI
model, some applications can skip certain layers. Only layers 1,2 and 3 of the OSI
model are necessary for data transmission.

[Figure: OSI vs TCP/IP]
Why Does The OSI Model Matter?
Even though the modern Internet doesn’t strictly use the OSI Model (it uses a simpler
Internet protocol suite), the OSI Model is still very helpful for solving network problems.
Whether it’s one person having trouble getting their laptop online, or a website being
down for thousands of users, the OSI Model helps to identify the problem. If you can
narrow down the issue to one specific layer of the model, you can avoid a lot of
unnecessary work.
Imperva Application Security
Imperva security solutions protect your applications at different levels of the OSI model.
They use DDoS mitigation to secure the network layer and provide web application
firewall (WAF), bot management, and API security to protect the application layer.
To secure applications and networks across the OSI stack, Imperva offers multi-layered
protection to ensure websites and applications are always available, accessible, and
safe. The Imperva application security solution includes:
● DDoS Mitigation: Protects the network layer from Distributed Denial of Service
attacks.
● Web Application Firewall (WAF): Shields the application layer from threats.
● Bot Management: Prevents malicious bots from affecting the application.
● API Security: Secures APIs from various vulnerabilities and attacks.
Advantages of OSI Model
The OSI Model defines the communication of a computing system into 7 different
layers. Its advantages include:
● It divides network communication into 7 layers which makes it easier to understand
and troubleshoot.
● It standardizes network communications, as each layer has fixed functions and
protocols.
● Diagnosing network problems is easier with the OSI model.
● It is easier to improve with advancements as each layer can get updates separately.
Disadvantages of OSI Model
● Complexity: The OSI Model has seven layers, which can be complicated and hard
to understand for beginners.
● Not Practical: In real-life networking, most systems use a simpler model called the
Internet protocol suite (TCP/IP), so the OSI Model isn’t always directly applicable.
● Slow Adoption: When it was introduced, the OSI Model was not quickly adopted by
the industry, which preferred the simpler and already-established TCP/IP model.
● Overhead: Each layer in the OSI Model adds its own set of rules and operations,
which can make the process more time-consuming and less efficient.
● Theoretical: The OSI Model is more of a theoretical framework, meaning it’s great
for understanding concepts but not always practical for implementation.
Conclusion
In conclusion, the OSI (Open Systems Interconnection) model is a conceptual
framework that standardizes the functions of a telecommunication or computing system
into seven distinct layers: Physical, Data Link, Network, Transport, Session,
Presentation, and Application. Each layer has specific responsibilities and interacts with
the layers directly above and below it, ensuring seamless communication and data
exchange across diverse network environments. Understanding the OSI model helps in
troubleshooting network issues, designing robust network architectures, and facilitating
interoperability between different networking products and technologies.

THE PHYSICAL LAYER: guided transmission media, wireless transmission, the public switched telephone network, mobile telephone system

Physical Layer in OSI Model

The physical Layer is the bottom-most layer in the Open System Interconnection
(OSI) Model which is a physical and electrical representation of the system. It consists
of various network components such as power plugs, connectors, receivers, cable
types, etc. The physical layer sends data bits from one device(s) (like a computer) to
another device(s). The physical Layer defines the types of encoding (that is how the 0’s
and 1’s are encoded in a signal). The physical Layer is responsible for the
communication of the unstructured raw data streams over a physical medium.
Functions Performed by Physical Layer
The following are some important and basic functions that are performed by the
Physical Layer of the OSI Model –
● The physical layer maintains the data rate (how many bits a sender can send per
second).
● It performs the Synchronization of bits.
● It helps in Transmission Medium decisions (direction of data transfer).
● It helps in Physical Topology (Mesh, Star, Bus, Ring) decisions (Topology
through which we can connect the devices with each other).
● It helps in providing Physical Medium and Interface decisions.
● It provides two types of configurations: point-to-point configuration and multi-point configuration.
● It provides an interface between devices (like PCs or computers) and
transmission medium.
● It has a protocol data unit in bits.
● Hubs, Ethernet, etc. device is used in this layer.
● This layer comes under the category of Hardware Layers (since the hardware
layer is responsible for all the physical connection establishment and processing
too).
● It provides an important aspect called modulation, which is the process of converting data into radio waves by adding the information to an electrical or optical carrier signal.
● It also provides a switching mechanism wherein data packets can be forwarded from one port (sender port) to the intended destination port.

Physical Topologies
Physical Topology or Network Topology is the Geographical Representation of Linking
devices. Following are the four types of physical topology-
● Mesh Topology: In a mesh topology, every device has a dedicated point-to-point connection with every other device in the network. There is more security of data because each pair of devices communicates over its own dedicated link. Mesh topology is difficult to install because it is more complex and needs a large number of links (see the link-count calculation after this list).
● Star Topology: In a star topology, each device has a dedicated point-to-point connection with a central controller or hub. Star topology is easy to install and reconfigure compared to mesh topology, but it has no fault-tolerance mechanism: if the central hub fails, the whole network fails.
● Bus Topology: In a bus topology, multiple devices are connected through a
single cable that is known as backbone cable with the help of tap and drop lines.
It is less costly as compared to Mesh Topology and Star Topology. Re-
connection and Re-installation are difficult.
● Ring Topology: In a ring topology, each device is connected with repeaters in a circle-like ring, which is why it is called ring topology. In a ring topology, a device can send data only when it holds the token; without the token no device can send data, and the token is placed on the ring by a monitor station.
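To see why a full mesh quickly becomes hard to install, the short calculation below counts the dedicated links it needs: each of the n devices connects to the other n-1, and every link is shared by two devices, giving n(n-1)/2 links. The device counts are arbitrary example values.

```python
# Number of point-to-point links in a full mesh of n devices: n * (n - 1) / 2.
def mesh_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (4, 10, 50):
    print(n, "devices ->", mesh_links(n), "links")   # 6, 45 and 1225 links respectively
```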

Line Configuration

● Point-to-Point configuration: In a point-to-point configuration, there is a line (link) that is fully dedicated to carrying the data between two devices.
● Multi-Point configuration: In a Multi-Point configuration, there is a line (link)
through which multiple devices are connected.
Modes of Transmission Medium
● Simplex mode: In this mode, out of two devices, only one device can transmit
the data, and the other device can only receive the data. Example- Input from
keyboards, monitors, TV broadcasting, Radio broadcasting, etc.
● Half Duplex mode: In this mode, out of two devices, both devices can send and
receive the data but only one at a time not simultaneously. Examples- Walkie-
Talkie, Railway Track, etc.
● Full-Duplex mode: In this mode, both devices can send and receive the data
simultaneously. Examples- Telephone Systems, Chatting applications, etc.

Physical Layer Protocols Examples


Typically, a combination of hardware and software programming makes up the physical
layer. It consists of several protocols that control data transmissions on a network. The
following are some examples of Layer 1 protocols:
● 1000BASE-T Ethernet.
● 1000BASE-SX Ethernet.
● 100BASE-T Ethernet.
● Synchronous Digital Hierarchy (SDH) / Synchronous Optical Networking (SONET).
● Physical-layer variations of 802.11 (Wi-Fi).
● Bluetooth.
● Controller Area Network (CAN bus).
● Universal Serial Bus (USB).

Types of Transmission Media

Transmission media refer to the physical pathways through which data is transmitted
from one device to another within a network. These pathways can be wired or wireless.
The choice of medium depends on factors like distance, speed, and interference. In this
article, we will discuss the transmission media.
What is Transmission Media?
A transmission medium is a physical path between the transmitter and the receiver i.e. it
is the channel through which data is sent from one place to another. Transmission
Media is broadly classified into the following types:
Types of Transmission Media
Guided Media
Guided Media is also referred to as Wired or Bounded transmission media. Signals
being transmitted are directed and confined in a narrow pathway by using physical links.

Features:
● High Speed
● Secure
● Used for comparatively shorter distances
There are 3 major types of Guided Media:
Twisted Pair Cable
It consists of 2 separately insulated conductor wires wound about each other. Generally,
several such pairs are bundled together in a protective sheath. They are the most
widely used Transmission Media. Twisted Pair is of two types:
● Unshielded Twisted Pair (UTP): UTP consists of two insulated copper wires twisted around one another. The twisting itself helps block interference, so this cable does not depend on a physical shield for that purpose. It is used for telephone applications.
[Figure: Unshielded Twisted Pair]

Advantages of Unshielded Twisted Pair

● Least expensive
● Easy to install
● High-speed capacity

Disadvantages of Unshielded Twisted Pair

● Susceptible to external interference
● Lower capacity and performance in comparison to STP
● Short-distance transmission due to attenuation

Shielded Twisted Pair


Shielded Twisted Pair (STP): This type of cable consists of a special jacket (a copper
braid covering or a foil shield) to block external interference. It is used in fast-data-rate
Ethernet and in voice and data channels of telephone lines.
Advantages of Shielded Twisted Pair
● Better performance at a higher data rate in comparison to UTP
● Eliminates crosstalk
● Comparatively faster
Disadvantages of Shielded Twisted Pair
● Comparatively difficult to install and manufacture
● More expensive
● Bulky
Coaxial Cable
It has an outer plastic covering containing an insulation layer made of PVC or Teflon
and 2 parallel conductors each having a separate insulated protection cover. The
coaxial cable transmits information in two modes: Baseband mode(dedicated cable
bandwidth) and Broadband mode(cable bandwidth is split into separate ranges). Cable
TVs and analog television networks widely use Coaxial cables.

Advantages of Coaxial Cable

● Coaxial cables support high bandwidth.
● It is easy to install coaxial cables.
● Coaxial cables have better cut-through resistance, so they are more reliable and
durable.
● Less affected by noise, cross-talk, or electromagnetic interference.
● Coaxial cables support multiple channels.

Disadvantages of Coaxial Cable

● Coaxial cables are expensive.


● The coaxial cable must be grounded in order to prevent any crosstalk.
● As a Coaxial cable has multiple layers it is very bulky.
● There is a chance that an attacker breaks the coaxial cable and attaches a “t-joint” tap,
which compromises the security of the data.
Optical Fiber Cable
Optical Fibre Cable uses the concept of refraction of light through a core made up of
glass or plastic. The core is surrounded by a less dense glass or plastic covering called
the cladding. It is used for the transmission of large volumes of data. The cable can be
unidirectional or bidirectional. The WDM (Wavelength Division Multiplexer) supports two
modes, namely unidirectional and bidirectional mode.
Advantages of Optical Fibre Cable

● Increased capacity and bandwidth


● Lightweight
● Less signal attenuation
● Immunity to electromagnetic interference
● Resistance to corrosive materials

Disadvantages of Optical Fibre Cable

● Difficult to install and maintain


● High cost
● Fragile

Applications of Optical Fibre Cable

● Medical Purpose: Used in several types of medical instruments.


● Defence Purpose: Used in transmission of data in aerospace.
● For Communication: This is largely used in formation of internet cables.
● Industrial Purpose: Used for lighting purposes and safety measures in designing
the interior and exterior of automobiles.
Stripline
Stripline is a transverse electromagnetic (TEM) transmission line medium invented by
Robert M. Barrett of the Air Force Cambridge Research Centre in the 1950s. Stripline is
the earliest form of planar transmission line. It uses a conducting material to transmit
high-frequency waves, which is why it is also called a waveguide. This conducting material is
sandwiched between two layers of ground plane, which are usually shorted together to
provide EMI immunity.
Microstripline
In this, the conducting material is separated from the ground plane by a layer of
dielectric.
2. Unguided Media
It is also referred to as Wireless or Unbounded transmission media. No physical
medium is required for the transmission of electromagnetic signals.

Features of Unguided Media

● The signal is broadcasted through air


● Less Secure
● Used for larger distances
There are 3 types of Signals transmitted through unguided media:
Radio Waves
Radio waves are easy to generate and can penetrate through buildings. The sending
and receiving antennas need not be aligned. Frequency range: 3 kHz – 1 GHz. AM and
FM radios and cordless phones use radio waves for transmission.

Further Categorized as Terrestrial and Satellite.


Microwaves
It is a line-of-sight transmission, i.e. the sending and receiving antennas need to be
properly aligned with each other. The distance covered by the signal is directly
proportional to the height of the antenna. Frequency range: 1 GHz – 300 GHz.
Microwaves are majorly used for mobile phone communication and television distribution.

Microwave Transmission
Infrared
Infrared waves are used for very short distance communication. They cannot penetrate
through obstacles, which prevents interference between systems. Frequency
range: 300 GHz – 400 THz. Infrared is used in TV remotes, wireless mice, keyboards, printers,
etc.

Difference Between Radio Waves Vs Micro Waves Vs Infrared Waves


● Direction: Radio waves are omni-directional in nature. Microwaves are unidirectional in
nature. Infrared waves are unidirectional in nature.
● Penetration: At low frequency, radio waves can penetrate through solid objects and walls,
but at high frequency they bounce off the obstacle. At low frequency, microwaves can
penetrate through solid objects and walls; at high frequency they cannot penetrate.
Infrared waves cannot penetrate through any solid object or wall.
● Frequency range: Radio waves: 3 kHz to 1 GHz. Microwaves: 1 GHz to 300 GHz.
Infrared waves: 300 GHz to 400 THz.
● Security: Radio waves offer poor security. Microwaves offer medium security. Infrared
waves offer high security.
● Attenuation: Attenuation of radio waves is high. Attenuation of microwaves is variable.
Attenuation of infrared waves is low.
● Government licence: Some frequencies in the radio waves require a government licence
to use. Some frequencies in the microwaves require a government licence to use. There
is no need of a government licence to use infrared waves.
● Usage cost: Setup and usage cost of radio waves is moderate. Setup and usage cost of
microwaves is high. Usage cost of infrared waves is very low.
● Communication: Radio waves are used in long distance communication. Microwaves are
used in long distance communication. Infrared waves are not used in long distance
communication.

Factors Considered for Designing the Transmission Media


● Bandwidth: Assuming all other conditions remain constant, the greater a medium’s
bandwidth, the faster a signal’s data transmission rate.
● Transmission Impairment: Transmission Impairment occurs when the received
signal differs from the transmitted signal. Signal quality will be impacted as a result
of transmission impairment.
● Interference: Interference is defined as the process of disturbing a signal as it
travels over a communication medium with the addition of an undesired signal.
Causes of Transmission Impairment

● Attenuation – It means loss of energy. The strength of the signal decreases with
increasing distance, because energy is lost in overcoming the resistance of the
medium. Such a signal is known as an attenuated signal. Amplifiers are used to amplify the
attenuated signal, which gives the original signal back and compensates for this loss. (A
small numeric sketch of attenuation in decibels follows this list.)
● Distortion – It means a change in the form or shape of the signal. This is generally
seen in composite signals made up of different frequencies. Each frequency
component has its own propagation speed while travelling through a medium, so the
components are delayed by different amounts and arrive at the destination at different
times, which leads to distortion. Therefore, they have different phases at the receiver end
from what they had at the sender's end.
● Noise – The random or unwanted signal that mixes up with the original signal is
called noise. There are several types of noise, such as induced noise, crosstalk
noise, thermal noise and impulse noise, which may corrupt the signal.
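As a quick numeric illustration of attenuation, here is a minimal Python sketch using the usual decibel definition; the power values are hypothetical and chosen only for the example:

```python
import math

def attenuation_db(p_sent: float, p_received: float) -> float:
    """Attenuation in decibels between transmitted and received signal power."""
    return 10 * math.log10(p_sent / p_received)

# Hypothetical example: 10 mW sent, 5 mW received after travelling through the medium.
print(attenuation_db(10.0, 5.0))   # ~3.01 dB of loss
# An amplifier with roughly 3 dB of gain would compensate for this attenuation.
```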
Conclusion
In conclusion, transmission media are the fundamental pathways for data transmission in
networks, and they are classified as guided (wired) or unguided (wireless). Guided
media, such as twisted pair cables, coaxial cables, and optical fibers, provide secure,
fast, and dependable data transmission over short distances. Unguided media, such as
radio waves, microwaves, and infrared, provide wireless communication at various
distances, with security and attenuation trade-offs. The choice of transmission media is
determined by bandwidth, transmission impairment, and interference.

Wireless Transmission in Computer Network

Wireless transmission is a form of unguided media. Wireless communication involves


no physical link established between two or more devices, communicating wirelessly.
Wireless signals are spread over in the air and are received and interpreted by
appropriate antennas.

When an antenna is attached to the electrical circuit of a computer or wireless device, it
converts the digital data into wireless signals and spreads them within its frequency
range. The receptor on the other end receives these signals and converts them back to
digital data.

A small part of the electromagnetic spectrum can be used for wireless transmission.


Radio Transmission

Radio frequency is easy to generate and, because of its large wavelength, it can
penetrate through walls and structures alike. Radio waves can have wavelengths from 1
mm – 100,000 km and frequencies ranging from 3 Hz (Extremely Low Frequency) to
300 GHz (Extremely High Frequency). Radio frequencies are sub-divided into six
bands.

Radio waves at lower frequencies can travel through walls, whereas higher-frequency RF
travels in a straight line and bounces back. The power of low frequency waves decreases
sharply as they cover long distances. High frequency radio waves have more power.

Lower frequencies such as VLF, LF, MF bands can travel on the ground up to 1000
kilometers, over the earth’s surface.

Radio waves of high frequencies are prone to be absorbed by rain and other obstacles.
They make use of the ionosphere of the earth's atmosphere: high frequency radio waves
such as HF and VHF bands are spread upwards, and when they reach the ionosphere, they
are refracted back to the earth.
Microwave Transmission

Electromagnetic waves above 100 MHz tend to travel in a straight line and signals over
them can be sent by beaming those waves towards one particular station. Because
Microwaves travels in straight lines, both sender and receiver must be aligned to be
strictly in line-of-sight.

Microwaves can have wavelength ranging from 1 mm – 1 meter and frequency ranging
from 300 MHz to 300 GHz.

Microwave antennas concentrate the waves into a beam. As shown in the picture
above, multiple antennas can be aligned to reach farther. Microwaves have higher
frequencies and do not penetrate wall-like obstacles.

Microwave transmission depends highly upon the weather conditions and the frequency
it is using.
Infrared Transmission

Infrared waves lie between the visible light spectrum and microwaves. They have wavelengths
of 700 nm to 1 mm and frequencies ranging from 300 GHz to 430 THz.

Infrared waves are used for very short-range communication purposes such as a television
and its remote. Infrared travels in a straight line, hence it is directional by nature.
Because of its high frequency range, infrared cannot cross wall-like obstacles.

Light Transmission

The highest part of the electromagnetic spectrum which can be used for data transmission is
light, or optical signaling. This is achieved by means of LASER.

Because of the frequency light uses, it tends to travel strictly in a straight line. Hence the
sender and receiver must be in the line of sight. Because laser transmission is
unidirectional, a laser and a photo-detector need to be installed at both ends of the
communication. The laser beam is generally 1 mm wide, hence it is a work of precision to
align two distant receptors, each pointing to the laser source.

Laser works as Tx (transmitter) and photo-detectors works as Rx (receiver).

Lasers cannot penetrate obstacles such as walls, rain, and thick fog. Additionally, the laser
beam is distorted by wind, atmospheric temperature, or variations in temperature along the
path.

Laser transmission is safe for data as it is very difficult to tap a 1 mm wide laser beam
without interrupting the communication channel.

The Public Switched Telephone Networks

Public Switched Telephone Network (PSTN) is an agglomeration of an interconnected


network of telephone lines owned by both governments as well as commercial
organizations.

Properties of PSTN

● It is also known as Plain Old Telephone Service (POTS)


● It has evolved from the invention of telephone by Alexander Graham Bell.
● The individual networks can be owned by national government, regional
government or private telephone operators.
● Its main objective is to transmit human voice in a recognizable form.
● It is an aggregation of circuit-switched networks of the world.
● Originally, it was an entirely analog network laid with copper cables and switches.
● Presently, most parts of PSTN networks are digitized and comprise a wide
variety of communicating devices.
● The present PSTN comprises copper telephone lines, fibre optic cables,
communication satellites, microwave transmission links and undersea telephone
lines. It is also linked to the cellular networks.
● The interconnection between the different parts of the telephone system is done
by switching centres. This allows multiple telephone and cellular networks to
communicate with each other.
● Present telephone systems are tightly coupled with WANs (wide area networks)
and are used for both data and voice communications.
● The operation of PSTN networks follows the ITU-T standards.
Mobile Telephone System

The mobile telephone system is used for wide area voice and data communication. Cell
phones have gone through several generations; the first three, called 1G, 2G and 3G, are
characterised as follows:
1. Analog voice
2. Digital voice
3. Digital voice and data
These are explained as following below.
First generation (1G) Mobile Phones: Analog Voice – The early 1G systems used a single
large transmitter and had a single channel, used for both receiving and sending. If a user
wanted to talk, he had to push a button that enabled the transmitter and disabled
the receiver. Such systems were called push-to-talk systems, and they were installed in
the late 1950s. In the 1960s, IMTS (Improved Mobile Telephone System) was installed. It
also used a high-powered (20-watt) transmitter on top of a hill, but it had two
frequencies, one for sending and one for receiving, so the push-to-talk button was no longer
needed.

Second generation (2G) Mobile Phones: Digital Voice – The first generation of mobile
phones was analog, whereas the second generation is digital. It enabled new services such
as text messaging. There was no worldwide standardization during the second generation:
several different systems were developed and three have been deployed, with GSM (Global
System for Mobile Communications) being the dominant 2G system.

Third generation (3G) Mobile Phones: Digital Voice and Data – The first generation
was analog voice and the second generation was digital voice, but the third generation is
about digital voice and data. 3G mobile telephony is all about providing enough wireless
bandwidth to keep future users happy. The original Apple iPhone was marketed as this kind
of device, but it actually used an enhanced 2G network, i.e. 2.5G, which did not provide
enough data capacity to keep users happy.

Fourth generation (4G) Mobile Phones: Broadband Internet Access with Digital
Voice and Data – The fourth-generation mobile phone provides internet access along with
digital voice and digital data. It is faster than 3G phones. 4G phones are capable of
working like a computer, and 4G made cloud services usable. Even after decades, there are
still remote areas where a 4G network is not available.

Fifth generation (5G) Mobile Phones: Super Fast Connectivity and More Than 4G –
Fifth generation mobile phones are meant to provide super fast connectivity. 5G provides
superior performance with low latency and allows more devices to be connected than
4G. As the 4G network is still not available in all places, the 5G network will take time to
reach a comparable level of coverage.
Mobile telephone service (MTS) connects mobile radio telephones with other networks
like public switched telephone networks (PSTN), other mobile telephones and
communication systems like Internet.

Basic Mobile Communications System

Mobile phones and other mobile devices, called mobile stations are connected to base
stations. Communication between the mobile stations and the base stations are done by
wireless radio signals, which may be both data signals and voice signals. Each base
station has a coverage area around it, such that mobile stations within this area can
connect provided they have access permissions. Base stations contain transmitters and
receivers to convert radio signals to electrical signals and vice versa. Base stations
transmit the message in form of electrical signals to the mobile switching center (MSC).
MSCs are connected to other MSCs and public networks like PSTNs.

The system is diagrammatically shown as follows −


Summary of Generations of Mobile Phone Systems

1G (First Generation) − They were standards for analog voice mobile phone
communications.

2G(Second Generation) − They were standards for digital voice mobile phone
communications.

3G(Third Generation) − These standards were for communications in form of both


digital voice as well as digital data.

4G(Fourth Generation) − 4G standards provide mobile broadband internet access in


addition to digital voice and data.

5G (Fifth Generation) − It is the next step of mobile communication standards beyond
4G, which is currently under development.

After completion of Unit I, you should be able to answer the following questions

1. Differentiate between Computer Networks and Distributed Systems.


2. What are the applications of Computer Networks?
3. Explain OSI Model with its all layers.
4. Explain the classification of Computer Networks.
5. Explain TCP/IP Model with its all layers.
6. Explain different types of Network topologies
7. What are modes of transmission medium?
8. What are the Factors Considered for Designing the Transmission Media?
9. What are the types of Wireless Transmission in Computer Network?
10. What are various types of Transmission Media?
11. What are the Causes of Transmission Impairment?
12. What are the properties of PSTN?
13. What are the different generations of Mobile phones?
14. What are the main functions of data link layer?
15. What are types of error?
16. What are error detection techniques?
17. Compare PAN, LAN, CAN, MAN, WAN.
18. Draw a diagram of basic mobile communication system.
19. How error can be controlled?
20. What is wave interference?
21. What are the challenges for Computer Networks and Distributed Systems?
22. What are the advantages and disadvantages of Computer Networks?

Unit – II

The Data Link Layer

Design issues

The data link layer in the OSI (Open Systems Interconnection) Model lies between the
physical layer and the network layer. This layer converts the raw transmission facility
provided by the physical layer into a reliable and error-free link.

The main functions and the design issues of this layer are

● Providing services to the network layer


● Framing
● Error Control
● Flow Control
Services to the Network Layer

In the OSI Model, each layer uses the services of the layer below it and provides
services to the layer above it. The data link layer uses the services offered by the
physical layer. The primary function of this layer is to provide a well-defined service
interface to network layer above it.

The types of services provided can be of three types −

● Unacknowledged connectionless service


● Acknowledged connectionless service
● Acknowledged connection - oriented service

Unacknowledged and connectionless services.


● Here the sender machine sends independent frames without any
acknowledgement from the receiver.
● There is no logical connection established.
Acknowledged and connectionless services.
● There is no logical connection between sender and receiver established.
● Each frame is acknowledged by the receiver.
● If the frame didn’t reach the receiver in a specific time interval it has to be sent
again.
● It is very useful in wireless systems.
Acknowledged and connection-oriented services
● A logical connection is established between sender and receiver before data is
transferred.
● Each frame is numbered so the receiver can ensure that all frames have arrived,
each exactly once.

Framing

The data link layer encapsulates each data packet from the network layer into frames
that are then transmitted.

A frame has three parts, namely −

● Frame Header
● Payload field that contains the data packet from network layer
● Trailer
Error Control

The data link layer ensures error free link for data transmission. The issues it caters to
with respect to error control are −

● Dealing with transmission errors


● Sending acknowledgement frames in reliable connections
● Retransmitting lost frames
● Identifying duplicate frames and deleting them
● Controlling access to shared channels in case of broadcasting

Flow Control

The data link layer regulates flow control so that a fast sender does not drown a slow
receiver. When the sender sends frames at very high speeds, a slow receiver may not
be able to handle it. There will be frame losses even if the transmission is error-free.
The two common approaches for flow control are −

● Feedback based flow control


● Rate based flow control

Error detection and correction

Data-link layer uses error control techniques to ensure that frames, i.e. bit streams of
data, are transmitted from the source to the destination with a certain extent of
accuracy.

Errors

When bits are transmitted over the computer network, they are subject to corruption
due to interference and network problems. The corrupted bits lead to spurious data
being received by the destination and are called errors.
Types of Errors

Errors can be of three types, namely single bit errors, multiple bit errors, and burst
errors.

Single bit error − In the received frame, only one bit has been corrupted, i.e. either
changed from 0 to 1 or from 1 to 0.

Multiple bits error − In the received frame, more than one bit is corrupted.
Burst error − In the received frame, more than one consecutive bit is corrupted.

Error Control

Error control can be done in two ways

Error detection − Error detection involves checking whether any error has occurred or
not. The number of error bits and the type of error does not matter.

Error correction − Error correction involves ascertaining the exact number of bits that
have been corrupted and the location of the corrupted bits.

For both error detection and error correction, the sender needs to send some additional
bits along with the data bits. The receiver performs necessary checks based upon the
additional redundant bits. If it finds that the data is free from errors, it removes the
redundant bits before passing the message to the upper layers.

Error Detection Techniques

There are three main techniques for detecting errors in frames: Parity Check,
Checksum, and Cyclic Redundancy Check (CRC).
Parity Check

The parity check is done by adding an extra bit, called parity bit to the data to make a
number of 1s either even in case of even parity or odd in case of odd parity.

While creating a frame, the sender counts the number of 1s in it and adds the parity bit
in the following way

In case of even parity: if the number of 1s is even, then the parity bit value is 0. If the number
of 1s is odd, then the parity bit value is 1.

In case of odd parity: if the number of 1s is odd, then the parity bit value is 0. If the number
of 1s is even, then the parity bit value is 1.

On receiving a frame, the receiver counts the number of 1s in it. In case of even parity
check, if the count of 1s is even, the frame is accepted, otherwise, it is rejected. A
similar rule is adopted for odd parity check.

The parity check is suitable for single bit error detection only.
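To make the procedure concrete, here is a minimal Python sketch of even parity generation and checking; the bit strings are hypothetical example values, and this is an illustration of the idea rather than any standard library routine:

```python
def add_even_parity(bits: str) -> str:
    """Append a parity bit so that the total number of 1s is even."""
    parity = '1' if bits.count('1') % 2 else '0'
    return bits + parity

def check_even_parity(frame: str) -> bool:
    """Accept the frame only if the count of 1s (including the parity bit) is even."""
    return frame.count('1') % 2 == 0

frame = add_even_parity('1011001')      # four 1s -> parity bit '0'
print(frame, check_even_parity(frame))  # 10110010 True

corrupted = '10100010'                  # one data bit flipped
print(check_even_parity(corrupted))     # False -> single-bit error detected
```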
Checksum

In this error detection scheme, the following procedure is applied

Data is divided into fixed sized frames or segments.

The sender adds the segments using 1’s complement arithmetic to get the sum. It then
complements the sum to get the checksum and sends it along with the data frames.

The receiver adds the incoming segments along with the checksum using 1’s
complement arithmetic to get the sum and then complements it.

If the result is zero, the received frames are accepted; otherwise, they are discarded.
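A minimal Python sketch of this checksum procedure is shown below, assuming 8-bit segments and hypothetical data values; it illustrates 1's complement addition rather than the exact checksum format of any particular protocol:

```python
def ones_complement_sum(segments, width=8):
    """Add segments using 1's complement arithmetic (wrap carries back into the sum)."""
    mask = (1 << width) - 1
    total = 0
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> width)  # fold any carry back in
    return total

def make_checksum(segments, width=8):
    """The checksum is the 1's complement of the 1's complement sum."""
    return ones_complement_sum(segments, width) ^ ((1 << width) - 1)

data = [0b10011001, 0b11100010]          # hypothetical 8-bit segments
checksum = make_checksum(data)

# The receiver adds all segments plus the checksum; a sum of all 1s
# (whose complement is zero) means the frames are accepted.
verify = ones_complement_sum(data + [checksum])
print(bin(checksum), verify == 0b11111111)   # True -> frames accepted
```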
Cyclic Redundancy Check (CRC)

Cyclic Redundancy Check (CRC) involves binary division of the data bits being sent by
a predetermined divisor agreed upon by the communicating system. The divisor is
generated using polynomials.

Here, the sender performs binary division of the data segment by the divisor. It then
appends the remainder called CRC bits to the end of the data segment. This makes the
resulting data unit exactly divisible by the divisor.

The receiver divides the incoming data unit by the divisor. If there is no remainder, the
data unit is assumed to be correct and is accepted. Otherwise, it is understood that the
data is corrupted and is therefore rejected.
Error Correction Techniques

Error correction techniques find out the exact number of bits that have been corrupted
as well as their locations. There are two principal ways −

Backward Error Correction (Retransmission) − If the receiver detects an error in the


incoming frame, it requests the sender to retransmit the frame. It is a relatively simple
technique. But it can be efficiently used only where retransmitting is not expensive as in
fiber optics and the time for retransmission is low relative to the requirements of the
application.

Forward Error Correction − If the receiver detects some error in the incoming frame, it
executes error-correcting code that generates the actual frame. This saves bandwidth
required for retransmission. It is inevitable in real-time systems. However, if there are
too many errors, the frames need to be retransmitted.

The four main error correction codes are

Hamming Codes
Binary Convolution Code
Reed – Solomon Code
Low-Density Parity-Check Code

Elementary data link protocols

Protocols in the data link layer are designed so that this layer can perform its
basic functions: framing, error control and flow control. Framing is the process of
dividing bit streams from the physical layer into data frames whose size ranges
from a few hundred to a few thousand bytes. Error control mechanisms deal
with transmission errors and retransmission of corrupted and lost frames. Flow
control regulates the speed of delivery so that a fast sender does not drown a
slow receiver.

Types of Data Link Protocols


Data link protocols can be broadly divided into two categories, depending on
whether the transmission channel is noiseless or noisy.

Simplex Protocol
The Simplex protocol is a hypothetical protocol designed for unidirectional data
transmission over an ideal channel, i.e. a channel through which transmission
can never go wrong. It has distinct procedures for sender and receiver. The
sender simply sends all its data onto the channel as soon as the data are
available in its buffer. The receiver is assumed to process all incoming data
instantly. It is hypothetical since it does not handle flow control or error control.

Stop – and – Wait Protocol


Stop – and – Wait protocol is for noiseless channel too. It provides unidirectional
data transmission without any error control facilities. However, it provides for
flow control so that a fast sender does not drown a slow receiver. The receiver
has a finite buffer size with finite processing speed. The sender can send a
frame only when it has received indication from the receiver that it is available
for further data processing.

Stop – and – Wait ARQ


Stop – and – wait Automatic Repeat Request (Stop – and – Wait ARQ) is a
variation of the above protocol with added error control mechanisms,
appropriate for noisy channels. The sender keeps a copy of the sent frame. It
then waits for a finite time to receive a positive acknowledgement from receiver.
If the timer expires or a negative acknowledgement is received, the frame is
retransmitted. If a positive acknowledgement is received then the next frame is
sent.
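The sender-side behaviour can be sketched as below; the `send_frame` function is a hypothetical stand-in for an unreliable channel, so this is a simulation of the idea, not a real network implementation:

```python
import random

def send_frame(frame):
    """Hypothetical unreliable channel: returns an ACK 70% of the time, None on loss."""
    return f"ACK{frame}" if random.random() < 0.7 else None

def stop_and_wait(frames, max_retries=10):
    """Send frames one at a time; keep a copy and retransmit until a positive ACK arrives."""
    for frame in frames:
        for attempt in range(max_retries):
            ack = send_frame(frame)              # sender keeps 'frame' until acknowledged
            if ack == f"ACK{frame}":
                print(f"frame {frame}: acknowledged after {attempt + 1} attempt(s)")
                break                            # only now move on to the next frame
            print(f"frame {frame}: timeout or negative ACK, retransmitting")
        else:
            raise RuntimeError(f"frame {frame} not delivered after {max_retries} attempts")

stop_and_wait([0, 1, 2])
```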

Go – Back – N ARQ
Go – Back – N ARQ provides for sending multiple frames before receiving the
acknowledgement for the first frame. It uses the concept of sliding window, and
so is also called sliding window protocol. The frames are sequentially numbered
and a finite number of frames are sent. If the acknowledgement of a frame is
not received within the time period, all frames starting from that frame are
retransmitted.

Selective Repeat ARQ


This protocol also provides for sending multiple frames before receiving the
acknowledgement for the first frame. However, here only the erroneous or lost
frames are retransmitted, while the good frames are received and buffered.

Sliding Window Protocols

Sliding window protocols are data link layer protocols for reliable and
sequential delivery of data frames. The sliding window is also used
in Transmission Control Protocol.

In this protocol, multiple frames can be sent by a sender at a time before


receiving an acknowledgment from the receiver. The term sliding window refers
to the imaginary boxes to hold frames. Sliding window method is also known as
windowing.
Working Principle
In these protocols, the sender has a buffer called the sending window and the
receiver has a buffer called the receiving window.

The size of the sending window determines the sequence numbers of the
outbound frames. If the sequence number of the frames is an n-bit field, then
the range of sequence numbers that can be assigned is 0 to 2^n − 1, and so the
size of the sending window is 2^n − 1. Thus, in order to accommodate a sending
window of size 2^n − 1, an n-bit sequence number is chosen.

The sequence numbers are numbered modulo 2^n. For example, if the sending
window size is 4, then the sequence numbers will be 0, 1, 2, 3, 0, 1, 2, 3, 0, 1,
and so on. Here the number of bits in the sequence number is 2, generating the
binary sequence 00, 01, 10, 11.

The size of the receiving window is the maximum number of frames that the
receiver can accept at a time. It determines the maximum number of frames
that the sender can send before receiving acknowledgment.

Example
Suppose that we have sender window and receiver window each of size 4. So
the sequence numbering of both the windows will be 0,1,2,3,0,1,2 and so on.
The following diagram shows the positions of the windows after sending the
frames and receiving acknowledgments.
Types of Sliding Window Protocols
The Sliding Window ARQ (Automatic Repeat reQuest) protocols are of two
categories −
● Go – Back – N ARQ
Go – Back – N ARQ provides for sending multiple frames before receiving
the acknowledgment for the first frame. It uses the concept of sliding
window, and so is also called sliding window protocol. The frames are
sequentially numbered and a finite number of frames are sent. If the
acknowledgment of a frame is not received within the time period, all
frames starting from that frame are retransmitted.
● Selective Repeat ARQ
This protocol also provides for sending multiple frames before receiving
the acknowledgment for the first frame. However, here only the erroneous
or lost frames are retransmitted, while the good frames are received and
buffered.

Block coding
Block coding is a technique used in computer networks to improve
the reliability and efficiency of data transmission. It involves dividing
the data into blocks or packets and adding extra bits, known as
error-correction codes, to each block. These codes can be used to
detect and correct errors that may occur during transmission.
There are several different types of block codes, including linear
block codes, cyclic codes, and convolutional codes. Linear block
codes are the most commonly used, and they work by adding parity
bits to the data blocks. Cyclic codes are similar to linear block codes,
but they use a different algorithm to generate the error-correction
codes. Convolutional codes are a type of error-correcting code that
uses a sliding window to encode the data.
Block coding is an important technique in computer networks, as it
helps to ensure that the data being transmitted is accurate and
complete. It is used in a variety of applications, including data
storage, satellite communication, and wireless networks.
Hamming Distance
Hamming distance is a metric for comparing two binary data strings. While
comparing two binary strings of equal length, Hamming distance is the number
of bit positions in which the two bits are different.

The Hamming distance between two strings, a and b is denoted as d(a,b).

It is used for error detection or error correction when data is transmitted
over computer networks. It is also used in coding theory for comparing equal-
length data words.

Calculation of Hamming Distance


In order to calculate the Hamming distance between two strings, a and b, we
perform their XOR operation, (a ⊕ b), and then count the total number of 1s in
the resultant string.

Example
Suppose there are two strings 1101 1001 and 1001 1101.

11011001 ⊕ 10011101 = 01000100. Since, this contains two 1s, the Hamming
distance, d(11011001, 10011101) = 2.

Minimum Hamming Distance


In a set of strings of equal lengths, the minimum Hamming distance is the
smallest Hamming distance between all possible pairs of strings in that set.

Example
Suppose there are four strings 010, 011, 101 and 111.

010 ⊕ 011 = 001, d(010, 011) = 1.

010 ⊕ 101 = 111, d(010, 101) = 3.

010 ⊕ 111 = 101, d(010, 111) = 2.


011 ⊕ 101 = 110, d(011, 101) = 2.

011 ⊕ 111 = 100, d(011, 111) = 1.

101 ⊕ 111 = 010, d(101, 111) = 1.

Hence, the Minimum Hamming Distance, dmin = 1.
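A minimal Python sketch that reproduces the calculations above (assuming equal-length binary strings):

```python
from itertools import combinations

def hamming_distance(a: str, b: str) -> int:
    """Number of bit positions where the two equal-length strings differ (XOR, then count 1s)."""
    assert len(a) == len(b), "strings must be of equal length"
    return sum(bit_a != bit_b for bit_a, bit_b in zip(a, b))

print(hamming_distance("11011001", "10011101"))   # 2

codewords = ["010", "011", "101", "111"]
d_min = min(hamming_distance(x, y) for x, y in combinations(codewords, 2))
print(d_min)                                      # minimum Hamming distance = 1
```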

CRC

Cyclic Redundancy Check and Modulo-2


Division

CRC or Cyclic Redundancy Check is a method of detecting accidental


changes/errors in the communication channel.
CRC uses a Generator Polynomial which is available on both the sender and
receiver side. An example generator polynomial is of the form x^3 + x + 1.
This generator polynomial represents the key 1011. Another example is x^2 + 1, which
represents the key 101.
n : Number of bits in data to be sent
from sender side.
k : Number of bits in the key obtained
from generator polynomial.
Sender Side (Generation of Encoded Data from Data and Generator
Polynomial (or Key)):
1. The binary data is first augmented by adding k-1 zeros at the end of the
data.
2. Use modulo-2 binary division to divide binary data by the key and store
remainder of division.
3. Append the remainder at the end of the data to form the encoded data and
send the same
Receiver Side (Check if there are errors introduced in transmission)
Perform modulo-2 division again and if the remainder is 0, then there are no
errors.
In this article we will focus only on finding the remainder i.e. check word and
the code word.
Modulo-2 Division:
The process of modulo-2 binary division is the same as the familiar division
process we use for decimal numbers, except that instead of subtraction we use
XOR here.
● In each step, a copy of the divisor (the key) is XORed with the leading k bits of the
dividend (the augmented data).
● The result of the XOR operation (the running remainder) is (k-1) bits, which is used for
the next step after 1 extra bit is pulled down to make it k bits long.
● When there are no bits left to pull down, we have the result: the (k-1)-bit
remainder, which is appended to the data at the sender side.
Illustration:
Example 1 (No error in transmission):
Data word to be sent - 100100
Key - 1101 [or generator polynomial x^3 + x^2 + 1]

Sender Side:
Therefore, the remainder is 001 and hence the encoded
data sent is 100100001.

Receiver Side:
Code word received at the receiver side 100100001

Therefore, the remainder is all zeros. Hence, the


data received has no error.

Example 2: (Error in transmission)


Data word to be sent - 100100
Key - 1101

Sender Side:
Therefore, the remainder is 001 and hence the
code word sent is 100100001.

Receiver Side
Let there be an error in the transmission medium.
Code word received at the receiver side - 100000001
Since the remainder is not all zeroes, the error
is detected at the receiver side.
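The modulo-2 division described above can be sketched in a few lines of Python; this reproduces Example 1 and the error case, using simple string arithmetic for clarity rather than an optimised CRC implementation:

```python
def mod2_div(dividend: str, divisor: str) -> str:
    """Modulo-2 binary division: XOR instead of subtraction; returns the (k-1)-bit remainder."""
    k = len(divisor)
    rem = list(dividend[:k])
    for i in range(k, len(dividend) + 1):
        if rem[0] == '1':                         # only divide when the leading bit is 1
            rem = [str(int(a) ^ int(b)) for a, b in zip(rem, divisor)]
        rem.pop(0)                                # drop the leading bit
        if i < len(dividend):
            rem.append(dividend[i])               # pull the next bit down
    return ''.join(rem)

def crc_encode(data: str, key: str) -> str:
    """Append k-1 zeros, divide by the key, and append the remainder to the data."""
    remainder = mod2_div(data + '0' * (len(key) - 1), key)
    return data + remainder

codeword = crc_encode('100100', '1101')
print(codeword)                                   # 100100001 (remainder 001)
print(mod2_div(codeword, '1101'))                 # 000 -> no error detected
print(mod2_div('100000001', '1101'))              # non-zero -> error detected
```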

• Flow Control and Error control protocols - Stop and Wait, Go-back–N
ARQ, Selective Repeat ARQ

Difference Between Flow Control and Error


Control
Flow control ensures efficient data transmission, while error
control guarantees data integrity.
● Purpose: Flow control is meant only for the transmission of data from sender to
receiver. Error control is meant for the transmission of error-free data from sender
to receiver.
● Approaches: For flow control there are two approaches: feedback-based flow control
and rate-based flow control. To detect errors in data, the approaches are checksum,
cyclic redundancy check and parity checking; to correct errors in data, the approaches
are Hamming codes, binary convolution codes, Reed-Solomon codes and low-density
parity check codes.
● Effect: Flow control prevents the loss of data and avoids overrunning of receive
buffers. Error control is used to detect and correct errors that occur in the
transmitted data.
● Examples: Examples of flow control techniques are the Stop & Wait protocol and the
Sliding Window protocol. Examples of error control techniques are Stop & Wait ARQ
and Sliding Window ARQ (Go-Back-N ARQ, Selective Repeat ARQ).

Conclusion
Flow control and error control are two vital sub layers of the Data
Link Layer that assists in data communication to be smooth. Flow is
mainly concerned with the control of data flow rate to avoid
overloading of the receiver while error control is mainly concerned
with the identification and elimination of errors in the stream of
data.
Difference Between Go-Back-N and Selective Repeat
Protocol
● Retransmission: In the Go-Back-N protocol, if a sent frame is found suspect (lost or
damaged), all frames from that frame to the last frame transmitted are re-transmitted.
In the Selective Repeat protocol, only those frames that are found suspect are
re-transmitted.
● Sender window size: The sender window size of the Go-Back-N protocol is N; the
sender window size of the Selective Repeat protocol is also N.
● Receiver window size: The receiver window size of the Go-Back-N protocol is 1; the
receiver window size of the Selective Repeat protocol is N.
● Complexity: The Go-Back-N protocol is less complex; the Selective Repeat protocol
is more complex.
● Sorting: In the Go-Back-N protocol, neither the sender nor the receiver needs sorting.
In the Selective Repeat protocol, the receiver side needs sorting to re-order the frames.
● Acknowledgement: In the Go-Back-N protocol, the type of acknowledgement is
cumulative. In the Selective Repeat protocol, the type of acknowledgement is individual.
● Out-of-order packets: In the Go-Back-N protocol, out-of-order packets are NOT
accepted (they are discarded) and the entire window is re-transmitted. In the Selective
Repeat protocol, out-of-order packets are accepted and buffered.
● Corrupt packets: In the Go-Back-N protocol, if a corrupt packet is received, the entire
window is re-transmitted. In the Selective Repeat protocol, if a corrupt packet is
received, the receiver immediately sends a negative acknowledgement and hence only
the selected packet is retransmitted.
● Efficiency: The efficiency of the Go-Back-N protocol is N/(1+2*a); the efficiency of the
Selective Repeat protocol is also N/(1+2*a).

Conclusion
The main difference between these two protocols is that after a sent frame is found to be
lost or damaged, the Go-Back-N protocol re-transmits that frame and all frames sent
after it, whereas the Selective Repeat protocol re-transmits only the frame that is
damaged.

Important Features of the Protocol


To better understand the Go-Back-N Arq protocol, let’s look into some of the points that
affect the protocol in a network channel, which are:
● The frames shared in the protocol are sequence-numbered for better efficiency, to avoid
unnecessary retransmission of shared data, and to differentiate between the frames.

● The protocol is designed to share multiple frames at a time, with the receiver end,
before expecting any acknowledgment from it. This simultaneous exchange of data is
termed protocol pipelining.
● If the acknowledgment is not shared to the sender side within a certain time frame, all
the frames after the non-acknowledged frame are to be retransmitted to the receiver
side.

Moving on, let’s look into the working of the Go-Back-N ARQ protocol.

Working on the Protocol

The working of the Go-Back-N ARQ protocol involves applying the sliding window
method for the basis of sharing data, and the number of frames to be shared is decided
by the window size.

Then using the main points we discussed and the mentioned features, let’s discuss the
steps involved in the working of the protocol:
1. To begin with, the sender side will share the data frames simultaneously according to
the window size assigned, over to the receiver side, and wait for the acknowledgment.

2. After the receiver side receives the frames, it will consume the first frame and send the
acknowledgment for it to the sender side.

3. After the sender receives the acknowledgment for the first frame, the sender will
share the next frame with the receiver.
4. This exchange continues until, due to some external or internal interruption in the
network, the acknowledgment is not received by the sender side.

5. Then, the sender side will go back to the unacknowledged frame and retransmit that
frame, along with all the frames shared after that frame, to the receiver. This
represents the Go-Back-N ARQ protocol method (a small simulation sketch follows these steps).
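Below is a minimal Python sketch simulating this behaviour; the window size and the loss of frame 2 are hypothetical values chosen only for illustration:

```python
def go_back_n(num_frames: int, window: int, lost: set):
    """Simulate a Go-Back-N sender; frames whose numbers are in 'lost' are dropped once."""
    base = 0                                      # oldest unacknowledged frame
    while base < num_frames:
        # send every frame currently inside the window
        for seq in range(base, min(base + window, num_frames)):
            print(f"send frame {seq}")
        # the receiver acknowledges frames in order until the first lost one
        for seq in range(base, min(base + window, num_frames)):
            if seq in lost:
                lost.discard(seq)                 # assume the retransmission will succeed
                print(f"frame {seq} lost -> timeout, go back to frame {seq}")
                break
            print(f"ACK {seq}")
            base = seq + 1

go_back_n(num_frames=6, window=3, lost={2})
```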

Let's move on to some advantages and disadvantages of applying the Go-Back-N ARQ
protocol in the network.
Advantages and Disadvantages

Applying the Go-Back-N arq protocol has both advantages and disadvantages, some of
which are:

Advantages

● Multiple frames can be sent simultaneously to the receiver side.

● It increases the efficiency of the data transfer and gives more control over the flow of
frames.

● The time delay for sharing data frames is less.


Disadvantages

● The storage of data frames at the receiver side.

● Many frames have to be retransmitted when the acknowledgement is not received by the
sender end.


Selective Repeat ARQ


It is also known as Sliding Window Protocol and used for error detection and
control in the data link layer.

In the selective repeat, the sender sends several frames specified by a window
size even without the need to wait for individual acknowledgement from the
receiver as in Go-Back-N ARQ. In selective repeat protocol, the retransmitted
frame is received out of sequence.

In Selective Repeat ARQ only the lost or error frames are retransmitted, whereas
correct frames are received and buffered.

The receiver while keeping track of sequence numbers buffers the frames in
memory and sends NACK for only frames which are missing or damaged. The
sender will send/retransmit a packet for which NACK is received.
Example
Given below is an example of the Selective Repeat ARQ −

Explanation
Step 1 − Frame 0 sends from sender to receiver and set timer.

Step 2 − Without waiting for acknowledgement from the receiver another


frame, Frame1 is sent by sender by setting the timer for it.

Step 3 − In the same way frame2 is also sent to the receiver by setting the
timer without waiting for previous acknowledgement.

Step 4 − Whenever the sender receives ACK0 from the receiver within the frame 0
timer, the timer is stopped and the next frame, frame 3, is sent.

Step 5 − Whenever the sender receives ACK1 from the receiver within the
frame 1 timer, the timer is stopped and the next frame, frame 4, is sent.
Step 6 − If the sender doesn't receive ACK2 from the receiver within the
time slot, it declares a timeout for frame 2 and resends frame 2,
because it assumes that frame 2 may have been lost or damaged (a small sketch of this
selective retransmission follows).
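For contrast with Go-Back-N, here is a minimal Python sketch of the selective retransmission idea; the loss of frame 2 is a hypothetical example, and the sliding-window and timer details are omitted for brevity:

```python
def selective_repeat(num_frames: int, lost: set):
    """Simulate Selective Repeat: buffer out-of-order frames, retransmit only missing ones."""
    received = {}                                  # receiver buffer, keyed by sequence number
    for seq in range(num_frames):
        print(f"send frame {seq}")
        if seq in lost:
            lost.discard(seq)
            print(f"frame {seq} lost -> receiver sends NAK {seq}, only frame {seq} is resent")
        received[seq] = seq                        # the frame is buffered once it finally arrives
    print("delivered in order:", sorted(received))

selective_repeat(num_frames=5, lost={2})
```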

• Sliding Window

Piggybacking

Piggybacking is the technique of delaying outgoing acknowledgment


and attaching it to the next data packet.
When a data frame arrives, the receiver waits and does not send the
control frame (acknowledgment) back immediately. The receiver
waits until its network layer moves to the next data packet.
Acknowledgment is associated with this outgoing data frame. Thus
the acknowledgment travels along with the next data frame. This
technique in which the outgoing acknowledgment is delayed
temporarily is called Piggybacking.
As we can see in the figure, with piggybacking a single
message (ACK + DATA) travels over the wire in place of two separate
messages. Piggybacking improves the efficiency of bidirectional
protocols.
● If Host A has both acknowledgment and data, which it wants to
send, then the data frame will be sent with the ack field which
contains the sequence number of the frame.
● If Host A has only an acknowledgment to send, it will wait for
some time; if a data frame becomes available within that time, it
piggybacks the acknowledgment on it, otherwise it sends a separate ACK
frame.
● If Host A is left with only a data frame, it will attach the last
acknowledgment to it; Host A can also send a data frame with an ack
field containing no acknowledgment (the three cases are sketched in code below).
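The three cases can be summarised in a small Python sketch; the Frame fields and helper function are illustrative names, not part of any specific protocol, and the waiting period in the second case is omitted:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Frame:
    seq: Optional[int] = None     # sequence number of outgoing data, if any
    ack: Optional[int] = None     # piggybacked acknowledgement number, if any
    payload: str = ""

def next_frame(pending_data: Optional[str], seq: int, pending_ack: Optional[int]) -> Frame:
    """Build the next frame to send, piggybacking an ACK on data whenever possible."""
    if pending_data is not None and pending_ack is not None:
        return Frame(seq=seq, ack=pending_ack, payload=pending_data)   # data + piggybacked ACK
    if pending_data is not None:
        return Frame(seq=seq, payload=pending_data)                    # data only, no ACK carried
    return Frame(ack=pending_ack)                                      # ACK-only control frame

print(next_frame("hello", seq=3, pending_ack=7))   # Frame(seq=3, ack=7, payload='hello')
print(next_frame(None, seq=3, pending_ack=7))      # Frame(seq=None, ack=7, payload='')
```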

• THE MEDIUM ACCESS SUBLAYER: Channel allocation problem,
multiple access protocols
Assignment to Explore all the types of Multiple access Protocols

• Ethernet, Data Link Layer switching

A local Area Network (LAN) is a data communication network


connecting various terminals or computers within a building or
limited geographical area. The connection between the devices
could be wired or wireless. Ethernet, Token rings, and Wireless
LAN using IEEE 802.11 are examples of standard LAN technologies.
What is Ethernet?
Ethernet is the most widely used LAN technology and is defined
under IEEE standards 802.3. The reason behind its wide usability is
that Ethernet is easy to understand, implement, and maintain, and
allows low-cost network implementation. Also, Ethernet offers
flexibility in terms of the topologies that are allowed. Ethernet
generally uses a bus topology. Ethernet operates in two layers of
the OSI model: the physical layer and the data link layer. For
Ethernet, the protocol data unit is a frame, since we mainly deal with
the data link layer. In order to handle collisions, the access control mechanism
used in Ethernet is CSMA/CD (Carrier Sense Multiple Access with Collision Detection).
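As a small illustration of how CSMA/CD reacts to collisions, here is a minimal Python sketch of the binary exponential backoff rule; the 51.2-microsecond slot time is the classic value for 10 Mbps Ethernet, and the rest of the access method (carrier sensing, collision detection) is not modelled:

```python
import random

SLOT_TIME_US = 51.2   # classic 10 Mbps Ethernet slot time in microseconds

def backoff_delay(collisions: int) -> float:
    """After the n-th collision, wait a random number of slots in [0, 2^min(n,10) - 1]."""
    k = min(collisions, 10)
    slots = random.randint(0, 2 ** k - 1)
    return slots * SLOT_TIME_US

for n in range(1, 5):
    print(f"collision {n}: wait {backoff_delay(n):.1f} microseconds before retrying")
```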

Network switching is the process of forwarding data frames or packets from one
port to another leading to data transmission from source to destination. Data
link layer is the second layer of the Open System Interconnections (OSI) model
whose function is to divide the stream of bits from physical layer into data
frames and transmit the frames according to switching requirements. Switching
in data link layer is done by network devices called bridges.

Bridges
A data link layer bridge connects multiple LANs (local area networks) together to
form a larger LAN. This process of aggregating networks is called network
bridging. A bridge connects the different components so that they appear as
parts of a single network.

The following diagram shows connection by a bridge −


Switching by Bridges
When a data frame arrives at a particular port of a bridge, the bridge examines
the frame’s data link address, or more specifically, the MAC address. If the
destination address as well as the required switching is valid, the bridge sends
the frame to the destined port. Otherwise, the frame is discarded.

The bridge is not responsible for end to end data transfer. It is concerned with
transmitting the data frame from one hop to the next. Hence, it does not
examine the payload field of the frame, and because of this it can switch
any kind of packet from the network layer above.

Bridges also connect virtual LANs (VLANs) to make a larger VLAN.

If any segment of the bridged network is wireless, a wireless bridge is used to


perform the switching.

There are three main ways for bridging −

● simple bridging
● multi-port bridging
● learning or transparent bridging
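A minimal Python sketch of how a learning (transparent) bridge could build its MAC address table and decide whether to forward, flood, or filter a frame; the port numbers and MAC addresses are hypothetical:

```python
class LearningBridge:
    """Very small model of a transparent bridge: learn source ports, forward or flood frames."""

    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}                      # MAC address -> port it was last seen on

    def receive(self, frame_src: str, frame_dst: str, in_port: int):
        self.mac_table[frame_src] = in_port      # learn where the source lives
        out_port = self.mac_table.get(frame_dst)
        if out_port is None:                     # unknown destination: flood everywhere else
            return [p for p in self.ports if p != in_port]
        if out_port == in_port:                  # destination is on the same segment: filter
            return []
        return [out_port]                        # known destination: forward to that port only

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.receive("AA", "BB", in_port=1))     # BB unknown -> flood to ports [2, 3]
print(bridge.receive("BB", "AA", in_port=2))     # AA already learned on port 1 -> [1]
```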

Unit – III
The Network Layer

Network layer design issues



The Network layer is majorly focused on getting packets from the source to
the destination, routing, error handling, and congestion control. Before learning
about design issues in the network layer, let’s learn about its various
functions.
● Addressing: Maintains the address at the frame header of both source
and destination and performs addressing to detect various devices in the
network.
● Packetizing: This is performed by the Internet Protocol. The network layer
encapsulates the data received from its upper layer into packets.
● Routing: It is the most important functionality. The network layer chooses
the most relevant and best path for the data transmission from source to
destination.
● Inter-networking: It works to deliver a logical connection across multiple
devices.

Network Layer Design Issues


The network layer comes with some design issues that are described as
follows:
1. Store and Forward packet switching
The host sends the packet to the nearest router. The packet is stored there
until it has fully arrived and has been processed by verifying the
checksum; then it is forwarded to the next router, and so on until it reaches the destination.
This mechanism is called “Store and Forward packet switching.”
2. Services provided to the Transport Layer
Through the network/transport layer interface, the network layer provides
its services to the transport layer. These services are described
below. But before providing these services to the transport layer, the following
goals must be kept in mind:
● Offering services must not depend on router technology.
● The transport layer needs to be protected from the type, number, and
topology of the available router.
● The network addresses for the transport layer should use uniform
numbering patterns, also at LAN and WAN connections.
Based on the connections there are 2 types of services provided :
● Connectionless – The routing and insertion of packets into the subnet are
done individually. No added setup is required.
● Connection-Oriented – Subnet must offer reliable service and all the
packets must be transmitted over a single route.
3. Implementation of Connectionless Service
Packets are termed “datagrams” and the corresponding subnets “datagram
subnets”. When the size of the message to be transmitted is 4 times the
size of a packet, the network layer divides it into 4 packets and transmits
each packet to the router using some protocol. Each data packet has a
destination address and is routed independently of the other packets.
4. Implementation of Connection-Oriented service:
To use a connection-oriented service, first, we establish a connection, use it,
and then release it. In connection-oriented services, the data packets are
delivered to the receiver in the same order in which they have been sent by
the sender. It can be done in either two ways :
● Circuit Switched Connection – A dedicated physical path or a circuit is
established between the communicating nodes and then the data stream is
transferred.
● Virtual Circuit Switched Connection – The data stream is transferred
over a packet switched network, in such a way that it seems to the user
that there is a dedicated path from the sender to the receiver. A virtual path
is established here, while other connections may also be using the same
path.
Connection-less vs Connection-Oriented
Both Connection-less and Connection-Oriented are used for the connection
establishment between two or more devices. These types of services are
provided by the Network Layer.
Connection-oriented service: In connection-oriented service, we have to
establish a connection between sender and receiver before communication.
A handshake method is used to establish the connection between sender and
receiver. Connection-oriented service includes both a connection establishment
phase and a connection termination phase. A real-life example of this service is the
telephone service: for a conversation we have to first establish a connection.
Connection-Oriented Service

Connection-less service: In connection-less service there is no need for connection
establishment or connection termination. This service does not give a
guarantee of reliability. In this service, packets may follow different paths to
reach their destination. Real-life examples of this service are the postal system,
online gaming, and real-time video and audio streaming.

Routing Algorithms

Classification of Routing Algorithms


Routing is the process of establishing the routes that data packets must follow
to reach the destination. In this process, a routing table is created which
contains information regarding routes that data packets follow. Various routing
algorithms are used for the purpose of deciding which route an incoming data
packet needs to be transmitted on to reach the destination efficiently.
Classification of Routing Algorithms
The routing algorithms can be classified as follows:
1. Adaptive Algorithms
2. Non-Adaptive Algorithms
3. Hybrid Algorithms

Types of Routing Algorithm

Routing algorithms can be classified into various types such as distance


vector, link state, and hybrid routing algorithms. Each has its own strengths
and weaknesses depending on the network structure. A deeper understanding
of these classifications can significantly aid in mastering networking concepts.

1. Adaptive Algorithms
These are the algorithms that change their routing decisions whenever
network topology or traffic load changes. The changes in routing decisions are
reflected in the topology as well as the traffic of the network. Also known
as dynamic routing, these make use of dynamic information such as current
topology, load, delay, etc. to select routes. Optimization parameters are
distance, number of hops, and estimated transit time.
Further, these are classified as follows:
● Isolated: In this method each, node makes its routing decisions using the
information it has without seeking information from other nodes. The
sending nodes don’t have information about the status of a particular link.
The disadvantage is that packets may be sent through a congested
network which may result in delay. Examples: Hot potato routing, and
backward learning.
● Centralized: In this method, a centralized node has entire information
about the network and makes all the routing decisions. The advantage of
this is that only one node is required to keep the information of the entire
network, and the disadvantage is that if the central node goes down, the
entire network goes down. The link state algorithm is referred to as a
centralized algorithm since it is aware of the cost of each link in the
network.
● Distributed: In this method, the node receives information from its
neighbors and then takes the decision about routing the packets. A
disadvantage is that the packet may be delayed if there is a change in
between intervals in which it receives information and sends packets. It is
also known as a decentralized algorithm as it computes the least-cost path
between source and destination.
2. Non-Adaptive Algorithms
These are the algorithms that do not change their routing decisions once they
have been selected. This is also known as static routing as a route to be taken
is computed in advance and downloaded to routers when a router is booted.
Further, these are classified as follows:
● Flooding: This adapts the technique in which every incoming packet is
sent on every outgoing line except from which it arrived. One problem with
this is that packets may go in a loop and as a result of which a node may
receive duplicate packets. These problems can be overcome with the help
of sequence numbers, hop count, and spanning trees.
● Random walk: In this method, packets are sent host by host or node by
node to one of its neighbors randomly. This is a highly robust method that
is usually implemented by sending packets onto the link which is least
queued.
Random Walk

3. Hybrid Algorithms
As the name suggests, these algorithms are a combination of both adaptive
and non-adaptive algorithms. In this approach, the network is divided into
several regions, and each region uses a different algorithm.
Further, these are classified as follows:
● Link-state: In this method, each router creates a detailed and complete
map of the network which is then shared with all other routers. This allows
for more accurate and efficient routing decisions to be made.
● Distance vector: In this method, each router maintains a table that contains
information about the distance and direction to every other node in the
network. This table is then shared with other routers in the network. The
disadvantage of this method is that it may lead to routing loops.

Difference between Adaptive and Non-Adaptive


Routing Algorithms
The main difference between Adaptive and Non-Adaptive Algorithms is:
Adaptive Algorithms are the algorithms that change their routing decisions
whenever network topology or traffic load changes. It is called Dynamic
Routing. Adaptive Algorithm is used in a large amount of data, highly complex
network, and rerouting of data.
Non-Adaptive Algorithms are algorithms that do not change their routing
decisions once they have been selected. It is also called static Routing. Non-
Adaptive Algorithm is used in case of a small amount of data and a less
complex network.
Types of Routing Protocol in Computer Networks
1. Routing information protocol (RIP)
One of the earliest protocols developed is the Routing Information Protocol (RIP), an
interior gateway protocol. We can use it with local area networks (LANs), which are linked
computers within a short range, or wide area networks (WANs), which are telecom networks
that cover a big range. Hop counts are used by RIP to calculate the shortest path between
networks.
2. Interior gateway protocol (IGRP)
IGRP was developed by the multinational technology corporation Cisco. It
makes use of many of the core features of RIP but raises the maximum
number of supported hops to 100. It can therefore function better on larger
networks. IGRP is a distance-vector protocol. In order to work,
IGRP compares metrics such as load, reliability, and
network bandwidth. Additionally, this protocol updates automatically when things
change, such as the route. This aids in the prevention of routing loops, which
are mistakes that result in an unending data transfer cycle.
3. Exterior Gateway Protocol (EGP)
Exterior gateway protocols, such as EGP, are helpful for transferring data or
information between several gateway hosts in autonomous systems. In
particular, it aids in giving routers the room they need to exchange data
between domains, such as the internet.
4. Enhanced interior gateway routing protocol (EIGRP)
This kind is categorised as a classless, interior gateway, distance-vector
routing protocol. In order to maximise efficiency, it makes use of the Diffusing
Update Algorithm (DUAL) and the Reliable Transport Protocol. A router can use the
tables of other routers to obtain information and store it for later use. Every
router communicates with its neighbours when something changes so that
everyone is aware of which data paths are active. This stops routers from
miscommunicating with one another. (The only exterior gateway protocol in
common use is the Border Gateway Protocol (BGP), described below.)
5. Open shortest path first (OSPF)
OSPF is an interior gateway, link-state, classless protocol that makes use
of the shortest path first (SPF) algorithm to guarantee efficient data transfer.
It maintains multiple databases containing topology tables and details about
the network as a whole. Its link-state advertisements, which resemble reports,
provide thorough descriptions of a path's length (cost) and potential resource
requirements. When the topology changes, OSPF recalculates paths using the
Dijkstra algorithm. In order to guarantee that its data is safe from modification
or network intrusion, it also employs authentication procedures. Using OSPF
can be advantageous for both large and small network organisations because
of its scalability features.
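To make the shortest-path step concrete, here is a minimal Python sketch of Dijkstra's algorithm run over a small, made-up link-state topology (the router names and costs are illustrative only, not taken from any real OSPF deployment):

# A minimal sketch of the Dijkstra shortest-path computation that a
# link-state protocol such as OSPF runs over its copy of the topology.
import heapq

graph = {
    "R1": {"R2": 10, "R3": 5},
    "R2": {"R4": 1},
    "R3": {"R2": 3, "R4": 9},
    "R4": {},
}

def dijkstra(source):
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue                         # stale heap entry, skip it
        for neighbour, cost in graph[node].items():
            if d + cost < dist[neighbour]:
                dist[neighbour] = d + cost
                heapq.heappush(heap, (dist[neighbour], neighbour))
    return dist

print(dijkstra("R1"))   # {'R1': 0, 'R2': 8, 'R3': 5, 'R4': 9}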
6. Border gateway protocol (BGP)
Another kind of exterior gateway protocol, first created to take the place of
EGP, is BGP. It is a path-vector protocol (a variant of distance vector) since it
performs data packet transfers using a best-path selection technique. BGP defines
communication over the internet. The internet is a vast network of
interconnected autonomous systems. Every autonomous system has an
autonomous system number (ASN) that it receives by registering with the
Internet Assigned Numbers Authority.
Difference between Routing and Flooding
The difference between Routing and Flooding is listed below:
● Routing table: routing requires a routing table; flooding requires no routing table.
● Shortest path: routing may give the shortest path; flooding always gives the shortest path.
● Reliability: routing is less reliable; flooding is more reliable.
● Traffic: routing generates less traffic; flooding generates high traffic.
● Duplicate packets: routing produces no duplicate packets; flooding produces duplicate packets.
Congestion Control Algorithms
Congestion Control in Computer Networks
Congestion control is a crucial concept in computer networks. It refers to the
methods used to prevent network overload and ensure smooth data flow.
When too much data is sent through the network at once, it can cause delays
and data loss. Congestion control techniques help manage the traffic, so all
users can enjoy a stable and efficient network connection. These techniques
are essential for maintaining the performance and reliability of modern
networks.
What is Congestion?
Congestion in a computer network happens when there is too much data
being sent at the same time, causing the network to slow down. Just like
traffic congestion on a busy road, network congestion leads to delays and
sometimes data loss. When the network can’t handle all the incoming data, it
gets “clogged,” making it difficult for information to travel smoothly from one
place to another.
Effects of Congestion Control in a Computer Network
● Improved Network Stability: Congestion control helps keep the network
stable by preventing it from getting overloaded. It manages the flow of data
so the network doesn’t crash or fail due to too much traffic.
● Reduced Latency and Packet Loss: Without congestion control, data
transmission can slow down, causing delays and data loss. Congestion
control helps manage traffic better, reducing these delays and ensuring
fewer data packets are lost, making data transfer faster and the network
more responsive.
● Enhanced Throughput: By avoiding congestion, the network can use its
resources more effectively. This means more data can be sent in a shorter
time, which is important for handling large amounts of data and supporting
high-speed applications.
● Fairness in Resource Allocation: Congestion control ensures that
network resources are shared fairly among users. No single user or
application can take up all the bandwidth, allowing everyone to have a fair
share.
● Better User Experience: When data flows smoothly and quickly, users
have a better experience. Websites, online services, and applications work
more reliably and without annoying delays.
● Mitigation of Network Congestion Collapse: Without congestion control,
a sudden spike in data traffic can overwhelm the network, causing severe
congestion and making it almost unusable. Congestion control helps
prevent this by managing traffic efficiently and avoiding such critical
breakdowns.
Congestion Control Algorithm
● Congestion Control is a mechanism that controls the entry of data packets
into the network, enabling a better use of a shared network infrastructure
and avoiding congestive collapse.
● Congestion-avoidance algorithms (CAA) are implemented at the TCP
layer as the mechanism to avoid congestive collapse in a network.
● There are two congestion control algorithms which are as follows:
Leaky Bucket Algorithm
● The leaky bucket algorithm finds its use in the context of network
traffic shaping or rate-limiting.
● The leaky bucket and the token bucket are the two implementations
predominantly used for traffic shaping.
● This algorithm is used to control the rate at which traffic is sent to the
network and to shape bursty traffic into a steady traffic stream.
● A disadvantage of the leaky bucket algorithm is the
inefficient use of available network resources:
a large share of network resources such as bandwidth may not be used
effectively.
Let us consider an example to understand this. Imagine a bucket with a small hole
in the bottom. No matter at what rate water enters the bucket, the outflow is at a
constant rate. When the bucket is full of water, additional water entering
spills over the sides and is lost.
Similarly, each network interface contains a leaky bucket and the following
steps are involved in leaky bucket algorithm:
● When a host wants to send a packet, the packet is thrown into the bucket.
● The bucket leaks at a constant rate, meaning the network interface
transmits packets at a constant rate.
● Bursty traffic is converted to a uniform traffic by the leaky bucket.
● In practice the bucket is a finite queue that outputs at a finite rate.
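The steps above can be illustrated with a small Python sketch. The bucket size, leak rate, and arrival pattern below are arbitrary illustrative values, not parameters of any particular implementation:

# A minimal sketch of the leaky bucket: bursty arrivals go into a finite
# queue that is drained at a constant rate.
from collections import deque

def leaky_bucket(arrivals, bucket_size, leak_rate):
    """arrivals[t] = packets arriving at tick t; returns packets sent per tick."""
    bucket = deque()
    sent_per_tick = []
    for packets_in in arrivals:
        for _ in range(packets_in):
            if len(bucket) < bucket_size:
                bucket.append(1)          # packet queued in the bucket
            # else: bucket full, the extra packet is dropped (water spills over)
        sent = 0
        while bucket and sent < leak_rate:
            bucket.popleft()              # constant-rate outflow
            sent += 1
        sent_per_tick.append(sent)
    return sent_per_tick

# A burst of 8 packets followed by silence leaves the bucket at 2 per tick.
print(leaky_bucket([8, 0, 0, 0, 0], bucket_size=10, leak_rate=2))
# [2, 2, 2, 2, 0]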
Token Bucket Algorithm
● The leaky bucket algorithm has a rigid output design at an average rate
independent of the bursty traffic.
● In some applications, when large bursts arrive, the output is allowed to
speed up. This calls for a more flexible algorithm, preferably one that never
loses information. Therefore, a token bucket algorithm finds its uses in
network traffic shaping or rate-limiting.
● It is a control algorithm that indicates when traffic should be sent, based
on the presence of tokens in the bucket.
● The bucket contains tokens. Each token corresponds to a packet of
predetermined size. Tokens are removed from the bucket when a packet
is sent.
● When tokens are present, a flow is allowed to transmit traffic.
● No token means no flow can send its packets. Hence, a flow can transmit
traffic up to its peak burst rate only if there are enough tokens in the bucket.
Need of Token Bucket Algorithm
The leaky bucket algorithm enforces output pattern at the average rate, no
matter how bursty the traffic is. So in order to deal with the bursty traffic we
need a flexible algorithm so that the data is not lost. One such algorithm is
token bucket algorithm.
Steps of this algorithm can be described as follows:
● At regular intervals, tokens are thrown into the bucket.
● The bucket has a maximum capacity.
● If there is a ready packet, a token is removed from the bucket, and the
packet is sent.
● If there is no token in the bucket, the packet cannot be sent.
Let’s understand with an example, In figure (A) we see a bucket holding three
tokens, with five packets waiting to be transmitted. For a packet to be
transmitted, it must capture and destroy one token. In figure (B) We see that
three of the five packets have gotten through, but the other two are stuck
waiting for more tokens to be generated.
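A minimal Python sketch of this behaviour is given below; the bucket capacity, token rate, and packet arrivals are made-up values chosen only to mirror the figures described above:

# A minimal sketch of the token bucket: tokens accumulate at a fixed rate up
# to a maximum, and each transmitted packet consumes one token, so short
# bursts can be sent as long as tokens are available.
def token_bucket(arrivals, capacity, tokens_per_tick):
    """arrivals[t] = packets ready at tick t; returns packets sent per tick."""
    tokens = capacity                 # assume the bucket starts full
    waiting = 0
    sent_per_tick = []
    for packets_in in arrivals:
        tokens = min(capacity, tokens + tokens_per_tick)   # add new tokens
        waiting += packets_in
        sent = min(waiting, tokens)   # each packet captures (destroys) a token
        tokens -= sent
        waiting -= sent
        sent_per_tick.append(sent)
    return sent_per_tick

# Three stored tokens and five waiting packets: three go out at once and the
# other two wait for new tokens, as in figures (A) and (B) above.
print(token_bucket([5, 0, 0], capacity=3, tokens_per_tick=1))
# [3, 1, 1]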
Token Bucket vs Leaky Bucket
The leaky bucket algorithm controls the rate at which the packets are
introduced in the network, but it is very conservative in nature. Some flexibility
is introduced in the token bucket algorithm. In the token bucket algorithm,
tokens are generated at each tick (up to a certain limit). For an incoming
packet to be transmitted, it must capture a token, and the transmission then
takes place at that rate. Hence bursty packets can be transmitted at the rate at
which they arrive, as long as tokens are available, which introduces some
flexibility into the system.
Formula: M * s = C + ρ * s, where s is the time taken, M is the maximum output
rate, ρ is the token arrival rate, and C is the capacity of the token bucket in bytes.
Let’s understand with an example,
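using hypothetical numbers. Suppose the bucket capacity C is 1 Mb, the token arrival rate ρ is 2 Mbps, and the maximum output rate M is 10 Mbps. Rearranging M * s = C + ρ * s gives s = C / (M - ρ) = 1 / (10 - 2) = 0.125 s, so the host can transmit at the full 10 Mbps for only 0.125 seconds before the stored tokens run out and the output falls back to the token arrival rate of 2 Mbps.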
Advantages
● Stable Network Operation: Congestion control ensures that networks
remain stable and operational by preventing them from becoming
overloaded with too much data traffic.
● Reduced Delays: It minimizes delays in data transmission by managing
traffic flow effectively, ensuring that data packets reach their destinations
promptly.
● Less Data Loss: By regulating the amount of data in the network at any
given time, congestion control reduces the likelihood of data packets being
lost or discarded.
● Optimal Resource Utilization: It helps networks use their resources
efficiently, allowing for better throughput and ensuring that users can
access data and services without interruptions.
● Scalability: Congestion control mechanisms are scalable, allowing
networks to handle increasing volumes of data traffic as they grow without
compromising performance.
● Adaptability: Modern congestion control algorithms can adapt to changing
network conditions, ensuring optimal performance even in dynamic and
unpredictable environments.
Disadvantages
● Complexity: Implementing congestion control algorithms can add
complexity to network management, requiring sophisticated systems and
configurations.
● Overhead: Some congestion control techniques introduce additional
overhead, which can consume network resources and affect overall
performance.
● Algorithm Sensitivity: The effectiveness of congestion control algorithms
can be sensitive to network conditions and configurations, requiring fine-
tuning for optimal performance.
● Resource Allocation Issues: Fairness in resource allocation, while a
benefit, can also pose challenges when trying to prioritize critical
applications over less essential ones.
● Dependency on Network Infrastructure: Congestion control relies on the
underlying network infrastructure and may be less effective in
environments with outdated or unreliable equipment.
Conclusion
Congestion control is essential for keeping computer networks running
smoothly. It helps prevent network overloads by managing the flow of data,
ensuring that information gets where it needs to go without delays or loss.
Effective congestion control improves network performance and reliability,
making sure that users have a stable and efficient connection. By using these
techniques, networks can handle high traffic volumes and continue to operate
effectively.
Internetworking
Internetworking is a combination of two words, inter and networking, which
implies a connection between entirely different nodes or segments.
This connection is established through intermediary devices
such as routers or gateways. The original term for an
internetwork was catenet. This interconnection is often among or
between public, private, commercial, industrial, or governmental
networks. Thus, an internetwork is a
collection of individual networks, connected by intermediate
networking devices, that functions as one giant network.
Internetworking refers to the industry, products, and procedures that
meet the challenge of creating and administering internetworks.
To enable communication, every individual network node or segment is
designed with the same protocol or communication logic, that is,
Transmission Control Protocol (TCP) or Internet Protocol (IP). Once a
network communicates with another network having the same
communication procedures, it is called internetworking.
Internetworking was designed to solve the problem of delivering a
packet of data across many links.
There is a subtle difference between extending a network and
internetworking. Merely using either a switch or a hub to
attach two local area networks is an extension of the LAN, whereas
connecting them via a router is an example of
internetworking. Internetworking is implemented in Layer 3 (the Network
Layer) of the OSI-ISO model. The most notable example of
internetworking is the Internet.
There are chiefly 3 units of internetworking:
1. Extranet
2. Intranet
3. Internet
Intranets and extranets may or may not have connections to the
Internet. If there is a connection to the Internet, the intranet or
extranet is usually shielded from being accessed from the
Internet without authorization. The Internet is not considered to be a part
of the intranet or extranet, although it may function as a
portal for access to parts of an extranet.
1. Extranet – It is a network of the internetwork that is limited in
scope to one organization or entity but that also has
limited connections to the networks of one or more other,
usually (but not necessarily) trusted organizations. It is the lowest level of
internetworking, often implemented in a private
area. An extranet may additionally be classified as
a MAN, WAN, or other form of network, but it cannot
consist of a single local area network, i.e. it must have at least
one connection to an external network.
2. Intranet – An intranet is a
set of interconnected networks that uses the Internet
Protocol and IP-based tools such as web browsers and FTP
tools, all under the control of a single administrative entity.
That administrative entity closes the intranet to the rest of
the world and permits only specific users. Most typically, this
network is the internal network of a corporation or other
enterprise. A large intranet will usually have its
own web server to supply users with browsable information.
3. Internet – A specific internetwork, consisting of a worldwide
interconnection of governmental, academic, public, and private
networks based upon the Advanced Research Projects
Agency Network (ARPANET) developed by ARPA of the U.S.
Department of Defense. It is also home to the World Wide Web
(WWW) and is cited as the 'Internet' to differentiate it from all
other generic internetworks. Participants in the Internet, or
their service providers, use IP addresses obtained from address
registries that manage assignments.
Internetworking has evolved as an answer to three key problems:
isolated LANs, duplication of resources, and an absence of network
management. Isolated LANs created transmission problems between
different offices or departments. Duplication of resources
meant that the same hardware and software had to be provided to every
office or department, as did separate support staff. The
lack of network management meant that no centralized
method of managing and troubleshooting networks existed.
One more form of interconnection of networks often happens
within enterprises at the Link Layer of the networking model, i.e. at
the hardware-centric layer below the level of the TCP/IP logical
interfaces. Such interconnection is accomplished through network
bridges and network switches. This is sometimes incorrectly
termed internetworking; however, the resulting system is simply a
bigger, single subnetwork, and no internetworking protocol, such as the
Internet Protocol, is needed to traverse these devices.
However, a single network may be converted into an
internetwork by dividing the network into segments and
logically dividing the segment traffic with routers. The Internet
Protocol is designed to supply an unreliable packet
service across the network. The design avoids intermediate network
components maintaining any state of the network. Instead, this task
is allotted to the endpoints of every communication session. To
transfer information correctly, applications must utilize an
appropriate Transport Layer protocol, such as the Transmission
Control Protocol (TCP), which provides a reliable stream. Some
applications use a less complicated, connection-less transport
protocol, the User Datagram Protocol (UDP), for tasks that do not need
reliable delivery of information or that need real-time service,
such as video streaming or voice chat.
Internetwork Addressing –
Internetwork addresses identify devices individually or as members
of a group. Addressing schemes differ based on the protocol family
and the OSI layer. Three kinds of internetwork addresses
are commonly used: data-link layer addresses, Media
Access Control (MAC) addresses, and network-layer addresses.
1. Data Link Layer Addresses: A data-link layer address
unambiguously identifies every physical network connection of a
network device. Data-link addresses are typically referred to as
physical or hardware addresses. Data-link addresses usually
exist within a flat address space and have a pre-established and
usually fixed relationship to a specific device. End systems
usually have just one physical network connection, and therefore
have just one data-link address. Routers and other
internetworking devices usually have multiple physical network
connections and therefore have multiple data-link addresses.
2. MAC Addresses: Media Access Control (MAC) addresses
are a subset of data-link layer addresses. MAC addresses
identify network entities in LANs that implement the IEEE MAC
addresses of the data-link layer. MAC addresses are
unique for each local area network interface. MAC
addresses are 48 bits long and are expressed as
12 hexadecimal digits. The first 6 hexadecimal
digits, which are administered by the IEEE, identify the
manufacturer or vendor and therefore comprise the
Organizationally Unique Identifier (OUI). The last 6
hexadecimal digits comprise the interface serial number or
another value administered by the particular vendor. MAC
addresses are sometimes referred to as burned-in
addresses (BIAs) because they are burned into read-only
memory (ROM) and copied into random-access memory (RAM)
when the interface card initializes (see the short sketch after this list).
3. Network-Layer Addresses: Network addresses usually
exist within a hierarchical address space and are typically
referred to as virtual or logical addresses. The relationship
between a network address and a device is logical and unfixed; it
usually relies either on physical network characteristics or on
groupings that have no physical basis. End systems need
one network-layer address for every network-layer protocol they
support. Routers and other internetworking devices need one
network-layer address per physical network connection for every
network-layer protocol supported.
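As a small illustration of the MAC address structure described in point 2 above, the following Python sketch splits a made-up address into its OUI and vendor-assigned parts (the address itself is purely an example):

# Splits a MAC address into the IEEE-administered OUI (first 6 hex digits)
# and the vendor-administered interface part (last 6 hex digits).
def parse_mac(mac: str):
    hex_digits = mac.replace(":", "").replace("-", "").upper()
    if len(hex_digits) != 12:
        raise ValueError("a MAC address is 48 bits = 12 hexadecimal digits")
    return {
        "oui": hex_digits[:6],          # manufacturer / vendor identifier
        "device_id": hex_digits[6:],    # vendor-administered serial part
        "bits": len(hex_digits) * 4,    # 12 hex digits x 4 bits = 48 bits
    }

print(parse_mac("00:1A:2B:3C:4D:5E"))
# {'oui': '001A2B', 'device_id': '3C4D5E', 'bits': 48}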
Challenges to Internetworking –
Implementing a useful internetwork is not a certainty. There are
several challenging fields, particularly the areas of
reliability, connectivity, network management, and
flexibility, and each and every area is essential in establishing
an efficient and effective internetwork. A few of
them are:
● The initial challenge lies in connecting numerous
systems to support communication between disparate
technologies. For example, different sites may use
different kinds of media, or they may operate at varying speeds.
● Another essential consideration is reliable service, which must be
maintained in an internetwork. Individual users and whole
organizations depend upon consistent, reliable access to network
resources.
● Network management must provide centralized support and
troubleshooting capabilities on the internetwork.
Configuration, security, performance, and other problems
must be adequately addressed for the internetwork to perform
smoothly.
● Flexibility, the final concern, is important for network
expansion and new applications and services, among other
factors.
Advantages:
Increased connectivity: Internetworking enables devices on
different networks to communicate with each other, which increases
connectivity and enables new applications and services.
Resource sharing: Internetworking allows devices to share
resources across networks, such as printers, servers, and storage
devices. This can reduce costs and improve efficiency by allowing
multiple devices to share resources.
Improved scalability: Internetworking allows networks to be
expanded and scaled as needed to accommodate growing numbers
of devices and users.
Improved collaboration: Internetworking enables teams and
individuals to collaborate and work together more effectively,
regardless of their physical location.
Access to remote resources: Internetworking allows users to
access resources and services that are physically located on remote
networks, improving accessibility and flexibility.
Disadvantages:
Security risks: Internetworking can create security vulnerabilities
and increase the risk of cyberattacks and data breaches. Connecting
multiple networks together increases the number of entry points for
attackers, making it more difficult to secure the entire system.
Complexity: Internetworking can be complex and requires
specialized knowledge and expertise to set up and maintain. This
can increase costs and create additional maintenance overhead.
Performance issues: Internetworking can lead to performance
issues, particularly if networks are not properly optimized and
configured. This can result in slow response times and poor network
performance.
Compatibility issues: Internetworking can lead to compatibility
issues, particularly if different networks are using different protocols
or technologies. This can make it difficult to integrate different
systems and may require additional resources to resolve.
Management overhead: Internetworking can create additional
management overhead, particularly if multiple networks are
involved. This can increase costs and require additional resources to
manage effectively.
Switching
What is Switching?
In computer networking, switching is the process of transferring data packets
from one device to another in a network, or from one network to another,
using specific devices called switches. A computer user experiences
switching all the time for example, accessing the Internet from your computer
device, whenever a user requests a webpage to open, the request is
processed through switching of data packets only.
Switching takes place at the Data Link layer of the OSI Model. This means
that after the generation of data packets in the Physical Layer, switching is the
immediate next process in data communication. In this article, we shall
discuss different processes involved in switching, what kind of hardware is
used in switching, etc.
What is a Switch?
A switch is a hardware device in a network that connects other devices, like
computers and servers. It helps multiple devices share a network without their
data interfering with each other.
A switch works like a traffic cop at a busy intersection. When a data packet
arrives, the switch decides where it needs to go and sends it through the right
port.
Some data packets come from devices directly connected to the switch, like
computers or VoIP phones. Other packets come from devices connected
through hubs or routers.
The switch knows which devices are connected to it and can send data
directly between them. If the data needs to go to another network, the switch
sends it to a router, which forwards it to the correct destination.
What is a Network Switching?
A switch is a dedicated piece of computer hardware that facilitates the
process of switching i.e., incoming data packets and transferring them to their
destination. A switch works at the Data Link layer of the OSI Model. A switch
primarily handles the incoming data packets from a source computer or
network and decides the appropriate port through which the data packets will
reach their target computer or network.
A switch decides the port through which a data packet shall pass with the help
of its destination MAC(Media Access Control) Address. A switch does this
effectively by maintaining a switching table, (also known as forwarding table).
A network switch is more efficient than a network Hub or repeater because it
maintains a switching table, which simplifies its task and reduces congestion
on a network, which effectively improves the performance of the network.
Process of Switching
The switching process involves the following steps:
● Frame Reception: The switch receives a data frame or packet from a
computer connected to its ports.
● MAC Address Extraction: The switch reads the header of the data
frame and collects the destination MAC Address from it.
● MAC Address Table Lookup: Once the switch has retrieved the MAC
Address, it performs a lookup in its Switching table to find a port that leads
to the MAC Address of the data frame.
● Forwarding Decision and Switching Table Update: If the switch
matches the destination MAC Address of the frame to the MAC address in
its switching table, it forwards the data frame to the respective port.
However, if the destination MAC Address does not exist in its forwarding
table, it follows the flooding process, in which it sends the data frame to all
its ports except the one it came from and records all the MAC Addresses to
which the frame was delivered. This way, the switch finds the new MAC
Address and updates its forwarding table.
● Frame Transition: Once the destination port is found, the switch sends the
data frame to that port and forwards it to its target computer/network.
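The steps above can be sketched in a few lines of Python. The port numbers and MAC addresses below are invented purely for illustration; the point is the learn-then-forward-or-flood behaviour of the switching table:

# A minimal sketch of a switch's forwarding logic: learn the source MAC on
# the incoming port, then forward on one port if the destination is known,
# otherwise flood on every port except the incoming one.
class Switch:
    def __init__(self, num_ports):
        self.ports = range(num_ports)
        self.mac_table = {}                 # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port   # learn / update the switching table
        if dst_mac in self.mac_table:       # known destination: forward on one port
            return [self.mac_table[dst_mac]]
        # Unknown destination: flood on every port except the incoming one.
        return [p for p in self.ports if p != in_port]

sw = Switch(num_ports=4)
print(sw.receive(0, "AA:AA", "BB:BB"))   # BB:BB unknown -> flood: [1, 2, 3]
print(sw.receive(1, "BB:BB", "AA:AA"))   # AA:AA was learned on port 0 -> [0]
print(sw.receive(0, "AA:AA", "BB:BB"))   # BB:BB now learned on port 1 -> [1]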
Types of Switching
There are three types of switching methods:
● Message Switching
● Circuit Switching
● Packet Switching
o Datagram Packet Switching
o Virtual Circuit Packet Switching
Let us now discuss them individually:
Message Switching: This is an older switching technique that has become
obsolete. In the message switching technique, the entire data block/message is
forwarded across the entire network, making it highly inefficient.
Circuit Switching: In this type of switching, a connection is established
between the source and destination beforehand. This connection receives the
complete bandwidth of the network until the data is transferred completely.
This approach is better than message switching as it does not involve sending
data to the entire network, but only toward its destination.
Packet Switching: This technique requires the data to be broken down into
smaller components, data frames, or packets. These data frames are then
transferred to their destinations according to the available resources in the
network at a particular time.
This switching type is used in modern computers and even the Internet. Here,
each data frame contains additional information about the destination and
other information required for proper transfer through network components.
Datagram Packet Switching: In Datagram Packet switching, each data
frame is taken as an individual entity and thus, they are processed separately.
Here, no connection is established before data transmission occurs. Although
this approach provides flexibility in data transfer, it may cause a loss of data
frames or late delivery of the data frames.
Virtual-Circuit Packet Switching: In Virtual-Circuit Packet switching, a
logical connection between the source and destination is made before
transmitting any data. These logical connections are called virtual circuits.
Each data frame follows these logical paths and provides a reliable way of
transmitting data with less chance of data loss.
Conclusion
In conclusion, switching is a fundamental networking process that enables the
exchange of data between devices within a network. By efficiently directing
data packets to their correct destinations, switches help maintain smooth and
organized communication, ensuring that multiple devices can share the same
network without interference. Switching is crucial for the seamless operation
of local-area networks (LANs) and the overall performance of network
infrastructure.
The Network Layer in the Internet (IPv4 and IPv6)
Difference Between IPv4 and IPv6
The address through which any computer communicates with our computer is
simply called an Internet Protocol Address or IP address. For example, if we
want to load a web page or download something, we require the address to
deliver that particular file or webpage. That address is called an IP Address.
There are two versions of IP: IPv4 and IPv6. IPv4 is the older version, while
IPv6 is the newer one. Both have their own features and functions, but they
differ in many ways. Understanding these differences helps us see why we
need IPv6 as the internet grows and evolves.
What is IP?
An IP, or Internet Protocol address, is a unique set of numbers assigned to
each device connected to a network, like the Internet. It’s like an address for
your computer, phone, or any other device, allowing them to communicate
with each other. When you visit a website, your device uses the IP address to
find and connect to the website’s server.
Types of IP Addresses
● IPv4 (Internet Protocol Version 4)
● IPv6 (Internet Protocol Version 6)
What is IPv4?
IPv4 addresses consist of two parts: the network address and the host
address. IPv4 stands for Internet Protocol version four. It was introduced in
1981 by DARPA and was first deployed for production on
SATNET in 1982 and on the ARPANET in January 1983.
IPv4 addresses are 32-bit integers that are expressed in decimal
notation. An address is represented by 4 numbers separated by dots, each in
the range 0-255, which are converted to binary (0s and 1s) to be understood
by computers. For example, an IPv4 address can be written as 189.123.123.90.
IPv4 Address Format
The IPv4 address format is a 32-bit address written as four numbers (octets)
separated by dots (.).
Drawback of IPv4
● Limited Address Space : IPv4 has a limited number of addresses, which
is not enough for the growing number of devices connecting to the internet.
● Complex Configuration : IPv4 often requires manual configuration or
DHCP to assign addresses, which can be time-consuming and prone to
errors.
● Less Efficient Routing : The IPv4 header is more complex, which can
slow down data processing and routing.
● Security Issues : IPv4 does not have built-in security features, making it
more vulnerable to attacks unless extra security measures are added.
● Limited Support for Quality of Service (QoS) : IPv4 has limited
capabilities for prioritizing certain types of data, which can affect the
performance of real-time applications like video streaming and VoIP.
● Fragmentation : IPv4 allows routers to fragment packets, which can lead
to inefficiencies and increased chances of data being lost or corrupted.
● Broadcasting Overhead : IPv4 uses broadcasting to communicate with
multiple devices on a network, which can create unnecessary network
traffic and reduce performance.
What is IPv6?
IPv6 stands for Internet Protocol version 6 and is the successor to IPv4. It was
first introduced in December 1995 by the Internet Engineering Task Force. IP
version 6 is the new version of the Internet Protocol, which is much better than
IP version 4 in terms of complexity and efficiency. IPv6 is written as eight
groups of hexadecimal numbers separated by colons (:). It can be written as
128 bits of 0s and 1s.
IPv6 Address Format
IPv6 Address Format is a 128-bit IP Address, which is written in a group of 8
hexadecimal numbers separated by colon (:).
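As a quick illustration of the two formats, the following Python sketch uses the standard ipaddress module to parse one address of each kind (the example addresses are the ones used elsewhere in this section):

# Contrast the 32-bit dotted-decimal IPv4 format with the 128-bit
# colon-separated hexadecimal IPv6 format using the standard library.
import ipaddress

v4 = ipaddress.ip_address("189.123.123.90")
v6 = ipaddress.ip_address("2001:0000:3238:DFE1:0063:0000:0000:FEFB")

print(v4.version, v4.max_prefixlen)   # 4 32   -> a 32-bit address
print(v6.version, v6.max_prefixlen)   # 6 128  -> a 128-bit address
print(v6.compressed)                  # 2001:0:3238:dfe1:63::fefb (shortened form)
print(v6.exploded)                    # all eight 16-bit groups written out in full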
To switch from IPv4 to IPv6, there are several strategies:
● Dual Stacking : Devices can use both IPv4 and IPv6 at the same time.
This way, they can talk to networks and devices using either version.
● Tunneling : This method allows IPv6 users to send data through an IPv4
network to reach other IPv6 users. Think of it as creating a “tunnel” for IPv6
traffic through the older IPv4 system.
● Network Address Translation (NAT) : NAT helps devices using different
versions of IP addresses (IPv4 and IPv6) to communicate with each other
by translating the addresses so they understand each other.
Difference Between IPv4 and IPv6
● Address length: IPv4 has a 32-bit address length, while IPv6 has a 128-bit address length.
● Configuration: IPv4 supports manual and DHCP address configuration, while IPv6 supports auto-configuration and renumbering.
● Connection integrity: in IPv4, end-to-end connection integrity is unachievable; in IPv6 it is achievable.
● Address space: IPv4 can generate about 4.29×10^9 addresses, while IPv6 can produce about 3.4×10^38 addresses.
● Security: in IPv4, security depends on the application; IPSec is an inbuilt security feature in IPv6.
● Address representation: IPv4 addresses are represented in decimal, IPv6 addresses in hexadecimal.
● Fragmentation: in IPv4, fragmentation is performed by the sender and forwarding routers; in IPv6, fragmentation is performed only by the sender.
● Packet flow identification: not available in IPv4; available in IPv6 using the flow label field in the header.
● Checksum: the checksum field is available in IPv4 but not in IPv6.
● Message transmission: IPv4 has a broadcast message transmission scheme, while IPv6 provides multicast and anycast message transmission.
● Encryption and authentication: not provided in IPv4; provided in IPv6.
● Header size: IPv4 has a header of 20-60 bytes, while IPv6 has a fixed 40-byte header.
● Conversion: IPv4 can be converted to IPv6, but not all IPv6 addresses can be converted to IPv4.
● Fields: IPv4 consists of 4 fields separated by dots (.), while IPv6 consists of 8 fields separated by colons (:).
● Classes: IPv4 addresses are divided into five classes (A, B, C, D, E), while IPv6 does not have address classes.
● VLSM: IPv4 supports VLSM (Variable Length Subnet Mask), while IPv6 does not support VLSM.
● Example: IPv4: 66.94.29.13; IPv6: 2001:0000:3238:DFE1:0063:0000:0000:FEFB.
Benefits of IPv6 over IPv4
The recent version of IP, IPv6, has several advantages over IPv4. Here are
some of the notable benefits:
● Larger Address Space: IPv6 has a greater address space than IPv4,
which is required for the expanding number of IP-connected devices. IPv6
has 128-bit IP addresses, whereas IPv4 has 32-bit addresses.
● Improved Security: IPv6 has some improved security which is built in with
it. IPv6 offers security like Data Authentication, Data Encryption, etc. Here,
an Internet Connection is more Secure.
● Simplified Header Format: As compared to IPv4, IPv6 has a simpler and
more effective header Structure, which is more cost-effective and also
increases the speed of Internet Connection.
● Prioritization: IPv6 contains stronger and more reliable support for QoS
features, which helps prioritize traffic and improves audio and video
quality on pages.
● Improved Support for Mobile Devices: IPv6 has increased and better
support for Mobile Devices. It helps in making quick connections over other
Mobile Devices and in a safer way than IPv4.
Conclusion
In simple terms, IPv4 and IPv6 are two versions of Internet Protocol
addresses used to identify devices on a network. IPv6 is the newer version
and offers many improvements over IPv4, such as a much larger address
space, better security, and more efficient routing . However, IPv4 is still widely
used, and the transition to IPv6 is ongoing. The main difference is that IPv6
can handle many more devices, which is crucial as the number of internet-
connected devices continues to grow.
Quality of Service
Quality of Service (QoS) is an important concept, particularly when
working with multimedia applications. Multimedia applications, such
as video conferencing, streaming services, and VoIP (Voice over IP),
require certain bandwidth, latency, jitter, and packet loss
parameters. QoS methods help ensure that these requirements are
satisfied, allowing for seamless and reliable communication.
What is Quality of Service?
Quality-of-service (QoS) refers to traffic control mechanisms that
seek to differentiate performance based on application or network-
operator requirements or provide predictable or guaranteed
performance to applications, sessions, or traffic aggregates. The
basic phenomenon for QoS is in terms of packet delay and losses of
various kinds.
QoS Specification
● Delay
● Delay Variation(Jitter)
● Throughput
● Error Rate
Types of Quality of Service
● Stateless Solutions – Routers maintain no fine-grained state
about traffic, one positive factor of it is that it is scalable and
robust. But it has weak services as there is no guarantee about
the kind of delay or performance in a particular application which
we have to encounter.
● Stateful Solutions – Routers maintain per-flow state, since flow information
is very important in providing Quality of Service, i.e. providing
powerful services such as guaranteed services, high resource
utilization, and protection; however, stateful solutions are much less scalable
and robust.
QoS Parameters
● Packet loss: This occurs when network connections get
congested, and routers and switches begin losing packets.
● Jitter: This is the result of network congestion, time drift, and
routing changes. Too much jitter can reduce the quality of voice
and video communication.
● Latency: This is how long it takes a packet to travel from its
source to its destination. The latency should be as near to zero as
possible.
● Bandwidth: This is a network communications link’s ability to
transmit the majority of data from one place to another in a
specific amount of time.
● Mean opinion score: This is a metric for rating voice quality
that uses a five-point scale, with five representing the highest
quality.
How does QoS Work?
Quality of Service (QoS) ensures the performance of critical
applications within limited network capacity.
● Packet Marking: QoS marks packets to identify their service
types. For example, it distinguishes between voice, video, and
data traffic.
● Virtual Queues: Routers create separate virtual queues for each
application based on priority. Critical apps get reserved
bandwidth.
● Handling Allocation: QoS assigns the order in which packets
are processed, ensuring appropriate bandwidth for each
application
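A very simplified sketch of the virtual-queue idea is shown below in Python; the traffic class names and strict-priority order are assumptions made for illustration, not a description of any particular router's scheduler:

# Packets are marked with a class, queued separately, and the scheduler
# always serves the highest-priority non-empty queue first.
from collections import deque

PRIORITY = ["voice", "video", "data"]          # highest priority first

queues = {cls: deque() for cls in PRIORITY}

def enqueue(packet, traffic_class):
    queues[traffic_class].append(packet)       # packet marking decides the queue

def dequeue():
    for cls in PRIORITY:                       # strict priority scheduling
        if queues[cls]:
            return cls, queues[cls].popleft()
    return None

enqueue("p1", "data")
enqueue("p2", "voice")
enqueue("p3", "video")
print(dequeue())   # ('voice', 'p2') - the critical traffic goes out first
print(dequeue())   # ('video', 'p3')
print(dequeue())   # ('data', 'p1')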
Benefits of QoS
● Improved Performance for Critical Applications
● Enhanced User Experience
● Efficient Bandwidth Utilization
● Increased Network Reliability
● Compliance with Service Level Agreements (SLAs)
● Reduced Network Costs
● Improved Security
● Better Scalability
Why is QoS Important?
● Video and audio conferencing require a bounded delay and loss
rate.
● Video and audio streaming requires a bounded packet loss rate, it
may not be so sensitive to delay.
● Time-critical applications (real-time control) in which bounded
delay is considered to be an important factor.
● Valuable applications should provide better services than less
valuable applications.
Implementing QoS
● Planning: The organization should develop an awareness of each
department’s service needs and requirements, select an
appropriate model, and build stakeholder support.
● Design: The organization should then keep track of all key
software and hardware changes and modify the chosen QoS
model to the characteristics of its network infrastructure.
● Testing: The organization should test QoS settings and policies
in a secure, controlled testing environment where faults can be
identified.
● Deployment: Policies should be implemented in phases. An
organization can choose to deploy rules by network segment or
by QoS function (what each policy performs).
● Monitoring and analyzing: Policies should be modified to
increase performance based on performance data.
Models to Implement QoS
1. Integrated Services(IntServ)
● An architecture for providing QoS guarantees in IP networks for
individual application sessions.
● Relies on resource reservation, and routers need to maintain
state information of allocated resources and respond to new call
setup requests.
● Network decides whether to admit or deny a new call setup
request.
2. IntServ QoS Components
● Resource reservation: call setup signaling, traffic, QoS
declaration, per-element admission control.
● QoS-sensitive scheduling e.g WFQ queue discipline.
● QoS-sensitive routing algorithm(QSPF)
● QoS-sensitive packet discard strategy.
3. RSVP-Internet Signaling
It creates and maintains a distributed reservation state, initiated by
the receiver, and scales for multicast. The reservation needs to be refreshed
periodically, otherwise it times out, as it is kept in a soft state. Paths
are discovered through PATH messages (forward direction) and
used by RESV messages (reverse direction).
4. Call Admission
● A session must first declare its QoS requirement and characterize
the traffic it will send through the network.
● R-specification: defines the QoS being requested, i.e. what kind
of bound we want on the delay, what kind of packet loss is
acceptable, etc.
● T-specification: defines the traffic characteristics, like burstiness
in the traffic.
● A signaling protocol is needed to carry the R-spec and T-spec to
the routers where reservation is required.
● Routers will admit calls based on their R-spec, T-spec and based
on the current resource allocated at the routers to other calls.
5. Diff-Serv
Differentiated Services is a reduced-state solution in which each flow
does not require its own state. It provides reduced-state services,
i.e. maintaining state only for larger, coarser-granularity flows rather than
end-to-end flows, and thus tries to achieve the best of both worlds. It is
intended to address the following difficulties with IntServ and RSVP:
● Flexible Service Models: IntServ has only two classes; the aim is to
provide more qualitative service classes and
'relative' service distinctions.
● Simpler Signaling: Many applications and users may only want
to specify a more qualitative notion of service.
QoS Tools
● Traffic Classification and Marking
● Traffic Shaping and Policing
● Queue Management and Scheduling
● Resource Reservation
● Congestion Management
What is Multimedia?
The word multi and media are combined to form the
word multimedia. The word “multi” signifies
“many.” Multimedia is a type of medium that allows
information to be easily transferred from one location to
another. Multimedia is the presentation of text, pictures, audio,
and video with links and tools that allow the user to navigate,
engage, create, and communicate using a computer.
Components of Multimedia
● Text: Characters are used to form words, phrases, and
paragraphs in the text. The text can be in a variety of fonts and
sizes to match the multimedia software’s professional
presentation.
● Graphics: Non-text information, such as a sketch, chart, or
photograph, is represented digitally. Graphics add to the appeal
of the multimedia application. The use of visuals in multimedia
enhances the effectiveness and presentation of the concept.
Windows Picture, Internet Explorer, and other similar programs
are often used to see visuals.
● Animations: Animation is the process of making a still image
appear to move. A presentation can also be made lighter and
more appealing by using animation. In multimedia applications,
the animation is quite popular. The following are some of the
most regularly used animation viewing programs: Fax Viewer,
Internet Explorer, etc.
● Video: Photographic images that appear to be in full motion and
are played back at speeds of 15 to 30 frames per second. The
term video refers to a moving image that is accompanied by
sound, such as a television picture.
● Audio: Any sound, whether it’s music, conversation, or
something else. Sound is the most serious aspect of multimedia,
delivering the joy of music, special effects, and other forms of
entertainment. Decibels are a unit of measurement for volume
and sound pressure level. Audio files are used as part of the
application context as well as to enhance interaction. Audio files
must occasionally be distributed using plug-in media players
when they appear within online applications and webpages. MP3,
WMA, Wave, MIDI, and RealAudio are examples of audio formats.
The following programs are widely used to view videos: Real
Player, Window Media Player, etc.
Conclusion
QoS is critical for ensuring that multimedia applications run
smoothly and effectively across a network. QoS techniques
contribute to the quality and reliability of real-time applications by
regulating bandwidth, latency, jitter, and packet loss. To fulfill the
distinct requirements of various forms of network traffic, QoS is
implemented using a combination of categorization, prioritization,
resource reservation, and traffic management techniques.
Address Mapping – ARP, RARP, BOOTP, DHCP
1. Address Resolution Protocol (ARP) –
Address Resolution Protocol is a communication protocol used for
discovering the physical address associated with a given network address.
Typically, ARP is a network layer to data link layer mapping process,
which is used to discover the MAC address for a given Internet Protocol
address. In order to send data to a destination, having the IP address
is necessary but not sufficient; we also need the physical address of
the destination machine. ARP is used to get the physical address
(MAC address) of the destination machine.
Before sending the IP packet, the MAC address of the destination must
be known. If it is not, then the sender broadcasts an ARP-discovery
packet requesting the MAC address of the intended destination. Since
ARP-discovery is broadcast, every host inside that network will get
this message, but the packet will be discarded by everyone except
the intended receiver host whose IP address matches. This
receiver then sends a unicast packet with its MAC address (ARP-reply)
to the sender of the ARP-discovery packet. After the original sender
receives the ARP-reply, it updates its ARP cache and starts sending
unicast messages to the destination.
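The request/reply behaviour described above can be sketched as follows in Python; the IP and MAC addresses are invented, and the broadcast is simulated with a simple dictionary lookup:

# Check the local ARP cache first; otherwise "broadcast" a request, and
# cache the unicast reply for later packets.
lan_hosts = {                      # IP address -> MAC address of hosts on the LAN
    "192.168.1.10": "AA:BB:CC:00:00:10",
    "192.168.1.20": "AA:BB:CC:00:00:20",
}
arp_cache = {}                     # the sender's local ARP cache

def resolve(target_ip):
    if target_ip in arp_cache:             # cache hit: no broadcast needed
        return arp_cache[target_ip]
    # The ARP request is broadcast; only the host owning target_ip replies.
    mac = lan_hosts.get(target_ip)
    if mac is None:
        return None                        # nobody answered the ARP request
    arp_cache[target_ip] = mac             # cache the ARP reply for next time
    return mac

print(resolve("192.168.1.10"))   # broadcast, then reply AA:BB:CC:00:00:10
print(resolve("192.168.1.10"))   # answered directly from the ARP cache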
2. Reverse Address Resolution Protocol (RARP) –
Reverse ARP is a networking protocol used by a client machine in a
local area network to request its Internet Protocol address (IPv4)
from the gateway router's ARP table. The network administrator
creates a table in the gateway router, which is used to map the MAC
address to the corresponding IP address. When a new machine is set up,
or any machine which does not have memory to store an IP address
needs an IP address for its own use, the machine sends a RARP
broadcast packet which contains its own MAC address in both the
sender and receiver hardware address fields.
A special host configured inside the local area network, called the
RARP server, is responsible for replying to these kinds of broadcast
packets. The RARP server attempts to find the entry in the IP-to-
MAC address mapping table. If any entry matches, the RARP
server sends the response packet to the requesting device along with
its IP address.
● LAN technologies like Ethernet, Ethernet II, Token Ring and Fiber
Distributed Data Interface (FDDI) support the Address Resolution
Protocol.
● RARP is not used in today's networks, because we now have
much more capable protocols like BOOTP (Bootstrap Protocol)
and DHCP (Dynamic Host Configuration Protocol).
A computer networking protocol helps the users to establish the
rules for the data transmission between two users. The main
purpose of the computer networking protocols like BOOTP and RARP
is that they allow the users to have the ability to communicate with
each other simultaneously without having any ambiguity of the
structure and the architecture of the protocol. In this article, we are
going to discuss BOOTP and RARP, and how both are different from
each other.
What is BOOTP?
Bootstrap Protocol (BOOTP) is the successor of the Reverse
Address Resolution Protocol (RARP) and the predecessor of
the DHCP (Dynamic Host Configuration Protocol) protocol.
Bootstrap Protocol is a TCP/IP-based protocol which
allows a user or client to discover its own IP address and the
location of a boot file to load from a server machine connected
to the network. The Bootstrap Protocol allows temporary IP
addresses to be configured on the server machine. When a
new device gets connected to the network, the host
machine provides the IP address for the new device connected to the
network. Unlike DHCP, the BOOTP server is configured by the
administrator with a static table of addresses.
Advantages of BOOTP
● Automates IP Configuration: Devices can easily and automatically
get IP address and other network settings as required by a
network.
● Supports Additional Configurations: It also contains other
information like the gateway and the subnet mask which is useful
in complicated network configuration.
● Widely Supported: BOOTP is widely supported by many routers
and servers making its implementation quite easy.
Disadvantages of BOOTP
● Static Configuration: Configuration of the server is done by the
administrator, and it does not support dynamic assignment of IPs like
DHCP.
● Limited Features: BOOTP has fewer options and is more rigid
compared to newer protocols such as DHCP.
What is RARP?
Reverse Address Resolution Protocol (RARP) is a type of internet
protocol used by a client machine to obtain its Internet
Protocol (IPv4) address from a server; the request is made within the
local area network (LAN) and answered by the host machine acting as
the RARP server. A table in the gateway router of the host machine is
created by the administrator of the network, which maps the MAC
address of each machine to its IP address. RARP is best known as a
network layer protocol that deals with the configuration and mapping
of a device's MAC address to its IP address.
Advantages of RARP
● Simple Operation: RARP has a quite simple operation, which
makes its implementation convenient in systems that
require pure IP address assignment only.
● Low Overhead: In terms of functionality, RARP is rather limited, as
it is only in charge of allocating IP addresses and, therefore, is not
very resource-intensive.
Disadvantages of RARP
● Limited Functionality: As mentioned earlier, RARP only returns the
IP address of the machine and does not provide other
configuration information such as the subnet mask or gateway.
● Dependency on Broadcasts: RARP relies entirely on broadcasts;
thus, it is less efficient and slower than contemporary protocols
like BOOTP or DHCP.
Differences between BOOTP and RARP
● BOOTP stands for Bootstrap Protocol; RARP stands for Reverse Address
Resolution Protocol.
● BOOTP is the newer protocol as compared to RARP; RARP is the older
version of the internet protocol.
● BOOTP does not require manual configuration of MAC addresses; RARP
requires manual configuration of MAC addresses with IP addresses on the
server.
● BOOTP transfers the IP address along with bootstrap information to the
client; RARP is only able to transfer IP addresses.
● BOOTP uses static routing configuration, which is more stable and secure;
RARP relies on dynamically discovered routers, which is less stable and
riskier.
● With BOOTP, the host can remain connected to any layer of the network;
RARP requires the host to be connected to a Layer 3 device on the network.
● With BOOTP, we can configure and change the settings of the protocol from
any device connected to the network; with RARP, we have to use a centrally
connected device to change the configuration of the network.
● Both are protocols used to assign IP addresses to network devices.
● BOOTP is used to provide IP addresses to diskless workstations or network
devices during bootup; RARP is used to obtain the IP address of a network
device when only its MAC address is known.
● In BOOTP, the client broadcasts a request for an IP address and the server
responds with an available IP address; in RARP, the client broadcasts its
MAC address and requests an IP address, and the server responds with the
corresponding IP address.
● BOOTP works in terms of IP addresses; RARP works in terms of MAC
addresses.
● BOOTP's approach lives on in DHCP (Dynamic Host Configuration Protocol),
which assigns IP addresses dynamically; RARP is rarely used in modern
networks as most devices have a pre-assigned IP address.
Conclusion
In computer networking, BOOTP and RARP protocols are networking
protocols that promote communication between devices. Both
protocols have distinguishable features, the BOOTP protocol is more
advanced than the RARP protocol. BOOTP can dynamically allot IP
addresses and bootstrap information to network devices during
bootup. It is more secure and stable. Whereas RARP protocol is not
used much nowadays because of its limited flexibility and involves
the manual configuration of MAC addresses. It is mainly used for
mapping the MAC addresses of devices to IP addresses in LAN (Local
Area Network).
Dynamic Host Configuration Protocol (DHCP)
Dynamic Host Configuration Protocol is a network protocol used to automate
the process of assigning IP addresses and other network configuration
parameters to devices (such as computers, smartphones, and printers) on a
network. Instead of manually configuring each device with an IP address,
DHCP allows devices to connect to a network and receive all necessary
network information, like IP address, subnet mask, default gateway, and DNS
server addresses, automatically from a DHCP server.
This makes it easier to manage and maintain large networks, ensuring
devices can communicate effectively without conflicts in their network settings.
DHCP plays a crucial role in modern networks by simplifying the process of
connecting devices and managing network resources efficiently.
What is DHCP?
DHCP stands for Dynamic Host Configuration Protocol. It is a critical feature
on which the users of an enterprise network rely to communicate. DHCP helps
enterprises to smoothly manage the allocation of IP addresses to end-user
client devices such as desktops, laptops, cellphones, etc. It is an application
layer protocol that is used to provide:
Subnet Mask (Option 1 - e.g., 255.255.255.0)
Router Address (Option 3 - e.g., 192.168.1.1)
DNS Address (Option 6 - e.g., 8.8.8.8)
Vendor Class Identifier (Option 43 - e.g.,
'unifi' = 192.168.1.9 ##where unifi = controller)
DHCP is based on a client-server model and works through four steps:
discovery, offer, request, and acknowledgement (ACK).
Why Do We Use DHCP?
DHCP helps in managing the entire process automatically and centrally.
DHCP helps in maintaining a unique IP Address for a host using the server.
DHCP servers maintain information on TCP/IP configuration and provide
configuration of address to DHCP-enabled clients in the form of a lease offer.
Components of DHCP
The main components of DHCP include:
● DHCP Server: DHCP Server is a server that holds IP Addresses and other
information related to configuration.
● DHCP Client: It is a device that receives configuration information from the
server. It can be a mobile, laptop, computer, or any other electronic device
that requires a connection.
● DHCP Relay: DHCP relays basically work as a communication channel
between DHCP Client and Server.
● IP Address Pool: It is the pool or container of IP Addresses possessed by
the DHCP Server. It has a range of addresses that can be allocated to
devices.
● Subnets: Subnets are smaller portions of the IP network partitioned to
keep networks under control.
● Lease: The length of time for which the configuration information received from the server is valid. When the lease expires, the client must renew (re-acquire) the lease.
● DNS Servers: DHCP servers can also provide DNS (Domain Name
System) server information to DHCP clients, allowing them to resolve
domain names to IP addresses.
● Default Gateway: DHCP servers can also provide information about the
default gateway, which is the device that packets are sent to when the
destination is outside the local network.
● Options: DHCP servers can provide additional configuration options to
clients, such as the subnet mask, domain name, and time server
information.
● Renewal: DHCP clients can request to renew their lease before it expires
to ensure that they continue to have a valid IP address and configuration
information.
● Failover: DHCP servers can be configured for failover, where two servers
work together to provide redundancy and ensure that clients can always
obtain an IP address and configuration information, even if one server goes
down.
● Dynamic Updates: DHCP servers can also be configured to dynamically
update DNS records with the IP address of DHCP clients, allowing for
easier management of network resources.
● Audit Logging: DHCP servers can keep audit logs of all DHCP
transactions, providing administrators with visibility into which devices are
using which IP addresses and when leases are being assigned or
renewed.
DHCP Packet Format
● Hardware Length: This is an 8-bit field defining the length of the physical
address in bytes, e.g., for Ethernet the value is 6.
● Hop count: This is an 8-bit field defining the maximum number of hops the
packet can travel.
● Transaction ID: This is a 4-byte field carrying an integer. The transaction identification is set by the client and is used to match a reply with the request. The server returns the same value in its reply.
● Number of Seconds: This is a 16-bit field that indicates the number of
seconds elapsed since the time the client started to boot.
● Flag: This is a 16-bit field in which only the leftmost bit is used; the rest of the bits should be set to 0s. A set leftmost bit forces a broadcast reply from the server. If the reply were unicast to the client, the destination IP address of the IP packet would be the address assigned to the client.
● Client IP Address: This is a 4-byte field that contains the client IP address. If the client does not have this information, this field has a value of 0.
● Your IP Address: This is a 4-byte field that contains the client IP address.
It is filled by the server at the request of the client.
● Server IP Address: This is a 4-byte field containing the server IP address.
It is filled by the server in a reply message.
● Gateway IP Address: This is a 4-byte field containing the IP address of a router. It is filled by the server in a reply message.
● Client Hardware Address: This is the physical address of the client. Although the server can retrieve this address from the frame sent by the client, it is more efficient if the address is supplied explicitly by the client in the request message.
● Server Name: This is a 64-byte field that is optionally filled by the server in
a reply packet. It contains a null-terminated string consisting of the domain
name of the server. If the server does not want to fill this field with data, the
server must fill it with all 0s.
● Boot Filename: This is a 128-byte field that can be optionally filled by the
server in a reply packet. It contains a null- terminated string consisting of
the full pathname of the boot file. The client can use this path to retrieve
other booting information. If the server does not want to fill this field with
data, the server must fill it with all 0s.
● Options: This is a 64-byte field with a dual purpose. It can carry either
additional information or some specific vendor information. The field is
used only in a reply message. The server uses a number, called a magic
cookie, in the format of an IP address with the value of 99.130.83.99.
When the client finishes reading the message, it looks for this magic
cookie. If present the next 60 bytes are options.
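To make the layout above concrete, here is a minimal sketch (in Python) of how a DHCP Discover message could be packed, assuming the standard BOOTP field order in which a one-byte operation code and a hardware-type byte precede the fields listed above; the transaction ID, MAC address, and option bytes are illustrative values, not taken from this text.

```python
import struct

def build_dhcp_discover(xid: int, mac: bytes) -> bytes:
    """Pack the fixed 236-byte BOOTP/DHCP header plus a minimal options field."""
    header = struct.pack(
        "!4B I 2H 4s 4s 4s 4s 16s 64s 128s",
        1, 1, 6, 0,                # op (1 = request), hardware type, hardware length, hop count
        xid,                       # transaction ID (set by the client)
        0, 0x8000,                 # seconds elapsed, flags (leftmost bit = force broadcast reply)
        b"\x00" * 4,               # client IP address (unknown, so 0.0.0.0)
        b"\x00" * 4,               # "your" IP address (filled in by the server)
        b"\x00" * 4,               # server IP address
        b"\x00" * 4,               # gateway IP address
        mac.ljust(16, b"\x00"),    # client hardware address, padded to 16 bytes
        b"\x00" * 64,              # server name
        b"\x00" * 128,             # boot filename
    )
    magic_cookie = bytes([99, 130, 83, 99])   # marks the start of the DHCP options
    options = bytes([53, 1, 1, 255])          # option 53 (message type) = 1 (DISCOVER), then END
    return header + magic_cookie + options

packet = build_dhcp_discover(xid=0x3903F326, mac=bytes.fromhex("08002B2EAF2A"))
print(len(packet), "bytes")
```

A real client would hand a buffer like this to a UDP socket bound to port 68 and broadcast it to port 67, as described in the next section.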
Working of DHCP
DHCP is an application layer protocol that runs over UDP. The main task of DHCP is to dynamically assign IP addresses to the clients and to allocate TCP/IP configuration information to them.
The DHCP port number for the server is 67 and for the client is 68. It is a
client-server protocol that uses UDP services. An IP address is assigned from
a pool of addresses. In DHCP, the client and the server exchange mainly 4
DHCP messages in order to make a connection, also called
the DORA process, but there are 8 DHCP messages in the process.

Working of DHCP

The 8 DHCP Messages


1. DHCP Discover Message: This is the first message generated in the
communication process between the server and the client. This message is
generated by the Client host in order to discover if there is any DHCP
server/servers are present in a network or not. This message is broadcasted
to all devices present in a network to find the DHCP server. This message is
342 or 576 bytes long.
DHCP Discover Message

As shown in the figure, the source MAC address (client PC) is 08002B2EAF2A, the destination MAC address (server) is FFFFFFFFFFFF, the source IP address is 0.0.0.0 (because the PC has no IP address yet), and the destination IP address is 255.255.255.255 (the IP address used for broadcasting). As the discover message is broadcast to find the DHCP server or servers in the network, the broadcast IP address and broadcast MAC address are used.
2. DHCP Offer Message: The server will respond to the host in this
message specifying the unleased IP address and other TCP configuration
information. This message is broadcasted by the server. The size of the
message is 342 bytes. If there is more than one DHCP server present in the
network then the client host will accept the first DHCP OFFER message it
receives. Also, a server ID is specified in the packet in order to identify the
server.
DHCP Offer Message

Now, for the offer message, the source IP address is 172.16.32.12 (the server's IP address in the example), the destination IP address is 255.255.255.255 (the broadcast IP address), the source MAC address is 00AA00123456 (the server's MAC address), and the destination MAC address is 00:11:22:33:44:55 (the client's MAC address). The offer message is broadcast by the DHCP server, so the destination IP address is the broadcast IP address, while the source IP address and source MAC address are those of the server.
Also, the server has provided the offered IP address 192.16.32.51 and a lease time of 72 hours (after this time the entry for the host is erased from the server automatically). The client identifier is the PC MAC address (08002B2EAF2A) for all the messages.
3. DHCP Request Message: When a client receives an offer message, it
responds by broadcasting a DHCP request message. The client will produce a
gratuitous ARP in order to find if there is any other host present in the network
with the same IP address. If there is no reply from another host, then there is
no host with the same TCP configuration in the network and the message is
broadcasted to the server showing the acceptance of the IP address. A Client
ID is also added to this message.

DHCP Request Message

Now, the request message is broadcast by the client PC, so the source IP address is 0.0.0.0 (as the client has no IP address yet), the destination IP address is 255.255.255.255 (the broadcast IP address), the source MAC address is 08002B2EAF2A (the PC's MAC address), and the destination MAC address is FFFFFFFFFFFF.
Note – This message is broadcast after the ARP request broadcast by the PC to find out whether any other host is using the offered IP address. If there is no reply, the client host broadcasts the DHCP request message to the server, showing acceptance of the IP address and the other TCP/IP configuration.
4. DHCP Acknowledgment Message: In response to the request message
received, the server will make an entry with a specified client ID and bind the
IP address offered with lease time. Now, the client will have the IP address
provided by the server.

Now the server will make an entry of the client host with the offered IP
address and lease time. This IP address will not be provided by the server to
any other host. The destination MAC address is 00:11:22:33:44:55 (client’s
MAC address) and the destination IP address is 255.255.255.255 and the
source IP address is 172.16.32.12 and the source MAC address is
00AA00123456 (server MAC address).
5. DHCP Negative Acknowledgment Message: Whenever a DHCP server
receives a request for an IP address that is invalid according to the scopes
that are configured, it sends a DHCP Nak message to the client, e.g., when the server has no unused IP address or the address pool is empty.
6. DHCP Decline: If the DHCP client determines the offered configuration
parameters are different or invalid, it sends a DHCP decline message to the
server. When there is a reply to the gratuitous ARP by any host to the client,
the client sends a DHCP decline message to the server showing the offered
IP address is already in use.
7. DHCP Release: A DHCP client sends a DHCP release packet to the server
to release the IP address and cancel any remaining lease time.
8. DHCP Inform: If a client has obtained an IP address manually, then the client uses a DHCP Inform message to obtain other local configuration parameters, such as the domain name. In reply to the DHCP Inform message, the
DHCP server generates a DHCP ack message with a local configuration
suitable for the client without allocating a new IP address. This DHCP ack
message is unicast to the client.
Note – All the messages can be unicast also by the DHCP relay agent if the
server is present in a different network.
Security Considerations for Using DHCP
To make sure your DHCP servers are safe, consider these DHCP security
issues:
● Limited IP Addresses : A DHCP server can only offer a set number of IP
addresses. This means attackers could flood the server with requests,
causing essential devices to lose their connection.
● Fake DHCP Servers : Attackers might set up fake DHCP servers to give
out fake IP addresses to devices on your network.
● DNS Access : When users get an IP address from DHCP, they also get
DNS server details. This could potentially allow them to access more data
than they should. It’s important to restrict network access, use firewalls,
and secure connections with VPNs to protect against this.
Protection Against DHCP Starvation Attack
A DHCP starvation attack happens when a hacker floods a DHCP server with
requests for IP addresses. This overwhelms the server, making it unable to
assign addresses to legitimate users. The hacker can then block access for
authorized users and potentially set up a fake DHCP server to intercept and
manipulate network traffic, which could lead to a man-in-the-middle attack.
Reasons Why Enterprises Must Automate DHCP?
Automating your DHCP system is crucial for businesses because it reduces
the time and effort your IT team spends on manual tasks. For instance,
DHCP-related issues like printers not connecting or subnets not working with
the main network can be avoided automatically.
Automated DHCP also allows your operations to grow smoothly. Instead of
hiring more staff to handle tasks that automation can manage, your team can
focus on other important areas of business growth.
Advantages
● Centralized management of IP addresses.
● Centralized and automated TCP/IP configuration .
● Ease of adding new clients to a network.
● Reuse of IP addresses reduces the total number of IP addresses that are
required.
● The efficient handling of IP address changes for clients that must be
updated frequently, such as those for portable devices that move to
different locations on a wireless network.
● Simple reconfiguration of the IP address space on the DHCP server
without needing to reconfigure each client.
● The DHCP protocol gives the network administrator a method to configure
the network from a centralized area.
● With the help of DHCP, easy handling of new users and the reuse of IP
addresses can be achieved.
Disadvantages
● IP conflict can occur.
● The problem with DHCP is that clients accept any server. Accordingly,
when another server is in the vicinity, the client may connect with this
server, and this server may possibly send invalid data to the client.
● The client is not able to access the network in absence of a DHCP Server.
● The name of the machine will not be changed in a case when a new IP
Address is assigned.
Conclusion
In conclusion, DHCP is a technology that simplifies network setup by
automatically assigning IP addresses and network configurations to devices.
While DHCP offers convenience, it’s important to manage its security
carefully. Issues such as IP address exhaustion, and potential data access
through DNS settings highlight the need for robust security measures like
firewalls and VPNs to protect networks from unauthorized access and
disruptions. DHCP remains essential for efficiently managing network
connections while ensuring security against potential risks.

Delivery, Forwarding and Unicast Routing protocols

Delivery Techniques and routing


DELIVERY
The term delivery refers to the way in which a packet is handled by the underlying networks under the control of the network layer. The handling of the packet by the network layer in the physical network is termed delivery of the packet.
DELIVERY TECHNIQUES
The delivery to the final destination takes place in two ways.
1) Direct Delivery
In this method the delivery takes place when the source and the destination are located on the same physical network. The sender can find out whether delivery is direct by extracting the network address of the destination and comparing it with the addresses of the networks to which it is connected. If a match is found, the delivery is direct.
2) Indirect Delivery
In this method the destination is not on the same network as the point from which the packet is to be delivered, so the packet must pass through one or more routers. Such delivery is termed indirect delivery.
Note: A delivery always involves exactly one direct delivery but zero or more indirect deliveries.
Forwarding Techniques
Forwarding means the way in which the packet is delivered to the next station. The various forwarding techniques are:
1) Next-Hop Method
In this method the routing table holds only the address of the next hop instead of information about the complete route, which reduces the contents of the routing table.
2) Route Method
In this method the routing table holds information about the complete route the packet should take; it is rarely used because it makes the table large.
3) Network-Specific Method
In this method the routing table has only one entry per destination network instead of one entry per attached host, which reduces the routing table and simplifies the searching process.
4) Host-Specific Method
In this method the routing table has an entry for every destination host connected to the same physical network; it gives the administrator more control over routing at the cost of a larger table.
Forwarding Process
Here we assume that the routers and hosts use classless addressing, since classful addressing can be treated as a special case of classless addressing. In classless addressing the routing table needs one row of information for each block. The table is searched based on the network address, but the destination address by itself gives no idea of the network address. To solve this problem the mask is included in the table, i.e., an extra column holds the mask for the corresponding block; the mask is applied to the destination address and the result is compared with the network address in that row.
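As an illustration of this mask-based search, the sketch below uses Python's ipaddress module, which applies the mask and compares the result with the network address for us; the table entries, next hops, and interface names are made-up examples, not part of the original text.

```python
import ipaddress

# Each row holds the destination network (network address + mask), next hop and interface.
# The entries below are illustrative only; rows are kept longest-prefix first.
routing_table = [
    (ipaddress.ip_network("180.70.65.192/26"), "-",             "m2"),
    (ipaddress.ip_network("180.70.65.128/25"), "-",             "m0"),
    (ipaddress.ip_network("201.4.22.0/24"),    "-",             "m3"),
    (ipaddress.ip_network("0.0.0.0/0"),        "180.70.65.200", "m2"),  # default route
]

def forward(destination: str):
    dest = ipaddress.ip_address(destination)
    for network, next_hop, interface in routing_table:
        if dest in network:          # apply the mask and compare with the network address
            return next_hop, interface
    raise ValueError("no route to host")

print(forward("180.70.65.140"))   # matches 180.70.65.128/25 -> ('-', 'm0')
print(forward("18.24.32.78"))     # no specific match, falls through to the default route
```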
ROUTING TABLE
The term routing signifies the way routing tables are created to help in the forwarding process. Routing protocols are used to continuously update the routing tables. A router has a routing table with an entry for each destination in order to route IP packets. The routing table can be of two types.
1) Static Routing Table
It contains information entered manually: the route for each destination is entered into the table by the administrator. It can be used in a small internet that does not change very often. Using a static routing table in a big internet is a poor strategy.
2) Dynamic Routing Table
This table is updated periodically using dynamic routing protocols such as RIP, OSPF, or BGP. Whenever a change occurs in the internet, the dynamic routing protocols update all the tables in the routers automatically for efficient delivery of IP packets.
Unicast Routing Protocols
As we know, dynamic routing tables are a demand of today's internet: the tables need to be updated whenever there are changes in the internet. Routing protocols have been created to maintain dynamic routing tables. A routing protocol is a combination of rules and procedures that lets routers learn about changes in the internet and share information about the internet and their neighbourhood.
Optimization
The function of a router is to receive a packet from one network and pass it on to another. A router is connected to several networks, so when it receives a packet, the optimization question is: to which network should the packet be passed, and which of the available pathways is the optimum path? One approach to defining "optimum" is to assign a cost for passing through a network; this assigned cost is called a metric. The metric is assigned to networks depending on the type of protocol:
1) Routing Information Protocol (RIP) – The cost of passing through every network is the same; all networks are treated equally.
2) Open Shortest Path First (OSPF) – The administrator can assign the cost of passing through a network based on the type of service required, so a route through the network can have different costs.
3) Border Gateway Protocol (BGP) – The criterion is the policy, and the policy decides which path the administrator chooses.
Intra-Domain Routing and Inter-Domain Routing
In today's era the internet has become such a vast network that it is impossible for one routing protocol to update the routing tables of all routers. For this reason the internet is divided into autonomous systems. An autonomous system (AS) is a group of networks and routers under the authority of a single administration.
1) When routing is done inside an autonomous system, it is referred to as intra-domain routing. Each autonomous system can choose one or more intra-domain routing protocols to handle routing inside the autonomous system.
2) When routing is done between autonomous systems, it is known as inter-domain routing. Each autonomous system can choose only one inter-domain routing protocol to handle routing between autonomous systems.
Distance Vector Routing
In distance vector routing, the least-cost route between any two nodes is the route with the minimum distance. Each node maintains a vector of minimum distances to every other node. The table at each node also directs packets toward their destination by recording the next stop (hop) on the route. It is an intra-domain routing protocol.
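The table update at the heart of distance vector routing can be sketched as a Bellman-Ford style relaxation, as below; the node names, link costs, and the (cost, next hop) table layout are illustrative assumptions only.

```python
import math

def dv_update(own_table, neighbour, neighbour_table, link_cost):
    """Merge a neighbour's distance vector into our own (Bellman-Ford relaxation).

    own_table / neighbour_table map destination -> (cost, next_hop).
    Returns True if anything changed, so the caller knows when to re-advertise.
    """
    changed = False
    for dest, (cost, _) in neighbour_table.items():
        new_cost = link_cost + cost
        if new_cost < own_table.get(dest, (math.inf, None))[0]:
            own_table[dest] = (new_cost, neighbour)   # cheaper path found via this neighbour
            changed = True
    return changed

# Node A's current vector, and a vector advertised by neighbour B over a link of cost 2.
table_a = {"A": (0, "-"), "B": (2, "B"), "C": (7, "C")}
table_b = {"A": (2, "A"), "B": (0, "-"), "C": (3, "C"), "D": (4, "D")}
dv_update(table_a, "B", table_b, link_cost=2)
print(table_a)   # C is now reached through B at cost 5, and D is learned at cost 6 via B
```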
Link State Routing
Link state routing is quite different in concept from distance vector routing. In link state routing each node in the domain has the entire topology of the domain: the complete list of nodes and links, how they are connected, and the cost of each link, including its condition. With this complete picture, each node can use Dijkstra's algorithm to build its routing table. It is an intra-domain routing protocol.
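A minimal sketch of how a node holding the full topology could build its routing table with Dijkstra's algorithm is shown below; the graph, link costs, and the (cost, next hop) table format are illustrative assumptions, not taken from the text.

```python
import heapq

def dijkstra(topology, source):
    """Compute least-cost paths from `source` over a full link-state map.

    topology maps node -> {neighbour: link cost}; returns node -> (cost, next hop).
    """
    table = {}
    heap = [(0, source, None)]            # (cost so far, node, first hop from the source)
    visited = set()
    while heap:
        cost, node, first_hop = heapq.heappop(heap)
        if node in visited:
            continue                       # an older, cheaper entry already settled this node
        visited.add(node)
        table[node] = (cost, first_hop)
        for neighbour, link_cost in topology.get(node, {}).items():
            if neighbour not in visited:
                hop = neighbour if node == source else first_hop
                heapq.heappush(heap, (cost + link_cost, neighbour, hop))
    return table

net = {
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 1},
    "D": {"B": 4, "C": 1},
}
print(dijkstra(net, "A"))   # e.g. D is reached through next hop B at a total cost of 4
```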
Path Vector Routing
Path vector routing is used in inter-domain routing. Its principle is similar to that of distance vector routing. Assume that one node in each autonomous system acts on behalf of the entire autonomous system; this node is called the speaker node. The speaker node in an autonomous system creates a routing table and advertises it to the speaker nodes of the neighbouring autonomous systems. The speaker nodes of the autonomous systems can communicate with each other freely. Note that a speaker node advertises the path, not the metric of the nodes.
MULTICAST ROUTING PROTOCOLS
Unicast, Multicast, and Broadcast
1) Unicasting

unicasting
In this communication there is one source and one destination, so there is a one-to-one relationship between the source and the destination. The source and destination addresses in the IP datagram are unicast addresses assigned to hosts. In unicasting, when a router receives a packet it forwards the packet through only one of its interfaces, as specified in the routing table. The router may discard the packet if it cannot find the destination address in its routing table.
2) Multicasting

multicasting
In this type of communication there is one source and a group of destinations; the relationship is one-to-many. The source address is a unicast address, but the destination address is a group address, i.e., it defines one or more destinations. The group address identifies the members of the group. When a router receives a multicast packet, it may forward it through several of its interfaces.
3) Broadcasting
In this communication the relationship between the source and the destination is one-to-all: one host is the source and all other hosts are destinations. The internet does not support this kind of broadcasting because of the huge amount of traffic and the bandwidth it would need.
APPLICATIONS
Multicasting has many day-to-day applications.
1) Access to distributed databases
Information is often stored in more than one location. The user who needs the data does not know where it is located, so a request is multicast to all the database locations, and the location that has the information responds.
2) Information dissemination
Businesses often need to send the same information to all of their customers; multicasting delivers one copy to the whole group.
3) Teleconferencing
Teleconferencing involves multicasting: the individuals attending a teleconference form a group, and temporary or permanent groups can be created for this purpose.
4) Distance learning
One growing use of multicasting is distance learning, which is convenient for students who find it difficult to attend classes on campus.
Multicast Routing
In multicasting the concept of optimal routing has to be considered.
Optimal Routing: Shortest Path Trees
The root of the tree is the source and the leaves are the potential destinations; the path from the root to each destination is the shortest path. The formation of the trees and the number of trees differ between multicasting and unicasting. In multicast routing, when a router receives a multicast packet it might have destinations in more than one network, so forwarding the packet requires a shortest path tree. If there are n groups, n shortest path trees are needed.
Routing Protocols
1) Distance Vector Multicast Routing Protocol (DVMRP) is an implementation of multicast distance vector routing. It is a source-based routing protocol, based on RIP.
2) Core-Based Tree (CBT) protocol is a group-shared protocol that uses a core as the root of the tree. The autonomous system is divided into regions, and a core is chosen for each region.
3) Protocol Independent Multicast (PIM) is the name given to two independent multicast routing protocols:
a) Protocol Independent Multicast, Dense Mode (PIM-DM)
b) Protocol Independent Multicast, Sparse Mode (PIM-SM)
PIM-DM is used when there is a high probability that each router is involved in multicasting; all the routers are involved in the process of broadcasting the packets.
PIM-SM is used when there is only a small probability that each router is involved in multicasting, so protocols that broadcast the packet are not justified. It is basically used in a sparse multicast environment such as a WAN.
Related Questions and Answers
Q. What are the delivery techniques?
There are two types of delivery techniques: 1) direct delivery and 2) indirect delivery.
Q. What is the concept of forwarding?
Forwarding is the process in which a packet is passed on towards its destination (to the next station).
Q. What is multicasting?
In multicasting a data packet is delivered from a single source to multiple destinations.
Q. What do you mean by the term routing?
Routing means the way routing tables are created to help in the forwarding process.

After Completion of Unit II and III you should be able to answer the following
Questions

1. Explain Design issues of the data link layer and services to the network layer with the
help of diagram.

2. Explain 3 types of errors with diagram.

3. What are 4 main error correction codes? Explain any one in detail with example.

4. Explain framing, Error control and flow control with diagram.

5. Explain error detection techniques in detail with diagram.

6. Explain all types of Data Link Protocols.

7. Explain the working principle of Sliding Window Protocols with diagram.

8. Data word to be sent – 100100 and key – 1101 [or generator polynomial x^3 + x^2 + 1]. Calculate the encoded data to be sent to the receiver using CRC modulo-2 division and also prove that the received data has no error.
9. Differentiate between Flow Control and Error Control in tabular form.

10. Suppose there are four strings 010, 011, 101 and 111. Calculate the minimum
Hamming distance.

11. Differentiate between Go-Back-N and Selective Repeat Protocol in tabular form.

12. Describe the Channel allocations problem of the Medium Access Sublayer.

13. Draw the flow chart of the types of multiple access protocols.

14. What are the main functions of the network layer?

15. What are three main ways for bridging?

16. What are the main differences between Adaptive and Non-Adaptive Routing
Algorithms?

17. How does a network switch works?

18. What is Ethernet?

19. What are Network Layer Design Issues?

20. Draw the classification chart of the types of Routing Algorithms.

21. Differentiate between Routing and Flooding.

22. Differentiate between IPv4 and IPv6 in tabular form.


Unit – IV
The Transport Layer
Transport Service

The Transport Layer in the OSI (Open Systems Interconnection) model is responsible for
providing reliable and efficient data transfer between computers on a network. It serves as a
bridge between the application layer (layer 7) and the network layer (layer 3). It ensures that data
is transmitted accurately and efficiently from the source to the destination by handling various
functions like error detection, flow control, and connection management.

Key Transport Services

The Transport Layer (Layer 4) provides several essential services, including:

1. Connection-Oriented Service (TCP)


o Transmission Control Protocol (TCP) provides a connection-oriented service,
which means it establishes a reliable connection between a sender and a receiver
before transmitting data.
o It guarantees that data packets are delivered in order, without loss, duplication, or
corruption.
o Features include:
▪ Three-way handshake to establish a connection.
▪ Error detection and correction using checksums.
▪ Flow control using a sliding window protocol.
▪ Congestion control to avoid network overload.
o Example: Web browsing (HTTP), file transfer (FTP), and email (SMTP) use TCP
for reliable communication.
2. Connectionless Service (UDP)
o User Datagram Protocol (UDP) provides a connectionless service, meaning data
packets (datagrams) are sent without establishing a dedicated connection.
o It is faster and uses fewer resources than TCP but does not guarantee reliability,
order, or error correction.
o Features include:
▪ No handshake or session establishment.
▪ No flow control or congestion management.
▪ Simple checksum for basic error detection.
o Example: Real-time applications like video streaming, VoIP, online gaming, and
DNS queries use UDP where speed is critical, and some data loss is acceptable.

Differences Between TCP and UDP Services

● Connection: TCP requires a handshake to establish a connection; UDP requires no connection.
● Reliability: TCP guarantees delivery; UDP gives no guarantee.
● Order of delivery: TCP maintains ordering; UDP gives no ordering guarantee.
● Error correction: TCP corrects errors using acknowledgments and retransmission; UDP performs basic error detection only.
● Speed: TCP is slower with more overhead; UDP is faster with minimal overhead.
● Use cases: TCP – web, email, file transfer; UDP – streaming, gaming, DNS.
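The difference between the two services is visible directly in the standard socket API. The sketch below, in Python, contrasts a connection-oriented TCP exchange with a connectionless UDP send; the server address and port are placeholders, not values from the text.

```python
import socket

SERVER = ("192.0.2.10", 9000)   # placeholder address and port

# Connection-oriented (TCP): a connection is set up first, then bytes flow as a stream.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp:
    tcp.connect(SERVER)                 # the three-way handshake happens here
    tcp.sendall(b"hello over TCP")      # delivery and ordering are guaranteed
    reply = tcp.recv(1024)              # read whatever the server sends back

# Connectionless (UDP): each datagram is sent on its own, with no handshake or guarantees.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp:
    udp.sendto(b"hello over UDP", SERVER)   # may be lost, duplicated or reordered
```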

Transport Layer Protocols

1. TCP (Transmission Control Protocol)


o Ensures reliable, ordered delivery of data.
o Performs error checking, flow control, and congestion control.
2. UDP (User Datagram Protocol)
o Provides fast, connectionless data transmission.
o Suitable for real-time applications where speed is more important than reliability.
3. SCTP (Stream Control Transmission Protocol)
o Combines features of both TCP and UDP.
o Supports multiple streams within a single connection and provides improved
reliability for certain applications like telephony.
Functions of the Transport Layer

1. Segmentation and Reassembly: Divides large messages into smaller segments and
reassembles them at the destination.
2. Flow Control: Manages the rate of data transmission to prevent overwhelming the
receiver.
3. Error Detection and Correction: Identifies errors during data transfer and attempts to
correct them.
4. Multiplexing and Demultiplexing: Allows multiple applications to share a single
network connection by using port numbers.
5. Connection Establishment and Termination: Manages the setup and closing of
connections (TCP only).

Conclusion

The Transport Layer is crucial for enabling seamless communication between networked
devices. By providing services like reliable delivery, error correction, and flow control, it ensures
that data is transferred accurately and efficiently.

Elements of Transport Protocol

A transport protocol defines the rules and methods that the transport layer uses to deliver data
reliably and efficiently between hosts over a network. Examples of widely used transport
protocols include TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).

The elements of a transport protocol include several mechanisms and techniques to ensure
reliable, ordered, and error-checked communication. These elements are crucial for providing
transport services to the upper layers of the OSI model.

Key Elements of Transport Protocols

Here are the core elements involved in transport protocols:

1. Addressing (Port Numbers)


o Transport protocols use port numbers to identify specific processes or services
on a device.
o Source port and destination port numbers are included in the protocol headers
to route data correctly.
o Example:
▪ HTTP uses port 80.
▪ HTTPS uses port 443.
▪ DNS uses port 53.
2. Connection Establishment and Termination
o In connection-oriented protocols like TCP, a connection is established between
sender and receiver before data transmission.
o The connection is established using a three-way handshake:

1. SYN (synchronize)
2. SYN-ACK (synchronize acknowledgment)
3. ACK (acknowledgment)
o After data transfer, the connection is closed gracefully using a termination
sequence.
3. Data Segmentation and Reassembly
o Large messages are divided into smaller segments to fit within the maximum
transmission unit (MTU) of the network.
o Each segment is assigned a sequence number for correct reassembly at the
receiver's side.
o This process ensures data integrity and proper ordering.
4. Multiplexing and Demultiplexing
o Multiplexing allows multiple applications to use the same network connection by
assigning unique port numbers.
o Demultiplexing at the receiver’s end ensures that incoming data is delivered to
the correct application.
5. Error Detection and Correction
o Error detection is performed using checksums in TCP and UDP headers to
detect any corruption in transmitted segments.
o TCP supports error correction through automatic retransmission of lost or
corrupted segments (using acknowledgments and timeouts).
o UDP only detects errors but does not correct them.
6. Flow Control
o Flow control mechanisms, like TCP's sliding window protocol, prevent a fast
sender from overwhelming a slower receiver.
o The window size adjusts dynamically based on the network conditions to control
the flow of data.
7. Congestion Control
o Prevents network congestion by adjusting the rate of data transmission based on
the network’s capacity.
o TCP uses algorithms like Slow Start, Congestion Avoidance, and Fast Recovery
to handle congestion.
o UDP does not have built-in congestion control, which makes it suitable for
applications that prioritize speed over reliability.
8. Sequence Numbers and Acknowledgments
o TCP assigns a sequence number to each segment to ensure that they are received
in the correct order.
o The receiver sends back an acknowledgment number indicating the next
expected byte.
o This mechanism helps detect lost or duplicated segments.
9. Timers and Retransmission
o Transport protocols use various timers to manage retransmission, connection
timeouts, and connection termination.
o If an acknowledgment is not received within a certain timeframe, TCP
retransmits the segment, assuming it was lost.
10. Checksum
o A checksum is used to detect errors in the header and data portions of a segment.
o If the checksum computed by the receiver does not match the value in the header,
the segment is considered corrupted and may be discarded.
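As a rough illustration of the checksum element, the sketch below computes the 16-bit one's complement checksum used by TCP and UDP; note that the real protocols compute it over a pseudo-header plus the segment, which is omitted here, and the sample data is arbitrary.

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's complement of the one's complement sum over 16-bit words."""
    if len(data) % 2:                      # pad an odd-length segment with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:                     # fold any carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

segment = b"an example segment"
check = internet_checksum(segment)
# The receiver runs the same computation over data plus checksum and expects 0.
assert internet_checksum(segment + check.to_bytes(2, "big")) == 0
```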

Summary of Transport Protocol Elements

● Addressing: Identifies source and destination applications using port numbers.
● Connection Establishment: Sets up and tears down connections (TCP).
● Data Segmentation: Splits data into smaller segments for transmission.
● Multiplexing/Demultiplexing: Manages multiple connections on the same network interface.
● Error Detection/Correction: Ensures data integrity with checksums and retransmissions.
● Flow Control: Controls data flow between sender and receiver to avoid overload.
● Congestion Control: Prevents network congestion with dynamic rate adjustments (TCP).
● Sequence Numbers: Ensures segments are delivered in order and without duplication.
● Timers/Retransmission: Manages timeouts and retransmission of lost segments.
● Checksum: Validates segment integrity.

These elements work together to ensure that data is delivered reliably, accurately, and
efficiently between communicating hosts. The choice of transport protocol (TCP vs. UDP)
depends on the application's requirements, such as reliability vs. speed.

Simple Transport Protocol

A simple transport protocol is a protocol that ensures reliable and ordered


communication between a sender and a receiver. Transport protocols hide
problems like delay, corruption, disorder, and losses. They usually retransmit
packets that are lost or corrupted.
Here are some examples of transport protocols:
● User Datagram Protocol (UDP)
A simple and fast protocol for connectionless transmissions. It's best for real-time data
where speed is more important than reliability, like video conferencing. UDP is
unreliable because it doesn't use retransmissions or acknowledgements, so packets
may be lost.
● Transmission Control Protocol (TCP)
A more feature-rich protocol that's connection-oriented. It uses acknowledgment and
synchronization messages to ensure delivery. TCP reorders and retransmits packets if
needed. It's slower than UDP but is the most common protocol on the internet.
● QUIC
A new protocol that combines the best reliability features of TCP with the speed of
UDP. It's optimized for use over the internet and for Hypertext Transfer Protocol 3.
Other transport protocols include: Stream Control Transmission Protocol
(SCTP), Real Time Transport Protocol (RTP), Fibre Channel Protocol, and
Reliable Data Protocol.
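To make the retransmission idea concrete, here is a minimal stop-and-wait sender sketched on top of a UDP socket in Python; the server address, timeout, retry count, and one-byte alternating sequence number are illustrative choices, not part of any of the protocols named above.

```python
import socket

def stop_and_wait_send(data_chunks, server=("192.0.2.10", 5000), timeout=1.0, retries=5):
    """Send chunks one at a time, resending each until a matching ACK arrives."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    seq = 0
    for chunk in data_chunks:
        packet = bytes([seq]) + chunk            # 1-byte sequence number + payload
        for _ in range(retries):
            sock.sendto(packet, server)
            try:
                ack, _ = sock.recvfrom(16)
                if ack and ack[0] == seq:        # correct ACK: move on to the next chunk
                    break
            except socket.timeout:
                continue                         # lost or delayed: retransmit
        else:
            raise TimeoutError("receiver did not acknowledge the segment")
        seq = 1 - seq                            # alternating-bit sequence number
    sock.close()
```

A receiver would mirror this logic: deliver a chunk only if its sequence number is the one expected, and always echo back an ACK carrying the sequence number it last accepted.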

Internet Transport Layer Protocols: UDP and TCP, SCTP

The transport layer is the fourth layer in the OSI model and the
second layer in the TCP/IP model. The transport layer provides an end-to-end connection between the source and the destination and reliable delivery of the services; therefore the transport layer is known as the end-to-end layer.
its upward layer which is the application layer and provides it to the
network layer. Segment is the unit of data encapsulation at the
transport layer.
In this section we discuss the important aspects of transport layer protocols, including: the functions of the transport layer, the characteristics of transport layer protocols, UDP and the UDP segment with their advantages and disadvantages, TCP and the TCP segment with their advantages and disadvantages, and SCTP with its advantages and disadvantages.
Functions of Transport Layer
● The process to process delivery
● End-to-end connection between devices
● Multiplexing and Demultiplexing
● Data integrity and error Correction
● Congestion Control
● Flow Control
Characteristics of Transport Layer Protocol
● The two protocols that make up the transport layer are TCP and
UDP.
● A datagram is sent by the IP protocol at the network layer from a
source host to a destination host.
● These days, an operating system can support environments with
multiple users and processes; a programme under execution is
referred to as a process.
● A source process is transmitting a message to a destination process when a host sends a message to another host. Certain connections to certain ports, referred to as protocol ports, are defined by the transport layer protocols.
● A positive integer address, consisting of 16 bits, defines each
port.
Transport Layer Protocols
The transport layer is represented majorly by TCP and UDP
protocols. Today almost all operating systems support
multiprocessing, multi-user environments. These transport layer protocols provide connections to individual ports, known as protocol ports. Transport layer protocols work above the IP protocol and deliver data packets from the IP service to the destination port and from the originating port to the destination IP service. Below are the protocols used at the transport layer.
1. UDP
UDP stands for User Datagram Protocol. User Datagram Protocol
provides a nonsequential transmission of data. It is a connectionless
transport protocol. UDP protocol is used in applications where the
speed and size of data transmitted is considered as more important
than the security and reliability. User Datagram is defined as a
packet produced by User Datagram Protocol. UDP protocol adds
checksum error control, transport level addresses, and information
of length to the data received from the layer above it. Services
provided by User Datagram Protocol(UDP) are connectionless
service, faster delivery of messages, checksum, and process-to-
process communication.
UDP Segment
While the TCP header can range from 20 to 60 bytes, the UDP
header is a fixed, basic 8 bytes. All required header information is
contained in the first 8 bytes, with data making up the remaining
portion. Because UDP port number fields are 16 bits long, the range
of possible port numbers is defined as 0 to 65535, with port 0 being
reserved.

UDP

● Source Port: Source Port is a 2 Byte long field used to identify


the port number of the source.
● Destination Port: This 2-byte element is used to specify the
packet’s destination port.
● Length: The whole length of a UDP packet, including the data
and header. The field has sixteen bits.
● Checksum: The checksum field is two bytes long. The data is
padded with zero octets at the end (if needed) to create a
multiple of two octets. It is the 16-bit one’s complement of the
one’s complement sum of the UDP header, the pseudo-header
containing information from the IP header, and the data.
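Because the UDP header is a fixed 8 bytes, the four fields above can be unpacked with a single struct call, as in the sketch below; the example port numbers and payload are made up.

```python
import struct

def parse_udp_header(segment: bytes):
    """Split a UDP segment into its fixed 8-byte header and its payload."""
    source_port, dest_port, length, checksum = struct.unpack("!HHHH", segment[:8])
    return {
        "source_port": source_port,
        "dest_port": dest_port,
        "length": length,          # header + data, in bytes
        "checksum": checksum,
    }, segment[8:]

# An illustrative segment: source port 53124, destination port 53 (DNS), length 12 bytes.
example = struct.pack("!HHHH", 53124, 53, 12, 0) + b"data"
header, payload = parse_udp_header(example)
print(header["dest_port"], len(payload))   # 53 4
```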
Advantages of UDP
● UDP also provides multicast and broadcast transmission of data.
● UDP protocol is preferred more for small transactions such as DNS
lookup.
● It is a connectionless protocol, therefore there is no compulsion to
have a connection-oriented network.
● UDP provides fast delivery of messages.
Disadvantages of UDP
● In UDP protocol there is no guarantee that the packet is
delivered.
● UDP protocol suffers from worse packet loss.
● UDP protocol has no congestion control mechanism.
● UDP protocol does not provide the sequential transmission of
data.
2. TCP
TCP stands for Transmission Control Protocol. TCP protocol provides
transport layer services to applications. TCP protocol is a
connection-oriented protocol. A secured connection is being
established between the sender and the receiver. For a generation
of a secured connection, a virtual circuit is generated between the
sender and the receiver. The data transmitted by TCP protocol is in
the form of continuous byte streams. A unique sequence number is
assigned to each byte. With the help of this unique number, a
positive acknowledgment is received from receipt. If the
acknowledgment is not received within a specific period the data is
retransmitted to the specified destination.

TCP Segment
A TCP segment's header may be 20–60 bytes long: it consists of 20 bytes by default, and the options can take up to an additional 40 bytes.

● Source Port Address: The port address of the programme


sending the data segment is stored in the 16-bit field known as
the source port address.
● Destination Port Address: The port address of the application
running on the host receiving the data segment is stored in the
destination port address, a 16-bit field.
● Sequence Number: The sequence number, or the byte number
of the first byte sent in that specific segment, is stored in a 32-bit
field. At the receiving end, it is used to put the message back
together once it has been received out of sequence.
● Acknowledgement Number : The acknowledgement number,
or the byte number that the recipient anticipates receiving next,
is stored in a 32-bit field called the acknowledgement number. It
serves as a confirmation that the earlier bytes were successfully
received.
● Header Length (HLEN): This 4-bit field stores the number of 4-
byte words in the TCP header, indicating how long the header is.
For example, if the header is 20 bytes (the minimum length of the
TCP header), this field will store 5 because 5 x 4 = 20, and if the
header is 60 bytes (the maximum length), it will store 15 because
15 x 4 = 60. As a result, this field’s value is always between 5 and
15.
● Control flags: These are six 1-bit control bits that regulate connection establishment, termination, abortion, flow control, and the mode of transfer. They serve the following purposes:
o URG: The urgent pointer field is valid.
o ACK: The acknowledgement number is valid (used in cumulative acknowledgement cases).
o PSH: Push request.
o RST: Reset the connection.
o SYN: Synchronise sequence numbers.
o FIN: Terminate the connection.
● Window size: This field provides the window size of the sending TCP in bytes.
● Checksum: The checksum for error control is stored in this field.
Unlike UDP, it is required for TCP.
● Urgent pointer: This field is used to point to data that must
urgently reach the receiving process as soon as possible. It is only
valid if the URG control flag is set. To obtain the byte number of
the final urgent byte, the value of this field is appended to the
sequence number.
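The fixed 20-byte portion of the TCP header described above can be decoded in the same way; the sketch below extracts the header length and the six classic control flags (newer flag bits are ignored), and the sample segment values are made up.

```python
import struct

FLAG_NAMES = ["FIN", "SYN", "RST", "PSH", "ACK", "URG"]   # bit 0 .. bit 5 of the flags field

def parse_tcp_header(segment: bytes):
    (src, dst, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    header_len = (offset_flags >> 12) * 4          # HLEN is counted in 4-byte words
    flags = [name for bit, name in enumerate(FLAG_NAMES) if offset_flags & (1 << bit)]
    return {"src": src, "dst": dst, "seq": seq, "ack": ack,
            "header_len": header_len, "flags": flags, "window": window}

# A made-up SYN segment: HLEN = 5 words (20 bytes), SYN flag set, window 65535.
raw = struct.pack("!HHIIHHHH", 49152, 80, 1000, 0, (5 << 12) | 0x02, 65535, 0, 0)
print(parse_tcp_header(raw)["flags"])   # ['SYN']
```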
Advantages of TCP
● TCP supports multiple routing protocols.
● TCP protocol operates independently of that of the operating
system.
● TCP protocol provides the features of error control and flow
control.
● TCP provides a connection-oriented protocol and provides the
delivery of data.
Disadvantages of TCP
● TCP protocol cannot be used for broadcast or multicast
transmission.
● TCP protocol has no block boundaries.
● No clear separation is being offered by TCP protocol between its
interface, services, and protocols.
● In TCP/IP replacement of protocol is difficult.
3. SCTP
SCTP stands for Stream Control Transmission Protocol. SCTP is a
connection-oriented protocol. Stream Control Transmission Protocol
transmits the data from sender to receiver in full duplex mode. SCTP is a unicast protocol that provides a point-to-point connection and can use different network paths to reach the destination. The SCTP protocol provides a simpler way to build a connection over a wireless network and provides reliable transmission of data, making reliable and easier telephone conversations over the internet possible. SCTP supports the feature of multihoming, i.e., it can establish more than one connection path between the two points of communication, so it does not depend on a single path at the IP layer. The SCTP protocol also improves security by not allowing half-open connections.

Advantages of SCTP
● SCTP provides a full duplex connection. It can send and receive
the data simultaneously.
● SCTP protocol possesses the properties of both TCP and UDP
protocol.
● SCTP protocol does not depend on the IP layer.
● SCTP is a secure protocol.
Disadvantages of SCTP
● To handle multiple streams simultaneously the applications need
to be modified accordingly.
● The transport stack on the node needs to be changed for the
SCTP protocol.
● Modification is required in applications if SCTP is used instead of
TCP or UDP protocol.

Congestion Control

Congestion control is a crucial concept in computer networks. It


refers to the methods used to prevent network overload and ensure
smooth data flow. When too much data is sent through the network
at once, it can cause delays and data loss. Congestion control
techniques help manage the traffic, so all users can enjoy a stable
and efficient network connection. These techniques are essential for
maintaining the performance and reliability of modern networks.
What is Congestion?
Congestion in a computer network happens when there is too much
data being sent at the same time, causing the network to slow
down. Just like traffic congestion on a busy road, network congestion
leads to delays and sometimes data loss. When the network can’t
handle all the incoming data, it gets “clogged,” making it difficult for
information to travel smoothly from one place to another.
Effects of Congestion Control in Computer Networks
● Improved Network Stability: Congestion control helps keep
the network stable by preventing it from getting overloaded. It
manages the flow of data so the network doesn’t crash or fail due
to too much traffic.
● Reduced Latency and Packet Loss: Without congestion
control, data transmission can slow down, causing delays and
data loss. Congestion control helps manage traffic better,
reducing these delays and ensuring fewer data packets are lost,
making data transfer faster and the network more responsive.
● Enhanced Throughput: By avoiding congestion, the network
can use its resources more effectively. This means more data can
be sent in a shorter time, which is important for handling large
amounts of data and supporting high-speed applications.
● Fairness in Resource Allocation: Congestion control ensures
that network resources are shared fairly among users. No single
user or application can take up all the bandwidth, allowing
everyone to have a fair share.
● Better User Experience: When data flows smoothly and
quickly, users have a better experience. Websites, online
services, and applications work more reliably and without
annoying delays.
● Mitigation of Network Congestion Collapse: Without
congestion control, a sudden spike in data traffic can overwhelm
the network, causing severe congestion and making it almost
unusable. Congestion control helps prevent this by managing
traffic efficiently and avoiding such critical breakdowns.
Congestion Control Algorithm
● Congestion Control is a mechanism that controls the entry of data
packets into the network, enabling a better use of a shared
network infrastructure and avoiding congestive collapse.
● Congestive-avoidance algorithms (CAA) are implemented at
the TCP layer as the mechanism to avoid congestive collapse in a
network.
● There are two congestion control algorithms which are as
follows:
Leaky Bucket Algorithm
● The leaky bucket algorithm discovers its use in the context of
network traffic shaping or rate-limiting.
● A leaky bucket execution and a token bucket execution are
predominantly used for traffic shaping algorithms.
● This algorithm is used to control the rate at which traffic is sent to
the network and shape the burst traffic to a steady traffic stream.
● A disadvantage of the leaky bucket algorithm is that it can use the available network resources inefficiently.
● Large amounts of network resources, such as bandwidth, are not used effectively, because the output rate stays constant even when the network is idle.
Let us consider an example to understand this. Imagine a bucket with a small hole in the bottom: no matter at what rate water enters the bucket, the outflow is at a constant rate, and when the bucket is full, any additional water entering spills over the sides and is lost.

Similarly, each network interface contains a leaky bucket and the


following steps are involved in leaky bucket algorithm:
● When host wants to send packet, packet is thrown into the
bucket.
● The bucket leaks at a constant rate, meaning the network
interface transmits packets at a constant rate.
● Bursty traffic is converted to a uniform traffic by the leaky bucket.
● In practice the bucket is a finite queue that outputs at a finite
rate.
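A minimal simulation of these steps, with the bucket modelled as a finite queue drained at a constant rate, might look like the sketch below; the bucket size, leak rate, and arrival pattern are illustrative values, not taken from the text.

```python
from collections import deque

def leaky_bucket(arrivals, bucket_size=4, leak_rate=2):
    """Simulate a leaky bucket: `arrivals[t]` packets arrive at tick t,
    at most `leak_rate` packets leave per tick, and excess packets are dropped."""
    queue, sent, dropped = deque(), [], 0
    for t, burst in enumerate(arrivals):
        for _ in range(burst):
            if len(queue) < bucket_size:
                queue.append(t)          # packet enters the bucket
            else:
                dropped += 1             # bucket full: packet is lost
        out = [queue.popleft() for _ in range(min(leak_rate, len(queue)))]
        sent.append(len(out))            # constant-rate output regardless of bursts
    return sent, dropped

# A burst of 6 packets at tick 0, then nothing: output is smoothed to 2 per tick.
print(leaky_bucket([6, 0, 0, 0]))        # ([2, 2, 0, 0], 2)
```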
Token Bucket Algorithm
● The leaky bucket algorithm has a rigid output design at an
average rate independent of the bursty traffic.
● In some applications, when large bursts arrive, the output is
allowed to speed up. This calls for a more flexible algorithm,
preferably one that never loses information. Therefore, a token
bucket algorithm finds its uses in network traffic shaping or rate-
limiting.
● It is a control algorithm that indicates when traffic should be sent, based on the presence of tokens in the bucket.
● The bucket contains tokens. Each token corresponds to a packet of a predetermined size, and a token is removed from the bucket for each packet that is sent.
● When tokens are present, a flow is allowed to transmit traffic.
● If there are no tokens, no flow can send its packets. Hence, a flow can transmit bursts of traffic up to its peak rate as long as there are enough tokens in the bucket.
Need of Token Bucket Algorithm
The leaky bucket algorithm enforces output pattern at the average
rate, no matter how bursty the traffic is. So in order to deal with the
bursty traffic we need a flexible algorithm so that the data is not
lost. One such algorithm is token bucket algorithm.
Steps of this algorithm can be described as follows:
● At regular intervals, tokens are thrown into the bucket.
● The bucket has a maximum capacity.
● If there is a ready packet, a token is removed from the bucket,
and the packet is sent.
● If there is no token in the bucket, the packet cannot be sent.
Let’s understand with an example, In figure (A) we see a bucket
holding three tokens, with five packets waiting to be transmitted.
For a packet to be transmitted, it must capture and destroy one
token. In figure (B) We see that three of the five packets have
gotten through, but the other two are stuck waiting for more tokens
to be generated.
Token Bucket vs Leaky Bucket
The leaky bucket algorithm controls the rate at which the packets
are introduced in the network, but it is very conservative in nature.
Some flexibility is introduced in the token bucket algorithm. In
the token bucket algorithm, tokens are generated at each tick (up to
a certain limit). For an incoming packet to be transmitted, it must
capture a token and the transmission takes place at the same rate.
Hence some of the bursty packets are transmitted at the same rate if tokens are available, and thus some amount of flexibility is introduced in the system.
Formula: M * s = C + ρ * s, where s is the burst time, M is the maximum output rate, ρ is the token arrival rate, and C is the capacity of the token bucket in bytes. Solving for the burst time gives s = C / (M − ρ).
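A matching sketch of the token bucket shows how idle ticks accumulate tokens that later allow a burst to be sent faster than the average rate; the token rate, bucket capacity, and arrival pattern below are illustrative values.

```python
def token_bucket(arrivals, rate=1, capacity=3):
    """Simulate a token bucket: `rate` tokens are added per tick (up to `capacity`);
    each packet sent consumes one token, so bursts can go out while tokens last."""
    tokens, waiting, sent = 0, 0, []
    for burst in arrivals:
        tokens = min(capacity, tokens + rate)    # tokens thrown in at regular intervals
        waiting += burst
        out = min(waiting, tokens)               # a packet leaves only if a token is available
        tokens -= out
        waiting -= out
        sent.append(out)
    return sent

# Idle ticks accumulate tokens, so a later burst of 5 packets drains faster than 1 per tick.
print(token_bucket([0, 0, 5, 0, 0]))             # [0, 0, 3, 1, 1]
```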
Advantages
● Stable Network Operation: Congestion control ensures that
networks remain stable and operational by preventing them from
becoming overloaded with too much data traffic.
● Reduced Delays: It minimizes delays in data transmission by
managing traffic flow effectively, ensuring that data packets
reach their destinations promptly.
● Less Data Loss: By regulating the amount of data in the network
at any given time, congestion control reduces the likelihood of
data packets being lost or discarded.
● Optimal Resource Utilization: It helps networks use their
resources efficiently, allowing for better throughput and ensuring
that users can access data and services without interruptions.
● Scalability: Congestion control mechanisms are scalable,
allowing networks to handle increasing volumes of data traffic as
they grow without compromising performance.
● Adaptability: Modern congestion control algorithms can adapt to
changing network conditions, ensuring optimal performance even
in dynamic and unpredictable environments.
Disadvantages
● Complexity: Implementing congestion control algorithms can
add complexity to network management, requiring sophisticated
systems and configurations.
● Overhead: Some congestion control techniques introduce
additional overhead, which can consume network resources and
affect overall performance.
● Algorithm Sensitivity: The effectiveness of congestion control
algorithms can be sensitive to network conditions and
configurations, requiring fine-tuning for optimal performance.
● Resource Allocation Issues: Fairness in resource allocation,
while a benefit, can also pose challenges when trying to prioritize
critical applications over less essential ones.
● Dependency on Network Infrastructure: Congestion control
relies on the underlying network infrastructure and may be less
effective in environments with outdated or unreliable equipment.
Conclusion
Congestion control is essential for keeping computer
networks running smoothly. It helps prevent network overloads by
managing the flow of data, ensuring that information gets where it
needs to go without delays or loss. Effective congestion control
improves network performance and reliability, making sure that
users have a stable and efficient connection. By using these
techniques, networks can handle high traffic volumes and continue
to operate effectively.

Quality of Service (QoS)

Quality of Service (QoS) is an important concept, particularly when


working with multimedia applications. Multimedia applications, such
as video conferencing, streaming services, and VoIP (Voice over IP),
require certain bandwidth, latency, jitter, and packet loss
parameters. QoS methods help ensure that these requirements are
satisfied, allowing for seamless and reliable communication.
What is Quality of Service?
Quality-of-service (QoS) refers to traffic control mechanisms that
seek to differentiate performance based on application or network-
operator requirements, or to provide predictable or guaranteed
performance to applications, sessions, or traffic aggregates. QoS is
fundamentally characterized in terms of packet delay and packet
losses of various kinds.
QoS Specification
● Delay
● Delay Variation(Jitter)
● Throughput
● Error Rate
Types of Quality of Service
● Stateless Solutions – Routers maintain no fine-grained per-flow
state about traffic. This makes them scalable and robust, but the
services they provide are weak: there is no guarantee about the
delay or performance a particular application will experience.
● Stateful Solutions – Routers maintain per-flow state, which is
important for providing strong Quality of Service, i.e. powerful
services such as guaranteed service, high resource utilization, and
protection between flows. The trade-off is that stateful solutions
are much less scalable and robust.
QoS Parameters
● Packet loss: This occurs when network connections get
congested, and routers and switches begin losing packets.
● Jitter: This is the result of network congestion, time drift, and
routing changes. Too much jitter can reduce the quality of voice
and video communication.
● Latency: This is how long it takes a packet to travel from its
source to its destination. The latency should be as near to zero as
possible.
● Bandwidth: This is the maximum amount of data a network
communications link can transmit from one place to another in a
specific amount of time.
● Mean opinion score: This is a metric for rating voice quality
that uses a five-point scale, with five representing the highest
quality.
How does QoS Work?
Quality of Service (QoS) ensures the performance of critical
applications within limited network capacity.
● Packet Marking: QoS marks packets to identify their service
types. For example, it distinguishes between voice, video, and
data traffic.
● Virtual Queues: Routers create separate virtual queues for each
application based on priority. Critical apps get reserved
bandwidth.
● Handling Allocation: QoS assigns the order in which packets
are processed, ensuring appropriate bandwidth for each
application (a small scheduling sketch follows this list).
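To illustrate the per-class virtual queues described above, here is a
minimal, hypothetical C++ sketch of a strict-priority scheduler. The
class names and packets are assumed for the example; real routers
use more elaborate disciplines such as weighted fair queueing.

// Strict-priority scheduling with one virtual queue per traffic class.
// Assumed priority order: VOICE > VIDEO > DATA.
#include <iostream>
#include <queue>
#include <string>
#include <vector>

enum TrafficClass { VOICE = 0, VIDEO = 1, DATA = 2 }; // marking carried by each packet

struct Packet {
    TrafficClass tc;
    std::string payload;
};

int main() {
    std::vector<std::queue<Packet>> queues(3);        // one virtual queue per class

    // Packets arrive already "marked" with their class.
    queues[DATA].push({DATA, "bulk transfer"});
    queues[VOICE].push({VOICE, "voice sample"});
    queues[VIDEO].push({VIDEO, "video frame"});

    // The scheduler always serves the highest-priority non-empty queue first.
    for (int c = VOICE; c <= DATA; ++c) {
        while (!queues[c].empty()) {
            std::cout << "sending class " << c << ": "
                      << queues[c].front().payload << "\n";
            queues[c].pop();
        }
    }
    return 0;
}

Packets marked as voice are always served before video and data,
which is the simplest way to reserve capacity for the most critical
class.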
Benefits of QoS
● Improved Performance for Critical Applications
● Enhanced User Experience
● Efficient Bandwidth Utilization
● Increased Network Reliability
● Compliance with Service Level Agreements (SLAs)
● Reduced Network Costs
● Improved Security
● Better Scalability
Why is QoS Important?
● Video and audio conferencing require a bounded delay and loss
rate.
● Video and audio streaming require a bounded packet loss rate
but may not be as sensitive to delay.
● Time-critical applications (real-time control) in which bounded
delay is considered to be an important factor.
● Valuable applications should provide better services than less
valuable applications.
Implementing QoS
● Planning: The organization should develop an awareness of each
department’s service needs and requirements, select an
appropriate model, and build stakeholder support.
● Design: The organization should then keep track of all key
software and hardware changes and modify the chosen QoS
model to the characteristics of its network infrastructure.
● Testing: The organization should test QoS settings and policies
in a secure, controlled testing environment where faults can be
identified.
● Deployment: Policies should be implemented in phases. An
organization can choose to deploy rules by network segment or
by QoS function (what each policy performs).
● Monitoring and analyzing: Policies should be modified to
increase performance based on performance data.
Models to Implement QoS
1. Integrated Services(IntServ)
● An architecture for providing QoS guarantees in IP networks for
individual application sessions.
● Relies on resource reservation, and routers need to maintain
state information of allocated resources and respond to new call
setup requests.
● Network decides whether to admit or deny a new call setup
request.
2. IntServ QoS Components
● Resource reservation: call setup signaling, traffic, QoS
declaration, per-element admission control.
● QoS-sensitive scheduling, e.g. the weighted fair queueing (WFQ)
discipline.
● QoS-sensitive routing algorithm (e.g. QOSPF).
● QoS-sensitive packet discard strategy.
3. RSVP-Internet Signaling
It creates and maintains a distributed reservation state, is initiated
by the receiver, and scales for multicast. The state is "soft": it must
be refreshed periodically or the reservation times out. Paths are
discovered through PATH messages (sent in the forward direction)
and are then used by RESV messages (sent in the reverse direction
to make the reservation).
4. Call Admission
● A session must first declare its QoS requirement and characterize
the traffic it will send through the network.
● R-specification: defines the QoS being requested, i.e. what kind
of bound we want on the delay, what kind of packet loss is
acceptable, etc.
● T-specification: defines the traffic characteristics, such as the
burstiness of the traffic.
● A signaling protocol is needed to carry the R-spec and T-spec to
the routers where reservation is required.
● Routers will admit calls based on their R-spec, T-spec and based
on the current resource allocated at the routers to other calls.
5. Diff-Serv
Differentiated Services (DiffServ) is a reduced-state solution: routers
do not keep a separate state for every individual flow. Instead, state
is maintained only for coarse-grained traffic aggregates rather than
for end-to-end flows, trying to achieve the best of both worlds. It is
intended to address the following difficulties with IntServ and RSVP:
● Flexible Service Models: IntServ has only two classes; DiffServ
aims to provide more qualitative service classes and 'relative'
service distinctions.
● Simpler signaling: Many applications and users may only want
to specify a more qualitative notion of service.
QoS Tools
● Traffic Classification and Marking
● Traffic Shaping and Policing
● Queue Management and Scheduling
● Resource Reservation
● Congestion Management
What is Multimedia?
The words "multi" and "media" are combined to form the word
multimedia; "multi" signifies "many." Multimedia is a medium that
allows information to be transferred easily from one location to
another. It is the presentation of text, pictures, audio, and video
with links and tools that allow the user to navigate, engage, create,
and communicate using a computer.
Components of Multimedia
● Text: Characters are used to form words, phrases, and
paragraphs in the text. The text can be in a variety of fonts and
sizes to match the multimedia software’s professional
presentation.
● Graphics: Non-text information, such as a sketch, chart, or
photograph, is represented digitally. Graphics add to the appeal
of the multimedia application. The use of visuals in multimedia
enhances the effectiveness and presentation of the concept.
Windows Picture, Internet Explorer, and other similar programs
are often used to see visuals.
● Animations: Animation is the process of making a still image
appear to move. A presentation can also be made lighter and
more appealing by using animation. In multimedia applications,
the animation is quite popular. The following are some of the
most regularly used animation viewing programs: Fax Viewer,
Internet Explorer, etc.
● Video: Photographic images that appear to be in full motion and
are played back at speeds of 15 to 30 frames per second. The
term video refers to a moving image that is accompanied by
sound, such as a television picture.
● Audio: Any sound, whether music, conversation, or something
else. Sound is one of the most important aspects of multimedia,
delivering the joy of music, special effects, and other forms of
entertainment. Decibels are the unit of measurement for volume
and sound pressure level. Audio files are used as part of the
application context as well as to enhance interaction. When they
appear within online applications and webpages, audio files must
occasionally be distributed using plug-in media players. MP3,
WMA, Wave, MIDI, and RealAudio are examples of audio formats.
Programs widely used to play audio and video include RealPlayer
and Windows Media Player.
Conclusion
QoS is critical for ensuring that multimedia applications run
smoothly and effectively across a network. QoS techniques
contribute to the quality and reliability of real-time applications by
regulating bandwidth, latency, jitter, and packet loss. To fulfill the
distinct requirements of various forms of network traffic, QoS is
implemented using a combination of categorization, prioritization,
resource reservation, and traffic management techniques.

QoS improving techniques

Techniques for achieving good Quality of Service (QoS)

Quality of Service (QoS) in networks:
A stream of packets from a source to a destination is called a flow.
Quality of Service refers to the level of performance a flow seeks to
attain. In a connection-oriented network, all the packets belonging
to a flow follow the same route; in a connectionless network, the
packets may follow different routes.
The needs of each flow can be characterized by four primary
parameters:

● Reliability: Lack of reliability means losing a packet or an
acknowledgement, which entails retransmission.
● Delay: An increase in delay means the destination receives the
packet later than expected. The importance of delay varies from
application to application.
● Jitter: Jitter is the variation in delay. If the delay does not stay
constant, it may result in poor quality.
● Bandwidth: An increase in bandwidth means an increase in the
amount of data that can be transferred in a given amount of
time. The importance of bandwidth also varies with the
application.

Application          Reliability   Delay    Jitter   Bandwidth
E-mail               High          Low      Low      Low
File transfer        High          Low      Low      Medium
Web access           High          Medium   Low      Medium
Remote login         High          Medium   Medium   Low
Audio on demand      Low           Low      High     Medium
Video on demand      Low           Low      High     High
Telephony            Low           High     High     Low
Videoconferencing    Low           High     High     High

Techniques for achieving good Quality of Service :

1. Overprovisioning –
The logic of overprovisioning is to provide more router capacity,
buffer space, and bandwidth than is strictly needed. It is an
expensive technique because the resources are costly. E.g. the
telephone system.

2. Buffering –
Flows can be buffered on the receiving side before being delivered.
Buffering does not affect reliability or bandwidth, but it helps to
smooth out jitter, which is useful for audio and video streaming.

3. Traffic Shaping –
Traffic shaping regulates the average rate (and burstiness) of data
transmission. It smooths the traffic on the sending side rather than
the receiving side. When a connection is set up, the user machine
and the subnet agree on a certain traffic pattern for that circuit,
called a Service Level Agreement. Shaping reduces congestion and
thus helps the carrier deliver the packets in the agreed pattern.

Leaky Bucket and Token Bucket algorithms

In the network layer, before the network can make Quality of service
guarantees, it must know what traffic is being guaranteed. One of
the main causes of congestion is that traffic is often bursty.
To understand this concept, we first need to know a little about
traffic shaping. Traffic Shaping is a mechanism to control the
amount and the rate of traffic sent to the network; it is an approach
to congestion management. Traffic shaping helps to regulate the
rate of data transmission and reduces congestion.
There are 2 types of traffic shaping algorithms:
1. Leaky Bucket
2. Token Bucket
Suppose we have a bucket in which we are pouring water, at
random points in time, but we have to get water at a fixed rate, to
achieve this we will make a hole at the bottom of the bucket. This
will ensure that the water coming out is at some fixed rate, and also
if the bucket gets full, then we will stop pouring water into it.
The input rate can vary, but the output rate remains constant.
Similarly, in networking, a technique called leaky bucket can smooth
out bursty traffic. Bursty chunks are stored in the bucket and sent
out at an average rate.

In the above figure, we assume that the network has committed a


bandwidth of 3 Mbps for a host. The use of the leaky bucket shapes
the input traffic to make it conform to this commitment. In the
above figure, the host sends a burst of data at a rate of 12 Mbps for
2s, for a total of 24 Mbits of data. The host is silent for 5 s and then
sends data at a rate of 2 Mbps for 3 s, for a total of 6 Mbits of data.
In all, the host has sent 30 Mbits of data in 10 s. The leaky bucket
smooths out the traffic by sending out data at a rate of 3 Mbps
during the same 10 s.
Without the leaky bucket, the beginning burst may have hurt the
network by consuming more bandwidth than is set aside for this
host. We can also see that the leaky bucket may prevent
congestion.
The leaky bucket algorithm is a simple yet effective way to
control data flow and prevent congestion.
A simple leaky bucket algorithm can be implemented using FIFO
queue. A FIFO queue holds the packets. If the traffic consists of
fixed-size packets (e.g., cells in ATM networks), the process removes
a fixed number of packets from the queue at each tick of the clock.
If the traffic consists of variable-length packets, the fixed output rate
must be based on the number of bytes or bits.
The following is an algorithm for variable-length packets:
1. Initialize a counter to n at the tick of the clock.
2. Repeat until n is smaller than the packet size of the packet at the
head of the queue.
1. Pop a packet out of the head of the queue, say P.
2. Send the packet P, into the network
3. Decrement the counter by the size of packet P.
3. Reset the counter and go to step 1.
Note: In the example below, the head of the queue is the rightmost
position and the tail of the queue is the leftmost position.
Example: Let n = 1000 and let the queue hold packets of sizes
(tail to head) ... | 450 | 400 | 200.
Since n > size of the packet at the head of the queue, i.e. n > 200:
n = 1000 - 200 = 800, and the packet of size 200 is sent into the
network.
Again n > size of the packet now at the head of the queue, i.e.
n > 400:
n = 800 - 400 = 400, and the packet of size 400 is sent into the
network.
Now n < size of the packet at the head of the queue, i.e. n < 450,
so the procedure stops for this tick.
n is reinitialised to 1000 on the next tick of the clock, and the
procedure is repeated until all the packets have been sent into the
network.
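A minimal C++ sketch of this byte-counting procedure is shown
here; the queue contents (including the extra trailing packet size)
are assumed purely for illustration.

// Variable-length-packet leaky bucket: at every clock tick the counter is
// reset to n bytes, and packets are sent while they still fit within n.
#include <iostream>
#include <queue>

int main() {
    const int n = 1000;          // bytes allowed to leave per clock tick
    std::queue<int> packets;     // packet sizes in bytes, front = head of the queue
    for (int size : {200, 400, 450, 300}) packets.push(size);  // assumed sizes

    int tick = 0;
    while (!packets.empty()) {
        ++tick;
        int remaining = n;       // counter is reset to n at every tick
        while (!packets.empty() && packets.front() <= remaining) {
            remaining -= packets.front();
            std::cout << "tick " << tick << ": sent a packet of "
                      << packets.front() << " bytes\n";
            packets.pop();
        }
    }
    return 0;
}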
Below is a simple implementation that models the bucket by
counting packets rather than bytes:

// C++ program to implement a (packet-count) leaky bucket
#include <bits/stdc++.h>
using namespace std;

int main()
{
    int no_of_queries, storage, output_pkt_size;
    int input_pkt_size, bucket_size, size_left;

    // initial packets in the bucket
    storage = 0;

    // total no. of times bucket content is checked
    no_of_queries = 4;

    // total no. of packets that can be accommodated in the bucket
    bucket_size = 10;

    // no. of packets that enter the bucket at a time
    input_pkt_size = 4;

    // no. of packets that exit the bucket at a time
    output_pkt_size = 1;

    for (int i = 0; i < no_of_queries; i++) {
        // space left in the bucket
        size_left = bucket_size - storage;
        if (input_pkt_size <= size_left) {
            // update storage
            storage += input_pkt_size;
        }
        else {
            printf("Packet loss = %d\n", input_pkt_size);
        }
        printf("Buffer size= %d out of bucket size= %d\n",
               storage, bucket_size);
        storage -= output_pkt_size;
    }
    return 0;
}
Output

Buffer size= 4 out of bucket size= 10
Buffer size= 7 out of bucket size= 10
Buffer size= 10 out of bucket size= 10
Packet loss = 4
Buffer size= 9 out of bucket size= 10
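For comparison with the leaky bucket program above, here is a
small sketch of a token bucket simulated over discrete ticks. The
capacity, token rate, and arrival pattern are assumed values chosen
only to show how an idle period lets tokens accumulate and permit
a later burst.

// Token bucket over discrete ticks: tokens are added at a fixed rate up to
// the bucket capacity, and one token is consumed for each packet sent.
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    const int capacity = 5;      // maximum number of tokens the bucket can hold
    const int token_rate = 1;    // tokens generated per clock tick
    int tokens = 0;              // tokens currently in the bucket
    int backlog = 0;             // packets waiting because no token was available

    // Assumed arrival pattern: idle for five ticks, then a burst of 4 packets.
    std::vector<int> arrivals = {0, 0, 0, 0, 0, 4, 1, 0, 0};

    for (std::size_t t = 0; t < arrivals.size(); ++t) {
        tokens = std::min(capacity, tokens + token_rate); // add tokens, capped at capacity
        backlog += arrivals[t];                           // arriving packets join the queue
        int sent = std::min(backlog, tokens);             // one token per packet sent
        tokens -= sent;
        backlog -= sent;
        std::cout << "tick " << t << ": sent " << sent
                  << ", waiting " << backlog
                  << ", tokens left " << tokens << "\n";
    }
    return 0;
}

Because tokens accumulate during the idle ticks, the burst arriving
at tick 5 can be sent immediately, which is exactly the flexibility the
token bucket provides over the leaky bucket.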

Difference between Leaky and Token buckets –

Leaky Bucket                                 Token Bucket
When the host has to send a packet,          The bucket holds tokens generated at
the packet is thrown into the bucket.        regular intervals of time.
The bucket leaks at a constant rate.         The bucket has a maximum capacity.
Bursty traffic is converted into             If there is a ready packet, a token is
uniform traffic by the leaky bucket.         removed from the bucket and the
                                             packet is sent.
In practice the bucket is a finite           If there is no token in the bucket,
queue that outputs at a finite rate.         the packet cannot be sent.

Some advantages of Token Bucket over Leaky Bucket
● If the bucket is full in the token bucket, tokens are discarded, not
packets; in the leaky bucket, packets are discarded.
● The token bucket can send large bursts at a faster rate, while the
leaky bucket always sends packets at a constant rate.
● Predictable Traffic Shaping: Token Bucket offers more
predictable traffic shaping compared to leaky bucket. With token
bucket, the network administrator can set the rate at which
tokens are added to the bucket, and the maximum number of
tokens that the bucket can hold. This allows for better control
over the network traffic and can help prevent congestion.
● Better Quality of Service (QoS): Token Bucket provides better
QoS compared to leaky bucket. This is because token bucket can
prioritize certain types of traffic by assigning different token
arrival rates to different classes of packets. This ensures that
important packets are sent first, while less important packets are
sent later, helping to ensure that the network runs smoothly.
● More efficient use of network bandwidth: Token Bucket
allows for more efficient use of network bandwidth as it allows for
larger bursts of data to be sent at once. This can be useful for
applications that require high-speed data transfer or for
streaming video content.
● More granular control: Token Bucket provides more granular
control over network traffic compared to leaky bucket. This is
because it allows the network administrator to set the token
arrival rate and the maximum token count, which can be adjusted
according to the specific needs of the network.
● Easier to implement: Token Bucket is generally considered
easier to implement compared to leaky bucket. This is because
token bucket only requires the addition and removal of tokens
from a bucket, while leaky bucket requires the use of timers and
counters to determine when to release packets.
Some disadvantages of Token Bucket over Leaky Bucket
● Tokens may be wasted: In Token Bucket, tokens are generated
at a fixed rate, even if there is no traffic on the network. This
means that if no packets are sent, tokens will accumulate in the
bucket, which could result in wasted resources. In contrast, with
leaky bucket, the network only generates packets when there is
traffic, which helps to conserve resources.
● Delay in packet delivery: Token Bucket may introduce delay in
packet delivery due to the accumulation of tokens. If the token
bucket is empty, packets may need to wait for the arrival of new
tokens, which can lead to increased latency and packet loss.
● Lack of flexibility: Token Bucket is less flexible compared to
leaky bucket in terms of shaping network traffic. This is because
the token generation rate is fixed, and cannot be changed easily
to meet the changing needs of the network. In contrast, leaky
bucket can be adjusted more easily to adapt to changes in
network traffic.
● Complexity: Token Bucket can be more complex to implement
compared to leaky bucket, especially when different token
generation rates are used for different types of traffic. This can
make it more difficult for network administrators to configure and
manage the network.
● Inefficient use of bandwidth: In some cases, Token Bucket
may lead to inefficient use of bandwidth. This is because Token
Bucket allows for large bursts of data to be sent at once, which
can cause congestion and lead to packet loss. In contrast, leaky
bucket helps to prevent congestion by limiting the amount of
data that can be sent at any given time.

Unit – V
The Application Layer
Domain Name System

The Domain Name System (DNS) is a hierarchical and distributed system that
translates human-readable domain names into machine-readable IP
addresses:
● How it works
DNS acts like a phone book for the internet, connecting domain names to IP
addresses. When you type a domain name into your web browser, DNS servers
translate the request into an IP address, which allows your browser to access the
website.
● How it's used
DNS is an important part of the web's infrastructure. Every time you visit a website, your
computer performs a DNS lookup. Complex pages may require multiple DNS lookups
before they start loading.
● How it's implemented
DNS is part of the TCP/IP protocol suite. The DNS process involves four servers,
including a DNS recursor, which receives the query from the DNS client and
communicates with other DNS servers to find the IP address.
● Domain name rules
There are rules for domain names, including:
● A maximum of 127 levels of subdomains
● A maximum of 63 characters per label
● A maximum total domain character length of 253 characters
● Labels cannot start or end with hyphens
● TLD names cannot be fully numeric
You can find your computer's domain name by:
1. Clicking on the Start Menu
2. Going to Control Panel
3. Selecting System and Security
4. Selecting System
5. Looking at the bottom of the screen
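To make the name-to-address translation described above concrete,
the sketch below performs a DNS lookup using the standard POSIX
getaddrinfo() call. The hostname is a placeholder, and the code
assumes a POSIX system.

// Minimal DNS lookup sketch using the POSIX resolver interface.
#include <arpa/inet.h>
#include <netdb.h>
#include <netinet/in.h>
#include <cstdio>

int main() {
    const char* host = "www.example.com";    // hypothetical name to resolve
    addrinfo hints{};                        // zero-initialised query hints
    hints.ai_family = AF_INET;               // ask for IPv4 addresses only
    hints.ai_socktype = SOCK_STREAM;

    addrinfo* results = nullptr;
    if (getaddrinfo(host, nullptr, &hints, &results) != 0) {
        std::fprintf(stderr, "DNS lookup failed for %s\n", host);
        return 1;
    }
    // A name may map to several addresses; walk the returned list.
    for (addrinfo* p = results; p != nullptr; p = p->ai_next) {
        char ip[INET_ADDRSTRLEN];
        auto* addr = reinterpret_cast<sockaddr_in*>(p->ai_addr);
        inet_ntop(AF_INET, &addr->sin_addr, ip, sizeof(ip));
        std::printf("%s resolves to %s\n", host, ip);
    }
    freeaddrinfo(results);
    return 0;
}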

Electronic Mail

Introduction to Electronic Mail



Introduction:
Electronic mail, commonly known as email, is a method of
exchanging messages over the internet. Here are the
basics of email:

1. An email address: This is a unique identifier for each user,
typically in the format username@domain.com.
2. An email client: This is a software program used to send, receive
and manage emails, such as Gmail, Outlook, or Apple Mail.
3. An email server: This is a computer system responsible for storing
and forwarding emails to their intended recipients.

To send an email:
1. Compose a new message in your email client.
2. Enter the recipient’s email address in the “To” field.
3. Add a subject line to summarize the content of the message.
4. Write the body of the message.
5. Attach any relevant files if needed.
6. Click “Send” to deliver the message to the recipient’s email
server.
7. Emails can also include features such as cc (carbon copy) and bcc
(blind carbon copy) to send copies of the message to multiple
recipients, and reply, reply all, and forward options to manage the
conversation.
Electronic Mail (e-mail) is one of the most widely used services of
the Internet. This service allows an Internet user to send a message
in a formatted manner (mail) to other Internet users in any part of
the world. A mail message can contain not only text but also
images, audio, and video data. The person who sends the mail is
called the sender, and the person who receives it is called the
recipient. It is just like the postal mail service.
Components of E-Mail System:
The basic components of an email system are: User Agent (UA),
Message Transfer Agent (MTA), Mailbox, and Spool file. These are
explained below.
1. User Agent (UA): The UA is normally a program used to send
and receive mail; it is sometimes called a mail reader. It accepts
a variety of commands for composing, receiving, and replying to
messages, as well as for manipulating mailboxes.
2. Message Transfer Agent (MTA): The MTA is responsible for
transferring mail from one system to another. To send mail, a
system must have a client MTA and a server MTA. The MTA
transfers mail to the recipients' mailboxes if they are on the same
machine, and delivers it to a peer MTA if the destination mailbox
is on another machine. Delivery from one MTA to another is done
by the Simple Mail Transfer Protocol.

3. Mailbox: A mailbox is a file on the local hard drive that collects
mail. Delivered messages are stored in this file, and the user can
read or delete them as required. To use the e-mail system, each
user must have a mailbox, and access to a mailbox is restricted
to its owner.
4. Spool file: This file contains messages that are to be sent. The
user agent appends outgoing mail to this file, and the MTA
extracts pending mail from the spool file for delivery. E-mail also
allows one name, an alias, to represent several different e-mail
addresses; this is known as a mailing list. Whenever a user sends
a message, the system checks the recipient's name against the
alias database. If a mailing list exists for the given alias, separate
messages, one for each entry in the list, are prepared and handed
to the MTA. If no mailing list exists for the alias, the name itself
becomes the destination address and a single message is
delivered to the mail transfer entity.
Services provided by E-mail system:
● Composition – Composition refers to the process of creating
messages and replies. Any kind of text editor can be used for
composition.
● Transfer – Transfer means the procedure of sending mail from
the sender to the recipient.
● Reporting – Reporting refers to confirmation of mail delivery. It
helps users check whether their mail has been delivered, lost, or
rejected.
● Displaying – This refers to presenting the mail in a form that the
user can understand.
● Disposition – This step concerns what the recipient does after
receiving the mail, i.e. save it, delete it before reading, or delete
it after reading.

Advantages and Disadvantages:
Advantages of email:

1. Convenient and fast communication with individuals or groups


globally.
2. Easy to store and search for past messages.
3. Ability to send and receive attachments such as documents,
images, and videos.
4. Cost-effective compared to traditional mail and fax.
5. Available 24/7.

Disadvantages of email:

1. Risk of spam and phishing attacks.


2. Overwhelming amount of emails can lead to information overload.
3. Can lead to decreased face-to-face communication and loss of
personal touch.
4. Potential for miscommunication due to lack of tone and body
language in written messages.
5. Technical issues, such as server outages, can disrupt email
service.
6. It is important to use email responsibly and effectively, for
example, by keeping the subject line clear and concise, using
proper etiquette, and protecting against security threats.
World Wide Web: architectural overview, dynamic web document and http.

The World Wide Web (WWW), often called the Web, is a system of
interconnected webpages and information that you can access using
the Internet. It was created to help people share and find
information easily, using links that connect different pages together.
The Web allows us to browse websites, watch videos, shop online,
and connect with others around the world through our computers
and phones.
All public websites or web pages that people may access on their
local computers and other devices through the internet are
collectively known as the World Wide Web or W3. Users can get
further information by navigating to links interconnecting these
pages and documents. This data may be presented in text, picture,
audio, or video formats on the internet.
What is WWW?
WWW stands for World Wide Web and is commonly known as the
Web. The WWW was started by CERN in 1989. WWW is defined as
the collection of different websites around the world, containing
different information shared via local servers(or computers).
Web pages are linked together using hyperlinks, which are HTML-
formatted and also referred to as hypertext. These pages are the
fundamental units of the Web and are accessed through the
Hypertext Transfer Protocol (HTTP). Such digital connections, or
links, allow users to easily access desired information by connecting
relevant pieces of information. The benefit of hypertext is that it
allows you to pick a word or phrase from the text and click through
to other pages that have more information about it.
History of the WWW
The Web is a project created by Tim Berners-Lee in 1989 so that
researchers at CERN could work together effectively. An
organization named the World Wide Web Consortium (W3C) was
later founded for the further development of the web; it is directed
by Tim Berners-Lee, also known as the father of the web. CERN,
where Berners-Lee worked, is a community of more than 1700
researchers from more than 100 countries. These researchers spend
only a little time at CERN and work the rest of the time at their
colleges and national research facilities in their home countries, so
there was a need for reliable communication so that they could
exchange data.
System Architecture
From the user's point of view, the web consists of a vast, worldwide
collection of documents or web pages. Each page may contain links
to other pages anywhere in the world. The pages can be retrieved
and viewed using browsers, of which Internet Explorer, Netscape
Navigator, Google Chrome, etc. are popular examples. The browser
fetches the requested page, interprets the text and formatting
commands on it, and displays the page, properly formatted, on the
screen.
The basic model of how the web works is shown in the figure below.
Here the browser is displaying a web page on the client machine.
When the user clicks on a line of text that is linked to a page on the
abd.com server, the browser follows the hyperlink by sending a
message to the abd.com server asking for that page.
Working of WWW
A Web browser is used to access web pages. Web browsers can be
defined as programs which display text, data, pictures, animation
and video on the Internet. Hyperlinked resources on the World Wide
Web can be accessed using software interfaces provided by Web
browsers. Initially, Web browsers were used only for surfing the Web
but now they have become more universal.
The diagram below shows how the Web operates using the client-
server architecture of the internet. When a user requests a web
page or other information, the web browser on the user's system
sends a request to the server; the web server then returns the
requested content to the browser, and finally the requested
information is used by the user who made the request.

World Wide Web

Web browsers can be used for several tasks including conducting


searches, mailing, transferring files, and much more. Some of the
commonly used browsers are Internet Explorer, Opera Mini, and
Google Chrome.
Features of WWW
● WWW is open source.
● It is a distributed system spread across various websites.
● It is a Hypertext Information System.
● It is Cross-Platform.
● Uses Web Browsers to provide a single interface for many
services.
● Dynamic, Interactive and Evolving.
Components of the Web
There are 3 components of the web:
● Uniform Resource Locator (URL): URL serves as a system for
resources on the web.
● Hyper Text Transfer Protocol (HTTP): HTTP specifies
communication of browser and server.
● Hyper Text Markup Language (HTML): HTML defines the
structure, organisation and content of a web page.
Difference Between WWW and Internet

WWW                                          Internet
It originated in 1989.                       It originated in the 1960s.
WWW is an interconnected collection          The Internet connects computers
of websites and documents that can           to one another.
be accessed via the Internet.
WWW uses protocols such as HTTP.             The Internet uses protocols such
                                             as TCP/IP.
It is based on software.                     It is based on hardware.
It is a service contained inside an          It is the entire infrastructure
infrastructure.                              itself.

Web Browser Evolution and the Growth of the


World Wide Web
In the early 1990s, Tim Berners-Lee and his team created a basic
text web browser. It was the release of the more user-friendly
Mosaic browser in 1993 that really sparked widespread interest in
the World Wide Web (WWW). Mosaic had a clickable interface
similar to what people were already familiar with on personal
computers, which made it easier for everyone to use the internet.
Mosaic was developed by Marc Andreessen and others in the United
States. They later made Netscape Navigator, which became the
most popular browser in 1994. Microsoft’s Internet Explorer took
over in 1995 and held the top spot for many years. Mozilla
Firefox came out in 2004, followed by Google Chrome in 2008, both
challenging IE’s dominance. In 2015, Microsoft replaced Internet
Explorer with Microsoft Edge.
Conclusion
The World Wide Web (WWW) has revolutionized how information is
accessed and shared globally. It provides a vast network of
interconnected documents and resources accessible via
the Internet. Through web browsers, users can navigate websites,
access multimedia content, communicate, and conduct transactions
online. The WWW has transformed communication, commerce,
education, and entertainment, shaping modern society and
facilitating a connected global community. Its continued evolution
and accessibility drive innovation and connectivity worldwide.

APPLICATION LAYER PROTOCOLS: Simple Network Management Protocol, File


Transfer Protocol, Simple Mail Transfer Protocol, Telnet.

The Application Layer is the topmost layer in the Open System


Interconnection (OSI) model. This layer provides several ways for
manipulating the data which enables any type of user to access the
network with ease. The Application Layer interface directly interacts
with the application and provides common web application services.
The application layer performs several kinds of functions that are
required in any kind of application or communication process. In this
article, we will discuss various application layer protocols.
What are Application Layer Protocols?
Application layer protocols are those protocols utilized at the
application layer of the OSI (Open Systems Interconnection) and
TCP/IP models. They facilitate communication and data sharing
between software applications on various network devices. These
protocols define the rules and standards that allow applications to
interact and communicate quickly and effectively over a network.
Application Layer Protocol in Computer
Network
1. TELNET
Telnet stands for the TELetype NETwork. It helps in terminal
emulation. It allows Telnet clients to access the resources of the
Telnet server. It is used for managing files on the Internet. It is used
for the initial setup of devices like switches. The telnet command is
a command that uses the Telnet protocol to communicate with a
remote device or system. The port number of the telnet is 23.
Command
telnet [\\RemoteServer]
\\RemoteServer : Specifies the name of the server to which you
want to connect.
2. FTP
FTP stands for File Transfer Protocol. It is the protocol that actually
lets us transfer files, and it can facilitate this between any two
machines. FTP is not just a protocol but also a program. FTP
promotes sharing of files via remote computers with reliable and
efficient data transfer. The port numbers for FTP are 20 for data and
21 for control.
Command
ftp machinename
3. TFTP
The Trivial File Transfer Protocol (TFTP) is the stripped-down, stock
version of FTP, but it’s the protocol of choice if you know exactly
what you want and where to find it. It’s a technology for transferring
files between network devices and is a simplified version of FTP. The
Port number for TFTP is 69.
Command
tftp [ options... ] [host [port]] [-c command]
4. NFS
It stands for a Network File System. It allows remote hosts to mount
file systems over a network and interact with those file systems as
though they are mounted locally. This enables system
administrators to consolidate resources onto centralized servers on
the network. The Port number for NFS is 2049.
Command
service nfs start
5. SMTP
It stands for Simple Mail Transfer Protocol. It is a part of the TCP/IP
protocol. Using a process called “store and forward,” SMTP moves
your email on and across networks. It works closely with something
called the Mail Transfer Agent (MTA) to send your communication to
the right computer and email inbox. The Port number for SMTP is
25.
Command
MAIL FROM:<sender@example.com>
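A typical SMTP session between a client (C) and a server (S) looks
roughly like the following; the host names and addresses are
placeholders.

S: 220 mail.example.com Service ready
C: HELO client.example.org
S: 250 mail.example.com greets client.example.org
C: MAIL FROM:<alice@example.org>
S: 250 OK
C: RCPT TO:<bob@example.com>
S: 250 OK
C: DATA
S: 354 Start mail input; end with <CRLF>.<CRLF>
C: Subject: Test message
C:
C: Hello Bob, this is a test.
C: .
S: 250 OK, message accepted for delivery
C: QUIT
S: 221 mail.example.com closing connection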
6. LPD
It stands for Line Printer Daemon. It is designed for printer sharing.
It is the part that receives and processes the request. A “daemon” is
a server or agent. The Port number for LPD is 515.
Command
lpd [ -d ] [ -l ] [ -D DebugOutputFile]
7. X window
It defines a protocol for the writing of graphical user interface–based
client/server applications. The idea is to allow a program, called a
client, to run on one computer. It is primarily used in networks of
interconnected mainframes. Port number for X window starts from
6000 and increases by 1 for each server.
Command
Run xdm in runlevel 5
8. SNMP
It stands for Simple Network Management Protocol. It gathers data
by polling the devices on the network from a management station
at fixed or random intervals, requiring them to disclose certain
information. It is a way for devices to share information about their
current state, and also a channel through which an administrator
can modify pre-defined values. SNMP typically uses port 161 for
agent requests and port 162 for traps.
Command
snmpget -mALL -v1 -cpublic snmp_agent_Ip_address sysName.0
9. DNS
It stands for Domain Name System. Every time you use a domain
name, therefore, a DNS service must translate the name into the
corresponding IP address. For example, the domain name
www.abc.com might translate to 198.105.232.4.
The Port number for DNS is 53.
Command
ipconfig /flushdns
10. DHCP
It stands for Dynamic Host Configuration Protocol (DHCP). It gives IP
addresses to hosts. There is a lot of information a DHCP server can
provide to a host when the host is registering for an IP address with
the DHCP server. Port number for DHCP is 67, 68.
Command
clear ip dhcp binding {address | * }
11. HTTP/HTTPS
HTTP stands for Hypertext Transfer Protocol, and HTTPS is the more
secure version of HTTP; HTTPS stands for Hypertext Transfer
Protocol Secure. This protocol is used to access data from the World
Wide Web. Hypertext is a well-organized documentation system in
which pages are linked to one another through the text of the
documents. A sample request and response exchange is shown
after the list below.
● HTTP is based on the client-server model.
● It uses TCP for establishing connections.
● HTTP is a stateless protocol, which means the server doesn’t
maintain any information about the previous request from the
client.
● HTTP uses port number 80 for establishing the connection.
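As a rough illustration, a minimal HTTP request and the
corresponding response look like the following; the host name and
content length are placeholders.

GET /index.html HTTP/1.1
Host: www.example.com

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 138

<html> ... body of the page ... </html>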
12. POP
POP stands for Post Office Protocol and the latest version is known
as POP3 (Post Office Protocol version 3). This is a simple protocol
used by User agents for message retrieval from mail servers.
● The POP protocol works with port number 110.
● It uses TCP for establishing connections.
POP works in two modes: Delete mode and Keep mode.
In Delete mode, messages are deleted from the mail server once
they are downloaded to the local system.
In Keep mode, messages are not deleted from the mail server, so
users can access their mail from the server again later.
13. IRC
IRC stands for Internet Relay Chat. It is a text-based instant
messaging/chatting system. IRC is used for group or one-to-one
communication. It also supports file, media, data sharing within the
chat. It works upon the client-server model. Where users connect to
IRC server or IRC network via some web/ standalone application
program.
● It uses TCP or TLS for connection establishment.
● It makes use of port number 6667.
14. MIME
MIME stands for Multipurpose Internet Mail Extension. This protocol
is designed to extend the capabilities of the existing Internet email
protocol like SMTP. MIME allows non-ASCII data to be sent via SMTP.
It allows users to send/receive various kinds of files over the Internet
like audio, video, programs, etc. MIME is not a standalone protocol it
works in collaboration with other protocols to extend their
capabilities.
Conclusion
Application layer protocols are required to enable communication
and data exchange between software applications on different
network devices. These protocols, which include HTTP, FTP, SMTP,
and DNS, specify the rules and standards that enable applications to
communicate easily across a network. Each protocol serves a
distinct purpose, ranging from file transfer and email management
to network device configuration and web page access, providing
efficient and effective network connection.
After completion of Units IV and V you should be able to answer the following
questions:

1. Explain all the services of the Transport Layer.


2. Explain Quality of Service (QoS).
3. Explain Leaky Bucket and Token Bucket algorithms.
4. Explain Internet Transport Layer Protocols: UDP and TCP, SCTP.
5. Explain techniques for achieving good Quality of Service (QoS).
6. Explain Domain Name System.
7. Explain the steps to send an Email.
8. What are the advantages and disadvantages of E-mail?
9. What is WWW?
10. Explain the Services provided by E-mail system.
11. What are the Features of WWW?
12. Differentiate between WWW and Internet.
13. What is TELNET?
14. What is TFTP?
15. What is SMTP?
16. What is DHCP?
17. What is MIME?
18. What is FTP?
19. What is NFS?
20. What is LPD?
21. What is HTTP/HTTPS?
22. How does QoS Work?
