
What is Protocol Layering?

A protocol is a set of rules and standards that primarily define a language that devices use to communicate. A great range of protocols is in extensive use in networking, and they are usually implemented in numerous layers.

Protocol layering provides a communication service in which processes exchange messages. When the communication is simple, a single protocol may suffice.

When the communication is complex, we must divide the task between different layers and follow a protocol at each layer; this technique is called protocol layering. Layering allows us to separate the services from the implementation.

Each layer receives a set of services from the layer below it and provides services to the layer above it. A modification made in any one layer does not affect the other layers.

Basic Elements of Layered Architecture


The basic elements of the layered architecture are as follows −

Service − The set of actions that a layer provides to the layer above it.

Protocol − The set of rules a layer uses to exchange information with its peer entity. It is concerned with both the contents and the order of the messages used.

Interface − The means through which a message is transferred from one layer to the next.


TCP/IP Protocol Stack


As shown in the following diagram, the TCP/IP protocol stack contains four layers:
Physical layer

Internet Protocol (IP) layer

Transport layer, comprising

Transmission Control Protocol (TCP); and

User Datagram Protocol (UDP)

Applications layer

[Diagram: the four layers of the TCP/IP protocol stack]

This document covers the following topics:

Physical Layer

Internet Protocol (IP) Layer

Transport Layer

Applications Layer

Physical Layer
At the bottom of the stack is the physical layer, which deals with the actual transmission of data over
physical media such as serial lines, Ethernet, token rings, FDDI rings, and hyperchannels.
Messages can also be sent and received over other, non-physical access methods such as
VTAM/SNA.

Internet Protocol (IP) Layer


Above the physical layer is the Internet Protocol (IP) layer, which deals with the routing of packets from one computer to another. The IP layer

determines which lower-level protocol to use when multiple interfaces exist;

determines whether to send a packet directly to the host or indirectly to a relay host known as a router;

breaks a packet that is larger than the size supported by the physical medium into smaller packets, a process referred to as "fragmentation and reassembly" (the destination reassembles the fragments);

provides some control services for packets, and ensures that they are not sent from router to router indefinitely.

However, the IP layer does not keep track of a packet after it is sent, nor does it guarantee that the packet will be delivered.
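The fragmentation-and-reassembly step can be sketched in Python. The function names and the 1480-byte payload size are illustrative assumptions, not taken from the IP specification:

```python
# Sketch of IP-style fragmentation: a packet larger than the medium's
# maximum size is split into fragments, each carrying an offset so the
# destination can reassemble them regardless of arrival order.

def fragment(payload: bytes, mtu: int):
    """Split payload into (offset, chunk, more_fragments) triples."""
    fragments = []
    for offset in range(0, len(payload), mtu):
        chunk = payload[offset:offset + mtu]
        more = offset + mtu < len(payload)   # True if more fragments follow
        fragments.append((offset, chunk, more))
    return fragments

def reassemble(fragments):
    """Rebuild the original payload from fragments, in any arrival order."""
    return b"".join(chunk for _, chunk, _ in sorted(fragments))

data = b"x" * 3000                 # a 3000-byte packet
frags = fragment(data, 1480)       # e.g. Ethernet leaves ~1480 payload bytes
assert reassemble(list(reversed(frags))) == data
print(len(frags))  # 3
```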

Transport Layer
Above the IP layer is the transport layer, which contains the Transmission Control Protocol (TCP)
and the User Datagram Protocol (UDP).

Transmission Control Protocol (TCP)


The Transmission Control Protocol (TCP) guarantees that data sent by higher levels is delivered in
order and without corruption. To accomplish this level of service, the TCP implementation on one
computer establishes a session or connection with the TCP implementation on another computer.
This process is referred to as Connection Oriented Transport Service (COTS).

After a session is established, data is sent and received as a stream of contiguous bytes; each byte
can be referenced by an exact sequence number. When data is received by the remote TCP, it
sends an acknowledgment back to the local TCP advising it of the sequence number of the last byte
of data received. If an acknowledgment is not received, or if an acknowledgment for previously sent
data is received twice, the local TCP retransmits the data until it is all acknowledged. The remote
TCP discards any bytes that are received more than once.

All data sent and received by TCP is validated for corruption using checksums. Whenever a
checksum is incorrect, the bad data is discarded by TCP, and the correct data is retransmitted until
it is accurately received.
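The checksum validation described above can be illustrated with a minimal Python sketch of the 16-bit ones' complement checksum, in the style of RFC 1071. Real TCP also sums a pseudo-header, which is omitted here:

```python
# Simplified sketch of the Internet checksum used by TCP (and IP, UDP):
# 16-bit words are summed with end-around carry, then complemented.

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:                         # pad odd-length data
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # big-endian 16-bit words
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF

segment = b"hello, tcp"
csum = internet_checksum(segment)
# The receiver recomputes the sum over data plus checksum; a result of 0
# means "no corruption detected", anything else flags an error.
assert internet_checksum(segment + csum.to_bytes(2, "big")) == 0
corrupted = b"hellp, tcp"
assert internet_checksum(corrupted + csum.to_bytes(2, "big")) != 0
```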

User Datagram Protocol (UDP)


Unlike TCP, the User Datagram Protocol (UDP) transmits and receives data in packets (datagrams), and delivery is not guaranteed. The contents of the data can be sent with or without a checksum. The use of checksums varies widely from one implementation to another.
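UDP's datagram behaviour can be demonstrated with Python's socket module, which mirrors the C sockets API. Loopback is used here, so the datagrams will in practice arrive even though UDP itself makes no such guarantee:

```python
# Each sendto() is an independent datagram: no connection, no ordering or
# delivery guarantee from UDP itself (loopback preserves both in practice).
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
receiver.settimeout(2)                # don't block forever if a datagram is lost
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"datagram one", ("127.0.0.1", port))
sender.sendto(b"datagram two", ("127.0.0.1", port))

data, addr = receiver.recvfrom(2048)  # one recvfrom() per datagram
print(data)                           # b'datagram one'
data, addr = receiver.recvfrom(2048)
print(data)                           # b'datagram two'
sender.close(); receiver.close()
```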

Applications Layer
Above the transport layer is the applications layer, which contains both general applications and
function libraries for use by applications.

Some general applications that run over TCP include

File Transfer Protocol (FTP);

remote terminal emulation (TELNET in line mode, TN3270 in full screen);

Electronic Mail (SMTP); and

Entire Net-Work.
Some general applications that run over UDP are

Network File Server (NFS); and

Domain Name Server (DNS).

Interface with TCP and UDP


Function libraries provide routines to simplify the interface between applications and TCP/UDP of
the Transport layer:

The most common function library is known as Sockets, which allows an application written in C to
access TCP as if it were just another stream input/output device.

Another function library that is less commonly used is Remote Procedure Call (RPC), which allows
applications to make calls to functions that are located in another application on a different
computer.
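The stream abstraction that the Sockets library exposes can be sketched in Python, whose socket module mirrors the C sockets API: TCP delivers a contiguous byte stream, so two separate sends may be read back as one:

```python
# Minimal TCP stream over loopback: data is a sequence of bytes with no
# message boundaries, exactly the "stream input/output device" view.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: OS assigns a free port
server.listen(1)
host, port = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
conn, _ = server.accept()

client.sendall(b"hello ")
client.sendall(b"world")
client.close()                       # close so the reader sees end-of-stream

received = b""
while chunk := conn.recv(1024):      # read until the peer closes
    received += chunk
print(received)                      # b'hello world'
conn.close(); server.close()
```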

The environment in which an application runs often dictates the interface used between it and TCP
or UDP:

Most UNIX, OS/2, and Windows applications are written in C and utilize a direct socket interface.

On IBM mainframes and other systems based on the same architecture such as Fujitsu Technology
Solutions, applications are often written in S/390 assembler, and use either a pseudo-socket
interface or an application program interface (API) to gain access to the TCP/IP protocol stack.

Ports
The interface that exists between an application and TCP is referred to as a port. Ports are
classified as server ports and client ports:

Server ports are generally ports on which the application "listens" for incoming connections to be
made.

Client ports are generally ports on which the application "connects" outwardly to a server port.

An application may control multiple client ports and server ports simultaneously.

Each port is identified by a port number, which ranges from 1 to 65535.

The port number used by client ports usually has no significance and is often assigned by TCP.

Server port numbers, however, are usually required to be "well known"; that is, the client must know
which port the server is listening on when it attempts to connect. Server port numbers usually are
specified by the server application.
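The distinction between server and client ports can be observed directly with a short Python sketch; port 0 asks the operating system for a free port, standing in for a real well-known port:

```python
# The server listens on a known port; the client's port is ephemeral,
# assigned by TCP, and visible on both ends of the connection.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # in real use this would be "well known"
server.listen(1)
server_port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", server_port))
conn, peer = server.accept()

client_port = client.getsockname()[1]    # the ephemeral port TCP picked
assert peer[1] == client_port            # server sees the same client port
assert client.getpeername()[1] == server_port
client.close(); conn.close(); server.close()
```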
What is a packet?
In networking, a packet is a small segment of a larger message. Data sent over computer networks, such as the Internet, is divided into packets. These packets are then recombined by the computer or device that receives them.

Virtual Circuit in Computer Network


A virtual circuit is a computer-network mechanism providing connection-oriented service. In a virtual circuit, resources are reserved for the duration of the data transmission between two nodes, which makes it a highly reliable medium of transfer. Virtual circuits are, however, costly to implement.
Working of Virtual Circuit:

In the first step, a path is set up between the two end nodes.
Resources are reserved along this path for the transmission of packets.
Then a signal is sent to the sender to indicate that the path is set up and transmission can be started.
It ensures the transmission of all packets.
A global header is used only in the first packet of the connection; later packets carry a short virtual-circuit identifier.
Whenever data is to be transmitted, a new connection must first be set up.
Advantages of Virtual Circuit:

Packets are delivered to the receiver in the same order the sender sent them.
A virtual circuit is a reliable network circuit.
There is no need for full addressing overhead in each packet.
A single global header, in the first packet, carries the full addressing information for the virtual circuit.
Disadvantages of Virtual Circuit:

Virtual circuits are costly to implement.


It provides only connection-oriented service.
A new connection setup is always required before transmission.

Physical Layer in OSI Model


The physical layer is the bottom-most layer in the Open System Interconnection (OSI) model and is the physical and electrical representation of the system. It consists of various network components such as power plugs, connectors, receivers, cable types, etc. The physical layer sends data bits from one device (such as a computer) to another. It defines the type of encoding, that is, how the 0s and 1s are encoded in a signal. The physical layer is responsible for the communication of unstructured raw data streams over a physical medium.

Functions Performed by Physical Layer


The following are some important and basic functions that are performed by the Physical Layer of the OSI
Model –
1. The physical layer maintains the data rate (how many bits a sender can send per second).
2. It performs the synchronization of bits.
3. It helps in transmission medium decisions (direction of data transfer).
4. It helps in physical topology (Mesh, Star, Bus, Ring) decisions (the topology through which we
connect the devices to each other).
5. It helps in providing physical medium and interface decisions.
6. It provides two types of configuration: Point-to-Point configuration and Multi-Point configuration.
7. It provides an interface between devices (like PCs or computers) and the transmission medium.
8. Its protocol data unit is the bit.
9. Devices such as hubs and Ethernet physical interfaces operate at this layer.
10. This layer comes under the category of hardware layers (since this layer is responsible for all the
physical connection establishment and processing).
11. It provides an important function called modulation, which is the process of conveying data by
impressing the information onto an electrical or optical carrier signal.
12. It also provides a switching mechanism wherein data can be forwarded from one port
(the sender port) to the intended destination port.

Line Configuration
● Point-to-Point configuration: In Point-to-Point configuration, there is a line (link) that is fully
dedicated to carrying the data between two devices.
● Multi-Point configuration: In a Multi-Point configuration, there is a line (link) through which
multiple devices are connected.
Modes of Transmission Medium
1. Simplex mode: In this mode, out of two devices, only one device can transmit the data, and the
other device can only receive the data. Example- Input from keyboards, monitors, TV
broadcasting, Radio broadcasting, etc.
2. Half Duplex mode: In this mode, out of two devices, both devices can send and receive the data
but only one at a time not simultaneously. Examples- Walkie-Talkie, Railway Track, etc.
3. Full-Duplex mode: In this mode, both devices can send and receive the data simultaneously.
Examples- Telephone Systems, Chatting applications, etc.

Guided Media
It is defined as the physical medium through which the signals are transmitted. It is also known as Bounded
media.

Types Of Guided media:

Twisted pair:
Twisted pair is a physical media made up of a pair of cables twisted with each other. A twisted pair cable is
cheap as compared to other transmission media. Installation of the twisted pair cable is easy, and it is a
lightweight cable. The frequency range for twisted pair cable is from 0 to 3.5KHz.

A twisted pair consists of two insulated copper wires arranged in a regular spiral pattern.

The degree of reduction in noise interference is determined by the number of turns per foot. Increasing the
number of turns per foot decreases noise interference.

Types of Twisted pair:

Unshielded Twisted Pair:


An unshielded twisted pair is widely used in telecommunication. Following are the categories of the
unshielded twisted pair cable:

○ Category 1: Category 1 is used for telephone lines that carry low-speed data.
○ Category 2: It can support up to 4 Mbps.
○ Category 3: It can support up to 16 Mbps.
○ Category 4: It can support up to 20 Mbps; it can therefore be used for long-distance communication.
○ Category 5: It can support up to 200 Mbps.

Advantages Of Unshielded Twisted Pair:

○ It is cheap.
○ Installation of the unshielded twisted pair is easy.
○ It can be used for high-speed LAN.

Disadvantage:

○ This cable can only be used for shorter distances because of attenuation.

Shielded Twisted Pair


A shielded twisted pair is a cable whose wires are surrounded by a metal mesh (shield), which allows a higher transmission rate.

Characteristics Of Shielded Twisted Pair:

○ The cost of the shielded twisted pair cable is not very high and not very low.
○ Installation of STP is easy.
○ It has higher capacity as compared to unshielded twisted pair cable.
○ It has a higher attenuation.
○ It is shielded and provides a higher data transmission rate.

Disadvantages

○ It is more expensive as compared to UTP and coaxial cable.


○ It has a higher attenuation rate.

Coaxial Cable
○ Coaxial cable is a very commonly used transmission media, for example, TV wire is usually a coaxial
cable.
○ The name of the cable is coaxial because its two conductors share a common axis (they are concentric).
○ It has a higher frequency as compared to Twisted pair cable.
○ The inner conductor of the coaxial cable is made up of copper, and the outer conductor is made up of
copper mesh. A non-conductive insulating layer separates the inner conductor from the outer
conductor.
○ The middle core is responsible for the data transferring whereas the copper mesh prevents
EMI(Electromagnetic interference).

Coaxial cable is of two types:

1. Baseband transmission: It is defined as the process of transmitting a single signal at high speed.
2. Broadband transmission: It is defined as the process of transmitting multiple signals simultaneously.

Advantages Of Coaxial cable:

○ The data can be transmitted at high speed.


○ It has better shielding as compared to twisted pair cable.
○ It provides higher bandwidth.

Disadvantages Of Coaxial cable:

○ It is more expensive as compared to twisted pair cable.


○ Any fault in the cable can cause the failure of the entire network.

Fibre Optic
○ Fibre optic cable is a cable that uses light signals for communication.
○ It holds optical fibres coated in plastic that are used to send data as pulses of light.
○ The plastic coating protects the optical fibres from heat, cold, and electromagnetic interference from other
types of wiring.
○ Fibre optics provide faster data transmission than copper wires.

Diagrammatic representation of fibre optic cable:

Basic elements of Fibre optic cable:


○ Core: The optical fibre consists of a narrow strand of glass or plastic known as a core. A core is a light
transmission area of the fibre. The more the area of the core, the more light will be transmitted into the fibre.
○ Cladding: The concentric layer of glass is known as cladding. The main function of the cladding is to provide
a lower refractive index at the core interface, so as to cause total internal reflection within the core and thus
keep the light waves propagating through the fibre.
○ Jacket: The protective coating consisting of plastic is known as a jacket. The main purpose of a jacket is to
preserve the fibre strength, absorb shock, and provide extra protection for the fibre.

Following are the advantages of fibre optic cable over copper:

○ Greater bandwidth: The fibre optic cable provides more bandwidth than copper. Therefore, fibre
optic carries more data than copper cable.
○ Faster speed: Fibre optic cable carries the data in the form of light. This allows the fibre optic cable to carry
signals at a higher speed.
○ Longer distances: The fibre optic cable carries data over longer distances than copper cable.
○ Better reliability: The fibre optic cable is more reliable than copper cable, as it is immune to the temperature
changes that can obstruct the connectivity of copper cable.
○ Thinner and sturdier: Fibre optic cable is thinner and lighter in weight, yet it can withstand more pull pressure
than copper cable.

UNIT II - DATA LINK LAYER LECTURE 07


The data link layer is the second layer, directly above the physical layer. It is responsible for
maintaining the data link between two hosts or nodes.
Before going through the design issues in the data link layer, we list its sub-layers and their
functions below.
The data link layer is divided into two sub-layers :
1. Logical Link Control Sub-layer (LLC) –
Provides the logic for the data link; thus it controls the synchronization, flow control,
and error checking functions of the data link layer. Its functions are –
● (i) Error recovery.
● (ii) Flow control operations.
● (iii) User addressing.
2. Media Access Control Sub-layer (MAC) –
It is the second sub-layer of the data-link layer. It controls the flow and multiplexing for
the transmission medium. Transmission of data packets is controlled by this layer. This
layer is responsible for sending the data over the network interface card.
Functions are –
● (i) To perform the control of access to media.
● (ii) It performs the unique addressing to stations directly connected to LAN.
● (iii) Detection of errors.
Design issues with data link layer are :
1. Services provided to the network layer –
The data link layer acts as a service interface to the network layer. The principal service is
transferring data from the network layer on the sending machine to the network layer on the
destination machine. This transfer also takes place via DLL (Data link-layer).
It provides three types of services:
1. Unacknowledged and connectionless services.
2. Acknowledged and connectionless services.
3. Acknowledged and connection-oriented services
Unacknowledged and connectionless services-
● Here the sender machine sends independent frames without any
acknowledgement from the receiver.
● There is no logical connection established.
Acknowledged and connectionless services-
● There is no logical connection between sender and receiver established.
● Each frame is acknowledged by the receiver.
● If the frame didn’t reach the receiver in a specific time interval it has to be sent
again.
● It is very useful in wireless systems.
Acknowledged and connection-oriented services-
● A logical connection is established between sender and receiver before data is
transferred.
● Each frame is numbered so the receiver can verify that all frames have arrived,
each exactly once.
2. Frame synchronization –
The source machine sends data in the form of blocks called frames to the destination machine.
The starting and ending of each frame should be identified so that the frame can be recognized
by the destination machine.
3. Flow control –
Flow control is done to prevent the receiver from being flooded with data frames. The source
machine must not send data frames at a rate faster than the capacity of the destination machine to
accept them.
4. Error control –
Error control is done to detect and correct transmission errors and to prevent duplication of frames.
The errors introduced during transmission from the source to the destination machine must be
detected and corrected at the destination machine.

Error Detection and Correction in Data link Layer


Data-link layer uses error control techniques to ensure that frames, i.e. bit streams of data,
are transmitted from the source to the destination with a certain extent of accuracy.
Errors-
When bits are transmitted over a computer network, they are liable to be corrupted due to
interference and network problems. Corrupted bits lead to spurious data being received by
the destination; these are called errors.
Types of Errors
Errors can be of three types, namely single-bit errors, multiple-bit errors, and burst errors.
● Single-bit error − In the received frame, only one bit has been corrupted, i.e. either
changed from 0 to 1 or from 1 to 0.

● Multiple-bit error − In the received frame, more than one bit is corrupted.

● Burst error − In the received frame, more than one consecutive bit is corrupted.

Error Control
Error control can be done in two ways:
● Error detection − Error detection involves checking whether any error has occurred or
not. The number of error bits and the type of error do not matter.

● Error correction − Error correction involves ascertaining the exact number of bits that
have been corrupted and the location of the corrupted bits.
For both error detection and error correction, the sender needs to send some additional bits
along with the data bits. The receiver performs necessary checks based upon the additional
redundant bits. If it finds that the data is free from errors, it removes the redundant bits before
passing the message to the upper layers.
Error Detection Techniques
There are three main techniques for detecting errors in frames: Parity Check, Checksum, and
Cyclic Redundancy Check (CRC).
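As a concrete example of the first technique, a single even-parity bit can be added and checked as follows. This is a generic textbook sketch, not any specific protocol's frame format:

```python
# Even parity: the sender appends one bit making the count of 1s even; the
# receiver flags a frame whose total count of 1s is odd. This catches any
# single-bit error but misses errors that flip an even number of bits.

def add_parity(bits):
    return bits + [sum(bits) % 2]      # parity bit makes the total of 1s even

def check_parity(frame):
    return sum(frame) % 2 == 0         # True means "no error detected"

frame = add_parity([1, 0, 1, 1, 0, 0, 1])
assert check_parity(frame)

frame[2] ^= 1                  # single-bit error: detected
assert not check_parity(frame)

frame[3] ^= 1                  # second flip: a two-bit error goes undetected
assert check_parity(frame)
```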

Elementary Data Link Protocols


Protocols in the data link layer are designed so that this layer can perform its
basic functions: framing, error control and flow control. Framing is the process of
dividing the bit stream from the physical layer into data frames whose size ranges from
a few hundred to a few thousand bytes. Error control mechanisms deal with
transmission errors and the retransmission of corrupted and lost frames. Flow control
regulates the speed of delivery so that a fast sender does not drown a slow
receiver.

Types of Data Link Protocols


Data link protocols can be broadly divided into two categories, depending on
whether the transmission channel is noiseless or noisy.

Simplex Protocol
The Simplex protocol is a hypothetical protocol designed for unidirectional data
transmission over an ideal channel, i.e. a channel through which transmission can
never go wrong. It has distinct procedures for sender and receiver. The sender
simply sends all its data onto the channel as soon as it is available in its
buffer. The receiver is assumed to process all incoming data instantly. The protocol is
hypothetical since it does not handle flow control or error control.

Stop – and – Wait Protocol


The Stop – and – Wait protocol is also for a noiseless channel. It provides unidirectional
data transmission without any error control facilities. However, it provides for flow
control so that a fast sender does not drown a slow receiver. The receiver has a
finite buffer size with finite processing speed. The sender can send a frame only
when it has received indication from the receiver that it is available for further data
processing.

Stop – and – Wait ARQ


Stop – and – wait Automatic Repeat Request (Stop – and – Wait ARQ) is a variation
of the above protocol with added error control mechanisms, appropriate for noisy
channels. The sender keeps a copy of the sent frame. It then waits for a finite time
to receive a positive acknowledgement from receiver. If the timer expires or a
negative acknowledgement is received, the frame is retransmitted. If a positive
acknowledgement is received then the next frame is sent.
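The timeout-and-retransmit behaviour can be traced with a toy, deterministic simulation; the scripted losses and the function name are our own illustrative assumptions, not a real implementation:

```python
# Toy Stop-and-Wait ARQ trace: the sender keeps a copy of each frame and
# retransmits on timeout until a positive acknowledgement arrives. Losses
# are scripted by transmission-attempt number to keep the run deterministic.

def stop_and_wait_arq(num_frames, lost_attempts):
    log, attempt = [], 0
    for seq in range(num_frames):
        while True:
            attempt += 1
            log.append(("send", seq))
            if attempt in lost_attempts:      # this frame (or its ACK) is lost
                log.append(("timeout", seq))  # timer expires: retransmit copy
                continue
            log.append(("ack", seq))          # positive ACK: send next frame
            break
    return log

log = stop_and_wait_arq(num_frames=2, lost_attempts={2})
print(log)
# [('send', 0), ('ack', 0), ('send', 1), ('timeout', 1), ('send', 1), ('ack', 1)]
```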

Go – Back – N ARQ
Go – Back – N ARQ provides for sending multiple frames before receiving the
acknowledgement for the first frame. It uses the concept of sliding window, and so
is also called sliding window protocol. The frames are sequentially numbered and a
finite number of frames are sent. If the acknowledgement of a frame is not
received within the time period, all frames starting from that frame are
retransmitted.
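The go-back behaviour can likewise be traced with a toy simulation in which frame 1's first transmission is scripted to be lost. The model is deliberately simplified (round-based sending, cumulative ACKs) and is not a real implementation:

```python
# Toy Go-Back-N trace: the receiver accepts only in-order frames, so a frame
# that arrives after a loss is discarded, and the sender retransmits
# everything from the unacknowledged frame onward.

def go_back_n(num_frames, window, lost_first_try):
    sent_log = []
    attempts = {}
    base = expected = 0
    while base < num_frames:
        # send every frame currently inside the window
        for seq in range(base, min(base + window, num_frames)):
            attempts[seq] = attempts.get(seq, 0) + 1
            sent_log.append(seq)
            lost = seq in lost_first_try and attempts[seq] == 1
            if not lost and seq == expected:
                expected += 1      # receiver accepts only the in-order frame
            # lost or out-of-order frames are simply not acknowledged
        base = expected            # cumulative ACK slides the window forward
    return sent_log

log = go_back_n(num_frames=5, window=3, lost_first_try={1})
print(log)   # [0, 1, 2, 1, 2, 3, 4]: frame 2 arrived after the loss of
             # frame 1, was discarded, and is resent together with frame 1
```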

Selective Repeat ARQ


This protocol also provides for sending multiple frames before receiving the
acknowledgement for the first frame. However, here only the erroneous or lost
frames are retransmitted, while the good frames are received and buffered.

Stop and Wait Protocol


Before understanding the Stop and Wait protocol, we first need to know about the error control
mechanism. The error control mechanism is used so that the received data is exactly the
same as the data the sender sent. The error control mechanism is divided into two categories,
i.e., Stop and Wait ARQ and sliding window. The sliding window is further divided into two
categories, i.e., Go Back N and Selective Repeat. Based on the usage, one selects the error
control mechanism, whether stop and wait or sliding window.

What is Stop and Wait protocol?

Here "stop and wait" means that the sender sends the data to the receiver, then stops and
waits until it receives the acknowledgment from the receiver. The stop and wait protocol is a
flow control protocol, where flow control is one of the services of the data link layer.

It is a data-link layer protocol which is used for transmitting the data over the noiseless
channels. It provides unidirectional data transmission which means that either sending or
receiving of data will take place at a time. It provides flow-control mechanism but does not
provide any error control mechanism.

The idea behind this protocol is that when the sender sends a frame, it waits for the
acknowledgment before sending the next frame.

Primitives of Stop and Wait Protocol


The primitives of stop and wait protocol are:

Sender side

Rule 1: Sender sends one data packet at a time.

Rule 2: Sender sends the next packet only when it receives the acknowledgment of the
previous packet.

Therefore, the idea of a stop and wait protocol on the sender's side is very simple, i.e., send
one packet at a time, and do not send another packet before receiving the
acknowledgement.

Receiver side

Rule 1: Receive and then consume the data packet.

Rule 2: When the data packet is consumed, receiver sends the acknowledgment to the
sender.

Therefore, the idea of stop and wait protocol in the receiver's side is also very simple, i.e.,
consume the packet, and once the packet is consumed, the acknowledgment is sent. This
is known as a flow control mechanism.

Working of Stop and Wait protocol

The figure above shows the working of the stop and wait protocol. If there is a sender and a
receiver, the sender sends a packet, known as a data packet. The sender will not send the
second packet without receiving the acknowledgment of the first packet. The receiver sends an
acknowledgment for each data packet it receives. Once the acknowledgment is received, the
sender sends the next packet. This process continues until all the packets have been sent. The
main advantage of this protocol is its simplicity, but it has some disadvantages as well. For
example, if there are 1000 data packets to be sent, they cannot all be sent at once: in the Stop
and Wait protocol, one packet is sent at a time.

Disadvantages of Stop and Wait protocol


The following are the problems associated with a stop and wait protocol:

1. Problems occur due to lost data

Suppose the sender sends the data and the data is lost. The receiver waits a long time for the
data. Since the data is not received, the receiver does not send any acknowledgment, and since
the sender does not receive any acknowledgment, it will not send the next packet. This is the
problem caused by lost data.
In this case, two problems occur:
○ The sender waits an infinite amount of time for an acknowledgment.
○ The receiver waits an infinite amount of time for the data.

2. Problems occur due to lost acknowledgment

Suppose the sender sends the data and it has also been received by the receiver. On
receiving the packet, the receiver sends the acknowledgment. In this case, the
acknowledgment is lost in a network, so there is no chance for the sender to receive the
acknowledgment. There is also no chance for the sender to send the next packet as in stop
and wait protocol, the next packet cannot be sent until the acknowledgment of the previous
packet is received.

In this case, one problem occurs:


○ Sender waits for an infinite amount of time for an acknowledgment.

3. Problem due to the delayed data or acknowledgment

Suppose the sender sends the data and it is received by the receiver. The receiver then
sends the acknowledgment, but the acknowledgment arrives after the timeout period on the
sender's side. Because the acknowledgment arrives late, it can be wrongly taken as the
acknowledgment of some other data packet.

Medium Access sublayer:- The medium access control (MAC) is a sublayer of the data
link layer of the Open Systems Interconnection (OSI) reference model for data
transmission. It is responsible for flow control and multiplexing of the transmission medium. It
controls the transmission of data packets via remotely shared channels and sends data over
the network interface card.

MAC Layer in the OSI Model


The Open System Interconnections (OSI) model is a layered networking framework that
conceptualizes how communications should be done between heterogeneous systems. The
data link layer is the second lowest layer. It is divided into two sublayers −

● The logical link control (LLC) sublayer

● The medium access control (MAC) sublayer

The following diagram depicts the position of the MAC layer −

Functions of MAC Layer


● It provides an abstraction of the physical layer to the LLC and upper layers of the
OSI network.
● It is responsible for encapsulating frames so that they are suitable for transmission
via the physical medium.
● It resolves the addressing of source station as well as the destination station, or
groups of destination stations.
● It performs multiple access resolutions when more than one data frame is to be
transmitted. It determines the channel access methods for transmission.
● It also performs collision resolution and initiates retransmission in case of collisions.
● It generates the frame check sequences and thus contributes to protection against
transmission errors.

MAC Addresses
MAC address or media access control address is a unique identifier allotted to a network
interface controller (NIC) of a device. It is used as a network address for data transmission
within a network segment like Ethernet, Wi-Fi, and Bluetooth.

MAC address is assigned to a network adapter at the time of manufacturing. It is hardwired


or hard-coded in the network interface card (NIC). A MAC address comprises six groups of
two hexadecimal digits, separated by hyphens, colons, or no separators. An example of a
MAC address is 00:0A:89:5B:F0:11.
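The structure of such an address can be illustrated with a small Python sketch, using the example address from the text; the convention that the first three octets identify the manufacturer (the OUI) and the last three the individual device is standard:

```python
# A MAC address is 48 bits: six octets, usually written as hex pairs.
# The first three octets form the OUI (vendor part), the last three are
# assigned per device by the manufacturer.

def parse_mac(text: str) -> bytes:
    digits = text.replace(":", "").replace("-", "")   # accept either separator
    return bytes.fromhex(digits)

mac = parse_mac("00:0A:89:5B:F0:11")     # the example address from the text
assert len(mac) == 6                     # 48 bits = 6 octets
oui, device = mac[:3], mac[3:]
print(oui.hex(":"))     # '00:0a:89'  -> vendor (OUI) part
print(device.hex(":"))  # '5b:f0:11'  -> device-specific part
```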

Channel Allocation Problem


Channel allocation is a process in which a single channel is divided and allotted to multiple
users in order to carry user-specific tasks. The number of users may vary every time the process
takes place. If there are N users and the channel is divided into N equal-sized subchannels,
each user is assigned one portion. If the number of users is small and does not vary over
time, then Frequency Division Multiplexing can be used, as it is a simple and efficient channel
bandwidth allocation technique.

Channel allocation problem can be solved by two schemes: Static Channel Allocation in LANs
and MANs, and Dynamic Channel Allocation.

These are explained below.

1. Static Channel Allocation in LANs and MANs:


It is the classical or traditional approach of allocating a single channel among multiple
competing users using Frequency Division Multiplexing (FDM). If there are N users, the
frequency band is divided into N equal-sized portions (bandwidth), each user being assigned
one portion. Since each user has a private frequency band, there is no interference between
users.

However, it is not efficient to divide the band into a fixed number of chunks when the number
of users is large and varying or the traffic is bursty.

T = 1 / (U*C - L)

T(FDM) = 1 / (U*(C/N) - L/N) = N * T

Where,

T = mean time delay,
C = capacity of the channel,
L = arrival rate of frames,
1/U = bits per frame,
N = number of sub-channels,
T(FDM) = mean time delay using FDM
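A quick numeric check of the FDM delay relation T(FDM) = N * T can be sketched as follows. The numbers are purely illustrative (they are not from the text):

```python
def mean_delay(u: float, c: float, l: float) -> float:
    """Mean time delay T = 1 / (U*C - L) for a single shared channel.

    u : 1/u is the mean frame length in bits (so u*c is the frames/sec the channel carries)
    c : channel capacity in bits/sec
    l : frame arrival rate in frames/sec
    """
    return 1.0 / (u * c - l)

def mean_delay_fdm(u: float, c: float, l: float, n: int) -> float:
    """With FDM, each of the N users gets capacity C/N and offers load L/N."""
    return 1.0 / (u * (c / n) - l / n)

# Assumed example: 10 kbps channel, 100-bit frames (u = 1/100),
# 50 frames/sec total arrival rate, 10 users.
u, c, l, n = 1 / 100, 10_000, 50.0, 10
t = mean_delay(u, c, l)
t_fdm = mean_delay_fdm(u, c, l, n)
print(t, t_fdm, t_fdm / t)  # the ratio is N: FDM makes the mean delay N times worse
```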

2. Dynamic Channel Allocation:


Under dynamic channel allocation, the channel is not divided in advance; it is allotted to a
station on demand. The analysis of such schemes rests on the following assumptions:

1. Station Model:
Assumes that each of N stations independently produces frames. The probability of a frame
being generated in an interval of length Δt is λΔt, where λ is the constant arrival rate of
new frames.

2. Single Channel Assumption:


In this allocation all stations are equivalent and can send and receive on that channel.

3. Collision Assumption:
If two frames overlap in time, the event is a collision. Every collision is an error, and
both frames must be retransmitted. Collisions are the only possible errors.

4. Time can be divided into Slotted or Continuous.

5. Stations can sense a channel is busy before they try it.

Protocol Assumption:

● N independent stations.
● A station is blocked until its generated frame is transmitted.
● The probability of a frame being generated in a period of length Δt is λΔt, where λ is the
arrival rate of frames.
● Only a single channel is available.
● Time can be either continuous or slotted.
● Carrier Sense: A station can sense whether the channel is already busy before transmitting.
● No Carrier Sense: A timeout is used to detect lost data.

Multiple access protocols :


What is a multiple access protocol?
When a sender and receiver have a dedicated link to transmit data packets, data link
control is enough to handle the channel. Suppose there is no dedicated path to
communicate or transfer the data between two devices. In that case, multiple stations
access the channel and transmit data over it simultaneously, which may create
collisions and crosstalk. Hence, a multiple access protocol is required to reduce
collisions and avoid crosstalk between the channels.

For example, suppose there is a classroom full of students. When a teacher asks a
question, all the students (small channels) start answering at the same time (transferring
data simultaneously). Because all the students respond at once, the answers overlap and
information is lost. It is therefore the responsibility of the teacher (the multiple access
protocol) to manage the students and make them answer one at a time.

ALOHA Random Access Protocol

It was designed for wireless LANs (Local Area Networks) but can also be used in any shared
medium to transmit data. Using this method, any station can transmit data across the
network whenever a data frame is available for transmission.

Aloha Rules
1. Any station can transmit data to a channel at any time.

2. It does not require any carrier sensing.

3. Collision and data frames may be lost during the transmission of data through
multiple stations.

4. There is no collision detection in Aloha; instead, acknowledgment of the frames
tells the sender whether a transmission succeeded.

5. It requires retransmission of data after some random amount of time.

Pure Aloha

Pure Aloha is used whenever data is available for sending over a channel at the stations. In
pure Aloha, each station transmits data to the channel without checking whether the
channel is idle, so collisions may occur and data frames can be lost. After transmitting a
frame, the station waits for the receiver's acknowledgment. If the acknowledgment does not
arrive within the specified time, the station assumes the frame has been lost or destroyed,
waits for a random amount of time called the backoff time (Tb), and retransmits the frame.
This repeats until all the data is successfully delivered to the receiver.

1. The total vulnerable time of pure Aloha is 2 * Tfr.

2. Maximum throughput occurs when G = 1/2, and is 18.4%.

3. Successful transmission of data frame is S = G * e ^ - 2 G.


As we can see in the figure above, there are four stations accessing a shared channel
and transmitting data frames. Some frames collide because most stations send their
frames at the same time: only frame 1.1 and frame 2.2 are successfully transmitted to
the receiver, while the other frames are lost or destroyed. Whenever two frames occupy
the shared channel simultaneously, a collision occurs and both frames suffer damage.
Even if only the first bit of a new frame overlaps with the last bit of a frame that has
almost finished, both frames are destroyed and both stations must retransmit them.

Slotted Aloha

Slotted Aloha was designed to improve the efficiency of pure Aloha, because pure Aloha
has a very high probability of frame collision. In slotted Aloha, the shared channel is
divided into fixed time intervals called slots. If a station wants to send a frame, it may
begin only at the start of a slot, and only one frame may be sent in each slot. If a station
misses the beginning of a slot, it must wait until the beginning of the next one. The
possibility of collision remains if two or more stations try to send a frame at the
beginning of the same time slot.

1. Maximum throughput occurs in slotted Aloha when G = 1, and is 37%.

2. The probability of successfully transmitting a data frame in slotted Aloha is S =
G * e ^ - G.

3. The total vulnerable time required in slotted Aloha is Tfr.
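The two throughput formulas can be checked numerically. This sketch evaluates the standard expressions S = G * e^(-2G) for pure Aloha and S = G * e^(-G) for slotted Aloha at their respective optimal loads:

```python
import math

def throughput_pure_aloha(g: float) -> float:
    """Pure ALOHA: S = G * e^(-2G); the vulnerable period is two frame times."""
    return g * math.exp(-2 * g)

def throughput_slotted_aloha(g: float) -> float:
    """Slotted ALOHA: S = G * e^(-G); the vulnerable period is one slot."""
    return g * math.exp(-g)

# Maxima occur at G = 1/2 (pure) and G = 1 (slotted).
print(round(throughput_pure_aloha(0.5), 4))     # 0.1839  -> about 18.4%
print(round(throughput_slotted_aloha(1.0), 4))  # 0.3679  -> about 36.8%
```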

CSMA (Carrier Sense Multiple Access)

Carrier Sense Multiple Access (CSMA) is a media access protocol that senses the traffic on
a channel (idle or busy) before transmitting data. If the channel is idle, the station can
send data to the channel; otherwise, it must wait until the channel becomes idle. This
reduces the chance of a collision on the transmission medium.

CSMA Access Modes

1-Persistent: In the 1-persistent mode of CSMA, a node first senses the shared channel
and, if the channel is idle, immediately sends the data. Otherwise, it continuously
monitors the channel and transmits the frame unconditionally as soon as the channel
becomes idle.

Non-Persistent: In this access mode, a node must sense the channel before transmitting
and, if the channel is idle, it immediately sends the data. Otherwise, the station waits
for a random time (rather than sensing continuously) and, when the channel is then found
to be idle, transmits the frame.

P-Persistent: This is a combination of the 1-persistent and non-persistent modes. In
P-persistent mode, each node senses the channel and, if the channel is idle, sends a
frame with probability p. Otherwise (with probability q = 1 - p), it waits a random time
and tries again at the next time slot.

O-Persistent: In O-persistent mode, a supervisory order assigns each station a
transmission priority. When the channel is found to be idle, each station waits for its
turn before transmitting data.
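The p-persistent decision rule can be sketched as a single step. The function and its return values are illustrative only, not part of any standard:

```python
import random

def p_persistent_decision(channel_idle: bool, p: float, rng=random) -> str:
    """One decision step of p-persistent CSMA (a simplified sketch).

    If the channel is busy, keep waiting. If it is idle, transmit with
    probability p; otherwise (probability q = 1 - p) defer to the next slot.
    """
    if not channel_idle:
        return "wait"          # keep sensing until the channel is idle
    if rng.random() < p:
        return "transmit"      # send the frame in this slot
    return "defer"             # wait for the next time slot and sense again

random.seed(1)
print([p_persistent_decision(True, 0.3) for _ in range(5)])
```

Injecting `rng` makes the random draw easy to control in tests; over many idle slots, roughly a fraction p of the decisions come out as "transmit".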

What are wireless LANs?


Wireless LANs (WLANs) are wireless computer networks that use high-frequency radio
waves instead of cables for connecting the devices within a limited area forming LAN (Local
Area Network). Users connected by wireless LANs can move around within this limited area
such as home, school, campus, office building, railway platform, etc.

Most WLANs are based upon the standard IEEE 802.11 standard or WiFi.

Components of WLANs
The components of WLAN architecture as laid down in IEEE 802.11 are −

​ Stations (STA) − Stations comprise all devices and equipment that are connected to the
wireless LAN. Each station has a wireless network interface controller. A station can be of two
types −
​ Wireless Access Point (WAP or AP)
​ Client
​ Basic Service Set (BSS) − A basic service set is a group of stations communicating at the
physical layer level. BSS can be of two categories −
​ Infrastructure BSS
​ Independent BSS
​ Extended Service Set (ESS) − It is a set of all connected BSS.
​ Distribution System (DS) − It connects access points in ESS.

Types of WLANS
WLANs, as standardized by IEEE 802.11, operate in two basic modes: infrastructure mode and ad
hoc mode.

​ Infrastructure Mode − Mobile devices or clients connect to an access point (AP) that in turn
connects via a bridge to the LAN or Internet. The client transmits frames to other clients via
the AP.
​ Ad Hoc Mode − Clients transmit frames directly to each other in a peer-to-peer fashion.

Advantages of WLANs
● They provide clutter-free homes, offices and other networked places.
● The LANs are scalable in nature, i.e. devices may be added or removed from the network at
greater ease than wired LANs.
● The system is portable within the network coverage. Access to the network is not bounded by
the length of the cables.
● Installation and setup are much easier than wired counterparts.
● The equipment and setup costs are reduced.

Disadvantages of WLANs

● Since radio waves are used for communications, the signals are noisier with more
interference from nearby systems.
● Greater care is needed for encrypting information. Also, they are more prone to
errors. So, they require greater bandwidth than the wired LANs.
● WLANs are slower than wired LANs.

What is Data Link Layer Switching?

Network switching is the process of forwarding data frames or packets from one port to
another leading to data transmission from source to destination. Data link layer is the
second layer of the Open System Interconnections (OSI) model whose function is to divide
the stream of bits from the physical layer into data frames and transmit the frames
according to switching requirements. Switching in the data link layer is done by network
devices called bridges.

Bridges

A data link layer bridge connects multiple LANs (local area networks) together to form a
larger LAN. This process of aggregating networks is called network bridging. A bridge
connects the different components so that they appear as parts of a single network.

The following diagram shows connection by a bridge −


Switching by Bridges
When a data frame arrives at a particular port of a bridge, the bridge examines the
frame’s data link address, or more specifically, its destination MAC address. If the
destination address is valid and the frame needs to be switched, the bridge sends the
frame to the destined port. Otherwise, the frame is discarded.

The bridge is not responsible for end-to-end data transfer; it is concerned with
transmitting the data frame from one hop to the next. Hence, bridges do not examine
the payload field of the frame, and because of this they can switch packets of any
network layer protocol above.

Bridges also connect virtual LANs (VLANs) to make a larger VLAN.

If any segment of the bridged network is wireless, a wireless bridge is used to
perform the switching.

There are three main ways for bridging −

​ simple bridging
​ multi-port bridging
​ learning or transparent bridging
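The learning (transparent) bridging mentioned above can be sketched as a toy model. This is not a real bridge implementation; the class, port numbers, and MAC strings are invented for illustration:

```python
class LearningBridge:
    """A minimal sketch of transparent (learning) bridging.

    The bridge learns which port each source MAC address lives on, then
    forwards frames only to the learned port; unknown or broadcast
    destinations are flooded out of every other port.
    """

    BROADCAST = "FF:FF:FF:FF:FF:FF"

    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}  # MAC address -> port it was last seen on

    def receive(self, in_port, src_mac, dst_mac):
        """Process one frame; return the list of ports to send it out of."""
        self.table[src_mac] = in_port              # learn the sender's port
        out = self.table.get(dst_mac)
        if out is None or dst_mac == self.BROADCAST:
            return sorted(self.ports - {in_port})  # unknown/broadcast: flood
        if out == in_port:
            return []                              # same segment: discard
        return [out]                               # forward to the learned port

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.receive(1, "AA", "BB"))  # BB unknown: flood to ports 2 and 3
print(bridge.receive(2, "BB", "AA"))  # AA was learned on port 1: [1]
```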
UNIT III - Network Layer
The network Layer is the third layer in the OSI model of computer networks. Its main function is to transfer
network packets from the source to the destination. It involves both the source host and the destination host.
At the source, it accepts a packet from the transport layer, encapsulates it in a datagram, and then delivers
the packet to the data link layer so that it can further be sent to the receiver. At the destination, the
datagram is decapsulated, and the packet is extracted and delivered to the corresponding transport layer.

Features of Network Layer


1. The main responsibility of the Network layer is to carry the data packets from the source to the
destination without changing or using them.
2. If the packets are too large for delivery, they are fragmented i.e., broken down into smaller
packets.
3. It decides the route to be taken by the packets to travel from the source to the destination
among the multiple routes available in a network (also called routing).
4. The source and destination addresses are added to the data packets inside the network layer.
Services Offered by Network Layer
The services which are offered by the network layer protocol are as follows:
1. Packetizing
2. Routing
3. Forwarding
1. Packetizing- The process of encapsulating the data received from the upper layers of the network (also
called payload) in a network layer packet at the source and decapsulating the payload from the network
layer packet at the destination is known as packetizing.
The source host adds a header that contains the source and destination address and some other relevant
information required by the network layer protocol to the payload received from the upper layer protocol and
delivers the packet to the data link layer.
The destination host receives the network layer packet from its data link layer, decapsulates the packet, and
delivers the payload to the corresponding upper layer protocol. The routers in the path are not allowed to
change either the source or the destination address. The routers in the path are not allowed to decapsulate
the packets they receive unless they need to be fragmented.

Packetizing
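The encapsulation and decapsulation just described can be sketched with a toy header. The `src|dst|payload` layout here is invented purely for illustration; real network-layer headers are binary and carry much more information:

```python
def encapsulate(src: str, dst: str, payload: bytes) -> bytes:
    """Packetizing at the source: prepend a (toy) header carrying the
    source and destination addresses to the payload from the upper layer."""
    header = f"{src}|{dst}|".encode()
    return header + payload

def decapsulate(packet: bytes):
    """Packetizing at the destination: strip the header, recover the payload."""
    src, dst, payload = packet.split(b"|", 2)  # split on the first two separators only
    return src.decode(), dst.decode(), payload

pkt = encapsulate("10.0.0.1", "10.0.0.2", b"hello")
print(decapsulate(pkt))  # ('10.0.0.1', '10.0.0.2', b'hello')
```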

2. Routing - Routing is the process of moving data from one device to another device.
Routing and forwarding are the two other services offered by the network layer. In a
network, there are a number of routes available from the source to the destination. The
network layer specifies some strategies to find the best possible route; this process is
referred to as routing. A number of routing protocols are used in this process, and they
must be run to help the routers coordinate with each other and establish communication
throughout the network.
Routing

3. Forwarding-Forwarding is simply defined as the action applied by each router when a packet
arrives at one of its interfaces. When a router receives a packet from one of its attached networks,
it needs to forward the packet to another attached network (unicast routing) or to some attached
networks (in the case of multicast routing). Routers are used on the network for forwarding a
packet from the local network to the remote network. So, the process of routing involves packet
forwarding from an entry interface out to an exit interface.

Forwarding

Difference between Routing and Forwarding

Routing:
● Routing is the process of moving data from one device to another device.
● Operates on the Network Layer.
● Works on the basis of the Routing Table.
● Works on protocols like Routing Information Protocol (RIP).

Forwarding:
● Forwarding is simply defined as the action applied by each router when a packet arrives at
one of its interfaces.
● Operates on the Network Layer.
● Checks the forwarding table and works according to it.
● Works on protocols like UDP Encapsulating Security Payloads.

The design issues of the network layer can be elaborated under four heads −


​ Store − and − Forward Packet Switching
​ Services to Transport Layer
​ Providing Connection Oriented Service
​ Providing Connectionless Service
Store − and − Forward Packet Switching
The network layer operates in an environment that uses store-and-forward packet switching. The
node which has a packet to send delivers it to the nearest router. The packet is stored in the
router until it has fully arrived and its checksum has been verified for error detection. Once
this is done, the packet is forwarded to the next router. Since each router needs to store the
entire packet before it can forward it to the next hop, the mechanism is called
store-and-forward switching.
Services to Transport Layer
The network layer provides services to its immediate upper layer, namely the transport layer,
through the network-transport layer interface. The two types of services provided are −
​ Connection − Oriented Service − In this service, a path is setup between the source and the
destination, and all the data packets belonging to a message are routed along this path.
​ Connectionless Service − In this service, each packet of the message is considered as an
independent entity and is individually routed from the source to the destination.
The objectives of the network layer while providing these services are −
​ The services should not be dependent upon the router technology.
​ The router configuration details should not be of a concern to the transport layer.
​ A uniform addressing plan should be made available to the transport layer, whether the
network is a LAN, MAN or WAN.
Providing Connection Oriented Service
In connection-oriented service, a path or route called a virtual circuit is set up between the
source and the destination nodes before the transmission starts. All the packets in the message
are sent along this route. Each packet contains an identifier that denotes the virtual circuit
to which it belongs. When all the packets have been transmitted, the virtual circuit is
terminated and the connection is released. An example of a connection-oriented service is
MultiProtocol Label Switching (MPLS).
Providing Connectionless Service
In connectionless service, since each packet is transmitted independently, each packet contains
its own routing information and is termed a datagram. Networks using datagrams for transmission
are called datagram networks or datagram subnets. No prior setup of routes is needed before
transmitting a message. Each datagram belonging to the message follows its own individual route
from the source to the destination. An example of a connectionless service is the Internet
Protocol (IP).
Routing algorithm

○ In order to transfer the packets from source to the destination, the network layer must determine the
best route through which packets can be transmitted.
○ Whether the network layer provides datagram service or virtual circuit service, the main job of the
network layer is to provide the best route. The routing protocol provides this job.
○ The routing protocol is a routing algorithm that provides the best path from the source to the
destination. The best path is the path that has the "least-cost path" from source to the destination.
○ Routing is the process of forwarding the packets from source to the destination but the best route to
send the packets is determined by the routing algorithm.

Classification of a Routing algorithm

The Routing algorithm is divided into two categories:

○ Adaptive Routing algorithm


○ Non-adaptive Routing algorithm

Adaptive Routing algorithm

○ An adaptive routing algorithm is also known as dynamic routing algorithm.


○ This algorithm makes the routing decisions based on the topology and network traffic.
○ The main parameters related to this algorithm are hop count, distance and estimated transit time.

An adaptive routing algorithm can be classified into three parts:

○ Centralized algorithm: It is also known as a global routing algorithm as it computes the least-cost
path between source and destination by using complete and global knowledge about the network. This
algorithm takes the connectivity between the nodes and link cost as input, and this information is
obtained before actually performing any calculation. Link state algorithm is referred to as a
centralized algorithm since it is aware of the cost of each link in the network.
○ Isolation algorithm: It is an algorithm that obtains the routing information by using local
information rather than gathering information from other nodes.
○ Distributed algorithm: It is also known as a decentralized algorithm as it computes the least-cost
path between source and destination in an iterative and distributed manner. In the decentralized
algorithm, no node has the knowledge about the cost of all the network links. In the beginning, a node
contains the information only about its own directly attached links and through an iterative process of
calculation computes the least-cost path to the destination. A Distance vector algorithm is a
decentralized algorithm as it never knows the complete path from source to the destination, instead it
knows the direction through which the packet is to be forwarded along with the least cost path.

Non-Adaptive Routing algorithm

○ Non Adaptive routing algorithm is also known as a static routing algorithm.


○ When the network is booted, the routing information is stored in the routers.
○ Non Adaptive routing algorithms do not take the routing decision based on the network topology or
network traffic.

The Non-Adaptive Routing algorithm is of two types:

Flooding: In case of flooding, every incoming packet is sent out on all the outgoing links except
the one on which it arrived. The disadvantage of flooding is that a node may receive several
copies of a particular packet.

Random walks: In case of random walks, a packet is sent by the node to one of its neighbors
chosen at random. An advantage of using random walks is that it uses alternative routes very
efficiently.

Dijkstra’s Algorithm
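Dijkstra's algorithm is the least-cost path computation used by centralized (link-state) routing: with complete knowledge of every link cost, each node can compute its distance to every other node. A minimal sketch, with an invented example topology:

```python
import heapq

def dijkstra(graph, source):
    """Least-cost distances from source, given graph[node] = {neighbor: link_cost}."""
    dist = {source: 0}
    heap = [(0, source)]                 # priority queue of (distance, node)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                     # stale queue entry; skip it
        for neighbor, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd      # found a cheaper path to neighbor
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Example topology (assumed, not from the text), with symmetric link costs.
graph = {
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 1},
    "D": {"B": 4, "C": 1},
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```

Note that A reaches C at cost 3 via B, not at cost 5 over the direct link: the algorithm always prefers the least-cost path.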
Flooding - Flooding is a non-adaptive routing technique following this simple method: when a data packet
arrives at a router, it is sent to all the outgoing links except the one it has arrived on. For example, let us consider
the network in the figure, having six routers that are connected through transmission lines.

Using flooding technique −


​ An incoming packet to A, will be sent to B, C and D.
​ B will send the packet to C and E.
​ C will send the packet to B, D and F.
​ D will send the packet to C and F.
​ E will send the packet to F.
​ F will send the packet to C and E.
Types of Flooding -Flooding may be of three types −
​ Uncontrolled flooding − Here, each router unconditionally transmits the incoming data packets to
all its neighbors.
​ Controlled flooding − They use some methods to control the transmission of packets to the
neighboring nodes. The two popular algorithms for controlled flooding are Sequence Number
Controlled Flooding (SNCF) and Reverse Path Forwarding (RPF).
​ Selective flooding − The routers transmit the incoming packets only along those paths
which are heading approximately in the right direction, instead of along every available path.
Advantages of Flooding
​ It is very simple to set up and implement, since a router needs to know only its neighbors.
​ It is extremely robust. Even when a large number of routers malfunction, the packets can
find a way to reach the destination.
​ All nodes which are directly or indirectly connected are visited, so no node can be left
out. This is the main criterion for broadcast messages.
​ The shortest path is always found by flooding, since every possible path is tried in
parallel.
Limitations of Flooding
​ Flooding tends to create an infinite number of duplicate data packets, unless some measures are
adopted to damp packet generation.
​ It is wasteful if a single destination needs the packet, since it delivers the data packet to all nodes
irrespective of the destination.
​ The network may be clogged with unwanted and duplicate data packets. This may hamper
delivery of other data packets.
Network Layer Routing
When a device has multiple paths to reach a destination, it always selects one path by
preferring it over others. This selection process is termed routing. Routing is done by
special network devices called routers, or it can be done by means of software processes. The
software-based routers have limited functionality and limited scope.
A router is always configured with some default route. A default route tells the router where
to forward a packet if there is no route found for specific destination. In case there are
multiple path existing to reach the same destination, router can make decision based on the
following information:
​ Hop Count
​ Bandwidth
​ Metric
​ Prefix-length
​ Delay
Routes can be statically configured or dynamically learnt. One route can be configured to be
preferred over others.
Unicast routing
Most of the traffic on the internet and intranets, known as unicast data or unicast traffic, is
sent with a specified destination. Routing unicast data over the internet is called unicast
routing. It is the simplest form of routing because the destination is already known; the
router just has to look up the routing table and forward the packet to the next hop.

Broadcast routing
By default, the broadcast packets are not routed and forwarded by the routers on any
network. Routers create broadcast domains. But it can be configured to forward broadcasts in
some special cases. A broadcast message is destined to all network devices.
Broadcast routing can be done in two ways (algorithm):
​ A router creates a data packet and then sends it to each host one by one. In this case,
the router creates multiple copies of a single data packet with different destination
addresses. All packets are sent as unicast, but because they are sent to all hosts, it
simulates broadcasting.
This method consumes lots of bandwidth, and the router must know the destination address
of each node.
​ Secondly, when router receives a packet that is to be broadcasted, it simply floods
those packets out of all interfaces. All routers are configured in the same way.

This method is easy on router's CPU but may cause the problem of duplicate packets
received from peer routers.
Reverse path forwarding is a technique, in which router knows in advance about its
predecessor from where it should receive broadcast. This technique is used to detect
and discard duplicates.
Multicast Routing
Multicast routing is a special case of broadcast routing, with significant differences and
challenges. In broadcast routing, packets are sent to all nodes even if they do not want
them. But in multicast routing, the data is sent only to the nodes which want to receive the
packets.

The router must know that there are nodes which wish to receive the multicast packets (or
stream); only then should it forward them. Multicast routing uses the spanning tree protocol
to avoid looping.
Multicast routing also uses the reverse path forwarding technique to detect and discard
duplicates and loops.
Anycast Routing
Anycast packet forwarding is a mechanism where multiple hosts can have the same logical
address. When a packet destined to this logical address is received, it is sent to the host
which is nearest in the routing topology.
Anycast routing is done with the help of a DNS server. Whenever an anycast packet is
received, the DNS server is queried about where to send it. DNS provides the IP address
which is the nearest IP configured on it.
Unicast Routing Protocols
There are two kinds of routing protocols available to route unicast packets:
​ Distance Vector Routing Protocol
Distance Vector is a simple routing protocol which takes routing decisions based on the
number of hops between source and destination. A route with a lower number of hops is
considered the best route. Every router advertises its set of best routes to other routers.
Ultimately, all routers build up their network topology based on the advertisements of
their peer routers.
For example, Routing Information Protocol (RIP).
​ Link State Routing Protocol
Link State protocol is a slightly more complicated protocol than Distance Vector. It
takes into account the states of the links of all the routers in a network. This technique
helps routers build a common graph of the entire network. All routers then calculate
their best paths for routing purposes. For example, Open Shortest Path First (OSPF) and
Intermediate System to Intermediate System (IS-IS).
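The distance-vector exchange described above boils down to a Bellman-Ford update: on receiving a neighbor's advertised distances, adopt any route that is cheaper when reached through that neighbor. A minimal sketch (the table layout and example numbers are my own):

```python
def distance_vector_update(own_table, neighbor, cost_to_neighbor, neighbor_vector):
    """One distance-vector (Bellman-Ford) update step.

    own_table maps destination -> (cost, next_hop). For each destination the
    neighbor advertises, adopt the route through the neighbor if it is cheaper.
    Returns True if the table changed (i.e. we should re-advertise).
    """
    changed = False
    for dest, advertised_cost in neighbor_vector.items():
        new_cost = cost_to_neighbor + advertised_cost
        if dest not in own_table or new_cost < own_table[dest][0]:
            own_table[dest] = (new_cost, neighbor)
            changed = True
    return changed

table = {"B": (1, "B")}                # we reach neighbor B directly at cost 1
advert = {"B": 0, "C": 2, "D": 5}      # B advertises its own distances
distance_vector_update(table, "B", 1, advert)
print(table)  # {'B': (1, 'B'), 'C': (3, 'B'), 'D': (6, 'B')}
```

Repeating this step at every router, for every advertisement, converges on least-cost routes without any router ever knowing the full topology.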
Multicast Routing Protocols
Unicast routing protocols use graphs while Multicast routing protocols use trees, i.e.
spanning tree to avoid loops. The optimal tree is called the shortest path spanning tree.
​ DVMRP - Distance Vector Multicast Routing Protocol
​ MOSPF - Multicast Open Shortest Path First
​ CBT - Core Based Tree
​ PIM - Protocol independent Multicast
Protocol Independent Multicast is commonly used now. It has two flavors:
​ PIM Dense Mode
This mode uses source-based trees. It is used in dense environments such as LAN.
​ PIM Sparse Mode
This mode uses shared trees. It is used in sparse environments such as WAN.
Unit - V Application Layer
An application layer protocol defines how the application processes running on different systems
pass the messages to each other.
DNS stands for Domain Name System.
○ DNS is a directory service that provides a mapping between the name of a host on the
network and its numerical address.
○ DNS is required for the functioning of the internet.
○ Each node in a tree has a domain name, and a full domain name is a sequence of symbols
specified by dots.
○ DNS is a service that translates the domain name into IP addresses. This allows the users of
networks to utilize user-friendly names when looking for other hosts instead of remembering
the IP addresses.
○ For example, suppose the FTP site at EduSoft had an IP address of 132.147.165.50; most
people would reach this site by specifying ftp.EduSoft.com. The domain name is therefore
easier to remember, and more stable, than the IP address.
DNS is a TCP/IP protocol used on different platforms. The domain name space is divided into three
different sections: generic domains, country domains, and inverse domain.

Generic Domains
○ It defines the registered hosts according to their generic behavior.
○ Each node in a tree defines the domain name, which is an index to the DNS database.
○ It uses three-character labels, and these labels describe the organization type.

Label Description

aero Airlines and aerospace companies

biz Businesses or firms

com Commercial Organizations

coop Cooperative business Organizations


edu Educational institutions

gov Government institutions

info Information service providers

int International Organizations

mil Military groups

museum Museum & other nonprofit organizations

name Personal names

net Network Support centers

org Nonprofit Organizations

pro Professional individual Organizations

Country Domain
The format of country domain is the same as a generic domain, but it uses two-character country
abbreviations (e.g., us for the United States) in place of three character organizational
abbreviations.
Inverse Domain
The inverse domain is used for mapping an address to a name. Suppose a server has received a
request from a client, and the server holds the files of authorized clients only. To determine
whether the client is on the authorized list, it sends a query to the DNS server asking it to
map the address to a name.

Working of DNS
○ DNS is a client/server network communication protocol. DNS clients send requests to the
server, while DNS servers send responses to the clients.
○ Client requests containing a name which is converted into an IP address are known as forward
DNS lookups, while requests containing an IP address which is converted into a name are known
as reverse DNS lookups.
○ DNS implements a distributed database to store the names of all the hosts available on the
internet.
○ If a client like a web browser sends a request containing a hostname, then a piece of
software such as a DNS resolver sends a request to the DNS server to obtain the IP address
of the hostname. If a DNS server does not contain the IP address associated with the hostname,
it forwards the request to another DNS server. When the IP address arrives at the resolver,
the resolver completes the request over the Internet Protocol.
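The forward and reverse lookups described above can be exercised through the operating system's resolver using Python's standard socket module (real hostnames require network connectivity; "localhost" resolves locally):

```python
import socket

def forward_lookup(hostname: str) -> str:
    """Forward DNS lookup: name -> IP address, via the system resolver."""
    return socket.gethostbyname(hostname)

def reverse_lookup(ip: str) -> str:
    """Reverse DNS lookup: IP address -> name (raises if no PTR record exists)."""
    return socket.gethostbyaddr(ip)[0]

print(forward_lookup("localhost"))  # typically 127.0.0.1
```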

What is E-mail? E-mail is defined as the transmission of messages on the Internet. It is one of the
most commonly used features over communications networks that may contain text, files, images, or other
attachments. Generally, it is information that is stored on a computer sent through a network to a specified
individual or group of individuals.
Email messages are conveyed through email servers using multiple protocols within the TCP/IP suite. For
example, SMTP (Simple Mail Transfer Protocol) is used to send messages, whereas other protocols such as
IMAP or POP are used to retrieve messages from a mail server. To log in to your mail account, you just need
to enter a valid email address and password, and the mail servers handle sending and receiving messages.
Most webmail servers automatically configure your mail account, so you are only required to enter your
email address and password. However, you may need to manually configure each account if you use an
email client like Microsoft Outlook or Apple Mail. In addition to the email address and password, you may
also need to enter the incoming and outgoing mail servers and the correct port numbers for each one.
Email messages include three components, which are as follows:
○ Message envelope: It depicts the email's electronic format.
○ Message header: It contains email subject line and sender/recipient information.
○ Message body: It comprises images, text, and other file attachments.
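The header/body split above can be illustrated with Python's standard email library. The addresses and subject are made-up placeholders, and the server named in the comment is hypothetical:

```python
from email.message import EmailMessage

# Build a message: the headers and body live inside the message itself.
msg = EmailMessage()
msg["From"] = "alice@example.com"     # header: sender information
msg["To"] = "bob@example.com"         # header: recipient information
msg["Subject"] = "Meeting notes"      # header: subject line
msg.set_content("Hi Bob,\nSee the notes below.")  # body: the message text

# The envelope, by contrast, is supplied separately when the message is
# handed to an SMTP server for routing, e.g. (placeholder server name):
#   smtplib.SMTP("smtp.example.com").send_message(msg)
```

This mirrors the three components: the envelope travels with the SMTP transaction, while header and body are part of the stored message.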
The original email standard was only capable of supporting plain text messages. In modern times, email
supports HTML (Hypertext Markup Language), which allows emails to use the same formatting as
websites. HTML email can contain links, images, CSS layouts, and file attachments ("email attachments")
along with messages. Most mail servers enable users to send several attachments with each message.
Attachments were typically limited to one megabyte in the early days of email, but nowadays many mail
servers are able to support email attachments of 20 megabytes or more.
In 1971, Ray Tomlinson sent the first e-mail, a test message to himself. The email contained text
"something like QWERTYUIOP." Although he sent the e-mail to himself, it was still transmitted through
ARPANET. By 1996, more electronic mail was being sent than postal mail.
Differences between email and webmail
Today the term email is commonly used to describe both browser-based and non-browser-based electronic
mail. AOL and Gmail are browser-based electronic mail, whereas Outlook for Office 365 is
non-browser-based. Earlier, however, email was defined more narrowly as a non-browser program that
required a dedicated client and email server. Non-browser email offers some advantages, including
enhanced security, integration with corporate software platforms, and a lack of advertisements.
Uses of email
Email can be used in different ways: it can be used to communicate either within an organization or
personally, including between two people or a large group of people. Most people benefit from
communicating by email with colleagues or friends or individuals or small groups. It allows you to
communicate with others around the world and send and receive images, documents, links, and other
attachments. Additionally, it offers benefits to users to communicate with the flexibility on their own schedule.
There is another benefit of using email; if you use it to communicate between two people or small groups that
will be beneficial to remind participants of approaching due dates and time-sensitive activities and send
professional follow-up emails after appointments. Users can also use the email to quickly remind all
upcoming events or inform the group of a time change. Furthermore, it can be used by companies or
organizations to convey information to large numbers of employees or customers. Mainly, email is used for
newsletters, where mailing list subscribers are sent email marketing campaigns directly and promoted
content from a company.
Email can also be used to move a latent sale into a completed purchase or turn leads into paying customers.
For example, a company may create an email that is sent automatically to online customers who have
products in their shopping cart. This email can remind consumers that they have items in their cart and
encourage them to purchase those items before they run out of stock. Emails are also used to get reviews
from customers after a purchase, for example by including a survey question asking them to rate the
quality of service.
Advantages of Email
There are many advantages of email, which are as follows:
○ Cost-effective: Email is a very cost-effective service to communicate with others as there are several
email services available to individuals and organizations for free of cost. Once a user is online, it does
not include any additional charge for the services.
○ Email offers users the benefit of accessing email from anywhere at any time if they have an Internet
connection.
○ Email offers you an asynchronous communication process, which enables you to send a response at a
convenient time. Also, it offers users a better option to communicate easily regardless of different
schedules.
○ Speed and simplicity: Email can be composed very easily with the correct information and contacts.
Also, with minimum lag time, it can be exchanged quickly.
○ Mass sending: You can send a message easily to large numbers of people through email.
○ Email exchanges can be saved for future retrieval, which allows users to keep important
conversations or confirmations in their records, where they can be searched and quickly
retrieved when needed.
○ Email provides a simple user interface and enables users to categorize and filter their messages. This
can help you recognize unwanted emails like junk and spam mail. Also, users can find specific
messages easily when they are needed.
○ As compared to traditional posts, emails are delivered extremely fast.
○ Email is beneficial for the planet, as it is paperless. It reduces the cost of paper and helps to save the
environment by reducing paper usage.
○ It also offers the benefit of attaching the original message when you reply to an email. This is
beneficial when you get hundreds of emails a day, so the recipient knows what you are talking about.
○ Furthermore, emails are beneficial for advertising products. As email is a form of communication,
organizations or companies can interact with a lot of people and inform them in a short time.
Disadvantages of Email
○ Impersonal: As compared to other forms of communication, emails are less personal. For example,
when you talk to anyone over the phone or meeting face to face is more appropriate for
communicating than email.
○ Misunderstandings: Email includes only text, and there is no tone of voice or body language to
provide context. Therefore, misunderstandings can occur easily with email. If someone sends a joke
by email, it can be taken seriously. Also, a well-meaning message can come across as rude or
aggressive and create the wrong impression. Additionally, if someone uses short abbreviations and
terse descriptions in an email, it can easily be misinterpreted.
○ Malicious Use: An email can be sent by anyone who merely has your email address. Sometimes an
unauthorized person may send you mail, which can be harmful in terms of stealing your personal
information. Such senders can also use email to spread gossip or false information.
○ Accidents Will Happen: With email, you can make serious mistakes by clicking the wrong button in a
hurry. For instance, instead of sending sensitive information to a single person, you can accidentally
send it to a large group of people simply by clicking the wrong name in an address list. The
information is then disclosed, which can be harmful and generate big trouble in the workplace.
○ Spam: Although email features have improved in recent years, there are still big problems with
unsolicited advertising and spam arriving through email. It can easily become overwhelming and
takes time and energy to control.
○ Information Overload: Because it is very easy to send email to many people at a time, email can
create information overload. In many modern workplaces this is a major problem: a lot of
information has to be moved, and it can be impossible to tell whether an email is important. Email
also needs organization and upkeep; another unpleasant side of email is returning from vacation to
find hundreds of unopened messages in your inbox.
○ Viruses: Although viruses can travel to devices in many ways, email is one of the most common
ways for viruses to enter and infect devices. Sometimes the mail you receive carries a virus in an
attached document, and the virus can infect your system when you open the email and click the
attached link. Furthermore, infected emails can come from an anonymous person or even a trusted
friend or contact.
○ Pressure to Respond: If you get emails and you do not answer them, the sender can get annoyed and
think you are ignoring them. Thus, this can be a reason to put pressure on you to keep opening emails
and then respond in some way.
○ Time Consuming: Reading, writing, and responding to emails can take up vast amounts of time and
energy. Many modern workers spend most of their time on email, which can make it take longer to
complete their actual work.
○ Overlong Messages: Generally, email is a source of communication with the intention of brief
messages. There are some people who write overlong messages that can take much more time than
required.
○ Insecure: Many hackers want to gain your important information, and email is a common place to
seek sensitive data such as political, financial, or personal documents and messages. In recent
times, various high-profile cases have shown how insecure email can be against information theft.
Different types of Email
There are many types of email; such are as follows:
Newsletters: According to a study by Clutch, the newsletter is the most common type of email; newsletters
are routinely sent to all mailing list subscribers, either daily, weekly, or monthly. These emails often contain
content from the blog or website, links curated from other sources, and selected content that the company
has recently published.
Typically, Newsletter emails are sent on a consistent schedule, and they offer businesses the option to convey
important information to their client through a single source. Newsletters might also incorporate upcoming
events or new webinars from the company, or other updates.
Lead Nurturing: Lead-nurturing emails are a series of related emails that marketers use to take users on a
journey that may impact their buying behavior. These emails are typically sent over a period of several days or
weeks. Lead-nurturing emails are also known as trigger campaigns, which are used for solutions in an
attempt to move any prospective sale into a completed purchase and educate potential buyers on the
services. These emails are not only helpful for converting emails but also drive engagement. Furthermore,
lead-nurturing emails are initiated by a potential buyer taking initial action, such as clicking links on a
promotional email or downloading a free sample.
Promotional emails: This is the most common type of B2B (Business to Business) email, used to inform
the email list of your new or existing products or services. These types of emails aim at creating new or
repeat customers, speeding up the buying process, or encouraging contacts to take some type of action.
They provide some critical benefits to buyers, such as a free month of service, reduced or waived fees for
managed services, or a percentage off the purchase price.
Standalone Emails: These emails are as popular as newsletter emails, but they have a limitation. If you
send an email with multiple links or blurbs, your main call-to-action can weaken. Your subscriber may click
on the first link or two in your email but never come back to the others, or simply skip your email and
move on.
Onboarding emails: An onboarding email, also known as a post-sale email, is a message used to
strengthen customer loyalty. Users receive these emails right after subscribing. Onboarding emails are
sent to buyers to familiarize and educate them about how to use a product effectively. Additionally, when
clients face large-scale service deployments, these emails help facilitate user adoption.
Transactional: These emails are related to account activity or a commercial transaction and sent from one
sender to one recipient. Some examples of transactional email are purchase confirmations, password
reminder emails, and personalized product notifications. These emails are used when you have any kind of
e-commerce component to your business. As compared to any other type of email, the transactional email
messages have 8x the opens and clicks.
Plain-Text Emails: This is a simple email that includes no images, graphics, or formatting; it contains only
text. Text-only messages may be worth trying if you have only ever sent fancy formatted emails.
According to HubSpot, although people say they prefer fully designed emails with various images,
plain-text emails with less HTML won out in every A/B test. In fact, HTML emails have lower open and
click-through rates, and plain-text emails can be great for blog content, event invitations, and survey or
feedback requests. Even if you do not send plain-text-only emails, you can boost your open and
click-through rates by simplifying your emails and including fewer images.
Welcome emails: This is a type of B2B email and a common part of onboarding that helps users get
acquainted with the brand. These emails can improve subscriber loyalty, as they include additional
information that helps the new subscriber with a business objective. Generally, welcome emails are sent to
buyers who have subscribed to a business's opt-in activities, such as a blog, mailing list, or webinar. These
emails can also help businesses build better relationships with customers.

World Wide Web (WWW)


The World Wide Web is abbreviated as WWW and is commonly known as the web. The WWW
was initiated at CERN (the European Organization for Nuclear Research) in 1989.
WWW can be defined as the collection of different websites around the world, containing
different information shared via local servers(or computers).
History:
It is a project created by Tim Berners-Lee in 1989 so that researchers could work together effectively
at CERN. An organization named the World Wide Web Consortium (W3C) was later founded for the
further development of the web. This organization is directed by Tim Berners-Lee, also known as the
father of the web.
System Architecture:
From the user’s point of view, the web consists of a vast, worldwide connection of documents or
web pages. Each page may contain links to other pages anywhere in the world. The pages can be
retrieved and viewed by using browsers, of which Internet Explorer, Netscape Navigator, and Google
Chrome are among the popular ones. The browser fetches the page requested, interprets the text and
formatting commands on it, and displays the page, properly formatted, on the screen.
The basic model of how the web works is shown in the figure below. Here the browser is
displaying a web page on the client machine. When the user clicks on a line of text that is linked to
a page on the abd.com server, the browser follows the hyperlink by sending a message to the
abd.com server asking it for the page.

Working of WWW:
The World Wide Web is based on several different technologies: Web browsers, Hypertext
Markup Language (HTML) and Hypertext Transfer Protocol (HTTP).
A Web browser is used to access web pages. Web browsers can be defined as programs which
display text, data, pictures, animation and video on the Internet. Hyperlinked resources on the
World Wide Web can be accessed using software interfaces provided by Web browsers. Initially,
Web browsers were used only for surfing the Web but now they have become more universal.
Web browsers can be used for several tasks including conducting searches, mailing, transferring
files, and much more. Some of the commonly used browsers are Internet Explorer, Opera Mini, and
Google Chrome.
Features of WWW:
● HyperText Information System
● Cross-Platform
● Distributed
● Open Standards and Open Source
● Uses Web Browsers to provide a single interface for many services
● Dynamic, Interactive and Evolving.
● “Web 2.0”

Components of the Web: There are 3 components of the web:

1. Uniform Resource Locator (URL): serves as a system for identifying and locating resources on the web.
2. HyperText Transfer Protocol (HTTP): specifies communication of browser and server.
3. Hyper Text Markup Language (HTML): defines the structure, organization and content
of a webpage.

HTTP
○ HTTP stands for HyperText Transfer Protocol.

○ It is a protocol used to access the data on the World Wide Web (www).
○ The HTTP protocol can be used to transfer the data in the form of plain text, hypertext, audio, video, and so on.

○ This protocol is known as HyperText Transfer Protocol because of its efficiency in a hypertext
environment, where there are rapid jumps from one document to another.

○ HTTP is similar to FTP in that it also transfers files from one host to another. But HTTP is simpler than
FTP because it uses only one connection, i.e., there is no separate control connection for transferring files.

○ HTTP is used to carry the data in the form of MIME-like format.

○ HTTP is similar to SMTP in that data is transferred between client and server. HTTP differs from SMTP
in the way messages are sent from the client to the server and from the server to the client: SMTP
messages are stored and forwarded, while HTTP messages are delivered immediately.

Features of HTTP:

○ Connectionless protocol: HTTP is a connectionless protocol. The HTTP client initiates a request and
waits for a response from the server. When the server receives the request, it processes the request and
sends back a response, after which the client disconnects. The connection between client and server
exists only for the duration of the current request and response.

○ Media independent: HTTP is media independent: any type of data can be sent as long as both the client
and server know how to handle the data content. Both the client and server are required to specify the
content type in the MIME-type header.

○ Stateless: HTTP is a stateless protocol as both the client and server know each other only during the current
request. Due to this nature of the protocol, both the client and server do not retain the information between
various requests of the web pages.

HTTP Transactions
The above figure shows the HTTP transaction between client and server. The client initiates a transaction by
sending a request message to the server. The server replies to the request message by sending a response
message.

Messages

HTTP messages are of two types: request and response. Both the message types follow the same message
format.

Request Message: The request message is sent by the client that consists of a request line, headers, and
sometimes a body.
Response Message: The response message is sent by the server to the client and consists of a status line,
headers, and sometimes a body.
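A rough sketch of what these two message types look like on the wire, using placeholder host and content (the parsing helper below is illustrative, not a full HTTP parser):

```python
# A minimal HTTP request message: request line, headers, blank line.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "\r\n"
)

# A canned HTTP response message: status line, headers, blank line, body.
raw_response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html>hello</html>"
)

def parse_response(raw):
    # The blank line (\r\n\r\n) separates the head from the body.
    head, _, body = raw.partition("\r\n\r\n")
    lines = head.split("\r\n")
    status_line = lines[0]
    headers = dict(line.split(": ", 1) for line in lines[1:])
    return status_line, headers, body
```

Note how both message types share the same shape: a first line, a header block, and an optional body after a blank line.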

Uniform Resource Locator (URL)

○ A client that wants to access a document on the internet needs an address. To facilitate access to
documents, HTTP uses the concept of the Uniform Resource Locator (URL).

○ The Uniform Resource Locator (URL) is a standard way of specifying any kind of information on the internet.

○ The URL defines four parts: method, host computer, port, and path.

○ Method: The method is the protocol used to retrieve the document from a server. For example, HTTP.

○ Host: The host is the computer where the information is stored, and the computer is given an alias name. Web
pages are mainly stored in the computers and the computers are given an alias name that begins with the
characters "www". This field is not mandatory.

○ Port: The URL can also contain the port number of the server, but it's an optional field. If the port number is
included, then it must come between the host and path and it should be separated from the host by a colon.

○ Path: Path is the pathname of the file where the information is stored. The path itself contains slashes that
separate the directories from the subdirectories and files.
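The four parts can be pulled out of a URL with Python's standard urllib.parse module. The URL here is a made-up example, and note that urlparse calls the protocol part the "scheme" rather than the "method":

```python
from urllib.parse import urlparse

# A sample URL containing all four parts described above.
url = "http://www.example.com:8080/docs/guide/index.html"
parts = urlparse(url)

method = parts.scheme   # the protocol, e.g. "http"
host = parts.hostname   # the alias name of the host computer
port = parts.port       # optional; None when omitted from the URL
path = parts.path       # pathname of the file, with slashes
```

Leaving the port out of the URL (e.g. "http://www.example.com/docs/") makes `parts.port` None, matching the optional-field rule above.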

FTP
○ FTP stands for File transfer protocol.
○ FTP is a standard internet protocol provided by TCP/IP used for transmitting the files from one host
to another.
○ It is mainly used for transferring the web page files from their creator to the computer that acts as a
server for other computers on the internet.
○ It is also used for downloading the files to the computer from other servers.
Objectives of FTP
○ It provides the sharing of files.
○ It is used to encourage the use of remote computers.
○ It transfers the data more reliably and efficiently.
Why FTP?
Although transferring files from one system to another is very simple and straightforward, sometimes it can
cause problems. For example, two systems may have different file conventions. Two systems may have
different ways to represent text and data. Two systems may have different directory structures. FTP protocol
overcomes these problems by establishing two connections between hosts. One connection is used for data
transfer, and another connection is used for the control connection.
Mechanism of FTP

The above figure shows the basic model of the FTP. The FTP client has three components: the user interface,
control process, and data transfer process. The server has two components: the server control process and
the server data transfer process.
There are two types of connections in FTP:

○ Control Connection: The control connection uses very simple rules for communication. Through
control connection, we can transfer a line of command or line of response at a time. The control
connection is made between the control processes. The control connection remains connected
during the entire interactive FTP session.
○ Data Connection: The Data Connection uses very complex rules as data types may vary. The data
connection is made between data transfer processes. The data connection opens when a command
comes for transferring the files and closes when the file is transferred.
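As a small illustration of the control connection's simple line-based rules, the sketch below splits an FTP reply line into its three-digit code and text. The reply strings are typical examples, not output from a real server, and the helpers are my own names:

```python
def parse_ftp_reply(line):
    # Each control-connection reply begins with a three-digit code,
    # followed by a space and human-readable text (per RFC 959).
    code, _, text = line.partition(" ")
    return int(code), text

def is_success(code):
    # The first digit classifies the reply: 2xx indicates success,
    # 4xx and 5xx indicate errors.
    return 200 <= code < 300
```

This one-command, one-reply exchange is exactly why the text above calls the control connection's rules "very simple" compared with the data connection.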
FTP Clients
○ FTP client is a program that implements a file transfer protocol which allows you to transfer files
between two hosts on the internet.
○ It allows a user to connect to a remote host and upload or download the files.
○ It has a set of commands that we can use to connect to a host, transfer the files between you and
your host and close the connection.
○ The FTP program is also available as a built-in component in a Web browser. This GUI based FTP
client makes the file transfer very easy and also does not require remembering the FTP commands.
Advantages of FTP:
○ Speed: One of the biggest advantages of FTP is speed. FTP is one of the fastest ways to transfer files
from one computer to another computer.
○ Efficient: It is more efficient as we do not need to complete all the operations to get the entire file.
○ Security: To access the FTP server, we need to login with the username and password. Therefore, we
can say that FTP is more secure.
○ Back & forth movement: FTP allows us to transfer the files back and forth. Suppose you are a
manager of the company, you send some information to all the employees, and they all send
information back on the same server.
Disadvantages of FTP:
○ The standard requirement of the industry is that all the FTP transmissions should be encrypted.
However, not all the FTP providers are equal and not all the providers offer encryption. So, we will
have to look out for the FTP providers that provide encryption.
○ FTP serves two operations, i.e., sending and receiving large files on a network. However, the size
of a file that can be sent is limited to 2 GB. It also doesn't allow you to run simultaneous transfers
to multiple receivers.
○ Passwords and file contents are sent in clear text that allows unwanted eavesdropping. So, it is quite
possible that attackers can carry out the brute force attack by trying to guess the FTP password.
○ It is not compatible with every system.

Software Defined Networking (SDN)


Software-defined networking (SDN) is a new networking paradigm that separates the network's control and data
planes. The traditional networking architecture has a tightly coupled relationship between the data and
control planes. This means that network devices, such as routers and switches, are responsible for
forwarding packets and determining how the network should operate.
With SDN, the control plane is decoupled from the data plane and implemented in software, allowing for
centralized network control. The control plane, also called the network controller, is responsible for making
decisions about how traffic should be forwarded, based on the overall network policy. The data plane, on the
other hand, is responsible for forwarding traffic based on the decisions made by the control plane.
In SDN, network devices are called switches, and they are typically simple, low-cost devices that forward
traffic based on the instructions received from the network controller. The controller communicates with the
switches using a standard protocol, such as OpenFlow, which allows the controller to program the switches to
forward traffic in a particular way.
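As a toy illustration of this control/data split (not real OpenFlow messages), the sketch below models a switch as a pure table lookup programmed by a separate controller; all class names and the rule format are invented for illustration:

```python
class Switch:
    """Data plane: forwards packets purely by flow-table lookup."""
    def __init__(self):
        self.flow_table = {}            # destination -> action

    def install_rule(self, dst, action):
        # Rules are pushed down by the controller, never decided locally.
        self.flow_table[dst] = action

    def forward(self, packet):
        # Unknown destinations are punted to the controller for a decision.
        return self.flow_table.get(packet["dst"], "send_to_controller")

class Controller:
    """Control plane: decides policy centrally and programs the switches."""
    def program(self, switch, policy):
        for dst, action in policy.items():
            switch.install_rule(dst, action)
```

The key point the sketch captures is that the switch contains no decision logic of its own: change the policy in the controller and the data plane's behaviour changes with it.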
What is a Data Plane?
In computer networking, the data plane is the part of a network device responsible for forwarding data packets
from one interface to another. It is also referred to as the forwarding plane or the user plane.
The data plane operates at the lowest level of the network stack, typically at Layer 2 (the Data Link layer) and
Layer 3 (the Network layer) of the OSI model. Its main responsibility is to forward packets from one interface
to another based on the destination address contained in the packet header.
In traditional networking, network devices such as routers and switches have a tightly coupled control plane
and data plane. This means that the devices are responsible for both forwarding packets and making
decisions about how the network should operate. However, in software-defined networking (SDN), the control
plane is separated from the data plane, allowing for centralized control of the network.
In SDN, the data plane is implemented in network devices, such as switches, and is responsible for forwarding
packets based on the instructions received from the centralized control plane. This allows for greater
flexibility and scalability in the network, as the data plane can be reprogrammed in real-time to accommodate
changing network conditions.

What is a Control Plane?


In computer networking, the control plane is part of a network device or system that is responsible for
managing and controlling the flow of network traffic. It is responsible for making decisions about how packets
are forwarded across the network based on factors such as network topology, routing protocols, and network
policies.
The control plane operates at a higher network stack level than the data plane, typically at Layer 3 (the
Network layer) and above in the OSI model. It is responsible for routing, switching, and traffic engineering
tasks.
In traditional networking, the control plane and data plane are tightly coupled, meaning that network devices
such as routers and switches are responsible for forwarding packets and making decisions about how the
network should operate. However, in software-defined networking (SDN), the control plane is separated from
the data plane, allowing for centralized network control.
In SDN, the controller communicates with the network devices in the data plane using a standard protocol,
such as OpenFlow, to program the devices to forward packets in a particular way.
The benefits of a separate control plane in SDN include greater network flexibility and scalability, as the
network policy can be changed in real-time to meet changing network conditions. It also allows for easier
network management, as the network can be managed from a centralized location.
SDN Architecture
The architecture of software-defined networking (SDN) consists of three main layers: the application layer, the
control layer, and the infrastructure layer. Each layer has a specific role and interacts with the other layers to
manage and control the network.
1. Infrastructure Layer: The infrastructure layer is the bottom layer of the SDN architecture, also known
as the data plane. It consists of physical and virtual network devices such as switches, routers, and
firewalls that are responsible for forwarding network traffic based on the instructions received from
the control plane.
2. Control Layer: The control layer is the middle layer of the SDN architecture, also known as the control
plane. It consists of a centralized controller that communicates with the infrastructure layer devices
and is responsible for managing and configuring the network.
The controller interacts with the devices in the infrastructure layer using protocols such as OpenFlow
to program the forwarding behaviour of the switches and routers. The controller uses network
policies and rules to make decisions about how traffic should be forwarded based on factors such as
network topology, traffic patterns, and quality of service requirements.
3. Application Layer: The application layer is the top layer of the SDN architecture and is responsible for
providing network services and applications to end-users. This layer consists of various network
applications that interact with the control layer to manage the network.
Examples of applications that can be deployed in an SDN environment include network virtualization, traffic
engineering, security, and monitoring. The application layer can be used to create customized network
services that meet specific business needs.
The main benefit of the SDN architecture is its flexibility and ability to centralize control of the network. The
separation of the control plane from the data plane enables network administrators to configure and manage
the network more easily and in a more granular way, allowing for greater network agility and faster response
times to changes in network traffic.
Advantages of SDN:
Software-defined networking (SDN) offers several advantages over traditional networking architectures,
including:
○ Centralized Network Control: One of the key benefits of SDN is that it centralizes the control of the
network in a single controller, making it easier to manage and configure the network. This allows
network administrators to define and enforce network policies in a more granular way, resulting in
better network security, performance, and reliability.
○ Programmable Network: In an SDN environment, network devices are programmable and can be
reconfigured on the fly to meet changing network requirements. This allows network administrators
to quickly adapt the network to changing traffic patterns and demands, resulting in better network
performance and efficiency.
○ Cost Savings: With SDN, network administrators can use commodity hardware to build a network,
reducing the cost of proprietary network hardware. Additionally, the centralization of network control
can reduce the need for manual network management, leading to cost savings in labor and
maintenance.
○ Enhanced Network Security: The centralized control of the network in SDN makes it easier to detect
and respond to security threats. The use of network policies and rules allows administrators to
implement fine-grained security controls that can mitigate security risks.
○ Scalability: SDN makes it easier to scale the network to meet changing traffic demands. With the
ability to programmatically control the network, administrators can quickly adjust the network to
handle more traffic without the need for manual intervention.
○ Simplified Network Management: SDN can simplify network management by abstracting the
underlying network hardware and presenting a logical view of the network to administrators. This
makes it easier to manage and troubleshoot the network, resulting in better network uptime and
reliability.
Overall, SDN offers a more flexible, programmable, and centralized approach to networking that can result in
significant cost savings, enhanced network security, and improved network performance and reliability.
Disadvantages of SDN
While software-defined networking (SDN) has several advantages over traditional networking, there are also
some potential disadvantages that organizations should be aware of. Here are some of the main
disadvantages of SDN:
○ Complexity: SDN can be more complex than traditional networking because it involves a more
sophisticated set of technologies and requires specialized skills to manage. For example, the use of a
centralized controller to manage the network requires a deep understanding of the SDN architecture
and protocols.
○ Dependency on the Controller: The centralized controller is a critical component of SDN, and if it fails,
the entire network could go down. This means that organizations need to ensure that the controller is
highly available and that they have a robust backup and disaster recovery plan in place.
○ Compatibility: Some legacy network devices may not be compatible with SDN, which means that
organizations may need to replace or upgrade these devices to take full advantage of the benefits of
SDN.
○ Security: While SDN can enhance network security, it can also introduce new security risks. For
example, a single point of control could be an attractive target for attackers, and the programmability
of the network could make it easier for attackers to manipulate traffic.
○ Vendor Lock-In: SDN solutions from different vendors may not be interoperable, which could lead to
vendor lock-in. This means that organizations may be limited in their ability to switch to another
vendor or integrate new solutions into their existing network.
○ Performance: The centralized control of the network in SDN can introduce latency, which could
impact network performance in certain situations. Additionally, the overhead of the SDN controller
could impact the performance of the network as the network scales.
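The control/data plane split described above can be illustrated with a toy Python sketch (all class and method names here are hypothetical, not a real SDN API): the controller holds the policy and pushes match→action rules into each switch's flow table, while a switch does nothing but table lookups.

```python
# Toy illustration of SDN's control/data plane split (hypothetical names).
# The controller computes forwarding rules; switches only match and forward.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}            # match (dst address) -> action (out port)

    def install_rule(self, dst, out_port):
        self.flow_table[dst] = out_port

    def forward(self, packet):
        # Data plane: a pure table lookup, no local routing logic.
        return self.flow_table.get(packet["dst"], "drop")

class Controller:
    """Centralized control plane: one place to define network policy."""
    def __init__(self, switches):
        self.switches = switches

    def push_policy(self, policy):
        # policy: dst -> {switch name: out port}
        for dst, hops in policy.items():
            for sw_name, port in hops.items():
                self.switches[sw_name].install_rule(dst, port)

switches = {"s1": Switch("s1"), "s2": Switch("s2")}
ctrl = Controller(switches)
ctrl.push_policy({"10.0.0.2": {"s1": 2, "s2": 1}})

print(switches["s1"].forward({"dst": "10.0.0.2"}))   # 2
print(switches["s1"].forward({"dst": "10.0.0.9"}))   # drop
```

Note how reconfiguring the whole network is a single `push_policy` call at the controller, which is the centralization benefit (and the single point of failure) discussed above.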

Wireless Sensor Network (WSN)


Wireless Sensor Network (WSN) is an infrastructure-less wireless network consisting of a large
number of wireless sensors deployed in an ad-hoc manner to monitor system, physical, or
environmental conditions.
Sensor nodes in a WSN have an onboard processor that manages and monitors the
environment in a particular area. They are connected to the Base Station, which acts as the
processing unit in the WSN system.
The Base Station in a WSN system is connected through the Internet to share data.

WSN can be used for processing, analysis, storage, and mining of the data.
Applications of WSN:

1. Internet of Things (IoT)
2. Surveillance and Monitoring for security, threat detection
3. Environmental temperature, humidity, and air pressure
4. Noise Level of the surrounding
5. Medical applications like patient monitoring
6. Agriculture
7. Landslide Detection
Challenges of WSN:

1. Quality of Service
2. Security Issue
3. Energy Efficiency
4. Network Throughput
5. Performance
6. Ability to cope with node failure
7. Cross-layer optimisation
8. Scalability to large scale of deployment
A modern Wireless Sensor Network (WSN) faces several challenges, including:
● Limited power and energy: WSNs are typically composed of battery-powered sensors
that have limited energy resources. This makes it challenging to ensure that the
network can function for long periods of time without the need for frequent battery
replacements.
● Limited processing and storage capabilities: Sensor nodes in a WSN are typically
small and have limited processing and storage capabilities. This makes it difficult to
perform complex tasks or store large amounts of data.
● Heterogeneity: WSNs often consist of a variety of different sensor types and nodes
with different capabilities. This makes it challenging to ensure that the network can
function effectively and efficiently.
● Security: WSNs are vulnerable to various types of attacks, such as eavesdropping,
jamming, and spoofing. Ensuring the security of the network and the data it collects is a
major challenge.
● Scalability: WSNs often need to be able to support a large number of sensor nodes
and handle large amounts of data. Ensuring that the network can scale to meet these
demands is a significant challenge.
● Interference: WSNs are often deployed in environments where there is a lot of
interference from other wireless devices. This can make it difficult to ensure reliable
communication between sensor nodes.
● Reliability: WSNs are often used in critical applications, such as monitoring the
environment or controlling industrial processes. Ensuring that the network is reliable
and able to function correctly in all conditions is a major challenge.
Components of WSN:
1. Sensors:
Sensors in a WSN capture environmental variables and are used for data acquisition.
Sensor signals are converted into electrical signals.
2. Radio Nodes:
A radio node receives the data produced by the sensors and sends it to the WLAN
access point. It consists of a microcontroller, transceiver, external memory, and power source.
3. WLAN Access Point:
It receives the data sent wirelessly by the radio nodes, generally through the internet.
4. Evaluation Software:
The data received by the WLAN access point is processed by software called
Evaluation Software, which presents reports to the users and supports further
analysis, storage, and mining of the data.
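The data path through these components can be sketched in plain Python (all class and function names here are hypothetical, and the radio link is simulated, not real):

```python
import random
random.seed(42)   # fixed seed so the simulated readings are reproducible

# Toy WSN data path: sensor -> radio node -> base station.

def sensor_read(sensor_id):
    # A sensor converts an environmental variable into a signal;
    # here we simply simulate a temperature reading.
    return {"id": sensor_id, "temp_c": round(random.uniform(15, 35), 1)}

class RadioNode:
    """Collects readings from attached sensors and relays them upstream."""
    def __init__(self, sensor_ids):
        self.sensor_ids = sensor_ids

    def collect(self):
        return [sensor_read(s) for s in self.sensor_ids]

class BaseStation:
    """Acts as the processing unit: stores and summarizes received data."""
    def __init__(self):
        self.store = []

    def receive(self, readings):
        self.store.extend(readings)

    def average_temp(self):
        return sum(r["temp_c"] for r in self.store) / len(self.store)

node = RadioNode(["s1", "s2", "s3"])
base = BaseStation()
base.receive(node.collect())
print(f"{len(base.store)} readings, avg {base.average_temp():.1f} C")
```

The base station's `average_temp` stands in for the "evaluation software" step: raw sensor values only become useful once they are aggregated and analyzed.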
Advantages of Wireless Sensor Networks (WSN):
Low cost: WSNs consist of small, low-cost sensors that are easy to deploy, making them a
cost-effective solution for many applications.
Wireless communication: WSNs eliminate the need for wired connections, which can be costly
and difficult to install. Wireless communication also enables flexible deployment and
reconfiguration of the network.
Energy efficiency: WSNs use low-power devices and protocols to conserve energy, enabling
long-term operation without the need for frequent battery replacements.
Scalability: WSNs can be scaled up or down easily by adding or removing sensors, making them
suitable for a range of applications and environments.
Real-time monitoring: WSNs enable real-time monitoring of physical phenomena in the
environment, providing timely information for decision making and control.
Disadvantages of Wireless Sensor Networks (WSN):
Limited range: The range of wireless communication in WSNs is limited, which can be a challenge
for large-scale deployments or in environments with obstacles that obstruct radio signals.
Limited processing power: WSNs use low-power devices, which may have limited processing
power and memory, making it difficult to perform complex computations or support advanced
applications.
Data security: WSNs are vulnerable to security threats, such as eavesdropping, tampering, and
denial of service attacks, which can compromise the confidentiality, integrity, and availability of
data.
Interference: Wireless communication in WSNs can be susceptible to interference from other
wireless devices or radio signals, which can degrade the quality of data transmission.
Deployment challenges: Deploying WSNs can be challenging due to the need for proper sensor
placement, power management, and network configuration, which can require significant time and
resources.

Introduction to Internet of Things


IoT stands for Internet of Things. It refers to the interconnectedness of physical devices, such as
appliances and vehicles, that are embedded with software, sensors, and connectivity which
enables these objects to connect and exchange data. This technology allows for the collection
and sharing of data from a vast network of devices, creating opportunities for more efficient and
automated systems.

Internet of Things (IoT) is the networking of physical objects that contain electronics embedded
within their architecture in order to communicate and sense interactions amongst each other or
with respect to the external environment. In the upcoming years, IoT-based technology will offer
advanced levels of services and practically change the way people lead their daily lives.
Advancements in medicine, power, gene therapies, agriculture, smart cities, and smart homes are
just a few of the categorical examples where IoT is strongly established.

IoT is a system of interrelated things, computing devices, mechanical and digital machines,
objects, animals, or people that are provided with unique identifiers and the ability to transfer
data over a network without requiring human-to-human or human-to-computer interaction.

Four Key Components of IoT

● Device or sensor
● Connectivity
● Data processing
● Interface
IoT is a network of interconnected computing devices which are embedded in everyday
objects, enabling them to send and receive data.

Over 9 billion 'Things' (physical objects) are currently connected to the Internet. In the
near future, this number is expected to rise to a whopping 20 billion.

Main Components Used in IoT

● Low-power embedded systems: Less battery consumption and high performance are the
inverse factors that play a significant role during the design of electronic systems.
● Sensors: Sensors are the major part of any IoT application. A sensor is a physical device
that measures and detects certain physical quantities and converts them into a signal
which can be provided as an input to a processing or control unit for analysis purposes.
Different types of Sensors

● Temperature Sensors
● Image Sensors
● Gyro Sensors
● Obstacle Sensors
● RF Sensor
● IR Sensor
● MQ-02/05 Gas Sensor
● LDR Sensor
● Ultrasonic Distance Sensor
● Control Units: A control unit is a small computer on a single integrated circuit containing a
microprocessor or processing core, memory, and programmable input/output
devices/peripherals. It is responsible for the major processing work of IoT devices, and all
logical operations are carried out here.
● Cloud computing: Data collected through IoT devices is massive, and this data has to
be stored on a reliable storage server. This is where cloud computing comes into play.
The data is processed and learned, giving more room for us to discover where things
like electrical faults/errors are within the system.
● Availability of big data: We know that IoT relies heavily on sensors, especially in
real-time. As these electronic devices spread throughout every field, their usage is
going to trigger a massive flux of big data.
● Networking connection: In order to communicate, internet connectivity is a must,
where each physical object is represented by an IP address. However, there are only a
limited number of addresses available according to the IP naming. Due to the growing
number of devices, this naming system will not be feasible anymore. Therefore,
researchers are looking for an alternative naming system to represent each
physical object.

Ways of Building IoT

There are two ways of building IoT:

● Form a separate internetwork including only physical objects.
● Make the Internet ever more expansive, but this requires hard-core technologies such
as rigorous cloud computing and rapid big data storage (expensive).

In the near future, IoT will become broader and more complex in terms of scope. It will change
the world in terms of "anytime, anyplace, anything in connectivity."

IoT Enablers

● RFIDs: use radio waves in order to electronically track the tags attached to each
physical object.
● Sensors: devices that are able to detect changes in an environment (ex: motion
detectors).
● Nanotechnology: as the name suggests, these are tiny devices with dimensions
usually less than a hundred nanometers.
● Smart networks: (ex: mesh topology).

Working with IoT Devices

● Collect and Transmit Data: For this purpose, sensors are widely used as per
requirements in different application areas.
● Actuate devices based on triggers produced by sensors or processing devices: If
certain conditions are satisfied, or a certain trigger is activated according to the user's
requirements, the action to perform is carried out by actuator devices.
● Receive Information: From network devices, users or devices can take certain
information for their analysis and processing purposes.
● Communication Assistance: Communication assistance is the phenomenon of
communication between two networks, or between two or more IoT devices of the
same or different networks. This can be achieved by different communication
protocols like MQTT, Constrained Application Protocol (CoAP), ZigBee, FTP, HTTP, etc.
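A publish/subscribe exchange of the kind MQTT provides can be sketched in plain Python. This is a hypothetical toy broker, not a real MQTT API; a real deployment would use an MQTT client library and a broker process:

```python
from collections import defaultdict

# Minimal MQTT-style publish/subscribe sketch (hypothetical toy names).

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # Deliver the payload to every subscriber of this topic.
        for cb in self.subscribers[topic]:
            cb(topic, payload)

received = []
broker = Broker()
broker.subscribe("home/temperature", lambda t, p: received.append((t, p)))
broker.publish("home/temperature", 22.5)   # delivered to the subscriber
broker.publish("home/humidity", 40)        # no subscriber: silently dropped

print(received)   # [('home/temperature', 22.5)]
```

The key idea this illustrates is decoupling: the sensor publishing a reading does not need to know which (or how many) devices are listening, which is why topic-based protocols like MQTT fit IoT well.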
Characteristics of IoT

● Massively scalable and efficient.
● IP-based addressing will no longer be suitable in the upcoming future.
● An abundance of physical objects is present that do not use IP, so IoT is made
possible.
● Devices typically consume less power. When not in use, they should be
automatically programmed to sleep.
● A device that is connected to another device right now may not be connected at
another instant of time.
● Intermittent connectivity: IoT devices aren't always connected. In order to save
bandwidth and battery consumption, devices will be powered off periodically
when not in use. Otherwise, connections might turn unreliable and thus prove to
be inefficient.

Desired Quality of any IoT Application

Interconnectivity

It is the first basic requirement in any IoT infrastructure. Connectivity should be guaranteed
from any device on any network; only then can devices in a network communicate with
each other.

Heterogeneity

There can be diversity in IoT-enabled devices, like different hardware and software
configurations or different network topologies or connections, but they should connect and
interact with each other despite so much heterogeneity.

Dynamic in Nature

IoT devices should dynamically adapt themselves to changing surroundings, like
different situations and different contexts.

Self-adapting and self-configuring technology

For example, a surveillance camera. It should be flexible enough to work in different weather
conditions and different light situations (morning, afternoon, or night).

Intelligence

Just data collection is not enough in IoT; extraction of knowledge from the generated data
is very important. For example, sensors generate data, but that data will only be useful if
it is interpreted properly. So intelligence is one of the key characteristics of IoT, because
data interpretation is the major part of any IoT application: without data processing we
can't draw any insights from data. Hence, big data is also one of the most enabling
technologies in the IoT field.

Scalability

The number of elements (devices) connected to IoT zones is increasing day by day.
Therefore, an IoT setup should be capable of handling the expansion. It can either
expand capability in terms of processing power, storage, etc. (vertical scaling) or scale
horizontally by multiplying with easy cloning.

Identity

Each IoT device has a unique identity (e.g., an IP address). This identity is helpful in
communication, tracking, and knowing the status of the things. If there is no identification, it
will directly affect the security and safety of the system, because without differentiation we
can't identify with whom a network is connected or with whom we have to communicate.
So there should be a clear and appropriate identification technology available between IoT
networks and devices.

Safety

Sensitive personal details of a user might be compromised when the devices are
connected to the Internet, so data security is a major challenge. This could cause a loss to
the user. Equipment in the huge IoT network may also be at risk. Therefore, equipment
safety is also critical.

Architecture

It should be hybrid, supporting different manufacturers' products to function in the IoT
network.

As a quick note, IoT incorporates trillions of sensors, billions of smart systems, and
millions of applications.

Application Domains

IoT is currently found in four different popular domains:

1) Manufacturing/Industrial business - 40.2%
2) Healthcare - 30.3%
3) Security - 7.7%
4) Retail - 8.3%

Modern Applications

● Smart Grids and energy saving
● Smart cities
● Smart homes/Home automation
● Healthcare
● Earthquake detection
● Radiation detection/hazardous gas detection
● Smartphone detection
● Water flow monitoring
● Traffic monitoring
● Wearables
● Smart door lock protection system
● Robots and Drones
● Healthcare and Hospitals, Telemedicine applications
● Security
● Biochip Transponders (for animals on farms)
● Heart monitoring implants (e.g., pacemaker, real-time ECG tracking)
● Agriculture
● Industry
Advantages of IoT
● Improved efficiency and automation of tasks.
● Increased convenience and accessibility of information.
● Better monitoring and control of devices and systems.
● Greater ability to gather and analyze data.
● Improved decision-making.
● Cost savings.

Disadvantages of IoT
● Security concerns and potential for hacking or data breaches.
● Privacy issues related to the collection and use of personal data.
● Dependence on technology and potential for system failures.
● Limited standardization and interoperability among devices.
● Complexity and increased maintenance requirements.
● High initial investment costs.
● Limited battery life on some devices.
● Concerns about job displacement due to automation.
● Limited regulation and legal framework for IoT, which can lead to confusion and
uncertainty.
UNIT IV Transport Layer
○ The transport layer is the 4th layer from the top.
○ The main role of the transport layer is to provide the communication services directly to the
application processes running on different hosts.
○ The transport layer provides a logical communication between application processes running
on different hosts. Although the application processes on different hosts are not physically
connected, application processes use the logical communication provided by the transport
layer to send the messages to each other.
○ The transport layer protocols are implemented in the end systems but not in the network
routers.
○ A computer network provides more than one protocol to the network applications. For example,
TCP and UDP are two transport layer protocols, each providing a different set of services to the
application layer.
○ All transport layer protocols provide a multiplexing/demultiplexing service. Some also provide
other services, such as reliable data transfer, bandwidth guarantees, and delay guarantees.
○ Each of the applications in the application layer has the ability to send a message by using TCP
or UDP. The application communicates by using either of these two protocols. Both TCP and
UDP will then communicate with the internet protocol in the internet layer. The applications can
read and write to the transport layer. Therefore, we can say that communication is a two-way
process.
Services provided by the Transport Layer
The services provided by the transport layer are similar to those of the data link layer. The data link
layer provides the services within a single network while the transport layer provides the
services across an internetwork made up of many networks. The data link layer controls the
physical layer while the transport layer controls all the lower layers.
The services provided by the transport layer protocols can be divided into five categories:
○ End-to-end delivery
○ Addressing
○ Reliable delivery
○ Flow control
○ Multiplexing

End-to-end delivery:
- The transport layer transmits the entire message to the destination. Therefore, it ensures
the end-to-end delivery of an entire message from a source to the destination.
Reliable delivery:
The transport layer provides reliability services by retransmitting the lost and damaged packets.
The reliable delivery has four aspects:
○ Error control
○ Sequence control
○ Loss control
○ Duplication control

Error Control
○ The primary role of reliability is error control. In reality, no transmission will be 100 percent
error-free. Therefore, transport layer protocols are designed to provide error-free
transmission.
○ The data link layer also provides the error handling mechanism, but it ensures only
node-to-node error-free delivery. However, node-to-node reliability does not ensure the
end-to-end reliability.
○ The data link layer checks for errors on each individual link. If an error is introduced inside
one of the routers, this error will not be caught by the data link layer, which only detects
errors introduced between the beginning and end of a single link. Therefore, the
transport layer checks for errors end-to-end to ensure that the packet has
arrived correctly.

Sequence Control
○ The second aspect of reliability is sequence control which is implemented at the transport layer.
○ On the sending end, the transport layer is responsible for ensuring that the packets received
from the upper layers can be used by the lower layers. On the receiving end, it ensures that the
various segments of a transmission can be correctly reassembled.

Loss Control
- Loss Control is the third aspect of reliability. The transport layer ensures that all the
fragments of a transmission arrive at the destination, not just some of them. On the sending
end, all the fragments of a transmission are given sequence numbers by the transport layer.
These sequence numbers allow the receiver's transport layer to identify any missing
segments.

Duplication Control
- Duplication Control is the fourth aspect of reliability. The transport layer guarantees that
no duplicate data arrives at the destination. Sequence numbers are used to identify the
lost packets; similarly, it allows the receiver to identify and discard duplicate segments.
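The loss-control and duplication-control roles of sequence numbers can be sketched in a few lines of Python (a simplified receiver-side check; the function name and the bare-integer segments are illustrative assumptions, not a real protocol implementation):

```python
# Receiver-side loss and duplicate detection using sequence numbers.
# Segments are modeled as bare integer sequence numbers for simplicity.

def check_stream(segments, expected_total):
    seen = set()
    duplicates = []
    for seq in segments:
        if seq in seen:
            duplicates.append(seq)    # duplication control: flag repeats for discard
        seen.add(seq)
    # Loss control: any expected sequence number never seen is missing.
    missing = [s for s in range(expected_total) if s not in seen]
    return missing, duplicates

# Segment 2 was lost in transit and segment 1 arrived twice.
missing, dups = check_stream([0, 1, 1, 3, 4], expected_total=5)
print(missing, dups)   # [2] [1]
```

The same sequence numbers thus serve both purposes: gaps reveal lost segments (to be retransmitted), and repeats reveal duplicates (to be discarded).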
Flow Control
Flow control is used to prevent the sender from overwhelming the receiver. If the receiver is
overloaded with too much data, then the receiver discards the packets and asks for the
retransmission of packets. This increases network congestion and thus, reduces the system
performance. The transport layer is responsible for flow control. It uses the sliding window
protocol that makes the data transmission more efficient as well as it controls the flow of data
so that the receiver does not become overwhelmed. Sliding window protocol is byte oriented
rather than frame oriented.
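The sliding-window idea can be sketched as follows. This is a deliberately simplified model (hypothetical function name, no timers, no retransmission, and the whole window is acknowledged at once) that only shows how the window bounds how much data is outstanding:

```python
# Toy sliding-window sender: at most `window` unacknowledged segments
# in flight at once. Simplified: each whole burst is ACKed before the
# window slides; real TCP slides per-ACK and is byte oriented.

def sliding_window_rounds(total_segments, window):
    """Return the segments sent in each window-sized round."""
    rounds = []
    base = 0
    while base < total_segments:
        burst = list(range(base, min(base + window, total_segments)))
        rounds.append(burst)          # send up to `window` segments...
        base = burst[-1] + 1          # ...then slide once they are ACKed
    return rounds

print(sliding_window_rounds(7, 3))   # [[0, 1, 2], [3, 4, 5], [6]]
```

The receiver controls the window size, so a slow receiver shrinks the bursts and is never overwhelmed; that is exactly the flow-control role described above.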

Multiplexing
The transport layer uses the multiplexing to improve transmission efficiency.
Multiplexing can occur in two ways:
○ Upward multiplexing: Upward multiplexing means multiple transport layer connections use the
same network connection. To make it more cost-effective, the transport layer sends several
transmissions bound for the same destination along the same path; this is achieved through
upward multiplexing.

○ Downward multiplexing: Downward multiplexing means one transport layer connection uses
the multiple network connections. Downward multiplexing allows the transport layer to split a
connection among several paths to improve the throughput. This type of multiplexing is used
when networks have a low or slow capacity.
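The companion operation to multiplexing is demultiplexing at the receiver: delivering each incoming segment to the process bound to its destination port. A minimal sketch (the process table and handler names are hypothetical):

```python
# Port-based demultiplexing sketch: the transport layer delivers each
# incoming segment to the process bound to the destination port.

processes = {}          # port -> handler (toy process table)

def bind(port, handler):
    processes[port] = handler

def demultiplex(segment):
    handler = processes.get(segment["dst_port"])
    if handler is None:
        # No listener on this port: comparable to a refused connection.
        return "no process listening (connection refused)"
    return handler(segment["payload"])

bind(80, lambda data: f"web server got: {data}")
bind(53, lambda data: f"dns server got: {data}")

print(demultiplex({"dst_port": 80, "payload": "GET /"}))
print(demultiplex({"dst_port": 9999, "payload": "???"}))
```

This is why many application processes on one host can share a single network connection: the port number in each segment tells the transport layer which process the data belongs to.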

Addressing
○ According to the layered model, the transport layer interacts with the functions of the session
layer. Many protocols combine session, presentation, and application layer protocols into a
single layer known as the application layer. In these cases, delivery to the session layer means
the delivery to the application layer. Data generated by an application on one machine must be
transmitted to the correct application on another machine. In this case, addressing is provided
by the transport layer.
○ The transport layer provides the user address which is specified as a station or port. The port
variable represents a particular TS user of a specified station known as a Transport Service
access point (TSAP). Each station has only one transport entity.
○ The transport layer protocols need to know which upper-layer protocols are communicating.

Transport Layer Protocols


The transport layer is represented majorly by the TCP and UDP protocols. Today almost all operating
systems support multiprocessing, multi-user environments. Transport layer protocols provide
connections to individual ports, known as protocol ports. Transport layer protocols work above the IP
protocol and deliver data packets from IP services to the destination port, and from the originating
port to destination IP services. Below are the protocols used at the transport layer.
1. UDP
UDP stands for User Datagram Protocol. User Datagram Protocol provides nonsequential
transmission of data. It is a connectionless transport protocol. The UDP protocol is used in applications
where the speed and size of the data transmitted are considered more important than security and
reliability. A User Datagram is the packet produced by the User Datagram Protocol. The UDP protocol
adds a checksum for error control, transport-level addresses, and length information to the data received
from the layer above it. Services provided by the User Datagram Protocol (UDP) are connectionless service,
faster delivery of messages, checksum, and process-to-process communication.

Advantages of UDP
● UDP also provides multicast and broadcast transmission of data.
● UDP protocol is preferred more for small transactions such as DNS lookup.
● It is a connectionless protocol, therefore there is no compulsion to have a connection-oriented
network.
● UDP provides fast delivery of messages.
Disadvantages of UDP
● In the UDP protocol there is no guarantee that the packet is delivered.
● The UDP protocol offers no recovery from packet loss.
● The UDP protocol has no congestion control mechanism.
● The UDP protocol does not provide sequential transmission of data.
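A minimal UDP exchange can be demonstrated with Python's standard `socket` module. The sketch below runs entirely over the loopback interface; note there is no connection setup at all, just a datagram sent and received:

```python
import socket

# Minimal UDP exchange over loopback: connectionless, message-oriented.

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))            # port 0 = let the OS pick a free port
addr = server.getsockname()              # (ip, chosen_port)

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"dns-style query", addr)  # fire and forget: no handshake

data, peer = server.recvfrom(1024)       # one datagram = one whole message
print(data)                              # b'dns-style query'

client.close()
server.close()
```

Each `sendto` is an independent datagram with no delivery guarantee, which is exactly why UDP suits small, latency-sensitive transactions such as DNS lookups.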
2. TCP
TCP stands for Transmission Control Protocol. The TCP protocol provides transport layer services to
applications. TCP is a connection-oriented protocol: a connection is established between the sender
and the receiver before data is exchanged, in the form of a virtual circuit between the two. The data
transmitted by the TCP protocol is in the form of a continuous byte stream. A unique sequence number is
assigned to each byte. With the help of this number, a positive acknowledgment is received from the
receiver. If the acknowledgment is not received within a specific period, the data is retransmitted to the
specified destination.

Advantages of TCP
● TCP supports multiple routing protocols.
● TCP protocol operates independently of that of the operating system.
● TCP protocol provides the features of error control and flow control.
● TCP provides a connection-oriented protocol and provides the delivery of data.
Disadvantages of TCP
● TCP protocol cannot be used for broadcast or multicast transmission.
● TCP protocol has no block boundaries.
● No clear separation is being offered by TCP protocol between its interface, services, and
protocols.
● In TCP/IP replacement of protocol is difficult.
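A minimal TCP client/server pair over loopback, using Python's standard `socket` and `threading` modules, shows the connection-oriented model in practice. The three-way handshake happens inside the OS during `connect()`/`accept()`; the application only sees a reliable byte stream:

```python
import socket
import threading

# Minimal TCP echo over loopback: connection-oriented byte stream.

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))             # port 0 = OS picks a free port
server.listen(1)                          # passive open (LISTEN state)
addr = server.getsockname()

def serve():
    conn, _ = server.accept()             # handshake completes here
    conn.sendall(conn.recv(1024).upper()) # echo the data back, uppercased
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(addr)                      # active open (sends SYN)
client.sendall(b"hello tcp")
reply = client.recv(1024)
print(reply)                              # b'HELLO TCP'

client.close()
t.join()
server.close()
```

Unlike the UDP case, the data only flows after `connect()` succeeds, and delivery, ordering, and retransmission are handled by TCP itself rather than by the application.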

3. SCTP
SCTP stands for Stream Control Transmission Protocol. SCTP is a connection-oriented protocol. Stream
Control Transmission Protocol transmits data from sender to receiver in full-duplex mode. SCTP is a
unicast protocol that provides point-to-point connections and can use different hosts for reaching the
destination. The SCTP protocol provides a simpler way to build a connection over a wireless network
and provides reliable transmission of data. SCTP enables reliable and easier telephone conversation
over the internet. The SCTP protocol supports the feature of multihoming, i.e., it can establish more
than one connection path between the two points of communication and does not depend on the IP
layer. The SCTP protocol also improves security by not allowing half-open connections.

Advantages of SCTP
● SCTP provides a full duplex connection. It can send and receive the data simultaneously.
● SCTP protocol possesses the properties of both TCP and UDP protocol.
● SCTP protocol does not depend on the IP layer.
● SCTP is a secure protocol.

TCP Connection Management

The connection is established in TCP using the three-way handshake, as discussed earlier. One side,
say the server, passively waits for an incoming connection by executing the LISTEN and ACCEPT
primitives, either specifying a particular source or nobody in particular. The other side executes a
CONNECT primitive, specifying the IP address and port to which it wants to connect, the maximum
TCP segment size it will accept, and optionally some user data (for example, a password).

The CONNECT primitive transmits a TCP segment with the SYN bit on and the ACK bit off and waits
for a response.

The sequence of TCP segments sent in the typical case is: SYN, then SYN+ACK, then ACK.

When the segment sent by Host-1 reaches the destination, i.e., Host-2, the receiving server checks
to see if there is a process that has done a LISTEN on the port given in the destination port field. If
not, it sends a response with the RST bit on to refuse the connection. Otherwise, it hands the TCP
segment to the listening process, which can accept or reject the connection (for example, if the
client does not look legitimate).
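The segment and state sequence above can be traced with a toy state walk-through (the function and state names follow the standard TCP state machine, but the code itself is only an illustrative sketch):

```python
# Toy trace of the TCP three-way handshake: which side sends which
# segment, and the state it reaches. Simplified to the normal case only.

def three_way_handshake():
    trace = []
    # 1. Client performs an active open, sending SYN: CLOSED -> SYN_SENT
    trace.append(("client", "SYN", "SYN_SENT"))
    # 2. Server, waiting in LISTEN, answers with SYN+ACK: LISTEN -> SYN_RCVD
    trace.append(("server", "SYN+ACK", "SYN_RCVD"))
    # 3. Client acknowledges; both ends reach ESTABLISHED
    trace.append(("client", "ACK", "ESTABLISHED"))
    trace.append(("server", "-", "ESTABLISHED"))
    return trace

for who, segment, state in three_way_handshake():
    print(f"{who:6} {segment:8} -> {state}")
```

Three segments are enough because each side's initial sequence number is acknowledged exactly once before either side considers the connection established.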

Call Collision
If two hosts try to establish a connection simultaneously between the same two sockets, only one
connection is established, because connections are identified by their endpoints. If both attempts
result in a connection identified by (x, y), only one table entry is made for (x, y).

For the initial sequence number, a clock-based scheme is used, with a clock pulse coming every 4
microseconds. For additional safety, after a host crashes it may not reboot for the maximum
packet lifetime. This is to make sure that no packets from previous connections are still roaming
around the network.
TCP Connection Termination
TCP (Transmission Control Protocol) is a transmission protocol that ensures data transmission in an ordered
and secure manner. It sends and receives the data packets in the same order. TCP/IP is a four-layer model,
compared to the seven-layer OSI (Open Systems Interconnection) model. TCP is recommended for
transmitting data from high-level protocols due to the integrity and security it provides between the
server and client.
TCP needs a 4-way handshake for its termination. To establish a connection, TCP needs a 3-way handshake.
So, here we will discuss the detailed process of TCP to build a 3-way handshake for connection and a
4-way handshake for its termination. Here, we will discuss the following:
What is TCP?
TCP is a connection-oriented protocol, which means that it first establishes the connection between the sender and receiver in the form of a handshake. Only after the connection is verified does it begin transmitting packets. This makes the transmission process reliable and ensures the delivery of data. TCP is an important part of the communication protocols used to interconnect network devices on the internet, and the whole internet system relies on it.

TCP is one of the most common protocols that ensure end-to-end delivery. It guarantees the integrity and ordered delivery of the data being transmitted and always establishes a connection between the sender and receiver before data flows. In a typical exchange, one endpoint acts as the server and the other as the client, and we can say that the data transmission occurs between the server and the client. Hence, TCP is used by most high-level protocols, such as FTP (File Transfer Protocol), HTTP (Hyper Text Transfer Protocol), and SMTP (Simple Mail Transfer Protocol).
Layers of TCP
In TCP/IP, data is divided into packets, addressed, transmitted, routed, and received at the destination. The transmission process comprises four layers: the application layer, transport layer, internet layer, and data link layer. The application layer performs functions similar to the top three layers (application, presentation, and session) of the OSI model and controls user-interface specifications; the user interacts with the application layer of the TCP model through programs such as messaging and email systems. The transport layer provides a reliable and error-free data connection; it divides the data received from the application layer into segments, which helps in creating ordered sequences. The internet layer controls the routing of packets and ensures the delivery of each packet to its destination. The data link layer performs functions similar to the bottom two layers (data link and physical) of the OSI model and is responsible for transmitting the data between the applications or devices on the network.
Before proceeding towards the TCP termination, it is essential to understand the concept of TCP
connection. It will help us to better understand the termination process.

TCP Connection (A 3-way handshake)


A handshake is the process of establishing a communication link between the client and the server. Before it starts sending data, TCP performs a three-way handshake. Reliable communication in TCP is termed PAR (Positive Acknowledgement with Retransmission). When a sender sends data to the receiver, it requires a positive acknowledgement from the receiver confirming the arrival of the data. If the acknowledgement does not reach the sender, the sender must resend that data. The positive acknowledgement from the receiver establishes a successful connection.
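The PAR idea can be sketched as a small simulation (the function name and loss model here are hypothetical, not part of a real TCP stack): each packet is resent until a simulated positive acknowledgement arrives, or the sender gives up.

```python
import random

def send_with_par(packets, max_retries=5, loss_rate=0.3):
    """Toy model of Positive Acknowledgement with Retransmission:
    resend each packet until an ACK (simulated here) is received."""
    rng = random.Random(0)             # seeded so the simulation is repeatable
    delivered = []
    for pkt in packets:
        for _attempt in range(max_retries):
            ack_received = rng.random() > loss_rate   # simulate a lossy channel
            if ack_received:
                delivered.append(pkt)
                break
        else:
            raise TimeoutError(f"no ACK for {pkt!r} after {max_retries} tries")
    return delivered

print(send_with_par(["seg1", "seg2", "seg3"]))
```

Real TCP refines this idea with timers and sliding windows, but the core loop is the same: no positive acknowledgement, no progress.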
Here, the client is the sender and the server is the receiver. The diagram below shows the three steps of a successful connection. A 3-way handshake is commonly known as SYN-SYN-ACK and requires both the client's and the server's responses before data can be exchanged. SYN stands for Synchronize Sequence Number and ACK for Acknowledgement. Each step is a type of handshake between the sender and the receiver.
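SYN and ACK are not separate message types but single flag bits in the TCP header (bit values as defined in RFC 793). A short sketch showing how the three handshake segments differ only in these bits:

```python
# TCP header flag bits as defined in RFC 793
FIN = 0x01
SYN = 0x02
ACK = 0x10

def describe(flags):
    """Name the handshake-relevant bits set in a TCP flags byte."""
    names = [name for name, bit in (("FIN", FIN), ("SYN", SYN), ("ACK", ACK))
             if flags & bit]
    return "-".join(names) or "none"

print(describe(SYN))        # step 1: "SYN"
print(describe(SYN | ACK))  # step 2: "SYN-ACK"
print(describe(ACK))        # step 3: "ACK"
```

This is why step 2 is often written SYN + ACK: the server sets both bits in one segment.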
The diagram of a successful TCP connection showing the three handshakes is shown below:

The three handshakes are discussed in the below steps:


Step 1: SYN - SYN is a segment sent by the client to the server. It acts as a connection request and informs the server that the client wants to establish a connection. The SYN segment also carries the client's initial sequence number, which the two devices use to synchronize the sequence numbers of the segments they exchange; the same segment thus requests sequence-number synchronization along with the connection.

Step 2: SYN-ACK - This is a SYN + ACK segment sent by the server. The ACK part informs the client that the server has received the connection request and is ready to build the connection. The SYN part announces the sequence number with which the server is ready to start its own segments.

Step 3: ACK
ACK (Acknowledgement) is the last step in establishing a successful TCP connection between the client and server. The client sends the ACK segment in response to the SYN-ACK received from the server. It results in the establishment of a reliable data connection.
After these three steps, the client and server are ready for the data communication process. A TCP connection is full-duplex, which means that data can travel in both directions simultaneously.
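In practice, an application never sends SYN or ACK segments itself; the operating system performs the 3-way handshake when `connect()` and `accept()` are called. A minimal loopback sketch in Python (illustrative only, using an OS-assigned port):

```python
import socket
import threading

def run_server(state):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
    srv.listen(1)
    state["addr"] = srv.getsockname()
    state["ready"].set()
    conn, _ = srv.accept()            # returns once the 3-way handshake completes
    conn.sendall(b"hello")
    conn.close()
    srv.close()

state = {"ready": threading.Event()}
threading.Thread(target=run_server, args=(state,)).start()
state["ready"].wait()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(state["addr"])            # kernel sends SYN, gets SYN-ACK, replies ACK
data = cli.recv(1024)
cli.close()
print(data)                           # b'hello'
```

By the time `connect()` returns on the client and `accept()` returns on the server, all three handshake segments have already been exchanged by the kernel.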
TCP Termination (A 4-way handshake)
Any device establishes a connection before proceeding with termination. TCP requires a 3-way handshake to establish a connection between the client and server before sending data. Similarly, to terminate or stop the data transmission, it requires a 4-way handshake. The segments used for TCP termination are similar to those used to build a TCP connection (ACK and SYN), except for the FIN segment. The FIN segment carries a termination request sent by one device to the other.

In this exchange, the client is the data transmitter and the server is the receiver. Consider the TCP termination diagram below, which shows the exchange of segments between the client and server.
The diagram of a successful TCP termination showing the four handshakes is shown below:
Let's discuss the TCP termination process with the help of six steps, which include the requests sent and the intermediate waiting states. The steps are as follows:

Step 1: FIN- FIN refers to the termination request sent by the client to the server. The first FIN
termination request is sent by the client to the server. It depicts the start of the termination
process between the client and server.
Step 2: FIN_ACK_WAIT- The client waits for the ACK of its FIN termination request from the server.
It is a waiting state for the client.
Step 3: ACK- The server sends the ACK (Acknowledgement) segment when it receives the FIN
termination request. It depicts that the server is ready to close and terminate the connection.
Step 4: FIN_WAIT_2- The client waits for the FIN segment from the server, the approval signal
indicating that the server, too, is ready to terminate the connection.
Step 5: FIN-The FIN segment is now sent by the server to the client. It is a confirmation signal that the
server sends to the client. It depicts the successful approval for the termination.
Step 6: ACK-The client now sends an ACK (Acknowledgement) segment to the server, confirming that it
has received the server's FIN signal to terminate the connection. As soon as the server receives
this ACK segment, it terminates the connection.
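As with the handshake, applications trigger the FIN exchange indirectly: calling `shutdown()` or `close()` makes the kernel send the FIN. A loopback sketch (illustrative; the waiting states above are handled inside the OS, not by this code):

```python
import socket
import threading

def echo_once(srv):
    conn, _ = srv.accept()
    msg = conn.recv(1024)      # receives the client's data
    conn.sendall(msg)          # echo it back
    conn.close()               # server's close() sends its own FIN (steps 5-6)

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=echo_once, args=(srv,)).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
cli.sendall(b"bye")
cli.shutdown(socket.SHUT_WR)   # client sends FIN (step 1): no more outbound data
reply = cli.recv(1024)         # the connection is half-closed; inbound still works
cli.close()
srv.close()
print(reply)                   # b'bye'
```

Note the half-closed interval between the client's FIN and the server's FIN: the client can no longer send, but it can still receive, which is why termination needs four segments rather than two.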
Congestion Control
Congestion may happen when too many packets are present in the network. Congestion decreases network performance: packet delivery to the receiver is delayed, and packets may even be lost. Congestion control is the responsibility of the transport and the network layer.

It is the transport layer's packet transmissions into the network that cause congestion, so congestion can be significantly decreased by having the transport layer minimize the load it places on the network. Congestion control may be accomplished via three methods: traffic-aware routing, provisioning, and admission control.
Provisioning
In the provisioning method, the network's capacity is increased by adding resources so that it can carry the offered traffic.

Traffic-aware routing
In this method, the routers choose routes according to the current traffic pattern, steering traffic away from congested paths.

Admission Control
In admission control, the network rejects new connections when it is congested, preventing the load from growing further.
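Admission control can be sketched as a simple gatekeeper (the class name and fixed-capacity model are hypothetical stand-ins for a real congestion estimate):

```python
class AdmissionController:
    """Toy admission control: refuse new connections once the
    network is at capacity, admit them again as load drains."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.active = 0

    def request_connection(self):
        if self.active >= self.capacity:
            return False          # network congested: reject the new connection
        self.active += 1
        return True

    def release_connection(self):
        self.active = max(0, self.active - 1)

ac = AdmissionController(capacity=2)
print(ac.request_connection())   # True  - admitted
print(ac.request_connection())   # True  - admitted
print(ac.request_connection())   # False - refused, network at capacity
ac.release_connection()
print(ac.request_connection())   # True  - capacity freed, admitted again
```

The key property is that rejections happen at connection setup time, before any new traffic enters the network.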

Main Differences between the Flow Control and Congestion Control

Here, you will learn the main differences between Flow Control and Congestion Control. Some main
differences between Flow Control and Congestion Control are as follows:
1. The process of regulating the data transmission rate between two nodes is known as flow
control. In contrast, congestion control is the method of regulating the traffic entering a
telecommunications network to avoid congestive collapse caused by oversubscription.

2. The transport and data link layers are responsible for flow control. In contrast, the transport
and network layers are responsible for congestion control.

3. The feedback-based flow control and the rate-based flow control method are two methods
to control the data flow. In contrast, the Congestion Control method employs three techniques to
reduce network congestion: provisioning, traffic-aware routing, and admission control.

4. Flow control prevents the sender at the faster end from overwhelming the receiver at the
slower end with data. On the other hand, congestion control protects the network from
becoming congested with the data sent through the transport layer.

5. Flow control is concerned with a single sender creating excess traffic at the receiving end. In
contrast, congestion control is concerned with the total load that the transport layer places on
the network.
