DCN Unit 3

Networking unit 3

UNIT III

Network Performance
Transport Layer

o The transport layer is the 4th layer from the top of the OSI model.

o The main role of the transport layer is to provide communication services
directly to the application processes running on different hosts.

o The transport layer provides logical communication between application
processes running on different hosts. Although the application processes on
different hosts are not physically connected, they use the logical
communication provided by the transport layer to send messages to each
other.

o The transport layer protocols are implemented in the end systems but not in the
network routers.

o A computer network provides more than one protocol to the network applications.
For example, TCP and UDP are two transport layer protocols that provide
different sets of services to the network applications.

o All transport layer protocols provide a multiplexing/demultiplexing service. A
transport protocol may also provide other services such as reliable data transfer,
bandwidth guarantees, and delay guarantees.

o Each of the applications in the application layer has the ability to send a message
by using TCP or UDP. The application communicates by using either of these two
protocols. Both TCP and UDP will then communicate with the Internet Protocol (IP) at
the internet layer. The applications can read from and write to the transport layer.
Therefore, we can say that communication is a two-way process.
Transport Layer Primitives :
A service is specified by a set of primitives. A primitive means an operation. To access the
service, a user process invokes these primitives. The primitives differ between connection-
oriented and connectionless services.
There are five types of service primitives:

1. LISTEN : When a server is ready to accept an incoming connection, it executes the LISTEN
primitive. It blocks, waiting for an incoming connection.
2. CONNECT : The client executes CONNECT to establish a connection with the server. A
response is awaited.
3. RECEIVE : The server then executes RECEIVE, which blocks it until a request arrives.
4. SEND : The client executes the SEND primitive to transmit its request, followed by the
execution of RECEIVE to get the reply.
5. DISCONNECT : This primitive is used for terminating the connection. After this primitive, one
can't send any more messages. When the client sends a DISCONNECT packet, the server
sends a DISCONNECT packet back to acknowledge the client. When the server's packet is
received by the client, the connection is terminated.
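The five primitives map naturally onto the Berkeley socket API. The sketch below is an illustration in Python, not part of the original notes; the loopback address, the ephemeral port, and the message contents are assumptions made for the example.

```python
import socket
import threading

# Server side: LISTEN, then RECEIVE, SEND, DISCONNECT.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
srv.listen(1)                          # LISTEN: ready for an incoming connection
port = srv.getsockname()[1]

def serve():
    conn, _ = srv.accept()             # blocks until a client connects
    request = conn.recv(1024)          # RECEIVE: blocks until the request arrives
    conn.sendall(b"reply:" + request)  # SEND: transmit the reply
    conn.close()                       # DISCONNECT

t = threading.Thread(target=serve)
t.start()

# Client side: CONNECT, SEND, RECEIVE, DISCONNECT.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))       # CONNECT: establish the connection
cli.sendall(b"hello")                  # SEND: transmit the request
reply = cli.recv(1024)                 # RECEIVE: block awaiting the reply
cli.close()                            # DISCONNECT: terminate the connection
t.join()
srv.close()
print(reply)                           # b'reply:hello'
```

Note how the server blocks in accept() (LISTEN) before the client's connect() (CONNECT) can complete, mirroring the ordering described above.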

Connection Oriented Service Primitives

• There are 4 types of primitives for Connection Oriented Service :

CONNECT : This primitive establishes a connection.
DATA, DATA-ACKNOWLEDGE, EXPEDITED-DATA : Data and information are sent using these primitives.
DISCONNECT : Primitive for closing the connection.
RESET : Primitive for resetting the connection.

The primitives work as follows:

1. The client's CONNECT call causes a CONNECTION REQUEST TPDU to be sent.
2. The server must be blocked in LISTEN mode to receive it.
3. The arriving TPDU unblocks the server, which replies with a CONNECTION ACCEPTED TPDU.
4. The CONNECTION ACCEPTED TPDU unblocks the client, and the connection is established.
5. Data is sent using the SEND primitive.
6. Data is received using the RECEIVE primitive.
7. To close the connection, a DISCONNECTION REQUEST is sent.

Elements of Transport protocol

1. Addressing

2. Connection Establishment

3. Connection Release

4. Flow control and Buffering

5. Multiplexing

6. Crash Recovery

1. Addressing : A transport address is used for process-to-process communication. In the
internet, the endpoints are called ports or TSAPs (Transport Service Access Points).

• A possible scenario for a connection:

1. A time-of-day server process on host 2 attaches itself to TSAP 1522 to wait for an
incoming call.
2. An application process on host 1 wants to find out the time of day, so it issues a
CONNECT request specifying TSAP 1208 as the source and TSAP 1522 as the destination.
This action ultimately results in a transport connection being established between the
application process on host 1 and server 1 on host 2.
3. The application process then sends over a request for the time.
4. The time server process responds with the current time.
5. The transport connection is then released.

2. Connection Establishment : to establish a connection, send a CONNECTION REQUEST
TPDU to the destination and wait for a CONNECTION ACCEPTED reply.

a. Problems :
1) The network can lose, store and duplicate packets.
2) Packets living for a long time can cause congestion, which can be
restricted by using a restricted subnet design, putting a hop counter in
each packet, or timestamping each packet.

b. Three-way handshaking is used to solve incorrect connection establishment:
Host 1 chooses a sequence number, x, and sends a CONNECTION
REQUEST TPDU containing it to host 2. Host 2 replies with an ACK
TPDU acknowledging x and announcing its own initial sequence
number, y. Finally, host 1 acknowledges host 2's choice of an initial
sequence number in the first data TPDU that it sends.
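The three-way exchange above can be traced with a toy in-memory simulation. This is an assumption-laden sketch (dictionaries standing in for TPDUs, random initial sequence numbers), not real TCP:

```python
import random

def three_way_handshake():
    # Host 1 chooses an initial sequence number x and sends CONNECTION REQUEST.
    x = random.randrange(1000)
    conn_request = {"type": "CR", "seq": x}

    # Host 2 acknowledges x and announces its own initial sequence number y.
    y = random.randrange(1000)
    accepted = {"type": "ACC", "seq": y, "ack": conn_request["seq"]}
    assert accepted["ack"] == x          # host 1 verifies its x was acknowledged

    # Host 1 acknowledges y in the first data TPDU it sends.
    first_data = {"type": "DATA", "seq": x + 1, "ack": accepted["seq"]}
    return first_data["ack"] == y

print(three_way_handshake())             # True
```

The point of carrying x and y through all three messages is that a delayed duplicate CONNECTION REQUEST would fail the acknowledgement checks instead of opening a bogus connection.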

3. Connection Release

Two styles of connection release are Asymmetric Release and Symmetric Release. Asymmetric
release is the way the telephone system works: when one party hangs up, the connection is
broken. Symmetric release treats the connection as two separate unidirectional connections and
requires each one to be released separately.

4. Flow control and buffering

• A host may have many connections, so it is impractical to dedicate a fixed
buffering strategy to each one.
• Dynamic buffering is used instead, allocating buffer space as needed.
• For flow control, a sliding window is used.
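A minimal sender-side sliding-window sketch, under simplifying assumptions (window size 3, lossless link, cumulative in-order acknowledgements), illustrates how the window slides forward as ACKs arrive:

```python
def sliding_window_send(data, window=3):
    """Simulate a sender that may have at most `window` unacked frames."""
    base, next_seq, acked = 0, 0, []
    while base < len(data):
        # send every frame the window currently permits
        while next_seq < base + window and next_seq < len(data):
            next_seq += 1                  # "transmit" frame next_seq
        ack = next_seq - 1                 # receiver cumulatively ACKs all frames
        acked.append(ack)
        base = ack + 1                     # window slides forward past acked data
    return acked

print(sliding_window_send(list(range(7))))   # [2, 5, 6]
```

With 7 frames and a window of 3, the sender transmits bursts of at most 3 frames and must pause until the cumulative acknowledgement moves the window.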
5. Multiplexing
Assume that the network offers only a limited number of virtual circuits. Using a single circuit
for several connections, as in figure A, is upward multiplexing.
In downward multiplexing, several circuits are used for a single connection, as in figure B.
6. Crash Recovery
The sending host can employ four strategies for crash recovery:
1) Always retransmit the last TPDU.
2) Never retransmit.
3) Do not resend the TPDUs that were sent but whose acknowledgement was not
received.
4) Resend only the TPDUs that were still waiting to be sent.


TCP (Transmission Control Protocol) :- TCP (Transmission Control
Protocol) is one of the main protocols of the Internet protocol
suite. It lies between the Application and Network Layers and is
used to provide reliable delivery services. It is a connection-
oriented protocol for communication that helps in the exchange
of messages between different devices over a network. The
Internet Protocol (IP), which establishes the technique for
sending data packets between computers, works with TCP. Being
connection-oriented means it establishes the connection prior to
the communication that occurs between the computing devices in
a network. This protocol is used with the IP protocol, so together
they are referred to as TCP/IP.

The main functionality of TCP is to take the data from the
application layer. It then divides the data into several packets,
provides numbering for these packets, and finally transmits the
packets to the destination. The TCP on the other side reassembles
the packets and delivers them to the application layer. As TCP is a
connection-oriented protocol, the connection will remain
established until the communication between the sender and the
receiver is completed.

• It is a transport layer protocol.
• Defined in RFC 793.
• Provides connection-oriented service.
• Provides a reliable connection.
• Provides sequencing, acknowledgement, and recovery of lost packets.
• Data is transmitted as segments.
• TCP ports specify the location for the delivery of TCP segments. 20 (FTP data
channel), 21 (FTP control channel), 23 (Telnet), 80 (HTTP), 139 (NetBIOS) are common TCP
ports.

Features of TCP
Some of the most prominent features of the Transmission Control Protocol are

1. Segment Numbering System

• TCP keeps track of the segments being transmitted or received by
assigning a number to every single one of them.
• A specific byte number is assigned to the data bytes that are to be
transferred, while segments are assigned sequence numbers.
• Acknowledgment numbers are assigned to received segments.
2. Connection Oriented

• It means the sender and receiver remain connected to each other till the
completion of the process.
• The order of the data is maintained, i.e. the order remains the same before and
after transmission.

3. Full Duplex

• In TCP, data can be transmitted from receiver to sender or vice versa
at the same time.
• It increases efficiency of data flow between sender and receiver.

4. Flow Control

• Flow control limits the rate at which a sender transfers data. This is done
to ensure reliable delivery.
• The receiver continually hints to the sender on how much data can be
received (using a sliding window)

5. Error Control

• TCP implements an error control mechanism for reliable data transfer


• Error control is byte-oriented
• Segments are checked for error detection
• Error Control includes – Corrupted Segment & Lost Segment
Management, Out-of-order segments, Duplicate segments, etc.
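Error detection in TCP rests on a 16-bit one's-complement checksum over the segment. The sketch below shows that core computation in Python; it is simplified (the pseudo-header that real TCP also covers is omitted), and the sample bytes are an illustrative assumption:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement Internet checksum over `data`."""
    if len(data) % 2:
        data += b"\x00"                        # pad to an even number of bytes
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]  # sum 16-bit words
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF                     # one's complement of the sum

segment = b"\x12\x34\x56\x78"
ck = internet_checksum(segment)
# Receiver check: summing the segment plus its checksum folds to zero.
assert internet_checksum(segment + ck.to_bytes(2, "big")) == 0
print(hex(ck))                                 # 0x9753
```

A corrupted segment almost always fails the receiver's fold-to-zero check, which is how corrupted-segment management begins.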

6. Congestion Control

• TCP takes into account the level of congestion in the network.
• The congestion level influences how much data a sender may have outstanding;
TCP limits the amount of data a sender injects accordingly.

Advantages

• It is a reliable protocol.
• It provides an error-checking mechanism as well as one for recovery.
• It gives flow control.
• It makes sure that the data reaches the proper destination in the exact
order that it was sent.
• Open Protocol, not owned by any organization or individual.
• It assigns an IP address to each computer on the network and a domain
name to each site, thus making each device and site distinguishable
over the network.

Disadvantages

• TCP is made for Wide Area Networks, thus its size can become an issue
for small networks with low resources.
• TCP/IP runs across several layers, so it can slow down the speed of the network.
• It is not generic in nature, meaning it cannot represent any protocol stack
other than the TCP/IP suite. E.g., it cannot work with a Bluetooth
connection.
• There have been no major modifications since its development around 30 years ago.

UDP Protocol

• UDP stands for User Datagram Protocol. David P. Reed
developed the UDP protocol in 1980. It is defined in RFC 768, and
it is a part of the TCP/IP protocol suite, so it is a standard protocol
over the internet.

• The UDP protocol allows computer applications to send
messages in the form of datagrams from one machine to another
machine over the Internet Protocol (IP) network. UDP is an
alternative communication protocol to the TCP protocol
(Transmission Control Protocol).

• Like TCP, UDP provides a set of rules that governs how the data
should be exchanged over the internet. The UDP works by
encapsulating the data into the packet and providing its own
header information to the packet. Then, this UDP packet is
encapsulated to the IP packet and sent off to its destination.
• Both the TCP and UDP protocols send data over the internet
protocol network, so they are also known as TCP/IP and UDP/IP. There
are many differences between these two protocols. Both enable
process-to-process communication on top of IP's host-to-host
delivery. Since UDP sends the messages in the form of datagrams
with no delivery guarantees, it is considered a best-effort mode of
communication.

• TCP numbers and acknowledges the individual packets it sends, so it
is a reliable transport medium. Another difference is that TCP is a
connection-oriented protocol whereas UDP is a connectionless protocol,
as it does not require any virtual circuit to transfer the data.

• UDP also provides port numbers to distinguish different
user requests, and provides a checksum capability to verify
whether the complete data has arrived or not; the IP layer does
not provide these two services.
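The contrast with TCP is visible in code: with UDP sockets there is no listen/connect handshake, just sendto and recvfrom. The sketch below is illustrative; the loopback address and the message are assumptions for the example:

```python
import socket

# Receiver: bind a datagram socket; note there is no listen()/accept() step.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))                  # port 0: OS picks a free port
port = recv.getsockname()[1]

# Sender: fire and forget, with no connection setup and no acknowledgement.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"time?", ("127.0.0.1", port))

data, addr = recv.recvfrom(1024)             # each datagram arrives whole
print(data)                                  # b'time?'
send.close()
recv.close()
```

On a real network (unlike loopback here) the datagram could be lost or reordered, and neither side would be told; that is best-effort delivery.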
Features of UDP protocol
The following are the features of the UDP protocol:

o Transport layer protocol

UDP is the simplest transport layer communication protocol. It contains a minimal
amount of communication mechanism. It is considered an unreliable protocol, and it is
based on best-effort delivery services. UDP provides no acknowledgment mechanism,
which means that the receiver does not send an acknowledgment for the received packet,
and the sender also does not wait for an acknowledgment for the packet that it has sent.

o Connectionless

The UDP is a connectionless protocol as it does not create a virtual path to transfer the
data. Because no fixed path is used, packets may travel different paths between the
sender and the receiver, which can lead to packets being lost or received out of order.
Ordered delivery of data is not guaranteed.

In the case of UDP, there is no guarantee that datagrams sent in some order will be
received in the same order, as the datagrams are not numbered.

o Ports

The UDP protocol uses port numbers so that the data can be sent to the correct
destination. Port numbers range from 0 to 65535; the well-known ports are 0 through 1023.

o Faster transmission

UDP enables faster transmission as it is a connectionless protocol, i.e., no virtual path is
required to transfer the data. But there is a chance that an individual packet is lost, which
affects the transmission quality. On the other hand, if a packet is lost in a TCP connection,
that packet will be resent, so TCP guarantees the delivery of the data packets.

o Acknowledgment mechanism

UDP does not have any acknowledgment mechanism, i.e., there is no handshaking
between the UDP sender and UDP receiver. In TCP, the receiver first acknowledges that it
is ready to receive, and only then does the sender send the data; this handshaking occurs
between the sender and the receiver, whereas in UDP there is no handshaking at all.

o Segments are handled independently.

Each UDP segment is handled independently of the others, as each segment can take a
different path to reach the destination. UDP segments can be lost or delivered out of order,
as there is no connection setup between the sender and the receiver.

o Stateless

It is a stateless protocol, which means that neither side maintains connection state and
the sender does not get an acknowledgement for the packets it has sent.

Network Layer
o The Network Layer is the third layer of the OSI model.
o It handles the service requests from the transport layer and further forwards
the service request to the data link layer.

o The network layer translates logical addresses into physical addresses.

o It determines the route from the source to the destination and also manages
the traffic problems such as switching, routing and controls the congestion of
data packets.

o The main role of the network layer is to move the packets from sending host
to the receiving host.
o The Internet Protocol (IP) is a protocol, or set of rules, for routing and
addressing packets of data so that they can travel across networks and arrive at
the correct destination. Data traversing the Internet is divided into smaller pieces,
called packets. IP information is attached to each packet, and this information
helps routers to send packets to the right place. Every device or domain that
connects to the Internet is assigned an IP address, and as packets are directed to
the IP address attached to them, data arrives where it is needed.
o Once the packets arrive at their destination, they are handled differently
depending on which transport protocol is used in combination with IP. The most
common transport protocols are TCP and UDP.

o An Internet Protocol (IP) address is a unique numerical identifier assigned to every
device or domain that connects to the Internet. Typically assigned by an internet service
provider (ISP), an IP address is an online device address used for communicating across
the internet.
o Each IP address is a series of characters, such as '192.168.1.1'. Via DNS resolvers, which
translate human-readable domain names into IP addresses, users are able to access
websites without memorizing this complex series of characters.
o Each IP packet will contain both the IP address of the device or domain sending the
packet and the IP address of the intended recipient, much like how both the destination
address and the return address are included on a piece of mail.

Connection Oriented :-
A connection-oriented service is one that establishes a dedicated connection between
the communicating entities before data communication commences. It is modeled after
the telephone system. To use a connection-oriented service, the user first establishes a
connection, uses it and then releases it. In connection-oriented services, the data
streams/packets are delivered to the receiver in the same order in which they have been
sent by the sender.

Connection-oriented services may be done in either of the following ways −

• Circuit-switched connection: In circuit switching, a dedicated physical path
or circuit is established between the communicating nodes and then the data
stream is transferred.
• Virtual circuit-switched connection: Here, the data stream is transferred
over a packet switched network, in such a way that it seems to the user that
there is a dedicated path from the sender to the receiver. A virtual path is
established here. However, other connections may also be using this path.
Connection-oriented services may be of the following types −

• Reliable Message Stream: e.g. a sequence of pages
• Reliable Byte Stream: e.g. a song download
• Unreliable Connection: e.g. VoIP (Voice over Internet Protocol)

Connectionless service :-

• A connectionless service is a data communication between two nodes where the
sender sends data without ensuring whether the receiver is available to receive
the data. Here, each data packet has the destination address and is routed
independently, irrespective of the other packets.
• Thus the data packets may follow different paths to reach the destination.
There's no need to set up a connection before sending a message or to relinquish
it after the message has been sent. The data packets in a connectionless service
are usually called datagrams.
• Protocols for connectionless services simply allow data to be transferred
without any link among processes. Some data packets may be lost during
transmission. Some of the protocols for connectionless services are given
below:

• Internet Protocol (IP) :- This protocol is connectionless. In this protocol, all
packets in an IP network are routed independently. They might not go through
the same route.
• User Datagram Protocol (UDP) :- This protocol does not establish any connection
before transferring data. It just sends the data, which is why UDP is known as
connectionless.
• Internet Control Message Protocol (ICMP) :- ICMP is called connectionless simply
because it does not need any hosts to handshake before establishing any
connection.

• Types of Connectionless Services :

Service : Example
Unreliable Datagram : Electronic junk mail, etc.
Acknowledged Datagram : Registered mail, text messages with delivery report, etc.
Request Reply : Queries from remote databases, etc.

Advantages of Connectionless Services


• It is simpler and has low overhead.
• It enables broadcast and multicast messages, where the sender sends
messages to multiple recipients.
• It does not require any time for circuit setup.
• In case of router failures or network congestion, the data packets are routed
through alternate paths. Hence, communication is not disrupted.

Disadvantages of Connectionless Services


• It is not a reliable connection. It does not guarantee that there will be no
loss of packets, wrong delivery, out-of-sequence delivery or duplication of
packets.
• Each data packet requires longer header fields since it must hold the full
destination address and the routing information.
• They are prone to network congestion.
Routing algorithm
o In order to transfer the packets from source to the destination, the network
layer must determine the best route through which packets can be
transmitted.

o Whether the network layer provides datagram service or virtual circuit service,
the main job of the network layer is to provide the best route. The routing
protocol provides this job.

o The routing protocol is a routing algorithm that provides the best path from
the source to the destination. The best path is the "least-cost path" from
source to destination.

o Routing is the process of forwarding the packets from source to the destination
but the best route to send the packets is determined by the routing algorithm.

Shortest Path Routing


In this algorithm, to select a route, the algorithm discovers the shortest path between two
nodes. Path length can be measured in number of hops, in geographical distance in kilometres,
or by labelling the arcs.

The labelling of arcs can be done with the mean queueing and transmission delay for a standard
test packet on an hourly basis, or computed as a function of bandwidth, average traffic,
communication cost, mean queue length, measured delay or some other factors.

In shortest path routing, the topology of the communication network is defined using a directed
weighted graph. The nodes in the graph represent switching components and the directed arcs
represent communication links between switching components. Each arc has a weight that
defines the cost of sending a packet between two nodes in a specific direction.

This cost is usually a positive value that can denote such factors as delay, throughput, error rate,
financial costs, etc. A path between two nodes can go through various intermediary nodes and
arcs. The goal of shortest path routing is to find a path between two nodes that has the lowest
total cost, where the total cost of a path is the sum of arc costs in that path.
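Finding the path with the lowest total arc cost is the classic job of Dijkstra's algorithm. The sketch below computes least-cost distances over a small directed weighted graph; the graph itself is an illustrative assumption:

```python
import heapq

def dijkstra(graph, src):
    """Least-cost distance from src to every reachable node."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                           # stale queue entry, skip it
        for v, w in graph[u].items():          # relax every outgoing arc
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Arc weights model link costs (delay, financial cost, etc.).
graph = {
    "A": {"B": 2, "C": 5},
    "B": {"C": 1, "D": 4},
    "C": {"D": 1},
    "D": {},
}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```

Note that the cheapest route from A to D goes A, B, C, D (total 4) rather than the direct-looking A, B, D (total 6): the total cost of a path is the sum of its arc costs.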
Classification of a Routing algorithm
The Routing algorithm is divided into two categories:

o Adaptive Routing algorithm

o Non-adaptive Routing algorithm

Adaptive Routing algorithm


o An adaptive routing algorithm is also known as a dynamic routing algorithm.

o This algorithm makes the routing decisions based on the topology and network
traffic.

o The main parameters related to this algorithm are hop count, distance and
estimated transit time.

An adaptive routing algorithm can be classified into three parts:

o Centralized algorithm: It is also known as a global routing algorithm as it computes the
least-cost path between source and destination by using complete and global knowledge
about the network. This algorithm takes the connectivity between the nodes and link costs
as input, and this information is obtained before actually performing any calculation. The
link state algorithm is referred to as a centralized algorithm since it is aware of the cost of
each link in the network.
o Isolation algorithm: It is an algorithm that obtains the routing information by using local
information rather than gathering information from other nodes.
o Distributed algorithm: It is also known as decentralized algorithm as it computes the
least-cost path between source and destination in an iterative and distributed manner. In
the decentralized algorithm, no node has the knowledge about the cost of all the network
links. In the beginning, a node contains the information only about its own directly
attached links and through an iterative process of calculation computes the least-cost path
to the destination.
o A Distance vector algorithm is a decentralized algorithm as it never knows the complete
path from source to the destination, instead it knows the direction through which the
packet is to be forwarded along with the least cost path.
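The distance-vector idea can be sketched as an iterative table update: each node starts knowing only its directly attached links and repeatedly improves its estimates using its neighbours' vectors. The topology and the fixed round count below are illustrative assumptions:

```python
def distance_vector(links, rounds=5):
    """Iteratively compute least-cost estimates, distance-vector style."""
    nodes = {n for link in links for n in link}
    INF = float("inf")
    # Initially, each node knows only its own directly attached links.
    dist = {u: {v: INF for v in nodes} for u in nodes}
    for u in nodes:
        dist[u][u] = 0
    for (u, v), w in links.items():
        dist[u][v] = dist[v][u] = w
    for _ in range(rounds):                    # iterative, decentralized updates
        for u in nodes:
            for (a, b), w in links.items():
                for x, y in ((a, b), (b, a)):
                    if x == u:                 # u hears neighbour y's vector
                        for dest in nodes:
                            dist[u][dest] = min(dist[u][dest],
                                                w + dist[y][dest])
    return dist

links = {("A", "B"): 1, ("B", "C"): 2, ("A", "C"): 7}
d = distance_vector(links)
print(d["A"]["C"])   # 3  (A -> B -> C beats the direct link of cost 7)
```

No node ever sees the whole topology; A learns the cheaper route to C only because B advertises its own cost of 2, matching the decentralized description above.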
Non-Adaptive Routing algorithm
o A Non-Adaptive routing algorithm is also known as a static routing algorithm.
o When the network is booted, the routing information is stored in the routers.
o Non-Adaptive routing algorithms do not take routing decisions based on the network
topology or network traffic.

The Non-Adaptive Routing algorithm is of two types:

o Flooding: In case of flooding, every incoming packet is sent out on all the outgoing links
except the one on which it arrived. The disadvantage of flooding is that a node may
receive several copies of a particular packet.
o Random walks: In case of random walks, a packet is sent by the node to one of its
neighbours chosen at random. An advantage of using random walks is that it uses the
alternative routes very efficiently.

Differences b/w Adaptive and Non-Adaptive Routing Algorithm

Definition :
Adaptive Routing algorithm : an algorithm that constructs the routing table based on
the network conditions.
Non-Adaptive Routing algorithm : an algorithm that constructs a static table to
determine which node to send the packet to.

Usage :
Adaptive : used by dynamic routing.
Non-Adaptive : used by static routing.

Routing decision :
Adaptive : routing decisions are made based on topology and network traffic.
Non-Adaptive : routing decisions are taken from static tables.

Categorization :
Adaptive : centralized, isolation and distributed algorithms.
Non-Adaptive : flooding and random walks.

Complexity :
Adaptive : more complex.
Non-Adaptive : simple.

Congestion Control :-
Congestion is a state occurring in the network layer when the message traffic is so
heavy that it slows down network response time.

Effects of Congestion

• As delay increases, performance decreases.
• If delay increases, retransmission occurs, making the situation worse.

Congestion control algorithms

• Congestion Control is a mechanism that controls the entry of data
packets into the network, enabling better use of a shared network
infrastructure and avoiding congestive collapse.
• Congestion-Avoidance Algorithms (CAA) are implemented at the TCP
layer as the mechanism to avoid congestive collapse in a network.
• There are two congestion control algorithms, which are as follows:

• Leaky Bucket Algorithm


• The leaky bucket algorithm finds its use in the context of network
traffic shaping or rate-limiting.
• A leaky bucket implementation and a token bucket implementation are
predominantly used for traffic shaping algorithms.
• This algorithm is used to control the rate at which traffic is sent to the
network and to shape bursty traffic into a steady traffic stream.
• A disadvantage of the leaky-bucket algorithm is its inefficient use of
available network resources.
• Large amounts of network resources such as bandwidth may go unused.

Let us consider an example to understand.

Imagine a bucket with a small hole in the bottom. No matter at what rate water
enters the bucket, the outflow is at a constant rate. When the bucket is full,
additional water entering spills over the sides and is lost.

Similarly, each network interface contains a leaky bucket, and the
following steps are involved in the leaky bucket algorithm:

1. When a host wants to send a packet, the packet is thrown into the bucket.
2. The bucket leaks at a constant rate, meaning the network interface
transmits packets at a constant rate.
3. Bursty traffic is converted to uniform traffic by the leaky bucket.
4. In practice the bucket is a finite queue that outputs at a finite rate.
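The four steps above can be sketched as a finite queue drained at a constant rate. The bucket capacity, leak rate, and arrival pattern below are illustrative assumptions:

```python
def leaky_bucket(arrivals, capacity=4, leak_rate=1):
    """Simulate a leaky bucket: arrivals per tick, constant outflow per tick."""
    queue, sent, dropped = 0, [], 0
    for burst in arrivals:                 # packets arriving in this tick
        for _ in range(burst):
            if queue < capacity:
                queue += 1                 # step 1: thrown into the bucket
            else:
                dropped += 1               # bucket full: spills over, lost
        out = min(queue, leak_rate)        # step 2: constant leak per tick
        queue -= out
        sent.append(out)
    return sent, dropped

# A burst of 5 packets in tick 0, then silence.
print(leaky_bucket([5, 0, 0, 0, 0]))       # ([1, 1, 1, 1, 0], 1)
```

The burst of 5 comes out as a steady stream of 1 packet per tick (step 3), and the packet that overflowed the finite queue is simply lost (step 4).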
• Token bucket Algorithm

• The leaky bucket algorithm has a rigid output design: an average rate
independent of the bursty traffic.
• In some applications, when large bursts arrive, the output is allowed to
speed up. This calls for a more flexible algorithm, preferably one that
never loses information. Therefore, the token bucket algorithm finds its
use in network traffic shaping or rate-limiting.
• It is a control algorithm that indicates when traffic should be sent,
based on the presence of tokens in the bucket.
• The bucket contains tokens. Each of the tokens represents a packet of
predetermined size. Tokens in the bucket are removed for the ability to
send a packet.
• When tokens are present, a flow is allowed to transmit traffic.
• No token means no flow can send its packets. Hence, a flow can transfer
traffic up to its peak burst rate as long as there are tokens in the bucket.

Need of token bucket Algorithm:-

The leaky bucket algorithm enforces output pattern at the average rate, no
matter how bursty the traffic is. So in order to deal with the bursty traffic we
need a flexible algorithm so that the data is not lost. One such algorithm is
token bucket algorithm.

Steps of this algorithm can be described as follows:

1. In regular intervals, tokens are thrown into the bucket.
2. The bucket has a maximum capacity.
3. If there is a ready packet, a token is removed from the bucket, and the
packet is sent.
4. If there is no token in the bucket, the packet cannot be sent.
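The four steps above can be sketched as a token counter refilled at a fixed rate, with unsent packets waiting in a backlog. The rate, capacity, and arrival pattern are illustrative assumptions:

```python
def token_bucket(arrivals, rate=1, capacity=3):
    """Simulate a token bucket: `rate` tokens added per tick, capped at capacity."""
    tokens, backlog, sent = capacity, 0, []    # bucket starts full
    for burst in arrivals:
        tokens = min(capacity, tokens + rate)  # steps 1-2: add token, cap bucket
        backlog += burst                       # packets waiting to be sent
        out = min(backlog, tokens)             # step 3: each packet removes a token
        tokens -= out
        backlog -= out                         # step 4: no token, packet must wait
        sent.append(out)
    return sent

# Same burst of 5 packets as in the leaky bucket example.
print(token_bucket([5, 0, 0, 1]))              # [3, 1, 1, 1]
```

Compare with the leaky bucket: saved-up tokens let the first tick emit a burst of 3 packets instead of 1, which is exactly the extra flexibility the token bucket is meant to provide.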

Let's understand with an example.

In figure (A) we see a bucket holding three tokens, with five packets waiting to
be transmitted. For a packet to be transmitted, it must capture and destroy
one token. In figure (B) we see that three of the five packets have gotten
through, but the other two are stuck waiting for more tokens to be generated.

Ways in which the token bucket is superior to the leaky bucket: The leaky bucket
algorithm controls the rate at which packets are introduced into the network,
but it is very conservative in nature. Some flexibility is introduced in the token
bucket algorithm. In the token bucket algorithm, tokens are generated at each
tick (up to a certain limit). For an incoming packet to be transmitted, it must
capture a token, and then the transmission takes place. Hence some of the
bursty packets are transmitted when tokens are available, which introduces
some amount of flexibility in the system.

Formula: M * S = C + ρ * S, i.e. S = C / (M − ρ)
where S is the burst time, M the maximum output rate,
ρ the token arrival rate, and C the capacity of the token bucket in bytes.

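A worked example of the burst-time formula, with numeric values chosen for illustration (they are assumptions, not from the notes): a bucket of C = 250 KB, a maximum output rate of M = 25 MB/s, and a token arrival rate of ρ = 2 MB/s.

```python
# Burst length from M * S = C + rho * S, i.e. S = C / (M - rho).
C = 250_000        # token bucket capacity, bytes
M = 25_000_000     # maximum output rate, bytes/second
rho = 2_000_000    # token arrival rate, bytes/second

S = C / (M - rho)  # maximum burst duration, seconds
print(round(S * 1000, 2))   # 10.87 (milliseconds)
```

So the sender may burst at the full 25 MB/s for only about 11 ms before the bucket empties and the output drops back to the token arrival rate ρ.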

Data Link Layer
o In the OSI model, the data link layer is the 2nd layer from the bottom (the 6th from
the top).
o The communication channels that connect adjacent nodes are known as links, and in
order to move the datagram from source to destination, the datagram must be moved
across each individual link.
o The main responsibility of the Data Link Layer is to transfer the datagram across an
individual link.
o The Data link layer protocol defines the format of the packet exchanged across the nodes
as well as the actions such as Error detection, retransmission, flow control, and random
access.
o The Data Link Layer protocols are Ethernet, token ring, FDDI and PPP.
o An important characteristic of the Data Link Layer is that a datagram can be handled by
different link layer protocols on different links in a path. For example, a datagram may be
handled by Ethernet on the first link and PPP on the second link.

Stop and Wait Protocol


Before understanding the Stop and Wait protocol, we should first know about the error
control mechanism. The error control mechanism is used so that the received
data is exactly the same as the data the sender has sent. The error
control mechanism is divided into two categories, i.e., Stop and Wait ARQ and
sliding window.

The sliding window is further divided into two categories, i.e., Go Back N and
Selective Repeat. Based on the usage, people select the error control
mechanism, whether it is stop and wait or sliding window.

What is Stop and Wait protocol?


Here stop and wait means that the sender sends a data packet to the receiver
and, after sending it, stops and waits until it receives the acknowledgment
from the receiver. The stop and wait protocol is a flow control protocol, where
flow control is one of the services of the data link layer.

It is a data-link layer protocol used for transmitting data over
noiseless channels. It provides unidirectional data transmission, which means
that either sending or receiving of data takes place at a time. It provides a
flow-control mechanism but does not provide any error control mechanism.

The idea behind this protocol is that after sending a frame, the sender
waits for the acknowledgment before sending the next frame.

Primitives of Stop and Wait Protocol


The primitives of stop and wait protocol are:

Sender side

Rule 1: Sender sends one data packet at a time.

Rule 2: Sender sends the next packet only when it receives the
acknowledgment of the previous packet.

Therefore, the idea of stop and wait protocol in the sender's side is very
simple, i.e., send one packet at a time, and do not send another packet before
receiving the acknowledgment.

Receiver side
Rule 1: Receive and then consume the data packet.

Rule 2: When the data packet is consumed, the receiver sends the
acknowledgment to the sender.

Therefore, the idea of stop and wait protocol in the receiver's side is also very
simple, i.e., consume the packet, and once the packet is consumed, the
acknowledgment is sent. This is known as a flow control mechanism.
Working of Stop and Wait protocol
The above figure shows the working of the stop and wait protocol. The sender
sends a data packet to the receiver and will not send the second packet without
receiving the acknowledgment of the first packet. The receiver sends the
acknowledgment for the data packet it has received. Once the
acknowledgment is received, the sender sends the next packet. This process
continues until all the packets have been sent. The main advantage of this
protocol is its simplicity, but it also has some disadvantages. For example, if
there are 1000 data packets to be sent, all 1000 packets cannot be sent at
once, because in the Stop and Wait protocol one packet is sent at a time.
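The sender and receiver rules above can be sketched as a small simulation. As the protocol assumes, the channel here is noiseless; the function and variable names are illustrative:

```python
def stop_and_wait(packets, channel):
    """Sketch of the sender's rules: send one packet, then block until
    its acknowledgment arrives before sending the next. `channel`
    delivers a packet and returns the ACK (noiseless, as assumed above)."""
    for seq, data in enumerate(packets):
        ack = None
        while ack != seq:              # Rule 2: wait for the ACK before
            ack = channel(seq, data)   # Rule 1: sending one packet at a time

received = []

def reliable_channel(seq, data):
    received.append(data)  # receiver rule 1: receive and consume the packet
    return seq             # receiver rule 2: acknowledge the consumed packet

stop_and_wait(["p1", "p2", "p3"], reliable_channel)
print(received)  # ['p1', 'p2', 'p3'] -- delivered one at a time, in order
```

The inner `while` loop makes the "stop and wait" visible: no second packet leaves the sender until the first packet's acknowledgment comes back.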

Sliding Window Protocol


The sliding window is a technique for sending multiple frames at a time. It
controls the flow of data packets between two devices where reliable and
in-order delivery of data frames is needed. It is also used in TCP
(Transmission Control Protocol).

In this technique, each frame is sent with a sequence number. The
sequence numbers are used to find the missing data at the receiver end. The
sliding window technique also uses the sequence numbers to avoid delivering
duplicate data.
Types of Sliding Window Protocol
Sliding window protocol has two types:

1. Go-Back-N ARQ

2. Selective Repeat ARQ

Go-Back-N ARQ
Go-Back-N ARQ protocol is also known as Go-Back-N Automatic Repeat
Request. It is a data link layer protocol that uses a sliding window method. In
this, if any frame is corrupted or lost, all subsequent frames have to be sent
again.

The size of the sender window is N in this protocol. For example, in Go-Back-8
the size of the sender window will be 8. The receiver window size is always 1.

If the receiver receives a corrupted frame, it discards it; the receiver does not
accept a corrupted frame. When the timer expires, the sender sends the
correct frame again. The design of the Go-Back-N ARQ protocol is shown below.
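The resend-the-window behaviour can be sketched as a minimal simulation. The loss model (a set of `(frame_index, attempt)` pairs) and all names are hypothetical, chosen only to make the retransmission visible:

```python
def go_back_n(frames, N, lose):
    """Sketch of Go-Back-N ARQ: up to N frames may be outstanding.
    If a frame is lost, the receiver discards it and everything after
    it, so the sender resends from that frame onwards.
    `lose` is a set of (frame_index, attempt) pairs simulating losses."""
    delivered, sent_log = [], []
    base, attempt = 0, {}
    while base < len(frames):
        end = min(base + N, len(frames))
        for i in range(base, end):        # (re)send the whole window
            sent_log.append(frames[i])
        for i in range(base, end):
            a = attempt.get(i, 0)
            attempt[i] = a + 1
            if (i, a) in lose:            # frame lost in transit: receiver
                break                     # discards it and all later frames
            delivered.append(frames[i])   # in-order frame accepted
            base = i + 1                  # window slides past the ACKed frame
    return delivered, sent_log

# Frame f1 is lost on its first attempt; f2 must be resent along with it.
delivered, sent = go_back_n(["f0", "f1", "f2", "f3"], N=3, lose={(1, 0)})
print(delivered)  # ['f0', 'f1', 'f2', 'f3']
print(sent)       # ['f0', 'f1', 'f2', 'f1', 'f2', 'f3'] -- f2 sent twice
```

Note that `f2` is transmitted twice even though it was never lost: once `f1` goes missing, every subsequent frame in the window is resent, which is exactly the bandwidth cost discussed below.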
Selective Repeat ARQ
Selective Repeat ARQ is also known as the Selective Repeat Automatic Repeat
Request. It is a data link layer protocol that uses a sliding window method.
The Go-Back-N ARQ protocol works well if there are few errors. But if there are
many errors in the frames, a lot of bandwidth is lost in sending the frames again.
So, we use the Selective Repeat ARQ protocol. In this protocol, the size of the
sender window is always equal to the size of the receiver window. The size of
the sliding window is always greater than 1.

If the receiver receives a corrupt frame, it does not simply discard it. It sends
a negative acknowledgment to the sender. The sender sends that frame again
as soon as it receives the negative acknowledgment; there is no waiting for
any time-out to send that frame. The design of the Selective Repeat ARQ
protocol is shown below.
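The resend-only-the-lost-frame behaviour can be sketched as follows. This is a simplified model in which losses are given up front and every retransmission succeeds; the names are illustrative:

```python
def selective_repeat(frames, lose):
    """Sketch of Selective Repeat ARQ: a corrupt or lost frame triggers
    a negative acknowledgment and only that frame is resent; later
    frames are buffered out of order and sorted by sequence number.
    `lose` is a set of frame indices lost on their first attempt."""
    sent_log, buffer = [], {}
    for seq, frame in enumerate(frames):
        sent_log.append(frame)
        if seq in lose:             # receiver NAKs the corrupt frame and
            sent_log.append(frame)  # the sender resends only that frame
        buffer[seq] = frame         # receiver buffers it, maybe out of order
    # the receiver sorts buffered frames before delivering them in order
    delivered = [buffer[seq] for seq in sorted(buffer)]
    return delivered, sent_log

delivered, sent = selective_repeat(["f0", "f1", "f2", "f3"], lose={1})
print(sent)       # ['f0', 'f1', 'f1', 'f2', 'f3'] -- only f1 resent
print(delivered)  # ['f0', 'f1', 'f2', 'f3']
```

Unlike Go-Back-N, losing `f1` costs exactly one retransmission; the price is the buffering and sorting at the receiver that the comparison below describes.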
Difference between the Go-Back-N ARQ and Selective Repeat ARQ

| Go-Back-N ARQ | Selective Repeat ARQ |
| --- | --- |
| If a frame is corrupted or lost, all subsequent frames have to be sent again. | Only the frame that is corrupted or lost is sent again. |
| If it has a high error rate, it wastes a lot of bandwidth. | There is low bandwidth loss. |
| It is less complex. | It is more complex because it has to do sorting and searching as well, and it also requires more storage. |
| It does not require sorting. | Sorting is done to get the frames in the correct order. |
| It does not require searching. | The search operation is performed in it. |
| It is used more. | It is used less because it is more complex. |
Services provided by the Data Link Layer: The primary
service of the data link layer is to support error-free transmission.
The physical layer sends the data from the sender's node to the
receiver's node as raw bits.
The data link layer should recognize and correct some errors in
the communicated data.
o Framing & Link access: Data Link Layer protocols encapsulate each network-layer
datagram within a link-layer frame before transmission across the link. A
frame consists of a data field, in which the network layer datagram is inserted, and
a number of header fields. The protocol specifies the structure of the frame as well as a
channel access protocol by which the frame is to be transmitted over the link.

o Error detection: Errors can be introduced by signal attenuation and noise. Data
Link Layer protocols provide a mechanism to detect one or more errors. This
is achieved by adding error detection bits to the frame, so that the receiving node
can perform an error check.

o Error correction: Error correction is similar to error detection, except that the
receiving node not only detects the errors but also determines where the errors
have occurred in the frame.

o Flow control: A receiving node can receive frames at a faster rate than it
can process them. Without flow control, the receiver's buffer can overflow,
and frames can get lost. To overcome this problem, the data link layer uses
flow control to prevent the sending node on one side of the link from
overwhelming the receiving node on the other side of the link.
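The error detection service above (adding check bits that the receiver verifies) can be illustrated with the simplest such scheme, a single even-parity bit. Real link layers use stronger codes such as CRC; this sketch only shows the principle:

```python
def add_parity(bits):
    """Even-parity sketch: append one check bit so the total number of
    1s in the frame is even."""
    return bits + [sum(bits) % 2]

def check_parity(frame):
    """Receiver side: recompute parity; any single-bit (or odd-count)
    error makes the total number of 1s odd and is detected."""
    return sum(frame) % 2 == 0

frame = add_parity([1, 0, 1, 1])  # -> [1, 0, 1, 1, 1]
print(check_parity(frame))        # True: frame arrived intact
frame[2] ^= 1                     # flip one bit "in transit"
print(check_parity(frame))        # False: the error is detected
```

A parity bit detects any odd number of flipped bits but misses even-count errors and cannot locate the error; that is where error-correcting codes, as described above, go further.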
