Data Link Layer

The document discusses the Data Link Layer, focusing on concepts like bit stuffing and byte stuffing, flow control, error control, and various ARQ protocols. It explains the importance of framing in data transmission, the differences between Stop and Wait, Go-Back-N, and Selective Repeat protocols, and provides an overview of HDLC and CSMA methods for managing network communication. Key features of these protocols include error detection, flow management, and efficiency in data transmission.

Uploaded by

Ritam Ghosh

DATA LINK LAYER

1. Define bit stuffing and byte stuffing.


Byte stuffing is a process where extra bytes (usually an escape character) are
inserted into the data stream to distinguish data from control information. If a
data byte matches a control character (such as the frame delimiter), it is "stuffed" by
preceding it with a special escape byte, possibly also modifying the byte to
mark it as part of the data.
Bit stuffing is the process of inserting a non-information bit (a 0) into the
data stream after a fixed number of consecutive bits of the same value
(typically five 1s), to prevent data from being mistaken for control sequences such as frame flags.
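The rule above can be sketched in a few lines of code (an illustrative sketch over bit strings; the function names are my own, not from any library):

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s (HDLC-style)."""
    out = []
    ones = 0
    for b in bits:
        out.append(b)
        if b == "1":
            ones += 1
            if ones == 5:       # five 1s in a row: stuff a 0
                out.append("0")
                ones = 0
        else:
            ones = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s."""
    out = []
    ones = 0
    skip = False
    for b in bits:
        if skip:                # this bit is a stuffed 0: drop it
            skip = False
            ones = 0
            continue
        out.append(b)
        if b == "1":
            ones += 1
            if ones == 5:
                skip = True
        else:
            ones = 0
    return "".join(out)

print(bit_stuff("0111111"))  # the six 1s get a 0 stuffed after the fifth
```

Because a 0 is forced after every five 1s, the flag pattern 01111110 (six consecutive 1s) can never appear inside stuffed data.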

2. Difference between bit stuffing and byte stuffing.

| Feature | Byte stuffing | Bit stuffing |
|---|---|---|
| Definition | Prevents the occurrence of a specific byte in a data stream by adding an extra (escape) byte. | Prevents the occurrence of a specific bit sequence in a data stream by inserting an additional bit. |
| Concept | Adds an extra byte to the data when a special byte is found. | Adds an extra bit to the data when a special bit pattern is found. |
| Special byte/bit | Flag byte (01111110, i.e. 0x7E) plus an escape byte. | Flag bit pattern (01111110). |
| Insertion | An escape byte is inserted before each occurrence of the flag or escape byte inside the data. | A 0 bit is inserted after every five consecutive 1s inside the data. |
| Removal | The extra byte is removed at the receiver end. | The extra bit is removed at the receiver end. |
| Efficiency | Less efficient, since an entire extra byte is added per occurrence. | More efficient, since only an extra bit is added per occurrence. |
| Applications | Byte-oriented protocols such as PPP. | Bit-oriented protocols such as HDLC and USB. |
| Objective | Prevent a data byte that matches the flag from being misread as a frame boundary. | Prevent a data bit sequence that matches the flag from being misread as a frame boundary. |
| How it works | The specific byte is preceded by a unique escape byte, possibly followed by a modified byte that indicates the original byte's value. | After five consecutive 1s a 0 is stuffed, so the flag pattern of six consecutive 1s can never appear inside the data. |
| Overhead | Adds an extra byte to the data stream for every occurrence of the specific byte. | Adds an extra bit to the data stream for every occurrence of the specific bit run. |
| Usage | Typically used in byte-oriented (character-based) protocols such as PPP. | Typically used in bit-oriented protocols with variable-length frames, such as HDLC. |
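The byte-stuffing column can be sketched in code. The constants below mirror PPP's flag (0x7E) and escape (0x7D) bytes, and the XOR-with-0x20 modification PPP applies to the escaped byte; treat this as an illustrative sketch, not a full PPP framer:

```python
FLAG, ESC = 0x7E, 0x7D  # PPP-style frame delimiter and escape byte

def byte_stuff(payload: bytes) -> bytes:
    """Escape any flag/escape byte in the payload: ESC, then the byte XOR 0x20."""
    out = bytearray([FLAG])                  # opening flag
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])    # stuff: escape byte + modified byte
        else:
            out.append(b)
    out.append(FLAG)                         # closing flag
    return bytes(out)

def byte_unstuff(frame: bytes) -> bytes:
    """Recover the payload from a stuffed frame (strips the two flags)."""
    body = frame[1:-1]
    out = bytearray()
    i = 0
    while i < len(body):
        if body[i] == ESC:
            out.append(body[i + 1] ^ 0x20)   # undo the XOR modification
            i += 2
        else:
            out.append(body[i])
            i += 1
    return bytes(out)
```

After stuffing, the flag byte appears only at the two frame boundaries, which is exactly what lets the receiver find where a frame starts and ends.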
3. What is Flow Control?
It is an important function of the Data Link Layer. It refers to a set of procedures that
tells the sender how much data it can transmit before waiting for acknowledgment
from the receiver.
Purpose of Flow Control
Any receiving device has a limited speed at which it can process incoming data and
also a limited amount of memory to store incoming data. If the source is sending the
data at a faster rate than the capacity of the receiver, there is a possibility of the
receiver being swamped. The receiver will keep losing some of the frames simply
because they are arriving too quickly and the buffer is also getting filled up.
This wastes frames on the network. Therefore, the receiving device must have
some mechanism to tell the sender to send fewer frames or to stop
transmission temporarily. In this way, flow control keeps the rate of frame
transmission at a value the receiver can handle.
Example: the Stop-and-Wait protocol.

4. What is Error Control?


The error control function of the data link layer detects the errors in transmitted
frames and re-transmits all the erroneous frames.
Purpose of Error Control
The error control function of the data link layer handles data frames that are
damaged in transit, data frames lost in transit, and acknowledgement frames
lost in transmission. The method used for error control is called Automatic
Repeat reQuest (ARQ), which is used over noisy channels.
Example: Stop-and-Wait ARQ and Sliding Window ARQ.

5. Difference Between Flow Control and Error Control
| Flow control | Error control |
|---|---|
| Flow control is meant only for the transmission of data from sender to receiver. | Error control is meant for the transmission of error-free data from sender to receiver. |
| There are two approaches: feedback-based flow control and rate-based flow control. | To detect errors in data, the approaches are checksum, cyclic redundancy check (CRC) and parity checking. To correct errors, the approaches are Hamming codes, binary convolutional codes, Reed-Solomon codes and low-density parity-check (LDPC) codes. |
| It prevents the loss of data and avoids overrunning of receive buffers. | It is used to detect and correct errors that occur in the transmitted data. |
| Examples of flow control techniques: Stop-and-Wait protocol and Sliding Window protocol. | Examples of error control techniques: Stop-and-Wait ARQ and Sliding Window ARQ (Go-Back-N ARQ, Selective Repeat ARQ). |

6. Automatic Repeat reQuest (ARQ)

Definition:

ARQ is a communication protocol in which the receiver detects errors in the


received data and automatically requests the sender to retransmit the
erroneous or lost data packets.

Key Features:

• Uses acknowledgements (ACKs) and negative acknowledgements


(NAKs/NACKs).
• If the sender receives an ACK, it knows the data was received correctly.
• If it receives a NAK or no response within a timeout period, it resends
the data.

Types of ARQ:

1. Stop-and-Wait ARQ – Sender sends one frame and waits for an ACK
before sending the next.
2. Go-Back-N ARQ – Sender can send multiple frames before needing an
ACK, but must retransmit from the error onward if a problem occurs.
3. Selective Repeat ARQ – Only the specific erroneous frames are
retransmitted, improving efficiency.

Purpose:

To ensure error-free and reliable communication, especially over unreliable or


noisy networks.

7. What do you mean by framing?

Definition:

Framing is the technique used at the Data Link Layer (Layer 2) of the OSI model to divide
a data stream into manageable pieces (frames) for reliable transmission and
synchronization between sender and receiver.

Why Is Framing Needed?

1. Data separation: To mark the start and end of a message.


2. Error detection: Frames often include checksums or CRCs.
3. Flow control: Frames help manage how fast data is sent.
4. Synchronization: Helps the receiver know where each frame begins and ends.

8. Difference between Stop-and-Wait, Go-Back-N, and Selective Repeat
| Key | Stop-and-Wait protocol | Go-Back-N protocol | Selective Repeat protocol |
|---|---|---|---|
| Sender window size | 1 | N | N |
| Receiver window size | 1 | 1 | N |
| Minimum number of sequence numbers | 2 | N + 1, where N is the sender window size | 2N, where N is the sender window size |
| Efficiency | 1/(1 + 2a), where a is the ratio of propagation delay to transmission delay | N/(1 + 2a) | N/(1 + 2a) |
| Acknowledgement type | Individual | Cumulative | Individual |
| Supported order | No specific order is required at the receiver end. | Only in-order delivery is accepted at the receiver end. | Out-of-order deliveries can also be accepted at the receiver end. |
| Retransmissions on a packet drop | 1 | N | 1 |
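The efficiency formulas in the table can be checked numerically; a small sketch (the helper names and the sample values of `a` and `N` are illustrative):

```python
def stop_and_wait_eff(a: float) -> float:
    # Stop-and-Wait: efficiency = 1 / (1 + 2a),
    # where a = propagation delay / transmission delay
    return 1 / (1 + 2 * a)

def sliding_window_eff(n: int, a: float) -> float:
    # Go-Back-N and Selective Repeat: N / (1 + 2a), capped at 100%
    return min(1.0, n / (1 + 2 * a))

# Sample values: a = 2 means propagation takes twice the transmission time
a = 2
print(stop_and_wait_eff(a))      # 1/5 = 0.2
print(sliding_window_eff(4, a))  # 4/5 = 0.8
print(sliding_window_eff(10, a)) # window large enough: capped at 1.0
```

This makes the key point of the table concrete: with a window of N frames the link can be kept N times busier than with Stop-and-Wait, until the pipe is full.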

9. High-Level Data Link Control (HDLC)


Definition

High-Level Data Link Control (HDLC) is a group of data link layer communication
protocols for transmitting data between network points or nodes. Since
it is a data link protocol, data is organized into frames. A frame is transmitted
via the network to the destination, which verifies its successful arrival. It is a
bit-oriented protocol applicable to both point-to-point and multipoint
communication.
Transfer Modes
HDLC supports two transfer modes: normal response mode and
asynchronous balanced mode.
• Normal Response Mode (NRM) − There are two types of stations: a primary
station that sends commands and secondary stations that respond to received
commands. It is used for both point-to-point and multipoint communication.
• Asynchronous Balanced Mode (ABM) − The configuration is balanced, i.e. each
station can both send commands and respond to commands. It is used only for
point-to-point communication.

HDLC Frame
HDLC is a bit-oriented protocol in which each frame contains up to six fields.
The structure varies according to the type of frame. The fields of an HDLC frame
are −
• Flag − An 8-bit sequence that marks the beginning and the end of the frame. The
bit pattern of the flag is 01111110.
• Address − Contains the address of the receiver. If the frame is sent by the primary
station, it contains the address(es) of the secondary station(s). If it is sent by a
secondary station, it contains the address of the primary station. The address field
may be from 1 byte to several bytes.
• Control − 1 or 2 bytes containing flow and error control information.
• Payload − Carries the data from the network layer. Its length may vary from one
network to another.
• FCS − A 2-byte or 4-byte frame check sequence for error detection. The standard
code used is a CRC (cyclic redundancy check).
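As a sketch of how such an FCS is computed, here is a bitwise 16-bit CRC using the CCITT polynomial commonly associated with HDLC framing. Note this is an illustration, not the exact HDLC FCS procedure: real implementations differ in bit-reflection and final-XOR conventions, and usually use a lookup table.

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16 with the CCITT polynomial x^16 + x^12 + x^5 + 1 (0x1021),
    initial value 0xFFFF, no bit reflection."""
    for byte in data:
        crc ^= byte << 8                         # bring the next byte into the register
        for _ in range(8):
            if crc & 0x8000:                     # MSB set: shift and XOR the polynomial
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
        # after 8 shifts the byte is fully absorbed
    return crc

# Well-known check value for this CRC variant over the ASCII string "123456789"
print(hex(crc16_ccitt(b"123456789")))  # 0x29b1
```

The sender appends the CRC to the frame; the receiver recomputes it over the received bits and compares, flagging the frame as damaged on a mismatch.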

Types of HDLC Frames

There are three types of HDLC frames. The type of a frame is determined by its
control field −
• I-frame − I-frames or Information frames carry user data from the network layer.
They also include flow and error control information piggybacked on the user data.
The first bit of the control field of an I-frame is 0.
• S-frame − S-frames or Supervisory frames do not contain an information field. They
are used for flow and error control when piggybacking is not required. The first two
bits of the control field of an S-frame are 10.
• U-frame − U-frames or Unnumbered frames are used for miscellaneous
functions such as link management. They may contain an information field if required.
The first two bits of the control field of a U-frame are 11.

10. Carrier Sense Multiple Access (CSMA)


Carrier Sense Multiple Access (CSMA) is a method used in computer networks to
manage how devices share a communication channel. In this protocol, each
device first senses the channel before sending data. If the channel is busy, the
device waits until it is free. This reduces collisions, in which two devices send
data at the same time, ensuring smoother communication across the network.
CSMA is commonly used in technologies like Ethernet and Wi-Fi.
The method was developed to decrease the chance of collision when two or
more stations start sending their signals over the data link layer. Carrier sense
multiple access requires that each station first check the state of the
medium before sending.
Types of CSMA Protocol
There are two main types of Carrier Sense Multiple Access
(CSMA) protocols, each designed to handle how devices manage
potential data collisions on a shared communication channel. These types
differ based on how they respond to the detection of a busy network:
1. CSMA/CD
2. CSMA/CA
Carrier Sense Multiple Access with Collision
Detection (CSMA/CD)
In this method, a station monitors the medium after it sends a frame to see
if the transmission was successful. If successful, the transmission is
finished; if not, the frame is sent again.

In the diagram, A starts sending the first bit of its frame at t1. Since C
sees the channel idle at t2, it starts sending its frame at t2. C detects A’s
frame at t3 and aborts its transmission. A detects C’s frame at t4 and aborts
its transmission. The transmission time for C’s frame is therefore t3 − t2, and
for A’s frame t4 − t1.
So the frame transmission time (Tfr) should be at least twice the
maximum propagation time (Tp). This can be deduced from the case where the
two stations involved in a collision are a maximum distance apart.

Throughput and Efficiency: The throughput of CSMA/CD is much greater
than that of pure or slotted ALOHA.
• For the 1-persistent method, throughput is about 50% when G = 1.
• For the non-persistent method, throughput can go up to 90%.

Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA)
The basic idea behind collision detection is that a station should be able to
receive while transmitting, in order to detect a collision caused by other stations.
In wired networks, if a collision occurs, the energy of the received signal almost
doubles, and the station can sense the possibility of a collision. In wireless
networks, most of the energy is used for transmission, and the energy of the
received signal increases by only 5-10% if a collision occurs, so a station cannot
use it to sense a collision. Therefore CSMA/CA was designed specifically for
wireless networks.
CSMA/CA uses three strategies:
1. InterFrame Space (IFS): When a station finds the channel busy,
it senses the channel again; when it finds the channel idle, it
waits for a period of time called the IFS. The IFS can also
be used to define the priority of a station or a frame: the higher
the IFS, the lower the priority.
2. Contention Window: An amount of time divided into slots.
A station that is ready to send frames chooses a random number
of slots as its wait time.
3. Acknowledgements: Positive acknowledgements and a timeout
timer help guarantee successful transmission of the frame.
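The contention-window strategy is usually combined with binary exponential backoff: the window doubles after each failed attempt. A sketch of that idea (the constants 15 and 1023 mirror typical 802.11 CWmin/CWmax values but are illustrative here):

```python
import random

def backoff_slots(attempt: int, cw_min: int = 15, cw_max: int = 1023) -> int:
    """Pick a random slot count from a contention window that doubles with
    each failed attempt (binary exponential backoff), capped at cw_max."""
    # window sizes: 15, 31, 63, ... for attempt = 0, 1, 2, ...
    cw = min(cw_max, (cw_min + 1) * (2 ** attempt) - 1)
    return random.randint(0, cw)

# After each collision or failed attempt the window grows,
# lowering the odds that two stations pick the same slot again.
for attempt in range(4):
    print(f"attempt {attempt}: wait {backoff_slots(attempt)} slots")
```

Growing the window spreads retransmissions of contending stations over more slots, which is what makes repeated collisions progressively less likely.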
Advantages of CSMA
• Increased Efficiency: CSMA ensures that only one device
communicates on the network at a time, reducing collisions and
improving network efficiency.
• Simplicity: CSMA is a simple protocol that is easy to implement
and does not require complex hardware or software.
• Flexibility: CSMA is a flexible protocol that can be used in a
wide range of network environments, including wired and
wireless networks.
• Low cost: CSMA does not require expensive hardware or
software, making it a cost-effective solution for network
communication.
Disadvantages of CSMA
• Limited Scalability: CSMA is not a scalable protocol and can
become inefficient as the number of devices on the network
increases.
• Delay: In busy networks, the requirement to sense the medium
and wait for an available channel can result in delays and
increased latency.
• Limited Reliability: CSMA can be affected by interference,
noise, and other factors, resulting in unreliable communication.
• Vulnerability to Attacks: CSMA can be vulnerable to certain
types of attacks, such as jamming and denial-of-service attacks,
which can disrupt network communication.
Comparison of Various Protocols

| Protocol | Transmission behavior | Collision detection method | Efficiency | Use cases |
|---|---|---|---|---|
| Pure ALOHA | Sends frames immediately | No collision detection | Low | Low-traffic networks |
| Slotted ALOHA | Sends frames at specific time slots | No collision detection | Better than pure ALOHA | Low-traffic networks |
| CSMA/CD | Monitors the medium after sending a frame, retransmits if necessary | Collision detection by monitoring transmissions | High | Wired networks with moderate to high traffic |
| CSMA/CA | Monitors the medium while transmitting, adjusts behavior to avoid collisions | Collision avoidance through random backoff time intervals | High | Wireless networks with moderate to high traffic and high error rates |

Difference Between CSMA/CA and CSMA/CD

| CSMA/CA | CSMA/CD |
|---|---|
| CSMA/CA takes effect before a collision. | CSMA/CD takes effect after a collision. |
| Commonly used in wireless networks. | Used in wired networks. |
| Minimizes the possibility of a collision. | Only reduces the recovery time after a collision. |
| First transmits the intent to send before the data transmission. | Resends the data frame whenever a conflict occurs. |
| Used in the IEEE 802.11 standard. | Used in the IEEE 802.3 standard. |
| More efficient than simple CSMA (Carrier Sense Multiple Access). | Similar to simple CSMA (Carrier Sense Multiple Access). |
| The type of CSMA that avoids collisions on a shared channel. | The type of CSMA that detects collisions on a shared channel. |
| Works at the MAC layer. | Also works at the MAC layer. |

11.ALOHA Protocol
1. Introduction

ALOHA was developed at the University of Hawaii in the 1970s to enable communication
between different islands via radio. It is one of the earliest random access protocols, and
forms the basis of modern protocols like Ethernet and Wi-Fi.

2. Types of ALOHA

There are two main types:

(a) Pure ALOHA

• Nodes can transmit data at any time.


• If two frames collide, both are lost.
• A retransmission is scheduled after a random time.

Efficiency:
Maximum efficiency = 1/(2e) ≈ 18.4%

(b) Slotted ALOHA

• Time is divided into slots equal to the frame transmission time.


• A device can only send data at the beginning of a time slot, reducing chances of
collision.
Efficiency:
Maximum efficiency = 1/e ≈ 36.8%
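Both maxima follow from the standard throughput formulas S = G·e^(−2G) for Pure ALOHA and S = G·e^(−G) for Slotted ALOHA, where G is the average number of frames generated per frame time. A quick numerical check:

```python
import math

def pure_aloha_throughput(g: float) -> float:
    # S = G * e^(-2G); maximized at G = 0.5, giving 1/(2e) ≈ 18.4%
    return g * math.exp(-2 * g)

def slotted_aloha_throughput(g: float) -> float:
    # S = G * e^(-G); maximized at G = 1, giving 1/e ≈ 36.8%
    return g * math.exp(-g)

print(round(pure_aloha_throughput(0.5), 3))    # ≈ 0.184
print(round(slotted_aloha_throughput(1.0), 3)) # ≈ 0.368
```

Halving the vulnerable period (one slot instead of two frame times) is exactly what turns the e^(−2G) factor into e^(−G) and doubles the peak throughput.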

3. Working Mechanism

1. A station sends data without checking the channel.


2. If an ACK is received, the transmission was successful.
3. If no ACK is received, the station waits a random time and retransmits.

4. Advantages

• Simple and easy to implement.


• Suitable for low-traffic environments.

5. Disadvantages

• High collision rate, especially in Pure ALOHA.


• Low efficiency in high-traffic situations.
• No carrier sensing.

6. Applications

• Satellite communication
• Initial design of Ethernet
• Basis for wireless protocols

Conclusion

ALOHA laid the foundation for modern random access protocols. While it's not efficient in
high-traffic networks, its simplicity made it valuable for early wireless systems and for
understanding network access principles.

Difference between Pure Aloha and Slotted Aloha

| Key | Pure Aloha | Slotted Aloha |
|---|---|---|
| Time of transmission | Any station can transmit data at any time. | Any station can transmit data only at the beginning of a time slot. |
| Time | Time is continuous and not globally synchronized. | Time is discrete and globally synchronized. |
| Vulnerable time | 2 × Tt | Tt |
| Probability of successful transmission | S = G × e^(−2G) | S = G × e^(−G) |
| Maximum efficiency | 18.4% | 36.8% |
| Number of collisions | Does not reduce the number of collisions. | Reduces the number of collisions to half, thus doubling the efficiency. |

Piggybacking in Computer Networks


Piggybacking is the technique of temporarily delaying an outgoing acknowledgment
and attaching it to the next data packet. When a data frame arrives, the receiver
does not send the control frame (acknowledgment) back immediately; it waits
until its network layer passes it the next data packet. The acknowledgment is
attached to this outgoing data frame, so the acknowledgment travels along with
the next data frame.

Working of Piggybacking
As the figure shows, with piggybacking a single message (ACK + DATA)
crosses the wire in place of two separate messages. Piggybacking improves
the efficiency of bidirectional protocols.
• If Host A has both an acknowledgment and data to send, the data
frame is sent with an ack field containing the sequence number of
the frame being acknowledged.
• If Host A has only an acknowledgment, it waits for some time; if a
data frame becomes available, it piggybacks the acknowledgment on
it, otherwise it sends a separate ACK frame.
• If Host A has only a data frame, it attaches the most recent
acknowledgment to it; a data frame can also be sent with an ack
field carrying a duplicate acknowledgment.
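The decision in the bullets above can be sketched as a small piece of code (the `Frame` type and `next_frame` helper are my own illustration, not from any protocol stack):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Frame:
    seq: int             # sequence number of this data frame
    ack: Optional[int]   # piggybacked acknowledgement (None if nothing to ack)
    data: bytes          # empty for a standalone ACK frame

def next_frame(seq: int, pending_ack: Optional[int],
               data: Optional[bytes]) -> Frame:
    """Attach any pending ACK to the outgoing data frame rather than
    sending the acknowledgement in a separate control frame."""
    if data is not None:
        return Frame(seq=seq, ack=pending_ack, data=data)  # ACK rides along
    return Frame(seq=seq, ack=pending_ack, data=b"")       # standalone ACK

f = next_frame(seq=3, pending_ack=7, data=b"payload")
print(f.ack, f.data)  # the ACK for frame 7 travels inside the data frame
```

One frame on the wire carries both directions' control information, which is the bandwidth saving piggybacking is after.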
Advantages of Piggybacking
1. The major advantage of piggybacking is the better use of
available channel bandwidth. This happens because an
acknowledgment frame does not need to be sent separately.
2. Usage cost reduction.
3. Improves latency of data transfer.
4. To avoid the delay and rebroadcast of frame transmission,
piggybacking uses a very short-duration timer.
Disadvantages of Piggybacking
1. The main disadvantage of piggybacking is the additional complexity.
2. If the data link layer waits too long before transmitting the
acknowledgment (blocks the ACK for some time), the sender's timer
will expire and the frame will be rebroadcast unnecessarily.
