Unit 2 Data Link Layer (CN)
Stop-and-wait ARQ
In Stop-and-Wait ARQ, after a frame is sent the sender starts a timeout timer.
If an acknowledgment for the frame arrives in time, the sender transmits the
next frame in the queue.
Otherwise, the sender retransmits the frame and restarts the timeout timer.
If the receiver sends a negative acknowledgment, the sender likewise
retransmits the frame.
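The send/ACK/timeout loop above can be sketched as a small simulation. Everything here is illustrative: the channel is faked with a random ACK-loss probability standing in for an expired timeout, and the function name and parameters are invented for this sketch.

```python
import random

def stop_and_wait_send(frames, max_retries=5, loss_rate=0.3):
    """Send each frame in order, waiting for an ACK; retransmit on timeout.

    The channel is simulated: an ACK is 'lost' with probability loss_rate,
    which stands in for the sender's timeout timer expiring.
    """
    log = []
    for seq, frame in enumerate(frames):
        for attempt in range(max_retries):
            log.append(f"send frame {seq} (attempt {attempt + 1})")
            ack_received = random.random() > loss_rate  # simulated channel
            if ack_received:
                log.append(f"ACK {seq} received -> next frame")
                break
            log.append(f"timeout for frame {seq} -> retransmit")
        else:
            raise RuntimeError(f"frame {seq} failed after {max_retries} tries")
    return log
```

Note that only one frame is ever outstanding at a time, which is exactly what makes Stop-and-Wait simple but slow on long links.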
Sliding Window ARQ
To deal with the retransmission of lost or damaged frames, a few
changes are made to the sliding window mechanism used in flow
control.
Go-Back-N ARQ:
In Go-Back-N ARQ, if a sent frame is lost or damaged, all frames from the
lost frame up to the last frame transmitted are retransmitted.
Selective Repeat ARQ:
Selective Repeat ARQ / Selective Reject ARQ is a type of Sliding Window ARQ
in which only the lost or damaged frames are retransmitted.
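The difference between the two schemes comes down to which frames are retransmitted after a loss. A minimal sketch (function names are invented for illustration):

```python
def retransmit_go_back_n(sent, lost_index):
    """Go-Back-N: the lost frame and everything sent after it are resent."""
    return sent[lost_index:]

def retransmit_selective_repeat(sent, lost_indices):
    """Selective Repeat: only the lost or damaged frames are resent."""
    return [sent[i] for i in lost_indices]
```

For frames 0-6 with frame 2 lost, Go-Back-N resends frames 2-6, while Selective Repeat resends only frame 2; the price of that efficiency is a receiver that must buffer out-of-order frames.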
Differences between Flow Control and Error Control
Examples of Flow Control techniques are:
1. Stop-and-Wait Protocol,
2. Sliding Window Protocol.
Examples of Error Control techniques are:
1. Stop-and-Wait ARQ,
2. Sliding Window ARQ.
Conclusion
Data frames are transmitted from the sender to the receiver.
For the transmission to be reliable, error-free, and efficient, flow
control and error control techniques are implemented.
Both these techniques are implemented in the Data Link Layer.
Flow Control is used to maintain the proper flow of data from
the sender to the receiver.
Error Control is used to find whether the data delivered to the
receiver is error-free and reliable.
Piggybacking:
Piggybacking is the technique of temporarily delaying the outgoing
acknowledgment and attaching it to the next data packet. When a data frame
arrives, the receiver does not send the control frame (acknowledgment) back
immediately. Instead, it waits until its network layer passes it the next
outgoing data packet; the acknowledgment is then attached to this frame.
Thus the acknowledgment travels along with the next data frame.
Why Piggybacking?
Efficiency can also be improved by making use of full-duplex transmission,
which allows communication in both directions at the same time. Full-duplex
transmission is better than both the simplex and half-duplex transmission
modes.
Piggybacking: A preferable solution is to use each channel to carry frames
in both directions, with both channels having the same capacity. Assume that
A and B are users. Then the data frames from A to B are intermixed with the
acknowledgments from B to A. A received frame can be identified as a data
frame or an acknowledgment by checking the kind field in the header of the
received frame.
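A piggybacked frame can be modeled as a data frame that optionally carries an acknowledgment number, with the kind field telling the receiver what arrived. A minimal sketch (the Frame layout and function name are invented for illustration):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Frame:
    kind: str                  # "data" or "ack"; receiver checks this field
    seq: Optional[int] = None  # sequence number of the data carried, if any
    ack: Optional[int] = None  # piggybacked acknowledgment number, if any
    payload: str = ""

def build_frame(payload=None, seq=None, pending_ack=None):
    """Attach a pending ACK to an outgoing data frame when one exists;
    otherwise fall back to a standalone ACK-only frame."""
    if payload is not None:
        return Frame(kind="data", seq=seq, ack=pending_ack, payload=payload)
    return Frame(kind="ack", ack=pending_ack)
```

When B has data of its own to send, the ACK rides along for free; a real protocol also needs a timer so a lone ACK is still sent if no reverse data shows up.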
Pure ALOHA?
Pure ALOHA refers to the original ALOHA protocol. The idea is
that each station sends a frame whenever one is available.
Because there is only one channel to share, there is a chance
that frames from different stations will collide.
The pure ALOHA protocol utilizes acknowledgments from the
receiver to ensure successful transmission. When a user sends a
frame, it expects confirmation from the receiver. If no
acknowledgment is received within a designated time period,
the sender assumes that the frame was not received and
retransmits the frame.
When two frames attempt to occupy the channel
simultaneously, a collision occurs and both frames become
garbled. If the first bit of a new frame overlaps with the last bit
of a frame that is almost finished, both frames will be
completely destroyed and will need to be retransmitted. If all
users retransmit their frames at the same time after a time-out,
the frames will collide again.
To prevent this, the pure ALOHA protocol dictates that each
user waits a random amount of time, known as the back-off
time, before retransmitting the frame. This randomness helps
to avoid further collisions
The time-out period is equal to the maximum possible round-trip
propagation delay, which is twice the time required for a signal to
propagate between the two most widely separated stations (2 x Tp).
Let all the packets have the same length, each requiring one time unit
(tp) for transmission. Suppose a station begins sending packet A at time
t0. If any other user B generated a packet between time t0 - tp and t0,
the end of packet B will collide with the beginning of packet A. Since in
pure ALOHA a station does not listen to the channel before transmitting,
it has no way of knowing that another frame was already underway.
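The vulnerable-time argument above leads to the classic throughput formulas S = G * e^(-2G) for pure ALOHA and S = G * e^(-G) for slotted ALOHA, where G is the mean number of frames attempted per frame time. These formulas assume the standard Poisson traffic model and are not stated in the notes; a quick sketch:

```python
import math

def throughput_pure_aloha(G):
    """S = G * e^(-2G): the vulnerable time spans two frame times."""
    return G * math.exp(-2 * G)

def throughput_slotted_aloha(G):
    """S = G * e^(-G): slotting halves the vulnerable time."""
    return G * math.exp(-G)
```

Pure ALOHA peaks at G = 0.5 with about 18.4% utilization, while slotted ALOHA peaks at G = 1 with about 36.8%, which is why slotting doubles the best-case throughput.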
High Collision Rate: The high collision rate in slotted ALOHA can result in a high packet loss
rate, which can negatively impact the overall performance of the network.
Inefficiency: The protocol is inefficient at high loads, as the efficiency decreases as the
number of nodes attempting to transmit increases.
Conclusion
In conclusion, Slotted ALOHA is a method used in
communication networks where time is divided into equal
slots, and devices can only send data at the beginning of a
time slot. This approach reduces the chances of data collisions
compared to pure ALOHA, making it more efficient for
transmitting data in a shared communication channel. By
using time slots, Slotted ALOHA improves the overall network
performance, especially when multiple devices are trying to
communicate simultaneously
In the diagram, A starts sending the first bit of its frame at t1, and
since C sees the channel idle at t2, it starts sending its frame at t2. C
detects A's frame at t3 and aborts its transmission. A detects C's frame
at t4 and aborts its transmission. The transmission time for C's frame is
therefore t3 - t2, and for A's frame it is t4 - t1.
So, the frame transmission time (Tfr) should be at least twice the
maximum propagation time (Tp). This worst case occurs when the two
stations involved in a collision are the maximum distance apart.
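The condition Tfr >= 2 x Tp translates directly into a minimum frame size for a given link. A small sketch (the 2 x 10^8 m/s signal speed is a typical assumed value for copper, not taken from the notes):

```python
def min_frame_size_bits(distance_m, bandwidth_bps, signal_speed_mps=2e8):
    """Minimum CSMA/CD frame size: the frame must still be on the wire
    for a full round trip, so L_min = bandwidth * 2 * Tp."""
    tp = distance_m / signal_speed_mps   # one-way propagation delay (s)
    return bandwidth_bps * 2 * tp        # bits sent during the round trip
```

For a 5 km segment at 10 Mbps this gives 500 bits, close to classic Ethernet's 512-bit minimum frame.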
Throughput and Efficiency: The throughput of CSMA/CD is much
greater than pure or slotted ALOHA.
For the 1-persistent method, throughput is 50% when G=1.
For the non-persistent method, throughput can go up to 90%
Carrier Sense Multiple Access with Collision
Avoidance (CSMA/CA)
Detecting a collision requires that a station be able to receive while
transmitting. In wired networks, if a collision occurs, the energy of the
received signal almost doubles, so the station can sense the collision. In
wireless networks, most of the energy is used for transmission, and the
energy of the received signal increases by only 5-10% if a collision
occurs; this is too small for the station to sense reliably. Therefore
CSMA/CA has been specially designed for wireless networks.
CSMA/CA uses three strategies:
1. InterFrame Space (IFS): When a station finds the channel busy, it
   keeps sensing the channel; when it finds the channel idle, it still
   waits for a period of time called the IFS. The IFS can also be used
   to define the priority of a station or a frame: the higher the IFS,
   the lower the priority.
2. Contention Window: It is the amount of time divided into slots.
A station that is ready to send frames chooses a random
number of slots as wait time.
3. Acknowledgments: The positive acknowledgments and time-
out timer can help guarantee a successful transmission of the
frame
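The contention-window strategy above amounts to binary exponential backoff: the window of candidate slots doubles after each collision. A minimal sketch (the cw_min and cw_max defaults are illustrative values loosely modeled on 802.11, not taken from the notes):

```python
import random

def backoff_slots(collisions, cw_min=15, cw_max=1023):
    """Pick a random wait time (in slots) from a contention window
    that doubles after each collision, capped at cw_max."""
    cw = min(cw_max, (cw_min + 1) * (2 ** collisions) - 1)
    return random.randint(0, cw)
```

Each retry spreads the stations over a larger window, so repeat collisions between the same pair become progressively less likely.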
Characteristics of CSMA/CA
1. Carrier Sense: The device listens to the channel before
transmitting, to ensure that it is not currently in use by another
device.
2. Multiple Access: Multiple devices share the same channel and may
   attempt to transmit at the same time.
3. Collision Avoidance: If two or more devices attempt to transmit
at the same time, a collision occurs. CSMA/CA uses random
backoff time intervals to avoid collisions.
4. Acknowledgment (ACK): After successful transmission, the
receiving device sends an ACK to confirm receipt.
5. Fairness: The protocol ensures that all devices have equal
access to the channel and no single device monopolizes it.
6. Binary Exponential Backoff: If a collision occurs, the device
waits for a random period of time before attempting to
retransmit. The backoff time increases exponentially with each
retransmission attempt.
7. Interframe Spacing: The protocol requires a minimum amount
of time between transmissions to allow the channel to be clear
and reduce the likelihood of collisions.
8. RTS/CTS Handshake: In some implementations, a Request-To-
Send (RTS) and Clear-To-Send (CTS) handshake is used to
reserve the channel before transmission. This reduces the
chance of collisions and increases efficiency.
9. Wireless Network Quality: The performance of CSMA/CA is
greatly influenced by the quality of the wireless network, such
as the strength of the signal, interference, and network
congestion.
10. Adaptive Behavior: CSMA/CA can dynamically
adjust its behavior in response to changes in network
conditions, ensuring the efficient use of the channel and
avoiding congestion.
Overall, CSMA/CA balances the need for efficient use of the shared
channel with the need to avoid collisions, leading to reliable and fair
communication in a wireless network
Comparison of Various Protocols

Pure ALOHA
  Transmission behavior: sends frames immediately
  Collision detection method: no collision detection
  Efficiency: low
  Use cases: low-traffic networks

Slotted ALOHA
  Transmission behavior: sends frames at specific time slots
  Collision detection method: no collision detection
  Efficiency: better than pure ALOHA
  Use cases: low-traffic networks

CSMA/CD
  Transmission behavior: monitors the medium after sending a frame,
  retransmits if necessary
  Collision detection method: collision detection by monitoring
  transmissions
  Efficiency: high
  Use cases: wired networks with moderate to high traffic

CSMA/CA
  Transmission behavior: monitors the medium before sending a frame,
  uses random backoff to avoid collisions
  Collision detection method: collision avoidance through random backoff
  Efficiency: high
  Use cases: wireless networks with moderate to high traffic and high
  error rates
Conclusion
In conclusion, Carrier Sense Multiple Access (CSMA) is a method
used by devices in a network to share the communication channel
without causing too many collisions. It works by having each device
listen to the channel before sending data. CSMA/CD (Collision
Detection) is used mostly in wired networks like Ethernet. It listens
for collisions during transmission and, if a collision happens, devices
stop sending, wait, and try again. CSMA/CA (Collision Avoidance) is
commonly used in wireless networks like Wi-Fi. It focuses on
preventing collisions before they happen by having devices wait for a
random time or send signals before transmitting data
Transfer Modes
HDLC supports two types of transfer modes, normal response mode
and asynchronous balanced mode.
Normal Response Mode (NRM) − Here there are two types of stations: a
primary station that sends commands and one or more secondary stations
that respond to received commands. It is used for both point-to-point
and multipoint communications.
Asynchronous Balanced Mode (ABM) − Here the configuration is balanced:
each station can act as both primary and secondary, sending both
commands and responses. It is used only for point-to-point
communication.
2. Establish –
The link proceeds to this phase after the presence of the peer has been
detected. When one of the nodes starts communication, the connection
enters this phase. All configuration parameters are negotiated by the
exchange of LCP frames (packets). If the negotiation succeeds, the link
is established and the system moves either to the authentication phase
or to the network-layer protocol phase. The end of this phase indicates
the open state of LCP.
3. Authenticate –
In PPP, authentication is optional. Peer authentication can be requested
by one or both endpoints. PPP enters the authentication phase if the
Password Authentication Protocol (PAP) or the Challenge-Handshake
Authentication Protocol (CHAP) is configured.
4. Network –
Once the LCP state is open and the link is established, PPP sends NCP
packets to choose and configure one or more network-layer protocols such
as IP, IPX, etc. This is required to configure the appropriate network
layer.
In this phase, each network control protocol may be opened and closed at
any time, and negotiation for these protocols also takes place. Because
PPP supports various protocols at the network layer, it specifies that
the two nodes establish a network-layer agreement before data is
exchanged at that layer.
5. Open –
Data transfer takes place in this phase. The connection remains in this
phase until the endpoints want to end it, at which point it moves to the
terminate phase.
6. Terminate –
The connection can be terminated at any point of time at the request of
either endpoint. LCP closes the link through the exchange of terminate
packets.
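The phases above can be summarized as a small state machine. The sketch below uses a hypothetical transition table: the phase names follow the description above, the "dead" phase is the idle state that precedes "establish", and the allowed transitions are an illustrative reading of the text, not an exhaustive encoding of the PPP specification.

```python
# Hypothetical transition table for the PPP phases described above.
PPP_TRANSITIONS = {
    "dead": ["establish"],
    "establish": ["authenticate", "network", "dead"],  # auth is optional
    "authenticate": ["network", "terminate"],
    "network": ["open"],
    "open": ["terminate"],
    "terminate": ["dead"],
}

def can_transition(current, nxt):
    """Check whether moving from phase `current` to phase `nxt` is allowed."""
    return nxt in PPP_TRANSITIONS.get(current, [])
```

For example, a link that skips authentication goes dead -> establish -> network -> open, while a failed authentication drops straight to terminate.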