Unit 2 Data Link Layer (CN)

The document discusses the Data Link Layer and Medium Access Sub Layer in computer networks, focusing on error detection and correction mechanisms, including types of errors, error detection methods like Parity Check and CRC, and error correction techniques such as Forward and Backward Error Correction. It also covers flow control methods, including Stop-and-Wait and Sliding Window protocols, and introduces the concept of piggybacking for efficient acknowledgment transmission. Additionally, it explains multiple access protocols like ALOHA and Pure ALOHA, emphasizing their random access nature and collision handling.

Unit 2- Data Link Layer and Medium Access Sub Layer:

Computer Network - Error Detection & Correction


The data-link layer uses error control mechanisms to ensure that
frames (data bit streams) are transmitted with a certain level of
accuracy. But to understand how errors are controlled, it is essential
to know what types of errors may occur.
Types of Errors
There may be three types of errors:
 Single-bit error

Only one bit in the frame, anywhere in it, is corrupt.

 Multiple-bit error

The frame is received with more than one bit in a corrupted state.

 Burst error

The frame contains two or more consecutive corrupted bits.


Error control mechanism may involve two possible ways:
 Error detection
 Error correction
Error Detection
Parity Check
One extra bit is sent along with the original bits to make the number
of 1s either even (in even parity) or odd (in odd parity).
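The parity computation can be sketched in a few lines of Python (a minimal illustration; the function name is ours):

```python
def parity_bit(bits, even=True):
    """Return the extra bit that makes the count of 1s even (or odd)."""
    ones = bits.count("1")
    if even:
        return "0" if ones % 2 == 0 else "1"
    return "1" if ones % 2 == 0 else "0"

data = "1011001"                               # four 1s: already even
print(data + parity_bit(data))                 # even parity appends "0"
print(data + parity_bit(data, even=False))     # odd parity appends "1"
```

The receiver recounts the 1s in the received word; a parity mismatch reveals any single-bit (or odd-count) error.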

Cyclic Redundancy Check (CRC)


CRC is a different approach for detecting whether the received frame
contains valid data. The technique involves binary division of the
data bits being sent. The divisor is generated from a polynomial. The
sender divides the bits being transmitted by the divisor and
calculates the remainder. Before sending the actual bits, the sender
appends the remainder to the end of them. The actual data bits plus
the remainder are called a codeword. The sender transmits the data
bits as codewords.

At the other end, the receiver performs the same division operation
on the codeword using the same CRC divisor. If the remainder is all
zeros, the data bits are accepted; otherwise, it is assumed that some
data corruption occurred in transit.
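The modulo-2 (XOR) division can be sketched as follows, using the divisor 1101 (from x³ + x² + 1) as an assumed example; function names are ours:

```python
def crc_remainder(data_bits, divisor):
    """Modulo-2 division; returns a remainder of len(divisor)-1 bits."""
    data = list(data_bits + "0" * (len(divisor) - 1))  # append r zero bits
    for i in range(len(data_bits)):
        if data[i] == "1":                             # XOR divisor in place
            for j in range(len(divisor)):
                data[i + j] = str(int(data[i + j]) ^ int(divisor[j]))
    return "".join(data[-(len(divisor) - 1):])

def crc_check(codeword, divisor):
    """Receiver side: divide the codeword; all-zero remainder = accept."""
    bits = list(codeword)
    for i in range(len(codeword) - len(divisor) + 1):
        if bits[i] == "1":
            for j in range(len(divisor)):
                bits[i + j] = str(int(bits[i + j]) ^ int(divisor[j]))
    return all(b == "0" for b in bits[-(len(divisor) - 1):])

data, divisor = "100100", "1101"
rem = crc_remainder(data, divisor)   # "001"
codeword = data + rem                # sender transmits "100100001"
print(crc_check(codeword, divisor))  # True
```

Flipping any bit of the codeword in transit makes the receiver's remainder non-zero, so the frame is rejected.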
Checksum
This method adds up all the data segments and transmits the sum
(the checksum) along with the message. The sender calculates the
checksum before transmitting the data, and the recipient recalculates
it upon receiving the data. If the two checksum values do not match,
an error occurred in transit; if they match, the data is assumed to be
error-free.
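A minimal sketch of a one's-complement checksum over 8-bit segments (the segment width and sample values are assumptions for illustration):

```python
def checksum(segments, bits=8):
    """One's-complement sum of fixed-width segments; the sender transmits
    the complement of the sum as the checksum."""
    mask = (1 << bits) - 1
    total = 0
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> bits)   # wrap the carry around
    return (~total) & mask

data = [0b10011001, 0b11100010, 0b00100100, 0b10000100]
cs = checksum(data)                 # 0b11011010
# receiver: summing data + checksum complements to zero when error-free
print(checksum(data + [cs]))        # 0
```

Any single corrupted segment changes the receiver's recomputed value away from zero, signalling an error.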

Error Correction in Computer Networks


 Forward Error Correction: in this scenario, the receiving end is
responsible for correcting the error. There is no need for
retransmission of the data from the sender’s side.
 Backward Error Correction: the sender is responsible for
retransmitting the data if errors are detected by the receiver.
The receiver signals the sender to resend the corrupted frame or
the entire message to ensure accurate delivery.
Hamming Code Error Correction
In this method extra parity bits are appended to the message, which
the receiver uses to locate and correct a single-bit error (with an
additional overall parity bit, double-bit errors can also be
detected). Consider the example below to understand this method in a
better way.
Suppose the sender wants to transmit the message whose bit
representation is ‘1011001.’ In this message:
 Total number of data bits (d) = 7
 Number of redundant bits (r) = 4, the smallest r that satisfies
2^r ≥ d + r + 1 (here 2^4 = 16 ≥ 12)
 Thus, total bits (d + r) = 7 + 4 = 11
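The 11-bit encoding can be sketched as follows, placing the parity bits at positions 1, 2, 4, and 8 (the power-of-two positions); function names are ours, and even parity is assumed:

```python
def hamming_encode(data):
    """Encode a 7-bit string; parity bits go at positions 1, 2, 4, 8."""
    code = [0] * 12                        # 1-indexed positions 1..11
    for pos, bit in zip((3, 5, 6, 7, 9, 10, 11), data):
        code[pos] = int(bit)               # data bits fill the other slots
    for p in (1, 2, 4, 8):                 # even parity over covered positions
        code[p] = sum(code[i] for i in range(1, 12) if i & p and i != p) % 2
    return "".join(map(str, code[1:]))

def hamming_correct(codeword):
    """Locate and flip a single corrupted bit; syndrome 0 means no error."""
    bits = [0] + [int(b) for b in codeword]
    syndrome = sum(p for p in (1, 2, 4, 8)
                   if sum(bits[i] for i in range(1, 12) if i & p) % 2)
    if syndrome:
        bits[syndrome] ^= 1
    return "".join(map(str, bits[1:])), syndrome

codeword = hamming_encode("1011001")            # "10100111001"
corrupted = codeword[:5] + "0" + codeword[6:]   # flip the bit at position 6
print(hamming_correct(corrupted))               # recovers codeword, syndrome 6
```

The failing parity checks sum to the position of the flipped bit, which is why the parity bits sit at powers of two.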

Flow Control and Error control protocols:


The Data Link Layer is responsible for reliable point-to-point data
transfer over a physical medium. To implement this, the data link
layer provides three functions:
 Line Discipline:
Line discipline establishes coordination between linked systems.
It decides which device sends data and when.
 Flow Control:
Flow control coordinates the amount of data the sender can send
before waiting for an acknowledgment from the receiver.
 Error Control:
Error control detects erroneous transmissions in data frames and
retransmits them.

What is Flow Control in the Data Link Layer?


Flow control is a set of procedures that restrict the amount of data a
sender should send before it waits for some acknowledgment from
the receiver.
 Flow Control is an essential function of the data link layer.
 It determines the amount of data that a sender can send.
 It makes the sender wait until an acknowledgment is received
from the receiver’s end.
 Methods of Flow Control are Stop-and-wait , and Sliding
window
Purpose of Flow Control
The device on the receiving end has a limited amount of
memory (to store incoming data) and limited speed (to process
incoming data). The receiver might get overwhelmed if the rate at
which the sender sends data is faster or the amount of data sent is
more than its capacity.
Buffers are blocks in the memory that store data until it is
processed. If the buffer is overloaded and there is more incoming
data, then the receiver will start losing frames.
Flow control is the method of limiting the rate of data transmission
to a value that the receiver can handle.

Methods to Control the Flow of Data


Stop-and-wait Protocol
Stop-and-wait protocol works under the assumption that the
communication channel is noiseless and transmissions are error-
free.
Working:
 The sender sends data to the receiver.
 The sender stops and waits for the acknowledgment.
 The receiver receives the data and processes it.
 The receiver sends an acknowledgment for the above data to
the sender.
 The sender sends data to the receiver after receiving the
acknowledgment of previously sent data.
 The process is unidirectional and continues until the sender
sends the End of Transmission (EoT) frame.
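The steps above can be sketched with an in-memory channel (a hypothetical simulation assuming a noiseless link, exactly as the protocol does; names are ours):

```python
from collections import deque

channel = deque()                      # frames in flight, delivered in order

def sender(frames):
    log = []
    for frame in frames:
        channel.append(frame)          # send one frame
        log.append(f"sent {frame}")
        ack = receiver()               # stop and wait for the ACK
        log.append(f"got {ack}")
    return log

def receiver():
    frame = channel.popleft()          # receive and process the frame
    return f"ACK({frame})"

print(sender(["F0", "F1", "F2"]))
```

Each `sent` entry is immediately followed by its `got ACK(...)` entry: the sender never has more than one frame outstanding.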

Sliding Window Protocol


The sliding window protocol is the flow control protocol for noisy
channels that allows the sender to send multiple frames even
before acknowledgments are received. It is called a Sliding
window because the sender slides its window upon receiving the
acknowledgments for the sent frames.
Working:
 The sender and receiver each have a “window” of frames. A
window is a buffer space that holds multiple frames. In
Go-Back-N, the size of the window on the receiver side is
always 1.
 Each frame is sequentially numbered from 0 to n − 1, where n is
the window size at the sender side.
 The sender sends as many frames as fit in its window.
 After receiving the desired number of frames, the receiver
sends an acknowledgment. The acknowledgment (ACK)
includes the number of the next expected frame.
1. The sender sends the frames 0 and 1 from the first window
(because the window size is 2).
2. The receiver, after receiving the sent frames, sends an
acknowledgment for frame 2 (as frame 2 is the next expected
frame).
3. The sender then sends frames 2 and 3. Since frame 2 is lost on
the way, the receiver sends back a “NAK” signal (a non-
acknowledgment) to inform the sender that frame 2 has been
lost. So, the sender retransmits frame 2.
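The numbered exchange above can be traced with a small simulation (a sketch with hypothetical names, not a full protocol implementation; "lost" frames are dropped once and then retransmitted):

```python
def sliding_window_send(frames, window=2, lost=frozenset()):
    """Trace the window-2 exchange; each frame in 'lost' is dropped once."""
    trace, base, dropped = [], 0, set(lost)
    while base < len(frames):
        burst = frames[base:base + window]
        trace.append(f"send {burst}")
        for i, f in enumerate(burst):
            if f in dropped:
                dropped.discard(f)              # lose it only once
                trace.append(f"NAK {f}")
                base += i                       # resend from the lost frame
                break
        else:
            base += len(burst)
            trace.append(f"ACK {base}")         # next expected frame number
    return trace

print(sliding_window_send([0, 1, 2, 3], window=2, lost={2}))
```

With frame 2 lost, the trace reproduces the story above: frames 0 and 1 go through (ACK 2), the second burst draws a NAK for frame 2, and the retransmission completes with ACK 4.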

What is Error Control in the Data Link Layer?


Error Control is a combination of both error detection and error
correction. It ensures that the data received at the receiver end is
the same as the one sent by the sender.
Error detection is the process by which the receiver informs the
sender about any erroneous frame (damaged or lost) sent during
transmission.
Error correction refers to the retransmission of those frames by
the sender.
Purpose of Error Control
Error control is a vital function of the data link layer that detects
errors in transmitted frames and retransmits all the erroneous
frames. Error discovery and amendment deal with data frames
damaged or lost in transit and the acknowledgment frames lost
during transmission. The method used in noisy channels to control
these errors is ARQ or Automatic Repeat Request.
Categories of Error Control

Stop-and-wait ARQ
 In stop-and-wait ARQ, after a frame is sent, the sender starts a
timeout counter.
 If acknowledgment of the frame arrives in time, the sender
transmits the next frame in the queue.
 Otherwise, the sender retransmits the frame and restarts the
timeout counter.
 If the receiver sends a negative acknowledgment, the sender
retransmits the frame.
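The retransmit-on-timeout loop can be sketched as follows (a simulation with an assumed random loss model, not a real link; names are ours):

```python
import random

def stop_and_wait_arq(frames, loss_rate=0.5, seed=1):
    """Resend each frame until its ACK arrives; loss is simulated randomly."""
    rng = random.Random(seed)
    events = []
    for seq, frame in enumerate(frames):
        while True:
            events.append(f"send {seq}")
            if rng.random() >= loss_rate:      # frame (and ACK) got through
                events.append(f"ack {seq}")
                break
            events.append(f"timeout {seq}")    # no ACK in time: resend
    return events

print(stop_and_wait_arq(["A", "B"]))
```

However lossy the channel, every frame is eventually acknowledged exactly once, at the cost of extra `send`/`timeout` events.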
Sliding Window ARQ
To deal with the retransmission of lost or damaged frames, a few
changes are made to the sliding window mechanism used in flow
control.

Go-Back-N ARQ:
In Go-Back-N ARQ, if a sent frame is lost or damaged, all frames from
that frame up to the last transmitted frame are retransmitted.
Selective Repeat ARQ:
Selective Repeat ARQ (Selective Reject ARQ) is a type of Sliding
Window ARQ in which only the suspected or damaged frames are
retransmitted.
Differences between Flow Control and Error Control

Flow control:
 Refers to the regulated transmission of data frames from the
sender to the receiver.
 Approaches: Feedback-based Flow Control and Rate-based Flow
Control.
 Focuses on the proper flow of data and the prevention of data
loss.
 Example techniques: Stop-and-Wait Protocol, Sliding Window
Protocol.

Error control:
 Refers to the transmission of error-free and reliable data frames
from the sender to the receiver.
 Approaches for error detection: Checksum, Cyclic Redundancy
Check, and Parity Checking. Approaches for error correction:
Hamming code, Binary Convolutional codes, Reed-Solomon codes, and
Low-Density Parity-Check codes.
 Focuses on the detection and correction of errors.
 Example techniques: Stop-and-Wait ARQ, Sliding Window ARQ.

Conclusion
 Data frames are transmitted from the sender to the receiver.
 To make transmission reliable, error-free, and efficient, flow
control and error control techniques are implemented.
 Both these techniques are implemented in the Data Link Layer.
 Flow Control is used to maintain the proper flow of data from
the sender to the receiver.
 Error Control is used to find whether the data delivered to the
receiver is error-free and reliable.

Piggybacking:
Piggybacking is the technique of temporarily delaying an outgoing
acknowledgment and attaching it to the next data frame. When a data
frame arrives, the receiver does not send the control frame
(acknowledgment) back immediately. It waits until its network layer
passes it the next outgoing data packet; the acknowledgment is then
attached to this outgoing data frame. Thus the acknowledgment travels
along with the next data frame.
Why Piggybacking?
Efficiency can be improved by making use of full-duplex
transmission. Full-duplex transmission allows communication in both
directions simultaneously, and it is better than both simplex and
half-duplex transmission modes.
Piggybacking: a preferable solution is to use the same channel to
carry frames both ways, with both directions having the same
capacity. Assume that A and B are users. Then the data frames from A
to B are intermixed with the acknowledgments from B to A. A receiver
can tell whether an arriving frame is data or an acknowledgment by
checking a kind field in the header of the received frame.

With piggybacking, a single combined message (ACK + DATA) travels
over the wire in place of two separate messages. Piggybacking
improves the efficiency of bidirectional protocols.
 If Host A has both an acknowledgment and data to send, the data
frame is sent with an ack field containing the sequence number of
the frame being acknowledged.
 If Host A has only an acknowledgment, it waits for some time; if
a data frame becomes available in that interval, it piggybacks
the acknowledgment onto it, otherwise it sends a standalone ACK
frame.
 If Host A has only a data frame, it repeats the last
acknowledgment in it, or sends the data frame with an ack field
marked as carrying no acknowledgment.
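The three rules above can be sketched as a decision function (the host dictionary and field names are assumptions for illustration, not a real API):

```python
def next_frame(host, timeout_expired):
    """Decide what a host sends: piggybacked DATA+ACK, bare DATA,
    a standalone ACK after waiting, or nothing yet."""
    data = host["data"].pop(0) if host["data"] else None
    ack = host.pop("pending_ack", None)
    if data is not None and ack is not None:
        return {"type": "DATA+ACK", "data": data, "ack": ack}  # piggyback
    if data is not None:
        return {"type": "DATA", "data": data}                  # no ack to carry
    if ack is not None and timeout_expired:
        return {"type": "ACK", "ack": ack}     # waited, no data showed up
    if ack is not None:
        host["pending_ack"] = ack              # keep waiting for data
        return None
    return None

h = {"data": ["hello"], "pending_ack": 4}
print(next_frame(h, timeout_expired=False))    # piggybacked DATA+ACK frame
```

The short-duration timer mentioned below maps onto the `timeout_expired` flag: once it fires, the acknowledgment stops waiting for data and goes out on its own.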
Advantages of Piggybacking
1. The major advantage of piggybacking is the better use of
available channel bandwidth. This happens because an
acknowledgment frame does not need to be sent separately.
2. Usage cost reduction.
3. Improves latency of data transfer.
4. To avoid delaying the acknowledgment so long that the sender
retransmits, piggybacking uses a very short-duration timer.
Disadvantages of Piggybacking
1. The main disadvantage of piggybacking is the additional
complexity.
2. If the data link layer waits too long before transmitting the
acknowledgment (blocking the ACK for some time), the sender’s
timer expires and the frame is rebroadcast.

Multiple access protocols-


What is ALOHA?
 ALOHA is a multiple access protocol in which multiple devices
share a common communication channel.
 The protocol allows devices to transmit data at any time,
without a set schedule. This is known as a random access
technique.
 It is asynchronous because there is no coordination between
devices.
 When multiple devices attempt to transmit data at the same
time, a collision can occur and the data becomes garbled.
 In this case, each device simply waits a random amount of
time before attempting to transmit again.
 The basic concept of the ALOHA protocol can be applied to any
system where uncoordinated users are competing for the use of
a shared channel.

What is Pure ALOHA?
 Pure ALOHA refers to the original ALOHA protocol. The idea is
that each station sends a frame whenever one is available.
Because there is only one channel to share, there is a chance
that frames from different stations will collide.
 The pure ALOHA protocol utilizes acknowledgments from the
receiver to ensure successful transmission. When a user sends a
frame, it expects confirmation from the receiver. If no
acknowledgment is received within a designated time period,
the sender assumes that the frame was not received and
retransmits the frame.
 When two frames attempt to occupy the channel
simultaneously, a collision occurs and both frames become
garbled. If the first bit of a new frame overlaps with the last bit
of a frame that is almost finished, both frames will be
completely destroyed and will need to be retransmitted. If all
users retransmit their frames at the same time after a time-out,
the frames will collide again.
 To prevent this, the pure ALOHA protocol dictates that each
user waits a random amount of time, known as the back-off
time, before retransmitting the frame. This randomness helps
to avoid further collisions
 The time-out period is equal to the maximum possible round-
trip propagation delay, which is twice the amount of time
required to send a frame between the two most widely
separated stations (2 x Tp).
 Let all the frames have the same length, each requiring one time
unit (the frame transmission time, Tfr) to send. Suppose a
station starts sending frame A at time t0. If any other station
B generated a frame between t0 − Tfr and t0, the end of frame B
will collide with the beginning of frame A. Since in pure ALOHA
a station does not listen to the channel before transmitting, it
has no way of knowing that another frame was already underway.

 Similarly, if another station wants to transmit between t0 and
t0 + Tfr (frame C), the beginning of frame C will collide with
the end of frame A. Thus, if two frames overlap by even the
smallest amount within this vulnerable period of 2 × Tfr, both
frames will be corrupted and will need to be retransmitted.
 Maximum Throughput of Pure ALOHA
 The throughput of pure ALOHA is S = G × e^(−2G), where G is the
average number of frames generated per frame-transmission time.
 The maximum throughput occurs when G = 0.5:
 Smax = 0.5 × e^(−1) ≈ 0.184
 This means the maximum throughput of Pure ALOHA is
approximately 18.4%. In other words, only about 18.4% of
the time is used for successful transmissions; the rest is
lost to collisions.
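The figure can be checked numerically from the standard pure ALOHA throughput formula S = G·e^(−2G):

```python
import math

def pure_aloha_throughput(G):
    """S = G * e^(-2G): successes per frame time at offered load G."""
    return G * math.exp(-2 * G)

print(round(pure_aloha_throughput(0.5), 3))   # 0.184, the maximum
```

Loads on either side of G = 0.5 do worse: too little traffic leaves the channel idle, too much causes collisions.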
Key Features of Pure ALOHA
 Random Access: Devices can send data whenever they have
something to transmit, without needing to wait for a
predetermined time slot.
 Uncoordinated Transmission: Devices do not coordinate with
each other before transmitting. They simply attempt to send
data whenever they have data to send.
 Simple Implementation: Pure ALOHA is straightforward to
implement, making it suitable for early network experiments
and scenarios with low traffic.
 Persistent Approach: Devices continue to attempt transmission
even after a collision, using a form of exponential backoff. This
means they introduce random delays before retrying, which
helps reduce the chances of repeated collisions.
 Contention-Based: Since devices transmit without
coordination, collisions may occur if two or more devices
transmit simultaneously. Collisions are detected through
feedback from the receiver or by the transmitting device itself.
Conclusion
Pure ALOHA is an early and simple method for sending data
over a shared network, where devices transmit whenever they
have data to send. It doesn't check if the channel is free,
leading to frequent collisions when two devices send data at
the same time. This results in a maximum efficiency of about
18.4%, meaning many transmissions are lost due to these
collisions
What is Slotted ALOHA?
Slotted ALOHA is an improved version of the pure ALOHA
protocol that aims to make communication networks more
efficient. In this version, the channel is divided into small, fixed-
length time slots and users are only allowed to transmit data at
the beginning of each time slot. This synchronization of
transmissions reduces the chances of collisions between
devices, increasing the overall efficiency of the network.
How Does Slotted ALOHA work?
In slotted ALOHA, channel time is divided into time slots, and
stations are only authorized to begin transmitting at the start of
a slot. The slot length corresponds exactly to the frame
transmission time. All users are synchronized to these slots, so
whenever a user sends a frame it must align precisely with the
next available slot. As a result, the vulnerable period is halved
to a single frame time, reducing the time wasted on collisions.
When a user wants to transmit a frame, it waits until the
beginning of the next time slot and then sends the frame. If the
frame is received successfully, the receiver sends an
acknowledgment. If the acknowledgment is not received within a
time-out period, the sender assumes the frame was lost and
retransmits it at the start of a later slot.
Slotted ALOHA increases channel utilization by reducing the
number of collisions. However, it also increases delay, as users
must wait for the next time slot to transmit their frames. There
is also a variant called non-persistent slotted ALOHA, in which a
station that wants to send data first listens to the channel
before sending; if the channel is busy, it waits for a certain
time before trying again.

Throughput (S) = G × e^(−G)

The maximum throughput occurs at G = 1:
S = 1/e ≈ 0.368
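The slotted ALOHA maximum can be checked the same way, from S = G·e^(−G):

```python
import math

def slotted_aloha_throughput(G):
    """S = G * e^(-G): successes per slot at offered load G."""
    return G * math.exp(-G)

print(round(slotted_aloha_throughput(1.0), 3))   # 0.368, twice pure ALOHA
```

Halving the vulnerable period is exactly what doubles the peak efficiency from 18.4% to 36.8%.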

Assumption of Slotted ALOHA


 All frames are of the same size.
 Time is divided into equal-sized slots, a slot equals the time to
transmit one frame
 Nodes start to transmit frames only at the beginning of slots.
 Nodes are synchronized.
 If two or more nodes transmit in a slot, all nodes detect
collision before the slot ends.
Advantages of Slotted ALOHA
 Simplicity: The Slotted Aloha protocol is relatively simple to
implement and understand, making it an easy option for low-
complexity networks.
 Flexibility: Slotted Aloha can be used in a wide range of
network environments, including those with varying numbers
of nodes and varying traffic loads
 Low Overhead: Slotted Aloha does not require complex
management or control mechanisms, which can help to reduce
the overhead and complexity of the network

Disadvantages of Slotted ALOHA


 Low Throughput: The maximum throughput of the Slotted Aloha protocol is still only about
36.8%, which can be limiting for high-bandwidth applications.

 High Collision Rate: The high collision rate in slotted ALOHA can result in a high packet loss
rate, which can negatively impact the overall performance of the network.

 Inefficiency: The protocol is inefficient at high loads, as the efficiency decreases as the
number of nodes attempting to transmit increases.

Conclusion
In conclusion, Slotted ALOHA is a method used in
communication networks where time is divided into equal
slots, and devices can only send data at the beginning of a
time slot. This approach reduces the chances of data collisions
compared to pure ALOHA, making it more efficient for
transmitting data in a shared communication channel. By
using time slots, Slotted ALOHA improves the overall network
performance, especially when multiple devices are trying to
communicate simultaneously

Carrier Sense Multiple Access (CSMA)


Carrier Sense Multiple Access (CSMA) is a method used in computer
networks to manage how devices share a communication channel. In
this protocol, each device first senses the channel before sending
data. If the channel is busy, the device waits until it is free. This
helps reduce collisions, in which two devices send data at the same
time, ensuring smoother communication across the network. CSMA is
commonly used in technologies like Ethernet and Wi-Fi.

What is Vulnerable Time in CSMA?


Vulnerable time is the short period when there’s a chance that two
devices on a network might send data at the same time and cause a
collision.
Vulnerable time = Propagation time (Tp)

Types of CSMA Protocol


1. CSMA/CD
2. CSMA/CA

Carrier Sense Multiple Access with


Collision Detection (CSMA/CD)
In this method, a station monitors the medium after it sends a
frame to see whether the transmission was successful. If it was,
the transmission is finished; if not, the frame is sent again.

In the diagram, A starts sending the first bit of its frame at t1;
since C sees the channel idle at t2, it starts sending its frame at
t2. C detects A’s frame at t3 and aborts its transmission. A detects
C’s frame at t4 and aborts its transmission. The transmission time
for C’s frame is therefore t3 − t2, and for A’s frame it is t4 − t1.
So, the frame transmission time (Tfr) should be at least twice the
maximum propagation time (Tp). This follows from the worst case, in
which the two stations involved in a collision are the maximum
distance apart.
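The Tfr ≥ 2·Tp condition translates directly into a minimum frame size; a quick check, assuming a signal propagation speed of 2×10⁸ m/s (an assumed figure for copper):

```python
def min_frame_size(bandwidth_bps, distance_m, speed_mps=2e8):
    """Tfr >= 2*Tp  =>  L/B >= 2*d/v  =>  L >= 2*d*B/v  (bits)."""
    Tp = distance_m / speed_mps          # one-way propagation delay
    return 2 * Tp * bandwidth_bps        # minimum frame length in bits

# 10 Mbps over 2500 m: ~250 bits from propagation alone; classic
# Ethernet's 512-bit minimum adds margin for repeater delays.
print(round(min_frame_size(10e6, 2500)))  # 250
```

Any shorter frame could finish transmitting before a collision signal from the far end gets back, so the sender would never learn of the collision.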
Throughput and Efficiency: The throughput of CSMA/CD is much
greater than pure or slotted ALOHA.
 For the 1-persistent method, throughput is 50% when G=1.
 For the non-persistent method, throughput can go up to 90%
Carrier Sense Multiple Access with Collision
Avoidance (CSMA/CA)
The basic idea behind CSMA/CA is that the station should be able to
receive while transmitting to detect a collision from different
stations. In wired networks, if a collision has occurred then the
energy of the received signal almost doubles, and the station can
sense the possibility of collision. In the case of wireless networks,
most of the energy is used for transmission, and the energy of the
received signal increases by only 5-10% if a collision occurs. It can’t
be used by the station to sense collision. Therefore CSMA/CA has
been specially designed for wireless networks.
These are three types of strategies:
1. InterFrame Space (IFS): When a station finds the channel busy,
it keeps sensing the channel; when it finds the channel idle, it
waits for a period of time called the IFS time before
transmitting. IFS can also be used to define the priority of a
station or a frame: the higher the IFS, the lower the priority.
2. Contention Window: The contention window is an amount of time
divided into slots. A station that is ready to send frames
chooses a random number of slots as its wait time.
3. Acknowledgments: The positive acknowledgments and time-
out timer can help guarantee a successful transmission of the
frame
Characteristics of CSMA/CA
1. Carrier Sense: The device listens to the channel before
transmitting, to ensure that it is not currently in use by another
device.
2. Multiple Access: Multiple devices share the same channel and
can transmit simultaneously.
3. Collision Avoidance: If two or more devices attempt to transmit
at the same time, a collision occurs. CSMA/CA uses random
backoff time intervals to avoid collisions.
4. Acknowledgment (ACK): After successful transmission, the
receiving device sends an ACK to confirm receipt.
5. Fairness: The protocol ensures that all devices have equal
access to the channel and no single device monopolizes it.
6. Binary Exponential Backoff: If a collision occurs, the device
waits for a random period of time before attempting to
retransmit. The backoff time increases exponentially with each
retransmission attempt.
7. Interframe Spacing: The protocol requires a minimum amount
of time between transmissions to allow the channel to be clear
and reduce the likelihood of collisions.
8. RTS/CTS Handshake: In some implementations, a Request-To-
Send (RTS) and Clear-To-Send (CTS) handshake is used to
reserve the channel before transmission. This reduces the
chance of collisions and increases efficiency.
9. Wireless Network Quality: The performance of CSMA/CA is
greatly influenced by the quality of the wireless network, such
as the strength of the signal, interference, and network
congestion.
10. Adaptive Behavior: CSMA/CA can dynamically
adjust its behavior in response to changes in network
conditions, ensuring the efficient use of the channel and
avoiding congestion.
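The binary exponential backoff in item 6 can be sketched as follows (slot counting only; the cap of 10 doublings follows classic Ethernet and is an assumption here, not part of the text above):

```python
import random

def backoff_slots(attempt, rng=random):
    """After the n-th collision, wait a random number of slots in
    [0, 2^n - 1], with the window capped after 10 doublings."""
    n = min(attempt, 10)
    return rng.randrange(2 ** n)

# the window doubles with each retry: [0,1], [0,3], [0,7], ...
for attempt in (1, 2, 3):
    print(backoff_slots(attempt))
```

Doubling the window on every collision spreads contending stations out in time, so repeated collisions between the same pair become increasingly unlikely.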
Overall, CSMA/CA balances the need for efficient use of the shared
channel with the need to avoid collisions, leading to reliable and fair
communication in a wireless network
Comparison of Various Protocols

 Pure ALOHA − Transmission behavior: sends frames immediately.
Collision handling: no collision detection. Efficiency: low.
Use cases: low-traffic networks.
 Slotted ALOHA − Transmission behavior: sends frames only at
specific time slots. Collision handling: no collision detection.
Efficiency: better than pure ALOHA. Use cases: low-traffic
networks.
 CSMA/CD − Transmission behavior: monitors the medium after
sending a frame and retransmits if necessary. Collision
handling: collision detection by monitoring transmissions.
Efficiency: high. Use cases: wired networks with moderate to
high traffic.
 CSMA/CA − Transmission behavior: monitors the medium while
transmitting and adjusts behavior to avoid collisions. Collision
handling: collision avoidance through random backoff time
intervals. Efficiency: high. Use cases: wireless networks with
moderate to high traffic and high error rates.
Conclusion
In conclusion, Carrier Sense Multiple Access (CSMA) is a method
used by devices in a network to share the communication channel
without causing too many collisions. It works by having each device
listen to the channel before sending data. CSMA/CD (Collision
Detection) is used mostly in wired networks like Ethernet. It listens
for collisions during transmission and, if a collision happens, devices
stop sending, wait, and try again. CSMA/CA (Collision Avoidance) is
commonly used in wireless networks like Wi-Fi. It focuses on
preventing collisions before they happen by having devices wait for a
random time or send signals before transmitting data

High-Level Data Link Control (HDLC):


High-Level Data Link Control (HDLC) is a group of data link layer
communication protocols for transmitting data between network points
or nodes. Since it is a data link protocol, data is organized into
frames. A frame is transmitted via the network to the destination,
which verifies its successful arrival. It is a bit-oriented protocol
applicable to both point-to-point and multipoint communications.

Transfer Modes
HDLC supports two types of transfer modes, normal response mode
and asynchronous balanced mode.
 Normal Response Mode (NRM) − Here there are two types of
stations: a primary station that sends commands and secondary
stations that respond to received commands. It is used for both
point-to-point and multipoint communications.

 Asynchronous Balanced Mode (ABM) − Here the configuration is
balanced, i.e. each station can both send commands and respond
to commands. It is used only for point-to-point communications.
HDLC Frame
HDLC is a bit-oriented protocol where each frame contains up to six
fields. The structure varies according to the type of frame. The
fields of an HDLC frame are:

 Flag − An 8-bit sequence that marks the beginning and the end of
the frame. The bit pattern of the flag is 01111110.
 Address − Contains the address of the receiver. If the frame is
sent by the primary station, it contains the address(es) of the
secondary station(s). If it is sent by a secondary station, it
contains the address of the primary station. The address field
may be from 1 byte to several bytes.
 Control − 1 or 2 bytes containing flow and error control
information.
 Payload − Carries the data from the network layer. Its length
may vary from one network to another.
 FCS − A 2-byte or 4-byte frame check sequence for error
detection. The standard code used is CRC (cyclic redundancy
check).
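Because the flag pattern is 01111110, HDLC bit-stuffs the fields between the flags so the payload can never mimic a flag; a sketch (function name is ours):

```python
FLAG = "01111110"

def bit_stuff(payload_bits):
    """Insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in payload_bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")        # break the run before it reaches six 1s
            run = 0
    return "".join(out)

frame_body = "0111111001111101"
stuffed = bit_stuff(frame_body)
print(FLAG + stuffed + FLAG)       # flags can now be located unambiguously
```

The receiver reverses the process, deleting the 0 that follows any run of five 1s, so the original field contents are recovered exactly.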
Types of HDLC Frames
There are three types of HDLC frames. The type of frame is
determined by the control field of the frame −
 I-frame − I-frames or Information frames carry user data from
the network layer. They also include flow and error control
information piggybacked on the user data. The first bit of the
control field of an I-frame is 0.
 S-frame − S-frames or Supervisory frames do not contain an
information field. They are used for flow and error control when
piggybacking is not required. The first two bits of the control
field of an S-frame are 10.
 U-frame − U-frames or Unnumbered frames are used for myriad
miscellaneous functions, like link management. A U-frame may
contain an information field, if required. The first two bits of
the control field of a U-frame are 11.
Point-to-Point Protocol (PPP)
Point-to-Point Protocol (PPP) is a data link layer protocol used to
establish a direct connection between two network nodes, often for
point-to-point links like dial-up modems or DSL connections. It
provides a standard way to encapsulate network layer protocol
information for transmission over these links, enabling the exchange
of data packets. PPP also supports authentication, encryption, and
data compression.
1. Dead –
In this phase, the link is idle. Carrier detection is the event
that indicates the physical layer is ready, after which PPP
proceeds to the Establish phase. Disconnection of the modem line
returns the connection to this phase. The LCP automaton is in
its initial state during this phase.

2. Establish –
The link enters this phase once the presence of a peer is
detected. When one of the nodes starts communication, the
connection enters this phase. All configuration parameters are
negotiated through the exchange of LCP frames. If the
negotiation succeeds, the link is established and the system
proceeds either to the authentication phase or directly to the
network-layer protocol phase. The end of this phase corresponds
to the open state of LCP.
3. Authenticate –
In PPP, authentication is optional. Peer authentication can be
requested by one or both of the endpoints. PPP enters
authentication phase if Password Authentication Protocol (PAP)
or Challenge-Handshake Authentication Protocol (CHAP) is
configured.
4. Network –
Once the LCP state is open and the link is established, PPP
sends NCP packets to choose and configure one or more
network-layer protocols such as IP or IPX. This configures the
appropriate network layer.
In this phase, each network control protocol may be opened and
closed at any time, and negotiation for these protocols also
takes place. Because PPP supports multiple protocols at the
network layer, it specifies that the two nodes establish a
network-layer agreement before data is exchanged at that layer.
5. Open –
Data transfer takes place in this phase. The connection remains
in this phase until one of the endpoints wants to end it, at
which point the connection moves to the Terminate phase.
6. Terminate –
The connection can be terminated at any time at the request of
either endpoint. LCP closes the link through the exchange of
terminate packets.
