
UNIT-2

Design Issues in Data Link Layer


The Data Link Layer is the second layer of the OSI model, lying just above the physical layer. The data link layer is responsible for maintaining the data link between two hosts or nodes.
The data link layer is divided into two sub-layers:

1. Logical Link Control (LLC) or Data Link Control (DLC) Sublayer
2. Media Access Control (MAC) Sublayer

Logical Link Control Sub-layer (LLC) –

This sub-layer is the upper sub-layer of the data link layer and provides flow control and error control. It is responsible for assigning the frame sequence number. Thus it controls the synchronization, flow control, and error checking functions of the data link layer. Its functions are –
• (i) Error recovery.
• (ii) Flow control operations.
• (iii) User addressing.
Media Access Control Sub-layer (MAC) –
The bottom sub-layer of the data link layer is the Media Access Control, also known as Medium Access Control. It provides multiplexing and flow control for the transmission medium. The main responsibility of this sub-layer is to encapsulate the frame, check for transmission errors, and then allow the frame to be forwarded to the upper layer.
Functions are –
• (i) It controls access to the transmission medium.
• (ii) It performs unique addressing of stations directly connected to the LAN.
• (iii) Detection of errors.

Functions of the Data-link Layer:

1. Framing: The packet received from the Network layer is known as a frame in the Data link layer. At the sender's side, the DLL receives packets from the Network layer, divides them into small frames, and then sends each frame bit-by-bit to the physical layer. It also attaches some special bits (for error control and addressing) in the header and trailer of the frame. At the receiver's end, the DLL takes the bits from the Physical layer, organizes them into frames, and sends them to the Network layer.
2. Addressing: The data link layer encapsulates the source and destination MAC (physical) addresses in the header of each frame to ensure node-to-node delivery. A MAC address is the unique hardware address assigned to a device during manufacturing.
3. Error Control: Data can get corrupted due to various reasons like noise, attenuation, etc. So, it is the responsibility of the data link layer to detect errors in the transmitted data and correct them using error detection and error correction techniques. The DLL adds error detection bits to the frame so that the receiver can check whether the received data is correct.
4. Flow Control: If the receiver’s receiving speed is lower than the sender’s
sending speed, then this can lead to an overflow in the receiver’s buffer and
some frames may get lost. So, it’s the responsibility of DLL to synchronize the
sender’s and receiver’s speeds and establish flow control between them.
5. Access Control: When multiple devices share the same communication channel, there is a high probability of collision, so it is the responsibility of the DLL to determine which device has control over the channel. Protocols such as CSMA/CD and CSMA/CA can be used to avoid collisions and the loss of frames in the channel.
Framing In Data Link Layer

• Framing in the data link layer operates over a point-to-point connection between the sender and receiver. Framing is a primary function of the data link layer, and it provides a way to transmit data between the connected devices.
• Framing uses frames to send or receive data. The data link layer
receives packets from the network layer and converts them into
frames.
• If the frame size is too large, then the packet can be divided
into smaller frames. Small frames are more efficient for flow
control and error control.
• The data link layer needs to pack bits into frames so that each
frame is distinguishable from another. The simple act of
inserting a letter into an envelope separates one piece of
information from another.

Parts of a Frame

• Each frame in the data link layer consists of four parts: header, payload field, trailer, and flag.
1. Header – The source and destination addresses are placed in the header part of the frame.
2. Payload field – It contains the actual message or information that the sender wants to transmit to the destination machine.
3. Trailer – The trailer comprises error detection and error correction bits.
4. Flag – It marks the beginning and end of a particular frame.
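The frame layout above can be pictured with a short Python sketch. This is illustrative only: the 1-byte flag value, the 1-byte addresses, and the simple parity trailer are assumptions made for the example, not part of any real protocol.

FLAG = b"\x7e"   # flag byte marking the beginning and end of the frame (assumed value)

def build_frame(src: bytes, dst: bytes, payload: bytes) -> bytes:
    header = dst + src                              # destination and source addresses
    trailer = bytes([sum(header + payload) % 256])  # toy 1-byte error-detection trailer
    return FLAG + header + payload + trailer + FLAG

print(build_frame(src=b"\x01", dst=b"\x02", payload=b"Hello"))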
Types of Framing in Computer Networks

Based on the size, the following are the types of framing in computer
networks,
• Fixed Size Framing
• Variable Size Framing

Fixed Size Framing

• The frame has a fixed size. In fixed-size framing, there is no need to define frame boundaries, since the size itself marks the beginning and end of a frame.
For example, this type of framing is used in ATM (Asynchronous Transfer Mode) wide area networks, which use fixed-size frames called cells.
Variable-Sized Framing
• Here, the size of each frame to be transmitted may be different.
• So additional mechanisms are used, to mark the end of one frame and the
beginning of the next frame.

Two ways to define frame delimiters in variable-sized framing are −

• Length Field − A length field in the frame header gives the size of the frame. It is used in Ethernet (IEEE 802.3).
• End Delimiter − A special pattern is used as a delimiter to mark the end of a frame. It is used in Token Ring. If the pattern can also occur in the message, one of two approaches is used to avoid the problem −
o Byte Stuffing − An extra byte is stuffed into the message to differentiate the data from the delimiter. This is also called character-oriented framing. The main purpose of character stuffing is to prevent conflicts between the actual data being transmitted and the control characters used to mark the beginning and end of a data frame.
o Bit Stuffing − An extra bit is stuffed into the message to differentiate the data from the delimiter. This is also called bit-oriented framing.
• To separate one frame from the next, an 8-bit (or 1-byte) flag is added at
the beginning and the end of a frame. Flags are the frame delimiters
signalling the start and end of the frame.
• The problem is that any pattern used for the flag could also appear in the original information. In such a situation, byte stuffing is used: a byte (usually the escape character, ESC) is added to the data section of the frame whenever a character with the same pattern as the flag occurs.
• Whenever the receiver encounters the ESC character, it removes it from
the data section and treats the next character as data, not a flag. But the
problem arises when the text contains one or more escape characters
followed by a flag. To solve this problem, the escape characters that are
part of the text are marked by another escape character i.e., if the escape
character is part of the text, an extra one is added to show that the
second one is part of the text.
• By adding escape characters before these special characters in the data,
character stuffing ensures that they are not misinterpreted as frame
delimiters and that the original data is transmitted accurately.

• Common control characters used in character stuffing are:
o Start of Frame (SOF) or Start Character: marks the beginning of a data frame and indicates the start of a message.
o End of Frame (EOF) or End Character: marks the end of a data frame and indicates the end of a message.
o Escape Character (ESC): used to escape, i.e. indicate that the following character is data rather than a control character.

Bit Stuffing Mechanism
• Here, the delimiting flag sequence contains six consecutive 1s. Most protocols use the 8-bit pattern 01111110 as the flag. In order to differentiate the message from the flag when the same bit sequence appears in the data, a single bit is stuffed into the message.
• Whenever a 0 bit is followed by five consecutive 1 bits in the message, an extra 0 bit is stuffed after the five 1s. When the receiver receives the message, it removes the stuffed 0 after each sequence of five 1s. The un-stuffed message is then passed to the upper layers.
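A small Python sketch of the bit-stuffing rule described above, working on a string of '0'/'1' characters for readability (this representation is an assumption made only for the illustration):

def bit_stuff(bits: str) -> str:
    # after every run of five consecutive 1s, insert a 0 so the data never mimics 01111110
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    # remove the 0 that follows every run of five consecutive 1s
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            i += 1            # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)

data = "011111101111110"
assert bit_unstuff(bit_stuff(data)) == data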
Protocols in Noiseless and Noisy Channel
The study of protocols is divided into two categories: those that can be applied
to channels with no noise or errors and those that can be applied to channels
with noise or errors. Although the first group of protocols cannot be applied in
real-world situations, they provide a foundation for protocols for noisy
channels.

Simplest Protocol
The Simplest Protocol has neither flow control nor error control (as noted above, it falls under the category of noiseless channels).

• The simplest protocol is basically a unidirectional protocol in which data frames travel in only one direction: from the sender to the receiver.
• In this protocol, the receiver can immediately handle any frame it receives, with a processing time small enough to be considered negligible.
• Basically, the data link layer of the receiver immediately removes the header from the frame and then hands over the data packet to the network layer, which also accepts the packet immediately.

Design of the Simplest Protocol

• The flow control is not needed by the Simplest Protocol. The data link
layer at the sender side mainly gets the data from the network layer and
then makes the frame out of data and sends it.
• On the Receiver site, the data link layer receives the frame from the
physical layer and then extracts the data from the frame, and then delivers
the data to its network layer.
• The Data link layers of both sender and receiver mainly provide
transmission services for their Network layers.
• The data link layer also uses the services provided by the physical layer
such as signaling, multiplexing, etc for the physical transmission of the
bits.

The procedure used by the data link layer

• Let us now take a look at the procedure used by the data link layer at both
sides (sender as well as the receiver).
• No frame is sent by the data link layer of the sender site until its network layer has a data packet to send.
• Similarly, the receiver site cannot deliver a data packet to its network
layer until a frame arrives.
Design of the Simplest Protocol
• The design of the simplest protocol is very simple as there is no error and
a flow control mechanism.
• The sender end (present at the data link layer) gets the data from the
network layer and converts the data into the form of frames so that it can
be easily transmitted.
• Now, at the receiver's end, the data frame is taken from the physical layer and the data link layer extracts the actual data from it (by removing the header from the data frame).
• The data link layers of the sender and receiver provide communication/transmission services for their network layers.
• The data link layers utilize the services provided by their physical layers for the physical transmission of bits.

• Sender-site and receiver-site algorithms:

Sender-site algorithm –
while(true)                          // repeat forever
{
    waitForEvent();                  // sleep until an event occurs
    if (Event(RequestToSend))        // there is a packet to send
    {
        GetData();
        MakeFrame();
        SendFrame();                 // send the frame
    }
}

• Receiver-site algorithm –

while(true)                              // repeat forever
{
    waitForEvent();                      // sleep until an event occurs
    if (Event(ArrivalNotification))      // a data frame has arrived
    {
        ReceiveFrame();
        ExtractData();
        DeliverData();                   // deliver data to the network layer
    }
}
The basic flow of the data frame

• The idea is very simple: the sender sends a sequence of data frames without worrying about the receiver.
• Whenever a send request comes from the network layer, the sender sends a data frame. Similarly, whenever an arrival notification comes from the physical layer, the receiver receives a data frame.

Important points related to the data transfer:

• The sender cannot send the data frame until the network layer has a data
packet to be transmitted.
• The receiver is always ready to receive data frames, and it is constantly
running, but no action takes place until notified by the physical layer.
• Similarly, the sender is always ready to send data frames, and it is constantly
running, but no action takes place until notified by the network layer.
Stop and Wait Protocol

• Stop and wait means that the sender sends one unit of data to the receiver and then stops.
• After sending the data, the sender waits until it receives the acknowledgment from the receiver.
• The stop and wait protocol is a flow control protocol; flow control is one of the services of the data link layer.
• It provides unidirectional data transmission, which means that data flows in only one direction at a time (either sending or receiving takes place).
• It provides a flow-control mechanism but does not provide any error control mechanism.

Primitives of Stop and Wait Protocol

The primitives of the stop and wait protocol are:


Sender side
Rule 1: Sender sends one data packet at a time.
Rule 2: Sender sends the next packet only when it receives the acknowledgment
of the previous packet.
Therefore, the idea of stop and wait protocol in the sender's side is very simple,
i.e., send one packet at a time, and do not send another packet before receiving
the acknowledgment.

Receiver side

Rule 1: Receive and then consume the data packet.


Rule 2: When the data packet is consumed, receiver sends the acknowledgment
to the sender.
Therefore, the idea of stop and wait protocol in the receiver's side is also very
simple, i.e., consume the packet, and once the packet is consumed, the
acknowledgment is sent. This is known as a flow control mechanism.
• If there is a sender and receiver, then sender sends the packet and that
packet is known as a data packet. The sender will not send the second
packet without receiving the acknowledgment of the first packet.
• The receiver sends the acknowledgment for the data packet that it has received. Once the acknowledgment is received, the sender sends the next packet. This process continues until all the packets have been sent.
• The main advantage of this protocol is its simplicity but it has some
disadvantages also. For example, if there are 1000 data packets to be sent,
then all the 1000 packets cannot be sent at a time as in Stop and Wait
protocol, one packet is sent at a time.

Disadvantages of Stop and Wait protocol

The following are the problems associated with a stop and wait protocol:
1. Problems occur due to lost data
Suppose the sender sends the data and the data is lost. The receiver is
waiting for the data for a long time. Since the data is not received by the
receiver, so it does not send any acknowledgment. Since the sender does not
receive any acknowledgment so it will not send the next packet. This problem
occurs due to the lost data.

In this case, two problems occur:

o Sender waits for an infinite amount of time for an acknowledgment.


o Receiver waits for an infinite amount of time for the data.

2. Problems occur due to lost acknowledgment

Suppose the sender sends the data and it has also been received by the
receiver. On receiving the packet, the receiver sends the acknowledgment. In
this case, the acknowledgment is lost in a network, so there is no chance for the
sender to receive the acknowledgment. There is also no chance for the sender to
send the next packet as in stop and wait protocol, the next packet cannot be sent
until the acknowledgment of the previous packet is received.

In this case, one problem occurs:

o Sender waits for an infinite amount of time for an acknowledgment.

3. Problem due to the delayed data or acknowledgment

Suppose the sender sends the data and it has also been received by the
receiver. The receiver then sends the acknowledgment but the acknowledgment
is received after the timeout period on the sender's side. Because the acknowledgment arrives late, it can be wrongly interpreted as the acknowledgment of some other data packet.
Stop and Wait ARQ (Automatic Repeat Request)
The above three problems are resolved by Stop and Wait ARQ (Automatic Repeat Request), which performs both error control and flow control.

• The sender sends the data frame with a sequence number. The sequence numbers alternate between 0 and 1.
• The sender also maintains a copy of the data frame that is being currently
sent so that if the ACK is not received then the sender can re-transmit the
frame.
• The sender can send only one frame at a time and the receiver can also
receive only one frame at a time.
• The stop and wait ARQ is a connection-oriented protocol.
• In stop and wait ARQ, a timeout timer is used.
• Stop and Wait ARQ has very low efficiency; it can be improved by increasing the window size. For better efficiency, the Go-Back-N and Selective Repeat protocols are used.
• Stop and Wait ARQ solves the three problems above, but it can cause serious performance issues, since the sender always waits for an acknowledgment even if it has the next packet ready to send; this is especially wasteful when the propagation delay is high.
• To solve this problem, we can send more than one packet at a time using a larger sequence-number space. This is done by the Sliding Window Protocols.
Working of Stop and Wait ARQ:
Sender A sends a data frame with sequence number 0. The receiver acknowledges it by requesting the frame with sequence number 1; A then sends the frame with sequence number 1, and the sequence numbers keep alternating between 0 and 1. A small simulation of this behaviour is sketched below.
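The following is a toy, single-process Python simulation of Stop-and-Wait ARQ. It assumes frames and acknowledgments are lost with some probability and that a loss simply triggers a timeout and retransmission; the loss probability, packet names, and in-process "channel" are illustrative assumptions, not a real network API.

import random

LOSS_PROB = 0.3          # assumed probability that a frame or ACK is lost
random.seed(1)

def unreliable_send(item):
    """Deliver item, or lose it with probability LOSS_PROB."""
    return None if random.random() < LOSS_PROB else item

def receiver(frame, expected_seq):
    """Accept the frame only if it carries the expected sequence number."""
    seq, data = frame
    if seq == expected_seq:
        print(f"receiver: accepted frame {seq} ({data})")
        return 1 - expected_seq, seq           # next expected seq, ACK for this frame
    return expected_seq, 1 - expected_seq      # duplicate frame: re-send the previous ACK

def sender(packets):
    seq, expected = 0, 0
    for data in packets:
        while True:                            # keep a copy of the frame until it is acknowledged
            frame = unreliable_send((seq, data))
            if frame is None:
                print(f"sender: frame {seq} lost -> timeout, retransmit")
                continue
            expected, ack = receiver(frame, expected)
            if unreliable_send(ack) is None:
                print(f"sender: ACK for frame {seq} lost -> timeout, retransmit")
                continue
            seq = 1 - seq                      # alternate the sequence number between 0 and 1
            break

sender(["pkt-A", "pkt-B", "pkt-C"])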

Sliding Window Protocol

• The sliding window is a technique for sending multiple frames at a time. It controls the data packets between two devices when reliable and in-order delivery of data frames is needed. It is also used in TCP (Transmission Control Protocol).
• In this technique, each frame is sent with a sequence number. The sequence numbers are used to find the missing data at the receiver's end. The sliding window technique also uses the sequence numbers to avoid duplicate data.

Types of Sliding Window Protocol


Sliding window protocol has two types:

1. Go-Back-N ARQ
2. Selective Repeat ARQ

Go-Back-N ARQ
• Before understanding the working of Go-Back-N ARQ, we first look at
the sliding window protocol. As we know that the sliding window
protocol is different from the stop-and-wait protocol.
• In the stop-and-wait protocol, the sender can send only one frame at a
time and cannot send the next frame without receiving the
acknowledgment of the previously sent frame, whereas, in the case of
sliding window protocol, the multiple frames can be sent at a time.

• In Go-Back-N ARQ, N is the sender's window size. Go-Back-3, for example, means that three frames can be sent at a time before the acknowledgment from the receiver is expected.
• It uses the principle of protocol pipelining in which the multiple frames
can be sent before receiving the acknowledgment of the first frame. If we
have five frames and the concept is Go-Back-3, which means that the
three frames can be sent, i.e., frame no 1, frame no 2, frame no 3 can be
sent before expecting the acknowledgment of frame no 1.
• In Go-Back-N ARQ, the frames are numbered sequentially as Go-Back-N
ARQ sends the multiple frames at a time that requires the numbering
approach to distinguish the frame from another frame, and these numbers
are known as the sequential numbers.
• The number of frames that can be sent at a time totally depends on the
size of the sender's window. So, we can say that 'N' is the number of
frames that can be sent at a time before receiving the acknowledgment
from the receiver.
• If the acknowledgment of a frame is not received within an agreed-upon
time period, then all the frames available in the current window will be
retransmitted. Suppose we have sent the frame no 5, but we didn't receive
the acknowledgment of frame no 5, and the current window is holding
three frames, then these three frames will be retransmitted.
• The sequence number of the outbound frames depends upon the size of
the sender's window. Suppose the sender's window size is 2, and we have
ten frames to send, then the sequence numbers will not be
1,2,3,4,5,6,7,8,9,10. Let's understand through an example.
• N is the sender's window size.
• If the size of the sender's window is 4 then the sequence number will be
0,1,2,3,0,1,2,3,0,1,2, and so on.
• The number of bits in the sequence number is 2, which generates the binary sequence 00, 01, 10, 11 (a quick sketch of this wrap-around numbering is given below).
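A quick illustration of the wrap-around numbering (the frame count of 11 is just an example value):

bits = 2                         # size of the sequence-number field
print([frame % (2 ** bits) for frame in range(11)])
# [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2]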

Working of Go-Back-N ARQ


• Suppose there are a sender and a receiver, and let's assume that there are
11 frames to be sent. These frames are represented as
0,1,2,3,4,5,6,7,8,9,10, and these are the sequence numbers of the frames.
• Mainly, the sequence number is decided by the sender's window size.
But, for the better understanding, we took the running sequence numbers,
i.e., 0,1,2,3,4,5,6,7,8,9,10. Let's consider the window size as 4, which
means that the four frames can be sent at a time before expecting the
acknowledgment of the first frame.
• Step 1: Firstly, the sender will send the first four frames to the receiver,
i.e., 0,1,2,3, and now the sender is expected to receive the
acknowledgment of the 0th frame.
Let's assume that the receiver has sent the acknowledgment for frame 0 and the sender has successfully received it.
The sender will then send the next frame, i.e., frame 4, and the window slides to contain four frames (1,2,3,4).
• The receiver will then send the acknowledgment for the frame no 1. After
receiving the acknowledgment, the sender will send the next frame, i.e.,
frame no 5, and the window will slide having four frames (2,3,4,5).

• Now, let's assume that the receiver does not acknowledge frame no 2; either the frame is lost or the acknowledgment is lost. Instead of sending frame no 6, the sender goes back to 2, which is the first frame of the current window, and retransmits all the frames in the current window, i.e., 2,3,4,5.
Go-Back-N ARQ protocol is also known as Go-Back-N Automatic Repeat
Request. It is a data link layer protocol that uses a sliding window method.

• In Go-Back-N, N determines the sender's window size, and the size of the receiver's window is always 1.
• It does not consider the corrupted frames and simply discards them.
• It does not accept the frames which are out of order and discards them.
• If the sender does not receive the acknowledgment, all the frames in the current window are retransmitted, as illustrated in the sketch below.
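The following Python sketch simulates the Go-Back-N sender described above, with window size N = 4, cumulative acknowledgments, and one frame assumed lost on its first transmission. The loss scenario and the inline "receiver" loop are simplifying assumptions for illustration only.

N = 4                               # sender's window size
frames = list(range(11))            # frames 0..10, as in the example above
lost = {2}                          # assume frame 2 is lost on its first transmission

base, next_seq = 0, 0
while base < len(frames):
    # send every frame that fits in the current window
    while next_seq < base + N and next_seq < len(frames):
        print("send frame", next_seq)
        next_seq += 1

    # simulate the receiver: it only accepts frames that arrive in order
    delivered = base
    while delivered < next_seq and delivered not in lost:
        delivered += 1              # cumulative ACK advances up to 'delivered'

    if delivered == base:           # no new ACK arrived before the timeout
        print("timeout: go back to frame", base)
        lost.discard(base)          # the retransmission succeeds this time
        next_seq = base             # resend the whole outstanding window
    else:
        base = delivered            # slide the window forward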

Selective Repeat ARQ


• Selective Repeat ARQ is also known as the Selective Repeat Automatic
Repeat Request.
• It is a data link layer protocol that uses a sliding window method.
• The Go-Back-N ARQ protocol works well when errors are few. But if the error rate is high, a lot of bandwidth is wasted on retransmitting frames. So, we use the Selective Repeat ARQ protocol.
• In this protocol, the size of the sender window is always equal to the size
of the receiver window. The size of the sliding window is always greater
than 1.
• If the receiver receives a corrupt frame, it does not directly discard it. It
sends a negative acknowledgment to the sender.
• The sender retransmits that frame as soon as it receives the negative acknowledgment.
• There is no waiting for any time-out to retransmit that frame. A small sketch of the receiver-side behaviour is given below.
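In this Python sketch of the receiver side, out-of-order frames are buffered instead of being discarded, a negative acknowledgment is sent for the missing frame, and buffered frames are delivered as soon as the gap is filled. The arrival order used here is an assumed scenario, not output from a real protocol stack.

arrivals = [0, 1, 3, 4, 2, 5]        # frame 2 arrives late
buffer, expected, delivered = {}, 0, []

for seq in arrivals:
    if seq == expected:
        delivered.append(seq)
        expected += 1
        while expected in buffer:    # flush any buffered in-order frames
            delivered.append(buffer.pop(expected))
            expected += 1
    else:
        print(f"send NAK {expected} (buffering out-of-order frame {seq})")
        buffer[seq] = seq            # keep the frame; do not discard it as Go-Back-N would

print("delivered in order:", delivered)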
Difference between Go-Back-N ARQ and Selective Repeat ARQ

• Go-Back-N ARQ: if a frame is corrupted or lost, all subsequent frames have to be sent again. Selective Repeat ARQ: only the frame that is corrupted or lost is sent again.
• Go-Back-N ARQ: with a high error rate, it wastes a lot of bandwidth. Selective Repeat ARQ: much less bandwidth is wasted.
• Go-Back-N ARQ: it is less complex. Selective Repeat ARQ: it is more complex, because it has to do sorting and searching as well, and it also requires more storage.
• Go-Back-N ARQ: it does not require sorting. Selective Repeat ARQ: sorting is done to get the frames in the correct order.
• Go-Back-N ARQ: it does not require searching. Selective Repeat ARQ: a search operation is performed.
• Go-Back-N ARQ: it is used more. Selective Repeat ARQ: it is used less because it is more complex.


Flow Control :
It is an important function of the Data Link Layer. It refers to a set of
procedures that tells the sender how much data it can transmit before waiting
for acknowledgment from the receiver.
• Purpose of Flow Control :
Any receiving device has a limited speed at which it can process incoming
data and also a limited amount of memory to store incoming data. If the
source is sending the data at a faster rate than the capacity of the receiver,
there is a possibility of the receiver being swamped.
• The receiver will keep losing some of the frames simply because they are
arriving too quickly and the buffer is also getting filled up.
• This will generate waste frames on the network. Therefore, the receiving
device must have some mechanism to inform the sender to send fewer
frames or stop transmission temporarily. In this way, flow control will
control the rate of frame transmission to a value that can be handled by the
receiver.

Example – Stop & Wait Protocol

Error Control :
The error control function of the data link layer detects the errors in
transmitted frames and re-transmits all the erroneous frames.
Purpose of Error Control:
The error control function of the data link layer helps in dealing with data frames that are damaged in transit, data frames lost in transit, and acknowledgment frames that are lost in transmission. The method used for error control is called Automatic Repeat Request (ARQ), which is used on noisy channels.
Example – Stop & Wait ARQ and Sliding Window ARQ
Difference between Flow Control and Error Control:

1. Flow control is meant only for the transmission of data from sender to receiver. Error control is meant for the transmission of error-free data from sender to receiver.
2. For flow control there are two approaches: feedback-based flow control and rate-based flow control. For error control, the approaches to detect errors are checksum, cyclic redundancy check (CRC), and parity checking; the approaches to correct errors are Hamming codes, binary convolutional codes, Reed-Solomon codes, and low-density parity-check (LDPC) codes.
3. Flow control prevents the loss of data and avoids overrunning of the receive buffers. Error control is used to detect and correct the errors that occur in the data.
4. Examples of flow control techniques are the Stop & Wait Protocol and the Sliding Window Protocol. Examples of error control techniques are Stop & Wait ARQ and Sliding Window ARQ (Go-Back-N ARQ, Selective Repeat ARQ).

Transmission Modes
Transmission mode means transferring data between two devices. It is also
known as a communication mode. Buses and networks are designed to allow
communication to occur between individual devices that are interconnected.
There are three types of transmission mode:-
1. Simplex Mode –
In Simplex mode, the communication is unidirectional, as on a one-way street.
Only one of the two devices on a link can transmit, the other can only receive.
The simplex mode can use the entire capacity of the channel to send data in
one direction.
Example: Keyboard and traditional monitors. The keyboard can only introduce
input, the monitor can only give the output.

Advantages:
• Simplex mode is the easiest and most reliable mode of communication.
• It is the most cost-effective mode, as it only requires one communication
channel.
• There is no need for coordination between the transmitting and receiving
devices, which simplifies the communication process.
• Simplex mode is particularly useful in situations where feedback or
response is not required, such as broadcasting or surveillance.
Disadvantages:
• Only one-way communication is possible.
• There is no way to verify if the transmitted data has been received
correctly.
• Simplex mode is not suitable for applications that require bidirectional
communication.
2. Half-Duplex Mode –
In half-duplex mode, each station can both transmit and receive, but not at the
same time. When one device is sending, the other can only receive, and vice
versa. The half-duplex mode is used in cases where there is no need for
communication in both directions at the same time. The entire capacity of the
channel can be utilized for each direction.
Example: Walkie-talkie in which message is sent one at a time and messages
are sent in both directions.
Channel capacity=Bandwidth * Propagation Delay

Advantages:
• Half-duplex mode allows for bidirectional communication, which is useful
in situations where devices need to send and receive data.
• It is a more efficient mode of communication than simplex mode, as the
channel can be used for both transmission and reception.
• Half-duplex mode is less expensive than full-duplex mode, as it only
requires one communication channel.
Disadvantages:
• Half-duplex mode is less reliable than Full-Duplex mode, as both devices
cannot transmit at the same time.
• There is a delay between transmission and reception, which can cause
problems in some applications.
• There is a need for coordination between the transmitting and receiving
devices, which can complicate the communication process.
3. Full-Duplex Mode –
In full-duplex mode, both stations can transmit and receive simultaneously. Signals going in one direction share the capacity of the link with signals going in the other direction. This sharing can occur in two ways:
• Either the link must contain two physically separate transmission paths, one
for sending and the other for receiving.
• Or the capacity is divided between signals traveling in both directions.

• Full-duplex mode is used when communication in both directions is required all the time. The capacity of the channel, however, must be divided between the two directions.
Example: Telephone Network in which there is communication between
two persons by a telephone line, through which both can talk and listen at
the same time.

Advantages:
• Full-duplex mode allows for simultaneous bidirectional communication,
which is ideal for real-time applications such as video conferencing or
online gaming.
• It is the most efficient mode of communication, as both devices can
transmit and receive data simultaneously.
• Full-duplex mode provides a high level of reliability and accuracy, as there
is no need for error correction mechanisms.
Disadvantages:
• Full-duplex mode is the most expensive mode, as it requires two
communication channels.
• It is more complex than simplex and half-duplex modes, as it requires two
physically separate transmission paths or a division of channel capacity.
• Full-duplex mode may not be suitable for all applications, as it requires a
high level of bandwidth and may not be necessary for some types of
communication.
Connection-Oriented vs Connectionless Service

A data communication network is a telecommunication network that allows two or more computers to send and receive data across the same or distinct networks.
Connection-Oriented Service and Connectionless Service are the two methods
for establishing a connection before delivering data from one device to another.
Connection-oriented service entails the establishment and termination of a
connection for the transmission of data between two or more devices.
In contrast, connectionless service does not need the establishment of any
connection or termination procedure in order to transport data across a network.

Connection-Oriented Service

A connection-oriented service is a network service that was designed and developed after the telephone system. A connection-oriented service is used to create an end-to-end connection between the sender and the receiver before transmitting data over the same or different networks.
In connection-oriented service, packets are transmitted to the receiver in the
same order the sender has sent them.
It uses a handshake method that creates a connection between the user and
sender for transmitting the data over the network. Hence it is also known as a
reliable network service.
Suppose a sender wants to send data to the receiver. First, the sender sends a request packet to the receiver in the form of a SYN packet.
After that, the receiver responds to the sender's request with a SYN-ACK packet. This confirms that the receiver is ready to start communication with the sender. Now the sender can send the message or data to the receiver.

Similarly, a receiver can respond or send the data to the sender in the form of
packets. After successfully exchanging or transmitting data, a sender can
terminate the connection by sending a signal to the receiver. In this way, we can
say that it is a reliable network service.
TCP (Transmission Control Protocol) is a connection-oriented protocol that
allows communication between two or more computer devices by establishing
connections in the same or different networks.
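As a small illustration of connection-oriented service, the Python sketch below opens a TCP connection on the local machine: connect() triggers the SYN / SYN-ACK / ACK handshake, data then flows over the established connection, and closing the sockets tears the connection down. The address 127.0.0.1 and port 5050 are arbitrary example values.

import socket, threading

srv = socket.create_server(("127.0.0.1", 5050))     # bind and listen for connections

def server():
    conn, _ = srv.accept()                          # handshake has been completed by the OS here
    with conn:
        print("server got:", conn.recv(1024).decode())
        conn.sendall(b"ack from server")

threading.Thread(target=server, daemon=True).start()

# client side: create_connection() sends SYN and waits for the SYN-ACK before returning
with socket.create_connection(("127.0.0.1", 5050)) as client:
    client.sendall(b"hello over a reliable, ordered connection")
    print("client got:", client.recv(1024).decode())

srv.close()                                         # release the listening socket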
Benefits of Connection-Oriented Services:
• Connection-Oriented Services are reliable.
• There is no duplication of data packets.
• There are no chances of Congestion.
• These are Suitable for long connections.
• Sequencing of data packets is guaranteed.
Disadvantages

Drawbacks of Connection-Oriented Service are as follows:

• Allocation of resources is mandatory before communication can begin.
• Communication is slower, since extra time is taken to establish and release the connection.
• In the case of network congestion or router failures, there is no alternative way to continue the communication.
