2nd Module
Multiplexing is a technique used to combine multiple data streams and send them over a
single medium. The process of combining the data streams is known as multiplexing, and the
hardware used for multiplexing is known as a multiplexer.
Multiplexing is achieved by using a device called a Multiplexer (MUX) that combines n input
lines to generate a single output line. Multiplexing therefore follows a many-to-one model: n input lines
and one output line.
Why Multiplexing?
o More than one signal can be sent over a single medium.
o The bandwidth of a medium can be utilized effectively.
o When multiple signals share a common medium, there is a possibility of collision.
Multiplexing is used to avoid such collisions.
Concept of Multiplexing
Multiplexing techniques can be classified as Frequency Division Multiplexing (FDM), Wavelength Division Multiplexing (WDM), and Time Division Multiplexing (TDM).
Synchronous TDM
o Synchronous TDM is a technique in which a time slot is preassigned to every device.
o In Synchronous TDM, each device is given a time slot whether or not it has data to
send.
o If the device does not have any data, then the slot remains empty.
o In Synchronous TDM, signals are sent in the form of frames. Time slots are organized
in the form of frames. If a device does not have data for a particular time slot, then
the empty slot will be transmitted.
o The most popular Synchronous TDM systems are T-1 multiplexing, ISDN multiplexing, and
SONET multiplexing.
o If there are n devices, then there are n slots.
o The capacity of the channel is not fully utilized, as the empty slots, which carry no
data, are also transmitted. In the above figure, the first frame is completely filled, but in
the last two frames, some slots are empty. Therefore, we can say that the capacity of
the channel is not utilized efficiently.
o The speed of the transmission medium should be greater than the total speed of the
input lines. An alternative approach to Synchronous TDM is Asynchronous Time
Division Multiplexing.
Asynchronous TDM
o An asynchronous TDM is also known as Statistical TDM.
o An asynchronous TDM is a technique in which time slots are not fixed as in the case of
Synchronous TDM. Time slots are allocated to only those devices which have the data
to send. Therefore, we can say that Asynchronous Time Division multiplexor transmits
only the data from active workstations.
o An asynchronous TDM technique dynamically allocates the time slots to the devices.
o In Asynchronous TDM, total speed of the input lines can be greater than the capacity
of the channel.
o Asynchronous Time Division multiplexor accepts the incoming data streams and
creates a frame that contains only data with no empty slots.
o In Asynchronous TDM, each slot contains an address part that identifies the source of
the data.
o The difference between Asynchronous TDM and Synchronous TDM is that many slots
in Synchronous TDM are unutilized, but in Asynchronous TDM, slots are fully utilized.
This leads to smaller transmission time and efficient utilization of the channel capacity.
o In Synchronous TDM, if there are n sending devices, then there are n time slots. In
Asynchronous TDM, if there are n sending devices, then there are m time slots where
m is less than n (m<n).
o The number of slots in a frame depends on the statistical analysis of the number of
input lines.
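The contrast between the two framing schemes can be sketched as follows; the device queues and the slot count m are hypothetical examples:

```python
# Sketch contrasting Synchronous and Asynchronous (Statistical) TDM framing.

def synchronous_tdm(queues):
    """Every device owns a fixed slot; empty slots are still transmitted."""
    frames = []
    while any(queues):
        frames.append([q.pop(0) if q else None for q in queues])  # None = empty slot
    return frames

def statistical_tdm(queues, m):
    """Only devices with data get a slot; each slot carries an (address, data)
    pair so the demultiplexer can route it. Typically m < n devices."""
    frames = []
    while any(queues):
        frame = []
        for addr, q in enumerate(queues):
            if q:
                frame.append((addr, q.pop(0)))
            if len(frame) == m:
                break
        frames.append(frame)
    return frames

sync = synchronous_tdm([["A1", "A2"], [], ["C1"]])
stat = statistical_tdm([["A1", "A2"], [], ["C1"]], m=2)
print(sync)  # [['A1', None, 'C1'], ['A2', None, None]] -> empty slots transmitted
print(stat)  # [[(0, 'A1'), (2, 'C1')], [(0, 'A2')]] -> every slot carries data + address
```

Note how the synchronous frames still carry `None` slots for the idle device, while the statistical frames are always full and need an address field per slot.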
o In the OSI model, the data link layer is the 6th layer from the top and the 2nd layer from the
bottom.
o The communication channels that connect adjacent nodes are known as links, and
in order to move a datagram from the source to the destination, the datagram must be
moved across each individual link.
o The main responsibility of the Data Link Layer is to transfer the datagram across an
individual link.
o The Data link layer protocol defines the format of the packet exchanged across the
nodes as well as the actions such as Error detection, retransmission, flow control, and
random access.
o The Data Link Layer protocols are Ethernet, token ring, FDDI and PPP.
Error Detection
An Error is a situation when the message received at the receiver end is not identical to the
message transmitted.
Types Of Errors
1. Single-Bit Error:
❖ Only one bit of a given data unit is changed from 1 to 0 or from 0 to 1.
❖ In the above figure, the transmitted message is corrupted by a single bit, i.e., a 0 bit is
changed to 1.
❖ A Single-Bit Error is unlikely in Serial Data Transmission. For example, if the sender
sends data at 10 Mbps, each bit lasts only 0.1 μs, and for a single-bit error to occur
the noise must last no more than 0.1 μs, which is rare since noise normally lasts longer.
❖ Single-Bit Errors mainly occur in Parallel Data Transmission. For example, if eight wires
are used to send the eight bits of a byte and one of the wires is noisy, then a single bit is
corrupted per byte.
2. Burst Error:
❖ A Burst Error occurs when two or more bits are changed from 0 to 1 or from 1 to 0.
❖ The Burst Error is determined from the first corrupted bit to the last corrupted bit.
❖ The duration of noise in Burst Error is more than the duration of noise in Single-Bit.
❖ Burst Errors are most likely to occur in Serial Data Transmission.
❖ The number of affected bits depends on the duration of the noise and data rate.
Error Detecting Techniques:
o Single parity check
o Two-dimensional parity check
o Checksum
o Cyclic redundancy check
Checksum
Checksum Generator
A Checksum is generated at the sending side. Checksum generator subdivides the data into
equal segments of n bits each, and all these segments are added together by using one's
complement arithmetic. The sum is complemented and appended to the original data, known
as checksum field. The extended data is transmitted across the network.
Suppose L is the total sum of the data segments; then the checksum is the one's complement of L (i.e., ~L).
A Checksum is verified at the receiving side. The receiver subdivides the incoming data into
equal segments of n bits each, and all these segments are added together, and then this sum
is complemented. If the complement of the sum is zero, then the data is accepted otherwise
data is rejected.
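The generator and verifier described above can be sketched as follows; the 8-bit segment width and the sample segments are assumptions made for illustration:

```python
# One's complement checksum sketch: sum segments with end-around carry,
# complement the sum at the sender, and check for an all-ones sum at the
# receiver (whose complement is then zero).

def ones_complement_sum(segments, bits=8):
    mask = (1 << bits) - 1
    total = 0
    for s in segments:
        total += s
        total = (total & mask) + (total >> bits)  # wrap carries around
    return total

def make_checksum(segments, bits=8):
    # Sender side: complement of the one's complement sum.
    return ones_complement_sum(segments, bits) ^ ((1 << bits) - 1)

def verify(segments_with_checksum, bits=8):
    # Receiver side: complement of the total sum must be zero,
    # i.e. the sum itself must be all ones.
    return ones_complement_sum(segments_with_checksum, bits) == (1 << bits) - 1

data = [0b10011001, 0b11100010, 0b00100100]
cs = make_checksum(data)
assert verify(data + [cs])            # clean data is accepted
assert not verify(data + [cs ^ 0b1])  # a corrupted bit is detected
```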
Cyclic Redundancy Check (CRC)
In CRC, redundant bits are appended so that the resulting bit string becomes exactly
divisible by a predetermined binary divisor, using modulo-2 division.
If the remainder of this division is zero, the data has no error and is accepted.
If the remainder of this division is not zero, the data contains an error and is
discarded.
CRC Checker
o The functionality of the CRC checker is similar to the CRC generator.
o When the string 11100111 is received at the receiving end, the CRC checker
performs modulo-2 division.
o The string is divided by the same divisor, i.e., 1001.
o In this case, CRC checker generates the remainder of zero. Therefore, the data is
accepted.
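The modulo-2 division the CRC checker performs can be sketched as follows, using the received string 11100111 and the divisor 1001 from the example:

```python
# Modulo-2 (XOR, no borrow) division used by CRC. The remainder has one
# bit fewer than the divisor; a zero remainder means the data is accepted.

def mod2_remainder(bits, divisor):
    bits = list(bits)
    d = len(divisor)
    for i in range(len(bits) - d + 1):
        if bits[i] == "1":
            for j in range(d):  # XOR the divisor into the current window
                bits[i + j] = "0" if bits[i + j] == divisor[j] else "1"
    return "".join(bits[-(d - 1):])

print(mod2_remainder("11100111", "1001"))  # '000' -> data accepted
```

Flipping any single bit of the received string yields a non-zero remainder, so the checker would reject it.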
Error Correction
Error Correction codes are used to detect and correct the errors when data is transmitted
from the sender to the receiver.
o Backward error correction: Once the error is discovered, the receiver requests the
sender to retransmit the entire data unit.
o Forward error correction: In this case, the receiver uses the error-correcting code
which automatically corrects the errors.
A single additional bit can detect the error, but cannot correct it.
For correcting errors, one has to know the exact position of the error. For example, to
correct a single-bit error in a seven-bit data unit, the error correction code must determine
which one of the seven bits is in error. To achieve this, we have to add some redundant bits.
Suppose r is the number of redundant bits and d is the total number of data bits. The
number of redundant bits r can be calculated by using the formula:
2^r >= d + r + 1
The value of r is calculated by using the above formula. For example, if the value of d is 4, then
the smallest value of r that satisfies the above relation is 3.
To determine the position of the bit in error, R.W. Hamming developed the Hamming code,
which can be applied to a data unit of any length and uses the relationship between data
bits and redundant bits.
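The relation 2^r >= d + r + 1 can be checked with a short loop:

```python
# Smallest number of redundant bits r for d data bits, from 2^r >= d + r + 1.

def redundant_bits(d):
    r = 0
    while 2 ** r < d + r + 1:
        r += 1
    return r

print(redundant_bits(4))  # 3  (2^3 = 8 >= 4 + 3 + 1)
print(redundant_bits(7))  # 4  (2^4 = 16 >= 7 + 4 + 1)
```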
Hamming Code:
Hamming code is a set of error-correction codes that can be used to detect and correct
the errors that can occur when the data is moved or stored from the sender to the
receiver. It is a technique developed by R.W. Hamming for error correction.
Parity bit: a bit appended to the original binary data so that the total number of 1s in the
data becomes even or odd.
Even parity: To check for even parity, if the total number of 1s is even, then the value of the
parity bit is 0. If the total number of 1s occurrences is odd, then the value of the parity bit is
1, making the total count of occurrences of 1’s an even number.
Odd Parity: To check for odd parity, if the total number of 1s is even, then the value of the
parity bit is 1, making the total count of occurrences of 1's an odd number. If the total number
of 1s is odd, then the value of the parity bit is 0.
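Both parity rules can be sketched as follows; the 4-bit sample data is hypothetical:

```python
# Parity bit computation for a string of data bits.

def parity_bit(bits, even=True):
    ones = bits.count("1")
    if even:
        # Even parity: make the total number of 1s (data + parity) even.
        return "0" if ones % 2 == 0 else "1"
    # Odd parity: make the total number of 1s odd.
    return "1" if ones % 2 == 0 else "0"

print(parity_bit("1011", even=True))   # '1' -> total count of 1s becomes 4 (even)
print(parity_bit("1011", even=False))  # '0' -> total count of 1s stays 3 (odd)
```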
Algorithm of Hamming code:
Hamming Code is simply the use of extra parity bits to allow the identification of an error.
• Write the bit positions starting from 1 in binary form (1, 10, 11, 100, etc.).
• All the bit positions that are a power of 2 are marked as parity bits (1, 2, 4, 8, etc.).
• All the other bit positions are marked as data bits.
• Each data bit is included in a unique set of parity bits, as determined by its bit
position in binary form:
a. Parity bit 1 covers all the bit positions whose binary representation includes a 1
in the least significant position (1, 3, 5, 7, 9, 11, etc.).
b. Parity bit 2 covers all the bit positions whose binary representation includes a 1
in the second position from the least significant bit (2, 3, 6, 7, 10, 11, etc.).
c. Parity bit 4 covers all the bit positions whose binary representation includes a 1
in the third position from the least significant bit (4–7, 12–15, 20–23, etc.).
d. Parity bit 8 covers all the bit positions whose binary representation includes a 1
in the fourth position from the least significant bit (8–15, 24–31, 40–47, etc.).
e. In general, each parity bit covers all bits where the bitwise AND of the parity
position and the bit position is non-zero.
• Since we check for even parity, set a parity bit to 1 if the total number of ones in
the positions it checks is odd.
• Set a parity bit to 0 if the total number of ones in the positions it checks is even.
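A minimal sketch of these steps, assuming even parity and a hypothetical 7-bit data word. Recomputing each parity check at the receiver gives a syndrome, the sum of the failing parity positions, which is the 1-based position of a single-bit error:

```python
# Hamming encoder/checker sketch (even parity, 1-based positions;
# parity bits sit at the power-of-two positions).

def hamming_encode(data_bits):
    d = len(data_bits)
    r = 0
    while 2 ** r < d + r + 1:     # 2^r >= d + r + 1
        r += 1
    n = d + r
    code = [0] * (n + 1)          # index 0 unused; positions 1..n
    it = iter(data_bits)
    for pos in range(1, n + 1):
        if pos & (pos - 1):       # not a power of two -> data position
            code[pos] = next(it)
    for p in (2 ** i for i in range(r)):
        ones = sum(code[pos] for pos in range(1, n + 1) if pos & p)
        code[p] = ones % 2        # even parity over the covered positions
    return code[1:]

def hamming_syndrome(code):
    """0 if no single-bit error, else the 1-based error position."""
    n = len(code)
    pos, p = 0, 1
    while p <= n:
        ones = sum(code[i - 1] for i in range(1, n + 1) if i & p)
        if ones % 2:              # this parity check fails
            pos += p
        p <<= 1
    return pos

code = hamming_encode([1, 0, 1, 1, 0, 0, 1])   # hypothetical data word
assert hamming_syndrome(code) == 0             # no error detected
code[5] ^= 1                                   # corrupt position 6
assert hamming_syndrome(code) == 6             # syndrome points at the error
```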
Determining the Position of Redundant Bits
Redundant bits are placed at positions that correspond to powers of 2. As in the
above example:
• The number of data bits = 7
• The number of redundant bits = 4
• The total number of bits = 7 + 4 = 11
• The redundant bits are placed at positions corresponding to powers of 2: 1, 2, 4,
and 8
Determining the Parity bits According to Even Parity
R1 is calculated using a parity check at all the bit positions whose binary representation
includes a 1 in the least significant position. R1: bits 1, 3, 5, 7, 9, 11. To find the redundant
bit R1, we check for even parity. Since the total number of 1's in all the bit positions
corresponding to R1 is odd, the value of R1 (the parity bit's value) = 1.
R2 is calculated using a parity check at all the bit positions whose binary representation
includes a 1 in the second position from the least significant bit. R2: bits 2, 3, 6, 7, 10, 11. To
find the redundant bit R2, we check for even parity. Since the total number of 1's in all the
bit positions corresponding to R2 is odd, the value of R2 (the parity bit's value) = 1.
R4 is calculated using a parity check at all the bit positions whose binary representation
includes a 1 in the third position from the least significant bit. R4: bits 4, 5, 6, 7. To find the
redundant bit R4, we check for even parity. Since the total number of 1's in all the bit
positions corresponding to R4 is even, the value of R4 (the parity bit's value) = 0.
R8 is calculated using a parity check at all the bit positions whose binary representation
includes a 1 in the fourth position from the least significant bit. R8: bits 8, 9, 10, 11. To find
the redundant bit R8, we check for even parity. Since the total number of 1's in all the bit
positions corresponding to R8 is even, the value of R8 (the parity bit's value) = 0.
Features of Hamming Code:
Error Detection and Correction: Hamming code is designed to detect and correct single-
bit errors that may occur during the transmission of data.
Redundancy: Hamming code uses redundant bits to add additional information to the
data being transmitted.
Data Link Control is the service provided by the Data Link Layer to provide reliable data
transfer over the physical medium. For example, in the half-duplex transmission mode, only
one device can transmit data at a time. If both devices at the ends of the link transmit
data simultaneously, the frames will collide, leading to loss of information. The Data
link layer provides coordination among the devices so that no collision occurs.
o Line discipline
o Flow Control
o Error Control
Line Discipline
o Line Discipline is a functionality of the Data link layer that provides the coordination
among the link systems. It determines which device can send, and when it can send
the data.
o ENQ/ACK
o Poll/select
ENQ/ACK
In ENQ/ACK (Enquiry/Acknowledgement), used on peer-to-peer links, the initiator sends an
enquiry (ENQ) frame to ask whether the receiver is ready; the receiver replies with an
acknowledgement (ACK) if it is ready to receive, after which the data transfer begins.
Poll/Select
The Poll/Select method of line discipline works with those topologies where one device is
designated as a primary station, and other devices are secondary stations.
Flow Control
o It is a set of procedures that tells the sender how much data it can transmit before the
data overwhelms the receiver.
o The receiving device has limited speed and limited memory to store the data.
Therefore, the receiving device must be able to inform the sending device to stop the
transmission temporarily before the limits are reached.
o It requires a buffer, a block of memory for storing frames until they are
processed.
o Stop-and-wait
o Sliding window
Stop-and-wait
o In the Stop-and-wait method, the sender waits for an acknowledgement after every
frame it sends.
o Only when an acknowledgement is received is the next frame sent. This process of
alternately sending a frame and waiting continues until the sender transmits the
EOT (End of Transmission) frame.
Advantage of Stop-and-wait
The Stop-and-wait method is simple as each frame is checked and acknowledged before the
next frame is sent.
Problems with Stop-and-wait:
o If the data frame is lost in transit, the sender keeps waiting for an acknowledgement
that never arrives.
o If the acknowledgement is lost, the sender cannot tell whether the frame was
received; without a timer, both sides wait indefinitely.
o Waiting for an acknowledgement after every frame makes the method slow and
leaves the channel idle for long periods.
Sliding Window
o The Sliding Window is a method of flow control in which a sender can transmit
several frames before getting an acknowledgement.
o In Sliding Window flow control, multiple frames can be sent one after another, due to
which the capacity of the communication channel can be utilized efficiently.
o The window can hold frames at either end, and it provides an upper limit on the
number of frames that can be transmitted before an acknowledgement.
o The size of the window is n-1; therefore, a maximum of n-1 frames can be
sent before an acknowledgement.
Sender Window
o At the beginning of a transmission, the sender window contains n-1 frames, and as
they are sent out, the left boundary moves inward, shrinking the size of the window.
For example, if the size of the window is w and three frames are sent out, then the
number of frames left in the sender window is w-3.
o Once the ACK has arrived, the sender window expands by a number equal to the
number of frames acknowledged by the ACK.
Receiver Window
o At the beginning of transmission, the receiver window does not contain n frames, but
it contains n-1 spaces for frames.
o When the new frame arrives, the size of the window shrinks.
o The receiver window does not represent the number of frames received, but it
represents the number of frames that can be received before an ACK is sent. For
example, the size of the window is w, if three frames are received then the number of
spaces available in the window is (w-3).
o Once the acknowledgement is sent, the receiver window expands by the number
equal to the number of frames acknowledged.
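The shrinking and expanding of the sender window can be sketched as a toy model; the window size w = 7 is hypothetical:

```python
# Toy model of sender-side sliding-window bookkeeping: sending frames
# shrinks the window, an ACK expands it again.

class SenderWindow:
    def __init__(self, w):
        self.w = w            # window size (the n-1 of the text)
        self.available = w    # frames that may still be sent unacknowledged

    def send(self, count):
        sent = min(count, self.available)
        self.available -= sent        # window shrinks as frames go out
        return sent

    def ack(self, count):
        # Window expands by the number of frames acknowledged, capped at w.
        self.available = min(self.w, self.available + count)

win = SenderWindow(7)
win.send(3)
assert win.available == 4   # w - 3 frames left, as in the example above
win.ack(3)
assert win.available == 7   # the ACK restores the full window
```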
Error Control
Error Control is a technique of error detection and retransmission.
Stop-and-wait ARQ:
• Used in Connection-oriented communication.
• It offers error and flow control
• It is used in the Data Link and Transport Layers
• Stop-and-Wait ARQ implements the Sliding Window Protocol concept with a
window size of 1
Note:
• The sliding window protocol uses full-duplex transmission of data frames.
• The sliding window protocol is also used in TCP, the Transmission Control Protocol.
1. Go-Back-N ARQ: The Go-Back-N ARQ is one of the Sliding Window Protocol strategies that
is used where reliable in-order delivery of the data packets is required. In the Go-Back-N
ARQ, we use the concept of a timer. When the receiver receives the correct frame, it sends
back an acknowledgment or ACK. Once the sender gets an ACK for a data frame, it shifts its
window forward and sends the next frame. If the ACK is not received within the specified
period then all the data frames starting from the lost frame are retransmitted.
In the Go-Back-N ARQ, the sender's window size is taken as N but the receiver's window size
is always 1. Hence, the sender can send N data frames at a time but the receiver can receive
only 1 frame at a time. The Go-Back-N ARQ is used for noisy channels or links and it
manages the flow and error control between the sender and the receiver.
• In the Go-Back-N ARQ, we retransmit all the data frames starting from the lost or
damaged frames.
• The ACK has the sequence number of the frame that helps the sender to identify the
lost frame.
• The sender sets a timer for each frame so whenever the timer is over and the sender
has not received any acknowledgment, then the sender knows that the particular
frame is either lost or damaged.
• The sender's window size is N.
• The receiver's window size is 1.
• The sender waits for the period specified by the timer before sending the frame
again. Hence, the Go-Back-N ARQ is a slower protocol than the Selective Repeat
ARQ.
• In the Go-Back-N ARQ protocol, the efficiency is N/(1+2a), where a is the ratio of
the propagation delay to the transmission delay and N is the number of packets sent.
• Go-Back-N is quite easy to implement, but a lot of bandwidth is wasted when the
error rate is high, because the entire window is retransmitted every time.
2. Selective Repeat ARQ: The selective repeat ARQ is one of the Sliding Window Protocol
strategies that is used where reliable in-order delivery of the data packets is required. The
selective repeat ARQ is used for noisy channels or links and it manages the flow and error
control between the sender and the receiver.
In the selective repeat ARQ, we only retransmit the data frames that are damaged or lost. If
any frame is lost or damaged then the receiver sends a negative acknowledgment (NACK) to
the sender and if the frame is correctly received, it sends back an acknowledgment (ACK). As
we only resend the selected damaged frames so we name this technique the Selective
Repeat ARQ technique. The ACK and the NACK have the sequence number of the frame that
helps the sender to identify the lost frame.
• In the selective repeat ARQ, we only resend the data frames that are damaged or
lost.
• If any frame is lost or damaged then the receiver sends a negative acknowledgment
(NACK) to the sender and if the frame is correctly received, it sends back an
acknowledgment (ACK).
• The sender sets a timer for each frame so whenever the timer is over and the sender
has not received any acknowledgment, then the sender knows that the particular
frame is either lost or damaged.
• Since the sender would otherwise need to wait for the timer to expire before
retransmitting, negative acknowledgements (NACKs) are used to trigger
retransmission sooner.
• The ACK and the NACK have the sequence number of the frame that helps the
sender to identify the lost frame.
• The receiver has the capability of sorting the frames present in the memory buffer
using the sequence numbers.
• The sender must be capable enough to search for the lost frame for which the NACK
has been received.
• The size of the sender's window is 2^(m-1), where m is the number of bits used in
the header of the packet to express the packet's sequence number.
• The window size of the receiver is the same as that of the sender i.e. 2^(m-1).
• In the Selective Repeat protocol, the efficiency is N/(1+2a), where a is the ratio of
the propagation delay to the transmission delay and N is the number of packets sent.
• It is efficient as only the lost or broken packets need retransmission.
Note: ARQ stands for Automatic Repeat Request. ARQ is an error-control strategy that
ensures that a sequence of information is delivered in order and without any errors or
duplications despite transmission errors and losses.
Working of Sliding Window Protocols
The sender sends the data frames according to the specified window size of the sender and
the receiver receives the data frames according to its specified window size. The range of
the sequence numbers depends on the number of bits N used in the sequence-number
field, so the sequence numbers run from 0 to 2^N − 1.
The data link layer is used in a computer network to transmit data between two devices
or nodes. The layer is divided into two parts: data link control and the multiple access
resolution/protocol.
If there is a dedicated link between the sender and the receiver, the data link control
layer is sufficient; however, if there is no dedicated link present, then multiple stations can
access the channel simultaneously. Hence multiple access protocols are required to
decrease collisions and avoid crosstalk.
1. Random Access Protocol: In this, all stations have the same priority, that is, no station
has more priority than another. Any station can send data depending on the medium's
state (idle or busy). It has two features:
1. There is no fixed time for sending data
2. There is no fixed sequence of stations sending data
The Random access protocols are further subdivided as:
(a) ALOHA – It was designed for wireless LAN but is also applicable for shared medium. In
this, multiple stations can transmit data at the same time and can hence lead to collision
and data being garbled.
• Pure Aloha:
When a station sends data it waits for an acknowledgement. If the
acknowledgement doesn’t come within the allotted time then the station waits
for a random amount of time called back-off time (Tb) and re-sends the data.
Since different stations wait for different amounts of time, the probability of
further collision decreases. The maximum efficiency of Pure Aloha is 18.4%. Unlike
Slotted Aloha, Pure Aloha does not reduce the number of collisions to half.
• Vulnerable Time = 2* Frame transmission time
• Throughput = G exp{-2*G}
• Maximum throughput = 0.184 for G=0.5
• Slotted Aloha:
It is similar to pure aloha, except that we divide time into slots and sending of data
is allowed only at the beginning of these slots. If a station misses the allowed
time, it must wait for the next slot. This reduces the probability of collision. The
maximum efficiency of Slotted Aloha is 36.8%. Slotted Aloha reduces the number of
collisions to half and thereby doubles the efficiency of pure Aloha.
Vulnerable Time = Frame transmission time
Throughput = G exp{-G}
Maximum throughput = 0.368 for G=1
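The throughput formulas above can be checked numerically:

```python
# Pure ALOHA: S = G * e^(-2G), maximum 0.184 at G = 0.5.
# Slotted ALOHA: S = G * e^(-G), maximum 0.368 at G = 1.

import math

def pure_aloha(G):
    return G * math.exp(-2 * G)

def slotted_aloha(G):
    return G * math.exp(-G)

print(round(pure_aloha(0.5), 3))     # 0.184 -> 18.4% maximum efficiency
print(round(slotted_aloha(1.0), 3))  # 0.368 -> 36.8% maximum efficiency
```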
(b) CSMA – Carrier Sense Multiple Access ensures fewer collisions as the station is
required to first sense the medium (for idle or busy) before transmitting data. If it is idle
then it sends data, otherwise it waits till the channel becomes idle. However there is still
chance of collision in CSMA due to propagation delay.
CSMA access modes-
• 1-persistent: The node senses the channel; if idle, it sends the data, otherwise it
continuously keeps checking the medium for being idle and transmits
unconditionally (with probability 1) as soon as the channel becomes idle.
• Non-Persistent: The node senses the channel, if idle it sends the data,
otherwise it checks the medium after a random amount of time (not
continuously) and transmits when found idle.
• P-persistent: The node senses the medium; if idle, it sends the data with
probability p. If the data is not transmitted (with probability 1-p), then it waits for
some time and checks the medium again; if it is then found idle, it sends
with probability p. This repeats until the frame is sent. It is used in
WiFi and packet radio systems.
• O-persistent: Superiority of nodes is decided beforehand and transmission
occurs in that order. If the medium is idle, node waits for its time slot to send
data.
(c) CSMA/CD – Carrier sense multiple access with collision detection. The CSMA/CD
protocol works with a medium access control layer. Therefore, it first senses the shared
channel before broadcasting the frames, and if the channel is idle, it transmits a frame to
check whether the transmission was successful. If the frame is successfully received, the
station sends another frame. If any collision is detected in the CSMA/CD, the station sends a
jam/stop signal to the shared channel to terminate data transmission. After that, it waits
for a random time before sending a frame to a channel.
(d) CSMA/CA – Carrier sense multiple access with collision avoidance. The process of
collision detection involves the sender receiving acknowledgement signals. If there is just one
signal (its own), then the data was successfully sent, but if there are two signals (its own and
the one with which it collided), then a collision has occurred. To distinguish
between these two cases, the collision must have a significant impact on the received signal.
However, this is not the case in wireless networks, so CSMA/CA is used there.
CSMA/CA avoids collision by:
1. Interframe space – Station waits for medium to become idle and if found idle it
does not immediately send data (to avoid collision due to propagation delay)
rather it waits for a period of time called Interframe space or IFS. After this
time it again checks the medium for being idle. The IFS duration depends on
the priority of the station.
2. Contention Window – The waiting time is divided into slots. If the sender
is ready to send data, it chooses a random number of slots as wait time, which
doubles every time the medium is not found idle. If the medium is found busy, it
does not restart the entire process; rather, it resumes the timer when the
channel is found idle again.
3. Acknowledgement – The sender re-transmits the data if acknowledgement is
not received before time-out.
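The contention-window doubling in step 2 (binary exponential backoff) can be sketched as follows; the CW_MIN and CW_MAX values are hypothetical:

```python
# Binary exponential backoff sketch: the contention window doubles on
# every round in which the medium was found busy, up to a maximum.

import random

CW_MIN, CW_MAX = 4, 64  # window bounds in slots (assumed values)

def backoff_slots(attempt):
    """Random wait, in slots, after `attempt` busy-medium rounds."""
    cw = min(CW_MIN * (2 ** attempt), CW_MAX)
    return random.randrange(cw)   # uniform in [0, cw)

# The window doubles: 4, 8, 16, 32, 64, 64, ... slots.
for attempt in range(6):
    print(attempt, backoff_slots(attempt))
```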
B. Controlled Access Protocol
It is a method of reducing data frame collisions on a shared channel. In the controlled access
method, the stations consult one another, and a particular station sends a data frame only
after being approved by all other stations. A single station cannot send data frames
unless it is authorized by the other stations. There are three types of controlled
access: Reservation, Polling, and Token Passing.
C. Channelization Protocols
A channelization protocol allows the total usable bandwidth of a shared channel to
be shared across multiple stations based on time, frequency, and code. All stations can
therefore send their data frames to the channel at the same time.
Following are the various methods to access the channel based on time, frequency, and
code:
Frequency Division Multiple Access (FDMA) – The available bandwidth is divided into
equal bands so that each station can be allocated its own band. Guard bands are also
added so that no two bands overlap to avoid crosstalk and noise.
Time Division Multiple Access (TDMA) – In this, the bandwidth is shared between
multiple stations. To avoid collision time is divided into slots and stations are allotted
these slots to transmit data. However, there is an overhead of synchronization, as each
station needs to know its time slot. This is resolved by adding synchronization bits to each
slot. Another issue with TDMA is propagation delay which is resolved by addition of guard
bands.
Code Division Multiple Access (CDMA) – One channel carries all transmissions
simultaneously. There is neither division of bandwidth nor division of time. For example, if
there are many people in a room all speaking at the same time, perfect reception is still
possible if each pair of speakers uses a different language. Similarly, data from different
stations can be transmitted simultaneously using different code languages.