
Unit-II Data Link Layer

Error Detection

When data is transmitted from one device to another, there is no guarantee that
the data received is identical to the data transmitted. An error occurs when
the message received at the receiver end is not identical to the message transmitted.

Types Of Errors

Errors can be classified into two categories:

o Single-Bit Error
o Burst Error

Single-Bit Error:
Only one bit of a given data unit is changed from 1 to 0 or from 0 to 1.

In the above figure, the message that is sent is corrupted by a single bit, i.e., a 0 bit is
changed to 1.

A single-bit error is less likely in serial data transmission. For example, if the
sender transmits at 1 Mbps, each bit lasts only 1 µs, so for a single-bit error to
occur the noise must last only about 1 µs, which is very rare.
Single-bit errors mainly occur in parallel data transmission. For example, if eight wires
are used to send the eight bits of a byte and one of the wires is noisy, then a single bit
is corrupted per byte.

Burst Error:
When two or more bits of the data unit are changed from 0 to 1 or from 1 to 0, it is
known as a Burst Error.

The length of a burst error is measured from the first corrupted bit to the last corrupted bit.

The duration of the noise causing a burst error is longer than the duration causing a single-bit error.

Burst errors are most likely to occur in serial data transmission.

The number of affected bits depends on the duration of the noise and data rate.

Error Detecting Techniques:


The most popular error detecting techniques are Single Parity Check, Checksum, and Cyclic Redundancy Check (CRC).

Single Parity Check

o Single parity checking is a simple and inexpensive mechanism for detecting
errors.
o In this technique, a redundant bit, known as a parity bit, is appended at the end
of the data unit so that the number of 1s becomes even. For an 8-bit data unit,
the total number of transmitted bits would therefore be 9.
o If the number of 1s is odd, then parity bit 1 is appended; if the number of 1s is
even, then parity bit 0 is appended at the end of the data unit.
o At the receiving end, the parity bit is calculated from the received data bits and
compared with the received parity bit.
o Because this technique makes the total number of 1s even, it is known as even-
parity checking.
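The mechanism above can be sketched in a few lines of Python (a minimal illustration of even parity, not a production implementation):

```python
def add_even_parity(data_bits):
    """Append a parity bit so the total number of 1s is even."""
    parity = sum(data_bits) % 2          # 1 if the count of 1s is odd, else 0
    return data_bits + [parity]

def check_even_parity(received_bits):
    """Accept the unit only if the number of 1s is even."""
    return sum(received_bits) % 2 == 0

frame = add_even_parity([1, 0, 1, 1, 0, 0, 1, 0])  # 8 data bits -> 9 transmitted bits
print(frame)                     # [1, 0, 1, 1, 0, 0, 1, 0, 0]
print(check_even_parity(frame))  # True
frame[3] ^= 1                    # introduce a single-bit error
print(check_even_parity(frame))  # False: the error is detected
```

Note that flipping any two bits of the frame would leave the 1s count even, which is exactly the drawback discussed next.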

Drawbacks Of Single Parity Checking


o It can only detect errors that affect an odd number of bits.
o If two bits are flipped or interchanged, the error goes undetected.

Checksum
A Checksum is an error detection technique based on the concept of redundancy.

It is divided into two parts:


Checksum Generator

A checksum is generated at the sending side. The checksum generator subdivides the data into
equal segments of n bits each, and all these segments are added together using one's
complement arithmetic. The sum is complemented and appended to the original data as the
checksum field. The extended data is transmitted across the network.

Suppose L is the total sum of the data segments; then the checksum is the one's complement of L.

The sender follows the given steps:

1. The block unit is divided into k sections, each of n bits.
2. All k sections are added together using one's complement arithmetic to get the sum.
3. The sum is complemented and becomes the checksum field.
4. The original data and the checksum field are sent across the network.
Checksum Checker

A Checksum is verified at the receiving side. The receiver subdivides the incoming
data into equal segments of n bits each, and all these segments are added together,
and then this sum is complemented. If the complement of the sum is zero, then the
data is accepted otherwise data is rejected.

The receiver follows the given steps:

1. The block unit is divided into k sections, each of n bits.
2. All k sections are added together using one's complement arithmetic to get the sum.
3. The sum is complemented.
4. If the result is zero, the data is accepted; otherwise, the data is discarded.
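The generator and checker steps can be sketched in Python. This toy version works on 4-bit segments represented as integers (an illustration of one's-complement arithmetic, not a real Internet-checksum implementation):

```python
def ones_complement_sum(segments, n):
    """Add n-bit segments in one's complement (wrap carries back in)."""
    mask = (1 << n) - 1
    total = 0
    for seg in segments:
        total += seg
        while total >> n:                       # fold any carry back into the sum
            total = (total & mask) + (total >> n)
    return total

def make_checksum(segments, n):
    """Sender: the checksum is the complement of the one's-complement sum."""
    return ~ones_complement_sum(segments, n) & ((1 << n) - 1)

def verify(segments_with_checksum, n):
    """Receiver: the complement of the total sum must be zero."""
    total = ones_complement_sum(segments_with_checksum, n)
    return (~total & ((1 << n) - 1)) == 0

data = [0b1001, 0b1110, 0b0101]                  # three 4-bit segments
cks = make_checksum(data, 4)
print(bin(cks))                                  # 0b10
print(verify(data + [cks], 4))                   # True
print(verify([0b1001, 0b1111, 0b0101, cks], 4))  # False: one segment corrupted
```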

Cyclic Redundancy Check (CRC)


CRC is a redundancy-based error detection technique used to determine whether an error has occurred.

Following are the steps used in CRC for error detection:

o In the CRC technique, a string of n 0s is appended to the data unit, where n is
one less than the number of bits in a predetermined binary number, known as
the divisor, which is n+1 bits long.
o Secondly, the newly extended data is divided by the divisor using a process
known as binary (modulo-2) division. The remainder generated from this division
is known as the CRC remainder.
o Thirdly, the CRC remainder replaces the appended 0s at the end of the original
data. This newly generated unit is sent to the receiver.
o The receiver receives the data followed by the CRC remainder. The receiver will
treat this whole unit as a single unit, and it is divided by the same divisor that
was used to find the CRC remainder.

If the remainder of this division is zero, the data has no detectable error and is
accepted.

If the remainder is not zero, the data contains an error and is therefore discarded.

Let's understand this concept through an example:

Suppose the original data is 11100 and divisor is 1001.


CRC Generator
o A CRC generator uses modulo-2 division. Firstly, three zeroes are appended at the
end of the data because the divisor is 4 bits long, and the string of 0s to be
appended is always one bit shorter than the divisor.
o Now the string becomes 11100000, and this string is divided by the divisor
1001.
o The remainder generated from the binary division is known as the CRC remainder. Here
the CRC remainder is 111.
o The CRC remainder replaces the appended string of 0s at the end of the data unit, and the
final string, 11100111, is sent across the network.

CRC Checker
o The functionality of the CRC checker is similar to that of the CRC generator.
o When the string 11100111 is received at the receiving end, the CRC checker performs
modulo-2 division.
o The string is divided by the same divisor, i.e., 1001.
o In this case, the CRC checker generates a remainder of zero, so the data is
accepted.
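The worked example above (data 11100, divisor 1001, remainder 111) can be reproduced with a short modulo-2 division sketch in Python:

```python
def mod2_div_remainder(bits, divisor_bits):
    """XOR-divide `bits` by the divisor and return the remainder."""
    bits = bits[:]                               # work on a copy
    n = len(divisor_bits) - 1
    for i in range(len(bits) - n):
        if bits[i] == 1:                         # leading bit is 1: XOR with divisor
            for j in range(len(divisor_bits)):
                bits[i + j] ^= divisor_bits[j]
    return bits[-n:]                             # last n bits are the remainder

def crc_generate(data, divisor):
    """Sender: append n zeros, divide, return the CRC remainder."""
    return mod2_div_remainder(data + [0] * (len(divisor) - 1), divisor)

def crc_check(codeword, divisor):
    """Receiver: the whole codeword must divide with remainder zero."""
    return all(b == 0 for b in mod2_div_remainder(codeword, divisor))

data, divisor = [1, 1, 1, 0, 0], [1, 0, 0, 1]
rem = crc_generate(data, divisor)
print(rem)                               # [1, 1, 1] -> transmit 11100111
print(crc_check(data + rem, divisor))    # True: remainder is zero
print(crc_check([1, 1, 1, 0, 1, 1, 1, 1], divisor))  # False: single-bit error
```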

Error-correction
Error-correcting codes (ECC) are sequences of redundant symbols generated by specific
algorithms for detecting and correcting errors in data transmitted over noisy channels.
Error-correcting codes can ascertain the exact number and location of the corrupted bits,
within the limitations of the algorithm.
ECCs can be broadly categorized into two types −
• Block codes − The message is divided into fixed-sized blocks of bits, to which
redundant bits are added for error detection or correction.
• Convolutional codes − The message comprises data streams of arbitrary
length, and parity symbols are generated by the sliding application of a Boolean
function to the data stream.

Hamming Code
Hamming code is a block code that is capable of detecting up to two simultaneous bit errors
and correcting single-bit errors. It was developed by R.W. Hamming for error correction.
In this coding method, the source encodes the message by inserting redundant bits within
the message. These redundant bits are extra bits that are generated and inserted at specific
positions in the message itself to enable error detection and correction. When the
destination receives this message, it performs recalculations to detect errors and find the bit
position that has error.

Encoding a message by Hamming Code


The procedure used by the sender to encode the message encompasses the following steps

• Step 1 − Calculation of the number of redundant bits.
• Step 2 − Positioning the redundant bits.
• Step 3 − Calculating the values of each redundant bit.
Once the redundant bits are embedded within the message, it is sent to the receiver.

Step 1 − Calculation of the number of redundant bits.


If the message contains m data bits, r redundant bits are added so that (m + r) bits can
indicate at least (m + r + 1) different states. Here, (m + r) states indicate the
location of an error in each of the (m + r) bit positions, and one additional state indicates no error.
Since r bits can indicate 2^r states, 2^r must be at least equal to (m + r + 1). Thus the
following relation should hold: 2^r ≥ m + r + 1
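The relation 2^r ≥ m + r + 1 can be solved for the smallest r with a short loop (illustrative sketch):

```python
def redundant_bits(m):
    """Smallest r such that 2**r >= m + r + 1."""
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r

print(redundant_bits(4))   # 3 -> the classic Hamming(7,4) code
print(redundant_bits(7))   # 4 -> Hamming(11,7)
```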

Step 2 − Positioning the redundant bits.


The r redundant bits are placed at bit positions that are powers of 2, i.e. 1, 2, 4, 8, 16, etc. They are
referred in the rest of this text as r1 (at position 1), r2 (at position 2), r3 (at position 4), r4 (at
position 8) and so on.

Step 3 − Calculating the values of each redundant bit.


The redundant bits are parity bits. A parity bit is an extra bit that makes the number of 1s
either even or odd. The two types of parity are −
• Even Parity − Here the total number of 1s in the message is made even.
• Odd Parity − Here the total number of 1s in the message is made odd.
Each redundant bit, ri, is calculated as the parity, generally even parity, based upon its bit
position. It covers all bit positions whose binary representation includes a 1 in the ith position
except the position of ri. Thus −
• r1 is the parity bit for all data bits in positions whose binary representation
includes a 1 in the least significant position excluding 1 (3, 5, 7, 9, 11 and so on)
• r2 is the parity bit for all data bits in positions whose binary representation
includes a 1 in the position 2 from right except 2 (3, 6, 7, 10, 11 and so on)
• r3 is the parity bit for all data bits in positions whose binary representation
includes a 1 in the position 3 from right except 4 (5-7, 12-15, 20-23 and so on)
Decoding a message in Hamming Code
Once the receiver gets an incoming message, it performs recalculations to detect errors and
correct them. The steps for recalculation are −
• Step 1 − Calculation of the number of redundant bits.
• Step 2 − Positioning the redundant bits.
• Step 3 − Parity checking.
• Step 4 − Error detection and correction

Step 1 − Calculation of the number of redundant bits


Using the same formula as in encoding, the number of redundant bits is ascertained:
2^r ≥ m + r + 1, where m is the number of data bits and r is the number of redundant bits.

Step 2 − Positioning the redundant bits


The r redundant bits are placed at bit positions that are powers of 2, i.e. 1, 2, 4, 8, 16, etc.

Step 3 − Parity checking


Parity checks are recomputed from the data bits and the redundant bits using the same
rule as during generation, giving check bits c1, c2, c3, etc. Thus
c1 = parity(1, 3, 5, 7, 9, 11 and so on)
c2 = parity(2, 3, 6, 7, 10, 11 and so on)
c3 = parity(4-7, 12-15, 20-23 and so on)

Step 4 − Error detection and correction


The decimal equivalent of the check bits' binary values is calculated. If it is 0, there is no error.
Otherwise, the decimal value gives the bit position that has the error. For example, if c4c3c2c1 =
1001, the data bit at position 9 (the decimal equivalent of 1001) has an error. The bit
is flipped to get the correct message.
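The whole encode/decode procedure can be sketched in Python. This toy version uses even parity and the 1-based positions described above; it is an illustration, not an optimized implementation:

```python
def hamming_encode(data_bits):
    """Place data bits at non-power-of-2 positions, compute even-parity bits."""
    m = len(data_bits)
    r = 0
    while 2 ** r < m + r + 1:          # 2^r >= m + r + 1
        r += 1
    n = m + r
    code = [0] * (n + 1)               # 1-indexed; position 0 unused
    it = iter(data_bits)
    for pos in range(1, n + 1):
        if pos & (pos - 1):            # not a power of 2: a data position
            code[pos] = next(it)
    for i in range(r):                 # parity bit at position 2**i
        p = 2 ** i
        code[p] = sum(code[pos] for pos in range(1, n + 1)
                      if pos != p and pos & p) % 2
    return code[1:]

def hamming_decode(code_bits):
    """Return (corrected codeword, error position or 0 if none)."""
    code = [0] + list(code_bits)
    n = len(code_bits)
    syndrome, i = 0, 0
    while 2 ** i <= n:
        p = 2 ** i
        if sum(code[pos] for pos in range(1, n + 1) if pos & p) % 2:
            syndrome += p              # failed check contributes its position
        i += 1
    if syndrome:
        code[syndrome] ^= 1            # flip the erroneous bit
    return code[1:], syndrome

word = hamming_encode([1, 0, 1, 1])    # 4 data bits -> 7-bit codeword
word[4] ^= 1                           # single-bit error at position 5
corrected, pos = hamming_decode(word)
print(pos)                             # 5
print(corrected == hamming_encode([1, 0, 1, 1]))   # True: error corrected
```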

Error Control
Error control is concerned with ensuring that all frames are delivered to the destination,
possibly in order.
Ensuring delivery requires three items, which are explained below −
Acknowledgement
Typically, reliable delivery is achieved using the "acknowledgement with retransmission"
paradigm, in which the receiver returns a special ACK frame to the sender indicating the
correct receipt of a frame.
In some systems, the receiver also returns a negative ACK (NACK) for incorrectly received
frames, telling the sender to retransmit a frame without waiting for a timer to expire.

Timers
One problem that simple ACK/NACK schemes fail to address is recovering from a frame that
is lost, and as a result, fails to solicit an ACK or NACK.
What happens if ACK or NACK becomes lost?
Retransmission timers are used to resend frames that don't produce an ACK. When sending a
frame, the sender schedules a timer to expire at some time after the ACK should have
been returned. If the timer expires before an ACK arrives, the frame is retransmitted.

Sequence Number
Retransmission introduces the possibility of duplicate frames. To reduce duplicates, we must
add sequence numbers to each frame, so that a receiver can distinguish between new frames
and old frames.

Flow Control
Flow control deals with throttling the speed of the sender to match the speed of the receiver.
There are two approaches to flow control −

Feedback-based flow control

The receiver sends information back to the sender, giving it permission to send more data,
or at least telling the sender how the receiver is doing.

Rate-based flow control

The protocol has a built-in mechanism that limits the rate at which the sender may
transmit, without feedback from the receiver.

Differences
The major differences between Flow Control and Error Control are as follows −

o Flow control is a method used to maintain the proper transmission of data from the
sender to the receiver, whereas error control is used to ensure that error-free data is
delivered from sender to receiver.
o Feedback-based flow control and rate-based flow control are the approaches used to
achieve flow control, whereas error control uses methods such as Cyclic Redundancy
Check, parity checking, and checksums.
o Flow control avoids overrunning the receiver and prevents data loss, whereas error
control detects and corrects errors that might have occurred in transmission.
o Examples of flow control are Stop-and-Wait and Sliding Window; examples of error
control are Stop-and-Wait ARQ, Go-Back-N ARQ, and Selective-Repeat ARQ.

Stop and Wait protocol


Here "stop and wait" means that the sender sends one frame of data to the receiver,
then stops and waits until it receives the acknowledgment from the receiver. The
stop-and-wait protocol is a flow control protocol; flow control is one of the
services of the data link layer.

It is a data-link-layer protocol used for transmitting data over noiseless
channels. It provides unidirectional data transmission, which means that data flows
in only one direction. It provides a flow-control mechanism but does not provide
any error control mechanism.

The idea behind this protocol is that after the sender sends a frame, it waits for
the acknowledgment before sending the next frame.

The primitives of stop and wait protocol are:

Sender side

Rule 1: Sender sends one data packet at a time.

Rule 2: Sender sends the next packet only when it receives the acknowledgment of
the previous packet.

Therefore, the idea of stop and wait protocol in the sender's side is very simple, i.e.,
send one packet at a time, and do not send another packet before receiving the
acknowledgment.

Receiver side
Rule 1: Receive and then consume the data packet.

Rule 2: When the data packet is consumed, receiver sends the acknowledgment to the
sender.

Therefore, the idea of stop and wait protocol in the receiver's side is also very simple,
i.e., consume the packet, and once the packet is consumed, the acknowledgment is
sent. This is known as a flow control mechanism.
Working of Stop and Wait protocol

The above figure shows the working of the stop-and-wait protocol. The sender sends a
packet, known as a data packet, and will not send the second packet without receiving
the acknowledgment of the first. The receiver sends an acknowledgment for each data
packet it receives. Once the acknowledgment is received, the sender sends the next
packet. This process continues until all the packets are sent. The main advantage of
this protocol is its simplicity, but it also has disadvantages. For example, if there
are 1000 data packets to be sent, they cannot all be sent at once, since in the
stop-and-wait protocol only one packet is sent at a time.

Disadvantages of Stop and Wait protocol


The following are the problems associated with a stop and wait protocol:

1. Problems occur due to lost data


Suppose the sender sends the data and the data is lost. The receiver waits for the data for
a long time. Since the data is not received, the receiver does not send any
acknowledgment, and since the sender does not receive any acknowledgment, it will not
send the next packet.

In this case, two problems occur:

o The sender waits indefinitely for an acknowledgment.
o The receiver waits indefinitely for data.

2. Problems occur due to lost acknowledgment


Suppose the sender sends the data and it has also been received by the receiver. On
receiving the packet, the receiver sends the acknowledgment. In this case, the
acknowledgment is lost in a network, so there is no chance for the sender to receive
the acknowledgment. There is also no chance for the sender to send the next packet
as in stop and wait protocol, the next packet cannot be sent until the acknowledgment
of the previous packet is received.

In this case, one problem occurs:

o Sender waits for an infinite amount of time for an acknowledgment.

3. Problem due to the delayed data or acknowledgment

Suppose the sender sends the data and it is received by the receiver. The receiver
then sends the acknowledgment, but the acknowledgment arrives after the timeout
period on the sender's side. Because it arrives late, the acknowledgment can be
wrongly matched to some other data packet.
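All three problems are handled in practice by combining an acknowledgment, a retransmission timer, and a 1-bit sequence number (Stop-and-Wait ARQ). A rough sketch of the sender's loop follows; the channel here is simulated with random loss, and all names (`lossy_send`, `stop_and_wait_send`) are illustrative, not a real networking API:

```python
import random

random.seed(7)

def lossy_send(frame, loss_prob=0.3):
    """Simulated channel: returns an ACK, or None if the frame or ACK was lost."""
    if random.random() < loss_prob:
        return None                       # frame or its ACK lost in transit
    return ('ACK', frame['seq'])

def stop_and_wait_send(packets, max_retries=50):
    seq = 0
    log = []                              # (data, attempts needed) per packet
    for data in packets:
        frame = {'seq': seq, 'data': data}
        for attempt in range(max_retries):
            ack = lossy_send(frame)       # timer expiry is modeled as ack == None
            if ack == ('ACK', seq):
                log.append((data, attempt + 1))
                break
        else:
            raise RuntimeError('link down: too many retransmissions')
        seq ^= 1                          # alternate the 1-bit sequence number
    return log

print(stop_and_wait_send(['p1', 'p2', 'p3']))
```

The 1-bit sequence number lets the receiver discard a duplicate caused by a lost or late ACK, which addresses problems 2 and 3 above.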
Sliding Window Protocol
The sliding window is a technique for sending multiple frames at a time. It controls the
data packets between the two devices where reliable and gradual delivery of data
frames is needed. It is also used in TCP (Transmission Control Protocol).

In this technique, each frame carries a sequence number. The sequence numbers are
used to find missing data at the receiver end and to avoid accepting duplicate data.

Types of Sliding Window Protocol


Sliding window protocol has two types:

1. Go-Back-N ARQ
2. Selective Repeat ARQ

Go-Back-N ARQ
Go-Back-N ARQ protocol is also known as Go-Back-N Automatic Repeat Request. It is
a data link layer protocol that uses a sliding window method. In this, if any frame is
corrupted or lost, all subsequent frames have to be sent again.

The size of the sender window is N in this protocol. For example, Go-Back-8, the size
of the sender window, will be 8. The receiver window size is always 1.

If the receiver receives a corrupted frame, it discards it; the receiver does not accept a
corrupted frame. When the sender's timer expires, the sender retransmits that frame and all
frames after it.
The design of the Go-Back-N ARQ protocol is shown below.
The example of Go-Back-N ARQ is shown below in the figure.
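The go-back behaviour can be illustrated with a toy simulation (the function and parameter names are hypothetical; a real implementation would add timers and cumulative ACKs):

```python
def go_back_n(frames, window=4, lose=frozenset()):
    """Toy Go-Back-N: if frame i is lost, i and everything after it is resent.
    `lose` holds (frame_index, attempt_number) pairs that the channel drops.
    Returns the full transmission log, including retransmissions."""
    base = 0
    log = []
    attempt = {}
    while base < len(frames):
        end = min(base + window, len(frames))
        for i in range(base, end):            # send the whole window from `base`
            attempt[i] = attempt.get(i, 0) + 1
            log.append(frames[i])
        for i in range(base, end):            # receiver accepts strictly in order
            if (i, attempt[i]) in lose:
                break                         # timeout: resend from i next round
            base = i + 1
    return log

# frame f2 is lost on its first transmission: f2 AND f3 are sent again
print(go_back_n(['f0', 'f1', 'f2', 'f3'], window=4, lose={(2, 1)}))
# ['f0', 'f1', 'f2', 'f3', 'f2', 'f3']
```

This makes the table below concrete: everything after the lost frame is retransmitted, even though f3 arrived intact.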
Selective Repeat ARQ
Selective Repeat ARQ is also known as the Selective Repeat Automatic Repeat Request.
It is a data link layer protocol that uses a sliding window method. The Go-Back-N ARQ
protocol works well if errors are rare, but if frames are frequently in error, a lot of
bandwidth is wasted in resending them. So we use the Selective Repeat ARQ protocol.
In this protocol, the size of the sender window is always equal to the size of the
receiver window, and the window size is always greater than 1.

If the receiver receives a corrupt frame, it does not simply discard it; it sends a negative
acknowledgment to the sender. The sender resends that frame as soon as it receives the
negative acknowledgment, without waiting for any timeout. The design of the Selective
Repeat ARQ protocol is shown below.

The example of the Selective Repeat ARQ protocol is shown below in the figure.
Difference between Go-Back-N ARQ and Selective Repeat ARQ

o In Go-Back-N ARQ, if a frame is corrupted or lost, all subsequent frames have to be
sent again; in Selective Repeat ARQ, only the corrupted or lost frame is sent again.
o If the error rate is high, Go-Back-N ARQ wastes a lot of bandwidth; Selective Repeat
ARQ loses comparatively little bandwidth.
o Go-Back-N ARQ is less complex; Selective Repeat ARQ is more complex because it has
to do sorting and searching as well, and it also requires more storage.
o Go-Back-N ARQ does not require sorting; in Selective Repeat ARQ, sorting is done to
get the frames in the correct order.
o Go-Back-N ARQ does not require searching; Selective Repeat ARQ performs a search
operation.
o Go-Back-N ARQ is used more; Selective Repeat ARQ is used less because it is more
complex.


Multiple access protocol
When a sender and receiver have a dedicated link to transmit data packets, data link
control is enough to handle the channel. But suppose there is no dedicated path to
communicate or transfer data between two devices. In that case, multiple stations
access the channel and may transmit data over it simultaneously, which can create
collisions and crosstalk. Hence, a multiple access protocol is required to reduce
collisions and avoid crosstalk between the channels.

For example, suppose there is a classroom full of students. When a teacher asks a
question, all the students (small channels) start answering at the same time
(transferring data simultaneously). Because they all respond at once, the answers
overlap or are lost. It is therefore the responsibility of the teacher (the multiple
access protocol) to manage the students and make them answer one at a time.

Following are the types of multiple access protocols, subdivided into different
categories:

A. Random Access Protocol


In this protocol, all stations have equal priority to send data over the channel.
In random access protocols, no station depends on or is controlled by another station.
Depending on the channel's state (idle or busy), each station transmits its data frame.
However, if more than one station sends data over the channel at the same time, there
may be a collision or data conflict. Due to the collision, the data frame packets may
be lost or changed, and hence not received correctly by the receiver.

Following are the different random-access methods for broadcasting frames on the
channel.

o Aloha
o CSMA
o CSMA/CD
o CSMA/CA

ALOHA Random Access Protocol

It was designed for wireless LANs (Local Area Networks) but can also be used in any
shared medium to transmit data. Using this method, any station can transmit data
across the network whenever a data frame is available for transmission.

Aloha Rules

1. Any station can transmit data on the channel at any time.
2. It does not require any carrier sensing.
3. Frames may collide and be lost when multiple stations transmit at once.
4. Aloha relies on acknowledgments of the frames; there is no collision detection.
5. It requires retransmission of data after some random amount of time.

Pure Aloha

In pure Aloha, whenever a station has data to send, it transmits it on the channel
without checking whether the channel is idle, so collisions may occur and the data
frame can be lost. After transmitting a data frame, the station waits for the
receiver's acknowledgment. If the acknowledgment does not arrive within the specified
time, the station assumes the frame has been lost or destroyed. It then waits for a
random amount of time, called the backoff time (Tb), and retransmits the frame,
repeating until all the data is successfully delivered to the receiver.

1. The total vulnerable time of pure Aloha is 2 × Tfr.
2. Maximum throughput occurs when G = 1/2, and is 18.4%.
3. The probability of successful transmission of a data frame is S = G × e^(-2G).

As we can see in the figure above, four stations access a shared channel and transmit
data frames. Some frames collide because several stations send their frames at the
same time. Only two frames, frame 1.1 and frame 2.2, are successfully delivered to the
receiver; the other frames are lost or destroyed. Whenever two frames overlap on the
shared channel, a collision occurs and both suffer damage: even if only the first bit
of a new frame overlaps with the last bit of a frame that is almost finished, both
frames are destroyed and both stations must retransmit.

Slotted Aloha

Slotted Aloha was designed to improve on pure Aloha's efficiency, because pure Aloha
has a very high probability of frame collision. In slotted Aloha, the shared channel
is divided into fixed time intervals called slots. If a station wants to send a frame,
the frame can only be sent at the beginning of a slot, and only one frame may be sent
in each slot. If a station misses the beginning of a slot, it must wait until the
beginning of the next slot. However, a collision is still possible if two or more
stations try to send a frame at the beginning of the same time slot.

1. Maximum throughput occurs in slotted Aloha when G = 1, and is about 37%.
2. The probability of successfully transmitting a data frame in slotted Aloha is S =
G × e^(-G).
3. The total vulnerable time required in slotted Aloha is Tfr.
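Both throughput formulas can be checked numerically. Note that slotted Aloha's throughput is S = G·e^(-G): its vulnerable time is a single slot (Tfr), so the exponent lacks the factor 2 that appears in pure Aloha:

```python
import math

def pure_aloha_S(G):
    """Throughput of pure Aloha: vulnerable time is 2*Tfr."""
    return G * math.exp(-2 * G)

def slotted_aloha_S(G):
    """Throughput of slotted Aloha: vulnerable time is Tfr."""
    return G * math.exp(-G)

print(round(pure_aloha_S(0.5), 3))     # 0.184 -> 18.4% at G = 1/2
print(round(slotted_aloha_S(1.0), 3))  # 0.368 -> about 37% at G = 1
```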

CSMA (Carrier Sense Multiple Access)

CSMA is a media access protocol that senses the traffic on a channel (idle or busy)
before transmitting data. If the channel is idle, the station can send data;
otherwise, it must wait until the channel becomes idle. This reduces the chance of a
collision on the transmission medium.

CSMA Access Modes

1-Persistent: In 1-persistent CSMA, each node first senses the shared channel, and if
the channel is idle, it immediately sends the data. Otherwise, it continuously
monitors the channel and transmits the frame unconditionally as soon as the channel
becomes idle.

Non-Persistent: Before transmitting, each node senses the channel, and if the channel
is idle, it immediately sends the data. Otherwise, the station waits for a random time
(it does not sense continuously) and, when it then finds the channel idle, transmits
the frame.

P-Persistent: This is a combination of the 1-persistent and non-persistent modes. Each
node senses the channel, and if the channel is idle, it sends a frame with probability
p. With probability q = 1 - p it defers, waits for the next time slot, and repeats the
process.

O-Persistent: A supervisory method that defines a transmission order among the
stations on the shared channel. When the channel is found idle, each station waits for
its turn to transmit the data.
CSMA/CD

Carrier sense multiple access with collision detection (CSMA/CD) is a network protocol
for transmitting data frames that works at the medium access control layer. A station
first senses the shared channel, and if the channel is idle, it transmits a frame
while monitoring whether the transmission succeeds. If the frame is successfully
received, the station can send the next frame. If a collision is detected, the station
sends a jam/stop signal on the shared channel to terminate the transmission, then
waits for a random time before sending the frame again.

CSMA/CA

Carrier sense multiple access with collision avoidance (CSMA/CA) is a network protocol
for transmitting data frames that works at the medium access control layer. When a
station sends a data frame on the channel, it listens to the channel to check whether
the transmission was clear. If the station receives only a single signal (its own),
the data frame has been successfully transmitted to the receiver. But if it receives
two signals (its own and one from a colliding frame), a collision has occurred on the
shared channel. The sender thus detects a collision by what it hears back on the
channel.

Following are the methods used in the CSMA/ CA to avoid the collision:

Interframe space: The station waits for the channel to become idle, and when it finds
the channel idle, it does not send data immediately. Instead, it waits for a period of
time called the interframe space (IFS). The IFS duration is often used to define the
priority of the station.

Contention window: The total time is divided into slots. When the station is ready to
transmit the data frame, it chooses a random number of slots as its wait time. If the
channel becomes busy during the wait, the station does not restart the entire process;
it merely pauses the timer and resumes it when the channel is idle again.

Acknowledgment: The sender retransmits the data frame if the acknowledgment does not
arrive before its timer expires.

B. Controlled Access Protocol


It is a method of reducing data frame collisions on a shared channel. In the
controlled access method, the stations consult one another, and a station can send a
data frame only when it is authorized by the other stations. In other words, a single
station cannot send data frames unless it is approved by all the other stations. There
are three types of controlled access: Reservation, Polling, and Token Passing.

C. Channelization Protocols
A channelization protocol allows the total usable bandwidth of a shared channel to be
shared among multiple stations based on time, frequency, or codes. All stations can
then access the channel at the same time to send their data frames.

Following are the various methods of accessing the channel based on time, frequency,
and codes:

1. FDMA (Frequency Division Multiple Access)


2. TDMA (Time Division Multiple Access)
3. CDMA (Code Division Multiple Access)

FDMA

Frequency Division Multiple Access (FDMA) divides the available bandwidth into equal
frequency bands so that multiple users can send data simultaneously, each on a
different subchannel. Each station is reserved a particular band to prevent crosstalk
between channels and interference between stations.
TDMA

Time Division Multiple Access (TDMA) is a channel access method that allows the same
frequency bandwidth to be shared among multiple stations. To avoid collisions on the
shared channel, it divides the channel into time slots and allocates a slot to each
station for transmitting its data frames. However, TDMA has a synchronization
overhead: to mark each station's time slot, synchronization bits must be added to each
slot.

CDMA

Code Division Multiple Access (CDMA) is a channel access method in which all stations
can send data simultaneously over the same channel. Each station may transmit data
frames using the full bandwidth of the shared channel at all times; there is no
division of the channel into frequency bands or time slots. When multiple stations
send data simultaneously, their data frames are separated by unique code sequences:
each station has a different code for transmitting on the shared channel. For example,
imagine a room full of people speaking continuously in different languages. A listener
can understand only the person speaking the same language. Similarly, in the network,
if different stations communicate simultaneously using different codes, each receiver
can extract only the data encoded with its own code.
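The "different languages" analogy can be made concrete with orthogonal chip sequences. The sketch below uses a standard 4×4 Walsh code set; the station names and the bipolar (+1/-1) encoding of bits are illustrative:

```python
# Orthogonal Walsh codes for four stations (rows of a 4x4 Hadamard matrix)
codes = {
    'A': [+1, +1, +1, +1],
    'B': [+1, -1, +1, -1],
    'C': [+1, +1, -1, -1],
    'D': [+1, -1, -1, +1],
}

def transmit(bits_per_station):
    """Each station multiplies its bit (+1 or -1) by its code chip-by-chip;
    the signals simply add together on the shared channel."""
    channel = [0] * 4
    for station, bit in bits_per_station.items():
        for i in range(4):
            channel[i] += bit * codes[station][i]
    return channel

def receive(channel, station):
    """Correlate with the station's own code: inner product / code length.
    Orthogonality makes every other station's contribution cancel to zero."""
    code = codes[station]
    return sum(c * x for c, x in zip(code, channel)) // len(code)

channel = transmit({'A': +1, 'B': -1, 'C': +1})   # D stays silent
print(channel)                                    # [1, 3, -1, 1]
print([receive(channel, s) for s in 'ABCD'])      # [1, -1, 1, 0]
```

Each receiver recovers exactly the bit sent with its own code, and a silent station decodes to 0, mirroring how a listener hears only the speaker of their own language.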
