Module 2 Notes

This document covers the Data Link Layer's error detection and correction techniques, including parity checks, cyclic redundancy checks (CRC), flow control methods like Stop-and-Wait, and multiple access protocols such as CSMA/CD and CSMA/CA. It explains types of errors, redundancy concepts, and the mechanics of error detection methods, along with flow control to manage data transmission. It also details multiple access protocols, including ALOHA and CSMA, that prevent data collisions in shared communication channels.

MODULE 2

Data Link Layer - Error Detection and Correction: Parity, CRC, Hamming Distance; Flow Control and
Error Control; Stop and Wait; Multiple Access Protocols; CSMA/CD, CSMA/CA

ERROR DETECTION AND CORRECTION


ERROR:
Data can be corrupted during transmission. For reliable communication, errors must be detected and
corrected.
TYPES OF ERRORS:
 Single-bit Error:
The term single-bit error means that only one bit of a given data unit is changed from 1 to 0 or from 0 to 1. For example, 010101 changed to 110101 is a single-bit error: only one bit differs.

 Burst Error:

A burst error means that two or more bits in the data unit have changed. For example, if 010101 changes to 011001, two bits are corrupted.

Redundancy

Error detection uses the concept of redundancy, which means adding extra bits for detecting errors at the
destination; i.e., instead of repeating the entire data stream, a shorter group of bits is appended to the
end of each unit.

Detection methods
 Parity check
 Cyclic redundancy check
 Checksum

Parity check
A redundant bit called parity bit is added to every data unit so that the total number of 1’s in the unit
becomes even (or odd).

SIMPLE PARITY CHECK


In a simple parity check, a redundant bit is added to a string of data so that the total number of 1s in the data
becomes even (or odd). The received data bits are then passed through a parity-checking function. For even parity, it
checks for an even number of 1s; for odd parity, it checks for an odd number of 1s. If an error is detected, the
data is rejected.

PARITY CHECK CODE


The most familiar error-detecting code is the simple parity-check code. In this code, a k-bit dataword is
changed to an n-bit codeword where n = k + 1. The extra bit, called the parity bit, is selected to make the
total number of 1s in the codeword even (some implementations specify an odd number of 1s instead).
The minimum Hamming distance for this category is dmin = 2, which means that the code is a single-bit
error-detecting code; it cannot correct any error.
Our first code (Table 10.1) is a parity-check code with k = 4 and n = 5.
Figure 10.10 shows a possible structure of an encoder (at the sender) and a decoder (at the receiver). The
encoder uses a generator that takes a copy of a 4-bit dataword (a0, a1, a2, and a3) and generates a parity
bit r0. The dataword bits and the parity bit create the 5-bit codeword. The parity bit that is added makes
the number of 1s in the codeword even.

This is normally done by adding the 4 bits of the dataword (modulo-2); the result is the parity bit. In other
words,

r0 = a3 + a2 + a1 + a0 (modulo-2)

If the number of 1s is even, the result is 0; if the number of 1s is odd, the result is 1. In both cases, the
total number of 1s in the codeword is even. The sender sends the codeword, which may be corrupted during
transmission. The receiver receives a 5-bit word. The checker at the receiver does the same thing as the
generator in the sender, with one exception: the addition is done over all 5 bits. The result, which is called
the syndrome, is just 1 bit. The syndrome is 0 when the number of 1s in the received codeword is even;
otherwise, it is 1.

The syndrome is passed to the decision logic analyzer. If the syndrome is 0, there is no error in the received
codeword, and the data portion of the received codeword is accepted as the dataword; if the syndrome is 1,
the data portion of the received codeword is discarded and no dataword is created.
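As a small sketch (the function names are ours, not from the text), the even-parity generator and checker above can be written as:

```python
def parity_encode(dataword):
    """Generator: append a parity bit so the 5-bit codeword
    has an even number of 1s."""
    bits = [int(b) for b in dataword]
    r0 = sum(bits) % 2           # modulo-2 sum of the dataword bits
    return dataword + str(r0)

def parity_syndrome(codeword):
    """Checker: modulo-2 sum over all received bits;
    0 means the word passes, 1 means an error was detected."""
    return sum(int(b) for b in codeword) % 2

codeword = parity_encode("1011")      # three 1s, so the parity bit is 1
assert codeword == "10111"
assert parity_syndrome(codeword) == 0 # even number of 1s: no error
assert parity_syndrome("10101") == 1  # a single flipped bit is detected
```

Note that two flipped bits leave the count of 1s even, which is exactly why this code cannot detect an even number of bit errors.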

CYCLIC REDUNDANCY CHECK


Cyclic codes are special linear block codes with one extra property: in a cyclic code, if a codeword is cyclically
shifted (rotated), the result is another codeword. For example, if 1011000 is a codeword and we cyclically
left-shift it, then 0110001 is also a codeword. In this case, if we call the bits in the first word a0 to a6 and the
bits in the second word b0 to b6, the shift gives b1 = a0, b2 = a1, ..., b6 = a5, and b0 = a6.

In the encoder, the dataword has k bits (4 here); the codeword has n bits (7 here). The size of the dataword is
augmented by adding n - k (3 here) 0s to the right-hand side of the word. The n-bit result is fed into the
generator. The generator uses a divisor of size n - k + 1 (4 here), predefined and agreed upon. The generator
divides the augmented dataword by the divisor (modulo-2 division). The quotient of the division is discarded;
the remainder (r2r1r0) is appended to the dataword to create the codeword. The decoder receives the possibly
corrupted codeword. A copy of all n bits is fed to the checker, which is a replica of the generator. The remainder
produced by the checker is a syndrome of n - k (3 here) bits, which is fed to the decision logic analyzer. The
analyzer has a simple function: if the syndrome bits are all 0s, the 4 leftmost bits of the codeword are accepted
as the dataword (interpreted as no error); otherwise, the 4 bits are discarded (error).
Encoder
Let us take a closer look at the encoder. The encoder takes the dataword and augments it with n - k number of 0's.
It then divides the augmented dataword by the divisor, as shown in Figure 10.15.

As in decimal division, the process is done step by step. In each step, a copy of the divisor is XORed with the 4
bits of the dividend. The result of the XOR operation (remainder) is 3 bits (in this case), which is used for the
next step after 1 extra bit is pulled down to make it 4 bits long. There is one important point we need to remember
in this type of division: if the leftmost bit of the dividend (or of the part used in each step) is 0, the step cannot use
the regular divisor; we need to use an all-0s divisor. When there are no bits left to pull down, we have a result.
The 3-bit remainder forms the check bits (r2, r1, and r0). They are appended to the dataword to create the
codeword.
Decoder
The codeword can change during transmission. The decoder does the same division process as the encoder.
The remainder of the division is the syndrome. If the syndrome is all 0s, there is no error; the dataword is
separated from the received codeword and accepted. Otherwise, everything is discarded. Figure 10.16 shows
two cases: the left-hand figure shows the value of the syndrome when no error has occurred; the syndrome is
000. The right-hand part of the figure shows the case in which there is one single error; the syndrome is not
all 0s.
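The encoder and decoder above can both be sketched as the same modulo-2 division (a sketch with our own function names; the divisor 1011 is the one used in the example, while the dataword 1001 is an assumed input):

```python
def crc_remainder(bits, divisor):
    """Modulo-2 division: XOR the divisor into the dividend step by step;
    a step whose leftmost bit is 0 effectively uses the all-0s divisor."""
    bits = list(bits)
    d = len(divisor)
    for i in range(len(bits) - d + 1):
        if bits[i] == '1':                        # leftmost bit 1: use divisor
            for j in range(d):
                bits[i + j] = '1' if bits[i + j] != divisor[j] else '0'
    return ''.join(bits[-(d - 1):])               # the n - k remainder bits

def crc_encode(dataword, divisor):
    """Augment the dataword with n - k 0s, then append the remainder."""
    augmented = dataword + '0' * (len(divisor) - 1)
    return dataword + crc_remainder(augmented, divisor)

codeword = crc_encode("1001", "1011")
assert codeword == "1001110"                      # remainder 110 appended
assert crc_remainder(codeword, "1011") == "000"   # syndrome all 0s: no error
assert crc_remainder("1000110", "1011") != "000"  # a single-bit error is caught
```

The checker at the receiver is simply the same division applied to all n received bits; any nonzero syndrome discards the word.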

Divisor
Let us first consider the divisor. We need to note the following points:
1. The divisor is repeatedly XORed with part of the dividend.
2. The divisor has n - k + 1 bits which either are predefined or are all Os. In other words, the bits do not change
from one dataword to another. In our previous example, the divisor bits were either 1011 or 0000. The choice
was based on the leftmost bit of the part of the augmented data bits that are active in the XOR operation.
3. A close look shows that only n - k bits of the divisor is needed in the XOR operation. The leftmost bit is
not needed because the result of the operation is always 0, no matter what the value of this bit. The reason is
that the inputs to this XOR operation are either both Os or both 1s. In our previous example, only 3 bits, not
4, is actually used in the XOR operation.

POLYNOMIALS
The divisor in a CRC is most often represented not as a string of 1s and 0s, but as an algebraic
polynomial, in which each bit is the coefficient of a power of x. The polynomial format is useful for analyzing
the code mathematically. For example, the divisor 1011 used above corresponds to x^3 + x + 1.
Standard Polynomials:
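As a small illustration of the mapping (the helper name is ours), a divisor bit string can be rendered as its polynomial:

```python
def bits_to_poly(divisor):
    """Each bit of the divisor is the coefficient of a power of x,
    highest power first; 0-coefficient terms are dropped."""
    n = len(divisor) - 1
    terms = []
    for i, b in enumerate(divisor):
        if b == '1':
            power = n - i
            terms.append('1' if power == 0 else
                         'x' if power == 1 else f'x^{power}')
    return ' + '.join(terms)

assert bits_to_poly("1011") == "x^3 + x + 1"   # the divisor used above
```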

FLOW CONTROL
Flow control coordinates the amount of data that can be sent before receiving an acknowledgment (ACK). It is one of the most
important duties of the data link layer.
Protocols
STOP AND WAIT
If data frames arrive at the receiver site faster than they can be processed, the frames must be stored until their
use. Normally, the receiver does not have enough storage space, especially if it is receiving data from many
sources. This may result in either the discarding of frames or denial of service. To prevent the receiver from
becoming overwhelmed with frames, we somehow need to tell the sender to slow down. There must be feedback
from the receiver to the sender.
The protocol we discuss now is called the Stop-and-Wait Protocol because the sender sends one frame, stops
until it receives confirmation from the receiver (okay to go ahead), and then sends the next frame. We still have
unidirectional communication for data frames, but auxiliary ACK frames (simple tokens of acknowledgment)
travel in the other direction. We add flow control to our previous protocol.

After transmitting one packet, the sender waits for an acknowledgment (ACK) from the receiver before
transmitting the next one. In this way, the sender can recognize that the previous packet was transmitted
successfully, and we can say that stop-and-wait guarantees reliable transfer between nodes.
To support this feature, the sender keeps a record of each packet it sends.
Also, to avoid confusion caused by delayed or duplicated ACKs, stop-and-wait sends each packet with a unique
sequence number and expects that number back in the corresponding ACK.

If the sender doesn't receive an ACK for a previously sent packet after a certain period of time, it times
out and retransmits that packet. There are two cases in which the sender doesn't receive an ACK: one is when
the ACK is lost, and the other is when the frame itself is not delivered.
To support this feature, the sender keeps a timer for each packet.
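The sender's behavior described above can be sketched as a simplified simulation (our own naming and parameters; only ACK loss is modeled, with a random draw standing in for the channel):

```python
import random

def stop_and_wait(frames, loss_rate=0.3, max_tries=20):
    """Stop-and-Wait sender sketch: send one frame, wait for its ACK,
    retransmit on timeout. The alternating sequence number is what lets
    the receiver discard a duplicate when only the ACK was lost."""
    delivered, seq = [], 0
    for frame in frames:
        for attempt in range(max_tries):      # one timer per outstanding frame
            if random.random() >= loss_rate:  # ACK arrived before the timeout
                delivered.append((seq, frame))
                break
            # timeout: no ACK, so the sender retransmits (seq, frame)
        seq = 1 - seq                         # alternate 0/1 sequence numbers
    return delivered

random.seed(1)
assert stop_and_wait(["F0", "F1"]) == [(0, "F0"), (1, "F1")]
```

A real implementation would also model frame loss and the receiver's duplicate check; this sketch only shows the timer-and-retransmit loop on the sender side.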

MULTIPLE ACCESS
The medium access control (MAC) sublayer is part of the data link layer. The MAC sublayer emulates a
full-duplex logical communication channel in a multipoint network. This channel may provide unicast,
multicast, or broadcast communication service. The MAC sublayer uses MAC protocols to ensure that
signals sent from different stations across the same channel do not collide, much as two people speaking
at once garble each other. A multiple-access protocol coordinates access to the link (multipoint or broadcast link).

RANDOM ACCESS:
Also called contention-based access. No station is superior to another, and no station is assigned to
control another. A station that has data to send uses a procedure defined by the protocol to decide
whether or not to send. The decision depends on the state of the medium (idle or busy).
• Two features of random access protocols:
• There is no scheduled time for a station to transmit.
• No rules specify which station should send next.
As a result, stations compete with one another to access the medium.

ALOHA Protocols
ALOHA was designed for wireless LANs but can be used for any shared medium. It has two variants: pure ALOHA and
slotted ALOHA. There is a potential for collisions: the medium is shared between the stations, and when a
station sends data, another station may attempt to do so at the same time. The data from the two stations collide and
become garbled.

Pure ALOHA
When a station sends data, it waits for an acknowledgment. If the acknowledgment doesn't arrive within the
allotted time, the station waits for a random amount of time called the backoff time (TB) and re-sends the data.
Since different stations wait for different amounts of time, the probability of further collision decreases.

There are four stations (unrealistic assumption) that contend with one another for access to the shared channel.
The figure shows that each station sends two frames; there are a total of eight frames on the shared medium. Some
of these frames collide because multiple frames are in contention for the shared channel. Figure 12.2 shows that
only two frames survive: one frame from station 1 and one frame from station 3.
Pure ALOHA has a second method to prevent congesting the channel with retransmitted frames. After a
maximum number of retransmission attempts Kmax, a station must give up and try later. Figure 12.3 shows
the procedure for pure ALOHA based on the above strategy.

The time-out period is equal to the maximum possible round-trip propagation delay, which is twice the
amount of time required to send a frame between the two most widely separated stations (2 × Tp). The
backoff time TB is a random value that normally depends on K (the number of attempted unsuccessful
transmissions). The formula for TB depends on the implementation. One common formula is binary
exponential backoff. In this method, for each retransmission, a multiplier R = 0 to 2^K − 1 is randomly chosen
and multiplied by Tp (maximum propagation time) or Tfr (the average time required to send out a frame) to
find TB. Note that in this procedure, the range of the random numbers increases after each collision. The
value of Kmax is usually chosen as 15.
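The backoff computation can be sketched as follows (a sketch: the Tfr-based variant is shown, and the Kmax handling is simplified to capping the range rather than giving up):

```python
import random

def backoff_time(K, Tfr, Kmax=15):
    """Binary exponential backoff: after the Kth unsuccessful attempt,
    draw R from 0 .. 2^K - 1 and wait R x Tfr."""
    K = min(K, Kmax)                   # range stops growing at Kmax
    R = random.randint(0, 2 ** K - 1)
    return R * Tfr

assert backoff_time(0, 5.0) == 0.0         # 2^0 - 1 = 0, so R is always 0
assert 0 <= backoff_time(3, 1e-3) <= 7e-3  # R in 0..7 after 3 collisions
```

The doubling of the range after each collision is what spreads contending stations apart in time.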
Slotted ALOHA
Slotted ALOHA divides the time of the shared channel into discrete intervals called time slots. Any station can
transmit its data in any time slot. The only condition is that the station must start its transmission at the
beginning of the time slot. If the beginning of the slot is missed, the station has to wait until the beginning
of the next time slot. A collision may occur if two or more stations try to transmit data at the beginning of
the same time slot. Slotted ALOHA was invented to improve the efficiency of pure ALOHA.
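The slot-alignment rule can be sketched in one line (times are in arbitrary units; the helper name is ours):

```python
import math

def next_slot_start(t, slot_len):
    """A station that misses the beginning of a slot must wait until
    the start of the next time slot before transmitting."""
    return math.ceil(t / slot_len) * slot_len

assert next_slot_start(2.3, 1.0) == 3.0  # mid-slot arrival waits for slot 3
assert next_slot_start(4.0, 1.0) == 4.0  # exactly on a boundary: send now
```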
CSMA
To minimize the chance of collision and, therefore, increase the performance, the CSMA method was
developed. The chance of collision can be reduced if a station senses the medium before trying to use it.
Carrier sense multiple access (CSMA) requires that each station first listen to the medium (or check the state
of the medium) before sending. In other words, CSMA is based on the principle “sense before transmit”
or “listen before talk.” CSMA can reduce the possibility of collision, but it cannot eliminate it.

The possibility of collision still exists because of propagation delay; when a station sends a frame, it still
takes time (although very short) for the first bit to reach every station and for every station to sense it. In
other words, a station may sense the medium and find it idle, only because the first bit sent by another station
has not yet been received.

At time t1, station B senses the medium and finds it idle, so it sends a frame. At time t2 (t2 > t1), station C
senses the medium and finds it idle because, at this time, the first bits from station B have not reached station
C. Station C also sends a frame. The two signals collide and both frames are destroyed.

Persistence Methods
What should a station do if the channel is busy? What should a station do if the channel is idle? Three
methods have been devised to answer these questions:
 1-persistent method,
 nonpersistent method, and
 p-persistent method
1-Persistent: The node senses the channel; if idle, it sends the data. Otherwise, it continuously keeps
checking the medium and transmits unconditionally (with probability 1) as soon as the channel
becomes idle.
Non-Persistent: The node senses the channel; if idle, it sends the data. Otherwise, it checks the medium again
after a random amount of time (not continuously) and transmits when the channel is found idle.

• P-Persistent: The node senses the medium; if idle, it sends the data with probability p.
• If the data is not transmitted (probability 1 − p), the node waits for some time and checks the medium again;
if it is then found idle, it sends with probability p.
• This process repeats until the frame is sent. It is used in Wi-Fi and packet radio systems.
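The three persistence methods can be contrasted in a single decision step (a sketch with our own naming; it returns the station's next action rather than simulating a full channel):

```python
import random

def persistence_action(method, channel_idle, p=0.5):
    """One sensing step for the three persistence methods above."""
    if not channel_idle:
        # nonpersistent backs off for a random time; the others keep sensing
        return "wait random time" if method == "nonpersistent" else "sense again"
    if method == "p-persistent":
        # idle channel: transmit with probability p, else wait and re-check
        return "send" if random.random() < p else "wait and sense again"
    return "send"   # 1-persistent and nonpersistent transmit on idle

assert persistence_action("1-persistent", channel_idle=True) == "send"
assert persistence_action("nonpersistent", channel_idle=False) == "wait random time"
assert persistence_action("p-persistent", channel_idle=True, p=1.0) == "send"
```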

CSMA/CD
The CSMA method does not specify the procedure following a collision. Carrier sense multiple access with
collision detection (CSMA/CD) augments the algorithm to handle the collision. In this method, a station
monitors the medium after it sends a frame to see if the transmission was successful. If so, the station is
finished. If, however, there is a collision, the frame is sent again.

To better understand CSMA/CD, let us look at the first bits transmitted by the two stations involved in the
collision. Although each station continues to send bits in the frame until it detects the collision, we show
what happens as the first bits collide. In Figure 12.11, stations A and C are involved in the collision.

 At time t1, station A has executed its persistence procedure and starts sending the bits of its frame.
 At time t2, station C has not yet sensed the first bit sent by A.
 Station C executes its persistence procedure and starts sending the bits in its frame, which propagate
both to the left and to the right.
 The collision occurs sometime after time t2.
 Station C detects a collision at time t3 when it receives the first bit of A’s frame.
 Station C immediately aborts transmission.
 Station A detects a collision at time t4 when it receives the first bit of C’s frame; it also immediately
aborts transmission.
Looking at the figure, we see that A transmits for the duration t4 − t1; C transmits for the duration t3 − t2.
CSMA/CA
Carrier sense multiple access with collision avoidance (CSMA/CA) was invented for wireless networks.
Collisions are avoided through the use of CSMA/CA’s three strategies:
• Interframe space (IFS)
• Contention window
• Acknowledgments

Interframe space (IFS)


• When an idle channel is found, the station does not send immediately.
• It waits for a period of time called the interframe space or IFS.
Contention Window
• The contention window is an amount of time divided into slots.
• If a station determines that the channel is free, it chooses a random number of slots as its wait time;
the window doubles every time the medium is not found idle.
Acknowledgment:
• The sender re-transmits the data if an acknowledgment is not received before time-out.
 In carrier sense multiple access with collision avoidance, collision detection involves the sender
examining the signals it receives.
 If there is just one signal (its own), the data was sent successfully; if there are two signals (its own
and the one with which it collided), a collision has occurred.
 To distinguish between these two cases, a collision must have a significant impact on the received signal.
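The contention-window rule (random wait, doubling when the medium is busy) can be sketched as follows (cw_min and cw_max are assumed parameters, not values from the text):

```python
import random

def contention_slots(busy_count, cw_min=2, cw_max=32):
    """Choose a random number of wait slots from a window that doubles
    each time the medium was not found idle, up to an assumed cap."""
    cw = min(cw_min * (2 ** busy_count), cw_max)
    return random.randint(0, cw - 1)

assert 0 <= contention_slots(0) <= 1    # initial window of 2 slots
assert 0 <= contention_slots(3) <= 15   # window doubled three times
```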

CONTROLLED ACCESS
In controlled access, the stations consult one another to find which station has the right to send. A station
cannot send unless it has been authorized by other stations. We discuss three controlled-access methods.
Reservation
In the reservation method, a station needs to make a reservation before sending data. Time is divided into
intervals. In each interval, a reservation frame precedes the data frames sent in that interval. If there are N
stations in the system, there are exactly N reservation minislots in the reservation frame. Each minislot
belongs to a station. When a station needs to send a data frame, it makes a reservation in its own minislot.
The stations that have made reservations can send their data frames after the reservation frame.

Figure 12.18 shows a situation with five stations and a five-minislot reservation frame. In the first interval,
only stations 1, 3, and 4 have made reservations. In the second interval, only station 1 has made a reservation.
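The minislot structure can be sketched directly from Figure 12.18 (the helper name is ours):

```python
def reservation_frame(requests, n_stations=5):
    """Build the N-minislot reservation frame: station i sets its own
    minislot to 1 when it has a data frame to send."""
    return ['1' if i in requests else '0' for i in range(1, n_stations + 1)]

# First interval of Figure 12.18: stations 1, 3, and 4 reserve.
assert reservation_frame({1, 3, 4}) == ['1', '0', '1', '1', '0']
# Second interval: only station 1 reserves.
assert reservation_frame({1}) == ['1', '0', '0', '0', '0']
```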
Polling
Polling works with topologies in which one device is designated as a primary station and the other
devices are secondary stations. All data exchanges must be made through the primary device even
when the ultimate destination is a secondary device. The primary device controls the link; the
secondary devices follow its instructions. It is up to the primary device to determine which device is
allowed to use the channel at a given time. The primary device, therefore, is always the initiator of a
session (see Figure 12.19). This method uses poll and select functions to prevent collisions. However,
the drawback is that if the primary station fails, the system goes down.

Select
The select function is used whenever the primary device has something to send. Remember that the primary
controls the link. If the primary is neither sending nor receiving data, it knows the link is available. If it has
something to send, the primary device sends it. What it does not know, however, is whether the target device
is prepared to receive. So the primary must alert the secondary to the upcoming transmission and wait for
an acknowledgment of the secondary’s ready status. Before sending data, the primary creates and transmits
a select (SEL) frame, one field of which includes the address of the intended secondary.
Poll
The poll function is used by the primary device to solicit transmissions from the secondary devices. When
the primary is ready to receive data, it must ask (poll) each device in turn if it has anything to send. When
the first secondary is approached, it responds either with a NAK frame if it has nothing to send or with data
(in the form of a data frame) if it does. If the response is negative (a NAK frame), then the primary polls the
next secondary in the same manner until it finds one with data to send. When the response is positive (a data
frame), the primary reads the frame and returns an acknowledgment (ACK frame), verifying its receipt.
Token Passing
In the token-passing method, the stations in a network are organized in a logical ring. In other words, for
each station, there is a predecessor and a successor. The predecessor is the station which is logically before
the station in the ring; the successor is the station which is after the station in the ring. The current station
is the one that is accessing the channel now. The right to this access has been passed from the predecessor
to the current station. The right will be passed to the successor when the current station has no more data to
send.

In the physical ring topology, when a station sends the token to its successor, the token cannot be seen by other
stations; the successor is the next one in line. This means that the token does not have to have the address of the
next successor. The problem with this topology is that if one of the links—the medium between two adjacent
stations—fails, the whole system fails.

The dual ring topology uses a second (auxiliary) ring which operates in the reverse direction compared with the
main ring. The second ring is for emergencies only (such as a spare tire for a car). If one of the links in the main
ring fails, the system automatically combines the two rings to form a temporary ring. After the failed link is
restored, the auxiliary ring becomes idle again. Note that for this topology to work, each station needs to have
two transmitter ports and two receiver ports. The high-speed Token Ring networks called FDDI (Fiber
Distributed Data Interface) and CDDI (Copper Distributed Data Interface) use this topology.

In the bus ring topology, also called a token bus, the stations are connected to a single cable called a bus. They,
however, make a logical ring, because each station knows the address of its successor (and also predecessor for
token management purposes). When a station has finished sending its data, it releases the token and inserts the
address of its successor in the token. Only the station with the address matching the destination address of the
token gets the token to access the shared media. The Token Bus LAN, standardized by IEEE, uses this topology.
In a star ring topology, the physical topology is a star. There is a hub, however, that acts as the connector. The
wiring inside the hub makes the ring; the stations are connected to this ring through the two wire connections.
This topology makes the network less prone to failure because if a link goes down, it will be bypassed by the hub
and the rest of the stations can operate. Also adding and removing stations from the ring is easier. This topology
is still used in the Token Ring LAN designed by IBM.
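A toy sketch of the logical ring (token management, monitoring, and addressing are omitted; the naming is ours):

```python
def token_rotation(ring, has_data):
    """Pass the token from predecessor to successor around the logical
    ring; a station may transmit only while it holds the token."""
    order = []
    for station in ring:          # the token visits stations in ring order
        if has_data.get(station):
            order.append(station) # the current station sends its frames
        # the token is then released to the successor
    return order

ring = ["A", "B", "C", "D"]
assert token_rotation(ring, {"A": True, "C": True}) == ["A", "C"]
```

Because access is granted strictly in ring order, no two stations ever transmit at once, which is the collision-free property the text describes.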

CHANNELIZATION
Channelization is a multiple-access method in which the available bandwidth of a link is shared in time,
frequency, or through code, between different stations. In this section, we discuss three channelization
protocols.
FDMA: Frequency Division Multiple Access:
In frequency-division multiple access (FDMA), the available bandwidth is divided into frequency bands. Each
station is allocated a band to send its data. In other words, each band is reserved for a specific station, and
it belongs to the station all the time. Each station also uses a bandpass filter to confine its transmitter
frequencies. To prevent interference between stations, the allocated bands are separated from one another by
small guard bands.

FDMA specifies a predetermined frequency band for the entire period of communication. This means that stream
data (a continuous flow of data that may not be packetized) can easily be used with FDMA. We will see in Chapter
16 how this feature can be used in cellular telephone systems.

We need to emphasize that although FDMA and frequency-division multiplexing (FDM) conceptually seem
similar, there are differences between them. FDM, as we saw in Chapter 6, is a physical layer technique that
combines the loads from low bandwidth channels and transmits them by using a high-bandwidth channel. The
channels that are combined are low-pass. The multiplexer modulates the signals, combines them, and creates a
bandpass signal. The bandwidth of each channel is shifted by the multiplexer.

FDMA, on the other hand, is an access method in the data-link layer. The datalink layer in each station tells its
physical layer to make a bandpass signal from the data passed to it. The signal must be created in the allocated
band. There is no physical multiplexer at the physical layer. The signals created at each station are automatically
bandpass-filtered. They are mixed when they are sent to the common channel.

TDMA
In time-division multiple access (TDMA), the stations share the bandwidth of the channel in time. Each
station is allocated a time slot during which it can send data. Each station transmits its data in its assigned
time slot. Figure 12.22 shows the idea behind TDMA.

The main problem with TDMA lies in achieving synchronization between the different stations. Each station
needs to know the beginning of its slot and the location of its slot. This may be difficult because of propagation
delays introduced in the system if the stations are spread over a large area. To compensate for the delays, we can
insert guard times. Synchronization is normally accomplished by having some synchronization bits (normally
referred to as preamble bits) at the beginning of each slot.

CDMA
Code-division multiple access (CDMA) was conceived several decades ago. Recent advances in electronic
technology have finally made its implementation possible. CDMA differs from FDMA in that only one channel
occupies the entire bandwidth of the link. It differs from TDMA in that all stations can send data simultaneously;
there is no timesharing.
Let us assume we have four stations, 1, 2, 3, and 4, connected to the same channel. The data from station 1 are
d1, from station 2 are d2, and so on. The code assigned to the first station is c1, to the second is c2, and so on. We
assume that the assigned codes have two properties.
1. If we multiply each code by another, we get 0.
2. If we multiply each code by itself, we get 4 (the number of stations).
With these two properties in mind, let us see how the above four stations can send data using the same common
channel, as shown in Figure 12.23. Station 1 multiplies (a special kind of multiplication, as we will see) its data
by its code to get d1 ⋅ c1. Station 2 multiplies its data by its code to get d2 ⋅ c2, and so on. The data that go on the
channel are the sum of all these terms, as shown in the box. Any station that wants to receive data from one of
the other three multiplies the data on the channel by the code of the sender. For example, suppose stations 1 and
2 are talking to each other. Station 2 wants to hear what station 1 is saying. It multiplies the data on the channel
by c1, the code of station 1. Because (c1 ⋅ c1) is 4, but (c2 ⋅ c1), (c3 ⋅ c1), and (c4 ⋅ c1) are all 0s, station 2 divides
the result by 4 to get the data from station 1.

data ⋅ c1 = (d1 ⋅ c1 + d2 ⋅ c2 + d3 ⋅ c3 + d4 ⋅ c4) ⋅ c1

= d1 ⋅ c1 ⋅ c1 + d2 ⋅ c2 ⋅ c1 + d3 ⋅ c3 ⋅ c1 + d4 ⋅ c4 ⋅ c1 = 4 × d1
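The two code properties and the decoding step can be checked concretely with Walsh codes, a standard choice of orthogonal chip sequences satisfying ci ⋅ cj = 0 for i ≠ j and ci ⋅ ci = 4; the specific chip values below are our assumption, not given in the text:

```python
# Four Walsh codes c1..c4; dot products: ci . cj = 0 (i != j), ci . ci = 4.
c = [[+1, +1, +1, +1],
     [+1, -1, +1, -1],
     [+1, +1, -1, -1],
     [+1, -1, -1, +1]]

def cdma_channel(data):
    """What goes on the channel: d1.c1 + d2.c2 + d3.c3 + d4.c4."""
    return [sum(d * code[i] for d, code in zip(data, c)) for i in range(4)]

def cdma_decode(channel, station):
    """Multiply the channel by the sender's code, then divide by 4."""
    return sum(x * y for x, y in zip(channel, c[station])) // 4

data = [+1, -1, +1, +1]               # d1..d4 (+1 for bit 1, -1 for bit 0)
channel = cdma_channel(data)
assert cdma_decode(channel, 0) == +1  # station 1's bit recovered
assert cdma_decode(channel, 1) == -1  # station 2's bit recovered
```

All four stations transmit simultaneously on the full bandwidth, yet each receiver recovers exactly one sender's data, which is the point of the two code properties.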
