Module II

Data Communication
Data communication refers to the exchange of data between two or more networked or connected devices.
These devices must be capable of sending and receiving data over a communication medium. Data may be text,
image, audio and multimedia documents.
Data flow – movement of data through a system comprised of software, hardware or a combination of both.
Data Communication System Components
There are mainly five components of a data communication system:
1. Message
2. Sender
3. Receiver
4. Transmission Medium
5. Set of rules (Protocol)

Transmission mode
Transmission mode describes the direction in which data is transferred between two devices. It is also known as the communication mode. Buses and networks are designed to allow communication to occur between individual devices that are interconnected.
There are three types of transmission modes:
1. Simplex Mode
• communication is unidirectional
• Only one of the two devices on a link can transmit, the other can only receive
• The simplex mode can use the entire capacity of the channel to send data in one direction.
Example: Keyboard and traditional monitors. The keyboard can only introduce input, the
monitor can only give the output.

Advantages:
• easiest and most reliable mode of communication.
• most cost-effective mode, as it only requires one communication channel.
• Simplex mode is particularly useful in situations where feedback or response is not required, such as
broadcasting or surveillance.
Disadvantages:
• Only one-way communication is possible.
• There is no way to verify if the transmitted data has been received correctly.
• Simplex mode is not suitable for applications that require bidirectional communication.

2. Half-Duplex Mode
• In half-duplex mode, each station can both transmit and receive, but not at the same time.
• When one device is sending, the other can only receive, and vice versa.
• The half-duplex mode is used in cases where there is no need for communication in both
directions at the same time.
• The entire capacity of the channel can be utilized for each direction.
Example: Walkie-talkie in which message is sent one at a time and messages are sent in both
directions.

Advantages:
• allows for bidirectional communication, which is useful in situations where devices need to send and
receive data.
• It is a more efficient mode of communication than simplex mode
• It is less expensive than full-duplex mode
Disadvantages:
• it is less reliable than Full-Duplex mode, as both devices cannot transmit at the same time.

• There is a delay between transmission and reception, which can cause problems in some applications.
• There is a need for coordination between the transmitting and receiving devices, which can complicate
the communication process.

3. Full-Duplex Mode

• In full-duplex mode, both stations can transmit and receive simultaneously.


• In full-duplex mode, signals going in one direction share the capacity of the link with signals going in the other direction. This sharing can occur in two ways: either the link contains two physically separate transmission paths, one for sending and one for receiving, or the capacity of the channel is divided between signals traveling in both directions.

Advantages:
• allows for simultaneous bidirectional communication, which is ideal for real-time applications such as
video conferencing or online gaming.
• most efficient mode of communication, as both devices can transmit and receive data simultaneously.
• provides a high level of reliability and accuracy, as there is no need for error correction mechanisms.
Disadvantages:
• most expensive mode, as it requires two communication channels.
• more complex
• Full-duplex mode may not be suitable for all applications, as it requires a high level of bandwidth
Bandwidth
It is a measurement indicating the maximum capacity of a wired or wireless communication link to transmit data over a network connection in a given amount of time.
Bandwidth = maximum frequency - minimum frequency
Bit rate (R)
It describes the rate at which bits are transferred from one location to another. It is measured in bits/second (bps), kbps, or Mbps.
Bit rate = number of bits transmitted per second
Baud rate (r)
The total number of signal units (signal elements) transmitted in one second.
Baud rate = bit rate / number of bits per signal element
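
A small worked example of these relations, using illustrative numbers only:

# Illustrative numbers only: the relations between bandwidth, bit rate and baud rate.
bandwidth_hz = 3_000_000 - 1_000_000                  # bandwidth = maximum frequency - minimum frequency
bit_rate_bps = 8_000_000                              # 8 Mbps
bits_per_signal_element = 4                           # assume one signal element carries 4 bits
baud_rate = bit_rate_bps / bits_per_signal_element    # signal elements transmitted per second
print(bandwidth_hz, bit_rate_bps, baud_rate)          # 2000000  8000000  2000000.0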

Digital modulation and Multiplexing


Modulation
The process by which data or information is converted into an electrical or digital signal for transfer over a medium is known as modulation. It is the process of encoding information onto a transmitted signal.
Demodulation is the process of decoding the information from the transmitted signal.
Low-frequency signals, known as baseband signals, cannot be transmitted directly over long distances. High-frequency periodic signals used to carry these baseband signals are known as carrier signals.
Advantages of Modulation/ Need for modulation
• Reduction of antenna size
• No signal mixing
• Increased communication range
• Multiplexing of signals
• Reduced cost of wires
• Improved reception quality
Disadvantages

• Cost of equipment is higher


• Complicated
• Not efficient for large bandwidth
Types of Modulation

• Analog Modulation
• Digital Modulation


Analog Modulation

• Analog signal is transformed to analog carrier signal
• It can travel long distance
Digital Modulation

• digital signal is transformed to digital carrier signal


• two types of modulations: baseband modulation and Passband modulation
1. Baseband Modulation
• Baseband transmission needs larger antennas
• Cannot be transmitted over a radio link/ satellite
• It can be transmitted over a pair of cables (optical/ coaxial cables)
• In this modulation, a positive voltage represents 1 and a negative voltage represents 0.
In the case of optical fiber, presence of light = 1 and absence of light = 0. This scheme is known as the NRZ (Non-Return-to-Zero) scheme.

The two most common methods used in NRZ are:

NRZ-L: In NRZ-L encoding, the level of the signal depends on the value of the bit it represents: a 0 bit is sent as a positive voltage and a 1 bit as a negative voltage. In other words, the level of the signal is determined by the state of the bit.

NRZ-I: In NRZ-I encoding, the signal is inverted when a 1 bit is transmitted. A transition between the positive and negative voltage levels represents a 1 bit, while a 0 bit causes no change in the voltage level.
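
A minimal Python sketch of the two NRZ schemes, using +1 and -1 for the positive and negative voltage levels; the bit-to-level mapping follows the convention stated above and is otherwise an assumption:

def nrz_l(bits):
    # Level coding: 0 -> positive level (+1), 1 -> negative level (-1).
    return [+1 if b == 0 else -1 for b in bits]

def nrz_i(bits):
    # Transition coding: a 1 bit inverts the current level, a 0 bit keeps it.
    level, out = +1, []
    for b in bits:
        if b == 1:
            level = -level
        out.append(level)
    return out

bits = [0, 1, 0, 0, 1, 1, 1, 0]
print(nrz_l(bits))   # the level follows the bit value
print(nrz_i(bits))   # the level changes only on 1 bits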

Manchester Encoding

o It changes the signal at the middle of the bit interval but does not return to zero for synchronization.
o In Manchester encoding, a negative-to-positive transition represents binary 1, and positive-to-negative
transition represents 0.
o Manchester encoding provides the same level of synchronization as the RZ scheme but uses only two amplitude levels.

Differential Manchester Encoding

o It changes the signal at the middle of the bit interval for synchronization, but the presence or absence of a transition at the beginning of the interval determines the bit: a transition means binary 0 and no transition means binary 1.
o In the differential Manchester encoding scheme, two signal changes represent 0 and one signal change represents 1.
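
A minimal Python sketch of both schemes, emitting two half-bit levels (+1/-1) per data bit; the initial line level is an assumption:

def manchester(bits):
    # Binary 1: negative-to-positive transition; binary 0: positive-to-negative.
    return [(-1, +1) if b == 1 else (+1, -1) for b in bits]

def differential_manchester(bits):
    out, level = [], +1              # assumed starting level
    for b in bits:
        if b == 0:
            level = -level           # a transition at the beginning of the interval means 0
        out.append((level, -level))  # mandatory transition in the middle of every bit
        level = -level               # the second half is the level we leave the interval with
    return out

bits = [1, 0, 0, 1, 1]
print(manchester(bits))
print(differential_manchester(bits))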

2. Passband Transmission
• Modulation performed on the basic properties of signals (amplitude, frequency and phase).
• 3 types of Modulation techniques:
a) Amplitude Shift keying (ASK)

It is a type of Amplitude Modulation which represents the binary data in the form of variations in the
amplitude of a signal.

Any modulated signal has a high-frequency carrier. When the binary signal is ASK modulated, the output is zero for a Low (0) input and the carrier itself for a High (1) input.

The following figure represents ASK modulated waveform along with its input.
b) Frequency Shift Keying (FSK)

It is the digital modulation technique in which the frequency of the carrier signal varies according to the
digital signal changes. FSK is a scheme of frequency modulation.

The output of an FSK-modulated wave is high in frequency for a binary High input and low in frequency for a binary Low input. The frequencies used for binary 1 and 0 are called the mark and space frequencies respectively.

c) Phase Shift Keying (PSK)

It is the digital modulation technique in which the phase of the carrier signal is changed by varying the sine and cosine inputs at a particular time. The PSK technique is widely used for wireless LANs, biometric and contactless operations, and RFID and Bluetooth communications.

PSK is of two types, depending upon the phases the signal gets shifted. They are −
Binary Phase Shift Keying (BPSK)

This is also called 2-phase PSK or Phase Reversal Keying. In this technique, the sine wave carrier takes two phase reversals, 0° and 180°.
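
A minimal NumPy sketch of the three passband techniques; the carrier frequencies, sampling rate, bit duration and bit-to-phase mapping are illustrative assumptions:

import numpy as np

bit_time, fs = 1.0, 1000                    # seconds per bit, samples per second
t = np.arange(0, bit_time, 1 / fs)          # time axis for one bit interval

def ask(bits, fc=10):
    # Carrier present for a High (1) input, zero output for a Low (0) input.
    return np.concatenate([b * np.sin(2 * np.pi * fc * t) for b in bits])

def fsk(bits, f_mark=15, f_space=5):
    # Mark frequency for 1, space frequency for 0.
    return np.concatenate([np.sin(2 * np.pi * (f_mark if b else f_space) * t) for b in bits])

def bpsk(bits, fc=10):
    # Phase 0 degrees for 1, 180 degrees for 0.
    return np.concatenate([np.sin(2 * np.pi * fc * t + (0 if b else np.pi)) for b in bits])

bits = [1, 0, 1, 1, 0]
signals = {"ASK": ask(bits), "FSK": fsk(bits), "BPSK": bpsk(bits)}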

Multiplexing
• It is a method of combining more than one signal over a shared medium
• Divides capacity of communication channel into several logical channels
• Divides a given path logically into several paths and then uses each path to transmit data to individual
node
• The method of extracting the original data streams from the multiplexed signal is known as demultiplexing

Types of Multiplexing in Computer Networks


Multiplexing can be classified as:
• Frequency Division Multiplexing (FDM)
• Time-Division Multiplexing (TDM)
• Wavelength Division Multiplexing (WDM)
1. Frequency Division Multiplexing (FDM)

The frequency spectrum is divided among the logical channels and each user has exclusive access to his channel.

It sends signals in several distinct frequency ranges and carries multiple video channels on a single cable.
Each signal is modulated onto a different carrier frequency and carrier frequencies are separated by guard
bands.
Assignment of non-overlapping frequency ranges to each user or signal on a medium.
Thus, all signals are transmitted at the same time, each using different frequencies.
Advantages of FDM
• The process is simple and easy to modulate.
• A corresponding multiplexer or de-multiplexer is on the end of the high-speed line and separates the
multiplexed signals.
Disadvantages of FDM
• It cannot utilize the full capacity of the cable.
• It is important that the frequency bands do not overlap.
• There must be a considerable gap between the frequency bands in order to ensure that signals from
one band do not affect signals in another band.
2. Time Division Multiplexing (TDM)
Each user periodically gets the entire bandwidth for a small burst of time
Entire channel is dedicated to one user but only for a short period of time.
It is very extensively used in computer communication and telecommunication.
Sharing of the channel is accomplished by dividing available transmission time on a medium among users.
TDM splits cable usage into time slots.
The data rate of transmission media exceeds the data rate of signals.

Uses a frame and one slot for each slice of time and the time slots are transmitted whether the source has
data or not.

Time Division Multiplexing


There are two types of TDMs which are as follows:
1. Synchronous Time Division Multiplexing
2. Asynchronous Time Division Multiplexing
Synchronous Time Division Multiplexing
In Synchronous Time Division Multiplexing (STDM), the multiplexer assigns an equal time slot to every device at all times, whether or not a device has anything to send. Time slot A, for instance, is assigned to device A alone and cannot be used by any other device.

Each device is assigned a time slot whether or not it has data to send. When its slot comes up, the device has the opportunity to transmit a portion of its data. If a device cannot send or has no data to send, its time slot remains empty (null).

The time slots are consolidated into frames, and every frame includes one or more time slots committed to
each sending device. If there are n sending devices, the frame consists of n slots, where each slot will be
allocated to each of the sending devices. This happens if all the sending devices transmit at the same rate as
shown in the figure.

In the diagram given below, there are four inputs to multiplexer A. Each frame has four slots, one for each of the sending devices.

Interleaving

Synchronous Time Division Multiplexing (TDM) can be compared to a fast rotating switch. As the switch opens in front of a device, that device has the opportunity to send a unit of data onto the path. The switch moves from device to device at a constant rate and in a fixed order. This process is called interleaving.
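
A minimal Python sketch of this interleaving: one slot per device in every frame, with a null slot when a device has nothing to send (the frame layout is assumed for illustration):

def stdm_multiplex(inputs):
    """inputs: one list of characters per sending device; an empty slot is None."""
    n_frames = max(len(device) for device in inputs)
    frames = []
    for t in range(n_frames):
        # One slot per device in every frame, filled or left null.
        frames.append([device[t] if t < len(device) else None for device in inputs])
    return frames

def stdm_demultiplex(frames, n_devices):
    outputs = [[] for _ in range(n_devices)]
    for frame in frames:
        for idx, slot in enumerate(frame):
            if slot is not None:
                outputs[idx].append(slot)   # each slot is returned to its own device
    return outputs

devices = [list("AAA"), list("BB"), list("CCCC"), list("D")]
frames = stdm_multiplex(devices)
assert stdm_demultiplex(frames, 4) == devices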

At the receiver, the demultiplexer decomposes each frame by extracting each character in turn. As a character is removed from a frame, it is passed to the appropriate receiving device, as demonstrated in the figure below.

The disadvantages of Synchronous TDM are as follows −

• In synchronous time-division multiplexing, an equal time slot is given to each sender to load its data on
the channel.
• Different senders load different volumes of data, so many time slots remain empty and capacity is wasted.

Asynchronous Time Division Multiplexing: It is a type of multiplexing in which the sampling rate may differ for each input and no common clock is required. Asynchronous TDM generally requires a lower total bandwidth. When a device has nothing to transmit, this type of TDM gives its time slot to other devices. The multiplexer scans all input lines and accepts data until the frame is filled.

3. Wavelength Division Multiplexing(WDM)
It is the same as FDM but applied to optical fibers; the only difference is that the operating frequencies are much higher, in the optical range. Fibers offer great potential since their bandwidth is so huge. Light beams at different wavelengths are combined using a prism or diffraction grating, carried together on the long-distance link, and then split again at the destination. WDM offers high reliability and very high capacity.

It multiplexes multiple data streams onto a single fiber optic line. Different wavelength lasers transmit
multiple signals. Each signal carried on the fiber can be transmitted at a different rate from the other signals.
• Dense wavelength division multiplexing: It combines many (30, 40, 50, or more) channels onto one
fiber. DWDM channels have a very high capacity and it keeps on improving.
• Coarse wavelength division multiplexing: It combines only a few lambdas. In this, channels are more
widely spaced and are a cheaper version of DWDM.

Switching techniques
Switching is the process of transferring data packets from one device to another in a network, or from one
network to another, using specific devices called switches. Switching takes place at the Data Link layer of the
OSI Model.
Switching is the process of forwarding packets coming in on one port to a port leading towards the destination. When data arrives on a port it is called ingress, and when data leaves a port it is called egress. A communication system may include a number of switches and nodes. At a broad level, switching can be divided into two major categories:
• Connectionless: The data is forwarded on the basis of forwarding tables. No previous handshaking is required and acknowledgements are optional.
• Connection Oriented: Before data can be forwarded to the destination, a circuit must be pre-established along the path between both endpoints. Data is then forwarded on that circuit. After the transfer is completed, the circuit can be kept for future use or torn down immediately.

Circuit Switching
When two nodes communicate with each other over a dedicated communication path, it is called circuit switching. A pre-specified route is required, along which the data will travel, and no other data is permitted on it. In circuit switching, a circuit must be established before the data transfer can take place.
Circuits can be permanent or temporary. Applications which use circuit switching may have to go through three phases:
• Establish a circuit
• Transfer the data
• Disconnect the circuit

Circuit switching was designed for voice applications. The telephone is the most suitable example of circuit switching. Before a user can make a call, a virtual path between caller and callee is established over the network.

Message Switching
This technique falls somewhere in between circuit switching and packet switching. In message switching, the whole message is treated as a data unit and is switched / transferred in its entirety.

A switch working on message switching first receives the whole message and buffers it until resources are available to transfer it to the next hop. If the next hop does not have enough resources to accommodate a large message, the message is stored and the switch waits.

This technique was considered a substitute for circuit switching, since in circuit switching the whole path is blocked for two entities only. Message switching has since been replaced by packet switching.
Drawbacks:
• Every switch in the transit path needs enough storage to accommodate the entire message.
• Because of store-and-forward technique and waits included until resources are available,
message switching is very slow.
• Message switching was not a solution for streaming media and real-time applications.
Packet Switching
The shortcomings of message switching gave birth to the idea of packet switching. The entire message is broken down into smaller chunks called packets. The switching information is added in the header of each packet and transmitted independently.
It is easier for intermediate networking devices to store small packets, and they do not consume many resources either on the carrier path or in the internal memory of switches.

Packet switching enhances line efficiency as packets from multiple applications can be multiplexed over the
carrier. The internet uses packet switching technique. Packet switching enables the user to differentiate data
streams based on priorities. Packets are stored and forwarded according to their priority to provide quality of
service.

MOBILE SYSTEM OR CELLULAR NETWORK


A cellular network is the underlying technology for mobile phones, personal communication systems, wireless networking, etc. The technology was developed for mobile radio telephony to replace high-power transmitter/receiver systems. Cellular networks use lower power, shorter range and more transmitters for data transmission.
Features of Cellular Systems
Wireless cellular systems solve the problem of spectral congestion and increase user capacity. The features
of cellular systems are as follows −
• Offer very high capacity in a limited spectrum.
• Reuse of radio channel in different cells.
• Enable a fixed number of channels to serve an arbitrarily large number of users by reusing the
channel throughout the coverage region.
• Communication is always between mobile and base station (not directly between mobiles).
• Each cellular base station is allocated a group of radio channels within a small geographic area
called a cell.
• Neighboring cells are assigned different channel groups.

• By limiting the coverage area to within the boundary of the cell, the channel groups may be
reused to cover different cells.
• Keep interference levels within tolerable limits.
• Frequency reuse or frequency planning.
Organization of Wireless Cellular Network
A cellular network is organized into multiple low-power transmitters, each of 100 W or less.
Shape of Cells
The coverage area of a cellular network is divided into cells, each cell having its own antenna for transmitting the signals. Each cell has its own frequencies. Data communication in a cellular network is served by its base station transmitter, receiver and control unit.

DIFFERENT TYPES OF MOBILE SYSTEM


1. 1G
First-generation wireless technology was introduced in the early 1980s and provided basic voice (analog) capabilities. It was called "voice grade" because it could only support very low data rates, which made it unsuitable for anything other than voice communications.
Second-generation wireless technology arrived in the early 1990s; its later packet-switched data extension, known as GPRS (General Packet Radio Service), allowed for increased throughput and more advanced features.
1G worked by transmitting voice signals over radio waves. These waves could be easily blocked by obstacles like walls and buildings, which made them unsuitable for use in indoor environments.
Push-to-talk systems were used in 1G.
Drawbacks of 1G:
● Poor voice quality
● Poor battery life
● Large phone size
● No security
In order to overcome the limitations of first-generation technology, second-generation wireless technology was developed in the early 1990s.
2. 2G
Also known as “cellular” technology, second-generation wireless improved on the original by using digital
transmission instead of analog transmission. This allowed for increased data rates and made it possible to
send richer content like photos and videos.
Voice calls through 2G wireless networks were initially supported alongside analog signals, but analog transmission has since been phased out in favor of digital signals. This gave us smaller devices, a more secure connection, better call quality, and a higher capacity for connectivity.
Second-generation wireless also expanded on the concept of cell towers. By dividing a service area into small cells, each with its own tower, cellular networks were able to provide better coverage and support more users than a single high-power transmitter could.
GSM technologies (global system for mobile communications) were used in 2G.
3. 3G
Third-generation wireless was a major upgrade to the second generation, introducing high-speed packet access (HSPA) technology, HSPA+ and the universal mobile telecommunications system (UMTS).

This gave users faster download speeds than ever before, allowing for things like video calling and HD streaming. Voice calling on 3G networks is also much more reliable, thanks to HSPA's use of packet switching for reliable, high-speed data throughput.
HSPA also allowed for additional network capacity by incorporating multiple-input multiple-output (MIMO) technologies. This resulted in a major improvement in network capacity, allowing for more users on a single cell site.
4. 4G
Currently, the most widely used wireless technology is LTE (Long Term Evolution), an evolved version of the GSM/EDGE network that many people refer to as "fourth-generation" (4G).
This gave users faster download speeds than ever before, allowing for things like video calling and HD
streaming. LTE also offers much better performance in congested areas than earlier technologies.
Voice calling on 4G networks is generally routed through VoLTE (Voice over LTE), which carries voice traffic over the packet-switched data network on shared bandwidth, resulting in more reliable voice calls and a greater number of users per cell.
5. 5G
Currently, 5G networks are still in the limited rollout stage, but there have been a number of proposed network
technologies that could potentially be included in this new standard.

The goal is to achieve speeds up to 100x faster than current LTE networks, with latency times as low as one
millisecond.
In addition to these extremely fast connection speeds, it’s also been proposed that the new standard will
support a massive increase in the number of connected devices, up to one million per square kilometer.

DATALINK CONTROLS
Functions of datalink layer

• Framing
• Error control
• Flow control
• Error correction
• Error detection
1. Framing
In a point-to-point connection between two computers or devices, data is transmitted over a wire as a stream of bits. However, these bits must be framed into discernible blocks of information. Framing is a function of the data link layer.
Data transmission involves the synchronized transmission of bits from the source to the destination in the physical layer. This section focuses on point-to-point data transmission between two devices. The data link layer receives packets from the network layer, adds source and destination addresses to them, and converts them into frames. When a packet is large, it is divided into several smaller frames. These smaller frames make error control and flow control more efficient.

Framing in a computer network uses frames to send/receive the data. The data link layer packs bits into frames
such that each frame is distinguishable from another.
The data link layer prepares a packet for transport across local media by encapsulating it with a header and a
trailer to create a frame.

The frame is defined as the data in telecommunications that moves between various network points.
Usually, a frame moves bit-by-bit serially and consists of a trailer field and header field that frames the
information. These frames are understandable only by the data link layer.
The frame consists of the following parts:
1. Frame Header: It consists of the frame's source and destination address.
2. Payload Field: It contains the message to be delivered.
3. Flag: It points to the starting and the ending of the frame.
4. Trailer: It contains the error detection and correction bits.

Problems in Framing
• Detecting start of the frame: When a frame is transmitted, every station must be able to detect it.
Station detects frames by looking out for a special sequence of bits that marks the beginning of the
frame i.e. SFD (Starting Frame Delimiter).
• How does the station detect a frame: Every station listens to the link for the SFD pattern through a sequential circuit. If the SFD is detected, the sequential circuit alerts the station, which then checks the destination address to accept or reject the frame.
• Detecting end of frame: When to stop reading the frame.
• Handling errors: Framing errors may occur due to noise or other transmission errors, which can cause
a station to misinterpret the frame. Therefore, error detection and correction mechanisms, such as cyclic
redundancy check (CRC), are used to ensure the integrity of the frame.
• Framing overhead: Every frame has a header and a trailer that contains control information such as
source and destination address, error detection code, and other protocol-related information. This
overhead reduces the available bandwidth for data transmission, especially for small-sized frames.
• Framing incompatibility: Different networking devices and protocols may use different framing
methods, which can lead to framing incompatibility issues. For example, if a device using one framing
method sends data to a device using a different framing method, the receiving device may not be able
to correctly interpret the frame.

• Framing synchronization: Stations must be synchronized with each other to avoid collisions and ensure
reliable communication. Synchronization requires that all stations agree on the frame boundaries and
timing, which can be challenging in complex networks with many devices and varying traffic loads.
• Framing efficiency: Framing should be designed to minimize the amount of data overhead while
maximizing the available bandwidth for data transmission. Inefficient framing methods can lead to lower
network performance and higher latency.
Types of framing
There are two types of framing:
1. Fixed-size: The frame is of fixed size and there is no need to provide boundaries to the frame, the length of
the frame itself acts as a delimiter.
• Drawback: It suffers from internal fragmentation if the data size is less than the frame size
• Solution: Padding
2. Variable size: In this, there is a need to define the end of the frame as well as the beginning of the next frame
to distinguish. This can be done in two ways:
1. Length field – We can introduce a length field in the frame to indicate the length of the frame. Used
in Ethernet(802.3). The problem with this is that sometimes the length field might get corrupted.
2. End Delimiter (ED) – We can introduce an ED (pattern) to indicate the end of the frame. Used in Token Ring. The problem with this is that the ED can occur in the data. This can be solved by byte stuffing or bit stuffing, described below.

Methods for framing

1) Character count
2) Flag bytes with byte stuffing
3) Starting and ending flags, with bit stuffing
4) Physical layer coding violations

1. Character Count / Byte count


This method is rarely used. A field in the frame header carries the total number of characters in the frame. The character count tells the data link layer at the receiver how many characters follow and therefore where the frame ends.

The disadvantage of this method is that if the character count is disturbed or distorted by an error occurring during transmission, the destination may lose synchronization and be unable to locate the beginning of the next frame.

2. Character stuffing / Byte stuffing

It is used to convert a byte sequence that may contain reserved values into another byte sequence that does not contain reserved values. It is also known as character-oriented framing. Here, a special escape byte (ESC) is stuffed before any flag or ESC byte that appears in the data.
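
A minimal Python sketch of byte stuffing; the FLAG and ESC byte values are chosen here only for illustration:

FLAG, ESC = 0x7E, 0x7D

def byte_stuff(payload: bytes) -> bytes:
    out = bytearray([FLAG])                 # opening flag
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)                 # stuff an escape byte before reserved values
        out.append(b)
    out.append(FLAG)                        # closing flag
    return bytes(out)

def byte_unstuff(frame: bytes) -> bytes:
    body, out, i = frame[1:-1], bytearray(), 0
    while i < len(body):
        if body[i] == ESC:
            i += 1                          # drop the escape, keep the following byte as data
        out.append(body[i])
        i += 1
    return bytes(out)

msg = bytes([0x01, FLAG, 0x02, ESC, 0x03])
assert byte_unstuff(byte_stuff(msg)) == msg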
Bit stuffing: It is used to insert one or more non-information bits into a message to be transmitted, breaking up the message sequence for synchronization. It is also known as bit-oriented framing. Here, a 0 bit is stuffed after five consecutive 1 bits, i.e., an extra bit is added after every run of five consecutive ones.

3. Bit stuffing/ flag bits
Bit stuffing is also known as bit-oriented framing or bit-oriented approach.
Example:
Let ED = 01111 and suppose the data also contains 01111.
–> The sender stuffs a bit to break the pattern, i.e., inserts a 0 so the data becomes 011101.
–> The receiver receives the frame.
–> If the data contains 011101, the receiver removes the stuffed 0 and reads the data.
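
A minimal Python sketch of the five-consecutive-ones rule from the previous section (rather than the toy ED pattern above):

def bit_stuff(bits):
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)                   # stuffed bit after five consecutive 1s
            run = 0
    return out

def bit_unstuff(bits):
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == 1 else 0
        if run == 5:
            i += 1                          # skip the stuffed 0 that follows five 1s
            run = 0
        i += 1
    return out

data = [0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0]
assert bit_unstuff(bit_stuff(data)) == data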

4. Physical layer encoding violation

This framing method is used only in those networks in which encoding on the physical medium contain some
redundancy.

Some LANs encode each bit of data as two physical bits, as Manchester coding does.

Here, Bit 1 is encoded into a high-low (10) pair and Bit 0 is encoded into a low-high (01) pair.

The scheme means that every data bit has a transition in the middle, making it easy for the receiver to locate
the bit boundaries.

The combinations high-high and low-low are not used for data but are used for delimiting frames in some
protocols.

Error control

In a connection-oriented service, errors can be found easily and the chance of errors is lower than in a connectionless service.
In connectionless services, most services are unacknowledged. Due to noise, a frame may be lost. So, for error control, a special control frame is used.
If the control frame is a positive acknowledgment, the message reached safely.
If it is a negative acknowledgment, an error occurred and the frame is retransmitted.

Timers can also be used in the data link layer: if a frame or its acknowledgment is lost, the timer expires and the frame is retransmitted.

Flow control
Flow control is a design issue at the data link layer. It is a technique that observes the proper flow of data from sender to receiver. It is essential because the sender may transmit data faster than the receiver can receive and process it. This can happen if the receiver has a heavy traffic load or less processing power than the sender. Flow control is a technique that allows two stations working and processing at different speeds to communicate with one another. Flow control in the data link layer restricts and coordinates the number of frames or the amount of data the sender can send before it waits for an acknowledgement from the receiver. Flow control is a set of procedures that tells the sender how much data or how many frames it can transmit before the data overwhelms the receiver. The receiving device has only a limited speed and a limited amount of memory to store data. This is why the receiving device should be able to tell the sender to stop the transmission temporarily before the limit is reached. It also needs a buffer, a large block of memory, for storing data or frames until they are processed.
Flow control can also be understood as a speed-matching mechanism for two stations.

Approaches to Flow Control : Flow Control is classified into two categories:


• Feedback-based Flow Control: In this technique, the sender transmits data or frames to the receiver, and the receiver then sends information back to the sender, allowing it to send more data or telling it how the receiver is doing. This means that the sender transmits further data or frames only after it has received acknowledgements from the receiver.
• Rate-based Flow Control: In this technique, when a sender transmits data faster than the receiver can receive it, a built-in mechanism in the protocol limits the overall rate at which data is transferred by the sender, without any feedback or acknowledgement from the receiver.
Techniques of Flow Control in Data Link Layer: There are basically two techniques for controlling the flow of data: Stop-and-Wait and Sliding Window.

Error detection & correction
Error – it is a condition when the receiver’s information does not match the sender’s information. During
transmission, digital signals suffer from noise that can introduce errors in the binary bits traveling from sender
to receiver. That means a 0 bit may change to 1 or a 1 bit may change to 0.
Types of Errors
1. Single-Bit Error
A single-bit error refers to a type of data transmission error that occurs when one bit (i.e., a single binary
digit) of a transmitted data unit is altered during transmission, resulting in an incorrect or corrupted data
unit.

2. Multiple-Bit Error
A multiple-bit error is an error type that arises when more than one bit in a data transmission is affected.
Although multiple-bit errors are relatively rare when compared to single-bit errors, they can still occur,
particularly in high-noise or high-interference digital environments.

3. Burst Error
When several consecutive bits are flipped mistakenly in digital transmission, it creates a burst error. This
error causes a sequence of consecutive incorrect values.

To detect errors, a common technique is to introduce redundancy bits that provide additional information.
Error detection methods
1. Simple Parity Check
2. Two-dimensional Parity Check
3. Checksum
4. Cyclic Redundancy Check (CRC)

1. Simple Parity Check


Simple parity check is an error detection method that involves adding an extra (parity) bit to a data transmission.
It works as:
• 1 is added to the block if it contains an odd number of 1’s, and
• 0 is added if it contains an even number of 1’s
This scheme makes the total number of 1’s even, that is why it is called even parity checking.
Disadvantages
• A single parity check is not able to detect an even number of bit errors.
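
A minimal Python sketch of even parity, including the failure case noted above (an even number of bit errors is not detected):

def add_even_parity(bits):
    parity = sum(bits) % 2            # 1 if the data contains an odd number of 1s
    return bits + [parity]

def check_even_parity(codeword):
    return sum(codeword) % 2 == 0     # True when no error (or an even number of errors) is seen

word = [1, 0, 1, 1, 0, 0, 0]          # odd number of 1s, so a parity bit of 1 is appended
sent = add_even_parity(word)
assert check_even_parity(sent)
sent[2] ^= 1                          # a single-bit error is detected...
assert not check_even_parity(sent)
sent[3] ^= 1                          # ...but a second error cancels it out and goes undetected
assert check_even_parity(sent)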
Two-dimensional Parity Check
In a two-dimensional parity check, parity bits are calculated for each row, which is equivalent to a simple parity check. Parity check bits are also calculated for all columns, and both are sent along with the data. At the receiving end, these are compared with the parity bits calculated on the received data.

2. Checksum
Checksum error detection is a method used to identify errors in transmitted data. The process involves dividing the data into equally sized segments and using 1's complement arithmetic to calculate the sum of these segments. The calculated sum is then sent along with the data to the receiver. At the receiver's end, the same process is repeated, and if the complemented result is all zeros, the data is accepted as correct.
Checksum – Operation at Sender’s Side
• Firstly, the data is divided into k segments each of m bits.

• On the sender’s end, the segments are added using 1’s complement arithmetic to get the sum. The
sum is complemented to get the checksum.
• The checksum segment is sent along with the data segments.
Checksum – Operation at Receiver’s Side
• At the receiver’s end, all received segments are added using 1’s complement arithmetic to get the
sum. The sum is complemented.
• If the result is zero, the received data is accepted; otherwise discarded.

Disadvantages
• If one or more bits of a segment are damaged and the corresponding bit or bits of opposite value in a second segment are also damaged, the errors cancel each other out and go undetected.
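
A minimal Python sketch of the sender and receiver sides, assuming 8-bit segments and 1's-complement (end-around carry) addition:

def ones_complement_sum(segments, bits=8):
    mask = (1 << bits) - 1
    total = 0
    for seg in segments:
        total += seg
        while total > mask:                       # wrap the carry back into the sum
            total = (total & mask) + (total >> bits)
    return total

def make_checksum(segments, bits=8):
    return ~ones_complement_sum(segments, bits) & ((1 << bits) - 1)

# Sender: compute the checksum and send it along with the data segments.
data = [0b10011001, 0b11100010, 0b00100100, 0b10000100]
checksum = make_checksum(data)

# Receiver: add everything (including the checksum) and complement; zero means "accept".
assert make_checksum(data + [checksum]) == 0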

3. Cyclic Redundancy Check (CRC)


• CRC is based on binary division.
• In CRC, a sequence of redundant bits, called cyclic redundancy check bits, are appended to the end of
the data unit so that the resulting data unit becomes exactly divisible by a second, predetermined
binary number.
• At the destination, the incoming data unit is divided by the same number. If at this step there is no
remainder, the data unit is assumed to be correct and is therefore accepted.

• A remainder indicates that the data unit has been damaged in transit and therefore must be rejected.
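
A minimal Python sketch of CRC generation by modulo-2 (XOR) division; the generator used here is an illustrative choice, not one prescribed by the text:

def crc_remainder(data_bits, generator):
    k = len(generator)
    padded = list(data_bits + "0" * (k - 1))       # append k-1 zero bits before dividing
    for i in range(len(data_bits)):
        if padded[i] == "1":
            for j in range(k):                      # modulo-2 (XOR) subtraction of the generator
                padded[i + j] = "0" if padded[i + j] == generator[j] else "1"
    return "".join(padded[-(k - 1):])               # the remainder becomes the CRC bits

data = "1101011011"
crc = crc_remainder(data, "10011")                  # generator polynomial x^4 + x + 1
codeword = data + crc                               # data unit actually transmitted
# Receiver: dividing the received unit by the same generator must leave no remainder.
assert set(crc_remainder(codeword, "10011")) == {"0"}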

Advantages:
Increased Data Reliability: Error detection ensures that the data transmitted over the network is reliable,
accurate, and free from errors. This ensures that the recipient receives the same data that was transmitted by
the sender.
Improved Network Performance: Error detection mechanisms can help to identify and isolate network issues
that are causing errors. This can help to improve the overall performance of the network and reduce
downtime.
Enhanced Data Security: Error detection can also help to ensure that the data transmitted over the network is
secure and has not been tampered with.
Disadvantages:
Overhead: Error detection requires additional resources and processing power, which can lead to increased
overhead on the network. This can result in slower network performance and increased latency.
False Positives: Error detection mechanisms can sometimes generate false positives, which can result in
unnecessary retransmission of data. This can further increase the overhead on the network.

Limited Error Correction: Error detection can only identify errors but cannot correct them. This means that
the recipient must rely on the sender to retransmit the data, which can lead to further delays and increased
network overhead.
4. Hamming code
Hamming code is an error-correcting code used for detecting and correcting errors in data transmission. It
adds redundant bits to the data being transmitted which can be used to detect and correct errors that may
occur during transmission. Developed by R. W. Hamming, it is widely used in applications where reliable data
transmission is critical, such as computer networks and telecommunication systems.
There are two types of parity bits:
1. Even parity bit: In the case of even parity, for a given set of bits, the number of 1’s are counted. If that
count is odd, the parity bit value is set to 1, making the total count of occurrences of 1’s an even number.
If the total number of 1’s in a given set of bits is already even, the parity bit’s value is 0.
2. Odd Parity bit – In the case of odd parity, for a given set of bits, the number of 1’s are counted. If that
count is even, the parity bit value is set to 1, making the total count of occurrences of 1’s an odd number.
If the total number of 1’s in a given set of bits is already odd, the parity bit’s value is 0.
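
A minimal Python sketch of a Hamming(7,4) code with even parity bits at positions 1, 2 and 4 (the textbook layout); the data word is illustrative:

def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                      # even parity over positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4                      # even parity over positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4                      # even parity over positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]    # codeword positions 1..7

def hamming74_correct(code):
    p1, p2, d1, p3, d2, d3, d4 = code
    s1 = p1 ^ d1 ^ d2 ^ d4                 # re-check positions 1, 3, 5, 7
    s2 = p2 ^ d1 ^ d3 ^ d4                 # re-check positions 2, 3, 6, 7
    s3 = p3 ^ d2 ^ d3 ^ d4                 # re-check positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3        # position of the faulty bit (0 means no error)
    if syndrome:
        code = code[:]
        code[syndrome - 1] ^= 1            # flip the faulty bit back
    return code

codeword = hamming74_encode([1, 0, 1, 1])
codeword[4] ^= 1                           # simulate a single-bit error in position 5
assert hamming74_correct(codeword) == hamming74_encode([1, 0, 1, 1])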

5. Stop-and-wait Protocol
Propagation delay: the amount of time taken by a packet to make the physical journey from one router to another.
Round-trip time (RTT): the amount of time taken by a packet to reach the receiver plus the time taken by the acknowledgment to reach the sender.
Timeout (To) = 2 * RTT
Time-to-live (TTL): the amount of time or number of hops (nodes) that a packet is allowed to exist inside a network before being discarded by a router.

Stop-and-wait protocol works under the assumption that the communication channel is noiseless and
transmissions are error-free.
Working :
• The sender sends data to the receiver.
• The sender stops and waits for the acknowledgment.
• The receiver receives the data and processes it.
• The receiver sends an acknowledgment for the above data to the sender.
• The sender sends data to the receiver after receiving the acknowledgment of previously sent data.
• The process is unidirectional and continues until the sender sends the End of Transmission (EoT)
frame.

Features
• It is used in Connection-oriented communication.
• It offers error and flow control.
• It can be used in the data link and transport layers.
• Stop and Wait ARQ executes Sliding Window Protocol with Window Size 1.

Problems with Stop and Wait Protocol

1. Lost Data

One of the primary issues associated with the Stop and Wait protocol is the potential for lost data during
transmission. This problem can occur when a sender transmits a data packet, but it never reaches its intended
receiver due to network congestion, interference, or other factors affecting transmission quality.

The sender will transmit each individual packet and wait for confirmation that it has been received before
proceeding to send additional packets. If every fourth packet does not reach its destination due to network
problems, only three-quarters of those transmissions will succeed – leaving gaps in the information received
by the recipient.

2. Lost Acknowledgment

One of the critical problems with the Stop and Wait protocol is lost acknowledgment. When a sender sends
data packets, it expects to receive an acknowledgment from the receiver to indicate successful receipt of each
packet before sending another one.

However, if the acknowledgement gets lost due to network congestion or other errors, there will be no
indication that transmission was unsuccessful. As a result, the sender may repeatedly transmit the same data
packets leading to unnecessary delays and wastage of network bandwidth.

Automatic Repeat Request (ARQ) methods such as Go-Back-N ARQ and Selective Repeat ARQ have been
developed as solutions for dealing with this problem by allowing for retransmission of lost acknowledgments
ensuring that all sent data are properly acknowledged by the receiver without unnecessary delay or clogging
up of network bandwidth.

3. Delayed Data or Acknowledgement

The Stop and Wait protocol involves waiting for an acknowledgement from the receiver before transmitting the
next packet. One of the major problems with this method is delayed data or acknowledgement.

Delayed data or acknowledgement can pose a serious problem in communication networks since it may cause
both parties to wait unnecessarily long periods. For instance, if there is a high latency between the sender
and receiver, it may take too long for an acknowledgment to arrive back at the sender's end.

To address this particular issue with Stop and Wait protocol, other protocols that use Automatic Repeat
Request (ARQ) techniques such as Go-Back-N ARQ and Selective Repeat ARQ have been developed.

6. Stop and Wait for ARQ (Automatic Repeat Request)

The above three problems are resolved by Stop and Wait ARQ (Automatic Repeat Request), which performs both error control and flow control.

1. Timeout: the sender starts a timer when it transmits a frame; if no acknowledgment is received before the timer expires, the frame is retransmitted.

2. Sequence number (data): sequence numbers are added to data frames so that the receiver can recognize and discard duplicate frames caused by retransmission.

3. Delayed acknowledgement:
This is resolved by introducing sequence numbers for acknowledgements also.

Working of stop & wait ARQ protocol
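
A minimal Python sketch of the sender-side behaviour, assuming 1-bit alternating sequence numbers and a randomly lossy channel in which a lost frame or a lost ACK both look to the sender like a timeout:

import random

def send_stop_and_wait(frames, loss_probability=0.3, max_tries=20):
    seq, log = 0, []
    for payload in frames:
        for attempt in range(max_tries):
            log.append(f"send frame seq={seq} data={payload!r} (attempt {attempt + 1})")
            if random.random() < loss_probability:          # frame or ACK lost
                log.append(f"timeout, retransmit frame {seq}")
                continue
            log.append(f"received ACK {1 - seq}")           # ACK carries the next expected sequence number
            break
        seq = 1 - seq                                        # alternate 0/1 sequence numbers
    return log

for line in send_stop_and_wait([b"frame-0", b"frame-1", b"frame-2"]):
    print(line)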

Advantages of Stop and Wait ARQ :

• Simple Implementation: Stop and Wait ARQ is a simple protocol that is easy to implement in both
hardware and software. It does not require complex algorithms or hardware components, making it an
inexpensive and efficient option.
• Error Detection: Stop and Wait ARQ detects errors in the transmitted data by using checksums or cyclic
redundancy checks (CRC). If an error is detected, the receiver sends a negative acknowledgment (NAK)
to the sender, indicating that the data needs to be retransmitted.
• Reliable: Stop and Wait ARQ ensures that the data is transmitted reliably and in order. The receiver
cannot move on to the next data packet until it receives the current one. This ensures that the data is
received in the correct order and eliminates the possibility of data corruption.
• Flow Control: Stop and Wait ARQ can be used for flow control, where the receiver can control the rate
at which the sender transmits data. This is useful in situations where the receiver has limited buffer space
or processing power.
• Backward Compatibility: Stop and Wait ARQ is compatible with many existing systems and protocols,
making it a popular choice for communication over unreliable channels.

Disadvantages of Stop and Wait ARQ :

• Low Efficiency: Stop and Wait ARQ has low efficiency as it requires the sender to wait for an
acknowledgment from the receiver before sending the next data packet. This results in a low data
transmission rate, especially for large data sets.
• High Latency: Stop and Wait ARQ introduces additional latency in the transmission of data, as the
sender must wait for an acknowledgment before sending the next packet. This can be a problem for real-
time applications such as video streaming or online gaming.
• Limited Bandwidth Utilization: Stop and Wait ARQ does not utilize the available bandwidth
efficiently, as the sender can transmit only one data packet at a time. This results in underutilization of
the channel, which can be a problem in situations where the available bandwidth is limited.
• Limited Error Recovery: Stop and Wait ARQ has limited error recovery capabilities. If a data packet
is lost or corrupted, the sender must retransmit the entire packet, which can be time-consuming and can
result in further delays.
• Vulnerable to Channel Noise: Stop and Wait ARQ is vulnerable to channel noise, which can cause
errors in the transmitted data. This can result in frequent retransmissions and can impact the overall
efficiency of the protocol.

7. Sliding Window Protocol

The sliding window protocol is a flow control protocol that allows both link nodes A and B to send and
receive data and acknowledgments simultaneously.

• Here, the sender can send multiple frames without having to wait for acknowledgments.
• If no new data frames are ready for transmission in a specified time, a separate acknowledgment frame
is generated to avoid time-out.
• Each outbound frame contains a sequence number ranging from 0 to 2^n - 1 (for an n-bit sequence number field). For the stop-and-wait sliding window protocol, n = 1.
• Sender Window

Sender Window is a set of sequence numbers maintained by the sender corresponding to the frame
sequence numbers of frames sent out but not yet acknowledged.

• The sender can transmit a maximum number of frames before receiving any acknowledgment without
blocking (Pipelining).
• All the frames in a sending window can be lost or damaged and hence must be saved in memory or
buffer till they are acknowledged.
• Receiving Window

A Receiving Window is a set of sequence numbers that is maintained by the receiver. It allows receiving and acknowledging multiple frames.

• The size of the receiving window is fixed at a specified initial size.


• Any frame received with a sequence number outside the receiving window is discarded.
• The sending and receiving window may not have the same size or any upper or lower limits.

Types of Sliding Window Protocols

The Sliding Window ARQ (Automatic Repeat reQuest) protocols are of two categories −

• Go – Back – N ARQ

Go – Back – N ARQ provides for sending multiple frames before receiving the acknowledgment for the first
frame. It uses the concept of sliding window, and so is also called sliding window protocol. The frames are
sequentially numbered and a finite number of frames are sent. If the acknowledgment of a frame is not
received within the time period, all frames starting from that frame are retransmitted.

• Selective Repeat ARQ

This protocol also provides for sending multiple frames before receiving the acknowledgment for the first
frame. However, here only the erroneous or lost frames are retransmitted, while the good frames are
received and buffered.
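
A minimal Python sketch of the Go-Back-N sender behaviour; losing the first copy of frame 2 is an assumption made purely to show the go-back retransmission:

def go_back_n(n_frames, window=3, lost_frame=2):
    copies = [0] * n_frames            # how many times each frame has been transmitted
    base = next_seq = 0                # oldest unacknowledged frame / next frame to send
    trace = []
    while base < n_frames:
        # Pipelining: keep sending until `window` frames are outstanding.
        while next_seq < min(base + window, n_frames):
            copies[next_seq] += 1
            trace.append(f"send {next_seq}")
            next_seq += 1
        # Cumulative ACKs: only in-order frames are accepted; the first copy of
        # `lost_frame` never reaches the receiver.
        old_base = base
        while base < next_seq and not (base == lost_frame and copies[base] == 1):
            base += 1
        if base == old_base:                       # the expected ACK never arrived
            trace.append(f"timeout, go back to {base}")
            next_seq = base                        # resend frame `base` and every frame after it
    return trace

print(go_back_n(6))
# ['send 0', 'send 1', 'send 2', 'send 3', 'send 4', 'timeout, go back to 2',
#  'send 2', 'send 3', 'send 4', 'send 5']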

Piggybacking

Piggybacking is the technique of delaying outgoing acknowledgment and attaching it to the next data packet.
When a data frame arrives, the receiver waits and does not send the control frame (acknowledgment) back
immediately. The receiver waits until its network layer moves to the next data packet.


Acknowledgment is associated with this outgoing data frame. Thus the acknowledgment travels along with
the next data frame.

Working of Piggybacking

With piggybacking, a single message (ACK + DATA) is sent over the wire in place of two separate messages.
Piggybacking improves the efficiency of the bidirectional protocols.

• If Host A has both acknowledgment and data, which it wants to send, then the data frame will be sent
with the ack field which contains the sequence number of the frame.
• If Host A has only an acknowledgment to send, it waits for some time; if a data frame becomes available during that time, the acknowledgment is piggybacked onto it, otherwise a separate ACK frame is sent.
• If Host A has only a data frame to send, it adds the last acknowledgment to it. Host A can also send a data frame with an ACK field that carries no new acknowledgment.

Advantages of Piggybacking

1. The major advantage of piggybacking is the better use of available channel bandwidth. This happens
because an acknowledgment frame needs not to be sent separately.
2. Usage cost reduction.
3. Improves latency of data transfer.
4. To avoid the delay and rebroadcast of frame transmission, piggybacking uses a very short-duration
timer.

Disadvantages of Piggybacking

1. The disadvantage of piggybacking is the additional complexity.


2. If the data link layer waits too long before transmitting the acknowledgment (blocks the ACK for some time), the sender may time out and rebroadcast the frame.

Pipelining

Pipelining is the method of sending multiple data units without waiting for an acknowledgment for the first
frame sent. Pipelining ensures better utilization of network resources and also increases the speed of delivery,
particularly in situations where a large number of data units make up a message to be sent.

Flow Diagram of Pipelined Data Transmission

The flow diagram depicts data transmission in a pipelined system versus that in a non-pipelined system. Here,
pipelining is incorporated in the data link layer, and four data link layer frames are sequentially transmitted.

Data Link Protocols that use Pipelining

Two data link layer protocols use the concept of pipelining −

• Go – Back – N
Go – Back – N protocol provides for pipelining of frames, i.e. sending multiple frames before receiving the acknowledgment for the first frame. The frames are sequentially numbered and a finite number of frames are sent depending upon the size of the sending window. If the acknowledgment of a frame is not received within the time period, all frames starting from that frame are retransmitted. The size of the receiving window is 1 in this case.
• Selective Repeat
This protocol also incorporates the concept of pipelining. Here, the receiver window is of size greater
than 1. In this protocol, only the erroneous or lost frames are retransmitted, while the good frames are
received and buffered. When the sender times out, the oldest unacknowledged frame is retransmitted.
If the retransmitted frame is received correctly, then the receiver delivers all the frames it has buffered
starting with the retransmitted frame.

