Module DDC
TABLE OF CONTENTS
I. UNIT 1 INTRODUCTION TO DIGITAL COMMUNICATION
Definition of Digital Communications
History of Digital Communications
Elements of Digital Communications
Summary
Exercises
II. UNIT 2 MODULATION METHODS
Introduction to Modulation
Analog Modulation
Pulse Code Modulation
Digital Modulation
Summary
Exercise
III. UNIT 3 INFORMATION THEORY
Introduction to Information Theory
Entropy
Divergence
Mutual Information
Summary
Exercise
IV. UNIT 4 INTRODUCTION TO DATA COMMUNICATION
Definition
History of Data Communication
Elements of Data Communication
Summary
Exercise
V. UNIT 5 DATA TRANSMISSION
Introduction to Data Transmission
Data Transmission Media and Technologies
Data Transmission Modes
Data Communication Standard
Summary
Exercise
VI. UNIT 6 COMMUNICATION PROTOCOL
Introduction to Communication Protocol
Network Topology
Network Architecture
OSI
UDP
TCP/IP
Summary
Exercise
VII. UNIT 7 ERROR DETECTION AND CORRECTION
Types of Error
Error Detection
Error Correction
Summary
Exercise
VIII. UNIT 8 INTRODUCTION TO COMPUTER NETWORK AND SECURITY
Computer Networks
Encryption and Decryption
Virus, Worms and Hacking
Network Security
Summary
Exercise
Preface
The Data and Digital Communications module is designed to provide a
comprehensive understanding of the principles and technologies that underpin
modern communication systems. In today's interconnected world, data
communication plays a crucial role in enabling the seamless transfer of
information across local and global networks. This module focuses on both the
theoretical foundations and practical applications of data and digital
communication systems, aiming to equip students with the skills and knowledge
required to design, analyze, and optimize these systems.
By the end of this module, students will have a strong foundation in both
theoretical concepts and practical skills in data and digital communications. They
will be prepared to face the evolving demands of the communications industry,
contribute to innovations in technology, and apply their learning to various fields,
including telecommunications, networking, signal processing, and beyond. This
module serves as a stepping stone for those aspiring to become proficient in
designing and managing modern communication systems that are reliable, secure,
and efficient.
The landscape of data communication has evolved dramatically over the years,
driven by advancements in digital technology, the proliferation of the internet,
and the growing demand for high-speed connectivity. As such, this module
explores a range of topics that are crucial for understanding how data is
transmitted, received, and processed in digital communication systems.
Data and Digital Communications
Objectives
4. Modulation Techniques:
o ASK (Amplitude Shift Keying): Varies the amplitude of the carrier signal to
represent binary data.
o FSK (Frequency Shift Keying): Uses different frequencies to represent
binary values.
o PSK (Phase Shift Keying): Changes the phase of the carrier wave to transmit
data.
History of Digital Communications
Morse Code (1837): One of the earliest forms of digital communication. Developed
by Samuel Morse, it used a binary system of dots and dashes to represent letters and
numbers, transmitting messages over telegraph lines.
Pulse Code Modulation (PCM) (1937): Invented by Alec Reeves, PCM became a
foundational technology in digital communication. It involves converting analog
signals (like voice) into digital form by sampling the signal at regular intervals and
encoding the amplitude into binary format. PCM is still widely used in digital
telephony and audio.
Integrated Circuits (ICs) (1958): Jack Kilby and Robert Noyce independently
developed the integrated circuit, which significantly reduced the size and cost of
electronic systems, enabling more complex digital communication systems.
Digital Telephony (1960s): The first digital telephone systems emerged, replacing
older analog systems. Digital signals could be transmitted more efficiently and with
less noise over long distances. The first practical digital carrier system, the T-carrier
(T1), was introduced by AT&T in 1962, using Pulse Code Modulation (PCM) and Time
Division Multiplexing (TDM).
Elements of Digital Communications
Digital communication systems consist of several critical elements that work together to
ensure the efficient and accurate transmission of digital data. These elements form a chain,
starting from the message source to the final recipient. Below are the key elements of a
digital communication system:
1. Information Source
The information source generates the data that needs to be transmitted. This could be
in the form of text, audio, video, or any other type of data. For example, a microphone
capturing a voice, a computer sending a file, or a camera recording a video.
2. Source Encoder
The source encoder converts the information from its original format (e.g., audio,
video, or text) into a digital signal, typically represented in binary format (0s and 1s).
This process is called encoding.
The purpose of source encoding is to represent the data in the most efficient way,
often using data compression techniques to reduce redundancy and minimize the
amount of data that needs to be transmitted.
3. Channel Encoder
The channel encoder adds extra bits to the encoded data to enable error detection and
correction. This is essential because, during transmission, the signal can encounter
noise or interference, which may lead to errors.
Techniques like parity checks, cyclic redundancy check (CRC), or forward error
correction (FEC) are often employed to improve the reliability of data transmission.
4. Modulator
The modulator converts the digital data into a form that can be transmitted over a
physical communication channel, such as a cable or wireless medium. This process is
called modulation.
In modulation, the binary data is transformed into an analog signal by varying the
signal’s properties, such as amplitude, frequency, or phase. Common modulation
techniques include Amplitude Shift Keying (ASK), Frequency Shift Keying
(FSK), and Phase Shift Keying (PSK).
Quadrature Amplitude Modulation (QAM) is often used in modern systems to
carry more data by combining both amplitude and phase changes.
5. Communication Channel
The communication channel is the medium through which the signal is transmitted
from the sender to the receiver. Channels can be wired (such as fiber-optic cables,
coaxial cables, or twisted pair wires) or wireless (radio waves, microwaves, infrared
signals).
Channels are subject to various forms of noise and interference, such as thermal noise,
signal fading, or electromagnetic interference, which can affect the quality of the
transmitted signal.
6. Demodulator
The demodulator at the receiver end converts the modulated signal back into its
original digital form. This is the reverse of modulation.
The demodulator retrieves the binary data (1s and 0s) from the analog signal by
interpreting the changes in the signal’s amplitude, frequency, or phase.
7. Channel Decoder
The channel decoder detects and corrects any errors that occurred during
transmission, using the redundant bits that were added during the channel encoding
process. Error correction techniques such as Hamming codes or Reed-Solomon
codes can be applied.
The decoder ensures that the original data is recovered as accurately as possible, even
if some errors occurred during transmission.
8. Source Decoder
The source decoder converts the error-free digital signal back into its original form,
such as audio, video, or text. If the data was compressed during source encoding, it is
decompressed at this stage.
This process is the reverse of source encoding, and it results in the reproduction of the
original message.
9. Destination
The destination is the final point in the communication system where the decoded
message is delivered. This could be a speaker (for audio), a display screen (for video),
or a computer (for data files).
The destination is where the information is consumed by the user or application.
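To make this chain concrete, here is a minimal Python sketch of the elements above, under toy assumptions: ASCII source encoding, one even-parity bit per byte as the channel code, and a random bit-flip channel standing in for the modulator, physical channel, and demodulator. The function names are illustrative, not from any standard library.

```python
# Toy end-to-end digital communication chain (sketch, not a real modem).
import random

def source_encode(text):
    # Source encoder: represent each character as 8 bits.
    return [int(b) for ch in text for b in format(ord(ch), "08b")]

def channel_encode(bits):
    # Channel encoder: append one even-parity bit per 8-bit block.
    out = []
    for i in range(0, len(bits), 8):
        block = bits[i:i + 8]
        out += block + [sum(block) % 2]
    return out

def noisy_channel(bits, p_flip=0.01):
    # Communication channel: each bit is flipped with probability p_flip.
    return [b ^ 1 if random.random() < p_flip else b for b in bits]

def channel_decode(bits):
    # Channel decoder: check parity and count blocks with detected errors.
    data, errors = [], 0
    for i in range(0, len(bits), 9):
        block, parity = bits[i:i + 8], bits[i + 8]
        if sum(block) % 2 != parity:
            errors += 1          # error detected (not correctable here)
        data += block
    return data, errors

def source_decode(bits):
    # Source decoder: turn 8-bit blocks back into characters.
    return "".join(chr(int("".join(map(str, bits[i:i + 8])), 2))
                   for i in range(0, len(bits), 8))

sent = "HELLO"
received, detected = channel_decode(noisy_channel(channel_encode(source_encode(sent))))
print(source_decode(received), "| blocks with detected errors:", detected)
```

A single parity bit can only detect an odd number of flipped bits per block; real systems use the stronger codes (CRC, Hamming, Reed-Solomon) named in this section.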
Additional Concepts:
Noise: External disturbances that can corrupt the signal during transmission. Noise is
a significant challenge in communication systems, and various techniques (like error
correction and modulation) help to mitigate its effects.
Bandwidth: The capacity of the communication channel to carry information. Higher
bandwidth allows more data to be transmitted in a given time.
Latency: The delay between the transmission and reception of a signal. Lower
latency is desirable in real-time applications like voice and video communication.
Bit Rate: The rate at which data is transmitted over the channel, usually measured in
bits per second (bps). Higher bit rates enable faster communication.
Summary
The evolution of telecommunications saw the early use of telegraph cables, followed
by the widespread adoption of the telephone in 1876. Subsequently, radio and movies became
popular forms of mass entertainment, with radio broadcasts starting in 1928 and Hollywood
dominating global cinema. Television marked a new era in mass entertainment in 1928, and
satellite technology, initially for military purposes, became integral to global communications.
The introduction of mobile phones saw Motorola making the first cell phone call in
1973, while the World Wide Web, pioneered by Tim Berners-Lee in 1989, transformed internet
communication. Google's launch in 1998 revolutionized internet search, and the debut of SMS
in 1992 laid the foundation for text messaging. Social media platforms emerged, connecting
people globally, and the release of the iPhone in 2007 combined phone and computer functions.
However, challenges like issues of free speech, user privacy, and data security arose
with the growth of social media. Despite the ubiquity of smartphones, not everyone has equal
access. The COVID-19 pandemic further accelerated the shift to online activities, prompting
schools, offices, and performances to move online. Frontline workers faced increased infection
risks, and aspects of daily life shut down. Additionally, limitations in internet access hindered
remote learning for many children and young people.
Digital communication involves several key elements, including a sender who initiates
the communication, a message conveyed in various forms, encoding to prepare the message
for transmission, and a channel through which it is sent. The transmission process moves the
encoded message through digital platforms, and decoding occurs on the receiver's end to
interpret the message. The receiver, who is the intended audience, provides feedback,
completing the communication loop. Noise, or interference, may affect clarity during
transmission. Protocols ensure consistent interpretation, and storage retains messages for future
reference. Interactive features engage users in two-way communication. Understanding these
elements is essential for effective digital communication, shaping how information is shared
and received in the digital landscape.
Exercises:
Multiple Choice: Choose the correct letter of the correct answer
1: What does digital communication encompass?
a.) Only the exchange of information through traditional means.
b.) The exchange of information using analog technologies
c.) The exchange of information, messages, and ideas through digital technologies and
platforms
2: How do Organizations engage in digital communication with stakeholders?
a. Exclusively through physical meetings.
b. By using smoke signals.
c. Utilizing a diverse array of digital communication channels, including websites,
mobile chat, and blogs.
3: What role do proficient digital marketing professionals play?
a. Limited role in organizational communication.
b. Solely responsible for physical advertising
c. Navigating the convergence of technology and messaging effectively in online
communication efforts.
4: According to Edward Powers, what has significantly changed in today’s communication
landscape?
communication.
6: When was the telephone patented?
a. 1858
b. 1876
c. 1907
d. 1928
10: Which company introduced the first cell phone call in 1973?
a. Apple
b. Motorola
c. AT&T
d. Samsung
Unit 2
Modulation Methods
Objectives
Introduction to Modulation
Digital modulation is the process of converting digital signals (binary data, i.e., 0s
and 1s) into an analog waveform suitable for transmission over various
communication channels like radio waves, fiber optics, or wired cables. The key
purpose of digital modulation is to efficiently transmit digital data over
communication media, ensuring reliability, reducing noise effects, and maximizing
bandwidth usage.
Analog Modulation
The key types of analog modulation are Amplitude Modulation (AM), Frequency
Modulation (FM), and Phase Modulation (PM). Each of these modulates different aspects
of the carrier wave (amplitude, frequency, or phase) in accordance with the information
signal.
Amplitude Modulation (AM)
The carrier frequency (fc) is the frequency of the unmodulated carrier wave. In AM,
the carrier frequency remains constant, and its amplitude is modulated according to the
information signal. The carrier frequency is chosen based on the application (e.g., AM radio,
TV broadcasting).
Example:
For AM radio broadcasting, carrier frequencies range from 530 kHz to 1710 kHz.
The modulating frequency (fm) is the frequency of the modulating signal, which
contains the information to be transmitted. It determines how quickly the amplitude of the
carrier is varied.
Example:
In AM broadcasting, the modulating (audio) frequencies typically extend up to about 5 kHz.
The modulation index (m) is the ratio of the peak amplitude of the modulating signal to
the peak amplitude of the carrier, m = Am / Ac. It indicates how strongly the carrier
amplitude is varied.
Example:
If the modulation index m=0.5, the carrier amplitude varies by 50% of its original
value.
4. Bandwidth (B)
In AM, the modulated signal occupies an upper sideband (USB) and a lower sideband
(LSB), so the bandwidth is twice the highest frequency of the modulating signal: B = 2fm.
Example:
If the highest modulating frequency is 5 kHz, the AM bandwidth is B = 2 × 5 kHz = 10 kHz.
5. Sidebands
In AM, the modulated signal consists of the carrier frequency and two sidebands: an
upper sideband at fc + fm and a lower sideband at fc − fm.
Example:
If the carrier frequency is 1 MHz and the modulating frequency is 5 kHz, the
sidebands are at 995 kHz (LSB) and 1.005 MHz (USB).
6. Power Distribution
In AM, the total transmitted power is distributed across the carrier and sidebands. The
carrier consumes most of the power, while the sidebands carry the actual information.
Example:
If Pc = 100 W and m = 0.5, the total transmitted power is
Pt = Pc(1 + m²/2) = 100 × (1 + 0.5²/2) = 112.5 W
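The AM quantities above can be checked numerically. The short Python sketch below uses the example values from this section (1 MHz carrier, 5 kHz modulating signal, m = 0.5, Pc = 100 W) to reproduce the sideband, bandwidth, and power figures.

```python
# AM sideband, bandwidth, and power check using this section's example values.
fc = 1_000_000      # carrier frequency fc, in Hz (1 MHz)
fm = 5_000          # modulating frequency fm, in Hz (5 kHz)
m = 0.5             # modulation index
Pc = 100.0          # carrier power, in watts

lsb = fc - fm                 # lower sideband at fc - fm
usb = fc + fm                 # upper sideband at fc + fm
bandwidth = 2 * fm            # AM bandwidth is twice the modulating frequency
Pt = Pc * (1 + m**2 / 2)      # total transmitted power

print(f"LSB = {lsb/1e3} kHz, USB = {usb/1e6} MHz")
print(f"Bandwidth = {bandwidth/1e3} kHz, total power = {Pt} W")
```

Running this prints 995.0 kHz, 1.005 MHz, 10 kHz, and 112.5 W, matching the worked examples above.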
Frequency Modulation (FM)
The carrier frequency (fc) is the frequency of the unmodulated carrier signal. In FM,
the carrier frequency remains constant in amplitude but varies in frequency according to the
input signal. The carrier frequency is selected based on the application and transmission
medium.
Example:
For FM radio broadcasting, the carrier frequency typically ranges between 88 MHz
and 108 MHz.
The modulating frequency (fm) is the frequency of the information (or modulating)
signal. This frequency determines how fast the carrier frequency is changing. In audio
transmission, this is the frequency range of the sound, which spans roughly 20 Hz to
20 kHz for human hearing.
Example:
In FM radio, the modulating signal (e.g., an audio signal) has frequencies that can
range from 30 Hz to 15 kHz.
Frequency deviation (Δf) refers to the amount by which the carrier frequency varies
from its unmodulated frequency in response to the modulating signal. It is determined by the
amplitude of the modulating signal. In FM, higher amplitudes of the input signal cause larger
deviations in the carrier frequency.
Maximum frequency deviation: The peak frequency change from the carrier
frequency.
Example:
In FM radio, the maximum allowable frequency deviation is ±75 kHz, meaning the
carrier frequency can shift by up to 75 kHz above or below its central frequency.
Formula:
Δf = kf Am
Where:
kf is the frequency sensitivity (a constant that defines how much the frequency changes
for a given modulating signal amplitude).
Am is the amplitude of the modulating signal.
5. Bandwidth (B)
The bandwidth (B) of an FM signal is the range of frequencies that the modulated
signal occupies. FM typically requires more bandwidth than amplitude modulation (AM)
because the frequency deviation increases with the amplitude of the modulating signal.
Carson's Rule:
A practical rule for estimating the bandwidth of an FM signal is Carson’s Rule, which is
given by: 𝐵 = 2(𝚫𝑓 + 𝑓𝑚 )
This formula accounts for both the frequency deviation and the highest modulating
frequency. It provides an estimate of the total bandwidth required for FM transmission.
Example:
In FM radio broadcasting with a frequency deviation of ±75 kHz and a maximum
modulating frequency of 15 kHz, the bandwidth is: B=2(75+15)=180 kHz
6. Power Distribution
In FM, the total power of the modulated signal remains constant, unlike AM where the
power depends on the amplitude variations. However, the power is distributed across the
carrier and sidebands.
Carrier Power: The power at the carrier frequency (fc) in FM decreases as the
modulation index increases because more power is transferred to the sidebands.
Sidebands: FM generates multiple sidebands, each at different frequencies (fc ±
n*fm), where n is an integer.
7. Signal-to-Noise Ratio (SNR)
The signal-to-noise ratio (SNR) in FM determines the quality of the received signal. FM
has an advantage over AM in terms of SNR because FM signals are less affected by noise
(noise typically affects amplitude, not frequency).
Capture Effect: In FM, the strongest signal tends to "capture" the receiver, reducing
interference from weaker signals. This enhances SNR in multi-signal environments.
Example:
In a noisy environment, an FM radio receiver can lock onto the strongest station,
reducing interference from others.
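As a quick check of Carson's Rule, the following Python snippet reproduces the broadcast-FM bandwidth computed above (Δf = 75 kHz, fm = 15 kHz).

```python
def carson_bandwidth(delta_f_hz, fm_hz):
    # Carson's Rule: B = 2 * (Δf + fm)
    return 2 * (delta_f_hz + fm_hz)

# FM broadcast example from the text: Δf = 75 kHz, fm = 15 kHz -> 180 kHz
print(carson_bandwidth(75e3, 15e3) / 1e3, "kHz")
```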
Phase Modulation (PM) is a form of modulation where the phase of the carrier
signal is varied in direct proportion to the instantaneous amplitude of the modulating
signal.
The carrier frequency (fc) is the frequency of the unmodulated carrier signal. In PM,
the carrier frequency remains constant in terms of amplitude and frequency but changes its
phase according to the modulating signal.
Example:
In PM-based communication systems, carrier frequencies can range from MHz to
GHz depending on the application (e.g., satellite communication, digital
broadcasting).
The modulating frequency (fm) refers to the frequency of the input signal that
contains the information to be transmitted. This signal causes the phase of the carrier to vary.
The modulating frequency determines how rapidly the phase changes occur.
Example:
If the modulating signal is an audio signal, the frequency range can be from 20 Hz to
15 kHz.
Phase deviation (Δθ) refers to the maximum change in the phase of the carrier signal
from its unmodulated state. It depends on the amplitude of the modulating signal. A larger
amplitude of the modulating signal causes a greater phase deviation in the carrier.
Formula:
𝛥𝜃 = 𝑘𝑝 𝐴𝑚
Where:
𝑘𝑝 is the phase sensitivity (a constant that defines how much the phase changes for a
given modulating signal amplitude).
𝐴𝑚 is the amplitude of the modulating signal.
Example:
If the amplitude of the modulating signal increases, the phase deviation will increase,
leading to larger phase shifts in the carrier.
The modulation index (β) in PM is the ratio of the maximum phase deviation (Δθ) to
the modulating frequency (fm). It determines the degree of modulation and affects the
bandwidth of the signal.
Formula:
β = Δθ / fm
A higher modulation index indicates that the phase of the carrier changes more
dramatically in response to the modulating signal.
Wideband PM (WPM) occurs when the modulation index is greater than 1, while
narrowband PM (NPM) occurs when the modulation index is less than 1.
Example:
If the maximum phase deviation is 90 degrees (or π/2 radians) and the modulating
frequency is 5 kHz, the modulation index is: β = (π/2) / 5000 ≈ 3.14 × 10⁻⁴
5. Bandwidth (B)
As with FM, the bandwidth of a PM signal can be estimated using Carson's Rule:
B = 2(Δf + fm).
Example:
If the maximum frequency deviation is 75 kHz and the highest modulating frequency
is 15 kHz, the bandwidth of the PM signal is approximately: B=2×(15+75)=180 kHz
PULSE CODE MODULATION
The Pulse Code Modulation process is done through the following steps:
Sampling
Quantization
Encoding
Block diagram of the Pulse Code Modulation process is as shown in the figure below.
Low-Pass Filter
A low-pass filter (LPF) is an electronic filter that allows signals with a frequency lower
than a specified cutoff frequency to pass through while attenuating (reducing) the amplitude
of frequencies higher than the cutoff. Low-pass filters are commonly used in a variety of
applications, such as signal processing, audio electronics, and communication systems.
Sampling
Sampling involves measuring the amplitude of an analog signal at discrete time
intervals. The rate at which these samples are taken is known as the sampling rate or
sampling frequency. The sampling rate (also known as sampling frequency) is a crucial
parameter in digital signal processing, especially when converting analog signals to digital
form. It defines the number of samples of an analog signal that are taken per second during
the process of digitization. The sampling rate is measured in Hertz (Hz), where 1 Hz
represents one sample per second. In most applications, higher sampling rates capture more
detail from the analog signal, resulting in better fidelity when the signal is reconstructed.
The Nyquist Theorem, also known as the Nyquist-Shannon Sampling Theorem, is
a fundamental principle in digital signal processing that dictates how frequently an analog
signal must be sampled to accurately convert it into a digital form without introducing errors
or distortion. It states:
"To avoid aliasing and fully reconstruct a continuous-time signal from its
samples, the signal must be sampled at a rate that is at least twice the highest frequency
component present in the signal."
This minimum sampling rate is known as the Nyquist rate, and half of the sampling
rate is referred to as the Nyquist frequency.
Sampling Rate (fs): The number of samples taken per second, measured in Hertz (Hz).
This is the frequency at which the analog signal is sampled.
Nyquist Rate: The minimum required sampling rate to avoid aliasing. It must be at least
twice the maximum frequency (fmax) of the signal:
𝑓𝑠 ≥ 2𝑓𝑚𝑎𝑥
where:
𝑓𝑠 is the sampling rate,
𝑓𝑚𝑎𝑥 is the highest frequency component of the signal.
If a signal with a frequency component of 1.5 kHz is sampled at 2 kHz (less than
twice the signal’s frequency), the signal will appear as a lower frequency after
reconstruction, causing aliasing. This distorts the signal and leads to inaccurate
representation.
Examples
Voice Signals:
The human voice typically contains frequencies up to 3.4 kHz. According to the
Nyquist theorem, to accurately sample and reconstruct the voice signal, the sampling
rate must be at least twice the highest frequency component:
𝑓𝑠 ≥ 2 × 3.4 kHz = 6.8 kHz
In practice, telephone systems use a sampling rate of 8 kHz, slightly higher than the
Nyquist rate, to ensure faithful voice transmission.
Audio CDs:
Audio CDs are designed to cover the human hearing range, which is approximately 20
Hz to 20 kHz. Using the Nyquist theorem, the sampling rate for CD audio must be at
least twice the maximum frequency:
𝑓𝑠 ≥ 2 × 20 kHz = 40 kHz
CDs typically use a sampling rate of 44.1 kHz to ensure high-quality audio
reproduction without aliasing.
2. Oversampling: In some applications, signals are sampled at rates higher than the
Nyquist rate, a process called oversampling. This improves the quality of the digital
representation by reducing noise and providing more data points for signal processing.
3. Digital-to-Analog Conversion (DAC): The Nyquist theorem is also essential in DAC
systems. After a signal is sampled, it can be reconstructed accurately as long as the
sampling rate adheres to the Nyquist criteria.
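The aliasing effect described above is easy to demonstrate numerically. In the Python sketch below, a 1.5 kHz tone sampled at 2 kHz (below its 3 kHz Nyquist rate) yields exactly the same samples as a 500 Hz tone, so the two are indistinguishable after reconstruction.

```python
import math

def sample(f_signal, fs, n):
    # Sample cos(2π f t) at rate fs, returning n samples.
    return [math.cos(2 * math.pi * f_signal * k / fs) for k in range(n)]

# A 1.5 kHz tone sampled at 2 kHz (below the 3 kHz Nyquist rate) produces
# the same samples as its 0.5 kHz alias (fs - f = 500 Hz):
undersampled = sample(1500, 2000, 8)
alias        = sample(500, 2000, 8)
print(all(abs(a - b) < 1e-9 for a, b in zip(undersampled, alias)))  # True
```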
Quantization
Types of Quantization
1. Uniform Quantization: In uniform quantization, the step size between adjacent
quantization levels is constant. This means that all values of the signal are quantized
with the same precision.
o Example: An 8-bit quantizer has 256 levels (from 0 to 255), and each sample
is mapped to the nearest of these levels.
2. Non-uniform Quantization: In non-uniform quantization, the step sizes are not
equal. This method is often used to allocate more precision to certain parts of the
signal range, such as in companding (μ-law or A-law) for audio signals.
o Example: In speech coding, non-uniform quantization is used to give more
precision to lower amplitude sounds (which are more perceptible to the human
ear).
Quantization Error
For a signal with a range 𝑉𝑚𝑎𝑥 − 𝑉𝑚𝑖𝑛 and L quantization levels, the quantization step
size Δ is given by:
Δ = (Vmax − Vmin) / L
The maximum possible quantization error (for uniform quantization) is:
Max Quantization Error = Δ / 2
If an audio signal with a range of 0-10 volts is quantized using an 8-bit quantizer (256 levels):
Δ = (10 − 0) / 256 ≈ 0.039 V
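Below is a minimal Python sketch of uniform quantization, assuming a mid-rise quantizer over the 0-10 V range from the example above; the helper function and values are illustrative.

```python
def quantize(value, v_min=0.0, v_max=10.0, bits=8):
    # Uniform (mid-rise) quantization with L = 2**bits levels of step Δ.
    L = 2 ** bits
    step = (v_max - v_min) / L                      # Δ = (Vmax - Vmin) / L
    index = min(int((value - v_min) / step), L - 1)
    reconstructed = v_min + (index + 0.5) * step    # level centre, error ≤ Δ/2
    return index, reconstructed

idx, approx = quantize(3.3)
print(idx, round(approx, 4))               # 84 3.3008
print("max error:", (10 - 0) / 2**8 / 2)   # Δ/2 ≈ 0.0195 V
```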
Encoding
Encoding is the process of converting data from one form to another, typically for the
purpose of communication, storage, or processing. In the context of digital communication
and signal processing, encoding refers to transforming a signal or data into a specific format
that is optimized for transmission, storage, or interpretation by another system. In digital
systems, encoding is crucial for ensuring that information is represented, transmitted, and
decoded efficiently and accurately.
Common Line Encoding Schemes:
Manchester Encoding
Combines clock and data into one signal by making transitions in the middle of each
bit period.
A high-to-low transition represents a binary 0, while a low-to-high transition
represents a binary 1.
Self-clocking, but requires more bandwidth.
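A small Python sketch of Manchester encoding using the convention stated above (high-to-low transition for 0, low-to-high for 1); each bit maps to two half-bit line levels, which is why the scheme needs extra bandwidth.

```python
def manchester_encode(bits):
    # Per the convention above: 0 -> high-to-low (1, 0); 1 -> low-to-high (0, 1).
    # Each bit becomes two half-bit signal levels, so bandwidth doubles.
    return [half for b in bits for half in ((0, 1) if b else (1, 0))]

def manchester_decode(levels):
    # A mid-bit low-to-high transition decodes to 1, high-to-low to 0.
    return [1 if levels[i] == 0 else 0 for i in range(0, len(levels), 2)]

data = [1, 0, 1, 1, 0]
line = manchester_encode(data)
print(line, "->", manchester_decode(line))
```

The guaranteed mid-bit transition is what makes the code self-clocking: the receiver can recover timing from the line signal itself.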
Regenerative Repeater
1. Signal Amplification:
o When digital signals travel over long distances, their amplitude gradually
decreases due to attenuation in the transmission medium (e.g., copper wires,
fiber optics). The regenerative repeater amplifies the signal to restore its
strength.
2. Noise Filtering:
o Over time, transmitted signals accumulate noise. A regenerative repeater is
capable of distinguishing the original signal from the noise, regenerating the
original digital signal and eliminating most of the noise.
3. Signal Reshaping
o In digital communications, signals may lose their shape (become distorted) due to
factors like dispersion or interference. The regenerative repeater rebuilds the
original clean digital signal, correcting the waveform and pulse transitions,
essentially "regenerating" the data in its original form.
4. Timing Recovery
o Regenerative repeaters also help in recovering the timing or clock signal from
the received data stream. They ensure that the receiver and transmitter stay
coordinated to appropriately interpret the data stream.
Decoder
A decoder is a critical component responsible for deciphering or interpreting encoded
signals received from a transmitter back into their original data format. Decoding is vital for
rebuilding information that was encoded to guarantee reliable and efficient data transmission
over communications channel.
When decoding encoded signals, a decoder is essential to understanding, analyzing, and
reassembling the original data. Decoders facilitate dependable communication by managing
mistakes, noise, and signal impairments, whether they are recovering data from a
communication link prone to errors or restoring compressed information to its original
format. Digital communication systems employ several decoding methods for these
tasks, including error-correction decoding and source (decompression) decoding.
Decoders play a key role in ensuring that communication systems are efficient, robust,
and capable of handling noise and interference.
Reconstruction Filter
In digital-to-analog conversion (DAC) systems, a reconstruction filter is a crucial element
that transforms a discrete-time digital signal into a continuous-time analog signal. Following
the DAC's conversion of a digital signal into a series of discrete pulses or samples, the
reconstruction filter smooths the signal to eliminate high-frequency elements and returns the
signal to its original continuous analog waveform.
DIGITAL MODULATION
Types of Digital Modulation Techniques:
Amplitude Shift Keying (ASK)
Amplitude Shift Keying (ASK) is a digital modulation technique where the
amplitude of a carrier wave is varied to represent binary data (0s and 1s). In ASK, the
carrier's amplitude is changed according to the binary information being transmitted, while its
frequency and phase remain constant. ASK is one of the simplest forms of digital modulation
and is widely used in low-bandwidth applications.
In essence, the signal's amplitude is turned "on" and "off" based on the bit stream
being transmitted, which is why ASK is sometimes referred to as On-Off Keying (OOK)
when dealing with binary signals.
Mathematical Representation:
s(t) = A cos(2πfc t)   if bit is 1
s(t) = 0               if bit is 0
Where:
A is the amplitude of the carrier wave.
fc is the frequency of the carrier wave.
t is time.
Figure below (a) shows a digital message signal using two voltage levels. One level
represents 1 and the other represents 0. The unmodulated carrier is illustrated in Figure (b).
Figure (c) and (d) are the modulated waveforms using two versions of ASK. Figure (c) uses
OOK, and (d) uses binary ASK, or BASK.
A digital signal may be used to indicate a sequence of zeros and ones, as demonstrated in
Figure A. On the graph, different time periods are shown as a succession of vertical lines with
dots at equal intervals. The sequence 0 1 1 1 0 0 0 1 0 1 is shown from left to right. Each 0
and 1 in the series corresponds to one of the time periods created by the dotted vertical lines.
For one time period, the voltage is initially at zero, climbs to one for three time periods, and
then falls to zero for three time periods. After that, it climbs to one for a single time
period, falls to zero for a single time period, and then rises to one for a single time period.
A sinusoidal waveform beginning at the origin is displayed in Figure B. It climbs
gradually to a rounded one-volt peak. Subsequently, the line descends smoothly to a rounded
trough situated exactly the same distance below the X axis as the peak is above it. The
waveform then continues to alternate between peaks and troughs of the same height and
depth, and many cycles are displayed. Three cycles of the sinusoid fall within each time
period marked by the dotted vertical lines, which reflects the frequency of the sinusoidal
waveform.
Figures A and B are combined to form Figure C. No waveform is displayed where
the message voltage is 0. A sinusoidal segment with an amplitude of one volt is displayed
where the voltage is one. A set of equally spaced dotted vertical lines again marks the
time periods, matching those shown in Figure A. After being at zero for one time period,
the signal shows a sinusoid for three time periods before reverting to zero for three more.
After that, the sinusoid appears for one time period, disappears for one time period, and
then reappears for one time period.
Figure D is comparable to Figure C, but where Figure C shows zero voltage, Figure D
shows a waveform with a smaller amplitude. A full-amplitude sinusoidal signal of one
volt is displayed where Figure A's voltage is one. Time periods are represented by a
series of equidistant dotted vertical lines, as in parts A and C. The sinusoidal signal first
appears with a lesser amplitude of around 0.5 volts for one time period. After that, it
appears for three time periods at its maximum amplitude before returning to the smaller
amplitude for three more time periods. After that, the sinusoid occurs for a single time
period at full amplitude, then for a single time period at the smaller amplitude, and
finally for a single time period at full amplitude once again.
Baud rate is a measure of the number of signal changes or symbols that occur per
second in a communication channel. It refers to the number of distinct symbol changes (also
called modulation changes) that the transmission system can handle each second. A symbol
can represent one or more bits, depending on the modulation scheme used. The relationship
between the number of available symbols, M, and the number of bits that can be represented
by a symbol, n, is: M = 2ⁿ
Example:
For a simple Binary ASK or BPSK system, each symbol represents 1 bit. So, if the
baud rate is 2400, the bit rate is also 2400 bits per second.
For a 16-QAM system, each symbol represents 4 bits (because M=16 and
log₂(16) = 4). If the baud rate is 2400, the bit rate will be:
Bit rate=2400×4=9600 bits per second
Here, you can transmit 9600 bits per second even though the symbol rate (baud rate)
is 2400 symbols per second.
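The symbol-to-bit relationship M = 2ⁿ can be expressed in a few lines of Python; the snippet below reproduces the binary-ASK and 16-QAM figures from the example.

```python
import math

def bit_rate(baud, M):
    # Each symbol carries n = log2(M) bits, so bit rate = baud rate × n.
    return baud * int(math.log2(M))

print(bit_rate(2400, 2))    # binary ASK / BPSK: 2400 bps
print(bit_rate(2400, 16))   # 16-QAM: 9600 bps
```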
Frequency Shift Keying (FSK)
Types of FSK:
1. Binary FSK (BFSK):
o The simplest form of FSK where two distinct frequencies are used to represent
binary 1 and 0.
o For example:
f₁ (higher frequency) represents binary 1.
f₂ (lower frequency) represents binary 0.
2. Multiple Frequency Shift Keying (MFSK):
o Instead of just two frequencies, MFSK uses multiple distinct frequencies to
represent more than one bit per symbol.
o For example, in 4-FSK, four different frequencies are used to represent
combinations of two bits (00, 01, 10, 11).
Mathematical Representation:
s(t) = A cos(2πf₁t)   if bit is 1
s(t) = A cos(2πf₂t)   if bit is 0
Where:
A is the amplitude of the carrier wave.
f₁ and f₂ are the frequencies representing binary 1 and binary 0, respectively.
In Frequency Shift Keying (FSK), the relationship between bit rate, baud rate, and
bandwidth is important for understanding the efficiency of data transmission.
Example:
o In 4-FSK, M = 4, so log₂ 4 = 2
o If the baud rate is 1200 symbols per second, the bit rate would be:
Bit rate=1200×2=2400 bits per second
Bandwidth of FSK:
The bandwidth of an FSK signal depends on several factors, including:
Frequency Deviation (Δf): The difference between the frequencies representing
binary 0 and 1.
Data Rate (R): The bit rate of the transmission.
The approximate bandwidth of an FSK signal can be calculated using Carson's Rule:
𝐵 ≈ 2 × (Δf + R)
Where:
𝐵 is the bandwidth.
Δf is the frequency deviation (half the difference between the two frequencies in
BFSK).
R is the bit rate.
Example:
In BFSK, if the frequency for a binary 0 is 1 kHz and for a binary 1 is 2 kHz, the
frequency deviation Δf is 500 Hz (half the difference between 1 kHz and 2 kHz).
For a bit rate R=1200 bits per second (bps):
B ≈ 2 × (500 + 1200) = 2 × 1700 = 3400 Hz
Thus, the required bandwidth for this FSK signal would be approximately 3.4 kHz.
In FSK systems, the trade-off between bit rate, baud rate, and bandwidth is important
for optimizing communication, especially in applications where bandwidth is a limited
resource.
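For illustration, here is a toy BFSK modulator in Python that switches between two carrier frequencies per bit; the sample rate and frequencies are arbitrary example values, and no phase-continuity correction is attempted.

```python
import math

def bfsk_modulate(bits, f1=2000.0, f0=1000.0, fs=8000, sps=8):
    # Toy BFSK: transmit f1 for bit 1 and f0 for bit 0, sps samples per bit.
    # Phase continuity between bits is not enforced in this sketch.
    samples = []
    for i, bit in enumerate(bits):
        f = f1 if bit else f0
        for k in range(sps):
            t = (i * sps + k) / fs          # absolute sample time in seconds
            samples.append(math.cos(2 * math.pi * f * t))
    return samples

waveform = bfsk_modulate([1, 0, 1, 1])
print(len(waveform))   # 4 bits × 8 samples per bit = 32 samples
```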
Phase Shift Keying (PSK)
Mathematical Representation:
The PSK signal can be expressed as:
𝑠(𝑡) = 𝐴 cos(2𝜋𝑓𝑐 𝑡 + ϕ𝑛 )
Where:
𝐴 is the amplitude.
𝑓𝑐 is the carrier frequency.
ϕ𝑛 is the phase shift corresponding to the symbol.
1. Binary Phase Shift Keying (BPSK):
o BPSK uses two phase shifts to represent binary 1 and 0, one bit per symbol. The
signal can be represented as:
s(t) = A cos(2πfc t + ϕ)
Where:
A is the amplitude.
fc is the carrier frequency.
ϕ is the phase shift (0 or π).
2. Quadrature Phase Shift Keying (QPSK):
o QPSK uses four phase shifts spaced 90 degrees (π/2 radians) apart to represent
two bits per symbol.
3. 8-Phase Shift Keying (8-PSK):
o 8-PSK extends QPSK by using eight different phase shifts to represent three
bits per symbol.
o The phase shifts are spaced by 45 degrees (π/4 radians).
o The signal can be represented as:
𝑠(𝑡) = 𝐴 cos(2𝜋𝑓𝑐 𝑡 + ϕ𝑖 ) Where ϕ𝑖 corresponds to one of the eight possible
phase shifts.
4. M-Phase Shift Keying (M-PSK):
o M-PSK generalizes the concept of PSK to M different phase shifts, where M
is any positive integer.
o Each phase shift represents a unique combination of bits. For example, 16-
PSK uses 16 different phase shifts to encode 4 bits per symbol.
Bit Rate
Bit Rate refers to the rate at which data bits are transmitted over a communication
channel. In PSK, it is directly related to the number of bits transmitted per second.
For BPSK (Binary Phase Shift Keying), each symbol represents 1 bit.
o Bit Rate (Rb ) = Symbol Rate = Baud Rate.
o Example: If you transmit 1,000 symbols per second, the bit rate is also 1,000
bits per second (bps).
For QPSK (Quadrature Phase Shift Keying), each symbol represents 2 bits.
o Bit Rate (Rb ) = 2 × Baud Rate.
o Example: If you transmit 1,000 symbols per second, the bit rate is 2,000 bps.
For 8-PSK, each symbol represents 3 bits.
o Bit Rate (Rb ) = 3 × Baud Rate.
o Example: If you transmit 1,000 symbols per second, the bit rate is 3,000 bps.
Baud Rate
Baud Rate is the rate at which symbols (distinct signal changes) are transmitted per
second. Each symbol in PSK carries a specific number of bits.
For BPSK, the baud rate is equal to the bit rate.
o Baud Rate = Bit Rate.
For QPSK, the baud rate is half of the bit rate.
o Baud Rate = Bit Rate / 2.
For 8-PSK, the baud rate is one-third of the bit rate.
o Baud Rate = Bit Rate / 3.
Bandwidth
Bandwidth refers to the range of frequencies that a signal occupies. In PSK, the required
bandwidth is closely related to the baud rate.
For BPSK, the bandwidth BW is approximately equal to the baud rate.
o BW≈Rs , where Rs is the symbol rate (baud rate).
For QPSK, the bandwidth is also approximately equal to the baud rate.
o BW≈Rs, where Rs is the symbol rate.
For 8-PSK, the bandwidth is generally the same as for QPSK.
o BW≈Rs
Example
Suppose you want to transmit data with a bit rate of 1,000 bps:
BPSK: Baud rate = 1,000 symbols per second; bandwidth ≈ 1,000 Hz.
QPSK: Baud rate = 500 symbols per second; bandwidth ≈ 500 Hz.
8-PSK: Baud rate ≈ 333 symbols per second; bandwidth ≈ 333 Hz.
These relationships show how different PSK schemes impact the transmission rate
and bandwidth requirements, with higher-order PSK schemes providing higher bit rates but
maintaining similar bandwidth efficiency as lower-order PSK schemes.
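These rate relationships can be summarized in a short Python sketch; the bandwidth figure uses the BW ≈ Rs approximation given above.

```python
import math

def psk_rates(bit_rate_bps, M):
    # For M-PSK, each symbol carries log2(M) bits.
    bits_per_symbol = math.log2(M)
    baud = bit_rate_bps / bits_per_symbol   # symbol (baud) rate
    bandwidth = baud                        # BW ≈ Rs approximation
    return baud, bandwidth

for M in (2, 4, 8):   # BPSK, QPSK, 8-PSK at 1,000 bps
    baud, bw = psk_rates(1000, M)
    print(f"{M}-PSK: baud ≈ {baud:.0f} symbols/s, BW ≈ {bw:.0f} Hz")
```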
Summary
Every analog modulation technique, including AM, FM, and PM, has certain benefits
and drawbacks. FM and PM offer superior noise tolerance at the cost of requiring more
bandwidth than AM, which is simpler but more prone to noise. Even as digital modulation
becomes increasingly prevalent in contemporary communication technologies, these analog
modulation techniques remain fundamental to communication systems, particularly in
broadcast and telecommunications.
Analog signals can be digitized using pulse code modulation (PCM), which involves
sampling, quantizing, and encoding the signals into a binary representation. It is extensively
utilized in audio, telecommunications, and broadcasting and serves as the foundation of many
digital communication systems. Although PCM demands a significant amount of bandwidth
and cautious quantization error management, it provides high quality and accurate
representation of analog signals.
Digital modulation techniques are selected according to the particular needs of the
communication system and offer a variety of trade-offs between noise resistance, bandwidth
utilization, and system complexity. In digital communication, considerations including
system complexity, noise resistance, power efficiency, and bandwidth availability influence
the modulation technique chosen. Because of their spectral efficiency, QAM and PSK are
typically used for high-speed communication systems, whereas FSK and MSK are preferred
where bandwidth is less constrained and robustness to noise matters more. Engineers can select the
optimal modulation technique for a particular communication scenario by weighing the trade-
offs between complexity, noise resistance, and bandwidth efficiency offered by each
technique.
Exercises:
1. A carrier wave of frequency f = 1 MHz with a peak voltage of 20 V is used to modulate a
signal of frequency 1 kHz with a peak voltage of 10 V. Find out the following (a) modulation
index, (b) Frequencies of the modulated wave, (c) Bandwidth.
Ans.
3. Speech signal is bandlimited to 3 kHz and sampled at the rate of 8 kHz. To achieve the
same quality of distortion PCM requires 8 bits/sample and DPCM requires 4 bits/sample.
Determine the bit rates required to transmit the PCM and DPCM encoded signals
Ans.
4. Determine (a) the peak frequency deviation, (b) minimum bandwidth, and (c) baud for a
binary FSK signal with a mark frequency of 49 kHz, a space frequency of 51 kHz, and an
input bit rate of 2 kbps.
Ans.
Unit 3
Information Theory
Objectives
• Define Information Theory and explain its significance in communication systems.
• Describe the concepts of entropy, divergence, and mutual information
• Understand how entropy quantifies information content and bounds how efficiently
information can be transferred.
• Analyze and reduce redundancy in data
• Evaluate how much information is lost when one probability distribution is used to
represent another.
• Understand and minimize the impact of inaccuracies when approximating or
transforming data.
Introduction to Information Theory
Information is the meaning or element that is sent over a communication system between
a sender and a recipient. It is the core entity being transmitted, processed, and interpreted.
Any sort of data that has to be transferred from one place to another, including text, audio,
and video, can be considered information. To facilitate fast and dependable transmission
across several communication channels, digital communication entails converting this
information into binary form, or bits. Information theory principles, like entropy,
divergence, and mutual information, define the limits of data transmission and guide the
design of efficient communication systems.
The subject of information theory encompasses the transmission, measurement, and storage
of information. It provides a mathematical structure for understanding the limitations on
transmitting, compressing, and communicating data in noisy environments. Information theory
is essential to various fields such as computer science, machine learning, data compression,
encryption, and telecommunications. In 1948, Claude Shannon established this field with his
influential paper "A Mathematical Theory of Communication."
The fundamental concept of information theory is that the level of "informational value" in
a communicated message is determined by the extent to which the message's content is
unexpected. When a highly probable event occurs, the message contains minimal information.
Conversely, when a highly improbable event occurs, the message is significantly more
informative. For example, knowing that a specific number will not be the winning number in a
lottery conveys very little information, as any chosen number is almost certain not to win.
However, knowing that a particular number will win a lottery holds high informational value
because it indicates the realization of a highly unlikely event
1. Data Compression:
o Shannon’s theory defines the limits of compressing data. The minimum average
number of bits required to encode the output of a source is given by the entropy.
ENTROPY
Entropy (Measure of Information), denoted as 𝐻(𝑋) is the fundamental measure of
uncertainty or unpredictability in a source of information. It quantifies the average amount of
information produced by a random source. In data compression, entropy helps define the
theoretical limit for how much a data source can be compressed. The more uncertain an event,
the more information it contains. For instance, a fair coin flip (with equal probability for heads
or tails) has higher entropy than a biased coin.
For a discrete source X whose outcomes x occur with probabilities P(x), the entropy is:
H(X) = − Σₓ P(x) log₂ P(x)
The negative sign ensures that the entropy is a positive quantity, as the probabilities
P(x) are always between 0 and 1. When using logarithm base 2, entropy is measured in
bits, which is common in digital communication. If the logarithm is base 10, entropy is
measured in hartleys. In natural logarithms (base e), entropy is measured in nats.
High Entropy: When the source generates messages with equal probabilities, the uncertainty
is high, leading to higher entropy. For example, flipping a fair coin has an entropy of 1 bit,
since there’s an equal chance of getting heads or tails, meaning each outcome provides new
information.
Low Entropy: When a source consistently generates the same message (i.e., predictable
data), the entropy is low. For example, if the coin always lands heads, the entropy is 0 bits,
since no new information is provided.
For instance, the encoding scheme: A = “0”, D = “1” and I = “01” is efficient but
extremely flawed, as the message "AD" and the message "I" are indistinguishable (both would
be transmitted as 01). In contrast, the encoding scheme
Letter Encoding
A 00001
D 00111
I 11111
The reconstruction of any stream of bits into the original message of the given scheme
is accurate, but it is highly inefficient, since far fewer bits would suffice to reliably transmit
heavily depends on the application, such as the use of error correcting codes for transmissions
over noisy channels, one plausible answer is "the accurate encoding scheme that minimizes the
expected length of a message." This theoretical minimum is determined by the entropy of the
message.
Properties of Entropy:
1. Non-negativity: Entropy is always non-negative, H(X) ≥ 0; it is zero only when one
outcome is certain.
2. Maximization: Entropy is maximized when all outcomes are equally likely. For a
source with n outcomes, the maximum entropy is log₂ n bits, which occurs when
each outcome has a probability of 1/n
3. Additivity: If two independent sources are combined, the entropy of the combined
source is the sum of the individual entropies:
𝐻(𝑋, 𝑌) = 𝐻(𝑋) + 𝐻(𝑌)
Examples:
1. Fair Coin Toss: If a coin is fair, the probabilities of heads and tails are both 0.5. The
entropy is:
H(X) = −[0.5 log₂ 0.5 + 0.5 log₂ 0.5] = 1 bit
2. Biased Coin Toss: If a coin is biased, say with probabilities 𝑝(ℎ𝑒𝑎𝑑𝑠) = 0.8 and
𝑝(𝑡𝑎𝑖𝑙𝑠) = 0.2, the entropy is:
H(X) = −[0.8 log₂ 0.8 + 0.2 log₂ 0.2] ≈ 0.722 bits
This reflects that there is less uncertainty (or surprise) since heads is much more likely
than tails.
3. Uniform Distribution: For a random variable with n equally probable outcomes,
the entropy is maximized:
H(X) = log₂ n
For example, in a 6-sided fair die:
H(X) = log₂ 6 ≈ 2.585 bits
This indicates that rolling a die produces more uncertainty and information than
flipping a coin.
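The coin and die examples above can be verified with a few lines of Python implementing the entropy formula directly.

```python
import math

def entropy_bits(probs):
    # H(X) = -sum p * log2(p), skipping zero-probability outcomes
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy_bits([0.5, 0.5]))      # fair coin   -> 1.0 bit
print(entropy_bits([0.8, 0.2]))      # biased coin -> ~0.722 bits
print(entropy_bits([1/6] * 6))       # fair die    -> ~2.585 bits
```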
Applications of Entropy:
1. Data Compression: Entropy provides a lower bound on the average number of bits
needed to encode the output of a source. Efficient compression algorithms, like
Huffman coding or arithmetic coding, approach the entropy limit.
4. Machine Learning: Entropy is used in decision trees to evaluate the information gain
when splitting datasets. Lower entropy after a split indicates a better separation of
classes.
DIVERGENCE
Divergence refers to a measure of how one probability distribution differs from a second,
reference probability distribution. One of the most commonly used divergence measures is the
Kullback-Leibler (KL) Divergence
DKL(P||Q) = Σₓ P(x) log [ P(x) / Q(x) ]
Where:
P(x) is the probability of event x under the true distribution P.
Q(x) is the probability of event x under the approximating distribution Q.
The sum is taken over all possible events x.
KL divergence is not symmetric, meaning 𝐷𝐾𝐿 (𝑃||𝑄) ≠ 𝐷𝐾𝐿 (𝑄||𝑃). This means the
"distance" from P to Q is generally different from the "distance" from Q to P. 𝐷𝐾𝐿 (𝑃||𝑄) ≥
0, and it is zero if and only if P = Q (the two distributions are identical). This property shows
that KL divergence measures the discrepancy or inefficiency of using Q in place of P. The
result of KL divergence is expressed in bits if the logarithm is base 2 (common in
information theory) or in nats if the natural logarithm is used.
Example:
Consider two discrete distributions:
P = (0.5,0.5) a fair coin with an equal probability of heads or tails.
Q = (0.8,0.2) a biased coin where heads is more likely than tails.
The KL divergence between P and Q is:
DKL(P||Q) = 0.5 log₂(0.5/0.8) + 0.5 log₂(0.5/0.2) ≈ 0.322 bits
This value tells us how inefficient it would be to use the biased distribution Q to approximate
the fair distribution P.
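A direct Python implementation of the KL divergence formula reproduces this example and also shows the asymmetry noted above.

```python
import math

def kl_divergence_bits(p, q):
    # D_KL(P || Q) = sum p(x) * log2(p(x) / q(x)); assumes q(x) > 0 where p(x) > 0
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

print(kl_divergence_bits([0.5, 0.5], [0.8, 0.2]))  # ~0.322 bits
print(kl_divergence_bits([0.8, 0.2], [0.5, 0.5]))  # ~0.278 bits (not symmetric)
```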
Applications of KL Divergence:
1. Machine Learning:
o KL divergence is used in model training to compare how closely a model's
predicted probability distribution (e.g., the output of a classifier) matches the
true distribution.
o It is a common loss function in variational inference and generative models
like Variational Autoencoders (VAEs).
2. Data Compression:
o In compression algorithms, KL divergence helps determine the efficiency of
different coding schemes. The lower the divergence, the closer the compressed
data is to the optimal encoding for the true data distribution.
3. Bayesian Statistics:
o KL divergence is used to measure the difference between the prior and posterior
distributions in Bayesian inference, representing how much the evidence has
changed the prior belief.
5. Information Retrieval:
o KL divergence is used in search engines and information retrieval systems to
measure how different a document's probability distribution is from the
distribution of a user's query, helping rank documents based on relevance.
MUTUAL INFORMATION
Mutual Information (MI) is a key concept in information theory that quantifies the
amount of information shared between two random variables. It measures how much
knowing the value of one variable reduces the uncertainty about the other. In other words, it
captures the degree of dependence between the variables.
[Figure: Venn diagram relating H(X), H(Y), the conditional entropies H(X|Y) and H(Y|X),
the mutual information I(X;Y), and the joint entropy H(X,Y).]
I(X;Y) = Σ_{x∈X} Σ_{y∈Y} p(x,y) log [ p(x,y) / (p(x) p(y)) ]
Where:
p(x, y) is the joint probability distribution of X and Y.
𝑝(𝑥) and 𝑝(𝑦) are the marginal probability distributions of X and Y, respectively.
The logarithm is typically base 2, meaning mutual information is measured in bits.
Key Concepts:
3. Interpretation:
o If X and Y are independent: The mutual information 𝐼(𝑋; 𝑌) = 0 because
knowing X provides no information about Y, and vice versa.
o If X and Y are completely dependent: The mutual information is
maximized, as knowing X fully determines Y, and vice versa.
o Mutual information is always non-negative.
Simplified Formula:
Mutual information can also be expressed in terms of entropy:
I(X;Y) = H(X) − H(X|Y)
Where:
𝐻(𝑋) is the entropy of X, representing the uncertainty about X.
H(X|Y) is the conditional entropy of X given Y, representing the remaining
uncertainty about X once Y is known.
This shows that mutual information is the reduction in the uncertainty of X (or Y) due to
knowing Y (or X).
1. Symmetry:
o Mutual information is symmetric, meaning 𝐼(𝑋; 𝑌) = 𝐼(𝑌; 𝑋). It doesn't
matter whether you measure how much X tells you about Y or vice versa.
2. Non-Negativity:
o 𝐼(𝑋; 𝑌) ≥ 0, and it equals zero only if X and Y are independent.
3. Relation to Entropy:
o Mutual information can also be viewed as the difference between the sum of the
marginal entropies and the joint entropy:
I(X;Y) = H(X) + H(Y) − H(X,Y)
This shows that mutual information measures the overlap or shared information
between X and Y
Example:
Consider a scenario where X represents the weather (sunny or rainy) and Y represents
whether a person carries an umbrella. If there's a strong correlation between the weather and
the use of an umbrella, the mutual information 𝐼(𝑋; 𝑌) will be high, because knowing the
weather provides a lot of information about whether the person will carry an umbrella.
If the person’s umbrella usage is completely random and independent of the weather,
then the mutual information 𝐼(𝑋; 𝑌) will be zero.
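To illustrate, the Python sketch below computes I(X;Y) from a joint probability table; the weather/umbrella joint distribution used here is hypothetical, chosen only to show dependence.

```python
import math

def mutual_information_bits(joint):
    # I(X;Y) = sum_{x,y} p(x,y) * log2( p(x,y) / (p(x) p(y)) )
    px = [sum(row) for row in joint]           # marginal of X (rows)
    py = [sum(col) for col in zip(*joint)]     # marginal of Y (columns)
    return sum(pxy * math.log2(pxy / (px[i] * py[j]))
               for i, row in enumerate(joint)
               for j, pxy in enumerate(row) if pxy > 0)

# Hypothetical weather/umbrella joint distribution: strongly dependent.
joint = [[0.4, 0.1],    # sunny: no umbrella, umbrella
         [0.1, 0.4]]    # rainy: no umbrella, umbrella
print(mutual_information_bits(joint))   # ~0.278 bits > 0: dependent variables
```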
4. Data Clustering:
o In clustering algorithms, mutual information can be used to measure the
similarity between clusters and the true data distribution.
Summary
Mutual Information quantifies how much information two random variables share.
It measures the reduction in uncertainty about one variable given knowledge of the
other.
I(X;Y) = Σ_{x∈X} Σ_{y∈Y} p(x,y) log [ p(x,y) / (p(x) p(y)) ]
Mutual Information is widely used in communication, machine learning, and feature
selection.
Exercises:
1. Suppose that women who live beyond the age of 80 outnumber men in the same age
group by three to one. How much information, in bits, is gained by learning that a
person who lives beyond 80 is male?
Ans:
2. Find the mutual information I(X;Y) between the two random variables X and Y
whose P = [1/4, 3/4] and Q = [[3/4, 1/4], [1/4, 3/4]].
Ans:
3. Consider a variant of the game Twenty Questions in which you have to guess which
one of seven horses won a race. The probability distribution over winning horses is as
follows:
horse              1     2     3     4     5     6     7
Prob. of winning   1/4   1/4   1/8   1/8   1/8   1/16  1/16
Ans:
Unit 4
Introduction to Data Communication
Objectives
Define data communication and its core concepts: Understand the fundamental
principles of transmitting digital information between devices.
Identify the historical evolution of data communication: Trace the significant
milestones in data communication technology, from early inventions to modern
networks.
Explain the key elements involved in data communication: Recognize the roles of
sender, receiver, channel, protocol, and MAC in successful data transmission.
Discuss the challenges and limitations of data communication: Explore issues like
security, reliability, congestion, and quality of service.
Definition
Data communication is the exchange of data between devices over a transmission
medium, such as a wired cable or a wireless link, using agreed-upon rules so that the
information arrives accurately and on time.
History of Data Communication
The history of data communication traces the evolution of how information has been
transmitted across distances, starting from simple, early methods to today’s advanced digital
networks. Here's an overview of the major milestones:
The advent of 4G (LTE) and later 5G networks in the 2010s and 2020s provided
even faster wireless communication with higher data rates, reduced latency, and
support for a massive number of devices.
Elements of Data Communication
1. Message:
The message is the information or data to be communicated. It can be in various
forms, including text, numbers, images, audio, or video, depending on the application
and the nature of the communication.
2. Sender:
The sender is the device or entity that generates and sends the data. It could be a
computer, mobile phone, sensor, or other devices capable of transmitting data to
another device.
The sender encodes the message into signals that can be transmitted over a
communication medium.
3. Receiver:
The receiver is the device or entity that receives and processes the transmitted
message. It decodes the signals back into usable data.
Examples include computers, smartphones, servers, or other end devices capable of
receiving data.
4. Transmission Medium:
The transmission medium is the physical path or channel through which the message
travels from the sender to the receiver. The medium can be:
o Wired (copper cables, fiber optics)
o Wireless (radio waves, microwaves, satellite links)
The choice of medium affects the speed, quality, and distance over which data can be
transmitted.
5. Encoder/Decoder:
Encoders and decoders convert the message into signals for transmission and back
into usable data upon reception.
6. Protocol:
A protocol is a set of rules or conventions that define how data should be transmitted
and received. Protocols ensure that devices understand each other and that data is
transmitted accurately and securely.
Examples of protocols include TCP/IP, HTTP, FTP, and SMTP.
7. Feedback:
In certain communication systems, feedback is sent from the receiver back to the
sender to confirm that the message has been received and understood correctly.
This element is particularly important in interactive or real-time communication
systems where the sender needs confirmation, such as error checking or
acknowledgment signals.
2. Efficiency:
Optimizing the use of resources like bandwidth and power to transmit data quickly
and cost-effectively, minimizing delays or bottlenecks.
3. Timeliness:
Guaranteeing that the data reaches its destination within an acceptable timeframe,
which is especially important for real-time applications like video streaming or online
gaming.
4. Security:
Protecting the data from unauthorized access or tampering during transmission,
ensuring confidentiality, integrity, and authentication.
5. Scalability:
Supporting the growth of networks and the ability to add new devices or increase the
volume of data without degrading performance.
6. Interoperability:
Ensuring that devices and systems from different manufacturers or platforms can
communicate effectively using standardized protocols.
7. Flexibility:
Providing the ability to adapt to various communication needs, technologies, and
media, such as wired, wireless, or fiber-optic communication.
8. Reliability:
Ensuring continuous and stable communication, with minimal disruptions, through
mechanisms like error correction, fault tolerance, and retransmission when needed.
Summary
Data Communication refers to the sharing or transfer of data (facts, figures, and so on)
between devices capable of such exchange over some communication medium. Whenever we
communicate, we share facts and ideas in a mutually agreed-upon language, at an agreed
speed, and with the greatest accuracy possible. The same holds for data communication: its
effectiveness is determined by correctness of delivery, accuracy of transfer, timeliness, and
low variation in packet arrival times (jitter).
Key Elements:
1. Message: The data or information being communicated.
2. Sender: The device that sends the data.
3. Receiver: The device that receives and processes the data.
4. Transmission Medium: The physical path (wired or wireless) through which the data
travels.
5. Protocol: A set of rules governing the transmission of data to ensure interoperability
between devices.
6. Encoder/Decoder: Devices or processes that convert data into signals for
transmission and back into usable data upon reception.
Data communication has evolved from early methods like telegraphy to today’s advanced
digital systems, playing a fundamental role in modern computing, networking, and
telecommunications. It enables global connectivity through the internet and networks,
supports real-time applications like video calls, and allows for the automation of industrial
and consumer devices (IoT). Data communication shapes our lives in profound ways, from
personal connections to global business. Understanding its principles empowers us to navigate the
technological landscape and leverage its potential for good.
Exercises:
1. The first operational computer network in the world was the _________ for the United
States Department of Defense
a) ARPANET
b) UNIVAC
c) ERNET
d) SKYNET
6. Frequency of failure and network recovery time after a failure are measures of the
_____________ of a network.
a) Performance
b) Reliability
c) Security
d) Interoperability
7. Ensuring that devices and systems from different manufacturers or platforms can
communicate effectively using standardized protocols.
a) Performance
b) Reliability
c) Security
d) Interoperability
8. All of the following are essential components that enable the efficient transfer of data
between devices, except:
a) Message
b) Noise
c) Sender
d) Receiver
10. What is the main goal of Quality of Service (QoS) in data communication?
a) Prioritize different types of data traffic
b) Reduce network congestion
c) Improve data security measures
d) Increase overall network speed
Unit 5
Data Transmission
Objectives
Explain the fundamentals of data transmission
Analyze different data transmission channels and discuss the limitations and
advantages of each channel type
Investigate data transmission protocols and methods and describe different data
transmission methods
Explore the future of data transmission
Introduction to Data Transmission
Data transmission refers to the process of transferring data between two or more devices
using some form of transmission medium (wired or wireless). It is a key aspect of data
communication, ensuring that information moves from a sender (source) to a receiver
(destination) in an accurate and timely manner.
1. Serial Transmission
In serial transmission, data is sent bit by bit over a single communication channel or
wire.
Each bit of data is transmitted sequentially, one after another.
Requires fewer wires or channels than parallel transmission.
Suitable for long-distance communication since synchronization is easier.
Types:
o Asynchronous Serial Transmission:
Data is transmitted one byte (or character) at a time with start and stop
bits.
No synchronization between the sender and receiver clocks.
Example: Communication between a computer and a peripheral device
(e.g., keyboard or mouse); see the framing sketch at the end of this section.
Applications:
o Used in USB (Universal Serial Bus), RS-232, and long-distance
communication technologies like DSL (Digital Subscriber Line).
Advantages:
o Efficient for long-distance communication.
o Requires fewer communication lines, reducing costs.
Disadvantages:
o Slower than parallel transmission for short distances.
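To make asynchronous serial framing concrete, here is a minimal Python sketch. It is an
illustration only, assuming one start bit, eight data bits sent LSB-first, one stop bit, and no
parity; the function names are invented for this example:

def frame_byte(byte):
    # Asynchronous serial framing: start bit (0), 8 data bits LSB-first, stop bit (1).
    data_bits = [(byte >> i) & 1 for i in range(8)]
    return [0] + data_bits + [1]

def unframe(bits):
    # Reject frames whose start/stop bits are wrong; rebuild the byte otherwise.
    assert bits[0] == 0 and bits[-1] == 1, "framing error"
    return sum(bit << i for i, bit in enumerate(bits[1:9]))

line_bits = frame_byte(ord('A'))   # an idle line sits at 1; the 0 start bit marks a new frame
assert unframe(line_bits) == ord('A')

The start bit is what lets the receiver resynchronize on every character, which is why the
sender and receiver clocks need not be locked together.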
2. Parallel Transmission
Definition: In parallel transmission, multiple bits are transmitted simultaneously over
multiple communication channels or wires.
Characteristics:
o Several bits (usually a byte or more) are transmitted at the same time, each
over a separate wire or channel.
o Requires more wires or channels than serial transmission.
Applications:
o Used in situations requiring fast data transmission over short distances, such as
within a computer system or between a computer and a printer.
o Examples: Parallel printer ports and legacy internal data buses in computers
(e.g., conventional PCI, parallel ATA).
Advantages:
o Faster than serial transmission for short distances since multiple bits are
transmitted simultaneously.
Disadvantages:
o Requires more hardware (wires, connectors) and is less efficient over long
distances due to synchronization issues and signal degradation.
In the realm of data communication, DTE (Data Terminal Equipment) and DCE (Data
Communication Equipment) represent two categories of devices that collaborate to enable
data transmission. These designations describe the functions that devices fulfill in a
communication arrangement, especially within serial communication systems. The DCE
performs the signal conversion, timing, and synchronization required for communication; it
can either receive data from the DTE for transmission or deliver received data to the DTE.
Understanding the difference between DTE and DCE is crucial for establishing effective
communication and setting up network or serial connections.
In a typical communication setup, a DTE device connects to a DCE device to transmit
data over a network or communication link. For example, when a computer (DTE) connects
to a modem (DCE), the modem takes the digital signals from the computer and converts them
into analog signals for transmission over a phone line. When receiving data, the modem
converts the analog signals back into digital form for the computer to process. In serial
communication (e.g., RS-232), different pin configurations are used for DTE and DCE
devices. This is to ensure proper communication between the two. When connecting two
DTE devices (e.g., two computers), a null modem cable or adapter is used to simulate the
DCE interface by swapping the transmission (TX) and reception (RX) pins so that both DTE
devices can communicate directly.
d) Ethernet
A family of networking technologies commonly
used in local area networks (LANs).
Features:
o Supports both asynchronous and
synchronous communication.
o Transmission speeds range from 10 Mbps
(classic Ethernet) through 1 Gbps (Gigabit
Ethernet) to 100 Gbps and beyond.
o Allows multiple devices to communicate
over a shared medium.
Applications: Networking, internet
communication, and connecting devices within
LANs.
b) Bluetooth
A wireless technology for short-range communication
between devices.
Features:
o Speeds up to 3 Mbps (Bluetooth 2.0 + EDR) or up to
2 Mbps (Bluetooth 5.0 Low Energy).
o Typical range of 10 meters, extendable to
100 meters with high-power devices.
Applications: Wireless headphones, keyboards,
mice, and IoT devices.
d) Zigbee
A low-power, wireless communication protocol designed for IoT and sensor networks.
Features:
o Low data rates (up to 250 kbps).
o Suitable for short-range, low-power communication.
4. Specialized Interfaces
These interfaces are used in specific applications, such as audiovisual systems or data
storage.
Data Transmission Media and Technologies
Wired (Guided) Media
a) Twisted Pair Cable:
Pairs of insulated copper wires twisted together to reduce interference.
Categories:
o Unshielded Twisted Pair (UTP): Most widely used, cost-effective but more
susceptible to interference.
o Shielded Twisted Pair (STP): Includes a shielding layer to protect against
external interference.
b) Coaxial Cable:
Made up of a central copper conductor, an insulating layer, and a metallic shield that
protects against interference.
Used for cable TV, broadband internet, and some LANs.
c) Fiber Optic Cable:
Transmits data as pulses of light through thin strands of glass or plastic.
Types:
o Single-mode fiber (SMF): Supports long-distance communication (up to 100 km
or more).
o Multi-mode fiber (MMF): Used for shorter distances (up to 2 km).
Wireless (Unguided) Media
a) Radio Waves:
Used for long-range wireless communication such as AM/FM radio, TV broadcasting,
and mobile networks. Widely used in Wi-Fi, Bluetooth, and cellular networks (2G, 3G, 4G,
5G).
Range: Can cover distances from a few meters (Wi-Fi, Bluetooth) to several
kilometers (cellular networks).
Speed: Varies depending on the technology (e.g., Wi-Fi 6 up to 9.6 Gbps, 5G up to 10
Gbps).
b) Microwave Transmission:
Uses high-frequency radio waves to transmit data over long distances in a line-of-sight
fashion. Used for satellite communication and point-to-point communication.
Range: Can cover several kilometers, requiring unobstructed line-of-sight.
Speed: Can exceed 1 Gbps for high-capacity links.
c) Infrared:
Uses infrared light to transmit data over short distances. Commonly used in remote
controls, some IoT devices, and short-range communication systems.
Range: Limited to a few meters and requires line-of-sight.
Speed: Varies but typically up to a few Mbps.
d) Satellite Communication:
Data is transmitted between ground stations and orbiting satellites using radio waves.
Used for long-distance communication, broadcasting, and internet access in remote areas.
Range: Global, with satellites covering large geographical areas.
Speed: Varies, with some satellite services offering up to 100 Mbps.
Data Transmission Modes
1. Simplex Mode
In simplex mode, data is transmitted in one direction only, meaning the communication is
unidirectional. One device acts as the sender, and the other acts as the receiver, with no
possibility of the receiver sending data back to the sender.
3. Full-Duplex Mode
In full-duplex mode, data transmission occurs simultaneously in both directions. This
allows both the sender and receiver to communicate with each other at the same time, using
separate channels or frequencies for sending and receiving.
Data Communication Standards
The rules for formatting, transmitting, and receiving data over a given
transmission medium are defined by these standards. Data communication standards are
established and maintained by various standards organizations. These organizations
develop, regulate, and promote the adoption of standards that ensure the interoperability,
reliability, and efficiency of data communication systems globally.
Classification of Standards
Open Standards
Open standards are protocols, technologies, or specifications that are developed and
maintained by public or independent organizations. They are generally accessible to anyone,
ensuring broad compatibility and interoperability across different systems and devices.
Examples of Open Standards in Data Communication:
TCP/IP: The protocol suite that powers the internet, ensuring devices across the
globe can communicate.
IEEE 802.11 (Wi-Fi): A widely used standard for wireless networking that ensures
devices from different manufacturers can connect to the same network.
HTML/CSS: Standards for structuring and presenting web content that ensures
interoperability across web browsers.
Ethernet (IEEE 802.3): A standard for wired local area networks (LANs) that
enables broad interoperability.
Proprietary Standards
Proprietary standards, also known as closed standards, are developed, controlled, and
owned by a single company or organization. These standards are typically not publicly
available and often require licensing fees for use.
1. ISO (International Organization for Standardization)
Key Standards:
o OSI Model (ISO/IEC 7498): Defines the architecture for data communication
systems.
2. IEEE (Institute of Electrical and Electronics Engineers)
Key Standards:
o Ethernet (IEEE 802.3): The most widely used wired LAN standard.
o Wi-Fi (IEEE 802.11): Standard for wireless LAN communication.
o Bluetooth (IEEE 802.15): Standard for short-range wireless communication.
3. ITU (International Telecommunication Union)
Key Standards:
o ITU-T G.992: Standard for Digital Subscriber Line (DSL) technology.
o ITU-T X.25: Packet-switched data communication protocol used in early data
networks.
5. TIA (Telecommunications Industry Association)
Key Standards:
o TIA-568: Standard for structured cabling in telecommunications
infrastructure.
o TIA-942: Standard for data centers and telecommunications infrastructure.
6. ETSI (European Telecommunications Standards Institute)
Key Standards:
o GSM (Global System for Mobile Communications): Standard for mobile
networks.
o LTE (Long-Term Evolution): 4G mobile communication standard.
o Digital Video Broadcasting (DVB): Standard for broadcasting digital
television.
7. ANSI (American National Standards Institute)
Key Standards:
o ANSI T1.105: Standards for digital transmission systems, including SONET.
o ANSI X3.131: Fibre Channel, a high-speed network technology.
8. 3GPP (3rd Generation Partnership Project)
Key Standards:
o UMTS (Universal Mobile Telecommunications System): Standard for 3G
mobile networks.
o LTE (Long-Term Evolution): Standard for 4G mobile communication.
9. Wi-Fi Alliance
The Wi-Fi Alliance is an industry consortium that promotes Wi-Fi technology and
ensures interoperability between Wi-Fi devices.
Role in Data Communication:
o Responsible for certifying that Wi-Fi devices meet the standards set by IEEE
802.11.
o Ensures that Wi-Fi products from different manufacturers work together
seamlessly.
Key Standards:
o Wi-Fi CERTIFIED™: A certification program that guarantees device
interoperability and security for wireless networks.
10. W3C (World Wide Web Consortium)
Key Standards:
o HTML (HyperText Markup Language): Standard for structuring web
content.
o CSS (Cascading Style Sheets): Standard for presenting web content.
o XML (Extensible Markup Language): Standard for data representation on
the web.
These organizations play a crucial role in establishing the standards that govern data
communication across the world. From networking protocols to wireless technologies, these
standards ensure that devices, systems, and networks can communicate effectively, reliably,
and securely. Their work enables the global exchange of information and underpins much of
the modern digital infrastructure.
Summary
Data transmission is essential for communication across networks and can be carried
out using various modes, types, and media. Depending on the application, different
technologies and standards are used to ensure efficient and reliable data transfer between
devices. The choice of transmission method, interface, and medium depends on factors like
distance, data rate, and the environment in which the transmission occurs.
Data Transmission Modes:
Simplex: Data flows in one direction only (e.g., television broadcast).
Half-Duplex: Data can flow in both directions, but not simultaneously (e.g., walkie-
talkies).
Full-Duplex: Data can flow in both directions simultaneously (e.g., telephone
communication).
Types of Data Transmission:
Serial Transmission: Data is transmitted one bit at a time over a single channel (e.g.,
USB, RS232).
Parallel Transmission: Multiple bits are transmitted simultaneously over multiple
channels (e.g., data transfer in computer buses).
Data Transmission Interfaces:
DTE (Data Terminal Equipment): Refers to devices like computers or routers that
generate and consume data.
DCE (Data Circuit-terminating Equipment): Refers to devices like modems or
switches that provide a connection between DTEs for data transmission.
Data Transmission Media:
Wired (Guided): Data is transmitted through physical media such as twisted pair
cables, coaxial cables, or fiber optics.
Wireless (Unguided): Data is transmitted through the air or space using
electromagnetic waves (e.g., radio waves, microwaves, infrared).
Data Transmission Standards:
Organization    | Primary Focus                                                        | Key Contributions
ISO             | International standards across various industries                    | OSI model, structured cabling (ISO/IEC 11801)
IEEE            | Electrical and electronic engineering standards, incl. networking   | Ethernet (IEEE 802.3), Wi-Fi (IEEE 802.11), Bluetooth (IEEE 802.15)
ITU             | Global telecommunication standards and protocols                    | DSL (ITU-T G.992), X.25, radio frequency allocation
IETF            | Internet standards, especially TCP/IP and networking protocols      | TCP/IP, HTTP, SMTP
TIA             | Telecommunications and cabling infrastructure standards             | TIA-568 (structured cabling), TIA-942 (data centers)
ETSI            | European telecommunications standards, mobile communication         | GSM, LTE, DVB
ANSI            | U.S. standards across multiple industries                           | SONET, Fibre Channel
3GPP            | Mobile broadband communication standards (3G, 4G, 5G)               | UMTS, LTE, 5G NR
Wi-Fi Alliance  | Certification and interoperability of Wi-Fi devices                 | Wi-Fi CERTIFIED™
W3C             | Web standards for accessibility, interoperability, and security     | HTML, CSS, XML
Exercises:
6. Which agency developed standards for physical connection interfaces and electronic
signaling specifications?
a) EIA
b) ITU-T
c) ANSI
d) ISO
7. In ___________ transmission, a start bit and a stop bit frame a character byte
a) asynchronous serial
b) synchronous serial
c) parallel
d) (a) and (b)
8. In ________ transmission, we send bits one after another without start or stop bits or
gaps. It is the responsibility of the receiver to group the bits.
a) Synchronous
b) Asynchronous
c) Isochronous
d) none of the above
Unit 6
Communication Protocol
Objectives
Grasp the basic concepts of communication protocols, including their definitions,
purpose, and importance in network communication.
Identify and differentiate between various types of communication protocols
Examine the key functions and features of communication protocols, such as data
integrity, error detection, flow control, and addressing.
Familiarize with the layered architecture of communication protocols, such as the
TCP/IP model or the OSI model, and understand the role of each layer in facilitating
communication.
Assess the strengths and weaknesses of different communication protocols based on
criteria like reliability, efficiency, security, and scalability.
Introduction to Communication Protocol
A Communication Protocol is a set of rules, standards, and procedures that enable
devices to communicate with each other by defining how data is formatted, transmitted,
received, and processed. Communication protocols ensure the reliable exchange of
information between devices in a network, regardless of differences in hardware, software, or
location.
2. Data Integrity:
o Ensure that the data transmitted between devices arrives without errors or
corruption.
o Protocols incorporate error detection and correction mechanisms (e.g.,
checksums, parity bits) to verify data accuracy.
4. Flow Control:
o Prevent overwhelming a receiving device or network by regulating the rate of
data transmission.
o Flow control mechanisms ensure that the sender transmits data at a rate the
receiver can handle, avoiding congestion or data loss.
7. Synchronization:
o Ensure proper timing and coordination between sender and receiver to
maintain data integrity.
o Synchronization mechanisms are especially important for real-time
communications (e.g., video calls, voice over IP).
8. Security:
o Protect data from unauthorized access, tampering, or eavesdropping during
transmission.
o Protocols include encryption, authentication, and integrity checks to safeguard
sensitive information and ensure secure communication (e.g., HTTPS,
SSL/TLS).
9. Scalability:
o Support communication across small to large-scale networks without
significant performance degradation.
o Protocols like TCP/IP are designed to handle the growing demands of the
internet and large distributed systems.
11. Compatibility:
o Ensure that older or less advanced systems can still communicate with newer
technologies.
o Protocols often define backward compatibility features to support legacy
systems.
Connection-Oriented Protocols:
Connection-oriented protocols establish a connection before data is transmitted. This
connection ensures that the sender and receiver are synchronized, and data is delivered in the
correct order with error checking.
Key Characteristics:
Establishment of a Connection: A connection is established between the
communicating devices before any data transmission occurs (a handshake process).
Reliable Communication: These protocols ensure that the data sent is received
correctly and in sequence. If data is lost or corrupted, the protocol will retransmit it.
Acknowledgments: The receiver sends acknowledgment (ACK) signals to confirm
that the data packets have been received successfully.
Flow Control: These protocols implement flow control to prevent overwhelming the
receiver with too much data.
Error Detection and Correction: Errors are detected, and retransmission
mechanisms are in place to correct them.
Examples of Connection-Oriented Protocols:
TCP (Transmission Control Protocol): A widely used connection-oriented protocol
in the TCP/IP suite. It establishes a reliable, ordered, and error-checked data transfer
between devices, ensuring that data is delivered accurately.
o Use Case: Web browsing, file transfer, email.
FTP (File Transfer Protocol): Used for transferring files between a client and a
server, ensuring that the files are received as intended.
SCTP (Stream Control Transmission Protocol): Provides message-oriented
communication with multi-streaming capabilities, often used for telephony
applications.
Advantages:
Reliability: Guarantees that data will arrive at the destination correctly.
Orderly Communication: Ensures that packets are delivered in the right order.
Flow Control: Prevents congestion by controlling the flow of data.
Disadvantages:
Overhead: The connection setup (handshake) and maintenance result in more
overhead compared to connectionless protocols.
Latency: The process of establishing and managing the connection can introduce
delays.
Connectionless Protocols:
Connectionless protocols do not establish a connection before data is transmitted.
Instead, they send data packets (called datagrams) without guaranteeing their delivery, order,
or integrity.
Key Characteristics:
No Connection Setup: There is no handshake or formal connection established
between the sender and receiver.
Best-Effort Delivery: The protocol sends data without checking whether the receiver
is ready or whether the data arrives successfully.
No Acknowledgments: The receiver does not send ACK signals, and there is no
mechanism for retransmission in case of data loss.
Unreliable Communication: Since there is no guarantee that data will reach its
destination, errors are not corrected by the protocol itself.
Advantages:
Low Overhead: No need for connection establishment or maintenance, reducing
protocol overhead.
Faster Transmission: Ideal for time-sensitive applications where speed is prioritized
over reliability.
Disadvantages:
Unreliable Delivery: No guarantees of packet delivery, order, or integrity.
No Flow Control: There is no mechanism to prevent network congestion.
No Error Handling: Errors are not detected or corrected by the protocol itself,
making it less suitable for applications requiring high reliability.
Network Topology
Network topology refers to the physical or logical arrangement of nodes and links in
a computer network. It defines how devices (nodes like computers, servers, switches) are
interconnected, and how data flows between them. Network topology plays a critical role in
the performance, efficiency, and fault tolerance of a network.
1. Bus Topology
All devices are connected to a single central cable or backbone. Data is broadcast to all
devices but only the intended recipient accepts it.
Advantages:
o Easy to install and requires less cabling.
o Cost-effective for small networks.
Disadvantages:
o Difficult to troubleshoot.
o Performance degrades as more devices are
added.
o A failure in the main cable can bring down
the entire network.
2. Star Topology
All devices are connected to a central hub or switch. Data is sent from one device to the
hub, which then forwards it to the destination device.
Advantages:
o Easy to install, manage, and troubleshoot.
o Failure of one device does not affect others.
o High performance for small networks.
Disadvantages:
o Requires more cabling than bus topology.
o If the hub or switch fails, the entire network
goes down.
3. Ring Topology
Devices are connected in a circular loop. Each device is connected to two other devices,
forming a ring. Data travels in one direction (unidirectional) or both directions
(bidirectional), passing through each device until it reaches its destination.
Advantages:
o Simple data transmission and no data collisions.
o Predictable performance even with high traffic.
Disadvantages:
o A single point of failure can disrupt the network.
o Adding or removing devices can disrupt the
network.
4. Mesh Topology
Every device is connected to every other device in the network. Data can take multiple
paths to reach its destination, making the network very
reliable. Full Mesh: Every node is connected to every other
node. Partial Mesh: Some nodes are connected to multiple
nodes, while others are connected to just one.
Advantages:
o High fault tolerance as data can travel
through multiple routes.
o Reliable and secure.
Disadvantages:
o Very expensive due to the number of connections required.
o Difficult to set up and maintain.
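The connection count noted above grows quadratically: in a full mesh, each of the n nodes
links to the other n - 1 nodes, and each link is shared by two nodes, so the network needs
n(n - 1)/2 point-to-point links in total. For example, 5 devices need 5 x 4 / 2 = 10 links,
and 7 devices need 7 x 6 / 2 = 21.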
5. Tree Topology (also known as Hierarchical Topology)
A combination of star and bus topologies. Devices are connected in a hierarchical manner,
with groups of star-configured devices connected to a linear
backbone. Data travels from one level to another through the
hierarchical layers.
Advantages:
o Easy to manage and scalable.
o Good for large networks, such as schools or
large organizations.
Disadvantages:
o A failure in a segment can affect large portions of the network.
o Maintenance is complex.
6. Hybrid Topology
A combination of two or more topologies, such as a star-bus or star-ring combination.
Its characteristics depend on the combination of topologies used.
Advantages:
o Flexible and scalable.
o Can leverage the strengths of different
topologies.
Disadvantages:
o Complex and costly to design and
maintain.
Network Architecture
Network architecture refers to the design and structure of a computer network. It
defines how different network components interact, how data is transmitted, and how
network resources are organized. It encompasses both the hardware and software
components, protocols, communication methods, and standards that enable communication
between devices
1. Client-Server Architecture:
In this model, client devices (e.g., computers or mobile devices) request services from a
central server. The server provides resources, services, or data to clients. Clients depend on
the server for data storage and processing.
Examples:
o Web services where a browser (client) requests data from a web server.
o Email services using an email server.
3. Cloud-Based Architecture:
Services and resources are hosted in remote data centers (the cloud), and users access
them over the internet. Scalable and flexible; users pay only for the resources they use.
Reduces the need for local infrastructure.
Examples:
o Cloud storage services (e.g., Google Drive, Dropbox).
o Cloud computing platforms (e.g., AWS, Microsoft Azure).
5. Hybrid Architecture:
Combines elements of client-server, peer-to-peer, and cloud-based architectures
depending on the needs of the organization or network. Offers flexibility and scalability. A
mix of centralized and decentralized control.
Examples:
o Enterprises that use local servers for critical data and cloud services for less
sensitive applications.
Open Systems Interconnection (OSI)
The OSI (Open Systems Interconnection) Model is a conceptual framework used to
understand and standardize the functions of a networking system. It defines how data is
transferred across a network in seven distinct layers. Each layer has a specific role, and they
work together to enable communication between devices. The OSI model was developed by
the International Organization for Standardization (ISO) in 1984 to promote interoperability
between different vendors' networking products.
The Transport layer is responsible for transferring data between devices using either User
Datagram Protocol (UDP) or Transmission Control Protocol (TCP). If the data units are too
large and exceed the maximum packet size, they are divided into smaller segments before
being sent to Layer 3. Upon reception, the receiving device reassembles these segments at
Layer 4. Each segment is assigned a sequence number so that, although the data packets are
fragmented into smaller parts, they can be reorganized correctly for reassembly.
Layer 4 also manages flow control and error control. Flow control ensures that data
moves between senders and receivers at an optimal rate to prevent a fast connection from
overwhelming a slow one. A receiving device with a slower connection acknowledges the
receipt of segments and allows for the transmission of the next ones. Simultaneously, error
control verifies the packet checksums to ensure the completeness and accuracy of the
received data. Layer 4 also specifies the ports that Layer 5 will utilize for various functions.
For instance, DNS uses Port 53, which is typically open on systems, firewalls, and clients for
transmitting DNS queries. Web servers use port 80 for HTTP or 443 for HTTPS.
Applications of UDP:
Streaming Media: UDP is ideal for real-time applications like video and audio
streaming (e.g., YouTube, Skype), where it's more important to keep the stream
moving than to correct minor errors.
Online Gaming: Multiplayer online games often use UDP for fast transmission of
game state updates, where occasional packet loss is preferable to delays caused by
retransmissions.
DNS (Domain Name System): DNS uses UDP to quickly resolve domain names to
IP addresses, as a single request-response exchange is sufficient.
VoIP (Voice over IP): Voice calls over the internet rely on UDP to minimize delays
in conversation, accepting minor packet loss to ensure real-time communication.
Advantages of UDP:
Faster Transmission: No connection setup, error correction, or flow control allows
for quick data transmission.
Low Overhead: Minimal packet header size and no retransmission of lost packets
reduce resource consumption.
Useful for Real-Time Applications: Ideal for applications that prioritize speed over
perfect reliability, like live streaming or gaming.
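As a concrete illustration of this fire-and-forget behavior, here is a minimal Python sketch
using the standard socket module; the loopback address and port number are arbitrary
choices for the example:

import socket

# Receiver: bind first so arriving datagrams have somewhere to land.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 9999))

# Sender: no handshake and no acknowledgment -- just fire the datagram.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"game state update", ("127.0.0.1", 9999))

data, addr = receiver.recvfrom(1024)   # delivery is best-effort, not guaranteed
print(data)                            # b'game state update'
sender.close(); receiver.close()

Notice that nothing in this exchange confirms delivery: if the datagram were dropped, the
sender would never know, which is exactly the trade-off the applications above accept.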
2. Internet Layer:
Responsible for routing data across networks and determining the best path for data packets
to travel. Assigns logical addresses (IP addresses) to devices. Routes packets through
intermediate routers to reach their destination.
Protocols:
o IP (Internet Protocol): Handles addressing and routing of data packets.
IPv4: Most commonly used version of IP, with 32-bit addresses.
IPv6: Newer version with 128-bit addresses to accommodate more
devices.
o ICMP (Internet Control Message Protocol): Sends error messages and
diagnostic information (e.g., used in the "ping" command).
o IGMP (Internet Group Management Protocol): Manages multicasting in
IPv4.
3. Transport Layer:
Ensures reliable and error-free transmission of data between devices. It is responsible for
end-to-end communication, flow control, and error handling. Manages data flow between
devices. Ensures data is delivered in the correct order without duplication or loss.
Protocols:
o TCP (Transmission Control Protocol): A connection-oriented protocol that
ensures reliable, ordered delivery of data.
Key Features: Three-way handshake (connection establishment), error
checking, retransmission of lost packets.
o UDP (User Datagram Protocol): A connectionless protocol that sends data
without guaranteeing delivery, order, or integrity. It's faster but less reliable than
TCP.
Key Features: Low-latency transmission, often used for real-time
applications like video streaming or gaming.
4. Application Layer:
Provides high-level services for end-user applications and interfaces with the transport
layer. Facilitates communication between applications (e.g., web browsers, email clients) and
the network. Defines protocols for specific data exchange.
Protocols:
o HTTP/HTTPS (Hypertext Transfer Protocol / Secure): For web browsing.
o FTP (File Transfer Protocol): For transferring files.
o SMTP (Simple Mail Transfer Protocol): For sending emails.
o DNS (Domain Name System): Resolves domain names to IP addresses.
o POP3/IMAP: For retrieving emails.
Advantages of TCP/IP:
Interoperability: Works across different hardware and software platforms.
Scalability: Supports small local networks as well as the global internet.
Standardization: Based on open standards, enabling widespread adoption and
compatibility.
Disadvantages of TCP/IP:
Complexity: Can be complex to configure and manage in larger networks.
Security: Older versions of TCP/IP have security vulnerabilities (e.g., lack of
encryption in early IP protocols).
Overhead: TCP adds extra overhead due to its reliability mechanisms (e.g., error
checking, acknowledgments).
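For contrast with the UDP sketch earlier, here is a minimal TCP exchange in Python; the
connect()/accept() pair is the visible face of the three-way handshake and reliability
machinery described above (again, the address and port are arbitrary example values):

import socket

# Server: listen for connections; the kernel completes the three-way handshake.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 8888))
server.listen(1)

# Client: connect() returns only once the handshake has completed.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 8888))
conn, addr = server.accept()

client.sendall(b"hello")   # TCP queues, sequences, and retransmits as needed
print(conn.recv(1024))     # b'hello' -- delivered reliably and in order

client.close(); conn.close(); server.close()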
Summary
Communication protocols define how data is transmitted, structured, and received over
networks, ensuring devices can exchange information efficiently and reliably. From the basic
formatting of data packets to ensuring secure and reliable data transmission, protocols are the
backbone of modern communication networks, enabling interoperability, scalability, and
security.
The primary objectives of communication protocols are to ensure seamless and reliable
communication across networks, maintain data integrity and security, and optimize the use of
resources like bandwidth and time. These protocols allow devices and systems to interact in a
standardized way, enabling interoperability, scalability, and compatibility in diverse
networking environments.
The OSI model is a conceptual framework that standardizes the functions of networking
systems into seven layers, ensuring smooth communication across diverse network
environments. It provides guidelines for product developers and offers a common language
for network engineers to design, troubleshoot, and maintain networks. While TCP/IP is more
commonly used in practice, the OSI model remains a key reference model for understanding
and teaching network protocols.
OSI Model (7 Layers):
1. Physical Layer: Manages the transmission of raw data over physical media (e.g.,
cables, radio signals).
2. Data Link Layer: Manages error-free data transfer between adjacent nodes (e.g.,
Ethernet).
3. Network Layer: Handles routing of data across networks (e.g., IP).
4. Transport Layer: Ensures reliable delivery of data (e.g., TCP).
5. Session Layer: Manages communication sessions between applications.
6. Presentation Layer: Translates data into a readable format for applications (e.g.,
encryption, compression).
7. Application Layer: Provides services directly to users (e.g., HTTP, FTP, SMTP).
TCP/IP is the most widely used communication protocol suite, enabling devices to
communicate over the internet. With its layered structure, it provides a reliable and flexible
framework for data transmission, routing, and application support across diverse networks. The
combination of TCP's reliability and IP's addressing and routing capabilities makes TCP/IP
essential for modern digital communication.
TCP/IP Model (4 Layers):
1. Link Layer: Combines OSI’s Physical and Data Link layers.
2. Internet Layer: Corresponds to OSI’s Network layer (e.g., IP).
3. Transport Layer: Ensures reliable transmission of data (e.g., TCP, UDP).
4. Application Layer: Combines OSI’s Application, Presentation, and Session layers
(e.g., HTTP, DNS, FTP).
Network architectures may combine client-server, peer-to-peer, and cloud-based
services, depending on the need for scalability, reliability, security, and cost-efficiency.
Protocols define the rules for data transmission, ensuring reliable, secure, and efficient
communication between devices. Examples include TCP/IP, HTTP, and Ethernet. Standards
are created by organizations like IEEE, IETF, and ISO to ensure that networking technologies
are interoperable, secure, and compatible across various devices and systems.
These elements together create a cohesive framework that supports global communication
networks, such as the internet, ensuring efficient and standardized data transmission.
Exercises:
2. Seven devices are arranged in a mesh topology. _______ physical channels link these
devices.
a) Seven
b) Six
c) Twenty
d) Twenty-one
3. The key element of a protocol is _______.
a) Syntax
b) Semantics
c) Timing
d) All of the above
6. The _______ layer is responsible for delivering data units from one station to the next
without errors.
a) Transport
b) Network
c) data link
d) physical
9. When data are transmitted from device A to device B, the header from A's layer 4 is read
by B's _______ layer.
a) Physical
b) Transport
c) Application
d) None of the above
10. As a data packet moves from the upper layers to the lower layers, headers are _______.
a) Added
b) Removed
c) Rearranged
d) Modified
Unit 7
Error Detection and Correction
Objectives
Identify the different types of errors that can occur during data transmission and
storage
Understand the sources of these errors, such as noise, interference, and hardware
failures.
Review various error detection methods, including parity bits, checksums, and cyclic
redundancy checks (CRC)
Understand how these methods correct errors and their effectiveness in different
scenarios.
Compare and contrast different error detection and correction techniques based on
their efficiency, complexity, and practical applications.
Examine the practical applications of error detection and correction in various fields,
such as telecommunications, computer networks, data storage systems, and
multimedia systems.
Types of Error
Error detection and correction are essential techniques in digital communication and
data storage systems that ensure data integrity by identifying and rectifying errors that may
occur during data transmission or storage. Errors can arise from various sources, including
noise, interference, signal degradation, or hardware malfunctions.
When the information sent and the information received differ, it is called an error. Noise
during transmission can corrupt the binary bits of a digital signal as they travel from source
to recipient, flipping a bit from 0 to 1 or from 1 to 0. Because any transmitted message may
become distorted or jumbled by noise, error-detection codes (implemented at either the Data
Link Layer or the Transport Layer of the OSI model) are appended to digital transmissions
as supplementary data. This helps the receiver find any errors that may have occurred while
the message was being sent.
Types of Error
Single-Bit Errors:
The term "single-bit error" may sound quite straightforward and innocuous, but in
practice, it has the potential to corrupt whole data sets, providing the recipient with entirely
erroneous information upon decoding. Additionally, as the error is genuinely extremely little
but causes significant data destruction, a highly good error detection system is needed to
identify it.
As an illustration, suppose that 0111 is the data packet that a sender transmitted to a
recipient. And a single-bit mistake happened during transmission, resulting in the recipient
seeing 0011 rather than 0111, or only one bit flipped. When the receiver decodes it to decimal
now, they will obtain 3(0011) rather than the proper data, which is 7(0111). Thus, there will
be significant data corruption when this data is employed for intricate logic. Using error
detection and repair techniques like CRC and parity bits is one way to apply
countermeasures.
Burst Errors:
Involve multiple bits being altered within a contiguous block. Because two or more bits are
affected, burst errors are more complex to detect and correct than single-bit errors. A burst
error occurs when many data bits inside a packet are modified, distorted, or altered while
being sent; because this multi-bit corruption happens in one quick stretch, it is known as a
"burst." The primary causes of burst errors are impulsive noise and interference on the
communication line. A data packet can become completely garbled and worthless as a result
of numerous corrupted bits. Retransmission is one way to recover from burst faults, but it
consumes more network resources and can itself suffer new burst errors. Therefore, we must
use reliable error detection and correction methods, such as convolutional or Reed-Solomon
codes. These techniques add redundant data so that, in the event of a burst error, the
recipient can reconstruct the original data. For instance, when a data block of 110001 is sent
and the recipient receives 101101, a burst error has damaged a total of 3 bits in a single
occurrence.
Error Detection
1. Parity Bit:
A parity bit is a single extra bit appended to a block of data so that the total number of 1s
becomes even (even parity) or odd (odd parity).
Limitation: Detects only an odd number of flipped bits and cannot identify which bit is
erroneous; if two bits are interchanged, it cannot detect the error.
Example:
Original data: 1010110 (contains 4 ones, even number)
Parity bit: 0 (to maintain even parity)
Transmitted data: 10101100
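A minimal Python sketch of even-parity generation and checking, mirroring the example
above (the function names are illustrative):

def parity_bit(data):
    # Even parity: the appended bit makes the total count of 1s even.
    return '1' if data.count('1') % 2 else '0'

def passes_even_parity(frame):
    # A received frame is accepted if its total number of 1s is even.
    return frame.count('1') % 2 == 0

data = "1010110"                 # 4 ones, so the parity bit is 0
frame = data + parity_bit(data)  # "10101100"
assert passes_even_parity(frame)            # intact frame accepted
assert not passes_even_parity("10100100")   # a single flipped bit is caught
assert passes_even_parity("10011100")       # but two flipped bits slip through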
2. Checksums:
Checksum is a simple error detection technique used in data communication to ensure
the integrity of transmitted data. It works by creating a value (the checksum) derived from the
data content that is being sent. This value is then sent alongside the original data, allowing the
receiver to verify whether the data has been transmitted correctly.
In the checksum error detection technique, the data is split into k segments of m bits each.
At the sender's end, the segments are added using 1's complement arithmetic to obtain the
sum, and the complement of the sum is taken to obtain the checksum. The checksum
segment is sent along with the data segments. At the receiver's end, all received segments,
including the checksum, are added using 1's complement arithmetic, and the result is then
complemented. If the final result is zero, the data is accepted; otherwise, it is rejected.
Limitation: May fail to detect errors if two bits are altered in such a way that the
checksum remains unchanged.
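A minimal Python sketch of this 1's-complement checksum, assuming 8-bit segments (the
segment width is an illustrative choice); the data values reuse exercise 5 at the end of this
unit:

def ones_complement_sum(segments, bits=8):
    mask = (1 << bits) - 1
    total = 0
    for seg in segments:
        total += seg
        while total > mask:                   # wrap carries back in (1's complement)
            total = (total & mask) + (total >> bits)
    return total

def make_checksum(segments, bits=8):
    # Sender: complement of the 1's-complement sum of the data segments.
    return ones_complement_sum(segments, bits) ^ ((1 << bits) - 1)

def accept(segments_and_checksum, bits=8):
    # Receiver: complementing the sum of everything must give zero.
    return ones_complement_sum(segments_and_checksum, bits) ^ ((1 << bits) - 1) == 0

data = [0b10101001, 0b00111001]
cs = make_checksum(data)                          # 0b00011101
assert accept(data + [cs])                        # clean transmission accepted
assert not accept([0b10101000, 0b00111001, cs])   # a single-bit error is caught
assert accept([0b00101001, 0b10111001, cs])       # the limitation above: offsetting flips pass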
3. Cyclic Redundancy Check (CRC):
* Sender Side (Generation of Encoded Data from Data and Generator Polynomial or
Key):
1. The binary data is first augmented by appending k-1 zeros to the end of the data,
where k is the number of bits in the key.
2. Use modulo-2 binary division to divide the augmented data by the key, and store the
remainder of the division.
3. Append the remainder to the end of the original data to form the encoded data, and
send it.
Example:
Data word to be sent: 100100; key: 1101.
Sender side: dividing the augmented word 100100000 by 1101 (modulo-2) leaves the
remainder 001, so the transmitted codeword is 100100001.
Receiver side, no error in transmission: dividing the received codeword by 1101 leaves a
remainder of all zeroes, so the data is accepted.
Receiver side, error in transmission: since the remainder is not all zeroes, the error is
detected at the receiver side.
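A minimal Python sketch of CRC generation and checking via modulo-2 (XOR) division,
reusing the data word 100100 and key 1101 from the example (the helper names are
invented for this sketch):

def mod2_remainder(dividend, key):
    # Long division in which subtraction is bitwise XOR (modulo-2 arithmetic).
    rem = dividend[:len(key)]
    for i in range(len(key), len(dividend) + 1):
        if rem[0] == '1':
            rem = ''.join(str(int(a) ^ int(b)) for a, b in zip(rem, key))
        rem = rem[1:]                    # drop the leading bit
        if i < len(dividend):
            rem += dividend[i]           # bring down the next bit
    return rem

def crc_encode(data, key):
    padded = data + '0' * (len(key) - 1)        # step 1: append k-1 zeros
    return data + mod2_remainder(padded, key)   # steps 2-3: append the remainder

def crc_accept(codeword, key):
    # Receiver divides the whole codeword; an all-zero remainder means no error found.
    return set(mod2_remainder(codeword, key)) == {'0'}

sent = crc_encode("100100", "1101")         # -> "100100001"
assert crc_accept(sent, "1101")             # no error in transmission
assert not crc_accept("100000001", "1101")  # corrupted codeword is detected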
Vertical Redundancy Check (VRC) and Longitudinal Redundancy Check (LRC) are
two error detection techniques used to ensure data integrity during transmission. Both
methods involve adding redundancy bits to the data to detect errors that might have occurred.
Error Correction
Error correction is a technique used in digital communication to detect and correct
errors that occur during data transmission. Unlike error detection methods that only identify
the presence of errors, error correction methods allow the receiver to reconstruct the original
data even if errors are present. Error Correction codes are used to detect and repair mistakes
that occur during data transmission from the transmitter to the receiver. Error correction
techniques rely on the addition of redundant data to the original message. This redundancy
allows the receiver to not only detect errors but also determine their location and make
corrections. Two main approaches to error correction are used: forward error correction, in
which the transmitted redundancy is sufficient for the receiver to correct errors on its own,
and automatic repeat request, in which detected errors trigger retransmission of the data.
Hamming Code
Hamming code is a linear error-correcting code developed by Richard Hamming in the
1950s. It is used to detect and correct single-bit errors in data transmission. Hamming code
adds redundancy bits to the original data to create a code word, enabling the detection and
correction of errors in the transmitted message. To generate a Hamming code, we add parity
bits to the original data bits. The number of parity bits, denoted as r, depends on the length of
the data bits m and satisfies the following condition:
2^r ≥ m + r + 1
where m is the number of data bits and r is the number of parity bits.
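As a quick worked instance of this condition: m = 7 data bits require r = 4 parity bits,
since 2^4 = 16 ≥ 7 + 4 + 1 = 12, whereas r = 3 is insufficient because 2^3 = 8 < 7 + 3 + 1 = 11.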
The parity bits are placed at positions that are powers of 2: 2^0, 2^1, 2^2, and so on. For
example, in a 7-bit Hamming code, the parity bits are placed at positions 1, 2, and 4.
Parity bit 1 covers all the bit positions whose binary representation includes a 1 in the
least significant position (1, 3, 5, 7, 9, 11, etc.).
Parity bit 2 covers all the bit positions whose binary representation includes a 1 in the
second position from the least significant bit (2, 3, 6, 7, 10, 11, etc.).
Parity bit 4 covers all the bit positions whose binary representation includes a 1 in the
third position from the least significant bit (4-7, 12-15, 20-23, etc.).
Parity bit 8 covers all the bit positions whose binary representation includes a 1 in the
fourth position from the least significant bit (8-15, 24-31, 40-47, etc.).
In general, each parity bit covers all bits where the bitwise AND of the parity position
and the bit position is non-zero.
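The coverage rule just stated translates directly into code. Below is a minimal Python
sketch of Hamming encoding with even parity (positions are 1-indexed; the function name is
illustrative):

def hamming_encode(data):
    m = len(data)
    r = 0
    while 2 ** r < m + r + 1:          # smallest r with 2^r >= m + r + 1
        r += 1
    n = m + r
    code = [None] * (n + 1)            # index 0 unused; positions run 1..n
    bits = iter(data)
    for pos in range(1, n + 1):
        if pos & (pos - 1):            # not a power of 2 -> a data position
            code[pos] = next(bits)
    for p in (2 ** i for i in range(r)):
        # Parity bit p covers every position whose bitwise AND with p is non-zero.
        ones = sum(code[pos] == '1' for pos in range(1, n + 1) if pos & p and pos != p)
        code[p] = '1' if ones % 2 else '0'   # even parity
    return ''.join(code[1:])

assert hamming_encode("1011") == "0110011"   # the classic Hamming(7,4) example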
Step 5: Since we check for even parity, set a parity bit to 1 if the total number of ones in the
positions it checks is odd, and set it to 0 if the total number of ones in the positions it
checks is even.
Redundant bits are always placed at positions that correspond to the power of 2, so the
redundant bits will be placed at positions: 1,2,4 and 8.
R1:
We look at bits 1,3,5,7,9,11 to calculate R1 and set it so that even parity is maintained
across those positions.
R2:
We look at bits 2,3,6,7,10,11 to calculate R2. In this case, because the number of 1s in these
bits together is odd, we make the R2 bit equal to 1 to maintain even parity.
R4:
We look at bits 4,5,6,7 to calculate R4. In this case, because the number of 1s in these bits
together is odd, we make the R4 bit equal to 1 to maintain even parity.
R8:
We look at bits 8,9,10,11 to calculate R8. In this case, because the number of 1s in these bits
together is even, we make the R8 bit equal to 0 to maintain even parity.
Thus, the final block of data that is transferred consists of the data bits interleaved with the
parity bits R1, R2, R4, and R8 at positions 1, 2, 4, and 8.
Summary
Error Detection
Error detection techniques identify whether errors have occurred during data transmission.
These techniques rely on adding extra bits to the original data, which are used to check the
accuracy of the received data. Common error detection methods include:
Parity Bit: Adds a single bit to make the number of 1s either even (even parity) or
odd (odd parity).
Checksum: A value calculated from the data, used to verify the integrity of data
received.
Cyclic Redundancy Check (CRC): A polynomial-based method that generates a
remainder used to check data accuracy.
Hamming Code: Corrects single-bit errors and detects two-bit errors by using parity bits
placed at specific positions in the data, where the number of parity bits r for m data bits
satisfies 2^r ≥ m + r + 1.
Key Differences
Error Detection identifies the presence of errors but does not correct them, while
Error Correction not only detects but also corrects the errors.
Error detection methods are simpler and require less computational power compared
to error correction techniques, which are more complex due to the need to locate and
correct errors.
Importance
Reliability: Ensures data integrity by minimizing errors in digital communication.
Efficiency: Reduces the need for retransmissions, which can save time and
bandwidth.
Data Accuracy: Guarantees that information is accurately transmitted and received,
which is critical in real-time communication systems.
Error detection and correction are fundamental to reliable data communication. Error
detection methods like parity bits, checksums, and CRC identify errors, while error correction
techniques like Hamming code allow for the recovery of the original data. Together, they
form the backbone of data integrity in communication systems, enhancing the accuracy and
reliability of data transfer.
Exercises:
1. For a 12 bit data string of 101100010010, determine the number of Hamming bits
required, and the Hamming code.
Ans.
2. In CRC if the data unit is 100111001 and the divisor is 1011 then what is divided at the
receiver?
Ans.
3. A bit stream 1101011011 is transmitted using the standard CRC method. The generator
polynomial is x^4 + x + 1. What is the actual bit string transmitted?
Ans.
4. A hamming code 0110001 is being received. Find the correct code which is being
transmitted.
Ans.
5. If the data transmitted along with checksum is 10101001 00111001 00011101. But the
data received at destination is 00101001 10111001 00011101. Detect the error
Ans.
Unit 8
Introduction to Computer Network and Security
Objectives
• Define key terms related to computer networks and security, including nodes, protocols,
encryption, and authentication.
• Discuss different types of networks (LAN, WAN, MAN, etc.) and their characteristics.
• Identify common security threats (malware, phishing, DDoS attacks) and
vulnerabilities that can compromise network security.
• Explain the importance of encryption in securing data in transit and at rest.
• Discuss emerging technologies and trends impacting network security, such as AI,
machine learning, and zero-trust security models.
• Explore the challenges and opportunities presented by advancements in networking
and security technologies.
Computer Network
Computer Network refers to a collection of interconnected devices that share resources
and communicate with each other over various media. The primary purpose of a computer
network is to enable data exchange, resource sharing, and connectivity among computers,
servers, and other devices.
In the context of computer networks, nodes are individual devices or endpoints that are
connected to the network. Each node can send, receive, or forward data, playing a crucial role
in communication and data transfer within the network. Nodes can be computers, servers,
routers, switches, printers, smartphones, or any other device that has a network address and
can communicate over a network. Nodes are essential components of computer networks,
acting as connection points that communicate and share data. Understanding the role of nodes
helps in managing network infrastructure, optimizing data flow, and implementing security
measures. They are critical to the functioning of any network, from small local setups to vast
global systems like the internet.
Types of Nodes
1. End Devices (Hosts):
o Computers: Desktop PCs, laptops, or workstations used by individuals for
various tasks.
o Mobile Devices: Smartphones, tablets, and other portable devices connected
to the network.
o Servers: Powerful machines that store, process, and deliver data to other
nodes on the network.
o Printers/Scanners: Network-connected peripherals that can receive print jobs
or data requests.
2. Networking Devices:
o Routers: Nodes that direct data packets between different networks,
determining the best path for data to travel. Routers use headers and
forwarding tables to find the optimal way to forward data packets between
networks. A router is a computer networking device that links two or more
computer networks and selectively exchanges data packets between them,
using the address information in each packet to determine whether the source
and destination are on the same network or whether the packet must be
carried from one network to another. When numerous routers are deployed
across a large collection of interconnected networks, they share target-system
addresses so that each router can build a table showing the preferred paths
between any two systems on the associated networks.
o Switches: Devices that connect multiple nodes within the same network,
forwarding data only to the specific device it is meant for. A switch differs
from a hub in that it forwards frames only to the ports participating in the
communication, rather than to all connected ports. A switch breaks up the
collision domain, though it still presents itself as a single broadcast domain.
Switches make frame-forwarding decisions based on MAC addresses.
o Hubs: Basic networking devices that broadcast data to all connected nodes,
regardless of the intended recipient. A hub joins together many twisted pair
or fiber optic Ethernet devices so that they appear as a single network
segment, and can be visualized as a multiport repeater. A network hub is a
relatively simple broadcast device: any packet entering any port is
regenerated and broadcast out on all other ports, and hubs do not manage any
of the traffic that passes through them. Because every packet is sent out
through all other ports, packet collisions occur, substantially impeding the
smooth flow of communication.
o Access Points: Devices that enable wireless communication between wired
networks and wireless devices.
3. Specialized Nodes:
o Firewalls: Security devices that monitor and filter incoming and outgoing
network traffic based on predefined security rules.
3. Performance management
A company’s workload only increases as it grows. When one or more processors are
added to the network, it improves the system’s overall performance and accommodates this
growth. Saving data in well-architected databases can drastically improve lookup and fetch
times.
4. Cost savings
Huge mainframe computers are an expensive investment, and it makes more sense to
add processors at strategic points in the system. This not only improves performance but also
saves money. Since it enables employees to access information in seconds, networks save
operational time, and subsequently, costs. Centralized network administration also means that
fewer investments need to be made for IT support.
7. Reduction of errors
Networks reduce errors by ensuring that all involved parties acquire information from
a single source, even if they are viewing it from different locations. Backed-up data provides
consistency and continuity. Standard versions of customer and employee manuals can be
made available to a large number of people without much hassle.
Encryption is a process used to convert data into a coded format to protect it from
unauthorized access. It is one of the most crucial methods in data security, ensuring that
sensitive information remains confidential and secure during storage or transmission.
Encryption uses algorithms to transform plaintext (readable data) into ciphertext (an
unreadable, encoded format), which can only be decoded by authorized users with the
correct decryption key. It is frequently used to protect sensitive data, including financial
and personal information as well as communications over the internet.
Decryption is the process of converting encrypted data (ciphertext) back into its original,
readable form (plaintext) using a decryption key. It is the reverse of the encryption process,
allowing authorized users to access the information that was securely transmitted or stored.
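Here is a minimal sketch of the encrypt/decrypt round trip in Python. It assumes the
third-party cryptography package, whose Fernet recipe is one convenient AES-based
symmetric cipher; any equivalent library would illustrate the same idea:

# pip install cryptography   (third-party package, assumed available)
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # the shared secret; only key holders can decrypt
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"account 1234, balance 500")   # plaintext -> ciphertext
plaintext = cipher.decrypt(ciphertext)                      # ciphertext -> plaintext
assert plaintext == b"account 1234, balance 500"

Anyone who intercepts the ciphertext without the key sees only an unintelligible token,
which is precisely the property described above.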
Examples:
Advanced Encryption Standard (AES): Widely used in modern encryption for
securing sensitive data. AES (Advanced Encryption Standard) was developed for the
government to protect electronic data and eventually replaced DES as the standard.
AES is available for free public or private use. AES encrypts data using one of three
symmetric key lengths: 128, 192, or 256 bits. Each key length uses a different number of
encryption rounds: a 128-bit key uses 10 rounds, a 192-bit key 12 rounds, and a 256-bit
key 14 rounds. The longer the key, the more difficult it is for a bad actor to intercept the
message and crack it.
Triple DES (3DES): An improvement over DES, applying the algorithm three times
to increase security. Developed by IBM in the early 1970s, DES (Data Encryption
Standard) became the encryption standard soon after its development: the U.S.
government adopted DES as its standard in 1976, and in 1977 it was recognized as a
standard by the National Bureau of Standards. DES uses a symmetric key of 64 bits,
of which only 56 bits are effective, which left the system limited and vulnerable to
brute-force attacks, the most basic form of digital attack. Because DES used a single
key with only 56 effective bits, the number of possible key combinations was
relatively low.
Examples:
Rivest-Shamir-Adleman (RSA): One of the most widely used public-key
cryptosystems for secure data transmission. RSA, named after MIT researchers
Ronald Rivest, Adi Shamir, and Leonard Adleman, is a cipher that relies on asymmetric
encryption and is widely used for the secure transmission of data. It uses the
public/private method of encryption and decryption and has a slower data transfer rate
than symmetric algorithms. RSA is considered secure in practice: it is difficult to break
because its keys are generated from the product of large prime numbers, and recovering
the private key requires factoring that product, which is computationally hard.
Elliptic Curve Cryptography (ECC): A public-key technique that uses the mathematics
of elliptic curves to provide strong security with shorter keys; it is often used in mobile
devices and smart cards. Like RSA, ECC relies on pairs of public and private keys for
the encryption and decryption of data such as web traffic.
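To show the public/private mechanics in miniature, here is a toy RSA round trip in Python
using the small textbook primes p = 61 and q = 53 (deliberately tiny and utterly insecure;
real deployments use keys of 2048 bits or more):

p, q = 61, 53               # two secret (toy) primes
n = p * q                   # 3233: the public modulus
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent, chosen coprime to phi
d = pow(e, -1, phi)         # 2753: private exponent (modular inverse; Python 3.8+)

m = 65                      # the message, encoded as a number smaller than n
c = pow(m, e, n)            # encrypt with the public key (e, n) -> 2790
assert pow(c, d, n) == m    # decrypt with the private key (d, n)

Security rests on the fact that recovering d from (e, n) would require factoring n back
into p and q, which is infeasible at realistic key sizes.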
The choice of a specific type of encryption is crucial because it directly impacts the security,
performance, and usability of data protection methods. Each type of encryption has its
strengths, weaknesses, and ideal use cases, making it important to choose the right one based
on the needs of a given application.
Security Requirements: The choice of encryption depends on the level of security needed.
For instance, financial institutions or government data require the highest level of encryption,
such as AES-256 or RSA with long key lengths.
Data Sensitivity: Highly sensitive information, like personally identifiable information (PII) and
classified documents, often requires both symmetric and asymmetric encryption to ensure
secure data transfer and storage.
Performance Needs: For applications that require fast processing, symmetric encryption like
AES is preferred due to its lower computational overhead. Asymmetric encryption might be
used only for the initial key exchange, followed by symmetric encryption for data transfer;
this hybrid pattern is sketched after this list.
Key Management: Asymmetric encryption is ideal when key management is a concern since
it eliminates the need to securely share keys over insecure channels. This is critical in
applications like digital certificates and SSL/TLS for secure internet communications.
Device Capabilities: Devices with limited computational power, such as IoT devices, often
rely on lightweight symmetric encryption or ECC (a more efficient form of asymmetric
encryption) to ensure data security without overwhelming the system resources.
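As a rough sketch of the hybrid pattern mentioned under Performance Needs, the example
below wraps a fresh AES session key with RSA and then encrypts the bulk data symmetrically;
it reuses the constructs from the two earlier sketches and is a simplified illustration,
not a production protocol.

    import os
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Sender: encrypt the bulk data with a fast symmetric session key,
    # then wrap that small key with the receiver's public RSA key.
    session_key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, b"a large document...", None)
    wrapped_key = receiver_key.public_key().encrypt(session_key, oaep)

    # Receiver: unwrap the session key, then decrypt the bulk data.
    recovered_key = receiver_key.decrypt(wrapped_key, oaep)
    document = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)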
1. Viruses
A virus is a type of malicious software (malware) that attaches itself to a legitimate program
or file. When the infected program runs, the virus activates and can replicate itself, spreading
to other files and programs on the computer. A virus requires user interaction to spread (e.g.,
opening an infected file or running a malicious program) and can corrupt, delete, or modify
files and disrupt system operations.
Types of Viruses:
File Infectors: Attach themselves to executable files and spread when the file is run.
As one of the most popular types of viruses, a file-infector virus arrives embedded in or
attached to a computer program file, typically a file with an .EXE extension in its name.
When the program runs, the virus instructions are activated along with the original program.
The virus then carries out the instructions in its code: it could delete or damage files on
your computer, attempt to implant itself within other program files, or do anything else
its creator dreamed up to cause havoc. One telltale sign of a file-infector virus is that
the size of a file has suspiciously increased; if a program file is too big, a virus may
account for the extra size. To judge this, you need to know two things: what size the file
should be when fresh from the software maker, and whether the virus is a cavity seeker, a
dangerous type that hides itself in the unused space within a program file.
Boot Sector Viruses: Infect the master boot record of a computer and load when the
system starts up. While less common today, boot-sector viruses were once the mainstay
of computer viruses. The boot sector is the portion (sector) of a floppy disk or hard
drive that the computer consults first when it boots up; it holds the instructions that
tell the computer how to start. A boot-sector virus alters those instructions so that the
computer loads the virus during start-up.
Macro Viruses: Infect files like Word documents or Excel spreadsheets by exploiting
macro scripting features. During the late 1990s and early 2000s, macro viruses were the
most prevalent viruses. Unlike other virus types, macro viruses are not specific to an
operating system and spread with ease via attachments, floppy disks, Web downloads,
file transfers, and collaborative applications. Popular applications that support macros
(such as Microsoft Word and Microsoft Excel) are the most common platforms for this
type of virus. These viruses are written in Visual Basic for Applications (VBA) and are
relatively easy to create. Macro viruses infect at different points during a file's use,
for example, when it is opened, saved, closed, or deleted.
Effects:
Slows down system performance.
Causes data loss and corruption.
Can lead to identity theft if personal data is compromised.
2. Worms
A worm is a type of malware that can self-replicate and spread independently without the
need for user interaction. Worms often exploit vulnerabilities in software or operating
systems to propagate; they do not require a host file or user action to spread and can
quickly infect multiple systems over a network. A worm is a malicious program that
originates on a single computer and searches for other computers connected through a
local area network or internet connection. When a worm finds another computer, it
replicates itself onto that computer and continues to look for other connected computers
on which to replicate. A worm continues to attempt to replicate itself indefinitely or
until a self-timing mechanism halts the process. Unlike a virus, it does not infect other
files: worm code is stand-alone, existing as a separate file.
Types of Worms:
Email Worms: Spread through infected email attachments or links. The worm uses the
victim's mailbox as its delivery client: the mail carries an infected link or attachment
which, once opened, downloads the worm. The worm then searches the email contacts of the
infected system and sends itself to those contacts so that their systems are compromised
as well. These worms often disguise their attachments with double extensions, such as
.mp4 or other media extensions, so that the user believes them to be harmless media
files, or they hide the malicious download behind a shortened link. Once the link is
clicked and the worm is downloaded, it can delete or modify data and disrupt the network.
An example of an email worm is the ILOVEYOU worm, which infected computers in 2000.
Internet Worms: Exploit vulnerabilities in network protocols to spread from one
computer to another. The internet is used as a medium to search for other vulnerable
machines and infect them; systems without antivirus software or recent security updates
are the most easily affected. Once vulnerable machines are located, they are infected,
and the same scanning process starts all over again from those systems. The worm spreads
through the internet or local area network connections.
Instant Messaging Worms: Spread through messaging apps by sending infected links
or files to contacts. These worms work like email worms: the contact list from a chat
room or messaging app is harvested, and messages are sent to those contacts. Once a
contact accepts the invitation and opens the message or link, the system is infected.
The worms carry either links to malicious websites or attachments to download. They are
not as effective as other worms, and users can often remove them by changing their
passwords and deleting the infected messages.
Effects:
Consumes network bandwidth, leading to slow internet connections.
Can cause massive data loss and disrupt services.
May install backdoors that allow hackers to control infected systems.
3. Hacking
Hacking refers to unauthorized access to or manipulation of computer systems, networks,
or data. Hackers use various techniques to exploit vulnerabilities in software, hardware, or
network security. Hacking in cyber security refers to the misuse of devices like computers,
smartphones, tablets, and networks to cause damage to or corrupt systems, gather information
on users, steal data and documents, or disrupt data-related activity.
Types of Hackers:
White Hat Hackers: Ethical hackers who help organizations find and fix security
vulnerabilities. White hat hackers can be seen as the “good guys” who attempt to
prevent the success of black hat hackers through proactive hacking. They use their
technical skills to break into systems to assess and test the level of network security,
also known as ethical hacking. This helps expose vulnerabilities in systems before
black hat hackers can detect and exploit them. The techniques white hat hackers use
are similar to or even identical to those of black hat hackers, but these individuals are
hired by organizations to test and discover potential holes in their security defenses.
Black Hat Hackers: Malicious hackers who exploit vulnerabilities for personal gain,
financial theft, or to cause damage. Black hat hackers are the "bad guys" of the
hacking scene. They go out of their way to discover vulnerabilities in computer systems
and software to exploit them for financial gain or for more malicious purposes, such as
to gain reputation, carry out corporate espionage, or as part of a nation-state hacking
campaign. These individuals’ actions can inflict serious damage on both computer users
and the organizations they work for. They can steal sensitive personal information,
compromise computer and financial systems, and alter or take down the functionality
of websites and critical networks.
Gray Hat Hackers: Operate between ethical and unethical hacking, sometimes
violating laws but not with malicious intent. Gray hat hackers sit somewhere between
the good and the bad guys. Like black hat hackers, they may violate standards and
principles, but unlike them, they do so without intending to do harm or to gain
financially. Their actions are typically carried out for the common good. For example,
they may exploit a vulnerability to raise awareness that it exists, but unlike white hat
hackers, they do so publicly. This alerts malicious actors to the existence of the
vulnerability.
Effects:
Leads to data breaches and loss of sensitive information.
Causes financial losses due to theft or downtime.
Damages an organization's reputation and customer trust.
Network Security
Types of Firewalls
Packet-Filtering Firewalls
The most basic type of firewall, which inspects data packets individually, checking their
source and destination addresses, protocols, and ports, and either permits or denies each
packet based on a set of rules defined in the firewall's configuration. Because it cannot
inspect the contents of the packets, it is less effective against more sophisticated
attacks. A packet-filtering firewall acts like a management program that monitors network
traffic and filters incoming packets based on configured security rules: it blocks traffic
based on IP protocol, IP address, and port number whenever a data packet does not match
the established rule-set. While packet-filtering firewalls can be considered a fast
solution without many resource requirements, they also have limitations. Because these
firewalls do not prevent web-based attacks, they are not the safest option.
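A minimal sketch of the rule-matching logic a packet filter applies, in Python; the rule
set, field names, and addresses are invented for illustration, not drawn from any real
firewall product.

    # Each rule: (action, protocol, source IP, destination port); "*" matches anything.
    RULES = [
        ("deny",  "tcp", "203.0.113.7", "*"),   # block a known-bad host
        ("allow", "tcp", "*",           80),    # allow HTTP
        ("allow", "tcp", "*",           443),   # allow HTTPS
        ("deny",  "*",   "*",           "*"),   # default: deny everything else
    ]

    def filter_packet(protocol, src_ip, dst_port):
        """Return the action of the first rule the packet matches."""
        for action, r_proto, r_src, r_port in RULES:
            if (r_proto in ("*", protocol)
                    and r_src in ("*", src_ip)
                    and r_port in ("*", dst_port)):
                return action
        return "deny"  # fail closed if no rule matches

    print(filter_packet("tcp", "198.51.100.2", 443))  # allow
    print(filter_packet("tcp", "203.0.113.7", 443))   # deny

Note that the decision uses only header fields; nothing in the packet's payload is ever
examined, which is exactly the limitation described above.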
Stateful Multi-Layer Inspection (SMLI) Firewalls
A stateful multi-layer inspection (SMLI) firewall goes beyond inspecting packets in
isolation: it also tracks the state of each connection and filters traffic based on that
context. In simple words, when a user establishes a connection and requests data, the SMLI
firewall creates a database (state table). The database is used to store session information such
as source IP address, port number, destination IP address, destination port number, etc.
Connection information is stored for each session in the state table. Using stateful inspection
technology, these firewalls create security rules to allow anticipated traffic. In most cases,
firewalls are implemented as additional security levels. These types of firewalls implement
more checks and are considered more secure than stateless firewalls. This is why stateful packet
inspection is implemented along with many other firewalls to track statistics for all internal
traffic. Doing so increases the load and puts more pressure on computing resources. This can
give rise to a slower transfer rate for data packets than other solutions.
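To illustrate the state table idea, here is a toy sketch in Python of how a stateful
firewall might track sessions; the fields and logic are simplified assumptions, not a
real firewall implementation.

    # State table: one entry per active session, keyed by the connection tuple.
    state_table = {}

    def session_key(proto, src_ip, src_port, dst_ip, dst_port):
        return (proto, src_ip, src_port, dst_ip, dst_port)

    def handle_outbound(proto, src_ip, src_port, dst_ip, dst_port):
        """Record the session when an internal host opens a connection."""
        key = session_key(proto, src_ip, src_port, dst_ip, dst_port)
        state_table[key] = "ESTABLISHED"

    def handle_inbound(proto, src_ip, src_port, dst_ip, dst_port):
        """Allow inbound packets only if they belong to a known session."""
        # Inbound replies have source/destination swapped relative to the entry.
        key = session_key(proto, dst_ip, dst_port, src_ip, src_port)
        return "allow" if state_table.get(key) == "ESTABLISHED" else "deny"

    handle_outbound("tcp", "10.0.0.5", 51000, "93.184.216.34", 443)
    print(handle_inbound("tcp", "93.184.216.34", 443, "10.0.0.5", 51000))  # allow
    print(handle_inbound("tcp", "203.0.113.7", 443, "10.0.0.5", 51000))    # deny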
An Intrusion Detection System (IDS) is a network security solution that monitors
network traffic for suspicious activities and alerts the network administrator when such
activities are detected. The primary role of IDS is to identify potential threats or malicious
activities within the network. When suspicious activities are detected, IDS generates alerts to
notify the network administrators for further analysis. It keeps detailed logs of all detected
activities, which can be used for analyzing attack patterns or forensic investigations. Two main
deployment locations exist for IDS. Network-based IDS (NIDS) monitors the entire network's
traffic by analyzing data packets flowing through the network; it is typically placed at
strategic points within the network to detect potential attacks at the network level.
Host-based IDS (HIDS) is installed on individual devices or hosts within the network and
monitors activities such as file changes, system logs, and processes on the host system for
any suspicious behavior.
Apart from its deployment location, IDS also differs in terms of the methodology used for
identifying potential intrusions. Signature-Based Detection uses predefined patterns
(signatures) of known attacks to identify threats; it is effective against known threats but
may fail to detect new or unknown attacks. Anomaly-Based Detection establishes a baseline of
normal network behavior and alerts administrators when it detects deviations from this norm;
it is capable of identifying previously unknown threats but may produce more false positives.
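As a rough illustration of the difference between the two approaches, the sketch below
implements a trivially simplified signature check and anomaly check in Python; the
signatures, baseline, threshold, and traffic values are invented for illustration.

    # Signature-based: flag payloads that contain a known attack pattern.
    SIGNATURES = [b"/etc/passwd", b"<script>", b"' OR 1=1"]

    def signature_alert(payload: bytes) -> bool:
        return any(sig in payload for sig in SIGNATURES)

    # Anomaly-based: flag traffic that deviates too far from a learned baseline.
    BASELINE_PACKETS_PER_SEC = 120.0   # learned from normal traffic
    THRESHOLD = 3.0                    # alert if rate exceeds 3x the baseline

    def anomaly_alert(packets_per_sec: float) -> bool:
        return packets_per_sec > THRESHOLD * BASELINE_PACKETS_PER_SEC

    print(signature_alert(b"GET /etc/passwd HTTP/1.1"))  # True: known pattern
    print(anomaly_alert(1500.0))                          # True: abnormal spike
    print(anomaly_alert(130.0))                           # False: near baseline

The signature check can only catch attacks someone has already catalogued, while the
anomaly check can flag novel behavior at the cost of occasional false positives, mirroring
the trade-off described above.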
An Intrusion Prevention System (IPS) is a network security tool that not only detects
malicious activities but also takes action to prevent these activities from succeeding. It actively
monitors network traffic and can block, reroute, or drop malicious data packets to prevent
attacks. In addition to detecting threats, IPS actively works to stop them by taking immediate
action. It can block malicious traffic, prevent unauthorized access, and contain the threat in
real-time. IPS automatically implements countermeasures to minimize damage and prevent
attacks from compromising the network. IPS solutions are placed within flowing network
traffic, between the point of origin and the destination. IPS might use any one of the multiple
available techniques to identify threats. For instance, signature-based IPS compares network
activity against the signatures of previously detected threats. While this method can easily
deflect previously spotted attacks, it’s often unable to recognize newly emerged threats.
Conversely, anomaly-based IPS monitors abnormal activity by creating a baseline standard for
network behavior and comparing traffic against it in real-time. While this method is more
effective at detecting unknown threats than signature-based IPS, it produces both false positives
and false negatives. Cutting-edge IPS are infused with artificial intelligence (AI) and machine
learning (ML) to improve their anomaly-based monitoring capabilities and reduce false alerts.
Types of VPN
Remote Access VPN:
Allows individual users to connect to a private network remotely. Commonly used by
remote workers to access company resources securely. Users connect to the VPN server over
the Internet, which then provides access to the internal
network. A remote access VPN securely
connects a device outside the corporate office. These devices are known as endpoints and may
be laptops, tablets, or smartphones. Advances in VPN technology have allowed security checks
to be conducted on endpoints to make sure they meet a certain posture before connecting. Think
of remote access as computer to network.
Site-to-Site VPN:
Connects entire networks to each other, enabling secure communication between
multiple sites. Often used by businesses with multiple offices to connect their local area
networks (LANs). A VPN gateway is set up at each site, allowing secure traffic between the
networks. Site-to-site VPNs are mainly used in large companies. They are complex to
implement and do not offer the same flexibility as other VPNs. However, they are the most
effective way to ensure communication within and between large departments.
Client-to-Site VPN:
Similar to remote access VPN, but specifically designed for mobile users or devices to
connect to a corporate network. Useful for employees who need to connect to the corporate
network while traveling or working from various locations. Users install a VPN client on their
device to connect securely to the corporate network. The advantage of this type of VPN access
is greater efficiency and universal access to company resources. Provided an appropriate
telephone system is available, employees can, for example, connect with a headset and work
as if they were at their company workplace; customers cannot even tell whether an employee
is in the office or working from home.
Access Control
Access control mechanisms regulate who can access or use network resources and data.
It ensures that only authorized users have permission to access certain areas of the network.
Access control is a data security process that enables organizations to manage who is
authorized to access corporate data and resources. Secure access control uses policies that
verify users are who they claim to be and ensures that appropriate access control levels are granted
to users. Implementing access control is a crucial component of web application security,
ensuring only the right users have the right level of access to the right resources. The process
is critical to helping organizations avoid data breaches and fighting attack vectors, such as a
buffer overflow attack, KRACK attack, on-path attack, or phishing attack.
2. Authorization
Authorization adds an extra layer of security to the authentication process. It specifies
access rights and privileges to resources to determine whether the user should be granted access
to data or make a specific transaction.
For example, an email service or online bank account can require users to provide two-
factor authentication (2FA), which is typically a combination of something they know (such
as a password), something they possess (such as a token), or something they are (like a
biometric verification). This information can also be verified through a 2FA mobile app or a
thumbprint scan on a smartphone.
3. Access
Once a user has completed the authentication and authorization steps, their identity
will be verified. This grants them access to the resource they are attempting to log in to.
4. Manage
Organizations can manage their access control system by adding and removing the
authentication and authorization of their users and systems. Managing these systems can
become complex in modern IT environments that comprise cloud services and on-premises
systems.
5. Audit
Organizations can enforce the principle of least privilege through the access control
audit process. This enables them to gather data around user activity and analyze that
information to discover potential access violations.
Techniques:
Role-Based Access Control (RBAC): Access is based on the user's role within the
organization. RBAC creates permissions based on groups of users, the roles that users
hold, and the actions that users take. Users can perform any action enabled for their
role and cannot change the access control level they are assigned (see the sketch after
this list).
Multi-Factor Authentication (MFA): Requires multiple forms of verification to
grant access, such as passwords and biometric scans.
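A minimal sketch of an RBAC permission check in Python; the roles, permissions, and user
names shown are invented for illustration.

    # Map each role to the set of actions it is allowed to perform.
    ROLE_PERMISSIONS = {
        "admin":  {"read", "write", "delete", "manage_users"},
        "editor": {"read", "write"},
        "viewer": {"read"},
    }

    # Map each user to a single assigned role.
    USER_ROLES = {"alice": "admin", "bob": "editor", "carol": "viewer"}

    def is_allowed(user: str, action: str) -> bool:
        """Grant access only if the user's role includes the requested action."""
        role = USER_ROLES.get(user)
        return action in ROLE_PERMISSIONS.get(role, set())

    print(is_allowed("bob", "write"))    # True: editors may write
    print(is_allowed("carol", "delete")) # False: viewers may only read

Because permissions attach to roles rather than individuals, revoking or granting access
for many users at once is a single change to the role definition.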
Summary
Computer networks are essential for connecting devices, sharing resources, and enabling
communication in various settings, from homes to large businesses and across the internet.
Understanding the types, topologies, protocols, and components of networks is crucial for
designing efficient systems that meet specific communication needs. Networks play a
significant role in both personal and professional environments, driving innovation,
productivity, and collaboration.
Network security is a critical aspect of protecting digital communication and data from a
variety of threats. It includes a range of techniques, technologies, and best practices aimed at
preventing unauthorized access, detecting intrusions, and responding to security incidents. By
implementing robust network security measures, organizations can safeguard their sensitive
information, maintain the integrity of their systems, and ensure the continued availability of
their services.
Encryption and decryption are fundamental to modern data security, protecting information
from unauthorized access and ensuring privacy. Encryption transforms readable data into an
unreadable format, while decryption reverses the process to restore the original data. There are
two main types of encryption: symmetric (using the same key for both encryption and
decryption) and asymmetric (using different keys for encryption and decryption). Encryption
is essential for securing communications, protecting sensitive data, and maintaining trust in
digital interactions. The use of strong encryption methods and proper key management
practices is critical to safeguarding sensitive information against cyber threats.
Worms, viruses, and hacking are critical cybersecurity threats that can disrupt computer
systems, steal data, and cause financial and reputational damage. While worms and viruses are
specific types of malware that spread and infect systems, hacking refers to unauthorized
activities aimed at exploiting system vulnerabilities. Employing robust security measures, such
as antivirus software, firewalls, regular updates, and user education, is essential to defend
against these cyber threats and protect digital assets.
Computer Network and Security work together to ensure secure and efficient
communication between devices and systems. Computer networks enable data exchange and
resource sharing, while network security focuses on protecting this data from threats like
hacking, malware, and unauthorized access. Key security measures include encryption,
firewalls, IDS/IPS, and secure authentication mechanisms. These technologies are essential for
safeguarding data, maintaining user privacy, and ensuring the integrity of communication
systems.
Exercises:
Multiple Choice: Choose the letter of the correct answer
1. It can be a software program or a hardware device that filters all data packets coming
through the internet, a network, etc. It is known as the _______:
a) Antivirus
b) Firewall
c) Cookies
d) Malware
2. A local area network (LAN) is defined by _______________
a) The geometric size of the network
References:
Textbooks
Joachim Speidel (2021). Introduction to Digital Communications (2nd ed.). Springer Nature
Switzerland AG
Stallings, William. (2014). Data and Computer Communications (10th ed.). Upper Saddle
River: Pearson Education, Inc.
Leis, John. W. (2018). Communication Systems Principles using MATLAB. Hoboken, NJ:
John Wiley & Sons.
Sibley, Martin. (2018). Modern Telecommunications: Basic Principles and Practices. Boca
Raton, Fl: CRC Press, Taylor & Francis Group.
Electronic Sources
TheKnowledgeAcademy. “Digital Communication: Definition, Examples and Its Types.”
www.theknowledgeacademy.com
“The History of Communications.” World101 from the Council on Foreign Relations, 17 Dec.
2022, world101.cfr.org/global-era-issues/globalization/two-hundred-years-global-communications.
Admin. (2022, May 16). Pulse code modulation - modulation, types, advantages and
disadvantages, applications. BYJUS. https://byjus.com/physics/pulse-code-modulation
Agarwal, T. (2021, March 15). Pulse code modulation and demodulation : Block Diagram & its
working. ElProCus. https://www.elprocus.com/pulse-code-modulation-and-demodulation
https://www.geeksforgeeks.org (GeeksforGeeks)
https://www.spiceworks.com (Spiceworks)
https://www.cisco.com (CISCO)
https://www.fortinet.com (FORTINET)