
PANGASINAN STATE UNIVERSITY


URDANETA CITY CAMPUS

DATA AND DIGITAL COMMUNICATIONS

ENGR. EMMERSON A. CANUEL, MSME


TABLE OF CONTENTS
I. UNIT 1 INTRODUCTION TO DIGITAL COMMUNICATION
Definition of Digital Communications………………………………..1
History of Digital Communications……………………………..…….2
Elements of Digital Communications…………………………………6
Summary………………………………………………………………….………..8
Exercises…………………………………………………………………………….9
II. UNIT 2 MODULATION METHODS
Introduction to Modulation….……………………………….………….11
Analog Modulation……………………….……………….…………..…….12
Pulse Code Modulation..…………………………………………..………19
Digital Modulation…………………………………………………………….27
Summary………………………………………………………………….………34
Exercise …………………………………………………………………………..34
III. UNIT 3 INFORMATION THEORY
Introduction to Information Theory…………………….………….36
Entropy……………….……………………….……………….…………..…….37
Divergence….……………………………………………………………………39
Mutual Information …………………………………………………………41
Summary………………………………………………………………….………43
Exercise …………………………………………………………………………..44
IV. INTRODUCTION TO DATA COMMUNICATION
Definition …………………….…………………………………………….…….45
History of Data Communication ……………………….…………….46
Elements of Data Communication……………………………………49
Summary………………………………………………………………….………51
Exercise …………………………………………………………………………..52
V. DATA TRANSMISSION
Introduction to Data Transmission………………………………….54
Data Transmission Media and Technologies.………………….62
Data Transmission Modes…………..…………………………………..64
Data Communication Standard….……………………………………66
Summary………………………………………………………………….………71
Exercise …………………………………………………………………………..73
VI. COMMUNICATION PROTOCOL
Introduction to Communication Protocol ……………………….75
Network Topology.…………………………………………………….…….78
Network Architecture……………………………………………………….80
OSI….……………………………………………………………..…………………81
UDP ……………………………………………………………………………….…84
TCP/IP….……………………………………………………………………………85
Summary………………………………………………………………….………88
Exercise …………………………………………………………………………..89
VII. ERROR DETECTION AND CORRECTION
Types of Error …………………………………………………………………91
Error Detection ……………………………………… ………………………92
Error Correction ………………………………………………………………95
Summary………………………………………………………………….………98
Exercise …………………………………………………………………………..99
VIII. INTRODUCTION TO COMPUTER NETWORK AND SECURITY
Computer Networks…………………………………………………………101
Encryption and Decryption………………………………………………106
Virus, Worms and Hacking ……………………….……………………109
Network Security ………………………………………………………….…112
Summary………………………………………………………………….………118
Exercise …………………………………………………………………………..118
ABOUT THE AUTHOR

Engr. Emmerson A. Canuel is a dedicated educator


and engineer with a robust academic background
and a passion for empowering students in the field
of electronics and communications. He earned his
Bachelor of Science in Electronics and
Communications Engineering (BSECE) from
Saint Louis University in Baguio City, where he
laid the foundation for his technical expertise. Further enhancing his
qualifications, he pursued a Master of Science in Management
Engineering (MSME) at the University of Pangasinan in Dagupan City.
Since 2015, Engr. Canuel has been an influential instructor at
Pangasinan State University, Urdaneta City Campus, where he shares
his knowledge and experience with aspiring engineers. His teaching
portfolio includes Engineering Mathematics and various subjects
related to Electronics and Communications, reflecting his commitment
to academic excellence and innovation in education. Engr. Canuel also
served as the Pollution Control and Energy Efficiency Coordinator
(2021-2023) for PSU Urdaneta Campus, demonstrating his
commitment to sustainable practices and environmental stewardship
within the educational institution. His expertise is recognized beyond
the classroom, as he actively coaches and mentors students for
regional and national competitions, inspiring them to excel and achieve
their goals in the competitive field of engineering. His contributions to
the engineering community reflect his commitment to excellence and
innovation in education and practice. Engr. Canuel’s passion for
teaching and commitment to student success make him a respected
figure in the academic and engineering communities, dedicated to
fostering the next generation of engineers.
Preface
The Data and Digital Communications module is designed to provide a
comprehensive understanding of the principles and technologies that underpin
modern communication systems. In today's interconnected world, data
communication plays a crucial role in enabling the seamless transfer of
information across local and global networks. This module focuses on both the
theoretical foundations and practical applications of data and digital
communication systems, aiming to equip students with the skills and knowledge
required to design, analyze, and optimize these systems.

This module begins by exploring the fundamental concepts of data transmission,
signal processing, and the conversion of analog signals into digital formats.
Students will delve into topics such as modulation techniques, encoding schemes,
error detection and correction, and the characteristics of different transmission
media. The emphasis on digital communication ensures a solid understanding of
how information is processed and transmitted efficiently in various formats, from
simple text to complex multimedia data.

Students will also study the different layers of communication protocols,
including data link, network, transport, and application layers, to understand how
data is packaged, transmitted, and received in a structured manner. Concepts such
as multiplexing, switching techniques, flow control, and network security will be
covered in detail to illustrate the robustness and reliability of communication
systems.

By the end of this module, students will have a strong foundation in both
theoretical concepts and practical skills in data and digital communications. They
will be prepared to face the evolving demands of the communications industry,
contribute to innovations in technology, and apply their learning to various fields,
including telecommunications, networking, signal processing, and beyond. This
module serves as a stepping stone for those aspiring to become proficient in
designing and managing modern communication systems that are reliable, secure,
and efficient.

The landscape of data communication has evolved dramatically over the years,
driven by advancements in digital technology, the proliferation of the internet,
and the growing demand for high-speed connectivity. As such, this module
explores a range of topics that are crucial for understanding how data is
transmitted, received, and processed in digital communication systems.
Unit 1
Introduction to Digital Communications

Objectives

 Clarify the concept of digital communication, highlighting its significance in


contemporary society.
 Provide a brief historical overview of the development of digital communication.
 Discuss the impact of digital communication on various aspects of modern life, such as
business, education, and interpersonal relationships.
 Introduce the fundamental elements of digital communication, including but not limited
to channels, encoding, decoding, and feedback.

Definition of Digital Communication

Digital communication refers to the transmission of information electronically using
digital signals. It involves the encoding, transmission, and decoding of information, which can
be in the form of data, images, videos, or audio. Digital communication systems are widely
used in various fields such as telecommunications, data networks, and broadcasting due to their
ability to transmit information efficiently, securely, and with less error compared to analog
systems.

Key Concepts in Digital Communication:


1. Digital vs. Analog Communication:
o Analog communication transmits information in a continuous signal that
varies with time (e.g., radio signals). However, analog signals are susceptible
to noise and distortion.
o Digital communication uses discrete signals (binary values 0 and 1). Digital
signals are more robust to noise and easier to compress, encrypt, and transmit
over long distances.

2. Basic Components of Digital Communication:


o Source: Generates the original message (e.g., text, audio, or video).
o Encoder: Converts the message into a digital format (such as binary data).
o Transmitter: Sends the encoded signal over a communication channel (e.g.,
fiber optics, airwaves).
o Communication Channel: The medium through which the signal is
transmitted (e.g., cables, wireless spectrum).
o Receiver: Receives the transmitted signal from the channel.
o Decoder: Converts the received signal back into its original form (e.g.,
decoding the binary data into audio).
o Destination: The final recipient of the message.

3. Advantages of Digital Communication:


o Noise resistance: Digital signals are less affected by noise compared to analog
signals.

o Error detection and correction: Digital communication systems can use
techniques like parity checks and cyclic redundancy checks (CRC) to detect
and correct errors in transmission (see the parity sketch after this list).
o Compression: Data can be compressed to use bandwidth efficiently.
o Security: Encryption algorithms can be applied to protect digital
communications from eavesdropping.
o Efficient use of bandwidth: Advanced techniques like multiplexing allow for
more efficient use of available bandwidth.

4. Modulation Techniques:
o ASK (Amplitude Shift Keying): Varies the amplitude of the carrier signal to
represent binary data.
o FSK (Frequency Shift Keying): Uses different frequencies to represent
binary values.
o PSK (Phase Shift Keying): Changes the phase of the carrier wave to transmit
data.

5. Applications of Digital Communication:


o Telecommunications: Mobile phones, satellite communication, and internet
services all use digital communication.
o Data Transmission: The internet relies on digital communication to transmit
data between computers and networks.
o Broadcasting: Digital television and radio use digital signals for clearer and
more reliable broadcasting.
o Multimedia: Video streaming, online gaming, and video conferencing all rely
on digital communication technologies.
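
The error-detection advantage noted above can be made concrete with the simplest
such scheme, a single parity bit. The following is a minimal Python sketch (the helper
names are illustrative, not from any standard library); note that one parity bit can
detect an odd number of flipped bits but cannot correct them:

    def add_parity(bits):
        # Append an even-parity bit so the total number of 1s is even.
        return bits + [sum(bits) % 2]

    def check_parity(bits):
        # Return True if no error is detected (total number of 1s still even).
        return sum(bits) % 2 == 0

    word = [1, 0, 1, 1]              # data bits
    sent = add_parity(word)          # [1, 0, 1, 1, 1]
    print(check_parity(sent))        # True: clean transmission

    corrupted = sent.copy()
    corrupted[2] ^= 1                # flip one bit to simulate channel noise
    print(check_parity(corrupted))   # False: error detected (not correctable)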

Digital vs. Analog Communication Summary


Feature Analog Communication Digital Communication

Signal Type Continuous Discrete (binary)

Noise Resistance Low High

Error Detection Limited Advanced techniques available

Bandwidth Usage Typically higher More efficient (compression techniques)

Security Less secure Highly secure (encryption techniques)

Complexity Lower Higher

History of Digital Communication

The history of digital communication is deeply rooted in technological advancements over
centuries, leading to the sophisticated systems we use today. It evolved from early attempts at
binary communication and the use of basic electrical circuits to modern digital

telecommunication systems. Below is a chronological overview of key milestones in the
history of digital communication:

1. Early Developments (1800s - 1940s)

 Morse Code (1837): One of the earliest forms of digital communication. Developed
by Samuel Morse, it used a binary system of dots and dashes to represent letters and
numbers, transmitting messages over telegraph lines.

 Telegraph (1837): The telegraph, invented by Samuel Morse, allowed long-distance
communication by transmitting electrical signals over wires. It used digital-like
signaling by converting messages into a series of electrical impulses (dots and
dashes).

 Pulse Code Modulation (PCM) (1937): Invented by Alec Reeves, PCM became a
foundational technology in digital communication. It involves converting analog
signals (like voice) into digital form by sampling the signal at regular intervals and
encoding the amplitude into binary format. PCM is still widely used in digital
telephony and audio.

2. World War II and Post-War Developments (1940s - 1960s)

 Shannon’s Information Theory (1948): Claude Shannon, often referred to as the
father of digital communication, published "A Mathematical Theory of
Communication," which laid the foundation for modern digital communication. His
theory introduced the concept of the "bit" as a fundamental unit of information and
provided a framework for understanding data transmission, channel capacity, and
error correction.
 Binary Communication: During this period, advancements in binary systems began
to shape modern communication technologies. Binary systems, based on 1s and 0s,
allowed for simpler and more reliable communication compared to analog systems,
especially in noisy environments.
 Transistors (1947): The invention of the
transistor by John Bardeen, Walter Brattain,
and William Shockley revolutionized
electronics and communications. Transistors
replaced vacuum tubes, making
communication devices smaller, faster, and
more reliable.

3. The Digital Revolution (1960s - 1980s)

 Integrated Circuits (ICs) (1958): Jack Kilby and Robert Noyce independently
developed the integrated circuit, which significantly reduced the size and cost of
electronic systems, enabling more complex digital communication systems.

 Digital Telephony (1960s): The first digital telephone systems emerged, replacing
older analog systems. Digital signals could be transmitted more efficiently and with
less noise over long distances. The first practical digital telephone exchange, known
as a T-carrier system (T1), was introduced by AT&T in 1962, using Pulse Code
Modulation (PCM) and Time Division Multiplexing (TDM).

 ARPANET (1969): The U.S. Department of Defense developed ARPANET, which
became the foundation for the internet. ARPANET was a packet-switching network
that used digital communication techniques to transmit data between computers. This
marked the beginning of the modern internet and digital data networks.
 Fiber Optics (1970s): The development of
fiber-optic cables allowed for the high-speed
transmission of digital data using light signals,
drastically improving the speed and capacity of
communication networks.
 Digital Encoding and Compression: Digital
communication systems in the 1970s and 1980s introduced advanced digital encoding
and compression techniques to reduce the amount of data needed for transmission.
This led to more efficient use of bandwidth.

4. Modern Era (1980s - Present)


 Mobile Communication (1980s): The first generation (1G) of mobile phones used
analog technology, but with the
advent of 2G in the 1990s, digital
communication became the
standard. 2G networks introduced
digital encoding and encryption,
enabling better sound quality and
data services (e.g., SMS).
 Internet and World Wide Web
(1990s): The internet became widely accessible to the public, and digital
communication via email, websites, and online services surged. The development of
protocols like TCP/IP (Transmission Control Protocol/Internet Protocol) standardized
the way data was transmitted over networks, allowing for global digital
communication.
 Digital Television and Radio (1990s
- 2000s): Television and radio
broadcasting shifted from analog to
digital formats, offering higher quality
audio and video, as well as more
efficient use of the broadcast
spectrum.

 Wireless Communication and Wi-Fi (1990s - 2000s): The development of wireless
communication technologies like Wi-Fi and Bluetooth allowed digital data to be
transmitted over short distances without wires. Wireless networks became an integral
part of everyday communication.
 High-Speed Broadband and Fiber Optic Networks (2000s): The deployment of
high-speed broadband and fiber-optic networks enabled rapid data transmission for
internet access, video streaming, and cloud services.

 4G and 5G Mobile Networks (2010s - Present): Fourth-generation (4G)
and fifth-generation (5G) networks
marked significant advancements in
mobile communication, enabling
faster data transfer, higher capacity,
and lower latency, thus supporting
modern applications like video
conferencing, IoT (Internet of
Things), and autonomous systems.
 Quantum Communication (Present and Future): In recent years, research into
quantum communication has gained traction. Quantum communication uses quantum
mechanics principles to transmit data, offering theoretically unhackable encryption
and faster communication speeds. This is considered the future of secure digital
communication.

Key Contributions to Digital Communication


 Claude Shannon: His information theory provided a mathematical framework for
digital communication, introducing concepts like data compression, signal-to-noise
ratio, and channel capacity.
 Alec Reeves: Developed Pulse Code Modulation (PCM), a critical technology for
converting analog signals into digital form.
 AT&T and Bell Labs: Pioneered many early developments in digital telephony and
communication systems.
Elements of Digital Communication

Digital communication systems consist of several critical elements that work together to
ensure the efficient and accurate transmission of digital data. These elements form a chain,
starting from the message source to the final recipient. Below are the key elements of a
digital communication system:

1. Information Source
 The information source generates the data that needs to be transmitted. This could be
in the form of text, audio, video, or any other type of data. For example, a microphone
capturing a voice, a computer sending a file, or a camera recording a video.

2. Source Encoder
 The source encoder converts the information from its original format (e.g., audio,
video, or text) into a digital signal, typically represented in binary format (0s and 1s).
This process is called encoding.
 The purpose of source encoding is to represent the data in the most efficient way,
often using data compression techniques to reduce redundancy and minimize the
amount of data that needs to be transmitted.

3. Channel Encoder
 The channel encoder adds extra bits to the encoded data to enable error detection and
correction. This is essential because, during transmission, the signal can encounter
noise or interference, which may lead to errors.
 Techniques like parity checks, cyclic redundancy check (CRC), or forward error
correction (FEC) are often employed to improve the reliability of data transmission.

4. Modulator
 The modulator converts the digital data into a form that can be transmitted over a
physical communication channel, such as a cable or wireless medium. This process is
called modulation.
 In modulation, the binary data is transformed into an analog signal by varying the
signal’s properties, such as amplitude, frequency, or phase. Common modulation
techniques include Amplitude Shift Keying (ASK), Frequency Shift Keying
(FSK), and Phase Shift Keying (PSK).
 Quadrature Amplitude Modulation (QAM) is often used in modern systems to
carry more data by combining both amplitude and phase changes.

5. Communication Channel
 The communication channel is the medium through which the signal is transmitted
from the sender to the receiver. Channels can be wired (such as fiber-optic cables,
coaxial cables, or twisted pair wires) or wireless (radio waves, microwaves, infrared
signals).
 Channels are subject to various forms of noise and interference, such as thermal noise,
signal fading, or electromagnetic interference, which can affect the quality of the
transmitted signal.

6. Demodulator
 The demodulator at the receiver end converts the modulated signal back into its
original digital form. This is the reverse of modulation.
 The demodulator retrieves the binary data (1s and 0s) from the analog signal by
interpreting the changes in the signal’s amplitude, frequency, or phase.

7. Channel Decoder
 The channel decoder detects and corrects any errors that occurred during
transmission, using the redundant bits that were added during the channel encoding
process. Error correction techniques such as Hamming codes or Reed-Solomon
codes can be applied.
 The decoder ensures that the original data is recovered as accurately as possible, even
if some errors occurred during transmission.

8. Source Decoder
 The source decoder converts the error-free digital signal back into its original form,
such as audio, video, or text. If the data was compressed during source encoding, it is
decompressed at this stage.
 This process is the reverse of source encoding, and it results in the reproduction of the
original message.

9. Destination
 The destination is the final point in the communication system where the decoded
message is delivered. This could be a speaker (for audio), a display screen (for video),
or a computer (for data files).
 The destination is where the information is consumed by the user or application.

Additional Concepts:
 Noise: External disturbances that can corrupt the signal during transmission. Noise is
a significant challenge in communication systems, and various techniques (like error
correction and modulation) help to mitigate its effects.
 Bandwidth: The capacity of the communication channel to carry information. Higher
bandwidth allows more data to be transmitted in a given time.
 Latency: The delay between the transmission and reception of a signal. Lower
latency is desirable in real-time applications like voice and video communication.
 Bit Rate: The rate at which data is transmitted over the channel, usually measured in
bits per second (bps). Higher bit rates enable faster communication.
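
To see how these elements chain together in practice, here is a deliberately simplified
end-to-end sketch in Python (all names are illustrative; the "channel" merely flips an
occasional bit, and a per-block parity bit stands in for real channel coding):

    import random

    def source_encode(text):                  # source encoder: text to bits
        return [int(b) for ch in text for b in format(ord(ch), "08b")]

    def channel_encode(bits):                 # add an even-parity bit per 8-bit block
        out = []
        for i in range(0, len(bits), 8):
            block = bits[i:i + 8]
            out += block + [sum(block) % 2]
        return out

    def channel(bits, p=0.001):               # noisy medium: rare random bit flips
        return [b ^ 1 if random.random() < p else b for b in bits]

    def channel_decode(bits):                 # strip parity; count detected errors
        data, errors = [], 0
        for i in range(0, len(bits), 9):
            block = bits[i:i + 9]
            errors += sum(block) % 2          # odd total -> an error was detected
            data += block[:8]
        return data, errors

    def source_decode(bits):                  # source decoder: bits back to text
        chars = [bits[i:i + 8] for i in range(0, len(bits), 8)]
        return "".join(chr(int("".join(map(str, c)), 2)) for c in chars)

    received, detected = channel_decode(channel(channel_encode(source_encode("hello"))))
    print(source_decode(received), "| detected errors:", detected)
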
Summary

Digital communication involves the exchange of information through digital
technologies, enabling real-time interaction and multimedia sharing over electronic devices
and networks. In organizational contexts, it extends to online communication efforts,
encompassing diverse channels like websites, mobile chat, and blogs. Proficient digital
marketing professionals play a crucial role in leveraging technology and messaging
convergence. Edward Powers, a professor at Northeastern, underscores the significant
advancements in message dissemination today compared to past decades. He emphasizes the
need for digital communication professionals to adeptly deploy new tools for effective
communication.

The evolution of telecommunications saw the early use of telegraph cables, followed
by the widespread adoption of the telephone in 1876. Subsequently, radio and movies became
popular forms of mass entertainment, with commercial radio broadcasts starting in 1920 and Hollywood
dominating global cinema. Television marked a new era in mass entertainment in 1928, and
satellite technology, initially for military purposes, became integral to global communications.

The introduction of mobile phones saw Motorola making the first cell phone call in
1973, while the World Wide Web, pioneered by Tim Berners-Lee in 1989, transformed internet
communication. Google's launch in 1998 revolutionized internet search, and the debut of SMS
in 1992 laid the foundation for text messaging. Social media platforms emerged, connecting
people globally, and the release of the iPhone in 2007 combined phone and computer functions.

However, challenges like issues of free speech, user privacy, and data security arose
with the growth of social media. Despite the ubiquity of smartphones, not everyone has equal
access. The COVID-19 pandemic further accelerated the shift to online activities, prompting
schools, offices, and performances to move online. Frontline workers faced increased infection
risks, and aspects of daily life shut down. Additionally, limitations in internet access hindered
remote learning for many children and young people.

Digital communication involves several key elements, including a sender who initiates
the communication, a message conveyed in various forms, encoding to prepare the message
for transmission, and a channel through which it is sent. The transmission process moves the
encoded message through digital platforms, and decoding occurs on the receiver's end to
interpret the message. The receiver, who is the intended audience, provides feedback,
completing the communication loop. Noise, or interference, may affect clarity during
transmission. Protocols ensure consistent interpretation, and storage retains messages for future
reference. Interactive features engage users in two-way communication. Understanding these
elements is essential for effective digital communication, shaping how information is shared
and received in the digital landscape.

Exercises:
Multiple Choice: Choose the letter of the correct answer.
1: What does digital communication encompass?
a.) Only the exchange of information through traditional means.
b.) The exchange of information using analog technologies
c.) The exchange of information, messages, and ideas through digital technologies and
platforms
2: How do organizations engage in digital communication with stakeholders?
a. Exclusively through physical meetings.
b. By using smoke signals.
c. Utilizing a diverse array of digital communication channels, including websites,
mobile chat, and blogs.
3: What role do proficient digital marketing professionals play?
a. Limited role in organizational communication.
b. Solely responsible for physical advertising
c. Navigating the convergence of technology and messaging effectively in online
communication efforts.
4: According to Edward Powers, what has significantly changed in today’s communication
landscape?

a) Messages are disseminated at a slower pace.


b) Options for disseminating messages have not changed.
c) Expanded and accelerated options for disseminating messages compared to a few
decades ago.
5: What does Edward Powers emphasize about digital communication professionals?
a. They need not consider new tools for effective communication.
b. New tools in communication have no impact.
c. They should carefully consider how to deploy new tools effectively for

communication.
6: When was the telephone patented?
a. 1858
b. 1876
c. 1907
d. 1928

7: Which social media platform originated at Harvard University in 2004?


a. Twitter
b. Instagram
c. Facebook
d. TikTok

8: What was the first artificial satellite, launched in 1957?


a. Sputnik 1
b. GPS
c. Telstar
d. Explorer 1

9: Who pioneered the World Wide Web in 1989?


a. Bill Gates
b. Tim Berners-Lee
c. Steve Jobs
d. Mark Zuckerberg

10: Which company introduced the first cell phone call in 1973?
a. Apple
b. Motorola
c. AT&T
d. Samsung

Unit 2
Modulation Methods

Objectives

• Define modulation and explain its significance in communication systems.


• Describe the concept of analog modulation.
• Compare and contrast these analog modulation methods.
• Introduce the concept of digital modulation.
• Explain the advantages of digital modulation in modern communication systems.
• Explore the trade-offs involved in selecting modulation techniques, including data rate
vs. bandwidth, robustness to noise, and spectral efficiency.
• Illustrate how different modulation methods address these trade-offs.
• Provide practical examples, simulations, or demonstrations to help students visualize
how modulation works in practice.

Introduction to Modulation

Modulation is the process of altering a carrier signal (typically a high-frequency wave)
in order to encode information for transmission over a communication channel. In digital
communication, modulation techniques are crucial because they allow digital signals to be
transmitted over various types of media, including radio waves, fiber optics, and cables. The
main goal of modulation is to transmit data efficiently, minimize noise interference, and
maximize the use of bandwidth.

Modulation methods can be divided into two broad categories:

 Analog modulation is the process of varying a continuous carrier wave in order to
transmit analog information (like voice or video signals) over a communication
channel. Unlike digital modulation, which uses discrete signals, analog modulation
deals with continuous waveforms, and its primary use is in broadcasting and
telecommunications systems. The key types of analog modulation are Amplitude

Modulation (AM), Frequency Modulation (FM), and Phase Modulation (PM).


Each of these modulates different aspects of the carrier wave (amplitude, frequency,
or phase) in accordance with the information signal.

 Digital modulation is the process of converting digital signals (binary data, i.e., 0s
and 1s) into an analog waveform suitable for transmission over various
communication channels like radio waves, fiber optics, or wired cables. The key
purpose of digital modulation is to efficiently transmit digital data over
communication media, ensuring reliability, reducing noise effects, and maximizing
bandwidth usage.

Analog Modulation

The key types of analog modulation are Amplitude Modulation (AM), Frequency
Modulation (FM), and Phase Modulation (PM). Each of these modulates different aspects
of the carrier wave (amplitude, frequency, or phase) in accordance with the information
signal.

ANALOG MODULATION

Parameters of Amplitude Modulation


In Amplitude Modulation (AM), the amplitude of the carrier wave is varied in
proportion to the instantaneous amplitude of the modulating (information) signal while the
frequency and phase of the carrier remain constant. Several key parameters define the
behavior and performance of AM, including carrier frequency, modulation index, bandwidth,
and power distribution.

1. Carrier Frequency (fc)

The carrier frequency (fc) is the frequency of the unmodulated carrier wave. In AM,
the carrier frequency remains constant, and its amplitude is modulated according to the
information signal. The carrier frequency is chosen based on the application (e.g., AM radio,
TV broadcasting).

Example:

 For AM radio broadcasting, carrier frequencies range from 530 kHz to 1710 kHz.

2. Modulating Frequency (fm)

The modulating frequency (fm) is the frequency of the modulating signal, which
contains the information to be transmitted. It determines how quickly the amplitude of the
carrier is varied.

Example:

 In AM radio, the modulating frequency corresponds to the audio signal, which


typically ranges from 20 Hz to 15 kHz.
3. Modulation Index (m)

The modulation index (m) represents the extent of modulation, calculated as the ratio of
the peak amplitude of the modulating signal (Am) to the peak amplitude of the carrier signal
(Ac): m = Am / Ac. The modulation index indicates how much the amplitude of the carrier
wave is varied by the modulating signal.

 Undermodulation: When m<1 the carrier is


not fully modulated.

 Perfect modulation: When m=1, the carrier


is fully modulated.

 Overmodulation: When m>1, the signal is


overmodulated, causing distortion.

Example:

 If the modulation index m=0.5, the carrier amplitude varies by 50% of its original
value.

4. Bandwidth (B)

The bandwidth (B) of an AM signal is the range of frequencies the modulated signal
occupies, given by B = 2 fm. In AM, the modulated signal contains the carrier and two sidebands: an upper

sideband (USB) and a lower sideband (LSB). The bandwidth is determined by the highest
frequency of the modulating signal.

 Double-sideband AM (DSB-AM): The total bandwidth is twice the highest
modulating frequency.

Example:

 If the highest modulating frequency is 5 kHz, the bandwidth of the AM signal is
B = 2 × 5 kHz = 10 kHz

5. Sidebands

In AM, the modulated signal consists of the carrier frequency and two sidebands:

 Lower Sideband (LSB): fc − fm
 Upper Sideband (USB): fc + fm

Each sideband contains a copy of the modulating signal, and the total bandwidth includes
both the upper and lower sidebands.

Example:

 If the carrier frequency is 1 MHz and the modulating frequency is 5 kHz, the
sidebands are at 995 kHz (LSB) and 1.005 MHz (USB).

6. Power in Amplitude Modulation

In AM, the total transmitted power is distributed across the carrier and sidebands. The
carrier consumes most of the power, while the sidebands carry the actual information.

Total Power (Pt):

The total power in an AM signal is given by: Pt = Pc (1 + m²/2)

Where:

 Pt is the total transmitted power.
 Pc is the power of the unmodulated carrier.
 m is the modulation index.

Carrier Power (Pc):

The carrier power is calculated as: Pc = Vc² / (2R)

Where:

 Vc is the carrier voltage.
 R is the load resistance.

Power in Sidebands (Psb):

The power in the sidebands is proportional to the square of the modulation index: Psb = Pc m²/2

Example:

 If Pc = 100 W and m = 0.5, the total transmitted power is Pt = 100 (1 + 0.5²/2) = 112.5 W
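
These relations are easy to verify in code. Below is a short Python sketch (the
function names are illustrative) that reproduces the worked example:

    def am_total_power(pc, m):
        # Total transmitted power: Pt = Pc * (1 + m**2 / 2)
        return pc * (1 + m**2 / 2)

    def am_sideband_power(pc, m):
        # Power carried by both sidebands combined: Psb = Pc * m**2 / 2
        return pc * m**2 / 2

    pc, m = 100.0, 0.5
    print(am_total_power(pc, m))      # 112.5, matching the example above
    print(am_sideband_power(pc, m))   # 12.5: only ~11% of Pt carries information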

Parameters of Frequency Modulation (FM)


In Frequency Modulation (FM), the frequency of the carrier signal is varied in
proportion to the amplitude of the modulating signal (information signal). Several key
parameters characterize and influence FM signals, including the modulation index,
bandwidth, frequency deviation, and signal-to-noise ratio (SNR).

1. Carrier Frequency (fc)

The carrier frequency (fc) is the frequency of the unmodulated carrier signal. In FM,
the carrier frequency remains constant in amplitude but varies in frequency according to the
input signal. The carrier frequency is selected based on the application and transmission
medium.

Example:

 For FM radio broadcasting, the carrier frequency typically ranges between 88 MHz
and 108 MHz.

2. Modulating Frequency (fm)

The modulating frequency (fm) is the frequency of the information (or modulating)
signal. This frequency determines how fast the carrier frequency is changing. In audio
transmission, this is the frequency range of the sound, and it typically lies between 20 Hz and
20 kHz for FM radio.

Example:

 In FM radio, the modulating signal (e.g., an audio signal) has frequencies that can
range from 30 Hz to 15 kHz.

3. Frequency Deviation (Δf)

Frequency deviation (Δf) refers to the amount by which the carrier frequency varies
from its unmodulated frequency in response to the modulating signal. It is determined by the
amplitude of the modulating signal. In FM, higher amplitudes of the input signal cause larger
deviations in the carrier frequency.

 Maximum frequency deviation: The peak frequency change from the carrier
frequency.

Example:

 In FM radio, the maximum allowable frequency deviation is ±75 kHz, meaning the
carrier frequency can shift by up to 75 kHz above or below its central frequency.

Formula:

Δf = kf Am

Where:

 kf is the frequency sensitivity (constant of proportionality).
 Am is the amplitude of the modulating signal.

4. Modulation Index (β)


The modulation index (β) in FM is the ratio of the frequency deviation (Δf) to the
modulating frequency (fm). It defines the extent of frequency modulation and affects both the
bandwidth and the nature of the FM signal.
Formula:

β = Δf / fm

 Wideband FM (WBFM): When β is greater than 1, this is called wideband FM,
where the frequency deviation is much larger than the modulating frequency (e.g., FM
radio broadcasting).
 Narrowband FM (NBFM): When β is less than 1, this is called narrowband FM,
where the frequency deviation is small compared to the modulating frequency (e.g.,
two-way radios or marine communication).
Example:
 In FM radio with a frequency deviation of ±75 kHz and a modulating frequency of
15 kHz, the modulation index is β = 75/15 = 5

5. Bandwidth (B)
The bandwidth (B) of an FM signal is the range of frequencies that the modulated
signal occupies. FM typically requires more bandwidth than amplitude modulation (AM)
because the frequency deviation increases with the amplitude of the modulating signal.

Carson's Rule:
A practical rule for estimating the bandwidth of an FM signal is Carson’s Rule, which is
given by: B = 2(Δf + fm)
This formula accounts for both the frequency deviation and the highest modulating
frequency. It provides an estimate of the total bandwidth required for FM transmission.
Example:
 In FM radio broadcasting with a frequency deviation of ±75 kHz and a maximum
modulating frequency of 15 kHz, the bandwidth is: B = 2(75 + 15) = 180 kHz
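
The FM parameters above can be tied together in a few lines of Python (a sketch with
illustrative names, using the broadcast-FM numbers from the examples):

    def fm_modulation_index(df, fm):
        # beta = peak deviation / modulating frequency (same units for both)
        return df / fm

    def carson_bandwidth(df, fm):
        # Carson's rule: B = 2 * (delta_f + fm)
        return 2 * (df + fm)

    df, fm = 75e3, 15e3                    # +/-75 kHz deviation, 15 kHz audio
    print(fm_modulation_index(df, fm))     # 5.0 -> wideband FM (beta > 1)
    print(carson_bandwidth(df, fm) / 1e3)  # 180.0 kHz, matching the example
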
6. Power Distribution
In FM, the total power of the modulated signal remains constant, unlike AM where the
power depends on the amplitude variations. However, the power is distributed across the
carrier and sidebands.
 Carrier Power: The power at the carrier frequency (fc) in FM decreases as the
modulation index increases because more power is transferred to the sidebands.
 Sidebands: FM generates multiple sidebands, each at different frequencies (fc ±
n*fm), where n is an integer.

7. Signal-to-Noise Ratio (SNR)
The signal-to-noise ratio (SNR) in FM determines the quality of the received signal. FM
has an advantage over AM in terms of SNR because ` FM signals are less affected by noise
(noise typically affects amplitude, not frequency).
 Capture Effect: In FM, the strongest signal tends to "capture" the receiver, reducing
interference from weaker signals. This enhances SNR in multi-signal environments.

Example:
 In a noisy environment, an FM radio receiver can lock onto the strongest station,
reducing interference from others.

Parameters of Phase Modulation (PM)

Phase Modulation (PM) is a form of modulation where the phase of the carrier
signal is varied in direct proportion to the instantaneous amplitude of the modulating

(information) signal. Unlike amplitude modulation (AM) or frequency modulation (FM), PM
directly affects the phase of the carrier. The key parameters that define the behavior and
characteristics of PM include the carrier frequency, modulation index, phase deviation,
bandwidth, and power distribution.

1. Carrier Frequency (fc)

The carrier frequency (fc) is the frequency of the unmodulated carrier signal. In PM,
the carrier frequency remains constant in terms of amplitude and frequency but changes its
phase according to the modulating signal.

Example:
 In PM-based communication systems, carrier frequencies can range from MHz to
GHz depending on the application (e.g., satellite communication, digital
broadcasting).

2. Modulating Frequency (fm)

The modulating frequency (fm) refers to the frequency of the input signal that
contains the information to be transmitted. This signal causes the phase of the carrier to vary.
The modulating frequency determines how rapidly the phase changes occur.

Example:
 If the modulating signal is an audio signal, the frequency range can be from 20 Hz to
15 kHz. `

3. Phase Deviation (Δθ)

Phase deviation (Δθ) refers to the maximum change in the phase of the carrier signal
from its unmodulated state. It depends on the amplitude of the modulating signal. A larger
amplitude of the modulating signal causes a greater phase deviation in the carrier.

Formula:
Δθ = kp Am
Where:
 kp is the phase sensitivity (a constant that defines how much the phase changes for a
given modulating signal amplitude).
 Am is the amplitude of the modulating signal.

Example:
 If the amplitude of the modulating signal increases, the phase deviation will increase,
leading to larger phase shifts in the carrier.

4. Modulation Index (β)

The modulation index (β) in PM is the ratio of the maximum phase deviation (Δθ) to
the modulating frequency (fm). It determines the degree of modulation and affects the
bandwidth of the signal.

Formula:

β = Δθ / fm
 A higher modulation index indicates that the phase of the carrier changes more
dramatically in response to the modulating signal.
 Wideband PM (WPM) occurs when the modulation index is greater than 1, while
narrowband PM (NPM) occurs when the modulation index is less than 1.

Example:
 If the maximum phase deviation is 90 degrees (or π/2 radians) and the modulating
frequency is 5 kHz, the modulation index is: β = (π/2)/5000 ≈ 3.14 × 10⁻⁴
5. Bandwidth (B)

The bandwidth (B) of a phase-modulated signal is the range of frequencies occupied
by the modulated signal. The bandwidth in PM is related to both the phase deviation and the
modulating frequency. Larger phase deviations and higher modulating frequencies result in
wider bandwidths.

Carson’s Rule (for approximate bandwidth):

For practical systems, Carson’s Rule is often used to estimate the bandwidth of PM
signals: B = 2(fm + Δf), where Δf is the peak frequency deviation caused by phase
modulation.

Example:
 If the maximum frequency deviation is 75 kHz and the highest modulating frequency
is 15 kHz, the bandwidth of the PM signal is approximately: B = 2 × (15 + 75) = 180 kHz

6. Signal-to-Noise Ratio (SNR)

The signal-to-noise ratio (SNR) in PM is an important measure of the quality of the
modulated signal. Phase modulation, like frequency modulation, is less susceptible to
amplitude noise. This makes PM relatively robust in noisy environments.
 Capture Effect: In systems where multiple signals are present, the strongest signal
"captures" the receiver, making PM resistant to interference from weaker signals.
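
To visualize how the carrier phase tracks the message, the sketch below (NumPy
assumed; the carrier, tone, and kp values are illustrative) generates a phase-modulated
signal s(t) = A cos(2π fc t + kp m(t)):

    import numpy as np

    fs = 50_000                       # sample rate (Hz)
    t = np.arange(0, 0.1, 1 / fs)     # 100 ms of signal
    fc, fm = 1_000, 50                # carrier and modulating frequencies (Hz)
    kp = np.pi / 2                    # phase sensitivity (rad per unit amplitude)

    m = np.sin(2 * np.pi * fm * t)            # modulating signal, amplitude 1
    s = np.cos(2 * np.pi * fc * t + kp * m)   # PM: phase follows m(t) directly

    # Peak phase deviation = kp * max|m(t)| = pi/2 radians here.
    print(round(kp * np.max(np.abs(m)), 3))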

PULSE CODE MODULATION (PCM)


Pulse Code Modulation (PCM) is a method used to digitally represent analog
signals. It is a type of digital signal where the amplitude of an analog signal is sampled and
quantized into discrete levels, and then encoded into a digital binary form. PCM is widely
used in digital audio, telephony, and other communication systems due to its simplicity and
effectiveness in transmitting analog information in a digital format.

The Pulse Code Modulation process is done through the following steps:
 Sampling
 Quantization
 Encoding
A block diagram of the Pulse Code Modulation process is shown in the figure below.

 Low-Pass Filter

A low-pass filter (LPF) is an electronic filter that allows signals with a frequency lower
than a specified cutoff frequency to pass through while attenuating (reducing) the amplitude
of frequencies higher than the cutoff. Low-pass filters are commonly used in a variety of
applications, such as signal processing, audio electronics, and communication systems.

 Sampling
Sampling involves measuring the amplitude of an analog signal at discrete time
intervals. The rate at which these samples are taken is known as the sampling rate or
sampling frequency, a crucial
parameter in digital signal processing, especially when converting analog signals to digital
form. It defines the number of samples of an analog signal that are taken per second during
the process of digitization. The sampling rate is measured in Hertz (Hz), where 1 Hz
represents one sample per second. In most applications, higher sampling rates capture more
detail from the analog signal, resulting in better fidelity when the signal is reconstructed.
The Nyquist Theorem, also known as the Nyquist-Shannon Sampling Theorem, is
a fundamental principle in digital signal processing that dictates how frequently an analog
signal must be sampled to accurately convert it into a digital form without introducing errors
or distortion. It states:

"To avoid aliasing and fully reconstruct a continuous-time signal from its
samples, the signal must be sampled at a rate that is at least twice the highest frequency
component present in the signal."

This minimum sampling rate is known as the Nyquist rate, and half of the sampling
rate is referred to as the Nyquist frequency.

 Sampling Rate (fs): The number of samples taken per second, measured in Hertz (Hz).
This is the frequency at which the analog signal is sampled.

 Nyquist Rate: The minimum required sampling rate to avoid aliasing. It must be at least
twice the maximum frequency (fmax) of the signal:
fs ≥ 2 fmax
where:
 fs is the sampling rate,
 fmax is the highest frequency component of the signal.

 Nyquist Frequency: The Nyquist frequency is half of the sampling rate:

fN = fs / 2
Signals with frequency components higher than the Nyquist frequency cannot be
correctly sampled and will cause aliasing. Aliasing occurs when a signal is undersampled,
meaning the sampling rate is less than twice the maximum frequency of the signal. In such
cases, higher-frequency components of the signal are misinterpreted as lower-frequency
components, resulting in distortion or a different signal from the original one.

If a signal with a frequency component of 1.5 kHz is sampled at 2 kHz (less than
twice the signal’s frequency), the signal will appear as a lower frequency after reconstruction,
causing aliasing. This distorts the signal and leads to inaccurate representation.

Examples

Voice Signals:
 The human voice typically contains frequencies up to 3.4 kHz. According to the
Nyquist theorem, to accurately sample and reconstruct the voice signal, the sampling
rate must be at least twice the highest frequency component:
fs ≥ 2 × 3.4 kHz = 6.8 kHz
 In practice, telephone systems use a sampling rate of 8 kHz, slightly higher than the
Nyquist rate, to ensure faithful voice transmission.

Audio CDs:
 Audio CDs are designed to cover the human hearing range, which is approximately 20
Hz to 20 kHz. Using the Nyquist theorem, the sampling rate for CD audio must be at
least twice the maximum frequency:
fs ≥ 2 × 20 kHz = 40 kHz
 CDs typically use a sampling rate of 44.1 kHz to ensure high-quality audio
reproduction without aliasing.
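
The aliasing example above is easy to check numerically (NumPy assumed; the tone
and rate are the ones from the text): sampling a 1.5 kHz tone at 2 kHz folds it down to
|1.5 kHz − 2 kHz| = 0.5 kHz.

    import numpy as np

    fs, f_tone = 2_000, 1_500              # undersampled: fs < 2 * f_tone
    t = np.arange(0, 1, 1 / fs)            # one second of samples
    x = np.sin(2 * np.pi * f_tone * t)

    # Locate the dominant frequency in the sampled data via the FFT.
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    print(freqs[np.argmax(spectrum)])      # 500.0 Hz: the tone has aliased down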

Practical Implications of the Nyquist Theorem


1. Anti-Aliasing Filters: In real-world applications, an anti-aliasing filter is applied
before sampling to remove frequency components that are higher than the Nyquist
frequency. This ensures that no frequencies above half the sampling rate are present in
the signal, preventing aliasing.

2. Oversampling: In some applications, signals are sampled at rates higher than the
Nyquist rate, a process called oversampling. This improves the quality of the digital
representation by reducing noise and providing more data points for signal processing.
3. Digital-to-Analog Conversion (DAC): The Nyquist theorem is also essential in DAC
systems. After a signal is sampled, it can be reconstructed accurately as long as the
sampling rate adheres to the Nyquist criteria.

 Quantization

Quantization is the process of mapping a continuous range of values (such as an
analog signal) into a finite range of discrete values. In digital signal processing, quantization
is a key step in converting an analog signal to a digital signal after it has been sampled. It is
used in analog-to-digital conversion (ADC) and involves assigning each sample of the
analog signal to a specific value within a set of discrete levels.

Types of Quantization
1. Uniform Quantization: In uniform quantization, the step size between adjacent
quantization levels is constant. This means that all values of the signal are quantized
with the same precision.
o Example: An 8-bit quantizer has 256 levels (from 0 to 255), and each sample
is mapped to the nearest of these levels.
2. Non-uniform Quantization: In non-uniform quantization, the step sizes are not
equal. This method is often used to allocate more precision to certain parts of the
signal range, such as in companding (μ-law or A-law) for audio signals.
o Example: In speech coding, non-uniform quantization is used to give more
precision to lower amplitude sounds (which are more perceptible to the human
ear).

Quantization Error

Quantization introduces an inevitable error because the continuous signal is mapped to
discrete values. The difference between the actual analog value and the quantized value is
called the quantization error or quantization noise.
 Quantization Error is the round-off error that occurs when the continuous signal is
approximated by one of the quantization levels.
 It is the difference between the input value and the quantized output value.

Formula for Quantization Error

For a signal with a range Vmax − Vmin and L quantization levels, the quantization step
size Δ is given by:

Δ = (Vmax − Vmin) / L

The maximum possible quantization error (for uniform quantization) is:

Max Quantization Error = Δ / 2

Example of Quantization Error:

If an audio signal with a range of 0-10 volts is quantized using an 8-bit quantizer (256 levels):

Δ = (10 − 0) / 256 ≈ 0.039 V

The maximum quantization error is:

Max Quantization Error = 0.039 / 2 ≈ 0.0195 V
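
A minimal uniform-quantizer sketch (the function name is illustrative; a mid-rise
reconstruction level is assumed) that reproduces these numbers:

    def quantize(x, v_min, v_max, bits):
        # Map sample x to the nearest of 2**bits uniform levels.
        levels = 2 ** bits
        step = (v_max - v_min) / levels               # step size (Delta)
        level = min(int((x - v_min) / step), levels - 1)
        return level, v_min + (level + 0.5) * step    # reconstruction value

    step = (10 - 0) / 256
    print(round(step, 4), round(step / 2, 5))   # 0.0391 V and 0.01953 V

    level, value = quantize(3.30, 0.0, 10.0, 8)
    print(level, round(value, 4))               # level 84, reconstructed ~3.3008 V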

 Encoding
Encoding is the process of converting data from one form to another, typically for the
purpose of communication, storage, or processing. In the context of digital communication
and signal processing, encoding refers to transforming a signal or data into a specific format
that is optimized for transmission, storage, or interpretation by another system. In digital
systems, encoding is crucial for ensuring that information is represented, transmitted, and
decoded efficiently and accurately.
`
Common Line Encoding Schemes:

 Unipolar NRZ (Non-Return to Zero)


 Represents a binary 1 with a positive voltage and a binary 0 with zero voltage.
 Simple but lacks synchronization between sender and receiver.
 Polar NRZ
 Represents a binary 1 with a positive voltage and a binary 0 with a negative voltage.
 More efficient than unipolar but can suffer from synchronization issues over long
sequences of 0s or 1s.
 NRZ-L (Non-Return to Zero-Level)
 Voltage level remains constant during a bit interval, and the level changes based on
the binary value: a negative voltage represents “1” and a positive voltage represents “0”.
 Similar to polar NRZ but specifically defined for logic circuits.
 NRZ-I (Non-Return to Zero Inverted)
 Data is represented by the change in voltage, not the level itself. A transition
represents a binary 1, and no transition represents a binary 0.
 Helps with synchronization as transitions help detect data.

 Bipolar NRZ (Alternate Mark Inversion - AMI)


 Binary 1s alternate between positive and negative voltages, while 0s are represented
by zero voltage.
 Helps in eliminating the DC component of the signal and allows for better
synchronization.

 Manchester Encoding
 Combines clock and data into one signal by making transitions in the middle of each
bit period.
 A high-to-low transition represents a binary 0, while a low-to-high transition
represents a binary 1.
 Self-clocking, but requires more bandwidth.

 Differential Manchester Encoding


 Similar to Manchester encoding, but data is represented by the presence or absence of
transitions rather than specific voltage levels. A transition at the start of a bit period
represents zero; no transition at the start of a bit period represents one.
 More robust against noise, as it relies on transitions.
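
To see how these rules translate into voltage levels, here is a short Python sketch
(illustrative, with +1/−1 standing in for the positive and negative voltages) of polar
NRZ-L and Manchester encoding:

    def nrz_l(bits):
        # Polar NRZ-L as described above: negative level for 1, positive for 0.
        return [-1 if b else +1 for b in bits]

    def manchester(bits):
        # Manchester: low-to-high mid-bit transition for 1, high-to-low for 0.
        # Each bit becomes two half-bit levels (first half, second half).
        return [lvl for b in bits for lvl in ((-1, +1) if b else (+1, -1))]

    data = [0, 1, 1, 0, 1]
    print(nrz_l(data))        # [1, -1, -1, 1, -1]
    print(manchester(data))   # [1, -1, -1, 1, -1, 1, 1, -1, -1, 1]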


 Regenerative Repeater

A regenerative repeater is a device used in digital communication systems to boost
the strength and integrity of transmitted signals over long distances. It plays a crucial role in
preventing signal degradation caused by noise, attenuation, and other impairments that occur
during transmission.

Key Functions of a Regenerative Repeater:

1. Signal Amplification:

o When digital signals travel over long distances, their amplitude gradually
decreases due to attenuation in the transmission medium (e.g., copper wires,
fiber optics). The regenerative repeater amplifies the signal to restore its
strength.
2. Noise Filtering:
o Over time, transmitted signals accumulate noise. A regenerative repeater is
capable of distinguishing the original signal from the noise, regenerating the
original digital signal and eliminating most of the noise.
3. Signal Reshaping:
o In digital communications, signals may lose their shape (become distorted) due to
factors like dispersion or interference. The regenerative repeater rebuilds the
original clean digital signal, correcting the waveform and pulse transitions,
essentially "regenerating" the data in its original form.
4. Timing Recovery:
o Regenerative repeaters also help in recovering the timing or clock signal from
the received data stream. They ensure that the receiver and transmitter stay
coordinated to appropriately interpret the data stream.
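
The decision step at the heart of regeneration can be sketched in a few lines of Python
(illustrative; a polar signal with additive Gaussian noise is assumed): each received
sample is compared against a threshold and snapped back to a clean level.

    import random

    sent = [+1, -1, -1, +1, -1, +1]                      # clean transmitted levels
    received = [s + random.gauss(0, 0.3) for s in sent]  # noise accumulates en route

    regenerated = [+1 if r > 0 else -1 for r in received]  # threshold decision at 0 V
    print(regenerated == sent)   # True with high probability at this noise level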

 Decoder
A decoder is a critical component responsible for deciphering or interpreting encoded
signals received from a transmitter back into their original data format. Decoding is vital for
rebuilding information that was encoded to guarantee reliable and efficient data transmission
over a communications channel.
When decoding encoded signals, a decoder is essential for understanding, analyzing, and
reassembling the original data. Decoders enable dependable communication by handling
errors, noise, and signal impairments, whether they are recovering data from an error-prone
communication link or restoring compressed information to its original format. The primary
methods of decoding employed in digital communications are listed below:

1. Source Decoding Techniques


Source decoding is used to recover data that was compressed or encoded for efficient
storage or transmission. These methods reverse the source encoding applied to the data.
2. Error Control Decoding Techniques
Error control decoding is used to detect and correct errors that occur during
transmission due to noise, interference, or other disturbances in the communications
channel.
3. Modulation Decoding Techniques
Modulation decoding is the process of extracting the original binary data from the
modulated signal that was transmitted over a channel.
4. Channel Decoding Techniques `
Channel decoding is responsible for reconstructing the original data after transmission
by mitigating the effects of noise and correcting errors.
5. Line Coding Decoding Techniques
Line coding is a technique used to convert binary data into a signal form for
transmission over a communication medium. The decoding process reconstructs the original
binary data from the received line-coded signal.

Decoders play a key role in ensuring that communication systems are efficient, robust,
and capable of handling noise and interference.

 Reconstruction Filter
In digital-to-analog conversion (DAC) systems, a reconstruction filter is a crucial element
that transforms a discrete-time digital signal into a continuous-time analog signal. Following
the DAC's conversion of a digital signal into a series of discrete pulses or samples, the
reconstruction filter smooths the signal to eliminate high-frequency elements and return the
signal to its original continuous analog waveform.

Key Functions of a Reconstruction Filter:

1. Smoothing the Signal:


A DAC normally produces a digital signal in the form of a "zero-order hold" output,
which is a sequence of pulses or steps. The high-frequency components linked to these

abrupt transitions are eliminated by the reconstruction filter, producing a continuous,
smooth signal.
2. Removing High-Frequency Aliasing:
During sampling, high-frequency components known as aliases may develop in the
signal if the Nyquist criterion is not rigorously adhered to. These undesirable higher-
frequency elements, which are byproducts of the sampling and conversion processes, are
eliminated by the reconstruction filter, which is often a low-pass filter.
3. Reconstructing the Original Analog Signal:
By interpolating between the discrete samples, the low-pass reconstruction filter recovers the continuous waveform. According to the sampling theorem, if the original signal was band-limited and sampled above the Nyquist rate, an ideal low-pass filter with a cutoff at half the sampling frequency reconstructs the original analog signal exactly.
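To make the smoothing step concrete, here is a minimal sketch of a reconstruction filter, assuming NumPy and SciPy are available; the 8 kHz sample rate, 200 Hz test tone, and 4th-order Butterworth filter are illustrative assumptions, not values from the text.

import numpy as np
from scipy.signal import butter, filtfilt

fs = 8000                                    # DAC sample rate (Hz), assumed
up = 16                                      # fine grid emulating continuous time
n = np.arange(80)                            # 10 ms worth of samples
samples = np.sin(2 * np.pi * 200 * n / fs)   # 200 Hz test tone, sampled
zoh = np.repeat(samples, up)                 # zero-order-hold "staircase" output

# 4th-order low-pass with cutoff at fs/2, normalized to the fine grid's Nyquist
b, a = butter(4, (fs / 2) / (fs * up / 2))
reconstructed = filtfilt(b, a, zoh)          # smoothed, alias-free waveform

The key design choice is the cutoff frequency: placing it at half the original sampling rate keeps the wanted baseband signal while rejecting the images created by the staircase steps.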

DIGITAL MODULATION
Types of Digital Modulation Techniques:
 Amplitude Shift Keying (ASK)
Amplitude Shift Keying (ASK) is a digital modulation technique where the
amplitude of a carrier wave is varied to represent binary data (0s and 1s). In ASK, the
carrier's amplitude is changed according to the binary information being transmitted, while its
frequency and phase remain constant. ASK is one of the simplest forms of digital modulation
and is widely used in low-bandwidth applications.

 The carrier wave is a continuous analog wave with a fixed frequency.


 The binary data modulates the carrier wave's amplitude:
o Binary 1: A higher amplitude is transmitted.
o Binary 0: A lower amplitude or no signal is transmitted (in the simplest case).

In essence, the signal's amplitude is turned "on" and "off" based on the bit stream
being transmitted, which is why ASK is sometimes referred to as On-Off Keying (OOK)
when dealing with binary signals.

Mathematical Representation:

The ASK signal can be mathematically represented as:
\[
s(t) =
\begin{cases}
A \cos(2\pi f_c t), & \text{if bit is } 1 \\
0, & \text{if bit is } 0
\end{cases}
\]

Where:
 A is the amplitude of the carrier wave.
 f_c is the frequency of the carrier wave.
 t is time.

The figure below shows in part (a) a digital message signal using two voltage levels: one level represents 1 and the other represents 0. The unmodulated carrier is illustrated in part (b). Parts (c) and (d) are the modulated waveforms using two versions of ASK: (c) uses OOK, and (d) uses binary ASK, or BASK.

A digital signal may be used to indicate a sequence of zeros and ones, as demonstrated in Figure A. On the graph, different time periods are marked by a succession of dotted vertical lines at equal intervals. The sequence 0 1 1 1 0 0 0 1 0 1 is shown from left to right; each 0 and 1 in the series corresponds to one of the time periods created by the dotted vertical lines. The voltage is initially at zero for one time period, climbs to one for three time periods, and then falls to zero for three time periods. It then rises to one for a single period, falls to zero for one period, and rises to one again for one period.
A sinusoidal waveform beginning at the origin is displayed in Figure B. It climbs gradually to a rounded one-volt peak, then descends smoothly to a rounded trough situated exactly the same distance below the X axis as the peak is above it. It rises and falls in this manner repeatedly, each trough the same depth and each
peak the same height as the previous one. Many cycles are displayed; the carrier completes three cycles in each time period marked by the dotted vertical lines, which sets the frequency of the sinusoidal waveform.
Figure A and B are combined to form Figure C. No waveform is displayed where the voltage of Figure A is 0; a sinusoidal segment with an amplitude of one volt is displayed where the voltage is one. A set of equally spaced dotted vertical lines marks the time periods, matching those in Figure A. After being at zero for one time period, the signal shows a sinusoid for three time periods before reverting to zero for three more. The sinusoid then appears for one time period, disappears for one time period, and reappears for one time period.
Figure D is comparable to Figure C, but instead of showing zero voltage, Figure D shows a waveform with a smaller amplitude. A full-amplitude sinusoidal signal of one volt is displayed where Figure A's voltage is one. Time periods are represented by a series of equidistant dotted vertical lines, as in parts A and C. The sinusoidal signal first appears with a smaller amplitude of around 0.5 volts for one time period. It then appears for three time periods at its maximum amplitude before returning to the smaller amplitude for three more time periods. After that, the sinusoid occurs for a single time period at full amplitude, then for a single time period at the small amplitude, and finally for a single time period at full amplitude once again.
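As a quick illustration of the waveforms just described, the sketch below generates the OOK and BASK signals, assuming NumPy; the bit pattern follows Figure A, the three-cycles-per-bit carrier follows Figure B, and the 0.5 V low amplitude for BASK is taken from the Figure D description.

import numpy as np

bits = [0, 1, 1, 1, 0, 0, 0, 1, 0, 1]     # the sequence from Figure A
fc = 3                                    # carrier cycles per bit period (Figure B)
spb = 100                                 # samples per bit period

t = np.arange(len(bits) * spb) / spb      # time measured in bit periods
carrier = np.cos(2 * np.pi * fc * t)
envelope = np.repeat(bits, spb).astype(float)

ook = envelope * carrier                  # Figure C: carrier switched on/off
bask = (0.5 + 0.5 * envelope) * carrier   # Figure D: 0.5 V for 0, 1 V for 1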

Baud rate is a measure of the number of signal changes or symbols that occur per second in a communication channel. It refers to the number of distinct symbol changes (also called modulation changes) that the transmission system can handle each second. A symbol can represent one or more bits, depending on the modulation scheme used. The relationship between the number of available symbols, M, and the number of bits that can be represented by a symbol, n, is \(M = 2^n\).

Key Concepts:

1. Baud vs Bit Rate:
o Baud rate measures the number of signal or symbol changes per second.
o Bit rate measures the number of bits transmitted per second.
o In cases where each symbol represents more than one bit (for example, in modulation schemes like QPSK or 16-QAM), the baud rate can be lower than the bit rate. This distinction becomes important in complex modulation schemes.
2. Relation Between Baud Rate and Bit Rate:

\(\text{Bit rate} = \text{Baud rate} \times \log_2 M\)


Where:
 M is the number of different symbols or signal states.
 \(\log_2 M\) indicates how many bits are transmitted per symbol. For instance, in binary modulation (like ASK or BPSK), where M = 2, one symbol represents 1 bit, and in 4-QAM, where M = 4, each symbol represents 2 bits.

Example:

 For a simple Binary ASK or BPSK system, each symbol represents 1 bit. So, if the
baud rate is 2400, the bit rate is also 2400 bits per second.

PANGASINAN STATE UNIVERSITY 29


Data and Digital Communications

 For a 16-QAM system, each symbol represents 4 bits (because M = 16 and \(\log_2 16 = 4\)). If the baud rate is 2400, the bit rate will be:
Bit rate = 2400 × 4 = 9600 bits per second

Here, you can transmit 9600 bits per second even though the symbol rate (baud rate)
is 2400 symbols per second.
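The relation above can be checked with a few lines of Python; the function name bit_rate is just an illustrative helper, not something from the text.

import math

def bit_rate(baud_rate, M):
    """Bits per second, given symbols per second and M signal states."""
    return baud_rate * math.log2(M)

print(bit_rate(2400, 2))    # BPSK/binary ASK: 2400.0 bps
print(bit_rate(2400, 16))   # 16-QAM:          9600.0 bps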

 Frequency Shift Keying (FSK)


Frequency Shift Keying (FSK) is a digital modulation technique in which the frequency
of the carrier signal is changed according to the digital data being transmitted. It is one of the
simplest and most widely used modulation methods in digital communication systems, where
different frequencies represent different binary values (0s and 1s). In FSK, the carrier
frequency is shifted between two or more discrete frequencies depending on the binary input
signal:
 Binary 1 is represented by one frequency, known as the "mark frequency" (higher
frequency).
 Binary 0 is represented by another frequency, called the "space frequency" (lower
frequency).

Types of FSK:
1. Binary FSK (BFSK):
o The simplest form of FSK where two distinct frequencies are used to represent
binary 1 and 0.
o For example:
 f₁ (higher frequency) represents binary 1.
 f₂ (lower frequency) represents binary 0.
2. Multiple Frequency Shift Keying (MFSK):
o Instead of just two frequencies, MFSK uses multiple distinct frequencies to
represent more than one bit per symbol.
o For example, in 4-FSK, four different frequencies are used to represent
combinations of two bits (00, 01, 10, 11).

Mathematical Representation:

The FSK signal can be expressed as:

\[
s(t) =
\begin{cases}
A \cos(2\pi f_1 t), & \text{if bit is } 1 \\
A \cos(2\pi f_2 t), & \text{if bit is } 0
\end{cases}
\]

Where:
 f_1 is the frequency for binary 1.
 f_2 is the frequency for binary 0.
 A is the amplitude of the carrier wave.
 t is time.
In Frequency Shift Keying (FSK), the relationship between bit rate, baud rate, and
bandwidth is important for understanding the efficiency of data transmission.

 Bit Rate in FSK:
 The bit rate is the number of bits transmitted per second.
 In Binary FSK (BFSK), where each symbol represents one bit (either a 0 or 1), the
bit rate is equal to the baud rate (since one symbol equals one bit).
 Baud Rate in FSK:
 The baud rate is the number of symbols transmitted per second.
 In BFSK, since one symbol represents one bit, the baud rate and bit rate are the same:
Baud rate (symbols/sec) = Bit rate (bits/sec)
 In Multiple FSK (MFSK), each symbol represents more than one bit. The baud rate
can be less than the bit rate if the modulation scheme uses multiple frequencies to
represent different symbols. For example, in 4-FSK, each symbol represents 2 bits, so
the bit rate would be twice the baud rate.
\(\text{Bit rate} = \text{Baud rate} \times \log_2 M\)

Where 𝑀 is the number of different frequencies used.

Example:
o In 4-FSK, M = 4, so \(\log_2 4 = 2\).
o If the baud rate is 1200 symbols per second, the bit rate would be:
Bit rate = 1200 × 2 = 2400 bits per second

 Bandwidth of FSK: `
The bandwidth of an FSK signal depends on several factors, including:
 Frequency Deviation (Δf): The difference between the frequencies representing
binary 0 and 1.
 Data Rate (R): The bit rate of the transmission.

The approximate bandwidth of an FSK signal can be calculated using Carson's Rule:
𝐵 ≈ 2 × (Δf + R)
Where:
 𝐵 is the bandwidth.
 Δf is the frequency deviation (half the difference between the two frequencies in
BFSK).
 R is the bit rate.

Example:
 In BFSK, if the frequency for a binary 0 is 1 kHz and for a binary 1 is 2 kHz, the
frequency deviation Δf is 500 Hz (half the difference between 1 kHz and 2 kHz).
 For a bit rate R=1200 bits per second (bps):

B ≈ 2 × (500 + 1200) = 2 × 1700 = 3400 Hz

Thus, the required bandwidth for this FSK signal would be approximately 3.4 kHz.
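A small sketch of the Carson's-rule calculation above, in plain Python; the helper name fsk_bandwidth is hypothetical, for illustration only.

def fsk_bandwidth(f0_hz, f1_hz, bit_rate_bps):
    """Approximate BFSK bandwidth by Carson's rule: B = 2 * (delta_f + R)."""
    delta_f = abs(f1_hz - f0_hz) / 2      # half the spacing between the tones
    return 2 * (delta_f + bit_rate_bps)

print(fsk_bandwidth(1000, 2000, 1200))    # 3400.0 Hz, i.e. about 3.4 kHz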
In FSK systems, the trade-off between bit rate, baud rate, and bandwidth is important
for optimizing communication, especially in applications where bandwidth is a limited
resource.

 Phase Shift Keying (PSK)

Phase Shift Keying (PSK) is a digital modulation technique where data is transmitted by varying the phase of a constant-amplitude carrier wave. In PSK, the phase of the carrier signal is modulated to represent different data symbols.

Mathematical Representation:
The PSK signal can be expressed as:
\(s(t) = A \cos(2\pi f_c t + \phi_n)\)

Where:
 A is the amplitude.
 f_c is the carrier frequency.
 \(\phi_n\) is the phase shift corresponding to the symbol.

Key Types of Phase Shift Keying:


1. Binary Phase Shift Keying (BPSK):
o BPSK is the simplest form of PSK, where the phase of the carrier is shifted
between two values to represent binary data.
o Typically, the phase is shifted by 180 degrees (π radians) to encode binary 0
and binary 1.
o The signal can be represented as: \(s(t) = A \cos(2\pi f_c t + \phi)\)

Where:
 A is the amplitude.
 f_c is the carrier frequency.
 \(\phi\) is the phase shift (0 or \(\pi\)).

2. Quadrature Phase Shift Keying (QPSK):


o QPSK modulates the phase of the carrier wave using four different phase
shifts, allowing it to encode two bits per symbol.
o The phase shifts are typically 0, 90, 180, and 270 degrees (or 0, π/2, π, 3π/2
radians).
o The signal can be represented as: \(s(t) = A \cos(2\pi f_c t + \phi_i)\), where \(\phi_i\) is one of the four possible phase shifts corresponding to the two-bit combinations (00, 01, 10, 11).
3. 8-Phase Shift Keying (8-PSK):
o 8-PSK extends QPSK by using eight different phase shifts to represent three bits per symbol.
o The phase shifts are spaced by 45 degrees (π/4 radians).
o The signal can be represented as: \(s(t) = A \cos(2\pi f_c t + \phi_i)\), where \(\phi_i\) corresponds to one of the eight possible phase shifts.
4. M-Phase Shift Keying (M-PSK):
o M-PSK generalizes the concept of PSK to M different phase shifts, where M
is any positive integer.
o Each phase shift represents a unique combination of bits. For example, 16-
PSK uses 16 different phase shifts to encode 4 bits per symbol.

Bit Rate
Bit Rate refers to the rate at which data bits are transmitted over a communication
channel. In PSK, it is directly related to the number of bits transmitted per second.
 For BPSK (Binary Phase Shift Keying), each symbol represents 1 bit.
o Bit Rate (R_b) = Symbol Rate = Baud Rate.
o Example: If you transmit 1,000 symbols per second, the bit rate is also 1,000 bits per second (bps).
 For QPSK (Quadrature Phase Shift Keying), each symbol represents 2 bits.
o Bit Rate (R_b) = 2 × Baud Rate.
o Example: If you transmit 1,000 symbols per second, the bit rate is 2,000 bps.
 For 8-PSK, each symbol represents 3 bits.
o Bit Rate (R_b) = 3 × Baud Rate.
o Example: If you transmit 1,000 symbols per second, the bit rate is 3,000 bps.

Baud Rate
Baud Rate is the rate at which symbols (distinct signal changes) are transmitted per
second. Each symbol in PSK carries a specific number of bits.
 For BPSK, the baud rate is equal to the bit rate.
o Baud Rate = Bit Rate.
 For QPSK, the baud rate is half of the bit rate.
o Baud Rate = Bit Rate / 2.
 For 8-PSK, the baud rate is one-third of the bit rate.
o Baud Rate = Bit Rate / 3.

Bandwidth
Bandwidth refers to the range of frequencies that a signal occupies. In PSK, the required bandwidth is closely related to the baud rate.
 For BPSK, the bandwidth BW is approximately equal to the baud rate.
o BW ≈ R_s, where R_s is the symbol rate (baud rate).
 For QPSK, the bandwidth is also approximately equal to the baud rate.
o BW ≈ R_s, where R_s is the symbol rate.
 For 8-PSK, the bandwidth is generally the same as for QPSK.
o BW ≈ R_s

Example
Suppose you want to transmit data with a bit rate of 1,000 bps:
 BPSK:
o Baud Rate = 1,000 symbols per second.


o Bandwidth ≈ 1,000 Hz.
 QPSK:
o Baud Rate = 500 symbols per second.
o Bit Rate = 1,000 bps.
o Bandwidth ≈ 500 Hz.
 8-PSK:
o Baud Rate = 333 symbols per second (approx).
o Bit Rate = 1,000 bps.
o Bandwidth ≈ 333 Hz.

These relationships show how different PSK schemes impact the transmission rate
and bandwidth requirements, with higher-order PSK schemes providing higher bit rates but
maintaining similar bandwidth efficiency as lower-order PSK schemes.
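The whole example can be reproduced with a short loop in plain Python; the printed values match the BPSK, QPSK, and 8-PSK figures above.

import math

bit_rate = 1000                            # target bit rate in bps
for name, M in [("BPSK", 2), ("QPSK", 4), ("8-PSK", 8)]:
    bits_per_symbol = int(math.log2(M))
    baud = bit_rate / bits_per_symbol      # symbols per second
    print(f"{name}: baud = {baud:.0f} sym/s, bandwidth = {baud:.0f} Hz")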

Summary

Every analog modulation technique, including AM, FM, and PM, has certain benefits and drawbacks. FM and PM offer superior noise tolerance at the cost of requiring more bandwidth than AM, which is simpler but more prone to noise. Even as digital modulation becomes increasingly prevalent in contemporary communication technologies, these analog techniques remain fundamental to communication systems, particularly in broadcast and telecommunications.

Analog signals can be digitized using pulse code modulation (PCM), which involves
sampling, quantizing, and encoding the signals into a binary representation. It is extensively
utilized in audio, telecommunications, and broadcasting and serves as the foundation of many
digital communication systems. Although PCM demands a significant amount of bandwidth
and cautious quantization error management, it provides high quality and accurate
representation of analog signals.

Digital modulation techniques are selected according to the particular needs of the
communication system and offer a variety of trade-offs between noise resistance, bandwidth
utilization, and system complexity. In digital communication, considerations including
system complexity, noise resistance, power efficiency, and bandwidth availability influence
the modulation technique chosen. Because of their spectral efficiency, QAM and PSK are typically used for high-speed communication systems, whereas FSK and MSK are preferred where bandwidth is less constrained and robustness to noise is the priority. Engineers can select the optimal modulation technique for a particular communication scenario by weighing the trade-offs between complexity, noise resistance, and bandwidth efficiency offered by each technique.

Exercises:

1. A carrier wave of frequency f = 1 MHz with a peak voltage of 20 V is used to modulate a signal of frequency 1 kHz with a peak voltage of 10 V. Find the following: (a) modulation index, (b) frequencies of the modulated wave, (c) bandwidth.
Ans.
2. A carrier is frequency modulated (FM) by a sinusoidal modulating signal x(t) of frequency


2 kHz and results in a frequency deviation Δ𝑓 of 5 kHz. Find the bandwidth occupied by
the FM waveform. The amplitude of the modulating sinusoid is increased by a factor 3 and
its frequency lowered by 1 kHz. Find the new bandwidth.
Ans.

3. Speech signal is bandlimited to 3 kHz and sampled at the rate of 8 kHz. To achieve the
same quality of distortion PCM requires 8 bits/sample and DPCM requires 4 bits/sample.
Determine the bit rates required to transmit the PCM and DPCM encoded signals.
Ans.

4. Determine (a) the peak frequency deviation, (b) minimum bandwidth, and (c) baud for a
binary FSK signal with a mark frequency of 49 kHz, a space frequency of 51 kHz, and an
input bit rate of 2 kbps.
Ans.

5. Given the following binary digits 1 1 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 1 0, illustrate the resulting encoded output using (a) NRZ-L, (b) NRZI, (c) Bipolar-AMI, (d) Manchester, (e) Differential Manchester.
Ans.
Unit 3
Information Theory

Objectives
• Define Information Theory and explain its significance in communication systems.
• Describe the concepts of entropy, divergence, and mutual information
• Understand how entropy bounds the amount of information a source can transfer
• Analyze and reduce redundancy in data
• Evaluate how much information is lost when one probability distribution is used to
represent another.
• Understand and minimize the impact of inaccuracies when approximating or
transforming data.
Introduction to Information Theory
Information is the meaning or element that is sent over a communication system between a sender and a recipient. It is the core entity being transmitted, processed, and interpreted. Any sort of data that has to be transferred from one place to another, including text, audio, and video, can be considered information. To facilitate fast and dependable transmission across several communication channels, digital communication entails converting this information into binary form, or bits. Information theory principles, like entropy, divergence, and mutual information, define the limits of data transmission and guide the design of efficient communication systems.
The subject of information theory encompasses the transmission, measurement, and storage
of information. It provides a mathematical structure for understanding the limitations on
transmitting, compressing, and communicating data in noisy environments. Information theory
is essential to various fields such as computer science, machine learning, data compression,
encryption, and telecommunications. In 1948, Claude Shannon established this field with his
influential paper "A Mathematical Theory of Communication."
The fundamental concept of information theory is that the level of "informational value" in
a communicated message is determined by the extent to which the message's content is
unexpected. When a highly probable event occurs, the message contains minimal information.
Conversely, when a highly improbable event occurs, the message is significantly more
informative. For example, knowing that a specific number will not be the winning number in a
lottery conveys very little information, as any chosen number is almost certain not to win.
However, knowing that a particular number will win a lottery holds high informational value
because it indicates the realization of a highly unlikely event.

Practical Implications of Shannon's Theory:

1. Data Compression:
o Shannon’s theory defines the limits of compressing data. The minimum average
number of bits required to encode the output of a source is given by the entropy.
Efficient encoding techniques (like Huffman coding or arithmetic coding) approach this theoretical limit.

2. Error Detection and Correction:


o Shannon’s theory led to the development of error-correcting codes (like
Hamming codes, Reed-Solomon codes, and turbo codes) that add redundancy
to transmitted data, allowing receivers to detect and correct errors caused by
noise in the channel.

3. Bandwidth and Capacity Trade-off:


o Shannon’s capacity theorem highlights the trade-off between bandwidth and signal power. Higher bandwidth allows for higher data rates, but it also requires more signal power to overcome noise (see the formula after this list).

4. Digital Communication Systems:


o The principles of Shannon’s theory are applied to the design of digital
communication systems, including cell phones, the internet, satellite
communications, and digital broadcasting. The use of modulation, coding, and
compression techniques all stem from his work.
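For reference, the trade-off in item 3 is quantified by the Shannon–Hartley theorem, the standard statement of channel capacity for a band-limited channel with additive white Gaussian noise:

\[
C = B \log_2\left(1 + \frac{S}{N}\right)
\]

where C is the channel capacity in bits per second, B is the bandwidth in hertz, and S/N is the signal-to-noise power ratio. Doubling the bandwidth scales capacity linearly, while increasing the signal power improves capacity only logarithmically.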

ENTROPY
Entropy (Measure of Information), denoted as H(X), is the fundamental measure of uncertainty or unpredictability in a source of information. It quantifies the average amount of information produced by a random source. In data compression, entropy helps define the theoretical limit for how much a data source can be compressed. The more uncertain an event, the more information it contains. For instance, a fair coin flip (with equal probability for heads or tails) has higher entropy than a biased coin.

\[
H(X) = -\sum_{x \in X} P(x) \log_2 P(x)
\]

Where:
 P(x) is the probability of outcome x.
 H(X) is the entropy of the source X, measured in bits (for base-2 logarithms).
 n is the number of possible symbols.

The negative sign ensures that the entropy is a positive quantity, as the probabilities P(x) are always between 0 and 1. When using logarithm base 2, entropy is measured in bits, which is common in digital communication. If the logarithm is base 10, entropy is measured in Hartleys. With natural logarithms (base e), entropy is measured in nats.

 High Entropy: When the source generates messages with equal probabilities, the uncertainty
is high, leading to higher entropy. For example, flipping a fair coin has an entropy of 1 bit,
since there’s an equal chance of getting heads or tails, meaning each outcome provides new
information.
 Low Entropy: When a source consistently generates the same message (i.e., predictable
data), the entropy is low. For example, if the coin always lands heads, the entropy is 0 bits,
since no new information is provided.

For instance, the encoding scheme: A = “0”, D = “1” and I = “01” is efficient but
extremely flawed, as the message "AD" and the message "I" are indistinguishable (both would
be transmitted as 01). In contrast, the encoding scheme
Letter   Encoding
A        00001
D        00111
I        11111

With this scheme, any stream of bits can be accurately reconstructed into the original message, but it is highly inefficient: significantly fewer bits would suffice to transmit messages reliably. It is common to wonder what the optimal encoding scheme is. While the answer depends heavily on the application, such as the use of error-correcting codes for transmission over noisy channels, one plausible answer is "the accurate encoding scheme that minimizes the expected length of a message." This theoretical minimum is determined by the entropy of the message.

Properties of Entropy: `

1. Non-Negativity: Entropy is always non-negative. If the source is deterministic (no uncertainty), the entropy is zero.

2. Maximization: Entropy is maximized when all outcomes are equally likely. For a source with n outcomes, the maximum entropy is \(\log_2 n\) bits, which occurs when each outcome has a probability of 1/n.

3. Additivity: If two independent sources are combined, the entropy of the combined
source is the sum of the individual entropies:
𝐻(𝑋, 𝑌) = 𝐻(𝑋) + 𝐻(𝑌)

4. Conditional Entropy: Conditional entropy measures the amount of uncertainty remaining about one random variable given that another is known. If Y is known, the conditional entropy of X given Y is:
\[
H(X|Y) = -\sum_{x,y} p(x, y) \log_2 p(x|y)
\]
This is useful for analyzing situations where partial information is available.

Examples:

1. Fair Coin Toss: If a coin is fair, the probabilities of heads and tails are both 0.5. The
entropy is:
\(H(X) = -[0.5 \log_2 0.5 + 0.5 \log_2 0.5] = 1 \text{ bit}\)
This means, on average, each coin flip provides 1 bit of information.

2. Biased Coin Toss: If a coin is biased, say with probabilities p(heads) = 0.8 and p(tails) = 0.2, the entropy is:
\(H(X) = -[0.8 \log_2 0.8 + 0.2 \log_2 0.2] \approx 0.722 \text{ bits}\)
This reflects that there is less uncertainty (or surprise) since heads is much more likely than tails.

3. Uniform Distribution: For a random variable with n equally probable outcomes, the entropy is maximized:
\(H(X) = \log_2 n\)
For example, for a fair 6-sided die:
\(H(X) = \log_2 6 \approx 2.585 \text{ bits}\)
This indicates that rolling a die produces more uncertainty and information than flipping a coin.
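A minimal entropy calculator, assuming NumPy, reproduces the three examples above; the helper name entropy is hypothetical, introduced only for this sketch.

import numpy as np

def entropy(probs):
    """Shannon entropy in bits; zero-probability outcomes contribute nothing."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

print(entropy([0.5, 0.5]))     # fair coin   -> 1.0 bit
print(entropy([0.8, 0.2]))     # biased coin -> ~0.722 bits
print(entropy([1/6] * 6))      # fair die    -> ~2.585 bits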

Applications of Entropy:

1. Data Compression: Entropy provides a lower bound on the average number of bits
needed to encode the output of a source. Efficient compression algorithms, like
Huffman coding or arithmetic coding, approach the entropy limit.

2. Communication Systems: In digital communication, entropy helps define the channel


capacity, which is the maximum rate at which information can be transmitted with
arbitrarily low error probability. The closer` the actual bit rate is to the entropy, the more
efficient the communication.

3. Cryptography: Entropy measures the randomness in cryptographic keys. A high-entropy key is harder to guess, making it more secure against attacks.

4. Machine Learning: Entropy is used in decision trees to evaluate the information gain
when splitting datasets. Lower entropy after a split indicates a better separation of
classes.

5. Thermodynamics: Shannon’s entropy is analogous to the concept of entropy in thermodynamics, where it measures the disorder or randomness in a system.

DIVERGENCE
Divergence refers to a measure of how one probability distribution differs from a second,
reference probability distribution. One of the most commonly used divergence measures is the
Kullback-Leibler (KL) Divergence

Kullback-Leibler (KL) Divergence is a measure in information theory that quantifies the difference between two probability distributions. It tells us how much information is lost when one distribution (the approximating or assumed distribution Q) is used to represent another distribution (the true distribution P). It is also known as relative entropy. The KL divergence from distribution P (the true distribution) to Q (the approximate distribution) is defined as:
\[
D_{KL}(P \| Q) = \sum_{x} P(x) \log \frac{P(x)}{Q(x)}
\]

Where:
 P(x) is the probability of event x under the true distribution P.
 Q(x) is the probability of event x under the approximating distribution Q.
 The sum is taken over all possible events x.

KL divergence represents the extra number of bits needed to encode messages drawn from distribution P if we use the distribution Q to construct our encoding, rather than the true distribution P. A lower KL divergence indicates that the two distributions are more similar, meaning the approximating distribution Q closely resembles the true distribution P.

KL divergence is not symmetric, meaning \(D_{KL}(P\|Q) \neq D_{KL}(Q\|P)\): the "distance" from P to Q is generally different from the "distance" from Q to P. Also, \(D_{KL}(P\|Q) \geq 0\), and it is zero if and only if P = Q (the two distributions are identical). This property shows that KL divergence measures the discrepancy or inefficiency of using Q in place of P. The result of KL divergence is expressed in bits if the logarithm is base 2 (common in information theory) or in nats if the natural logarithm is used.
Example:
Consider two discrete distributions:
 P = (0.5, 0.5): a fair coin with an equal probability of heads or tails.
 Q = (0.8, 0.2): a biased coin where heads is more likely than tails.
The KL divergence between P and Q is:

\[
D_{KL}(P \| Q) = 0.5 \log_2 \frac{0.5}{0.8} + 0.5 \log_2 \frac{0.5}{0.2} \approx 0.322 \text{ bits}
\]

This value tells us how inefficient it would be to use the biased distribution Q to approximate
the fair distribution P.
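The same calculation in code, assuming NumPy; the second call illustrates the asymmetry noted earlier, and kl_divergence is a hypothetical helper name.

import numpy as np

def kl_divergence(p, q):
    """D_KL(P || Q) in bits; assumes q > 0 wherever p > 0."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

print(kl_divergence([0.5, 0.5], [0.8, 0.2]))   # ~0.322 bits
print(kl_divergence([0.8, 0.2], [0.5, 0.5]))   # ~0.278 bits: not symmetric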

Applications of KL Divergence:

1. Machine Learning:
o KL divergence is used in model training to compare how closely a model's
predicted probability distribution (e.g., the output of a classifier) matches the
true distribution.
o It is a common loss function in variational inference and generative models
like Variational Autoencoders (VAEs).

2. Data Compression:
o In compression algorithms, KL divergence helps determine the efficiency of
different coding schemes. The lower the divergence, the closer the compressed
data is to the optimal encoding for the true data distribution.
3. Bayesian Statistics:
o KL divergence is used to measure the difference between the prior and posterior
distributions in Bayesian inference, representing how much the evidence has
changed the prior belief.

4. Natural Language Processing (NLP):


o In language models, KL divergence helps quantify how much the distribution
of predicted words differs from the true word distribution, aiding in optimizing
predictive text systems.

5. Information Retrieval:
o KL divergence is used in search engines and information retrieval systems to
measure how different a document's probability distribution is from the
distribution of a user's query, helping rank documents based on relevance.

MUTUAL INFORMATION
Mutual Information (MI) is a key concept in information theory that quantifies the
amount of information shared between two random variables. It measures how much
knowing the value of one variable reduces the uncertainty about the other. In other words, it
captures the degree of dependence between the variables.

[Figure: Venn diagram relating the entropies — H(X) and H(Y) overlap in the mutual information I(X;Y); the remaining regions are the conditional entropies H(X|Y) and H(Y|X), and the whole union is the joint entropy H(X,Y).]

Formula for Mutual Information:


For two random variables X and Y, the mutual information I(X;Y) is defined as:

\[
I(X;Y) = \sum_{x \in X} \sum_{y \in Y} p(x, y) \log \frac{p(x, y)}{p(x)p(y)}
\]

Where:
 p(x, y) is the joint probability distribution of X and Y.
 p(x) and p(y) are the marginal probability distributions of X and Y, respectively.
 The logarithm is typically base 2, meaning mutual information is measured in bits.

Key Concepts:

1. Joint Probability Distribution:


o 𝑝(𝑥, 𝑦) represents the probability that 𝑋 = 𝑥 and 𝑌 = 𝑦 simultaneously.
2. Marginal Probability Distributions:


o p(x) is the probability distribution of X alone, obtained by summing over all possible values of Y: \(p(x) = \sum_y p(x, y)\)
o p(y) is the probability distribution of Y alone, similarly obtained by summing over all possible values of X.

3. Interpretation:
o If X and Y are independent: The mutual information 𝐼(𝑋; 𝑌) = 0 because
knowing X provides no information about Y, and vice versa.
o If X and Y are completely dependent: The mutual information is
maximized, as knowing X fully determines Y, and vice versa.
o Mutual information is always non-negative.

Simplified Formula:
Mutual information can also be expressed in terms of entropy:

\(I(X;Y) = H(X) - H(X|Y)\)

Where:
 H(X) is the entropy of X, representing the uncertainty about X.
 H(X|Y) is the conditional entropy of X given Y, representing the remaining uncertainty about X once Y is known.

Similarly, mutual information can be written as:


\(I(X;Y) = H(Y) - H(Y|X)\)

This shows that mutual information is the reduction in the uncertainty of X (or Y) due to
knowing Y (or X).

Properties of Mutual Information:

1. Symmetry:
o Mutual information is symmetric, meaning 𝐼(𝑋; 𝑌) = 𝐼(𝑌; 𝑋). It doesn't
matter whether you measure how much X tells you about Y or vice versa.

2. Non-Negativity:
o 𝐼(𝑋; 𝑌) ≥ 0, and it equals zero only if X and Y are independent.

3. Relation to Entropy:
o Mutual information can also be viewed as the difference between the joint
entropy and the sum of the marginal entropies:

\(I(X;Y) = H(X) + H(Y) - H(X,Y)\)

This shows that mutual information measures the overlap or shared information between X and Y.

Example:
Consider a scenario where X represents the weather (sunny or rainy) and Y represents
whether a person carries an umbrella. If there's a strong correlation between the weather and
the use of an umbrella, the mutual information 𝐼(𝑋; 𝑌) will be high, because knowing the
weather provides a lot of information about whether the person will carry an umbrella.
If the person’s umbrella usage is completely random and independent of the weather,
then the mutual information 𝐼(𝑋; 𝑌) will be zero.

Find the mutual information I(X;Y) between the two random variables X and Y whose joint pmf matrix is given by
\[
P = \begin{bmatrix} 1/2 & 1/4 \\ 1/4 & 0 \end{bmatrix}
\]
\(I(X;Y) = H(X) + H(Y) - H(X,Y) = 0.8113 + 0.8113 - 1.5 = 0.1226 \text{ bits}\)
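A short sketch, assuming NumPy, verifies this example by computing the marginal and joint entropies directly; H is a hypothetical helper name.

import numpy as np

def H(p):
    """Entropy in bits of a probability array, ignoring zero entries."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

P = np.array([[1/2, 1/4],
              [1/4, 0.0]])        # joint pmf of (X, Y)
Hx = H(P.sum(axis=1))             # marginal of X: [3/4, 1/4] -> 0.8113 bits
Hy = H(P.sum(axis=0))             # marginal of Y: [3/4, 1/4] -> 0.8113 bits
Hxy = H(P)                        # joint entropy: 1.5 bits
print(Hx + Hy - Hxy)              # I(X;Y) = 0.1226 bits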

Applications of Mutual Information:

1. Feature Selection in Machine Learning:


o Mutual information is used to select features that are most relevant to the target variable. Features with high mutual information with the target are considered more informative.
2. Communication Systems:
o In digital communication, mutual information is used to measure the amount of information that can be transmitted over a noisy communication channel.
3. Image Processing:
o Mutual information is used in image registration, where the goal is to align
two images by maximizing the mutual information between the pixel
intensities.

4. Data Clustering:
o In clustering algorithms, mutual information can be used to measure the
similarity between clusters and the true data distribution.

5. Natural Language Processing (NLP):


o Mutual information is used to analyze word associations and dependencies in
text data, helping in tasks like collocation extraction or building probabilistic
language models.

Summary

 Information quantifies the reduction in uncertainty when an event occurs or a message is received. It is often measured in bits (binary digits). Information can be thought of as the content or meaning that resolves uncertainty.

 Entropy H(X) measures the uncertainty or unpredictability of a random variable X. It represents the average number of bits needed to describe an event from the probability distribution of X.
\[
H(X) = -\sum_{x \in X} P(x) \log_2 P(x)
\]
Entropy is maximized when all events are equally likely, meaning maximum uncertainty.

 KL Divergence measures the difference between two probability distributions. It quantifies how much information is lost when using an approximation Q instead of the true distribution P.
\[
D_{KL}(P \| Q) = \sum_{x} P(x) \log \frac{P(x)}{Q(x)}
\]
It is widely used in statistics, machine learning, and information retrieval.

 Mutual Information quantifies how much information two random variables share.
It measures the reduction in uncertainty about one variable given knowledge of the
other.
\[
I(X;Y) = \sum_{x \in X} \sum_{y \in Y} p(x, y) \log \frac{p(x, y)}{p(x)p(y)}
\]
Mutual Information is widely used in communication, machine learning, and feature
selection.

Exercises:

1. Suppose that women who live beyond the age of 80 outnumber men in the same age group by three to one. How much information, in bits, is gained by learning that a person who lives beyond 80 is male?
Ans:

2. Find the KL divergence \(D_{KL}(P\|Q)\) between the two distributions \(P = [1/4, 3/4]\) and \(Q = [3/4, 1/4]\).
Ans:

3. Consider a variant of the game Twenty Questions in which you have to guess which one of seven horses won a race. The probability distribution over winning horses is as follows:

Horse:             1     2     3     4     5     6     7
Prob. of winning: 1/4   1/4   1/8   1/8   1/8   1/16  1/16

Calculate the entropy of the winning-horse distribution.
Ans:
Unit 4
Introduction to Data Communication

Objectives
 Define data communication and its core concepts: Understand the fundamental
principles of transmitting digital information between devices.
 Identify the historical evolution of data communication: Trace the significant
milestones in data communication technology, from early inventions to modern
networks.
 Explain the key elements involved in data communication: Recognize the roles of
sender, receiver, channel, protocol, and MAC in successful data transmission.
 Discuss the challenges and limitations of data communication: Explore issues like
security, reliability, congestion, and quality of service.

Definition

Data Communication involves the transfer of data or information between multiple


devices, such as computers, servers, or other communication systems, using a transmission
medium like cables, fiber optics, or wireless signals. It requires the use of protocols and
methods to ensure accurate, efficient, and secure data transmission from the sender to the
receiver. Essentially, data communication facilitates data sharing across networks, enabling
information exchange over local or wide-area networks (LAN/WAN), the internet, or between
connected devices.
The field of data communication is constantly evolving, fueled by cutting-edge
technologies and the increasing demand for faster, more reliable, and secure data
transmission. Key trends shaping the future include:
 Internet of Things (IoT): Billions of
interconnected devices generating and
sharing data, requiring robust networks
and efficient data management.
 Cloud Computing: Data storage and
processing shifting to remote servers,
necessitating secure and reliable data
transfer over large distances.
 Artificial Intelligence (AI): AI systems
relying on massive data sets for learning
and decision-making, driving the need
for high-speed and low-latency data
transmission.
These trends highlight the critical role data communication plays in driving innovation and
shaping the future of our connected world.

Challenges and Limitations:


Data communication isn't perfect. Some challenges include:
 Security: Protecting data from unauthorized access, modification, or destruction.
 Reliability: Ensuring accurate and complete data delivery despite errors or
disruptions.
 Congestion: Managing data flow effectively to avoid network slowdown or overload.


 Quality of Service (QoS): Prioritizing and guaranteeing certain levels of service for
different types of data (e.g., streaming video vs. email).
History of Data Communication

The history of data communication traces the evolution of how information has been
transmitted across distances, starting from simple, early methods to today’s advanced digital
networks. Here's an overview of the major milestones:

1. Early Forms of Communication (Pre-1800s):


 Smoke signals, drums, and
semaphore systems were among the
earliest means of transmitting
messages over distances. These
methods, while primitive, laid the
groundwork for later communication
technologies.
 Optical telegraph (late 1700s):
Claude Chappe invented the optical
telegraph, a visual signaling system
using towers and pivoting arms, which enabled transmission of messages over long
distances.
 1753 – the earliest means of electrically coded information appeared: "running a communication line comprised of 26 parallel wires, to represent the letters of the alphabet" (impractical).

2. Invention of the Telegraph (1830s-1840s):


 Samuel Morse and Alfred Vail
developed the electric telegraph and
Morse Code (1837). This was the
first practical electrical form of
communication, using pulses of
electrical signals to represent letters
and numbers.
 1832 – Samuel F. B. Morse developed the first practical data communications code, the Morse code. He also invented the first successful data communication system, the telegraph.
 1833 – Carl Friedrich Gauss developed an unusual system based on a five-by-five matrix representing 25 letters.
 1840- patent for the telegraph
 In 1844, Morse sent the first telegraph message, "What hath God wrought?" over a line
between Washington, D.C., and Baltimore.
 The telegraph revolutionized communication, allowing near-instantaneous
transmission of information over long distances, laying the foundation for future digital
communication.
3. Invention of the Telephone :


 Alexander Graham Bell and Elisha Gray
developed the telephone, which allowed for
the transmission of voice over electrical
wires. The telephone network later evolved to
carry not just voice but also data, crucial for
the growth of data communication in the 20th
century.
 1874 – Emile Baudot invented a telegraph multiplexer that allowed signals from six different telegraph machines to be transmitted simultaneously over a single wire.
 1875 – Alexander Graham Bell invented the telephone.

4. Radio and Wireless Communication (1890s-1920s):


 Guglielmo Marconi demonstrated the transmission of radio signals across long
distances in the 1890s, leading to the development of wireless communication.
 Voice and data transmission via radio laid the groundwork for modern wireless
communication technologies like Wi-Fi and cellular networks.
 1920- the very first commercial radio stations were installed

5. Development of Early Digital Communication (1940s-1960s):


 Shannon’s Information Theory
(1948): Claude Shannon’s `
groundbreaking work at Bell Labs
introduced the mathematical theory
of communication, which became
the foundation of digital data
transmission, encoding, and
compression. His work defined how
data could be transmitted over noisy
communication channels.
 1940- Bell Telephone Laboratories
developed the first special purpose
computer using electromechanical
relays for performing logical operations
 1946 – J. Presper Eckert and John Mauchly developed the Electronic Numerical Integrator and Computer (ENIAC).
 1950 – punched cards were used for inputting information, printers for output, and magnetic tape reels for permanent storage; this mode of operation was called batch processing.
 1951 – the Universal Automatic Computer (UNIVAC), built by Remington Rand Corporation, became the first mass-produced electronic computer.
 Digital signals and the concept of encoding data in binary (0s and 1s) started gaining
traction, setting the stage for computer-based communication.
 The first modems were developed in the 1950s, enabling digital data to be transmitted
over the public telephone network.

6. The ARPANET and the Birth of the Internet (1960s-1970s):
 ARPANET (1969), developed by the U.S.


Department of Defense, was the first
packet-switching network and the
precursor to the internet. It allowed
multiple computers to communicate on a
single network.
 Packet-switching technology
revolutionized data communication by
breaking data into small packets, sending
them over various routes, and
reassembling them at the destination. This method was more efficient and resilient than
previous circuit-switched communication.

7. Digital Networks and Standardization (1970s-1980s):


 Ethernet: Invented by Robert Metcalfe
at Xerox PARC in 1973, Ethernet
became a standard for local area
networks (LANs), enabling high-speed
data transfer within businesses and
campuses.
 Transmission Control Protocol/Internet Protocol (TCP/IP) was developed in the 1970s, standardizing how data is transmitted across networks and forming the foundation of the modern internet.
 Integrated Services Digital Network (ISDN) was introduced in the 1980s, allowing
both voice and digital data to be transmitted over telephone networks, an early precursor
to broadband.

8. Rise of the Internet and Wireless Communication (1990s):


 The World Wide Web (WWW), invented by
Tim Berners-Lee in 1989, was a key
innovation that allowed easy access to
information via the internet.
 The 1990s saw the rapid growth of the
internet and the commercialization of
TCP/IP networks, making global data
communication accessible to businesses and
individuals.
 Wireless communication technologies, such as Wi-Fi (1997) and mobile data
networks (GSM, CDMA), further expanded data communication to mobile and portable
devices.

9. 21st Century: Broadband, Fiber Optics, and 5G:


 Broadband internet became widespread in the early 2000s, replacing slower dial-up
connections and enabling faster data transmission.
 Fiber-optic communication: By using light to transmit data over glass or plastic
fibers, fiber optics dramatically increased the speed and capacity of data
communication networks.
 The advent of 4G (LTE) and later 5G networks in the 2010s and 2020s provided
even faster wireless communication with higher data rates, reduced latency, and
support for a massive number of devices.

10. Present and Future Developments:


 Cloud computing and Internet of
Things (IoT) have emerged as major
areas of growth, allowing devices to
communicate and share data over the
internet without direct human
intervention.
 Quantum communication and
blockchain are emerging technologies
that promise to revolutionize data
communication by providing
unprecedented levels of security and
efficiency.
 Artificial intelligence (AI) is increasingly integrated into data communication
systems for optimizing traffic, improving security, and enabling smarter, more
adaptive networks.
Elements of Data Communications
The elements of data communication are essential components that enable the efficient
transfer of data between devices. These elements work together to ensure the accurate,
timely, and secure transmission of information. The main elements are:

1. Message:
 The message is the information or data to be communicated. It can be in various
forms, including text, numbers, images, audio, or video, depending on the application
and the nature of the communication.

2. Sender:
 The sender is the device or entity that generates and sends the data. It could be a
computer, mobile phone, sensor, or other devices capable of transmitting data to
another device.
 The sender encodes the message into signals that can be transmitted over a
communication medium.

3. Receiver:
 The receiver is the device or entity that receives and processes the transmitted
message. It decodes the signals back into usable data.
 Examples include computers, smartphones, servers, or other end devices capable of
receiving data.

4. Transmission Medium:
 The transmission medium is the physical path or channel through which the message
travels from the sender to the receiver. The medium can be:
o Wired (copper cables, fiber optics)
o Wireless (radio waves, microwaves, satellite links)
 The choice of medium affects the speed, quality, and distance over which data can be
transmitted.

5. Encoder and Decoder:


 Encoder: The encoder is used by the sender to convert the message into a suitable
format for transmission, such as converting digital data into electrical or optical
signals.
 Decoder: The receiver uses the decoder to convert the received signals back into the
original message format.

6. Protocol:
 A protocol is a set of rules or conventions that define how data should be transmitted
and received. Protocols ensure that devices understand each other and that data is
transmitted accurately and securely.
 Examples of protocols include TCP/IP, HTTP, FTP, and SMTP.

7. Feedback:
 In certain communication systems, feedback is sent from the receiver back to the
sender to confirm that the message has been received and understood correctly.
 This element is particularly important in interactive or real-time communication
systems where the sender needs confirmation, such as error checking or
acknowledgment signals.

Importance of Data Communication:


1. Accuracy:
Ensuring the data is transmitted without errors from the sender to the receiver, with
mechanisms for detecting and correcting errors if they occur.

2. Efficiency:
Optimizing the use of resources like bandwidth and power to transmit data quickly
and cost-effectively, minimizing delays or bottlenecks.

3. Timeliness:
Guaranteeing that the data reaches its destination within an acceptable timeframe,
which is especially important for real-time applications like video streaming or online
gaming.

4. Security:
Protecting the data from unauthorized access or tampering during transmission,
ensuring confidentiality, integrity, and authentication.

5. Scalability:
Supporting the growth of networks and the ability to add new devices or increase the
volume of data without degrading performance.

6. Interoperability:
Ensuring that devices and systems from different manufacturers or platforms can
communicate effectively using standardized protocols.

7. Flexibility:
Providing the ability to adapt to various communication needs, technologies, and
media, such as wired, wireless, or fiber-optic communication.

8. Reliability:
Ensuring continuous and stable communication, with minimal disruptions, through
mechanisms like error correction, fault tolerance, and retransmission when needed.

Summary
Data Communication refers to the sharing or transfer of collections of facts, figures, etc. between devices capable of such exchanges, using one communication medium or another. Whenever we communicate, we share facts, ideas, etc. in a mutually agreed-upon language and speed with the maximum accuracy possible. The same is the case in data communication: its effectiveness is determined by correctness of delivery, accuracy of transfer, timeliness, and low variation in packet arrival times.
Key Elements:
1. Message: The data or information being communicated.
2. Sender: The device that sends the data.
3. Receiver: The device that receives and processes the data.
4. Transmission Medium: The physical path (wired or wireless) through which the data
travels.
5. Protocol: A set of rules governing the transmission of data to ensure interoperability
between devices.
6. Encoder/Decoder: Devices or processes that convert data into signals for
transmission and back into usable data upon reception.

Data communication has evolved from early methods like telegraphy to today’s advanced
digital systems, playing a fundamental role in modern computing, networking, and
telecommunications. It enables global connectivity through the internet and networks,
supports real-time applications like video calls, and allows for the automation of industrial
and consumer devices (IoT). Data communication shapes our lives in profound ways, from
personal connections to global business. Understanding its principles empowers us to navigate the
technological landscape and leverage its potential for good.
Exercises:

Multiple Choice: Choose the correct letter of the correct answer

1. The first operational computer network in the world was the _________ for the United
States Department of Defense
a) ARPANET
b) UNIVAC
c) ERNET
d) SKYNET

2. Which invention marked the beginning of long-distance data communication?


a) Telephone
b) Computer
c) Telegraph
d) Internet

3. What is the role of a sender in data communication?


a) Deciphering encoded data
b) Regulating channel access
c) Initiating data transmission
d) Maintaining network security

4. The ISDN Internetworking Equipment devices` are


a) Terminal Adapters
b) ISDN Bridges
c) ISDN Routers
d) All of these

5. The ________ is the physical path over which a message travels.


a) Protocol
b) Medium
c) Signal
d) All of the above

6. Frequency of failure and network recovery time after a failure are measures of the
_____________ of a network.
a) Performance
b) Reliability
c) Security
d) Interoperability

7. Ensuring that devices and systems from different manufacturers or platforms can
communicate effectively using standardized protocols.
a) Performance
b) Reliability
c) Security
d) Interoperability
8. Are essential components that enable the efficient transfer of data between devices except
a) Message
b) Noise
c) Sender
d) Receiver

9. What is a major challenge in data communication over wireless networks?


a) Limited bandwidth
b) Physical cable connections
c) High data transmission costs
d) Susceptibility to interference

10. What is the main goal of Quality of Service (QoS) in data communication?
a) Prioritize different types of data traffic
b) Reduce network congestion
c) Improve data security measures
d) Increase overall network speed
Unit 5
Data Transmission

Objectives
 Explain the fundamentals of data transmission
 Analyze different data transmission channels and discuss the limitations and
advantages of each channel type
 Investigate data transmission protocols and methods and describe different data
transmission methods
 Explore the future of data transmission
Introduction to Data Transmission
Data transmission refers to the process of transferring data between two or more devices
using some form of transmission medium (wired or wireless). It is a key aspect of data
communication, ensuring that information moves from a sender (source) to a receiver
(destination) in an accurate and timely manner.

Types of Data Transmission:

1. Serial Transmission
 In serial transmission, data is sent bit by bit over a single communication channel or
wire.
 Each bit of data is transmitted sequentially, one after another.
 Requires fewer wires or channels than parallel transmission.
 Suitable for long-distance communication since synchronization is easier.

Types:
o Asynchronous Serial Transmission:
 Data is transmitted one byte (or character) at a time with start and stop
bits.
 No synchronization between the sender and receiver clocks.
 Example: Communication between a computer and a peripheral device
(e.g., keyboard or mouse).
o Synchronous Serial Transmission:


 Data is transmitted as a continuous stream of bits with no start and stop
bits.
 The sender and receiver clocks are synchronized.
 Example: Communication within high-speed networks like Ethernet.

 Applications:
o Used in USB (Universal Serial Bus), RS-232, and long-distance
communication technologies like DSL (Digital Subscriber Line).
 Advantages:
o Efficient for long-distance communication.
o Requires fewer communication lines, reducing costs.
 Disadvantages:
o Slower than parallel transmission for short distances.

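As a concrete sketch of the start/stop-bit framing described under asynchronous serial transmission, the following plain-Python helper (frame_byte is a hypothetical name) builds the 10-bit frame for one byte, sent least-significant bit first.

def frame_byte(value):
    """Return the 10-bit frame for one byte: start bit, 8 data bits, stop bit."""
    data_bits = [(value >> i) & 1 for i in range(8)]   # LSB transmitted first
    return [0] + data_bits + [1]                       # start = 0, stop = 1

print(frame_byte(ord("A")))   # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1] for 0x41

Because each byte carries its own start and stop bits, the receiver can resynchronize on every character, which is why asynchronous framing needs no shared clock.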
2. Parallel Transmission
 Definition: In parallel transmission, multiple bits are transmitted simultaneously over
multiple communication channels or wires.
 Characteristics:
o Several bits (usually a byte or more) are transmitted at the same time, each
over a separate wire or channel.
o Requires more wires or channels than serial transmission.
o Best suited for short-distance communication since signal synchronization can be difficult over long distances.

 Applications:
o Used in situations requiring fast data transmission over short distances, such as
within a computer system or between a computer and a printer.
o Examples: Parallel ports, data buses in computers (e.g., PCI Express).
 Advantages:
o Faster than serial transmission for short distances since multiple bits are
transmitted simultaneously.
 Disadvantages:
o Requires more hardware (wires, connectors) and is less efficient over long
distances due to synchronization issues and signal degradation.

DTE and DCE Interface


1. DTE (Data Terminal Equipment)
DTE refers to devices that act as the source or destination of data. These devices are
typically end-user devices that generate, receive, or display data.
Examples:
o Computers (PCs, laptops)
o Terminals
o Printers
o Routers (in certain configurations)
DTE devices are responsible for generating and processing data. They send data to
DCE devices for transmission or receive data from DCE devices after transmission. DTE
typically connects to DCE to gain access to a communication network (e.g., a modem or
network interface).

2. DCE (Data Communication Equipment)


DCE refers to devices that establish, maintain, and terminate data transmission to and
from a DTE device. These devices typically serve as an intermediary between DTE devices
and the communication medium.
 Examples:
o Modems
o Network switches
o Hubs
o Multiplexers
o CSU/DSU (Channel Service Unit/Data Service Unit)
DCE devices are responsible for transmitting data between the DTE and the
communication channel (e.g., telephone line, network link). They typically handle the signal
conversion, timing, and synchronization required for communication. DCE can either receive
data from DTE for transmission or deliver received data to DTE.

In the realm of data communication, DTE (Data Terminal Equipment) and DCE (Data
Communication Equipment) represent two categories of devices that collaborate to enable
data transmission. These designations explain the functions that devices fulfill in a
communication arrangement, especially within serial communication systems. Understanding
the difference between DTE and DCE is crucial for establishing effective communication and
setting up network or serial connections.
In a typical communication setup, a DTE device connects to a DCE device to transmit
data over a network or communication link. For example, when a computer (DTE) connects
to a modem (DCE), the modem takes the digital signals from the computer and converts them
into analog signals for transmission over a phone line. When receiving data, the modem
converts the analog signals back into digital form for the computer to process. In serial
communication (e.g., RS-232), different pin configurations are used for DTE and DCE
devices. This is to ensure proper communication between the two. When connecting two
DTE devices (e.g., two computers), a null modem cable or adapter is used to simulate the
DCE interface by swapping the transmission (TX) and reception (RX) pins so that both DTE
devices can communicate directly.

Types of Data Transmission Interfaces:

1. Serial Data Transmission Interfaces


In serial communication, data is sent one bit at a time over a single wire or channel.
Serial interfaces are widely used for long-distance communication and in many modern
devices.

a) RS-232 (Recommended Standard 232)


One of the earliest serial communication standards, commonly used for connecting
computers and peripherals (e.g., modems, printers).
 Features:
o Uses a point-to-point connection.
o Asynchronous communication (each byte of data is transmitted with start and stop bits).
o Transmission speed is relatively low (up to 115 kbps).
o Limited cable length (up to 50 feet).
 Applications: Legacy devices, industrial systems, modems, serial mice, and
communication between microcontrollers.

b) USB (Universal Serial Bus)


A widely used interface for connecting peripherals to computers.
 Features:
o Supports both asynchronous and synchronous data transmission.
o Provides power supply to connected devices.
o High transmission speeds (up to 40 Gbps in USB 4.0).
o Hot-swappable and plug-and-play functionality.
 Applications: Used for connecting devices like keyboards, mice, external storage,
printers, and mobile phones.

c) UART (Universal Asynchronous Receiver/Transmitter)

A hardware interface that translates data between parallel and serial forms.
 Features:
o Asynchronous communication.
o Used for communication between a microcontroller and a peripheral device.
 Applications: Serial ports in computers, embedded systems, and communication
between microcontroller devices.
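
As a practical illustration, the sketch below uses the third-party pyserial library (pip install pyserial) to open a UART-style serial port; the port name, the 9600 baud 8-N-1 settings, and the "AT" probe string are assumptions that depend on the attached device, so treat this as a template rather than a ready-made recipe.

# Hedged sketch of byte-level serial I/O with the third-party pyserial
# library. The device name '/dev/ttyUSB0' (something like 'COM3' on
# Windows) and the settings below are assumptions; adjust for your device.
import serial

with serial.Serial(
    port="/dev/ttyUSB0",           # assumed device name
    baudrate=9600,                 # bits per second
    bytesize=serial.EIGHTBITS,     # 8 data bits
    parity=serial.PARITY_NONE,     # no parity bit
    stopbits=serial.STOPBITS_ONE,  # 1 stop bit
    timeout=1.0,                   # read timeout in seconds
) as port:
    port.write(b"AT\r\n")          # send a few bytes to the device
    reply = port.read(64)          # read up to 64 bytes or time out
    print("received:", reply)

The 8-N-1 settings mirror the start/stop-bit framing described above for asynchronous serial transmission.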

d) Ethernet
A family of networking technologies commonly used in local area networks (LANs).
 Features:
o Supports both asynchronous and synchronous communication.
o Transmission speeds range from 10 Mbps (classic Ethernet) through 1 Gbps
(Gigabit Ethernet) to 100 Gbps and beyond.
o Allows multiple devices to communicate over a shared medium.
 Applications: Networking, internet communication, and connecting devices within
LANs.

2. Parallel Data Transmission Interfaces


In parallel communication, multiple bits are transmitted simultaneously over multiple
wires or channels. Parallel interfaces are ideal for short-distance, high-speed communication.


a) Parallel ATA (PATA)


A standard interface for connecting hard drives and optical drives to a computer’s
motherboard.
 Features:
o Transfers 16 bits of data simultaneously.
o Speeds range from 33 MBps to 133 MBps.
 Applications: Used in older hard drives and optical drives.

b) SCSI (Small Computer System Interface)


A parallel interface used for connecting and transferring data between computers and
peripheral devices.
 Features:
o Supports multiple devices (up to 16).
o Data transfer speeds range from 5 MBps to 640 MBps (Ultra-640).
 Applications: Servers, workstations, and storage devices.

c) IEEE 1284 (Parallel Port)


A parallel communication interface used primarily for connecting printers to computers.
 Features:
o Can transfer data in both directions (bidirectional).
o Speeds range from 150 KBps to 4 MBps.
 Applications: Printers, scanners, and older computing devices.

3. Network and Wireless Interfaces


These interfaces facilitate data communication over networks and wireless
connections, often using a mix of serial and parallel techniques to optimize performance and
compatibility.

a) Wi-Fi (Wireless Fidelity)


A wireless network technology based on IEEE 802.11 standards.
 Features:
o Allows wireless data transmission over a range of up to several hundred meters.
o Speeds vary from 11 Mbps (802.11b) to roughly 9.6 Gbps (Wi-Fi 6/6E).
 Applications: Wireless internet access for laptops, smartphones, and IoT devices.

b) Bluetooth
A wireless technology for short-range communication between devices.
 Features:
o Speeds up to 3 Mbps (Bluetooth 2.0 + EDR) or up to 2 Mbps over Bluetooth 5.0
Low Energy.
o Typical range of 10 meters, extendable to 100 meters with high-power devices.
 Applications: Wireless headphones, keyboards, mice, and IoT devices.

c) NFC (Near-Field Communication)


A short-range wireless communication technology.
 Features:
o Operates at very short ranges (up to 10 cm).
o Low data rates (up to 424 kbps).
 Applications: Contactless payments, secure identification, and data exchange
between smartphones.

d) Zigbee
A low-power, wireless communication protocol designed for IoT and sensor networks.
 Features:
o Low data rates (up to 250 kbps).
o Suitable for short-range, low-power communication.


 Applications: Smart home devices, industrial automation, and IoT networks.

4. Specialized Interfaces
These interfaces are used in specific applications, such as audiovisual systems or data
storage.

a) HDMI (High-Definition Multimedia Interface)


 Description: A digital interface for transmitting high-definition video and audio data.
 Features:
o Can transmit uncompressed audio and video data.
o Supports resolutions up to 8K.
 Applications: Connecting TVs, monitors, and gaming consoles to computers or
media players.

b) SATA (Serial Advanced Technology Attachment)


A high-speed interface used to connect hard drives and SSDs to a computer's
motherboard.
 Features:
o Serial communication interface, replacing PATA.
o Speeds up to 6 Gbps (SATA III).
 Applications: Internal and external storage devices.


Data Transmission Media and Technologies


Data communication media refers to the physical or logical pathways used to transfer
data between devices in a communication system. The media can be broadly classified into
two categories: wired and wireless. Each medium has its unique characteristics, advantages,
and applications based on factors like speed, distance, reliability, and cost.

Guided Transmission Media


Guided media, also known as wired or bounded media, consist of physical cables
through which data is transferred. Guided media form a physical link between transmitter and
receiver, directing signals along a narrow pathway. They are generally used for shorter
distances, since attenuation and other physical limitations constrain how far a signal can
travel through the medium.

a) Twisted Pair Cable:


 Consists of pairs of insulated copper wires twisted together to reduce electromagnetic
interference.
 Commonly used for telephone networks, local area networks (LANs), and Ethernet.

Categories:
o Unshielded Twisted Pair (UTP): Most widely used, cost-effective but more
susceptible to interference.
o Shielded Twisted Pair (STP): Includes a shielding layer to protect against
external interference.

 Speed: Up to 10 Gbps for modern Ethernet standards (Cat6a, Cat7).


b) Coaxial Cable:
 Made up of a central copper conductor, an insulating layer, and a metallic shield that
protects against interference.
 Used for cable TV, broadband internet, and some LANs.

 Speed: Typically supports up to 10 Gbps in data transmission.

c) Fiber Optic Cable:


 Uses light to transmit data through glass or plastic fibers.
 Offers high-speed transmission with minimal loss, immune to electromagnetic
interference.
 Ideal for long-distance communication and high-bandwidth requirements.

Types:
o Single-mode fiber (SMF): Supports long-distance communication (up to 100 km
or more).
o Multi-mode fiber (MMF): Used for shorter distances (up to 2 km).

 Speed: Up to 100 Gbps or higher.

Unguided Transmission Media


Also known as unbounded or wireless media, these media carry electromagnetic
signals without any physical conductor; the air (or free space) itself is the medium, and there
is no physical connection between transmitter and receiver. Unguided media are used for
longer distances, but they are less secure than guided media. The main types of wireless
transmission media are described below.


a) Radio Waves:
Used for long-range wireless communication such as AM/FM radio, TV broadcasting,
and mobile networks. Widely used in Wi-Fi, Bluetooth, and cellular networks (2G, 3G, 4G,
5G).
 Range: Can cover distances from a few meters (Wi-Fi, Bluetooth) to several
kilometers (cellular networks).
 Speed: Varies depending on the technology (e.g., Wi-Fi 6 up to 9.6 Gbps, 5G up to 10
Gbps).
b) Microwave Transmission:
Uses high-frequency radio waves to transmit data over long distances in a line-of-sight
fashion. Used for satellite communication and point-to-point communication.
 Range: Can cover several kilometers, requiring unobstructed line-of-sight.
 Speed: Can exceed 1 Gbps for high-capacity links.
c) Infrared:
Uses infrared light to transmit data over short distances. Commonly used in remote
controls, some IoT devices, and short-range communication systems.
 Range: Limited to a few meters and requires line-of-sight.
 Speed: Varies but typically up to a few Mbps.
d) Satellite Communication:
Data is transmitted between ground stations and orbiting satellites using radio waves.
Used for long-distance communication, broadcasting, and internet access in remote areas.
 Range: Global, with satellites covering large geographical areas.
 Speed: Varies, with some satellite services offering up to 100 Mbps.

Data Transmission Modes


The different ways in which data is sent between devices in a communication system are
known as Data Transmission Modes. These modes establish the flow and direction of data
between the sender and receiver. There are three primary types of data transmission modes:


1. Simplex Mode
In simplex mode, data is transmitted in one direction only, meaning the communication is
unidirectional. One device acts as the sender, and the other acts as the receiver, with no
possibility of the receiver sending data back to the sender.

 Advantages of Simplex Mode


o Simplex mode is the simplest mode of communication to implement.
o It is the most cost-effective mode, as it only requires one communication
channel.
o There is no need for coordination between the transmitting and receiving
devices, which simplifies the communication process.
o Simplex mode is particularly useful in situations where feedback or response
is not required, such as broadcasting or surveillance.
 Disadvantages of Simplex Mode
o Only one-way communication is possible.
o There is no way to verify if the transmitted data has been received correctly.
o Simplex mode is not suitable for applications that require bidirectional
communication.
 Example:
o Television broadcasting: The TV station sends a signal to the TV set, but the
TV set cannot send data back to the station.
o Radio broadcasting: A radio station transmits audio signals to radio receivers
without any return data.
2. Half-Duplex Mode
In half-duplex mode, data transmission can occur in both directions, but not
simultaneously. Devices take turns in transmitting and receiving data, meaning the channel is
alternately used by each device.

 Advantages of Half Duplex Mode


o Half-duplex mode allows for bidirectional communication, which is useful in
situations where devices need to send and receive data.
o It is a more efficient mode of communication than simplex mode, as the
channel can be used for both transmission and reception.
o Half-duplex mode is less expensive than full-duplex mode, as it only requires
one communication channel.
 Disadvantages of Half Duplex Mode
o Half-duplex mode offers lower throughput than full-duplex mode, since both
devices cannot transmit at the same time.


o There is a delay between transmission and reception, which can cause
problems in some applications.
o There is a need for coordination between the transmitting and receiving
devices, which can complicate the communication process.
 Example:
o Walkie-talkie: When one person speaks, the other must wait for the first
person to finish before responding.
o CB radio (Citizens Band radio): Users take turns speaking and listening in a
shared communication channel.

3. Full-Duplex Mode
In full-duplex mode, data transmission occurs simultaneously in both directions. This
allows both the sender and receiver to communicate with each other at the same time, using
separate channels or frequencies for sending and receiving.

 Advantages of Full-Duplex Mode


o Full-duplex mode allows for simultaneous bidirectional communication,
which is ideal for real-time applications such as video conferencing or online
gaming.
o It is the most efficient mode of communication, as both devices can transmit
and receive data simultaneously.
o Full-duplex mode provides highly responsive communication, since
acknowledgments and feedback can flow in both directions at the same time.
 Disadvantages of Full-Duplex Mode
o Full-duplex mode is the most expensive mode, as it requires two
communication channels.
o It is more complex than simplex and half-duplex modes, as it requires two
physically separate transmission paths or a division of channel capacity.
o Full-duplex mode may not be suitable for all applications, as it requires a high
level of bandwidth and may not be necessary for some types of
communication.
 Example:
o Telephone system: Both parties can speak and listen at the same time, without
waiting for the other to finish.
o Internet communication: Full-duplex mode is used in data transfer over
networks, where devices can send and receive data at the same time.

Data Communications Standards

Formalized rules and protocols known as Data Communication Standards regulate
the transfer of data between devices to guarantee interoperability, efficiency, and reliability.
These standards define various aspects of communication, including speed, signaling, format,
error correction, and the physical transmission medium. Data communication standards are
established and maintained by various standards organizations, which develop, regulate, and
promote the adoption of standards that ensure the interoperability, reliability, and efficiency
of data communication systems globally.

Classification of Standards

Open Standards
Open standards are protocols, technologies, or specifications that are developed and
maintained by public or independent organizations. They are generally accessible to anyone,
ensuring broad compatibility and interoperability across different systems and devices.

Characteristics of Open Standards:


 Publicly Available: The standard is available for anyone to use, implement, and
improve.
 Collaboratively Developed: Developed by committees or groups of stakeholders,
including vendors, users, and independent experts.
 Interoperability: Open standards ensure that products from different vendors can
work together seamlessly.
 Transparency: The development process is typically transparent, with public input
often solicited.
 No Licensing Fees: Generally, there are no fees or restrictions on the use of open
standards.

Examples of Open Standards in Data Communication:
 TCP/IP: The protocol suite that powers the internet, ensuring devices across the
globe can communicate.
 IEEE 802.11 (Wi-Fi): A widely used standard for wireless networking that ensures
devices from different manufacturers can connect to the same network.
 HTML/CSS: Standards for structuring and presenting web content that ensures
interoperability across web browsers.
 Ethernet (IEEE 802.3): A standard for wired local area networks (LANs) that
enables broad interoperability.

Advantages of Open Standards:


 Widespread Adoption: Open standards are usually adopted on a global scale due to
their accessibility.
 Interoperability: Products from different vendors can communicate, reducing the
risk of vendor lock-in.
 Innovation: Open standards foster innovation, as developers can build upon them
without restrictions.
 Cost-Effective: Organizations don’t need to pay licensing fees to use open standards.

Disadvantages of Open Standards:


 Slower Development: The development process can be slower due to the need for
consensus among various stakeholders.
 Less Control: Since the standards are open, companies have less control over the
direction of development and updates.
 Potential for Fragmentation: Sometimes, open standards can lead to multiple
versions or forks, causing incompatibilities.


Proprietary Standards
Proprietary standards, also known as closed standards, are developed, controlled, and
owned by a single company or organization. These standards are typically not publicly
available and often require licensing fees for use.

Characteristics of Proprietary Standards:


 Owned and Controlled: Proprietary standards are owned by a specific entity, which
controls their development and distribution.
 Restricted Access: Access to the technology may be limited, and implementation
may require permission or licensing fees.
 Limited Interoperability: Products or technologies based on proprietary standards
often work best within the same vendor ecosystem, leading to vendor lock-in.
 Licensing Fees: Users may have to pay for the right to use or implement the
proprietary standard.

Examples of Proprietary Standards in Data Communication:


 Apple’s Lightning Connector: A proprietary connector used in Apple devices for
data transfer and charging.
 Microsoft’s .docx: The format used in Microsoft Word for document creation and
editing, though it is now more open after standardization.
 Skype Protocol: Initially, Skype used a proprietary communication protocol that was
closed to third-party applications.
 HDMI (High-Definition Multimedia Interface): A standard for transmitting audio
and video data that is controlled by the HDMI Forum.

Advantages of Proprietary Standards:


 Fast Innovation: Proprietary standards can be developed quickly, as they don’t
require consensus from multiple stakeholders.
 Tight Control: The owning company can ensure strict quality control and security
within its ecosystem.
 Competitive Advantage: Companies can gain a competitive edge by developing
unique technologies that others cannot use without a license.

Disadvantages of Proprietary Standards:


 Vendor Lock-in: Users of proprietary standards may be locked into a single vendor’s
ecosystem, making it difficult or expensive to switch.
 Limited Interoperability: Products based on proprietary standards may not work
with those from other vendors, leading to compatibility issues.
 Cost: Users may have to pay licensing fees to use proprietary standards, increasing
the overall cost of the technology.

Standard Organizations for Data Communications

1. International Organization for Standardization (ISO)


ISO is an independent, non-governmental international organization that develops and
publishes a wide range of standards, including those for data communication. It creates rules
and standards for graphics and document exchange, and provides models for equipment and
system compatibility, quality enhancement, improved productivity, and reduced cost.


 Role in Data Communication:


o ISO is responsible for the Open Systems Interconnection (OSI) model,
which defines seven layers for communication between systems.
o ISO/IEC 11801 for structured cabling systems.
o Collaborates with other organizations like ITU and IEEE to establish
communication standards.

 Key Standards:
o OSI Model (ISO/IEC 7498): Defines the architecture for data communication
systems.

2. Institute of Electrical and Electronics Engineers (IEEE)


IEEE is a professional association dedicated to advancing technology, with a focus on
electrical and electronic engineering. It plays a major role in defining standards for data
communication, particularly at the physical and data link layers. It is an international
professional organization of electronics, computer, and communications engineers that
develops communications and information-processing standards, with the underlying goal of
advancing theory, creativity, and product quality in fields associated with electrical
engineering.
 Role in Data Communication:
o Develops Ethernet standards (IEEE 802.3), Wi-Fi standards (IEEE 802.11),
and Bluetooth (IEEE 802.15).
o Focuses on standards for wired and wireless local area networks (LANs),
metropolitan area networks (MANs), and personal area networks (PANs).

 Key Standards: `
o Ethernet (IEEE 802.3): The most widely used wired LAN standard.
o Wi-Fi (IEEE 802.11): Standard for wireless LAN communication.
o Bluetooth (IEEE 802.15): Standard for short-range wireless communication.

3. International Telecommunication Union- Telecommunications Sector (ITU-T)


ITU is a specialized agency of the United Nations responsible for issues related to
information and communication technologies (ICTs). ITU’s standards cover a wide range of
communication technologies, including telecommunications and broadcasting.
 Role in Data Communication:
o ITU defines standards for global telecommunications, radio frequency
allocation, and digital broadcasting.
o Provides recommendations for data communication protocols and services
(e.g., ITU-T).

 Key Standards:
o ITU-T G.992: Standard for Digital Subscriber Line (DSL) technology.
o ITU-T X.25: Packet-switched data communication protocol used in early data
networks.

4. Telecommunications Industry Association (TIA)


TIA is a U.S.-based organization that develops standards for information and
communication technology, particularly in telecommunications.
 Role in Data Communication:
o Develops standards for fiber optic networks, structured cabling, and wireless
communication.


o Provides specifications for networking hardware and infrastructure, including
structured cabling for buildings.

 Key Standards:
o TIA-568: Standard for structured cabling in telecommunications
infrastructure.
o TIA-942: Standard for data centers and telecommunications infrastructure.
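
5. Internet Engineering Task Force (IETF)

The IETF is an open international community of network designers, operators, vendors, and
researchers that develops and promotes internet standards, especially those related to TCP/IP
and networking protocols. Its standards are published as RFC (Request for Comments)
documents.
 Role in Data Communication:
o Develops and maintains the core protocols of the internet, including the TCP/IP
suite.
o Standardizes application protocols such as HTTP and SMTP.

 Key Standards:
o TCP/IP: The protocol suite that underpins the internet.
o HTTP: Protocol for web communication.
o SMTP: Protocol for sending email.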

6. European Telecommunications Standards Institute (ETSI)


ETSI is a European standards organization that produces globally applicable standards for
ICT, including fixed, mobile, radio, and internet technologies.
 Role in Data Communication:
o ETSI is responsible for standards related to mobile communication (e.g.,
GSM, LTE) and other digital communication technologies.
o Works closely with ITU and 3GPP for the development of global
telecommunications standards.

 Key Standards:
o GSM (Global System for Mobile Communications): Standard for mobile
networks.
o LTE (Long-Term Evolution): 4G mobile communication standard.
o Digital Video Broadcasting (DVB): Standard for broadcasting digital
television.

7. American National Standards Institute (ANSI) `


ANSI is a private, non-profit organization that oversees the development of voluntary
standards for a wide range of industries in the U.S., including data communication
 Role in Data Communication:
o ANSI accredits standards development organizations such as the TIA and
collaborates with international organizations like ISO.
o Works on standards for computer networking, telecommunications, and
information systems.

 Key Standards:
o ANSI T1.105: Standards for digital transmission systems, including SONET.
o ANSI X3.230: Fibre Channel, a high-speed network technology.

8. 3rd Generation Partnership Project (3GPP)


3GPP is a collaboration between telecommunications standard organizations to produce
globally applicable standards for mobile systems.
 Role in Data Communication:
o Develops standards for mobile broadband communication, including 3G, 4G
(LTE), and 5G.
o Ensures that mobile communication networks are interoperable across
different regions.

 Key Standards:
o UMTS (Universal Mobile Telecommunications System): Standard for 3G
mobile networks.
o LTE (Long-Term Evolution): Standard for 4G mobile communication.


o 5G NR (New Radio): Standard for 5G mobile networks.

9. Wi-Fi Alliance
The Wi-Fi Alliance is an industry consortium that promotes Wi-Fi technology and
ensures interoperability between Wi-Fi devices.
 Role in Data Communication:
o Responsible for certifying that Wi-Fi devices meet the standards set by IEEE
802.11.
o Ensures that Wi-Fi products from different manufacturers work together
seamlessly.

 Key Standards:
o Wi-Fi CERTIFIED™: A certification program that guarantees device
interoperability and security for wireless networks.

10. World Wide Web Consortium (W3C)


W3C is an international community that develops web standards to ensure the long-term
growth and compatibility of the World Wide Web.
 Role in Data Communication:
o Responsible for setting standards related to web technologies like HTML,
CSS, and XML.
o Focuses on ensuring that web technologies are interoperable, secure, and
accessible.

 Key Standards: `
o HTML (HyperText Markup Language): Standard for structuring web
content.
o CSS (Cascading Style Sheets): Standard for presenting web content.
o XML (Extensible Markup Language): Standard for data representation on
the web.
These organizations play a crucial role in establishing the standards that govern data
communication across the world. From networking protocols to wireless technologies, these
standards ensure that devices, systems, and networks can communicate effectively, reliably,
and securely. Their work enables the global exchange of information and underpins much of
the modern digital infrastructure.

Summary
Data transmission is essential for communication across networks and can be carried
out using various modes, types, and media. Depending on the application, different
technologies and standards are used to ensure efficient and reliable data transfer between
devices. The choice of transmission method, interface, and medium depends on factors like
distance, data rate, and the environment in which the transmission occurs.
Data Transmission Modes:
 Simplex: Data flows in one direction only (e.g., television broadcast).
 Half-Duplex: Data can flow in both directions, but not simultaneously (e.g., walkie-
talkies).
 Full-Duplex: Data can flow in both directions simultaneously (e.g., telephone
communication).
Types of Data Transmission:


 Serial Transmission: Data is transmitted one bit at a time over a single channel (e.g.,
USB, RS-232).
 Parallel Transmission: Multiple bits are transmitted simultaneously over multiple
channels (e.g., data transfer in computer buses).
Data Transmission Interfaces:
 DTE (Data Terminal Equipment): Refers to devices like computers or routers that
generate and consume data.
 DCE (Data Circuit-terminating Equipment): Refers to devices like modems or
switches that provide a connection between DTEs for data transmission.
Data Transmission Media:
 Wired (Guided): Data is transmitted through physical media such as twisted pair
cables, coaxial cables, or fiber optics.
 Wireless (Unguided): Data is transmitted through the air or space using
electromagnetic waves (e.g., radio waves, microwaves, infrared).
Data Transmission Standards:
 ISO: International standards across various industries. Key contributions: OSI model,
structured cabling (ISO/IEC 11801).
 IEEE: Electrical and electronic engineering standards, including networking. Key
contributions: Ethernet (IEEE 802.3), Wi-Fi (IEEE 802.11), Bluetooth (IEEE 802.15).
 ITU: Global telecommunication standards and protocols. Key contributions: DSL
(ITU-T G.992), X.25, radio frequency allocation.
 IETF: Internet standards, especially TCP/IP and networking protocols. Key
contributions: TCP/IP, HTTP, SMTP.
 TIA: Telecommunications and cabling infrastructure standards. Key contributions:
TIA-568 (structured cabling), TIA-942 (data centers).
 ETSI: European telecommunications standards, mobile communication. Key
contributions: GSM, LTE, DVB.
 ANSI: U.S. standards across multiple industries. Key contributions: SONET, Fibre
Channel.
 3GPP: Mobile broadband communication standards (3G, 4G, 5G). Key contributions:
UMTS, LTE, 5G NR.
 Wi-Fi Alliance: Certification and interoperability of Wi-Fi devices. Key contribution:
Wi-Fi CERTIFIED™.
 W3C: Web standards for accessibility, interoperability, and security. Key
contributions: HTML, CSS, XML.

Data Transmission Standards Classification:


 Ownership: Open standards are developed by public or non-profit organizations;
proprietary standards are controlled by a specific company or entity.
 Availability: Open standards are freely available to everyone; proprietary standards
are restricted and require licensing.
 Interoperability: Open standards offer high interoperability and foster compatibility
across systems; proprietary standards often work only within vendor ecosystems.
 Development Process: Open standards are developed collaboratively and often
slowly; proprietary standards are controlled and can evolve faster.
 Innovation: Open standards encourage widespread innovation; proprietary standards
limit innovation to the controlling entity.
 Cost: Open standards are typically free to use; proprietary standards may involve
licensing fees.
 Control: Open standards are decentralized among multiple stakeholders; proprietary
standards are centralized under tight control by one company.

Exercises:

Multiple Choice: Choose the correct letter of the correct answer

1. The inner core of an optical fiber is _____ in composition.


a) glass or plastic
b) bimetallic
c) copper
d) liquid

2. In a noisy environment, the best transmission medium would be _________


a) twisted pair
b) optical fiber
c) coaxial cable
d) the atmosphere

3. ETSI stands for `


a) European Telecommunication Standards Institute
b) European Telephone Standards Institute
c) European Telecommunication Systems Institute
d) European Telecom Standards Institute

4. In asynchronous serial communication the physical layer provides


a) start and stop signaling
b) flow control
c) both (a) and (b)
d) none of the mentioned

5. Communication between a computer and a keyboard involves _______ transmission.
a) Simplex
b) half-duplex
c) full-duplex
d) automatic

6. Which agency developed standards for physical connection interfaces and electronic
signaling specifications?
a) EIA
b) ITU-T
c) ANSI
d) ISO


7. In ___________ transmission, a start bit and a stop bit frame a character byte
a) asynchronous serial
b) synchronous serial
c) parallel
d) (a) and (b)

8. In ________ transmission, we send bits one after another without start or stop bits or gaps.
It is the responsibility of the receiver to group the bits.
a) Synchronous
b) Asynchronous
c) Isochronous
d) none of the above

9. The outer metallic sheath in coaxial cable functions as _______.


a) a connector
b) a second conductor
c) a shield against noise
d) b and c

10. The _______ surrounds the _______ in a fiber-optic cable.


a) fiber; cladding
b) core; fiber
c) core; cladding
d) cladding; core


Unit 6
Communication Protocol

Objectives
 Grasp the basic concepts of communication protocols, including their definitions,
purpose, and importance in network communication.
 Identify and differentiate between various types of communication protocols
 Examine the key functions and features of communication protocols, such as data
integrity, error detection, flow control, and addressing.
 Familiarize with the layered architecture of communication protocols, such as the
TCP/IP model or the OSI model, and understand the role of each layer in facilitating
communication.
 Assess the strengths and weaknesses of different communication protocols based on
criteria like reliability, efficiency, security, and scalability.
Introduction to Communication Protocol
A Communication Protocol is a set of rules, standards, and procedures that enable
devices to communicate with each other by defining how data is formatted, transmitted,
received, and processed. Communication protocols ensure the reliable exchange of
information between devices in a network, regardless of differences in hardware, software, or
location.

Key Components of Communication Protocols:


1. Syntax: Defines the structure and format of the data (e.g., headers, payloads, and
footers) being transmitted, ensuring that both sender and receiver interpret the
message correctly.
2. Semantics: Dictates the meaning of each part of the message, ensuring that the data
and commands exchanged are interpreted correctly.
3. Timing: Governs the order and speed of message exchanges, including how long to
wait for a response or how to handle delays in communication.

Importance of Communication Protocols:


1. Interoperability:
o Ensure that devices from different vendors and operating on various platforms
can communicate seamlessly.
o Protocols define a common language, enabling diverse hardware and software
to interact effectively.

2. Data Integrity:
o Ensure that the data transmitted between devices arrives without errors or
corruption.
o Protocols incorporate error detection and correction mechanisms (e.g.,
checksums, parity bits) to verify data accuracy; a simple checksum sketch
appears after this list.

3. Reliable Data Transmission:


o Provide mechanisms for guaranteed delivery of data, ensuring that information
reaches its intended destination without loss.
o Protocols like TCP ensure reliable communication through acknowledgment
and retransmission of lost packets.


4. Flow Control:
o Prevent overwhelming a receiving device or network by regulating the rate of
data transmission.
o Flow control mechanisms ensure that the sender transmits data at a rate the
receiver can handle, avoiding congestion or data loss.

5. Efficient Data Transfer:


o Maximize the efficient use of network resources, such as bandwidth, by
minimizing unnecessary retransmissions and optimizing packet size.
o Protocols prioritize the use of network capacity, ensuring smooth
communication even under heavy load.

6. Addressing and Routing:


o Enable devices to be uniquely identified within a network and allow for proper
routing of data from the source to the destination.
o Protocols like IP define addressing schemes (e.g., IP addresses) and routing
algorithms for directing data packets across networks.

7. Synchronization:
o Ensure proper timing and coordination between sender and receiver to
maintain data integrity.
o Synchronization mechanisms are especially important for real-time
communications (e.g., video calls, voice over IP).
8. Security:
o Protect data from unauthorized access, tampering, or eavesdropping during
transmission.
o Protocols include encryption, authentication, and integrity checks to safeguard
sensitive information and ensure secure communication (e.g., HTTPS,
SSL/TLS).

9. Scalability:
o Support communication across small to large-scale networks without
significant performance degradation.
o Protocols like TCP/IP are designed to handle the growing demands of the
internet and large distributed systems.

10. Error Detection and Correction:


o Detect and correct errors that may occur during data transmission, ensuring
accurate delivery of information.
o Protocols such as TCP use acknowledgment and retransmission to correct
errors, while others may use forward error correction techniques.

11. Compatibility:
o Ensure that older or less advanced systems can still communicate with newer
technologies.
o Protocols often define backward compatibility features to support legacy
systems.


12. Minimize Latency:


o Reduce the delay between sending and receiving data, especially for real-time
or critical communications.
o Protocols designed for specific applications, such as voice or video, aim to
minimize latency and provide faster response times.

13. Session Management:


o Establish, maintain, and terminate communication sessions between devices.
o Protocols handle session establishment (e.g., opening a connection),
management (keeping it alive), and termination when no longer needed (e.g.,
closing a connection).
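
As referenced in the data-integrity item above, here is a simplified Python sketch of one common error-detection mechanism: a 16-bit ones'-complement checksum in the spirit of the Internet checksum used by IP, TCP, and UDP. It is a teaching version under simplifying assumptions, not a production implementation.

# Simplified 16-bit ones'-complement checksum, in the spirit of the
# Internet checksum used by IP/TCP/UDP. Teaching sketch only.

def checksum16(data: bytes) -> int:
    if len(data) % 2:                  # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        word = (data[i] << 8) | data[i + 1]        # two bytes -> 16-bit word
        total += word
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF             # ones' complement of the folded sum

message = b"HELO"                          # even length keeps the demo simple
cs = checksum16(message)
packet = message + cs.to_bytes(2, "big")   # sender appends the checksum
print(hex(cs), checksum16(packet) == 0)    # prints 0x6b6b True

The receiver recomputes the checksum over the data plus the received checksum field; a zero result indicates the data arrived intact, while any single-bit corruption changes the result and prompts the protocol to discard or retransmit.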

Connection-Oriented Protocols:
Connection-oriented protocols establish a connection before data is transmitted. This
connection ensures that the sender and receiver are synchronized, and data is delivered in the
correct order with error checking.

Key Characteristics:
 Establishment of a Connection: A connection is established between the
communicating devices before any data transmission occurs (a handshake process).
 Reliable Communication: These protocols ensure that the data sent is received
correctly and in sequence. If data is lost or corrupted, the protocol will retransmit it.
 Acknowledgments: The receiver sends acknowledgment (ACK) signals to confirm
that the data packets have been received successfully.
 Flow Control: These protocols implement flow control to prevent overwhelming the
receiver with too much data.
 Error Detection and Correction: Errors are detected, and retransmission
mechanisms are in place to correct them.
Examples of Connection-Oriented Protocols:
 TCP (Transmission Control Protocol): A widely used connection-oriented protocol
in the TCP/IP suite. It establishes a reliable, ordered, and error-checked data transfer
between devices, ensuring that data is delivered accurately.
o Use Case: Web browsing, file transfer, email.
 FTP (File Transfer Protocol): Used for transferring files between a client and a
server, ensuring that the files are received as intended.
 SCTP (Stream Control Transmission Protocol): Provides message-oriented
communication with multi-streaming capabilities, often used for telephony
applications.

Advantages:
 Reliability: Guarantees that data will arrive at the destination correctly.
 Orderly Communication: Ensures that packets are delivered in the right order.
 Flow Control: Prevents congestion by controlling the flow of data.

Disadvantages:
 Overhead: The connection setup (handshake) and maintenance result in more
overhead compared to connectionless protocols.
 Latency: The process of establishing and managing the connection can introduce
delays.
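
To see the connection-oriented pattern in running code, here is a minimal sketch using Python's standard socket and threading modules: connect() and accept() perform the connection setup, and the operating system's TCP stack supplies the acknowledgments and retransmission described above. The loopback address and port are arbitrary assumptions for local testing.

# Minimal TCP echo sketch (standard library only). TCP is connection-
# oriented: connect()/accept() perform the handshake, and the kernel
# handles ACKs, ordering, and retransmission for us.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007    # assumed local endpoint for the demo

def server() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)                      # wait for one connection
        conn, _addr = srv.accept()         # connection established here
        with conn:
            conn.sendall(conn.recv(1024))  # echo the bytes back

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)                            # crude wait for the server to bind

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))              # three-way handshake happens here
    cli.sendall(b"hello over TCP")
    print("echoed:", cli.recv(1024))

Contrast this with the connectionless UDP sketch later in this unit: there is no connect/accept step there, and nothing retransmits a lost datagram.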


Connectionless Protocols:
Connectionless protocols do not establish a connection before data is transmitted.
Instead, they send data packets (called datagrams) without guaranteeing their delivery, order,
or integrity.

Key Characteristics:
 No Connection Setup: There is no handshake or formal connection established
between the sender and receiver.
 Best-Effort Delivery: The protocol sends data without checking whether the receiver
is ready or whether the data arrives successfully.
 No Acknowledgments: The receiver does not send ACK signals, and there is no
mechanism for retransmission in case of data loss.
 Unreliable Communication: Since there is no guarantee that data will reach its
destination, errors are not corrected by the protocol itself.

Examples of Connectionless Protocols:


 UDP (User Datagram Protocol): A widely used connectionless protocol that
provides fast, low-latency communication without ensuring data delivery or order.
o Use Case: Real-time applications like video streaming, online gaming, and
voice-over-IP (VoIP) where speed is more important than reliability.
 IP (Internet Protocol): Handles addressing and routing of data packets across
networks. It does not establish connections and does not guarantee the successful
delivery of packets.
 ICMP (Internet Control Message Protocol): Used by network devices to send error
messages and operational information, such as when a service is unavailable or a host
could not be reached.

Advantages:
 Low Overhead: No need for connection establishment or maintenance, reducing
protocol overhead.
 Faster Transmission: Ideal for time-sensitive applications where speed is prioritized
over reliability.

Disadvantages:
 Unreliable Delivery: No guarantees of packet delivery, order, or integrity.
 No Flow Control: There is no mechanism to prevent network congestion.
 No Error Handling: Errors are not detected or corrected by the protocol itself,
making it less suitable for applications requiring high reliability.
Network Topology
Network topology refers to the physical or logical arrangement of nodes and links in
a computer network. It defines how devices (nodes like computers, servers, switches) are
interconnected, and how data flows between them. Network topology plays a critical role in
the performance, efficiency, and fault tolerance of a network.

Types of Network Topologies:

1. Bus Topology
All devices are connected to a single central cable or backbone. Data is broadcast to all
devices but only the intended recipient accepts it.


 Advantages:
o Easy to install and requires less cabling.
o Cost-effective for small networks.
 Disadvantages:
o Difficult to troubleshoot.
o Performance degrades as more devices are
added.
o A failure in the main cable can bring down
the entire network.
2. Star Topology
All devices are connected to a central hub or switch. Data is sent from one device to the
hub, which then forwards it to the destination device.
 Advantages:
o Easy to install, manage, and troubleshoot.
o Failure of one device does not affect others.
o High performance for small networks.
 Disadvantages:
o Requires more cabling than bus topology.
o If the hub or switch fails, the entire network
goes down.
3. Ring Topology
Devices are connected in a circular loop. Each device is connected to two other devices,
forming a ring. Data travels in one direction (unidirectional) or
both directions (bidirectional), passing through each device
until it reaches its destination.
 Advantages:
o Simple data transmission and no data collisions.
o Predictable performance even with high traffic.
 Disadvantages:
o A single point of failure can disrupt the network.
o Adding or removing devices can disrupt the
network.
4. Mesh Topology
Every device is connected to every other device in the network. Data can take multiple
paths to reach its destination, making the network very
reliable. Full Mesh: Every node is connected to every other
node. Partial Mesh: Some nodes are connected to multiple
nodes, while others are connected to just one.
 Advantages:
o High fault tolerance as data can travel
through multiple routes.
o Reliable and secure.
 Disadvantages:
o Very expensive due to the number of connections required.
o Difficult to set up and maintain.
5. Tree Topology (also known as Hierarchical Topology)


A combination of star and bus topologies. Devices are connected in a hierarchical manner,
with groups of star-configured devices connected to a linear
backbone. Data travels from one level to another through the
hierarchical layers.
 Advantages:
o Easy to manage and scalable.
o Good for large networks, such as schools or
large organizations.
 Disadvantages:
o A failure in a segment can affect large portions of the network.
o Maintenance is complex.
6. Hybrid Topology
A combination of two or more topologies, such as a star-bus or star-ring combination.
Depends on the combination of topologies used.
 Advantages:
o Flexible and scalable.
o Can leverage the strengths of different
topologies.
 Disadvantages:
o Complex and costly to design and
maintain.

Network Architecture
Network architecture refers to the design and structure of a computer network. It
defines how different network components interact, how data is transmitted, and how
network resources are organized. It encompasses both the hardware and software
components, protocols, communication methods, and standards that enable communication
between devices.


Types of Network Architecture:

1. Client-Server Architecture:
In this model, client devices (e.g., computers or mobile devices) request services from a
central server. The server provides resources, services, or data to clients. Clients depend on
the server for data storage and processing.
 Examples:
o Web services where a browser (client) requests data from a web server.
o Email services using an email server.

2. Peer-to-Peer (P2P) Architecture:


In this decentralized model, each device (peer) can act as both a client and a server. No
central server; all devices share resources equally. Used in file-sharing networks or
distributed systems.
 Examples:
o Torrent networks, where users download files from each other’s computers.
o Blockchain technology, which relies on peer-to-peer architecture.

3. Cloud-Based Architecture:
Services and resources are hosted in remote data centers (the cloud), and users access
them over the internet. Scalable and flexible; users pay only for the resources they use.
Reduces the need for local infrastructure.
 Examples:
o Cloud storage services (e.g., Google Drive, Dropbox).
o Cloud computing platforms (e.g., AWS, Microsoft Azure).

4. Software-Defined Networking (SDN):


SDN separates the control plane (network management) from the data plane (packet
forwarding) to make network management more flexible and dynamic. Centralized control
over the entire network via software. Easily adaptable to new networking needs and changes.
 Examples:
o Used in modern data centers and cloud environments for efficient management
of network traffic.

5. Hybrid Architecture:
Combines elements of client-server, peer-to-peer, and cloud-based architectures
depending on the needs of the organization or network. Offers flexibility and scalability. A
mix of centralized and decentralized control.
 Examples:
o Enterprises that use local servers for critical data and cloud services for less
sensitive applications.
Open Systems Interconnection (OSI)
The OSI (Open Systems Interconnection) Model is a conceptual framework used to
understand and standardize the functions of a networking system. It defines how data is
transferred across a network in seven distinct layers. Each layer has a specific role, and they
work together to enable communication between devices. The OSI model was developed by
the International Organization for Standardization (ISO) in 1984 to promote interoperability
between different vendors' networking products.


The Seven Layers of the OSI Model:

1. Physical Layer (Layer 1):


Responsible for the physical connection between devices and the transmission of raw
binary data over that connection. Defines hardware elements like cables, network interface
cards (NICs), and connectors. Manages the electrical, optical, or radio signals used for
communication. Specifies data rate, voltages, and the physical medium. Examples include
Ethernet cables, fiber optics, Wi-Fi (physical aspects), hubs, and repeaters.

2. Data Link Layer (Layer 2):


Provides error-free transfer of data frames between two nodes connected by a physical
layer. Manages the physical addressing (MAC addresses) of devices. Error detection and
correction. Frame synchronization and flow control. Splits data into frames and ensures their
correct order. Sub-layers include MAC (Media Access Control): Manages access to the
physical medium. LLC (Logical Link Control): Handles error checking and flow control.
Examples are Ethernet, Wi-Fi (data link aspects), switches, and network bridges.

3. Network Layer (Layer 3):


Handles routing of data packets across different networks and manages logical addressing
(IP addresses). Routing: Determines the best path for data to travel between source and
destination. Logical addressing: Assigns and manages IP addresses for devices. Packet
forwarding between networks (internetworking). Examples include IP (Internet Protocol),
routers, IPv4, IPv6, and ICMP (ping/traceroute).

4. Transport Layer (Layer 4):


Ensures reliable transmission of data between hosts and manages end-to-end
communication. Segmentation of data into smaller chunks. Flow control: Prevents one side
from sending too much data at once. Error detection and correction: Ensures all data arrives
without errors. Multiplexing: Manages multiple communication sessions simultaneously.


The transport layer is responsible for transferring data between devices using either the User
Datagram Protocol (UDP) or the Transmission Control Protocol (TCP). If the data units are
too large and exceed the maximum packet size, they are divided into smaller segments before
being handed down to Layer 3. Upon reception, the receiving device's Layer 4 reassembles
these segments. Each segment is assigned a sequence number: as the data is fragmented into
smaller parts, the sequence numbers allow the parts to be reorganized correctly for
reassembly.
Layer 4 also manages flow control and error control. Flow control ensures that data
moves between senders and receivers at an optimal rate to prevent a fast connection from
overwhelming a slow one. A receiving device with a slower connection acknowledges the
receipt of segments and allows for the transmission of the next ones. Simultaneously, error
control verifies the packet checksums to ensure the completeness and accuracy of the
received data. Layer 4 also specifies the ports that Layer 5 will utilize for various functions.
For instance, DNS uses Port 53, which is typically open on systems, firewalls, and clients for
transmitting DNS queries. Web servers use port 80 for HTTP or 443 for HTTPS.
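
The sketch below mimics, in a very simplified way, the segmentation and reassembly just described: a message is split into numbered segments, delivered out of order, and put back together by sequence number. The tiny MSS constant is illustrative; real transport protocols add ports, checksums, acknowledgments, and retransmission on top of this idea.

# Simplified sketch of Layer 4 segmentation and reassembly.
import random

MSS = 4  # maximum segment size in bytes (tiny, for illustration)

def segment(data: bytes) -> list[tuple[int, bytes]]:
    """Split data into (sequence_number, chunk) segments."""
    return [(seq, data[i:i + MSS])
            for seq, i in enumerate(range(0, len(data), MSS))]

def reassemble(segments: list[tuple[int, bytes]]) -> bytes:
    """Reorder segments by sequence number and concatenate the chunks."""
    return b"".join(chunk for _, chunk in sorted(segments))

msg = b"segmentation and reassembly demo"
segs = segment(msg)
random.shuffle(segs)               # simulate out-of-order arrival
print(reassemble(segs) == msg)     # True: sequence numbers restore order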

5. Session Layer (Layer 5):


Manages and controls the dialog between two devices, including session establishment,
maintenance, and termination. Establishes, manages, and terminates sessions between
applications. Handles authentication and authorization. Provides dialog control: Allows
communication between devices in half-duplex or full-duplex modes. NetBIOS, RPC
(Remote Procedure Call), and session management in APIs are the primary protocols that are
used here. The session layer handles user authentication from usernames and passwords, as
well as any other authentication protocols, and user logoff to terminate sessions.
6. Presentation Layer (Layer 6):
Translates data between the application layer and the network. Ensures that data is in a
readable format for the application layer. Data encryption and decryption (security). Data
compression to reduce the size of the transmitted data. Data translation and format conversion
(e.g., converting between different character encodings). Example protocol includes
SSL/TLS (encryption), JPEG, GIF, MPEG, ASCII to EBCDIC conversion.

7. Application Layer (Layer 7):


The topmost layer that directly interacts with the end-user and provides network services
to applications. Supports application services such as email, file transfer, and web browsing.
Handles application protocols for communication (e.g., HTTP for web browsing). It includes
protocols such as HTTP, FTP, SMTP (email), DNS, POP3, IMAP, and Telnet.

Key Features of the OSI Model:


 Layered Approach: Each layer is responsible for a specific part of the
communication process, and layers only interact with the ones directly above or
below them (see the encapsulation sketch after this list).
 Interoperability: The OSI model promotes interoperability between different
hardware and software vendors, ensuring that devices from different manufacturers
can communicate.
 Modularity: Changes to one layer (e.g., upgrading hardware in the physical layer) do
not affect the other layers.
 Standardization: Provides a universal set of networking rules and protocols that are
followed by networking devices and systems.
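
As referenced in the list above, here is a toy Python sketch of the layered approach: on the way down each layer wraps the data from the layer above with its own header (encapsulation), and on the way up each layer strips its header. The bracketed headers are illustrative placeholders, not real protocol formats.

# Toy sketch of layered encapsulation/decapsulation in the OSI spirit.
LAYERS = ["TRANSPORT", "NETWORK", "DATALINK"]  # a subset of the seven layers

def encapsulate(payload: str) -> str:
    for layer in LAYERS:                   # data moves down the stack
        payload = f"[{layer}]{payload}"    # each layer prepends its header
    return payload                         # frame handed to the physical layer

def decapsulate(frame: str) -> str:
    for layer in reversed(LAYERS):         # receiver moves back up the stack
        header = f"[{layer}]"
        assert frame.startswith(header), "malformed frame"
        frame = frame[len(header):]
    return frame

frame = encapsulate("GET /index.html")
print(frame)               # [DATALINK][NETWORK][TRANSPORT]GET /index.html
print(decapsulate(frame))  # GET /index.html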


Advantages of the OSI Model:


1. Standardization: Offers a universal framework for vendors to develop interoperable
hardware and software products.
2. Troubleshooting: Simplifies network troubleshooting by isolating issues to specific
layers.
3. Flexibility: Allows for technological evolution as each layer can be updated
independently.
4. Layer Independence: Changes in one layer do not affect others, making network
management easier.
User Datagram Protocol (UDP)
UDP (User Datagram Protocol) is one of the core protocols in the TCP/IP suite, used
for sending short messages called datagrams without establishing a connection. Unlike TCP
(Transmission Control Protocol), UDP is a connectionless and lightweight protocol that
does not provide error checking, flow control, or guaranteed delivery, making it faster but
less reliable. It is ideal for applications where speed is more critical than reliability.

Key Features of UDP:


 Connectionless: No handshake or session establishment is required between sender
and receiver before sending data.
 Unreliable: UDP does not guarantee the delivery of packets, nor does it ensure they
arrive in the correct order.
 No Error Recovery: There is no automatic error checking or correction. If data is lost
or corrupted, it's up to the application to handle the recovery.
 Low Overhead: Since there is no connection setup or error recovery, UDP has less
overhead, resulting in faster data transmission.
 No Flow Control: UDP does not regulate data flow between sender and receiver, so
packets may be sent faster than they can be received.

How UDP Works:


1. Sending Data: The sender breaks the data into smaller packets (datagrams) and sends
them to the destination IP address and port.
2. No Acknowledgment: The receiver does not acknowledge the receipt of data packets.
If packets are lost, they are not retransmitted.
3. Stateless: Each UDP packet is treated independently, with no tracking of sent or
received packets, making it suitable for applications that do not require continuous
sessions.
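
A minimal sketch with Python's standard socket module shows this connectionless pattern directly: sendto() fires a datagram at an explicit address with no handshake, and the receiver simply picks up whatever arrives (or times out). The loopback address and port are assumptions for local testing.

# Minimal UDP sketch (standard library only). No handshake, no ACKs,
# no retransmission: each datagram is on its own.
import socket

HOST, PORT = "127.0.0.1", 50008    # assumed local endpoint for the demo

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind((HOST, PORT))
receiver.settimeout(2.0)           # don't wait forever for a lost datagram

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello over UDP", (HOST, PORT))  # addressed, fire-and-forget

data, addr = receiver.recvfrom(1024)            # one datagram, or timeout
print("received:", data, "from", addr)

sender.close()
receiver.close()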


Applications of UDP:
 Streaming Media: UDP is ideal for real-time applications like video and audio
streaming (e.g., YouTube, Skype), where it's more important to keep the stream
moving than to correct minor errors.
 Online Gaming: Multiplayer online games often use UDP for fast transmission of
game state updates, where occasional packet loss is preferable to delays caused by
retransmissions.
 DNS (Domain Name System): DNS uses UDP to quickly resolve domain names to
IP addresses, as a single request-response exchange is sufficient.
 VoIP (Voice over IP): Voice calls over the internet rely on UDP to minimize delays
in conversation, accepting minor packet loss to ensure real-time communication.

Advantages of UDP:
 Faster Transmission: No connection setup, error correction, or flow control allows
for quick data transmission.
 Low Overhead: Minimal packet header size and no retransmission of lost packets
reduce resource consumption.
 Useful for Real-Time Applications: Ideal for applications that prioritize speed over
perfect reliability, like live streaming or gaming.

Transmission Control Protocol / Internet Protocol (TCP/IP)

TCP/IP (Transmission Control Protocol/Internet Protocol) is a set of communication
protocols used to interconnect network devices on the internet or any private network. It
governs how data is transmitted over a network and ensures reliable communication between
devices. The TCP/IP protocol suite forms the foundation of the internet and supports the
structure of modern networks.

TCP/IP Model Layers:


TCP/IP uses a simplified 4-layer model, whose layers can be mapped onto the OSI (Open Systems
Interconnection) model. The layers are:


1. Link Layer (Network Interface Layer):


Handles the physical transmission of data over network media (wired, wireless, etc.) and
controls hardware addressing. Manages data transfer between adjacent network devices.
Defines how bits are transmitted across physical links.
 Protocols:
o Ethernet: A standard for wired networks.
o Wi-Fi: For wireless networks.
o ARP (Address Resolution Protocol): Resolves IP addresses to physical MAC
addresses.

2. Internet Layer:
Responsible for routing data across networks and determining the best path for data packets
to travel. Assigns logical addresses (IP addresses) to devices. Routes packets through
intermediate routers to reach their destination.
 Protocols:
o IP (Internet Protocol): Handles addressing and routing of data packets.
 IPv4: Most commonly used version of IP, with 32-bit addresses.
 IPv6: Newer version with 128-bit addresses to accommodate more
devices.
o ICMP (Internet Control Message Protocol): Sends error messages and
diagnostic information (e.g., used in the "ping" command).
o IGMP (Internet Group Management Protocol): Manages multicasting in
IPv4.
3. Transport Layer:
Ensures reliable and error-free transmission of data between devices. It is responsible for
end-to-end communication, flow control, and error handling. Manages data flow between
devices. Ensures data is delivered in the correct order without duplication or loss.
 Protocols:
o TCP (Transmission Control Protocol): A connection-oriented protocol that
ensures reliable, ordered delivery of data.
 Key Features: Three-way handshake (connection establishment), error
checking, retransmission of lost packets.
o UDP (User Datagram Protocol): A connectionless protocol that sends data
without guaranteeing delivery, order, or integrity. It's faster but less reliable than
TCP.
 Key Features: Low-latency transmission, often used for real-time
applications like video streaming or gaming.

4. Application Layer:
Provides high-level services for end-user applications and interfaces with the transport
layer. Facilitates communication between applications (e.g., web browsers, email clients) and
the network. Defines protocols for specific data exchange.
 Protocols:
o HTTP/HTTPS (Hypertext Transfer Protocol / Secure): For web browsing.
o FTP (File Transfer Protocol): For transferring files.
o SMTP (Simple Mail Transfer Protocol): For sending emails.
o DNS (Domain Name System): Resolves domain names to IP addresses.
o POP3/IMAP: For retrieving emails.


How TCP/IP Works:


1. Data encapsulation: The application layer generates data, which is then passed down
through the layers. Each layer adds its own header information, which is used for
addressing, routing, and ensuring proper delivery.
2. Transmission: The link layer transmits the data as packets across the network, using
hardware such as routers, switches, and network interfaces.
3. Routing: The internet layer routes the data to the appropriate destination based on the
IP address.
4. Delivery and Reassembly: The transport layer reassembles the data in the correct
order, checks for errors, and handles any necessary retransmissions. The application
layer then delivers the data to the target application.
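
A toy sketch of the encapsulation idea in step 1, written in Python with short text tags standing in for the real TCP, IP, and Ethernet header formats (the tags are illustrative, not actual protocol layouts; removeprefix requires Python 3.9+):

# Each layer prepends its own header to the payload handed down from above.
def encapsulate(payload: bytes) -> bytes:
    segment = b"TCP|" + payload   # transport layer adds ports, sequence numbers
    packet = b"IP|" + segment     # internet layer adds source/destination addresses
    frame = b"ETH|" + packet      # link layer adds MAC addresses
    return frame

# The receiver strips the headers in reverse order, one per layer.
def decapsulate(frame: bytes) -> bytes:
    packet = frame.removeprefix(b"ETH|")
    segment = packet.removeprefix(b"IP|")
    return segment.removeprefix(b"TCP|")

frame = encapsulate(b"GET /index.html")   # application-layer data
print(frame)                              # b'ETH|IP|TCP|GET /index.html'
print(decapsulate(frame))                 # b'GET /index.html'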

Advantages of TCP/IP:
 Interoperability: Works across different hardware and software platforms.
 Scalability: Supports small local networks as well as the global internet.
 Standardization: Based on open standards, enabling widespread adoption and
compatibility.

Disadvantages of TCP/IP:
 Complexity: Can be complex to configure and manage in larger networks.
 Security: Older versions of TCP/IP have security vulnerabilities (e.g., lack of
encryption in early IP protocols).
 Overhead: TCP adds extra overhead due to its reliability mechanisms (e.g., error
checking, acknowledgments).

Key Differences between TCP/IP and OSI:


 Development: TCP/IP was developed by the U.S. Department of Defense; OSI was developed by ISO.
 Layer Count: TCP/IP has 4 layers; OSI has 7 layers.
 Structure: TCP/IP is more streamlined and practical; OSI is more theoretical and comprehensive.
 Protocol Specification: In TCP/IP, protocols are defined within the layers; in OSI, each layer has specific functions defined.
 Flexibility: TCP/IP is more adaptable to different protocols; OSI has a rigid structure, which can complicate integration.
 Use in Practice: TCP/IP is the basis for the Internet and widely used; OSI is primarily used for teaching and understanding.
 Connection: TCP/IP supports both connection-oriented (TCP) and connectionless (UDP) communication; OSI is connection-oriented at the Transport layer and connectionless at the Network layer.
 Interoperability: TCP/IP is focused on practical interoperability of networking products; OSI aims to standardize communication processes and interfaces.
 Error Handling: In TCP/IP, error handling is built into the Transport layer (e.g., TCP); in OSI, it is distributed across multiple layers.
 Real-World Usage: Protocols like HTTP, FTP, and SMTP are implemented within the TCP/IP model; OSI is more of a reference model, not implemented directly in protocols.

Summary
Communication protocols define how data is transmitted, structured, and received over
networks, ensuring devices can exchange information efficiently and reliably. From the basic
formatting of data packets to ensuring secure and reliable data transmission, protocols are the
backbone of modern communication networks, enabling interoperability, scalability, and
security.
The primary objectives of communication protocols are to ensure seamless and reliable
communication across networks, maintain data integrity and security, and optimize the use of
resources like bandwidth and time. These protocols allow devices and systems to interact in a
standardized way, enabling interoperability, scalability, and compatibility in diverse
networking environments.
The OSI model is a conceptual framework that standardizes the functions of networking
systems into seven layers, ensuring smooth communication across diverse network
environments. It provides guidelines for product developers and offers a common language
for network engineers to design, troubleshoot, and maintain networks. While TCP/IP is more
commonly used in practice, the OSI model remains a key reference model for understanding
and teaching network protocols.
OSI Model (7 Layers):
1. Physical Layer: Manages the transmission of raw data over physical media (e.g.,
cables, radio signals).
2. Data Link Layer: Manages error-free data transfer between adjacent nodes (e.g.,
Ethernet).
3. Network Layer: Handles routing of data across networks (e.g., IP).
4. Transport Layer: Ensures reliable delivery of data (e.g., TCP).
5. Session Layer: Manages communication sessions between applications.
6. Presentation Layer: Translates data into a readable format for applications (e.g.,
encryption, compression).
7. Application Layer: Provides services directly to users (e.g., HTTP, FTP, SMTP).

TCP/IP is the most widely used communication protocol suite, enabling devices to
communicate over the internet. With its layered structure, it provides a reliable and flexible
framework for data transmission, routing, and application support across diverse networks. The
combination of TCP's reliability and IP's addressing and routing capabilities makes TCP/IP
essential for modern digital communication.
TCP/IP Model (4 Layers):
1. Link Layer: Combines OSI’s Physical and Data Link layers.
2. Internet Layer: Corresponds to OSI’s Network layer (e.g., IP).
3. Transport Layer: Ensures reliable transmission of data (e.g., TCP, UDP).
4. Application Layer: Combines OSI’s Application, Presentation, and Session layers
(e.g., HTTP, DNS, FTP).

In summary, network architecture encompasses the design, structure, and operation of
networks. It dictates how data flows, how devices interact, and how the network adapts to
changes and demands. Different architectures are suited for various types of applications and


services, depending on the need for scalability, reliability, security, and cost-efficiency.
Protocols define the rules for data transmission, ensuring reliable, secure, and efficient
communication between devices. Examples include TCP/IP, HTTP, and Ethernet. Standards
are created by organizations like IEEE, IETF, and ISO to ensure that networking technologies
are interoperable, secure, and compatible across various devices and systems.
These elements together create a cohesive framework that supports global communication
networks, such as the internet, ensuring efficient and standardized data transmission.

Exercises:

Multiple Choice: Choose the correct letter of the correct answer


1. _______ refers to the way a network is laid out, either physically or logically
a) Line configuration
b) Topology
c) Transmission mode
d) Modulation mode

2. Seven devices are arranged in a mesh topology. _______ physical channels link these
devices.
a) Seven
b) Six
c) Twenty
d) Twenty-one
3. The key element of a protocol is _______.
a) Syntax
b) Semantics
c) Timing
d) All of the above

4. The term HTTP stands for?


a) Hyper terminal tracing program
b) Hypertext tracing protocol
c) Hypertext transfer protocol
d) Hypertext transfer program

5. The session, presentation, and application layers are the _________ support layers.


a) User
b) Network
c) both (a) and (b)
d) neither (a) nor (b)

6. The _______ layer is responsible for delivering data units from one station to the next without
errors.
a) Transport
b) Network
c) data link
d) physical


7. _____________ is a process-to-process protocol that adds only port addresses, checksum
error control, and length information to the data from the upper layer.
a) TCP
b) UDP
c) IP
d) none of the above

8. Which of the following is an application layer service?


a) Remote log-in
b) File transfer and access
c) Mail service
d) All the above

9. When data are transmitted from device A to device B, the header from A's layer 4 is read
by B's _______ layer.
a) Physical
b) Transport
c) Application
d) None of the above

10. As the data packet moves from the upper layers to the lower layers, headers are _______.
a) Added
b) Removed
c) Rearranged
d) Modified


Unit 7
Error Detection and Correction

Objectives
 Identify the different types of errors that can occur during data transmission and
storage
 Understand the sources of these errors, such as noise, interference, and hardware
failures.
 Review various error detection methods, including parity bits, checksums, and cyclic
redundancy checks (CRC)
 Understand how these methods detect errors and their effectiveness in different
scenarios.
 Compare and contrast different error detection and correction techniques based on
their efficiency, complexity, and practical applications.
 Examine the practical applications of error detection and correction in various fields,
such as telecommunications, computer networks, data storage systems, and
multimedia systems.
Types of Error
Error detection and correction are essential techniques in digital communication and
data storage systems that ensure data integrity by identifying and rectifying errors that may
occur during data transmission or storage. Errors can arise from various sources, including
noise, interference, signal degradation, or hardware malfunctions.
When the information sent and received differ, it is called an error. Noise in digital
signals during transmission can cause mistakes in the binary bits as they go from source to
recipient. This implies that a bit may shift from 0 to 1 or from 1 to 0. Every time a message
is transmitted, its data may become distorted or jumbled due to noise; error control for this is
implemented at either the Transport Layer or the Data Link Layer of the OSI Model.
Error-detection codes are appended to digital transmissions as supplementary data in order to
catch such mistakes. This aids in finding any errors that may have occurred while the message
was being sent.


 Single-Bit Errors:
The term "single-bit error" may sound straightforward and innocuous, but in practice it can
corrupt whole data sets, giving the recipient entirely erroneous information upon decoding.
Because the error itself is very small yet can cause significant data damage, an effective
error detection system is needed to identify it.
As an illustration, suppose a sender transmits the data packet 0111 to a recipient, and a
single-bit error occurs during transmission, so the recipient sees 0011 rather than 0111:
only one bit has flipped. When the receiver decodes this to decimal, it obtains 3 (0011)
rather than the correct value, 7 (0111). If this data is then used in complex logic, the
corruption propagates. Error detection and correction techniques such as parity bits and
CRC are the usual countermeasures.


 Burst Errors:
Burst errors involve multiple bits being altered in a contiguous block. Because they can
affect two or more bits, they are more complex to detect and correct than single-bit errors.
Like a single-bit error, a burst error is a transmission fault, but it occurs when many data
bits inside a packet are modified, distorted, or altered while being sent; because this
multibit corruption happens in one quick stretch, it is known as a "burst." The primary causes
of burst errors are impulse noise and communication line interference. A data packet can
become completely garbled and worthless when numerous bits are corrupted. Retransmission is
one way to recover from burst faults, but it consumes more network resources and can itself
suffer new burst errors. Therefore, reliable error detection and correction methods, such as
convolutional or Reed-Solomon codes, are used. These techniques add redundant data so that,
in the event of a burst error, the recipient can reconstruct the original data. For instance,
if the data block 110001 is sent and the recipient receives 101101, a burst error has
occurred, damaging 3 bits in a single occurrence.

Error Detection

Error detection is a crucial aspect of digital communication systems and data storage, aimed
at identifying errors that occur during data transmission or retrieval. By
ensuring data integrity, error detection mechanisms help maintain the reliability of
communication systems.

Common Error Detection Techniques


1. Parity Bits:
A simple error detection method that involves adding an extra bit (parity bit) to a binary
data unit. One extra bit is sent along with the original bits to make the number of 1s either
even (even parity) or odd (odd parity). For even parity, the sender ensures that the total
number of 1s (including the parity bit) is even; for odd parity, the total number of 1s is
made odd. The receiver simply counts the number of 1s in the frame. If the count of 1s is
even and even parity is used, the frame is considered uncorrupted and is accepted; likewise,
if the count of 1s is odd and odd parity is used, the frame is considered uncorrupted.
 Even Parity: The number of 1s in the data plus the parity bit is even.
 Odd Parity: The number of 1s in the data plus the parity bit is odd.


Limitation: Can only detect single-bit errors; it cannot identify which bit is erroneous. If two
bits are interchanged, then it cannot detect the errors.

Example (even parity):
 Original data: 1010110 (contains 4 ones, an even number)
 Parity bit: 0 (to maintain even parity)
 Transmitted data: 10101100

Example (odd parity):
 Original data: 1010110 (contains 4 ones, an even number)
 Parity bit: 1 (to make the total number of 1s odd)
 Transmitted data: 10101101
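
A minimal sketch in Python of the parity mechanism just described, operating on bit strings for readability:

def add_parity(data: str, even: bool = True) -> str:
    # Append one parity bit so the total number of 1s is even (or odd).
    ones = data.count("1")
    if even:
        parity = "0" if ones % 2 == 0 else "1"
    else:
        parity = "1" if ones % 2 == 0 else "0"
    return data + parity

def check_parity(frame: str, even: bool = True) -> bool:
    # Accept the frame only if the count of 1s matches the agreed parity.
    ones = frame.count("1")
    return (ones % 2 == 0) if even else (ones % 2 == 1)

print(add_parity("1010110", even=True))     # 10101100, as in the even-parity example
print(add_parity("1010110", even=False))    # 10101101, as in the odd-parity example
print(check_parity("10101100", even=True))  # True: frame accepted
print(check_parity("10111100", even=True))  # False: a single flipped bit is detected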

2. Checksums:
Checksum is a simple error detection technique used in data communication to ensure
the integrity of transmitted data. It works by creating a value (the checksum) derived from the
data content that is being sent. This value is then sent alongside the original data, allowing the
receiver to verify whether the data has been transmitted correctly.
The data is split into k segments of m bits each in the checksum error detection technique.
To get the total, the segments are summed at the sender’s end using 1’s complement
arithmetic. To obtain the checksum, a complement of the sum is taken. The checksum
segment is sent with the data segments. At the receiver's end, all received segments
(including the checksum) are summed using 1's complement arithmetic, and the sum is then
complemented. If the result is 0, the data is accepted; otherwise, it is rejected.

Example:
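A minimal sketch in Python of the sender- and receiver-side computation, assuming 8-bit segments and arbitrary example values:

def ones_complement_sum(segments, bits=8):
    # Add segments with end-around carry (1's complement arithmetic).
    mask = (1 << bits) - 1
    total = 0
    for seg in segments:
        total += seg
        while total > mask:                      # fold any carry back in
            total = (total & mask) + (total >> bits)
    return total

def make_checksum(segments, bits=8):
    # Sender: the checksum is the complement of the 1's complement sum.
    return ones_complement_sum(segments, bits) ^ ((1 << bits) - 1)

data = [0b11001100, 0b10111010]                  # two example segments
checksum = make_checksum(data)                   # transmitted with the data

# Receiver: the sum of all segments including the checksum is all 1s when
# no error occurred, so its complement is 0 and the data is accepted.
total = ones_complement_sum(data + [checksum])
print(total == (1 << 8) - 1)                     # True: data accepted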


Limitation: May fail to detect errors if two bits are altered in such a way that the
checksum remains unchanged.

3. Cyclic Redundancy Check (CRC):


Cyclic Redundancy Check (CRC) is a widely used error detection technique in digital
communication systems. It is based on treating the data as a large binary number and
performing mathematical operations to create a checksum that represents the data. The CRC
is sent along with the data, and the receiver can use it to detect errors that may have occurred
during transmission. A more robust method that treats data as a polynomial and performs
polynomial division to generate a fixed-length checksum. The resulting remainder is sent
with the data, and the receiver performs the same operation to check for errors. Highly
effective for detecting burst errors, which are groups of bits that are flipped.

* Sender Side (Generation of Encoded Data from Data and Generator Polynomial or
Key):
1. The binary data is first augmented by appending k-1 zeros at the end of the data, where k is the number of bits in the key
2. Use modulo-2 binary division to divide binary data by the key and store remainder of
division.
3. Append the remainder at the end of the data to form the encoded data and send the
same

* Receiver Side (Check if there are errors introduced in transmission)


1. Perform modulo-2 division again; if the remainder is 0, then there are no errors. Here we
focus only on finding the remainder (the check word) and the resulting code word.
Modulo 2 Division:
It is the same as the familiar long division process we use for decimal numbers, but instead
of subtraction we use XOR. In each step, a copy of the divisor (the key) is XORed with the
leading bits of the dividend (the data). The result of the XOR operation leaves a remainder of
n-1 bits, which is used for the next step after one extra bit is pulled down to make it n bits
long. When there are no bits left to pull down, we have the result: the (n-1)-bit remainder
that is appended to the data at the sender side.

Example (no error in transmission): the remainder computed at the receiver is all zeroes, so
the data is accepted.


Example (error in transmission): the data word to be sent is 100100 with key 1101. At the
receiver side, since the remainder is not all zeroes, the error is detected.
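
A minimal sketch in Python of CRC generation and checking with modulo-2 (XOR) division on bit strings, using the data word 100100 and key 1101 from the example above:

def xor_bits(a: str, b: str) -> str:
    return "".join("0" if x == y else "1" for x, y in zip(a, b))

def mod2_remainder(dividend: str, divisor: str) -> str:
    # Modulo-2 long division: XOR instead of subtraction.
    n = len(divisor)
    rem, idx = dividend[:n], n
    while True:
        if rem[0] == "1":
            rem = xor_bits(rem, divisor)   # "subtract" the divisor
        rem = rem[1:]                      # shift out the leading bit
        if idx == len(dividend):
            return rem
        rem += dividend[idx]               # pull down the next bit
        idx += 1

def crc_encode(data: str, key: str) -> str:
    # Sender: append len(key)-1 zeros, divide, append the remainder.
    remainder = mod2_remainder(data + "0" * (len(key) - 1), key)
    return data + remainder

def crc_check(codeword: str, key: str) -> bool:
    # Receiver: the remainder over the whole codeword must be all zeros.
    return "1" not in mod2_remainder(codeword, key)

codeword = crc_encode("100100", "1101")
print(codeword)                        # 100100001
print(crc_check(codeword, "1101"))     # True: no error detected
print(crc_check("100110001", "1101"))  # False: a flipped bit is detected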

4. Vertical Redundancy Check (VRC) and Longitudinal Redundancy Check (LRC)


VRC, also known as parity check, is a simple error detection method that adds a parity bit
to each data unit (byte). The parity bit ensures that the number of 1s in the data unit
(including the parity bit) is either even (even parity) or odd (odd parity), depending on the
chosen scheme.

Longitudinal Redundancy Check (LRC), or 2-D parity check, is an error-checking technique in
which a parity bit is added to each character or byte in a block of data organized in rows
and columns. Each column parity bit is calculated by taking the exclusive OR (XOR) of all the
bits in the same position in each byte.

Vertical Redundancy Check (VRC) and Longitudinal Redundancy Check (LRC) are
two error detection techniques used to ensure data integrity during transmission. Both
methods involve adding redundancy bits to the data to detect errors that might have occurred.
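
A minimal sketch in Python of the two checks on a block of example bytes (even parity assumed):

def vrc_bit(byte: int) -> int:
    # Row (per-byte) parity bit: 1 if the byte has an odd number of 1s.
    return bin(byte).count("1") % 2

def lrc(block) -> int:
    # Column parity: XOR of all bytes gives the longitudinal check byte.
    out = 0
    for byte in block:
        out ^= byte
    return out

data = [0b01010111, 0b01101000, 0b01101001]   # three example bytes
print([vrc_bit(b) for b in data])             # VRC bit for each row
print(f"{lrc(data):08b}")                     # LRC byte across the columns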
Error Correction
Error correction is a technique used in digital communication to detect and correct
errors that occur during data transmission. Unlike error detection methods that only identify


the presence of errors, error correction methods allow the receiver to reconstruct the original
data even if errors are present. Error Correction codes are used to detect and repair mistakes
that occur during data transmission from the transmitter to the receiver. Error correction
techniques rely on the addition of redundant data to the original message. This redundancy
allows the receiver to not only detect errors but also determine their location and make
corrections. Two main approaches to error correction are used:

1. Forward Error Correction (FEC):


o Error correction information is added to the data before transmission.
o The receiver can correct errors without the need for retransmission.
o Commonly used in real-time communication systems where retransmission is
not feasible.

2. Automatic Repeat Request (ARQ):


o Error detection techniques are used to identify errors in the received data.
o If an error is detected, a request is sent back to the sender to retransmit the
data.
o Suitable for situations where a delay in communication is acceptable.
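
A toy simulation in Python of the ARQ idea in its simplest (stop-and-wait) flavor, with a hypothetical lossy channel standing in for a real network; the loss rate and retry limit are arbitrary illustrative values:

import random

random.seed(1)   # deterministic run for illustration

def lossy_channel(frame, loss_rate=0.4):
    # Delivers the frame, or returns None to simulate a lost packet.
    return None if random.random() < loss_rate else frame

def send_with_arq(frame, max_tries=10):
    for attempt in range(1, max_tries + 1):
        if lossy_channel(frame) is not None:
            # Frame arrived; a real receiver would return an ACK here.
            return attempt
        # No ACK before the timeout: retransmit.
    raise RuntimeError("gave up after repeated losses")

print("delivered after", send_with_arq(b"frame-1"), "attempt(s)")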

Applications of Error Correction


 Telecommunications: Used in cellular networks, satellite communications, and fiber-
optic data transmission to ensure reliable communication.
 Data Storage: Employed in hard drives, SSDs, and memory devices to correct data
corruption and ensure data integrity.
 Multimedia Streaming: Real-time streaming ` services like video and audio use error
correction to maintain quality without retransmitting data.
 Wireless Communication: Error correction is crucial in Wi-Fi, Bluetooth, and other
wireless technologies to handle interference and signal degradation.

Advantages of Error Correction


 Increased Reliability: Error correction improves the reliability of data transmission
by minimizing the impact of errors.
 No Retransmission Needed: Forward Error Correction (FEC) reduces the need for
retransmissions, making it suitable for real-time applications.
 Efficient Communication: Ensures that data is transmitted accurately, even over
noisy communication channels.

Limitations of Error Correction


 Increased Complexity: Error correction techniques can be computationally intensive,
requiring additional processing power.
 Redundant Data Overhead: Adding redundant data increases the size of the
transmitted message, leading to higher bandwidth usage.
 Not Always Perfect: While effective, some error correction codes might still fail in
highly corrupted transmission conditions.

Hamming Code
Hamming code is a linear error-correcting code developed by Richard Hamming in the
1950s. It is used to detect and correct single-bit errors in data transmission. Hamming code
adds redundancy bits to the original data to create a code word, enabling the detection and


correction of errors in the transmitted message. To generate a Hamming code, we add parity
bits to the original data bits. The number of parity bits, denoted as r, depends on the length of
the data bits m and satisfies the following condition:
2^r ≥ m + r + 1
 m is the number of data bits.
 r is the number of parity bits.
The parity bits are placed at positions that are powers of 2: 2^0, 2^1, 2^2, and so on. For
example, in a 7-bit Hamming code, the parity bits are placed at positions 1, 2, and 4.

Algorithm of Hamming Code


Hamming Code is simply the use of extra parity bits to allow the identification of an error.
Step 1: Write the bit positions starting from 1 in binary form (1, 10, 11, 100, etc).
Step 2: All the bit positions that are a power of 2 are marked as parity bits (1, 2, 4, 8, etc).
Step 3: All the other bit positions are marked as data bits.
Step 4: Each data bit is included in a unique set of parity bits, as determined by its bit position
in binary form:

 Parity bit 1 covers all the bits positions whose binary representation includes a 1 in the
least significant position (1, 3, 5, 7, 9, 11, etc).
 Parity bit 2 covers all the bits positions whose binary representation includes a 1 in the
second position from the least significant bit (2, 3, 6, 7, 10, 11, etc).
 Parity bit 4 covers all the bits positions whose binary representation includes a 1 in the
third position from the least significant bit (4–7, 12–15, 20–23, etc).
 Parity bit 8 covers all the bits positions whose binary representation includes a 1 in the
fourth position from the least significant bit (8–15, 24–31, 40–47, etc).
 In general, each parity bit covers all bits where the bitwise AND of the parity position
and the bit position is non-zero.

Step 5: Since we check for even parity set a parity bit to 1 if the total number of ones in the
positions it checks is odd. Set a parity bit to 0 if the total number of ones in the positions it
checks is even.

Example: If the data to be transmitted is 1011001


Number of data bits = 7
Thus, number of redundancy bits = 4
Total bits = 7+4 = 11

Redundant bits are always placed at positions that correspond to the power of 2, so the
redundant bits will be placed at positions: 1,2,4 and 8.

With the redundant bits in place, the 11-bit arrangement is R1 R2 D R4 D D D R8 D D D, where
positions 3, 5, 6, 7, 9, 10, and 11 hold the data bits. Here, R1, R2, R4 and R8 are the
redundant bits.


Determining the parity bits:


R1: We look at bits 1, 3, 5, 7, 9, 11 to calculate R1. In this case, because the number of 1s in these bits together is even, we make
the R1 bit equal to 0 to maintain even parity.

R2:

We look at bits 2,3,6,7,10,11 to calculate R2. In this case, because the number of 1s in these
bits together is odd, we make the R2 bit equal to 1 to maintain even parity.
R4:

We look at bits 4,5,6,7 to calculate R4. In this case, because the number of 1s in these bits
together is odd, we make the R4 bit equal to 1 to maintain even parity.
R8:

We look at bits 8,9,10,11 to calculate R8. In this case, because the number of 1s in these bits
together is even, we make the R8 bit equal to 0 to maintain even parity.
Thus, the final block of data which is transferred is 01110010101 (reading positions 1 through 11, with R1 = 0, R2 = 1, R4 = 1, and R8 = 0).
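
A sketch in Python of the whole encoding procedure; note that, as in the worked example above, the data string is placed into the code word starting from its least significant (rightmost) bit:

def hamming_encode(data: str) -> str:
    m = len(data)
    r = 0
    while 2 ** r < m + r + 1:        # smallest r with 2^r >= m + r + 1
        r += 1

    bits = {}                        # 1-indexed bit positions
    data_bits = list(reversed(data)) # fill starting from the rightmost data bit
    pos = 1
    while data_bits:
        if pos & (pos - 1) == 0:     # powers of 2 are parity positions
            bits[pos] = "0"          # placeholder, filled in below
        else:
            bits[pos] = data_bits.pop(0)
        pos += 1
    n = pos - 1

    # Parity bit p covers every position whose bitwise AND with p is non-zero.
    for p in (2 ** i for i in range(r)):
        ones = sum(bits[i] == "1" for i in range(1, n + 1) if i & p and i != p)
        bits[p] = "1" if ones % 2 else "0"   # even parity

    return "".join(bits[i] for i in range(1, n + 1))

print(hamming_encode("1011001"))   # 01110010101 (positions 1 through 11)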

Summary

Error detection and correction are essential techniques in data communication, ensuring that
transmitted data is accurately received. They help to maintain the integrity and
reliability of information exchanged over communication channels, which can be prone to
errors due to noise, interference, or signal degradation.

Error Detection
Error detection techniques identify whether errors have occurred during data transmission.
These techniques rely on adding extra bits to the original data, which are used to check the
accuracy of the received data. Common error detection methods include:
 Parity Bit: Adds a single bit to make the number of 1s either even (even parity) or
odd (odd parity).
 Checksum: A value calculated from the data, used to verify the integrity of data
received.
 Cyclic Redundancy Check (CRC): A polynomial-based method that generates a
remainder used to check data accuracy.


 Vertical Redundancy Check (VRC) and Longitudinal Redundancy Check (LRC):
Techniques that use rows and columns of bits to detect errors.

Error Correction
Error correction goes a step further by not only detecting the error but also determining the
location of the error so it can be corrected without the need for retransmission. It involves
adding more redundancy to the transmitted data to enable the receiver to reconstruct the
original data. Common error correction methods include:

Hamming Code: Corrects single-bit errors and detects two-bit errors by using parity bits
placed at specific positions in the data.
2^r ≥ m + r + 1
Key Differences
 Error Detection identifies the presence of errors but does not correct them, while
Error Correction not only detects but also corrects the errors.
 Error detection methods are simpler and require less computational power compared
to error correction techniques, which are more complex due to the need to locate and
correct errors.

Importance
 Reliability: Ensures data integrity by minimizing errors in digital communication.
 Efficiency: Reduces the need for retransmissions, which can save time and
bandwidth.
 Data Accuracy: Guarantees that information is accurately transmitted and received,
which is critical in real-time communication systems.

Error detection and correction are fundamental to reliable data communication. Error
detection methods like parity bits, checksums, and CRC identify errors, while error correction
techniques like Hamming code allow for the recovery of the original data. Together, they
form the backbone of data integrity in communication systems, enhancing the accuracy and
reliability of data transfer.

Exercises:

1. For a 12 bit data string of 101100010010, determine the number of Hamming bits
required, and the Hamming code.
Ans.

2. In CRC, if the data unit is 100111001 and the divisor is 1011, then what is divided at the
receiver?
Ans.


3. A bit stream 1101011011 is transmitted using the standard CRC method. The generator
polynomial is x^4 + x + 1. What is the actual bit string transmitted?
Ans.

4. A Hamming code 0110001 is received. Find the correct code that was transmitted.
Ans.

5. If the data transmitted along with the checksum is 10101001 00111001 00011101, but the
data received at the destination is 00101001 10111001 00011101, detect the error.
Ans.


Unit 8
Introduction to Computer Network and Security

Objectives
• Define key terms related to computer networks and security, including nodes, protocols,
encryption, and authentication.
• Discuss different types of networks (LAN, WAN, MAN, etc.) and their characteristics.
• Identify common security threats (malware, phishing, DDoS attacks) and
vulnerabilities that can compromise network security.
• Explain the importance of encryption in securing data in transit and at rest.
• Discuss emerging technologies and trends impacting network security, such as AI,
machine learning, and zero-trust security models.
• Explore the challenges and opportunities presented by advancements in networking
and security technologies.
Computer Network
Computer Network refers to a collection of interconnected devices that share resources
and communicate with each other over various media. The primary purpose of a computer
network is to enable data exchange, resource sharing, and connectivity among computers,
servers, and other devices.

In the context of computer networks, nodes are individual devices or endpoints that are
connected to the network. Each node can send, receive, or forward data, playing a crucial role
in communication and data transfer within the network. Nodes can be computers, servers,
routers, switches, printers, smartphones, or any other device that has a network address and
can communicate over a network. Nodes are essential components of computer networks,
acting as connection points that communicate and share data. Understanding the role of nodes
helps in managing network infrastructure, optimizing data flow, and implementing security
measures. They are critical to the functioning of any network, from small local setups to vast
global systems like the internet.

Types of Nodes
1. End Devices (Hosts):
o Computers: Desktop PCs, laptops, or workstations used by individuals for
various tasks.
o Mobile Devices: Smartphones, tablets, and other portable devices connected
to the network.
o Servers: Powerful machines that store, process, and deliver data to other
nodes on the network.


o Printers/Scanners: Network-connected peripherals that can receive print jobs
or data requests.
2. Networking Devices:
o Routers: Nodes that direct data packets between different networks,
determining the best path for data to travel. Routers are networking devices
that use headers and forwarding tables to find the optimal way to forward data
packets between networks. A router is a computer networking device that links
two or more computer networks and selectively exchanges data packets
between them. A router can use address information in each data packet to
determine if the source and destination are on the same network or if the data
packet has to be transported between networks. When numerous routers are
deployed in a wide collection of interconnected networks, the routers share
target system addresses so that each router can develop a table displaying the
preferred pathways between any two systems on the associated networks.
o Switches: Devices that connect multiple nodes within the same network,
forwarding data only to the specific device it is meant for. A switch differs
from a hub in that it only forwards frames to the ports that are participating in
the communication, rather than to all connected ports. A switch breaks up the
collision domain, but it still presents itself as a single broadcast domain.
Switches make frame-forwarding decisions based on MAC addresses.
o Hubs: Basic networking devices that broadcast data to all connected nodes,
regardless of the intended recipient. A hub joins together many twisted pair or
fiber optic Ethernet devices so that they act as a single network segment; the
device can be visualized as a multiport repeater. A network hub is a relatively
simple broadcast device: any packet entering any port is regenerated and
broadcast out on all other ports, and hubs do not manage any of the traffic
that passes through them. Because every packet is sent out through all other
ports, packet collisions occur, substantially impeding the smooth flow of
communication.
o Access Points: Devices that enable wireless communication between wired
networks and wireless devices.

3. Specialized Nodes:
o Firewalls: Security devices that monitor and filter incoming and outgoing
network traffic based on predefined security rules.


o Gateways: Nodes that serve as intermediaries between different networks or
communication protocols, enabling data exchange. To provide system
compatibility, a gateway may contain devices such as protocol translators,
impedance matching devices, rate converters, fault isolators, or signal
translators. It also necessitates the development of administrative procedures
that are acceptable to both networks. By completing the necessary protocol
conversions, a protocol translation/mapping gateway joins networks that use
distinct network protocol technologies.
o Modems: Devices that convert digital data from a computer into a format
suitable for transmission over telephone or cable lines, and vice versa.

Types of Computer Networks

1. Local Area Network (LAN)


A network that connects devices
within a limited geographic area, such
as a home, office, or building. LAN
provides a useful way of sharing the
resources between end users. The
resources such as printers, file servers,
scanners, and internet are easily
sharable among computers. High data
transfer rates. Low latency due to short
distances between connected devices.
Typically owned, controlled, and managed by a single organization. A small office with
computers, printers, and other devices connected to a central server.

2. Metropolitan Area Network (MAN)


A network that spans a city or a
large campus, providing connectivity
within a specific geographic area. It can
be in the form of Ethernet, Token-ring,
ATM, or Fiber Distributed Data
Interface (FDDI). Larger than a LAN
but smaller than a WAN. Used to
connect multiple LANs within a city or
town. Often operated by governments,
universities, or large organizations. A
city's public Wi-Fi network or a university campus network.

3. Wide Area Network (WAN)


A network that covers a large geographic area, connecting multiple LANs or MANs.
Generally, telecommunication networks are Wide Area Network. These networks provide
connectivity to MANs and LANs. Since they are equipped with very high speed backbone,
WANs use very expensive network equipment. WAN may use advanced technologies such as
Asynchronous Transfer Mode (ATM), Frame Relay, and Synchronous Optical Network
(SONET). A WAN may be managed by multiple administrations. It can span cities, countries, or
even continents. Lower data transfer speeds compared to LANs due to long-distance
communication. Often uses leased telecommunication lines or the internet for connectivity.
The internet itself is the largest WAN, connecting millions of smaller networks globally.


4. Personal Area Network (PAN)


A network designed for personal use,
connecting devices within the range of a
single person. A personal area network
(PAN) interconnects technology devices,
typically within the range of a single user,
which is approximately 10 meters or 33 feet.
This type of network is designed to enable
devices in a small office or home office
(SOHO) environment to communicate and
share resources, data and applications either
wired or wirelessly. Uses technologies like
Bluetooth, infrared, or USB for
communication. Suitable for connecting
personal devices like smartphones, tablets, laptops, and wearable gadgets. A wireless
connection between a smartphone and Bluetooth headphones is a typical example of a PAN.

5. Campus Area Network (CAN)


A network that connects multiple LANs
within a limited area such as a
university campus or business campus.
A campus area network, or CAN, is a
network used in educational
environments such as universities or
school districts. While each department
in a school might use its own LAN, all
the school's LANs could connect
through a CAN. Campus area networks combine several independent networks into one
cohesive unit. For example, the English and engineering departments at a university might
connect through a CAN to communicate with each other directly. Larger than a LAN but
smaller than a MAN.

6. Storage Area Network (SAN)


A specialized high-speed network that
provides access to consolidated block-level
data storage. A storage area network, or a
SAN, is a network used to store large
amounts of sensitive data. It provides a way to
centralize data on a non-localized network
that differs from the main operating one.
For example, a SAN might store customer
information on a separate network to
maintain the high speed of the main network.
Designed to handle large volumes of data
storage, improving performance and
availability. Used in data centers and
enterprise environments for centralized data storage. Example of which is a corporate network
that connects servers to storage devices for efficient data management.


7. Enterprise Private Network (EPN)


A network specifically designed to
connect multiple sites within a large
organization. Controlled by a single
organization for internal use. An
enterprise private network, or an EPN, is
an exclusive network that businesses
build and operate to share company
resources at high speeds. Ensures
secure communication and data transfer
between different branches or
departments. For example, a high-
security technology company might use an EPN to reduce the risk of data breaches.

8. Virtual Private Network (VPN)


A secure network that uses public
telecommunication infrastructure (like the
internet) to provide remote users with
secure access to their organization's
network. Uses encryption and other security
measures to protect data transmission over
the public internet. A virtual private
network, or VPN, is a private network that's
available through the internet. This type of
network functions similarly to an EPN
because it provides a secure, private
connection. VPNs typically don't require the same infrastructure as EPNs. Both the general
public and companies can use VPNs to ensure privacy and security.

Key Objectives of Creating and Deploying a Computer Network


1. Resource sharing
Today’s enterprises are spread across the globe, with critical assets being shared
across departments, geographies, and time zones. Clients are no more bound by location. A
network allows data and hardware to be accessible to every pertinent user. This also helps
with interdepartmental data processing. For example, the marketing team analyzes customer
data and product development cycles to enable executive decisions at the top level.

2. Resource availability & reliability


A network ensures that resources are not present in inaccessible silos and are
available from multiple points. The high reliability comes from the fact that there are usually
different supply authorities. Important resources must be backed up across multiple machines
to be accessible in case of incidents such as hardware outages.

3. Performance management
A company’s workload only increases as it grows. When one or more processors are
added to the network, it improves the system’s overall performance and accommodates this
growth. Saving data in well-architected databases can drastically improve lookup and fetch
times.

4. Cost savings


Huge mainframe computers are an expensive investment, and it makes more sense to
add processors at strategic points in the system. This not only improves performance but also
saves money. Since it enables employees to access information in seconds, networks save
operational time, and subsequently, costs. Centralized network administration also means that
fewer investments need to be made for IT support.

5. Increased storage capacity


Network-attached storage devices are a boon for employees who work with high
volumes of data. For example, every member in the data science team does not need
individual data stores for the huge number of records they crunch. Centralized repositories
get the job done in an even more efficient way. With businesses seeing record levels of
customer data flowing into their systems, the ability to increase storage capacity is necessary
in today’s world.

6. Streamlined collaboration & communication


Networks have a major impact on the day-to-day functioning of a company.
Employees can share files, view each other’s work, sync their calendars, and exchange ideas
more effectively. Every modern enterprise runs on internal messaging systems such as Slack
for the uninhibited flow of information and conversations. However, emails are still the
formal mode of communication with clients, partners, and vendors.

7. Reduction of errors
Networks reduce errors by ensuring that all involved parties acquire information from
a single source, even if they are viewing it from different locations. Backed-up data provides
consistency and continuity. Standard versions of `customer and employee manuals can be
made available to a large number of people without much hassle.

8. Secured remote access


Computer networks promote flexibility, which is important in uncertain times like
now when natural disasters and pandemics are ravaging the world. A secure network ensures
that users have a safe way of accessing and working on sensitive data, even when they’re
away from the company premises. Mobile handheld devices registered to the network even
enable multiple layers of authentication to ensure that no bad actors can access the system.
Encryption and Decryption

Encryption is the process of converting a normal message (plaintext) into a meaningless
message (ciphertext). Decryption is the process of converting a meaningless message
(ciphertext) back into its original form (plaintext). The key distinction is that encryption
converts a message into an unintelligible form that is undecipherable unless decrypted,
whereas decryption is the recovery of the original message from the encrypted information.


Encryption is a process used to convert data into a coded format to protect it from
unauthorized access. It is one of the most crucial methods in data security, ensuring that
sensitive information remains confidential and secure during storage or transmission.
Encryption uses algorithms to transform plaintext (readable data) into ciphertext (an
unreadable, encoded format), which can only be decoded by authorized users with the correct
decryption key. Data can be secured with encryption by being changed into an unintelligible
format that can only be interpreted by a person with the proper decryption key. Sensitive data,
including financial and personal information as well as communications over the internet, is
frequently protected with it.
Decryption is the process of converting encrypted data (ciphertext) back into its original,
readable form (plaintext) using a decryption key. It is the reverse of the encryption process,
allowing authorized users to access the information that was securely transmitted or stored.

Algorithm for Encryption / Decryption


1. Symmetric Encryption / Decryption (Private Key Encryption)
Symmetric encryption uses the same key for both encryption and decryption of the data.
Faster than asymmetric encryption due to simpler algorithms. Both the sender and receiver
must have the secret key, making key distribution a challenge. Ideal for encrypting large
amounts of data.

Examples:
 Advanced Encryption Standard (AES): Widely used in modern encryption for
securing sensitive data. AES (Advanced Encryption Standard) was developed for the
government to protect electronic data and eventually replaced DES as the standard.
AES is available for free public or private use. AES encrypts data and does so by using
one of three symmetric key sizes: 128, 192, or 256 bits. Each key size uses a different
number of encryption rounds: a 128-bit key uses 10 rounds, a 192-bit key 12 rounds, and a
256-bit key 14 rounds. The longer the key, the more difficult it is for a bad actor to
intercept the message and hack it.

 Data Encryption Standard (DES): An older encryption standard, now largely


replaced by more secure algorithms. DES uses cipher block chaining (CBC) and a 56-
bit key to encrypt 64-bit blocks of plaintext in multiple iterations.

 Triple DES (3DES): An improvement over DES, applying the algorithm three times
to increase security. For background: DES was developed in 1971 by IBM and became
the standard soon after its development. In 1976, the U.S. government adopted DES
as its standard, and in 1977 it was recognized as a standard by the National
Bureau of Standards. DES uses a symmetric encryption key with an effective length
of only 56 bits, which left the system limited and vulnerable to brute-force
attacks, the most basic form of digital attack. Because the single DES key offered
relatively few possible combinations, 3DES strengthens it by applying the cipher
three times.
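
To illustrate the defining property of symmetric encryption (one shared key both encrypts and decrypts), here is a toy XOR cipher in Python; it is for illustration only and provides none of the security of real algorithms such as AES:

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key. Because XOR is its own
    # inverse, applying the same key a second time recovers the input.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shared_key = b"shared-secret"              # known to both sender and receiver
ciphertext = xor_cipher(b"meet at noon", shared_key)
plaintext = xor_cipher(ciphertext, shared_key)
print(ciphertext)   # unreadable bytes
print(plaintext)    # b'meet at noon'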


2. Asymmetric Encryption / Decryption (Public Key Encryption)


Asymmetric encryption uses a pair of keys: a public key for encryption and a private
key for decryption. The public key can be shared openly, while the private key remains
confidential. More secure than symmetric encryption
but slower due to complex mathematical
operations. Often used to establish secure communication channels.

Examples:
 Rivest-Shamir-Adleman (RSA): One of the most widely used public-key
cryptosystems for secure data transmission. RSA, named after MIT mathematicians
Ronald Rivest, Adi Shamir and Leonard Adelman, is a cipher that relies on asymmetric
encryption and is widely used for the secure transmission of data. It uses the
public/private method of encryption and decryption and has a slower transfer of data
rate, when compared to other encryption algorithms. RSA is a secure cipher that has
been proven to be safe. It is difficult to break due to its use of prime factorization to
generate keys.
 Elliptic Curve Cryptography (ECC): Uses elliptic curves for stronger security with
shorter keys, often used in mobile devices and smart cards. It is a key-based technique
for encrypting data: ECC focuses on pairs of public and private keys for decryption and
encryption of web traffic. Elliptic curve cryptography (ECC) is a public key


cryptographic algorithm used to perform critical security functions, including


encryption, authentication, and digital signatures. ECC is based on the elliptic curve
theory, which generates keys through the properties of the elliptic curve equation,
compared to the traditional method of factoring very large prime numbers.
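
A toy walk-through in Python of the RSA idea with deliberately tiny primes (the classic textbook values p = 61 and q = 53; real keys use primes hundreds of digits long, and the modular-inverse form of pow requires Python 3.8+):

# Key generation with tiny illustrative primes.
p, q = 61, 53
n = p * q                  # 3233: the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # 2753: private exponent (modular inverse of e)

message = 65                       # a message encoded as a number smaller than n
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # only the private key (d, n) decrypts

print(ciphertext)             # 2790
print(recovered == message)   # True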

The choice of a specific type of encryption is crucial because it directly impacts the security,
performance, and usability of data protection methods. Each type of encryption has its
strengths, weaknesses, and ideal use cases, making it important to choose the right one based
on the needs of a given application.

Security Requirements: The choice of encryption depends on the level of security needed.
For instance, financial institutions or government data require the highest level of encryption,
such as AES-256 or RSA with long key lengths.

Data Sensitivity: Highly sensitive information, like personal identifiable information (PII) and
classified documents, often requires both symmetric and asymmetric encryption to ensure
secure data transfer and storage.

Performance Needs: For applications that require fast processing, symmetric encryption like
AES is preferred due to its lower computational overhead. Asymmetric encryption might be
used only for the initial key exchange, followed by symmetric encryption for data transfer.

Key Management: Asymmetric encryption is ideal when key management is a concern since
it eliminates the need to securely share keys over insecure channels. This is critical in
applications like digital certificates and SSL/TLS` for secure internet communications.

Device Capabilities: Devices with limited computational power, such as IoT devices, often
rely on lightweight symmetric encryption or ECC (a more efficient form of asymmetric
encryption) to ensure data security without overwhelming the system resources.

Compliance Standards: Many industries have specific regulations regarding encryption


methods. For example, AES is a standard requirement in the financial industry and by the U.S.
government for secure communications.

Virus, Worms, and Hacking


Viruses, worms, and hacking are common cybersecurity threats that can compromise
the security, privacy, and functionality of computer systems and networks. Understanding each
of these threats is essential to protecting data and maintaining secure digital environments.

Viruses
A virus is a type of malicious software (malware) that attaches itself to a legitimate program
or file. When the infected program runs, the virus activates and can replicate itself, spreading
to other files and programs on the computer. Requires user interaction to spread (e.g., opening
an infected file or running a malicious program). Can corrupt, delete, or modify files and disrupt
system operations.

Types of Viruses:
 File Infectors: Attach themselves to executable files and spread when the file is run.
As one of the most popular types of viruses, a file-infector virus arrives embedded or


attached to a computer program file – a file with an .EXE extension in its name. When
the program runs, the virus instructions are activated along with the original program.
The virus then carries out the instructions in its code: it could delete or damage files on
your computer, attempt to implant itself within other program files on your computer, or do
anything else that its creator dreamed up to cause havoc. The presence of a file-infector
virus can often be detected because the size of a file may have suspiciously increased. If a
program file is too big, a virus may account for the extra size. At this point, you need to
know two things: what size the file(s) should be when fresh from the software maker, and
whether the virus is a cavity seeker, a dangerous type that hides itself in the unused space
of a computer program.
 Boot Sector Viruses: Infect the master boot record of a computer and load when the
system starts up. While less common today, boot-sector viruses were once the mainstay
of computer viruses. The boot sector is the portion (sector) of a floppy disk or hard
drive that the computer first consults when it boots up. The boot sector provides
instructions that tell the computer how to start up; the virus tells the computer to load
itself during that start up.
 Macro Viruses: Infect files like Word documents or Excel spreadsheets by exploiting
macro scripting features. During the late 1990s and early 2000s, macro viruses were the
most prevalent viruses. Unlike other virus types, macro viruses are not specific to an
operating system and spread with ease via attachments, floppy disks, Web downloads,
file transfers, and cooperative applications. Popular applications that support macros
(such as Microsoft Word and Microsoft Excel) are the most common platforms for this
type of virus. These viruses are written in Visual Basic and are relatively easy to create.
Macro viruses infect at different points during a file's use, for example, when it is
opened, saved, closed, or deleted. `
Effects:
 Slows down system performance.
 Causes data loss and corruption.
 Can lead to identity theft if personal data is compromised.

2. Worms
A worm is a type of malware that can self-replicate and spread independently without the
need for user interaction. Worms often exploit vulnerabilities in software or operating systems
to propagate. Does not require a host file or user action to spread. Can quickly infect multiple
systems over a network. A worm is a malicious program that originates on a single computer
and searches for other computers connected through a local area network or Internet
Connection. When a worm finds another computer, it replicates itself onto that computer and
continues to look for other connected computers on which to replicate. A worm continues to
attempt to replicate itself indefinitely or until a self-timing mechanism halts the process. It does
not infect other files. A worm code is a stand-alone code. In other words, a worm is a separate
file.

Types of Worms:
 Email Worms: Spread through infected email attachments or links. The worm uses the
mailbox as a client: the mail carries an infected link or attachment which, once
opened, downloads the worm. The worm then searches the email contacts of the infected
system and sends links so that those systems are also compromised. These worms often
use double extensions (such as .mp4 or other video extensions) so that the user
believes them to be media files, or carry a shortened link instead of a visible
attachment. Once the link is clicked and the worm is downloaded, it either deletes the data or


modifies it, and the network is compromised. An example of an email worm is the
ILOVEYOU worm, which infected computers in 2000.
 Internet Worms: Exploit vulnerabilities in network protocols to spread from one
computer to another. The internet is used as a medium to search for other vulnerable
machines and infect them. Systems without antivirus software installed are most easily
affected by these worms. Once vulnerable machines are located, they are infected, and the
same process starts all over again from those systems, exploiting hosts that lack recent
updates and security measures. The worm spreads
through the internet or local area network connections.
 Instant Messaging Worms: Spread through messaging apps by sending infected links
or files to contacts. These worms work like email worms: the contacts from chat rooms
are taken and messages are sent to those contacts. Once a contact accepts the invitation
and opens the message or link, that system is infected. The worms carry either links that
open websites or attachments to download. These worms are not as effective as other
worms, and users can often remove them by changing their passwords and deleting the
messages.
Effects:
 Consumes network bandwidth, leading to slow internet connections.
 Can cause massive data loss and disrupt services.
 May install backdoors that allow hackers to control infected systems.

3. Hacking
Hacking refers to unauthorized access to or manipulation of computer systems, networks,
or data. Hackers use various techniques to exploit vulnerabilities in software, hardware, or
network security. Hacking in cyber security refers to the misuse of devices like computers,
smartphones, tablets, and networks to cause damage to or corrupt systems, gather information
on users, steal data and documents, or disrupt data-related activity.
Types of Hackers:
 White Hat Hackers: Ethical hackers who help organizations find and fix security
vulnerabilities. White hat hackers can be seen as the “good guys” who attempt to
prevent the success of black hat hackers through proactive hacking. They use their
technical skills to break into systems to assess and test the level of network security,
also known as ethical hacking. This helps expose vulnerabilities in systems before
black hat hackers can detect and exploit them. The techniques white hat hackers use
are similar to or even identical to those of black hat hackers, but these individuals are
hired by organizations to test and discover potential holes in their security defenses.
 Black Hat Hackers: Malicious hackers who exploit vulnerabilities for personal gain,
financial theft, or to cause damage. Black hat hackers are the "bad guys" of the
hacking scene. They go out of their way to discover vulnerabilities in computer systems
and software to exploit them for financial gain or for more malicious purposes, such as
to gain reputation, carry out corporate espionage, or as part of a nation-state hacking
campaign. These individuals’ actions can inflict serious damage on both computer users
and the organizations they work for. They can steal sensitive personal information,
compromise computer and financial systems, and alter or take down the functionality
of websites and critical networks.
 Gray Hat Hackers: Operate between ethical and unethical hacking, sometimes
violating laws but not with malicious intent. Grey hat hackers sit somewhere between
the good and the bad guys. Like black hat hackers, they may violate standards and
principles, but unlike them, they do so without intending to do harm or to gain
financially. Their actions are typically carried out for the common good. For example,
they may exploit a vulnerability to raise awareness that it exists, but unlike white hat
hackers, they do so publicly. This also alerts malicious actors to the existence of the
vulnerability.

Common Hacking Techniques:
 Phishing: Tricks users into revealing sensitive information through fake emails or
websites. Phishing is a type of cyberattack typically launched via email, although other
types exist. It works by impersonating the identity of a person or company with the aim
of getting the recipient of the message to take some action, such as downloading a file
or clicking on a link, to execute the malware hidden within. This way, the cybercriminal
gains control over a system.
 SQL Injection: Involves inserting malicious SQL queries into a database to gain
unauthorized access or manipulate data. A structured query language (SQL) injection
attack specifically targets servers storing critical website and service data. It uses
malicious code to get the server to divulge information it normally wouldn't. SQL is a
programming language used to communicate with databases and can be used to store
private customer information such as credit card numbers, usernames and passwords
(credentials), or other personally identifiable information (PII), all tempting and
lucrative targets for an attacker. A vulnerable query and its parameterized fix are
sketched after this list.
 Man-in-the-Middle Attack (MITM): Intercepts communication between two parties
to eavesdrop or alter data. A man in the middle (MITM) attack occurs when
cybercriminals intercept and alter network traffic flowing between IT systems. The
MITM attack impersonates both senders and receivers on the network. It aims to trick
both into sending unencrypted data that the attacker intercepts and can use for further
attacks or financial gain.
 Denial-of-Service (DoS) Attack: Floods a network or website with traffic to make it
unavailable to legitimate users. Denial-of-service (DoS) attacks flood a website with
more traffic than it’s built to handle, thereby overloading the site’s server and making
it near-impossible to serve content to visitors. It's possible for a denial-of-service to
occur for non-malicious reasons: for example, when a massive news story breaks and a
news organization's site is overloaded with traffic from people trying to learn more
about the story.
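To see why SQL injection works, compare a query built by string concatenation with a parameterized one. The following minimal sketch uses Python's built-in sqlite3 module; the users table and the attacker input are hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

# Attacker-supplied input: the classic "always true" injection string.
name = "anyone' OR '1'='1"

# VULNERABLE: the input is pasted directly into the SQL text, so its quote
# characters change the meaning of the query and every row is returned.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + name + "'").fetchall()
print("concatenated query returned:", rows)    # leaks alice's row

# SAFE: a parameterized query treats the input as a plain value, never as code.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (name,)).fetchall()
print("parameterized query returned:", rows)   # returns []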

Effects:
 Leads to data breaches and loss of sensitive information.
 Causes financial losses due to theft or downtime.
 Damages an organization's reputation and customer trust.

Network Security

Computer network security involves a set of strategies, technologies, and practices
designed to protect network infrastructure, data, and resources from unauthorized access,
attacks, or damage. Network security is the protection of the underlying networking
infrastructure from unauthorized access, misuse, or theft. It involves creating a secure
infrastructure in which devices, applications, and users can operate securely. It
ensures that information remains secure while being transmitted, processed, and stored within
a computer network. Network security combines multiple layers of defenses at the edge and in
the network. Each network security layer implements policies and controls. Authorized users
gain access to network resources, but malicious actors are blocked from carrying
out exploits and threats.

Goals of Computer Network Security
1. Confidentiality: Ensuring that sensitive information is accessible only to authorized
users. It prevents unauthorized access or data breaches.
2. Integrity: Maintaining the accuracy and consistency of data by preventing
unauthorized modifications. It ensures that data has not been altered or tampered with.
3. Availability: Ensuring that network services, resources, and data are available to
authorized users when needed, even during attacks or technical issues.
4. Authentication: Verifying the identity of users and devices accessing the network to
ensure that only legitimate entities can interact with network resources.
5. Non-repudiation: Ensuring that a party in a communication cannot deny sending or
receiving the message, providing proof of communication between the parties.

Key Components of Network Security
 Firewalls
A firewall is a security device that monitors and controls incoming and outgoing
network traffic based on predefined security rules. It acts as a barrier between a trusted internal
network and untrusted external networks, filtering data packets to prevent unauthorized access.

Key Functions of a Firewall:
1. Traffic Filtering: Firewalls inspect all data packets entering or leaving the network and
decide whether to allow or block them based on a set of security rules.
2. Access Control: Firewalls control who can access the network and what services or
resources they can use, based on criteria such as IP addresses, protocols, and ports.
3. Protection Against Threats: They help` defend the network against various cyber
threats like malware, hacking attempts, denial-of-service (DoS) attacks, and data
breaches.
4. Logging and Monitoring: Firewalls keep logs of all network activities, which can be
used for analyzing traffic patterns, detecting suspicious behavior, and responding to
security incidents.

Types of Firewalls
Packet-Filtering Firewalls
The most basic type of firewall inspects data packets individually, checking their
source and destination addresses, protocols, and ports, and either permits or denies each packet
based on a set of rules defined in the firewall's configuration. Because it cannot inspect the
contents of the packets, it is less effective against more sophisticated attacks. A packet filtering
firewall acts like a management program that monitors network traffic and filters incoming
packets based on configured security rules: traffic is blocked if a data packet's IP protocol, IP
address, or port number does not match the established rule set. While packet-filtering firewalls
can be considered a fast solution without many resource requirements, they also have some
limitations; because these types of firewalls do not prevent web-based attacks, they are not the
safest. A minimal first-match rule check is sketched below.
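The first-match rule check described above can be illustrated with a short Python sketch; the rule set and the sample packets are hypothetical, and a real packet filter works on live traffic inside the kernel or in dedicated hardware.

# Each rule: (source-address prefix, destination port, action).
# An empty prefix matches any source; port -1 matches any port.
RULES = [
    ("10.0.0.", 22, "allow"),   # SSH allowed from the internal LAN only
    ("",        80, "allow"),   # HTTP allowed from anywhere
    ("",        -1, "deny"),    # default rule: deny everything else
]

def filter_packet(src_ip, dst_port):
    """Return 'allow' or 'deny' for a packet; the first matching rule wins."""
    for prefix, port, action in RULES:
        if src_ip.startswith(prefix) and port in (-1, dst_port):
            return action
    return "deny"  # fail closed if no rule matches

print(filter_packet("10.0.0.5", 22))      # allow: internal SSH
print(filter_packet("203.0.113.9", 22))   # deny:  external SSH
print(filter_packet("203.0.113.9", 80))   # allow: HTTP from anywhere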

Stateful Inspection Firewalls
These firewalls keep track of the state of active connections and use this information to
make decisions about which packets to allow or block. They can analyze the traffic context,
such as the connection state and data flow, providing more comprehensive protection. They are
more secure than packet-filtering firewalls because they understand the context of the network
traffic. In simple words, when a user establishes a connection and requests data, the stateful
multilayer inspection (SMLI) firewall creates a database (state table). The database is used to
store session information such
as source IP address, port number, destination IP address, destination port number, etc.
Connection information is stored for each session in the state table. Using stateful inspection
technology, these firewalls create security rules to allow anticipated traffic. In most cases,
firewalls are implemented as additional security levels. These types of firewalls implement
more checks and are considered more secure than stateless firewalls. This is why stateful packet
inspection is implemented along with many other firewalls to track statistics for all internal
traffic. Doing so increases the load and puts more pressure on computing resources, which can
give rise to a slower transfer rate for data packets than other solutions offer. A toy state-table
lookup is sketched below.
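The state table described above can be illustrated with a toy sketch; the addresses and ports are hypothetical, and a real firewall also tracks TCP flags, timeouts, and much more.

# State table kept by a stateful firewall: 5-tuple -> connection state.
state_table = {}

def record_outbound(src, sport, dst, dport, proto="TCP"):
    """Record an outbound connection so that its replies are allowed back in."""
    state_table[(src, sport, dst, dport, proto)] = "ESTABLISHED"

def allow_inbound(src, sport, dst, dport, proto="TCP"):
    """Allow an inbound packet only if it belongs to a known session."""
    # Replies arrive with the source and destination swapped.
    return (dst, dport, src, sport, proto) in state_table

record_outbound("192.168.1.10", 51000, "93.184.216.34", 443)
print(allow_inbound("93.184.216.34", 443, "192.168.1.10", 51000))  # True: a reply
print(allow_inbound("203.0.113.7", 443, "192.168.1.10", 51000))    # False: unsolicited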

Proxy Firewalls (Application-Level Gateways)
Proxy firewalls act as intermediaries between the client and the server. They filter
traffic at the application level, inspecting the data inside the packets; they can block or allow
traffic based on specific application-layer protocols (like HTTP, FTP, etc.) and examine
content for malicious activity. This provides a high level of security by preventing direct
connections between internal and external networks. Proxy firewalls operate at the application
layer as an intermediate device to filter incoming traffic between two end systems (e.g.,
network and traffic systems), which is why these firewalls are called 'Application-level
Gateways'. Unlike basic firewalls, these firewalls forward requests from clients while
presenting themselves to the web server as if they were the original client. This protects the
client's identity and other sensitive information, keeping the network safe from potential
attacks. Once the connection is established, the proxy firewall inspects data packets coming
from the source, and only if the contents of an incoming data packet are found safe does the
proxy firewall transfer it to the client. This approach creates an additional layer of security
between the client and the many different sources on the network.

Next-Generation Firewalls (NGFWs)
NGFWs combine traditional firewall features with advanced security functions like
intrusion detection and prevention (IDS/IPS), deep packet inspection, and application
awareness. They analyze data packets in depth, identify applications regardless of the port
used, and detect modern cyber threats like malware, offering a comprehensive solution for
network security by integrating multiple layers of protection. Many of the latest released
firewalls are described as 'next-generation firewalls', although there is no precise definition of
the term; this type of firewall is usually defined as a security device combining the features
and functionalities of other firewalls, including deep-packet inspection (DPI), surface-level
packet inspection, TCP handshake testing, and so on. An NGFW provides a higher level of
security than packet-filtering and stateful inspection firewalls: unlike traditional firewalls, it
monitors the entire transaction of data, including packet headers, packet contents, and sources.
NGFWs are designed in such a way that they can prevent more sophisticated and evolving
security threats such as malware attacks, external threats, and advanced intrusions.

Network Address Translation (NAT) Firewalls
NAT firewalls modify the IP addresses of incoming and outgoing traffic to hide the
internal network structure from external entities, preventing attackers from mapping out the
internal network by concealing the IP addresses of devices. Network address translation (NAT)
firewalls are primarily designed to mediate Internet traffic and block all unwanted connections.
When multiple devices are used to connect to the Internet, the NAT firewall presents a single
unique public IP address and hides the individual devices' IP addresses; as a result, one IP
address is used for all devices, securing the internal network addresses from attackers who scan
a network for accessible IP addresses and giving enhanced protection against suspicious
activities and attacks. In general, NAT firewalls work similarly to proxy firewalls: like a proxy
firewall, a NAT firewall acts as an intermediate device between a group of computers and
external traffic. A toy translation table is sketched below.
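The translation table a NAT firewall maintains can be sketched as follows; the public address, private hosts, and port numbers are hypothetical.

import itertools

PUBLIC_IP = "203.0.113.1"            # the single shared public address
_next_port = itertools.count(40000)  # public ports handed out in order
nat_table = {}                       # (private IP, private port) -> public port

def translate_outbound(private_ip, private_port):
    """Rewrite a private source address to the shared public address."""
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next(_next_port)
    return PUBLIC_IP, nat_table[key]

def translate_inbound(public_port):
    """Map a reply on a public port back to the private host, if known."""
    for (ip, port), pub in nat_table.items():
        if pub == public_port:
            return ip, port
    return None  # no mapping: unsolicited inbound traffic is dropped

print(translate_outbound("192.168.1.10", 51000))  # ('203.0.113.1', 40000)
print(translate_outbound("192.168.1.11", 51000))  # ('203.0.113.1', 40001)
print(translate_inbound(40001))                   # ('192.168.1.11', 51000)
print(translate_inbound(49999))                   # None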

 Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS)
An IDS detects suspicious activities and alerts the network administrator; an IPS goes
a step further by not only detecting but also taking action to prevent potential threats from
entering the network.
An Intrusion Detection System (IDS) is a network security solution that monitors
network traffic for suspicious activities and alerts the network administrator when such
activities are detected. The primary role of IDS is to identify potential threats or malicious
activities within the network. When suspicious activities are detected, IDS generates alerts to
notify the network administrators for further analysis. It keeps detailed logs of all detected
activities, which can be used for analyzing attack patterns or forensic investigations. Two main
network deployment locations exist for IDS. A network-based IDS (NIDS) monitors the entire
network's traffic by analyzing data packets flowing through the network; it is typically placed
at strategic points within the network to detect potential attacks at the network level. A host-
based IDS (HIDS) is installed on an individual device or host within the network and monitors
activities such as file changes, system logs, and processes on that host for any suspicious behavior.
Apart from its deployment location, an IDS also differs in terms of the methodology used for
identifying potential intrusions. Signature-based detection uses predefined patterns
(signatures) of known attacks to identify threats; it is effective against known threats but may
fail to detect new or unknown attacks. Anomaly-based detection establishes a baseline of
normal network behavior and alerts administrators when it detects deviations from this norm;
it is capable of identifying previously unknown threats but may produce more false positives.
Toy versions of both checks are sketched at the end of this subsection.
An Intrusion Prevention System (IPS) is a network security tool that not only detects
malicious activities but also takes action to prevent these activities from succeeding. It actively
monitors network traffic and can block, reroute, or drop malicious data packets to prevent
attacks. In addition to detecting threats, IPS actively works to stop them by taking immediate
action. It can block malicious traffic, prevent unauthorized access, and contain the threat in
real-time. IPS automatically implements countermeasures to minimize damage and prevent
attacks from compromising the network. IPS solutions are placed within flowing network
traffic, between the point of origin and the destination. IPS might use any one of the multiple
available techniques to identify threats. For instance, signature-based IPS compares network
activity against the signatures of previously detected threats. While this method can easily
deflect previously spotted attacks, it’s often unable to recognize newly emerged threats.
Conversely, anomaly-based IPS monitors abnormal activity by creating a baseline standard for
network behavior and comparing traffic against it in real-time. While this method is more
effective at detecting unknown threats than signature-based IPS, it produces both false positives
and false negatives. Cutting-edge IPS are infused with artificial intelligence (AI) and machine
learning (ML) to improve their anomaly-based monitoring capabilities and reduce false alerts.
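The two detection methodologies can be caricatured in a few lines of Python. The signatures, the traffic baseline, and the tolerance below are hypothetical; production IDS/IPS engines use large signature databases and statistical or machine-learned traffic models.

# Toy signature-based detection: flag payloads containing known attack patterns.
SIGNATURES = {
    "sql-injection":  "' OR '1'='1",
    "path-traversal": "../../",
}

def signature_scan(payload):
    """Return the names of all known signatures found in the payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

# Toy anomaly-based detection: alert when the traffic rate deviates too far
# from a learned baseline (here a fixed mean and a fixed tolerance).
BASELINE_PKTS_PER_SEC = 120.0   # hypothetical learned average
TOLERANCE = 45.0                # hypothetical allowed deviation

def anomaly_scan(pkts_per_sec):
    """Return True when the observed rate is abnormally far from the baseline."""
    return abs(pkts_per_sec - BASELINE_PKTS_PER_SEC) > TOLERANCE

print(signature_scan("GET /page?id=' OR '1'='1"))  # ['sql-injection']
print(anomaly_scan(500.0))   # True: far above the baseline, raise an alert
print(anomaly_scan(115.0))   # False: within normal variation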

 Virtual Private Networks (VPNs)
VPNs create secure, encrypted connections over the internet, allowing users to access
the network remotely as if they were on a private network. They protect data transmissions
from eavesdropping and secure communications across public networks. A VPN hides your IP
address by letting the network redirect it through a specially configured remote server run by a
VPN host. This means that if you surf online with a VPN, the VPN server becomes the source
of your data, so your Internet Service Provider (ISP) and other third parties cannot see which
websites you visit or what data you send and receive online. A VPN works like a filter that
turns all your data into "gibberish": even if someone were to get their hands on your data, it
would be useless. A minimal encryption sketch illustrating this idea is given below.
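The "gibberish" idea can be illustrated with symmetric encryption. This minimal sketch assumes the third-party cryptography package is installed (pip install cryptography); real VPN tunnels use protocols such as IPsec, OpenVPN, or WireGuard rather than this simple scheme.

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # secret shared by the client and the VPN server
tunnel = Fernet(key)

packet = b"GET https://example.com/account HTTP/1.1"
ciphertext = tunnel.encrypt(packet)

print(ciphertext)                  # unreadable bytes: useless to an eavesdropper
print(tunnel.decrypt(ciphertext))  # the VPN endpoint recovers the original packet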

Types of VPN
Remote Access VPN:
Allows individual users to connect to a private network remotely. Commonly used by
remote workers to access company resources securely. Users connect to the VPN server over
the Internet, which then provides access to the internal network. A remote access VPN securely
connects a device outside the corporate office. These devices are known as endpoints and may
be laptops, tablets, or smartphones. Advances in VPN technology have allowed security checks
to be conducted on endpoints to make sure they meet a certain posture before connecting. Think
of remote access as computer to network.

Site-to-Site VPN:
Connects entire networks to each other, enabling secure communication between
multiple sites. Often used by businesses with multiple offices to connect their local area
networks (LANs). A VPN gateway is set up at each site, allowing secure traffic between the
networks. Site-to-site VPNs are mainly used in large companies. They are complex to
implement and do not offer the same flexibility as other VPNs. However, they are the most
effective way to ensure communication within and between large departments.

Client-to-Site VPN:
Similar to remote access VPN, but specifically designed for mobile users or devices to
connect to a corporate network. Useful for employees who need to connect to the corporate
network while traveling or working from various locations. Users install a VPN client on their
device to connect securely to the corporate network. The advantage of this type of VPN access
is greater efficiency and universal access to company resources. Provided an appropriate
telephone system is available, the employee can, for example, connect to the system with a
headset and act as if they were at their company workplace; indeed, customers of the company
cannot even tell whether the employee is at work in the company or in their home office.

 Access Control
Access control mechanisms regulate who can access or use network resources and data.
It ensures that only authorized users have permission to access certain areas of the network.
Access control is a data security process that enables organizations to manage who is
authorized to access corporate data and resources. Secure access control uses policies that
verify users are who they claim to be and ensure that appropriate access levels are granted
to users. Implementing access control is a crucial component of web application security,
ensuring only the right users have the right level of access to the right resources. The process
is critical to helping organizations avoid data breaches and fighting attack vectors, such as a
buffer overflow attack, KRACK attack, on-path attack, or phishing attack.

Components of Access Control
1. Authentication
Authentication is the initial process of establishing the identity of a user. For example,
when a user signs in to their email service or online banking account with a username and
password combination, their identity has been authenticated. However, authentication alone is
not sufficient to protect organizations’ data.

2. Authorization
Authorization adds an extra layer of security to the authentication process. It specifies
access rights and privileges to resources to determine whether the user should be granted access
to data or make a specific transaction.
For example, an email service or online bank account can require users to provide two-
factor authentication (2FA), which is typically a combination of something they know (such
as a password), something they possess (such as a token), or something they are (like a
biometric verification). This information can also be verified through a 2FA mobile app or a
thumbprint scan on a smartphone. A minimal one-time-password check is sketched below.
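The "something they possess" factor is often a time-based one-time password (TOTP). Here is a minimal sketch, assuming the third-party pyotp package is installed (pip install pyotp); the secret is generated on the spot rather than enrolled in a real authenticator app.

import pyotp

secret = pyotp.random_base32()  # enrolled once in the user's authenticator app
totp = pyotp.TOTP(secret)

code = totp.now()               # the six-digit code the user's phone displays
print("current one-time code:", code)

# The server verifies the submitted code against the same shared secret.
print(totp.verify(code))        # True within the current 30-second window
print(totp.verify("000000"))    # almost certainly False: a wrong code is rejected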

3. Access
Once a user has completed the authentication and authorization steps, their identity
will be verified. This grants them access to the resource they are attempting to log in to.

4. Manage
Organizations can manage their access control system by adding and removing the
authentication and authorization of their users and systems. Managing these systems can
become complex in modern IT environments that comprise cloud services and on-premises
systems.

5. Audit
Organizations can enforce the principle of least privilege through the access control
audit process. This enables them to gather data around user activity and analyze that
information to discover potential access violations.

Techniques:
 Role-Based Access Control (RBAC): Access is based on the user's role within the
organization. RBAC creates permissions based on groups of users, roles that users
hold, and actions that users take. Users are able to perform any action enabled for their
role and cannot change the access control level they are assigned. A minimal RBAC
check is sketched after this list.
 Multi-Factor Authentication (MFA): Requires multiple forms of verification to
grant access, such as passwords and biometric scans.
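A minimal RBAC check looks like the following sketch; the roles, permissions, and users are hypothetical.

# Role -> set of permissions granted to that role.
ROLE_PERMISSIONS = {
    "admin":  {"read", "write", "delete", "manage_users"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

# User -> assigned role (users cannot change this themselves).
USER_ROLES = {"ana": "admin", "ben": "editor", "cara": "viewer"}

def is_allowed(user, action):
    """Grant an action only if the user's role includes that permission."""
    role = USER_ROLES.get(user)
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("ben", "write"))    # True:  editors may write
print(is_allowed("cara", "delete"))  # False: viewers may only read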

Summary
Computer networks are essential for connecting devices, sharing resources, and enabling
communication in various settings, from homes to large businesses and across the internet.
Understanding the types, topologies, protocols, and components of networks is crucial for
designing efficient systems that meet specific communication needs. Networks play a
significant role in both personal and professional environments, driving innovation,
productivity, and collaboration.

Network security is a critical aspect of protecting digital communication and data from a
variety of threats. It includes a range of techniques, technologies, and best practices aimed at
preventing unauthorized access, detecting intrusions, and responding to security incidents. By
implementing robust network security measures, organizations can safeguard their sensitive
information, maintain the integrity of their systems, and ensure the continued availability of
their services.

Encryption and decryption are fundamental to modern data security, protecting information
from unauthorized access and ensuring privacy. Encryption transforms readable data into an
unreadable format, while decryption reverses the process to restore the original data. There are
two main types of encryption: symmetric (using the same key for both encryption and
decryption) and asymmetric (using different keys for encryption and decryption). Encryption
is essential for securing communications, protecting sensitive data, and maintaining trust in
digital interactions. The use of strong encryption methods and proper key management
practices is critical to safeguarding sensitive information against cyber threats.
Worms, viruses, and hacking are critical cybersecurity threats that can disrupt computer
systems, steal data, and cause financial and reputational damage. While worms and viruses are
specific types of malware that spread and infect systems, hacking refers to unauthorized
activities aimed at exploiting system vulnerabilities. Employing robust security measures, such
as antivirus software, firewalls, regular updates, and user education, is essential to defend
against these cyber threats and protect digital assets.

Computer Network and Security work together to ensure secure and efficient
communication between devices and systems. Computer networks enable data exchange and
resource sharing, while network security focuses on protecting this data from threats like
hacking, malware, and unauthorized access. Key security measures include encryption,
firewalls, IDS/IPS, and secure authentication mechanisms. These technologies are essential for
safeguarding data, maintaining user privacy, and ensuring the integrity of communication
systems.
Exercises:
Multiple Choice: Choose the letter of the correct answer.
1. It can be a software program or a hardware device that filters all data packets coming
through the internet, a network, etc. It is known as the _______:
a) Antivirus
b) Firewall
c) Cookies
d) Malware
2. A local area network (LAN) is defined by _______________
a) The geometric size of the network
b) The maximum number of hosts in the network
c) The maximum number of hosts in the network and/or the geometric size of the
network
d) The topology of the network
3. Three security goals are _______________________________________.
a) confidentiality, cryptography, and nonrepudiation
b) confidentiality, encryption, and decryption
c) confidentiality, integrity, and availability
d) None of the choices are correct
4. In __________ cryptography, the same key is used by the sender and the receiver.
a) symmetric-key
b) asymmetric-key
c) public-key
d) None of the choices are correct.
5. ___________ means that the data must arrive at the receiver exactly as they were sent.
a) Nonrepudiation
b) Message integrity
c) Authentication
d) None of the choices are correct.
6. Which of the following statements is correct about the firewall?
a) It is a device installed at the boundary of a company to prevent unauthorized
physical access.
b) It is a device installed at the boundary of a corporate network to protect it against
unauthorized access.
c) It is a kind of wall built to prevent files from damaging the corporate network.
d) None of the above.
7. In the asymmetric-key method used for confidentiality, the receiver uses his/her own
______________ to decrypt the message.
a) private key
b) public key
c) no key
d) None of the choices are correct.
8. Which of the following is malicious software that, on execution, runs its own code and
modifies other computer programs?
a) Virus
b) Spam
c) Spyware
d) Worms
9. These are individuals or cybersecurity professionals who use their skills and knowledge
for ethical and legitimate purposes.
a) White Hat
b) Black Hat
c) Grey Hat
d) Red Hat
10. This refers to individuals or entities that carry out malicious actions or activities with
the intent to harm, exploit, or compromise systems, networks, or data.
a) Phisher
b) Spammers
c) Attackers
d) Exploiters

References:
Textbooks
 Speidel, Joachim. (2021). Introduction to Digital Communications (2nd ed.). Springer Nature Switzerland AG.
 Stallings, William. (2014). Data and Computer Communications (10th ed.). Upper Saddle River: Pearson Education, Inc.
 Ziemer, Richard, & Tranter, William. (2015). Principles of Communications: Systems, Modulation, and Noise (7th ed.). Danvers, MA: John Wiley & Sons, Inc.
 Middlestead, Richard. (2018). Digital Communications with Emphasis on Data Modems. Hoboken, NJ: John Wiley & Sons, Inc.
 Leis, John W. (2018). Communication Systems Principles Using MATLAB. Hoboken, NJ: John Wiley & Sons.
 Sibley, Martin. (2018). Modern Telecommunications: Basic Principles and Practices. Boca Raton, FL: CRC Press, Taylor & Francis Group.

Electronic Sources
 TheKnowledgeAcademy. "Digital Communication: Definition, Examples and Its Types." www.theknowledgeacademy.com
 "The History of Communications." World101 from the Council on Foreign Relations, 17 Dec. 2022, world101.cfr.org/global-era-issues/globalization/two-hundred-years-global-communications
 eeeguide. "Basic Elements of Digital Communication System - EEEGUIDE." EEEGUIDE.COM, 7 Nov. 2022, www.eeeguide.com/basic-elements-of-digital-communication-system
 Admin. (2022, May 16). Pulse Code Modulation: Modulation, Types, Advantages and Disadvantages, Applications. BYJU'S. https://byjus.com/physics/pulse-code-modulation
 Agarwal, T. (2021, March 15). Pulse Code Modulation and Demodulation: Block Diagram & Its Working. ElProCus. https://www.elprocus.com/pulse-code-modulation-and-demodulation
 Frenzel, L. (2023, January 12). Understanding Modern Digital Modulation Techniques. Electronic Design. https://www.electronicdesign.com/technologies/communications/article/21798737/electronic-design-understanding-modern-digital-modulation-techniques
 Digital Modulation - an overview. ScienceDirect Topics. (n.d.). https://www.sciencedirect.com/topics/engineering/digital-modulation
 https://www.geeksforgeeks.org (GeeksforGeeks)
 https://data-flair.training (DataFlair Training)
 https://www.spiceworks.com (Spiceworks)
 https://www.cisco.com (Cisco)
 https://www.fortinet.com (Fortinet)