
Data Communication

Data communication is the process of transferring data between two or more devices over a
distance. It involves the transmission and reception of information, typically in digital form, using
various communication channels.

Characteristics:
Delivery: Data must be delivered to the intended recipient, and only the authorized
person or device should receive it.
Accuracy: Data should be transmitted without errors or alterations. The system should be
dependable and minimize data loss.
Timeliness: For applications like video conferencing or voice calls, data must be delivered
promptly. Some applications can tolerate delays, while others require immediate delivery.
Jitter: The variation in packet arrival times should be minimized. Jitter affects the quality of
audio and video transmissions.
Security: Data should be protected from unauthorized access. Data should not be modified
or tampered with during transmission.

Models:
Data communication models provide a framework for understanding how information is transmitted
and received between devices. They define the components, processes, and rules involved in data
communication.

A. OSI (Open Systems Interconnection) Model:


Seven layers:
1. Physical: Transmits raw bits over a physical medium.
2. Data Link: Manages data flow between devices on a network.
3. Network: Handles routing and addressing of data packets.
4. Transport: Ensures reliable delivery of data between end systems.
5. Session: Establishes, manages, and terminates communication sessions.
6. Presentation: Translates data into a format suitable for applications.
7. Application: Provides services to applications, such as file transfer, email, and web
browsing.
B. TCP/IP (Transmission Control Protocol/Internet Protocol)
Model:
Four layers:
1. Application: Provides services to applications, such as HTTP, FTP, and SMTP.
2. Transport: Ensures reliable delivery of data between end systems, using TCP or UDP.
3. Internet: Handles routing and addressing of data packets.
4. Network Access: Transmits data over the physical network.
C. Client-Server Model
In this model, communication occurs between a client (which makes requests) and a server
(which processes the requests and sends back responses). Requests flow from the client to
the server, and responses flow back from the server to the client. This model is widely
used in web applications.
D. Peer-to-Peer (P2P) Model
In the Peer-to-Peer model, each device (or "peer") in the network can function as both a
client and a server, allowing direct data sharing without the need for a centralized server.
This model is common in file-sharing systems.

Data Flow:
Data flow refers to the direction and manner in which data moves between devices in a
communication system. There are three primary data flow modes:

a. Simplex: One-way communication: Data flows in only one direction.


Examples: Radio broadcasting, television transmission, remote control.

b. Half-duplex: Two-way communication, but not simultaneously: Data can flow in both
directions, but only one direction at a time.
Examples: Walkie-talkies, two-way radios, CB radios.

c. Full-duplex: Two-way communication simultaneously: Data can flow in both directions at
the same time.
Examples: Telephone lines, most computer networks.
Data Representation:
Data representation is the process of encoding information into a format that can be transmitted,
stored, and processed by computers. In data communication, data is typically represented using
binary code, where each bit (binary digit) can have a value of 0 or 1.

Common Data Representations:


1. ASCII (American Standard Code for Information Interchange): Represents
characters using 7 bits, allowing for 128 unique characters.
2. Unicode: Represents characters using variable-length encodings such as UTF-8, UTF-16,
and UTF-32, supporting a much wider range of characters from different languages.
3. EBCDIC (Extended Binary-Coded Decimal Interchange Code): An 8-bit
character encoding used primarily by IBM mainframe computers.
4. BCD (Binary-Coded Decimal): Represents decimal numbers using 4 bits per digit.
5. Hexadecimal: A base-16 numbering system often used for representing binary data in a
more human-readable format.
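As a quick illustration of these representations, the sketch below (plain Python built-ins) shows the 7-bit ASCII and hexadecimal forms of two characters:

```python
text = "Hi"

# 7-bit ASCII code points, shown as binary and as hexadecimal
binary = [format(ord(c), "07b") for c in text]
hex_codes = [format(ord(c), "02X") for c in text]

print(binary)     # ['1001000', '1101001']
print(hex_codes)  # ['48', '69']
```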

Analog and Digital Data:


Analog Data: Analog data represents information in a continuous wave form, allowing for an
infinite number of values within a given range, such as sound waves and traditional video signals.

a. Analog signals vary continuously over time.


b. Often requires modulation (e.g., AM or FM) for effective transmission.

Examples: Includes sound waves, traditional television signals, and temperature readings.

Digital Data: Digital data represents information in discrete values, typically as binary digits
(0s and 1s), allowing for precise encoding and easier manipulation, such as computer files and digital
audio.

a. Digital signals have distinct levels (high and low).


b. Easier to compress, encrypt, and process for storage and transmission.

Examples: Includes computer files, digital audio formats (e.g., MP3), and video files.

Analog and Digital Signals:


Analog Signals: Analog signals are continuous waveforms that vary smoothly over time,
representing information through changes in amplitude, frequency, or phase.

a. Represents an infinite range of values, allowing for smooth transitions.


b. Typically depicted as sine waves.

Examples: Audio signals (e.g., voice in a telephone) Traditional video signals (e.g., broadcast
television)

Digital Signals: Digital signals are discrete waveforms that represent information using
binary values (0s and 1s).
a. Represents distinct values with no intermediate states.
b. Typically shown as square waves, indicating high (1) and low (0) states.

Examples:
1. Data packets in computer networks
2. Digital audio formats (e.g., MP3, WAV)

Bit-rate:
a. Definition: Bit rate is the number of bits transmitted per second, measured in bps, Kbps,
Mbps, Gbps.
b. Types:
1. Net Bit Rate: Useful data rate remaining after protocol overhead is removed.
2. Gross Bit Rate: Total transmission rate, including overhead.
c. Factors Affecting Bit Rate: Transmission medium, network protocols, and signal
quality can influence bit rate.
d. Applications: High bit rates are crucial for streaming media and determining internet
download/upload speeds.
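The relation between gross and net bit rate can be made concrete with a small sketch; the frame sizes below are made-up numbers for illustration, not taken from a real protocol:

```python
# Hypothetical link: 100 Mbps gross rate; each frame carries 1000 bytes of
# payload plus 100 bytes of header/overhead (illustrative numbers only).
gross_bps = 100_000_000
payload_bytes, overhead_bytes = 1000, 100

efficiency = payload_bytes / (payload_bytes + overhead_bytes)
net_bps = gross_bps * efficiency

print(f"{efficiency:.2%}")          # 90.91%
print(f"{net_bps / 1e6:.1f} Mbps")  # 90.9 Mbps
```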

Bandwidth:
Bandwidth refers to the range of frequencies available for data transmission in a communication
channel. It is a crucial factor that determines the maximum achievable data transfer rate. Units of
Bandwidth are: Hertz (Hz), Kilohertz (kHz), Megahertz (MHz), Gigahertz (GHz)

Types of Bandwidths
a. Baseband: A frequency range that starts at (or near) zero and extends up to a cutoff frequency.
b. Passband: A range of frequencies between two non-zero limits, typically centred on a carrier frequency.

Nyquist Bit Rate:


Nyquist Bit Rate is a theoretical limit on how fast data can be sent over a communication channel. It's
like the maximum speed limit on a highway.

Key points:

a. Depends on bandwidth: The wider the road (bandwidth), the faster you can drive
(higher bit rate).
b. No overlapping: To avoid traffic jams (symbol interference), the cars (bits) need to be
spaced out properly.
c. Theoretical limit: It's the maximum possible speed, but real-world conditions like traffic
(noise) can slow things down.
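In formula form, the Nyquist limit for a noiseless channel is BitRate = 2 × B × log2(L), where B is the bandwidth in Hz and L is the number of signal levels. A minimal sketch:

```python
import math

def nyquist_bit_rate(bandwidth_hz, levels):
    # Noiseless-channel limit: 2 * B * log2(L) bits per second
    return 2 * bandwidth_hz * math.log2(levels)

print(nyquist_bit_rate(3000, 2))  # 6000.0  (binary signalling)
print(nyquist_bit_rate(3000, 4))  # 12000.0 (4 levels doubles the limit)
```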

Shannon Capacity:
Shannon Capacity is like the speed limit on a highway for data. It tells you the maximum speed you
can safely drive (send data) without crashing (errors).
Key points:

a. Depends on road and traffic: The wider the road (bandwidth) and less traffic
(noise), the faster you can drive (higher data rate).
b. Theoretical limit: It's the maximum possible speed, but real-world conditions can slow
things down.
c. Noise matters: Even on a wide, empty road, if there's heavy traffic (noise), you can't go
too fast without crashing.

In simple terms, Shannon capacity is the maximum amount of data you can send over a
communication channel without errors, considering the channel's bandwidth and noise level.
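In formula form this is C = B × log2(1 + S/N), with the signal-to-noise ratio expressed as a linear ratio, not in decibels. A sketch using the classic telephone-channel numbers:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    # Noisy-channel limit: B * log2(1 + S/N) bits per second
    return bandwidth_hz * math.log2(1 + snr_linear)

snr = 10 ** (30 / 10)            # 30 dB expressed as a linear ratio (1000)
c = shannon_capacity(3000, snr)  # 3000 Hz telephone channel
print(round(c))                  # 29902
```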

Data transmission modes:


Data transmission modes refer to the ways in which data is transferred between devices. There are
two primary modes:

A. Serial Transmission:
a. Bit by Bit: Data is transmitted one bit at a time, sequentially.
b. Advantages: Simple implementation, efficient for long distances.
c. Disadvantages: Slower transmission speed compared to parallel transmission.

Types of Serial Transmission:

a. Asynchronous: Each character is preceded and followed by start and stop bits for
synchronization.
b. Synchronous: Data is transmitted in continuous blocks, synchronized by a clock signal.
B. Parallel Transmission:
a. Multiple Bits Simultaneously: Data is transmitted in parallel, with multiple bits being
sent at the same time.
b. Advantages: Faster transmission speed, suitable for short distances.
c. Disadvantages: Requires more complex circuitry, prone to noise and interference.

Transmission impairments:
Transmission impairments refer to the factors that can degrade the quality of a signal during data
communication. Here are the main types of transmission impairments:

Attenuation: The reduction in signal strength as it travels over a distance.


a. Cause: Resistance in the transmission medium (e.g., cables, fibre optics).
b. Impact: Can lead to weak signals that may be misinterpreted or lost.

Noise: Unwanted electrical signals that interfere with the transmitted signal.
a. Types:
1. Thermal Noise: Caused by the random motion of electrons in conductors.
2. Interference: External signals (e.g., from other devices) that overlap with the
intended signal.
b. Impact: Can cause errors in data interpretation and reduced communication quality.

Distortion: Changes in the shape or characteristics of the signal during transmission.

a. Cause: Variations in signal speed across different frequencies, often due to the medium.
b. Impact: Can result in misinterpretation of the data, especially in complex signals.

Jitter: Variation in the time delay of received signals.


a. Cause: Network congestion, timing errors, or route changes.
b. Impact: Particularly problematic for real-time applications (e.g., VoIP, video conferencing),
leading to choppy audio or video.

Echo: Reflection of the transmitted signal back into the channel.


a. Cause: Poor termination of the transmission line.
b. Impact: Can confuse the receiver, leading to delays and errors in communication

Guided and Unguided transmission media


1. Guided transmission media: These use a physical pathway to transmit data, such as a
cable or wire. This provides a more controlled and reliable transmission environment, but it can
be more expensive and less flexible to install. e.g.
a. Twisted Pair Cable: Consists of two insulated copper wires twisted together.
Common in local area networks (LANs) and telephone systems.
b. Coaxial Cable: Has a central conductor surrounded by an insulator and a shield. Used
for cable television and broadband internet.
c. Fibre Optic Cable: Uses light pulses to transmit data through thin glass fibres. Offers
high bandwidth, low attenuation, and immunity to electromagnetic interference.
2. Unguided transmission media: These do not require a physical path and use
electromagnetic waves to transmit data through the air or space. This offers greater flexibility
and can be less expensive, but it is also more susceptible to interference and security risks. e.g.
a. Radio Waves: Used for various applications, including radio broadcasting, television,
cellular phones, and wireless networks.
b. Microwaves: Higher frequency radio waves used for satellite communication, radar,
and microwave ovens.
c. Infrared Waves: Used for short-range communication, such as remote controls and
infrared sensors.

UNIT 2nd
Digital-to-Digital conversion
In digital communication, digital-to-digital conversion is the process of transforming digital data into
a suitable format for transmission over a physical medium. This involves encoding the data using
various techniques to ensure efficient and reliable transmission.
NRZ (Non-Return-to-Zero)
Digital-to-Digital conversion using Non-Return-to-Zero (NRZ) encoding is a widely used method to
represent binary data (0s and 1s) in the digital domain for communication or storage. In NRZ
encoding, the signal does not return to a baseline (zero voltage level) between consecutive bits.
There are two main types of NRZ encoding:

1. NRZ-Level: NRZ-Level (NRZ-L) is a digital encoding method used in data communication
where the signal level remains constant during a bit period and represents the binary value
directly.
2. NRZ-Inverted: NRZ-Inverted (NRZ-I) is a digital encoding method where the presence
or absence of a signal transition represents the binary data.
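Both schemes can be sketched in a few lines of Python; the voltage levels and the idle level below are assumed conventions (real implementations differ):

```python
def nrz_l(bits):
    # NRZ-L: the level itself encodes the bit (assumed: 1 -> +V, 0 -> -V)
    return [1 if b else -1 for b in bits]

def nrz_i(bits):
    # NRZ-I: invert the level on every 1; a 0 leaves the level unchanged
    level, out = -1, []   # assumed idle level: -V
    for b in bits:
        if b:
            level = -level
        out.append(level)
    return out

print(nrz_l([1, 0, 1, 1]))  # [1, -1, 1, 1]
print(nrz_i([1, 0, 1, 1]))  # [1, 1, -1, 1]
```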

Manchester
Manchester encoding is a method of encoding binary data into a physical signal for data
transmission, commonly used in digital communication systems. It combines data and clock signals
to ensure synchronization between the transmitter and receiver.

Key Features of Manchester Encoding:

 Self-Clocking: Each bit contains a transition in the middle, which provides the timing
reference. The transition eliminates the need for a separate clock signal.
 Encoding Rule: A 0 is represented by a transition from high to low (↓) in the middle of
the bit period. A 1 is represented by a transition from low to high (↑) in the middle of the bit
period.
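The encoding rule above (the IEEE 802.3 convention; the opposite convention also exists) can be sketched by emitting two half-bit levels per bit:

```python
def manchester(bits):
    # 0 -> high-to-low (1, 0); 1 -> low-to-high (0, 1) at mid-bit
    out = []
    for b in bits:
        out.extend((0, 1) if b else (1, 0))
    return out

print(manchester([1, 0, 1]))  # [0, 1, 1, 0, 0, 1]
```

Every bit produces a mid-bit transition, which is exactly what gives the receiver its clock.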

Analog-to-Digital Conversion
Analog-to-Digital Conversion (ADC) is the process of converting an analog signal (continuous in time and
amplitude) into a digital signal (discrete in time and amplitude). This process is crucial for enabling analog data,
such as sound, temperature, or pressure, to be processed by digital devices like computers, microcontrollers,
and digital signal processors (DSPs).

Sampling
Sampling in analog-to-digital conversion (ADC) is the process of measuring the amplitude of an
analog signal at regular intervals (called sampling intervals) to create a discrete digital
representation. These measurements are taken at a specific rate, known as the sampling rate or
sampling frequency, typically measured in samples per second (Hertz, Hz).

Key Points:
1. Sampling Rate: How often the analog signal is measured (Nyquist Theorem: at least
twice the highest frequency).
2. Quantization: Approximating sample values to discrete levels, causing some loss of
accuracy.
3. Resolution: Number of bits per sample, determining the precision of the digital
representation.
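For instance, telephony limits voice to below 4 kHz and samples it at 8 kHz, satisfying the Nyquist criterion. A sketch sampling a 1 kHz tone (the tone and rate are illustrative):

```python
import math

f_signal, f_sample = 1000, 8000        # 1 kHz tone, 8 kHz sampling rate
assert f_sample >= 2 * f_signal        # Nyquist condition

# one cycle's worth of samples of sin(2*pi*f*t)
samples = [math.sin(2 * math.pi * f_signal * n / f_sample) for n in range(8)]
print([round(s, 3) for s in samples])
# [0.0, 0.707, 1.0, 0.707, 0.0, -0.707, -1.0, -0.707]
```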

PCM (Pulse Code Modulation)


PCM (Pulse Code Modulation) is a method used to digitally represent analog signals. It is one of the
most common techniques for encoding analog signals into digital form, primarily in audio and
telecommunications systems.

Steps in PCM:
1. Sampling: Analog signal is converted into discrete values at regular intervals.
2. Quantization: Discrete values are rounded to the nearest value within a set of levels.
3. Encoding: Quantized values are represented by binary numbers for transmission/storage.
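The three steps above can be put together in a toy end-to-end sketch; the 3-bit depth and the [-1, 1] input range are arbitrary choices for illustration:

```python
import math

def pcm_encode(samples, bits=3):
    # quantize samples in [-1, 1] to 2**bits levels, then emit binary codes
    levels = 2 ** bits
    codes = []
    for x in samples:
        q = min(levels - 1, int((x + 1) / 2 * levels))  # quantization
        codes.append(format(q, f"0{bits}b"))            # encoding
    return codes

# sampling: a few points of a sine wave
samples = [math.sin(2 * math.pi * n / 8) for n in range(4)]
print(pcm_encode(samples))  # ['100', '110', '111', '110']
```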

Characteristics of PCM:

1. Bit Depth: Determines the number of quantization levels (higher is better).


2. Sampling Rate: Controls the frequency of sampling (higher captures more detail).
3. Uncompressed: PCM stores audio without compression, resulting in large files.

Quantization
Quantization is the process of converting the continuous amplitude values of an analog signal into
discrete levels. It occurs after sampling in the analog-to-digital conversion process. In simple terms,
quantization rounds the sampled values to a finite set of levels, which can be represented digitally
(usually in binary form).

Here's how it works:

1. Divide Signal Range: Divide the analog signal's range into equal intervals, each
representing a quantization level.
2. Assign Values: Approximate each sampled value to the nearest quantization level.
3. Output as Discrete Levels: Represent the analog signal with these discrete
quantized values.

Key Points:

 Quantization Levels: The number of intervals determines the quantization levels. More
levels result in higher resolution and less quantization error.
 Quantization Error: This is the difference between the original analog value and its
quantized digital representation. It's a source of error in the ADC process.
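The effect of the number of levels on quantization error shows up in a short sketch (the [0, 1] range and the test value 0.37 are arbitrary):

```python
def quantize(x, levels):
    # snap x in [0, 1] to the nearest of `levels` equally spaced values
    return round(x * (levels - 1)) / (levels - 1)

x = 0.37
for levels in (4, 16, 256):
    q = quantize(x, levels)
    print(levels, round(q, 4), round(abs(x - q), 4))
# 4 0.3333 0.0367
# 16 0.4 0.03
# 256 0.3686 0.0014
```

More levels means a finer grid, so the error shrinks, just as the key points above state.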

Digital-to-Analog Conversion (DAC)


Digital-to-Analog Conversion is the process of transforming digital signals (binary data) into
continuous analog signals. It is used in applications like audio playback, video output, and
communication systems.

Key Steps in DAC

1. Input Binary Data: Receives a digital signal as a sequence of binary values.


2. Signal Mapping: Maps each binary input to a corresponding analog voltage or current
level.
3. Reconstruction: Smooths the discrete levels to produce a continuous analog output
signal.

Types of DAC

1. Binary-Weighted DAC: Uses resistors weighted by powers of 2 to convert binary
data into an analog signal.
2. R-2R Ladder DAC: A simpler, more precise circuit using resistors in a repeating R and
2R pattern.
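Whatever the circuit, the mapping step is the same idea; an idealized sketch in which the 8-bit width and 5 V reference are assumptions:

```python
def dac(code, bits=8, v_ref=5.0):
    # map an n-bit binary code to a voltage between 0 and v_ref (idealized)
    return int(code, 2) / (2 ** bits - 1) * v_ref

print(dac("00000000"))            # 0.0
print(dac("11111111"))            # 5.0
print(round(dac("10000000"), 2))  # 2.51  (mid-scale)
```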

Amplitude Shift Keying (ASK)


Amplitude Shift Keying (ASK) is a digital modulation technique where the amplitude of a carrier wave
is varied to represent binary data (0s and 1s). It's one of the simplest modulation techniques, where
the presence or absence of the carrier signal signifies the binary value.

How ASK Works:

1. Carrier Wave: A high-frequency carrier wave is generated.


2. Binary Data: The digital data to be transmitted is converted into a binary sequence.
3. Modulation: The amplitude of the carrier wave is varied to represent binary data.

 Binary 1: Carrier transmitted (higher amplitude)

 Binary 0: Carrier suppressed (lower or zero amplitude)
4. Demodulation: The demodulation process involves recovering the original binary data
from the received ASK signal. This is typically done using a simple envelope detector, which
extracts the amplitude variations from the received signal.
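ASK varies the carrier's amplitude; in the simplest on-off keying (OOK) form, the carrier is switched on for a 1 and off for a 0. A sketch, where the sample counts, cycle counts, and amplitudes are all arbitrary illustration values:

```python
import math

def ask_modulate(bits, samples_per_bit=8, cycles_per_bit=2, a1=1.0, a0=0.0):
    # on-off keying: full-amplitude carrier for 1, silence for 0
    signal = []
    for b in bits:
        amp = a1 if b else a0
        for n in range(samples_per_bit):
            t = n / samples_per_bit      # time within the bit period
            signal.append(amp * math.cos(2 * math.pi * cycles_per_bit * t))
    return signal

wave = ask_modulate([1, 0])
print(max(wave[:8]), max(wave[8:]))  # 1.0 0.0
```

An envelope detector recovers the bits by checking whether each bit period carries energy.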

Frequency Shift Keying (FSK)


Frequency Shift Keying (FSK) is a digital modulation technique where digital information is encoded
by shifting the frequency of a carrier wave between two or more discrete frequencies. It is widely
used in applications like modems, telemetry systems, and radio communication.

How FSK Works:


1. Carrier Wave: A high-frequency carrier wave is generated.
2. Binary Data: The digital data to be transmitted is converted into a binary sequence.
3. Modulation: The frequency of the carrier wave is shifted to represent binary data.
4. Demodulation: Demodulation in FSK involves detecting the frequency shifts in the
received signal and decoding them back into binary data. This can be achieved using various
techniques, including:
 Frequency Discriminators: These circuits convert frequency shifts into
amplitude variations, allowing detection with a simple envelope detector.
 Phase-Locked Loops (PLLs): PLLs can be used to track the frequency shifts in
the received signal and generate a digital output.

Phase Shift Keying (PSK)


Phase Shift Keying (PSK) is a digital modulation technique where digital information is encoded by
shifting the phase of a carrier wave.

How PSK Works:

1. Carrier Wave: A high-frequency carrier wave is generated.


2. Binary Data: The digital data to be transmitted is converted into a binary sequence.
3. Modulation: Binary data is encoded by shifting the phase of a carrier wave.
4. Demodulation: Demodulation in PSK involves detecting the phase shifts in the received
signal and decoding them back into binary data. This can be achieved using various techniques
like Phase-Locked Loops (PLLs).

Multiplexing
Multiplexing is a technique used in telecommunications to combine multiple signals or data streams
into one medium, enabling the simultaneous transmission of multiple signals over a single
communication channel. It maximizes the use of available bandwidth, thereby increasing efficiency
and reducing costs.

Frequency Division Multiplexing


Frequency Division Multiplexing (FDM) is a technique used in telecommunications to transmit
multiple signals simultaneously over a single communication channel. It works by dividing the
available bandwidth of the communication channel into multiple frequency bands, each capable of
carrying a separate signal.

How FDM Works:

Frequency Division Multiplexing (FDM) works by splitting the available bandwidth into separate
frequency bands. Each signal is assigned a unique frequency, so multiple signals can travel at the
same time over the same channel without interfering with each other. At the receiver, filters pick
out the signals based on their frequencies, and each one is converted back to its original form. This
method makes efficient use of the channel.
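A toy frequency plan makes this concrete; the channel width, number of signals, and guard-band size below are made-up values:

```python
total_hz, signals, guard_hz = 1_000_000, 8, 8_000   # hypothetical plan

usable = total_hz - guard_hz * (signals - 1)         # bandwidth left for data
per_signal = usable // signals                       # width of each sub-band
centres = [i * (per_signal + guard_hz) + per_signal // 2 for i in range(signals)]

print(per_signal)   # 118000  (Hz per signal)
print(centres[:3])  # [59000, 185000, 311000]
```

The guard bands between sub-bands are what keep adjacent signals from interfering.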

Time Division Multiplexing


Time Division Multiplexing (TDM) is a technique used to transmit multiple signals over a single
communication channel by dividing the channel into time slots. Each signal gets a unique time slot,
allowing it to use the entire channel's bandwidth for a brief period.

How TDM Works:

Time Division Multiplexing (TDM) works by splitting the available time on a communication channel
into small slots. Each signal gets its own time slot to transmit its data. The signals are sent one after
another, and both the sender and receiver need to be in sync to make sure the data is received
correctly during the right time slot. This allows multiple signals to share the same channel.
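Round-robin slot assignment, together with the matching de-multiplexing step, can be sketched as:

```python
def tdm_mux(streams):
    # interleave equal-length streams: one slot per stream per frame
    return [item for frame in zip(*streams) for item in frame]

def tdm_demux(line, n_streams):
    # slot i of every frame belongs to stream i
    return [line[i::n_streams] for i in range(n_streams)]

a, b, c = list("AAAA"), list("BBBB"), list("CCCC")
line = tdm_mux([a, b, c])
print("".join(line))                              # ABCABCABCABC
print(["".join(s) for s in tdm_demux(line, 3)])   # ['AAAA', 'BBBB', 'CCCC']
```

The demux step works only because sender and receiver agree on the slot order, which is why synchronization matters here.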

De-Multiplexing
De-multiplexing is a process used in telecommunications and data communications to separate
multiplexed signals. When data from multiple sources is combined and transmitted over a single
communication channel, it's called multiplexing.

De-multiplexing is the reverse process, where the combined signal is separated back into its original
individual signals. This technique is crucial in efficiently using communication resources, allowing
multiple signals to be sent simultaneously without interference. It’s commonly used in systems like
telephone networks, digital television, and internet data transmission.

Modulation
Modulation is the process of varying a carrier wave so that it can carry information such as
sound, video, or data. The carrier usually has a high frequency, which makes it suitable for
transmission over long distances.

There are three primary types of modulation:

1. Amplitude Modulation (AM): The amplitude (strength) of the carrier wave is varied in
proportion to the information being sent.
2. Frequency Modulation (FM): The frequency of the carrier wave is varied according to
the information being transmitted.
3. Phase Modulation (PM): The phase of the carrier wave is varied to convey information.

Demodulation
Demodulation is the reverse process of modulation. It is the extraction of the original information
from the modulated carrier wave. When the signal reaches the receiver, it needs to be converted
back into its original form so that the information can be understood or processed.

Just as with modulation, there are corresponding demodulation techniques:

1. AM Demodulation: Extracting the original signal by detecting changes in amplitude.


2. FM Demodulation: Extracting the original signal by detecting changes in frequency.
3. PM Demodulation: Extracting the original signal by detecting changes in phase.
UNIT 3rd
Components of a Network
1. Nodes: Devices like computers, servers, printers, and mobile devices that communicate
over the network.
2. Switches: Devices that connect multiple devices in a local area network (LAN) and
manage data transfer between them.
3. Routers: Devices that connect different networks and direct data packets to their
destinations.
4. Modems: Devices that connect a network to the internet by converting signals between
digital and analog.
5. Protocols: Rules that govern data transmission, such as TCP/IP, HTTP, and FTP.
6. Firewall: Security hardware or software that controls and filters network traffic to
prevent unauthorized access.
7. Servers: Systems that provide resources, services, or data to other devices in the
network.

Network Topologies
Network topology refers to the layout or arrangement of devices (nodes) in a communication
network. It defines how devices are connected and interact with each other.

Types of Topologies

1. Bus Topology: All devices are connected to a single central cable (bus). Simple and cost-
effective but prone to failure if the main cable breaks.
2. Star Topology: All devices are connected to a central hub or switch. Easy to
troubleshoot, but the network fails if the central hub goes down.
3. Ring Topology: Devices are connected in a circular manner, with each node connected to
two others. Data travels in one direction, reducing collisions but causing the whole network to
fail if one connection breaks.
4. Tree Topology: A hierarchical layout with a root node and sub-level nodes connected in
a star-like structure. Scalable and easy to manage but dependent on the root node.
5. Hybrid Topology: Combines two or more basic topologies (e.g., star-bus or star-ring).
Flexible and scalable, though more complex to design and manage.

Categories of Networking
LAN

A LAN (Local Area Network) is a network that connects computers and other devices within a limited
area, such as a home, school, office building, or data center. It allows for the sharing of resources like
files, printers, and internet connections among the connected devices. LANs are typically faster and
more secure compared to broader network types like WANs (Wide Area Networks) because they
cover a smaller geographic area and have fewer devices to manage.

WAN
A WAN (Wide Area Network) is a telecommunications network that extends over a large
geographical area, connecting multiple smaller networks such as LANs (Local Area Networks). WANs
are used to connect computers and other devices across cities, countries, or even continents. They
enable the sharing of information and resources over long distances. WANs typically use various
transmission technologies such as leased lines, satellite links, and public networks.

MAN

A MAN (Metropolitan Area Network) is a network that spans a city or a large campus. It is larger than
a LAN (Local Area Network) but smaller than a WAN (Wide Area Network). MANs are used to
connect multiple LANs within a metropolitan area, enabling high-speed communication and resource
sharing over a relatively large geographic area.

The OSI reference model


The OSI (Open Systems Interconnection) reference model is a conceptual framework used to
understand and implement network communications between different systems. It divides network
communication into seven distinct layers, each with specific functions and protocols.

1. Physical Layer: This is the first layer that deals with the physical connection between
devices, including cables, switches, and the raw bitstream.
2. Data Link Layer: This layer ensures reliable data transfer across a physical link by
providing error detection and correction, and framing.
3. Network Layer: Responsible for routing packets of data from one device to another using
IP addresses, ensuring they reach their destination.
4. Transport Layer: This layer provides end-to-end communication services for applications
by managing data flow control, error correction, and segmentation.
5. Session Layer: Manages and controls the connections between computers by establishing,
maintaining, and terminating sessions.
6. Presentation Layer: This layer translates data between the application layer and the
network, handling data encryption, compression, and translation.
7. Application Layer: The topmost layer that interacts directly with user applications,
providing network services such as email, file transfer, and web browsing.

The TCP/IP model


The TCP/IP model, also known as the Internet protocol suite, is a conceptual framework used for
understanding and designing a network protocol stack. TCP/IP stands for Transmission Control
Protocol/Internet Protocol. The model describes how data is transmitted and received over a
network and is the foundation of modern networking, including the internet.

Here's a quick overview of the four layers of the TCP/IP model:

1. Application Layer: Handles high-level protocols and data interactions for network
services like web browsing (HTTP), file transfer (FTP), email (SMTP), and domain name
resolution (DNS).
2. Transport Layer: Ensures reliable data transfer between devices using protocols like TCP
(reliable and ordered delivery) and UDP (faster but less reliable).
3. Internet Layer: Manages logical addressing and routing of data packets using IP (routes
data), ICMP (error reporting), and ARP (maps IP to MAC addresses).
4. Network Access Layer (Link Layer): Deals with the physical transmission of data
over network hardware through protocols like Ethernet and Wi-Fi.

Switching
Switching is the process of transferring data packets from one device to another within a network.
It's a fundamental concept in network communication, ensuring that data reaches its intended
destination efficiently.

Circuit-switched networks
Circuit-switched networks are a type of network in which a dedicated communication path or circuit
is established between two parties for the duration of a communication session. This type of
network is commonly used in traditional telephone systems.

Here's a brief overview of how circuit-switched networks work:

1. Setting Up: A path between the sender and receiver is established before communication
starts. This path is reserved and ready for use.
2. Active Connection: The path stays open and dedicated to the communication as long
as it's needed, ensuring a stable connection.
3. Closing Down: When the communication ends, the path is closed, and the resources are
freed up for others to use.

Datagram Networks
Datagram Networks are also known as packet-switched networks. They work on the principle of
breaking down data into smaller chunks called datagrams or packets. Each packet is transmitted
independently over the network and can take different paths to reach the destination. Here are
some key points:

 Packet-Based: Data is sent in small packets called datagrams.


 Independent Routing: Each packet can take a different path.
 No Connection Setup: No dedicated path; packets are sent directly.
 Potential Issues: Packets may arrive out of order, delayed, or lost.

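The packet-based idea can be sketched in a few lines: split a message into numbered datagrams, let them arrive in arbitrary order, and have the receiver reorder them. A hypothetical illustration (the `to_datagrams`/`reassemble` helpers and the 8-byte MTU are invented for the example):

```python
import random

def to_datagrams(message: bytes, mtu: int = 8):
    """Split a message into numbered datagrams of at most `mtu` bytes."""
    return [
        {"seq": i, "payload": message[i * mtu:(i + 1) * mtu]}
        for i in range((len(message) + mtu - 1) // mtu)
    ]

def reassemble(datagrams):
    """Reorder by sequence number and join the payloads."""
    return b"".join(d["payload"] for d in sorted(datagrams, key=lambda d: d["seq"]))

packets = to_datagrams(b"hello, datagram networks!")
random.shuffle(packets)            # packets may arrive out of order
print(reassemble(packets))         # -> b'hello, datagram networks!'
```

Real datagram networks also have to cope with loss and duplication, which this sketch ignores.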
Virtual Circuit Networks

Virtual Circuit Networks (VCNs) are a type of network where a logical path (or circuit) is established
between the source and destination before any data is sent. The path appears as a dedicated
connection, even though the underlying physical network may be shared with other virtual circuits.

Key Points
• Logical Path: Sets up a path before data transfer.
• Types: Permanent (PVC) and Switched (SVC) circuits.
• Reliable: Ensures data is delivered in order.
• Resources: Can reserve resources for quality of service.
• Protocols: Used by Frame Relay and ATM.

Introduction to Routing
Routing is the process of determining the best path for data to travel across a network from the
source to the destination. It involves several key concepts and components:

1. Routers: Devices that direct data packets between networks.
2. Routing Tables: Databases in routers containing information about network paths.
3. Routing Algorithms: Methods used to calculate the best path (e.g., Dijkstra's algorithm).
4. Types of Routing:
   • Static Routing: Fixed paths manually configured by network administrators.
   • Dynamic Routing: Paths automatically adjusted using routing protocols.

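Dijkstra's algorithm, mentioned above, can be sketched as follows. The four-router topology and its link costs are made up for illustration:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path cost from source to every reachable node.
    graph: {node: {neighbor: link_cost}} with non-negative costs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry, already improved
        for neighbor, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Hypothetical four-router topology with link costs.
net = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {},
}
print(dijkstra(net, "A"))   # -> {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

Link-state routing protocols such as OSPF run essentially this computation over a map of the network each router builds.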
!!!!UNIT 4!!!!
Network addressing
Network addressing in data communication refers to the method of identifying devices on a network
to facilitate communication. Each device is assigned a unique address to ensure data is transmitted to
the correct recipient.
Physical Address
In data communication, a Physical Address, also known as a Media Access Control (MAC) Address, is
a unique identifier assigned to network interfaces for communication on a physical network
segment. It is typically a 48-bit value written in hexadecimal (e.g., 00:1A:2B:3C:4D:5E), with a
64-bit EUI-64 format also defined, and is used for communication within a local network
(Layer 2 of the OSI model).

Logical Address
A Logical Address, commonly called an IP Address, is essential for identifying devices and ensuring
accurate data delivery across networks. It operates at the Network Layer (Layer 3) of the OSI model
and facilitates communication between devices. Logical addresses are of two types: IPv4, a 32-bit
address written in dotted decimal format (e.g., 192.168.1.1), and IPv6, a 128-bit address written in
hexadecimal format (e.g., 2001:0db8::1).
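Python's standard `ipaddress` module can parse both address families, which makes the 32-bit vs. 128-bit distinction concrete (the addresses are the example values from the text):

```python
import ipaddress

v4 = ipaddress.ip_address("192.168.1.1")     # 32-bit IPv4
v6 = ipaddress.ip_address("2001:0db8::1")    # 128-bit IPv6

print(v4.version, v4)          # -> 4 192.168.1.1
print(v6.version, v6)          # -> 6 2001:db8::1  (canonical compressed form)
print(v4.packed.hex())         # 4 bytes = 32 bits -> c0a80101
print(len(v6.packed) * 8)      # -> 128
```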
Subnetting in Networking
Subnetting is a method used in networking to divide a larger IP address space (network) into smaller,
more manageable segments called subnets. This improves network performance, enhances security,
and makes management easier. Each subnet has its own IP address range, defined by a subnet mask,
which helps separate the network and host parts of an IP address.
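As a sketch of how subnetting divides an address block, the standard `ipaddress` module can split a /24 into four /26 subnets (192.168.1.0/24 is a hypothetical private network):

```python
import ipaddress

# Divide the hypothetical 192.168.1.0/24 block into four /26 subnets.
network = ipaddress.ip_network("192.168.1.0/24")
subnets = list(network.subnets(new_prefix=26))

for s in subnets:
    # Two addresses per subnet are reserved: the network and broadcast addresses.
    print(s, "usable hosts:", s.num_addresses - 2)
```

This prints four /26 subnets (192.168.1.0/26 through 192.168.1.192/26), each with 64 addresses and 62 usable hosts.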
Transmission Control Protocol (TCP)
The Transmission Control Protocol (TCP) is a key protocol that ensures reliable, ordered, and error-
free delivery of data between devices on a network. It establishes a connection between the sender
and receiver, breaks data into smaller parts, and reassembles them at the destination in the correct
order. This makes TCP crucial for applications that need data integrity and reliability, such as web
browsing, email, and file transfers.
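A minimal loopback sketch of TCP's connection-oriented behavior, using Python's standard `socket` module (the one-shot echo server is invented for the demo, and the port is chosen by the OS):

```python
import socket
import threading

def echo_once(server_sock):
    """Accept one connection and echo back whatever arrives."""
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))   # TCP delivers bytes reliably, in order

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

# Client side: create_connection performs TCP's three-way handshake.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello over TCP")
    reply = client.recv(1024)
server.close()
print(reply)                            # -> b'hello over TCP'
```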
UDP (User Datagram Protocol)
UDP (User Datagram Protocol) is a communication protocol used for fast, connectionless data
transmission between devices. Unlike TCP, UDP does not guarantee reliable delivery or data integrity,
making it suitable for applications like streaming, online gaming, and VoIP, where speed is more
critical than perfect accuracy.
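The connectionless style can be sketched with two UDP sockets on the loopback interface; note there is no handshake and no acknowledgement (on a real network this datagram could simply be lost):

```python
import socket

# Receiver binds a UDP socket; there is no listen/accept step for UDP.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
addr = receiver.getsockname()

# Sender transmits a datagram directly: no connection setup, no ACK.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"fast but unacknowledged", addr)

data, _ = receiver.recvfrom(1024)
print(data)                             # -> b'fast but unacknowledged'
sender.close()
receiver.close()
```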
IPv4 (Internet Protocol version 4)
IPv4 is a protocol used for addressing and routing data in a network. It is the most widely used
version of the Internet Protocol, providing a unique identifier for devices on a network, enabling
communication between them. IPv4 uses a 32-bit address, shown as four numbers separated by dots
(e.g., 192.168.1.1). Each number can range from 0 to 255, creating about 4.3 billion unique
addresses. These addresses help routers send data to the right place.
Classful addressing
Classful addressing is an older scheme that divided IP addresses into classes (A, B, C, D, E) based
on the first octet. Class A served large networks, Class B medium networks, and Class C small ones,
while Class D was reserved for multicast and Class E for experimental use. It helped organize
addresses but wasted many of them, so it was replaced by CIDR, which is more flexible and conserves
address space.
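The historical class boundaries are determined by the first octet alone, which a short helper can illustrate (the `address_class` function is written for this example):

```python
def address_class(ip: str) -> str:
    """Classify an IPv4 address by its first octet (historical classful scheme)."""
    first = int(ip.split(".")[0])
    if first <= 127:
        return "A"        # 0-127: large networks
    if first <= 191:
        return "B"        # 128-191: medium networks
    if first <= 223:
        return "C"        # 192-223: small networks
    if first <= 239:
        return "D"        # 224-239: multicast
    return "E"            # 240-255: experimental

print(address_class("10.0.0.1"))       # -> A
print(address_class("192.168.1.1"))    # -> C
print(address_class("224.0.0.5"))      # -> D
```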
Network Protocols: HTTP
HTTP (Hypertext Transfer Protocol) is the basic way web browsers and servers communicate. It lets
users view web pages, images, and other online content. HTTP works on a request-response model,
meaning each interaction is separate and doesn't remember past actions. It's essential for browsing
and sharing information on the internet.
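The request half of one HTTP request-response exchange can be written out as the raw text a client sends; a sketch assuming HTTP/1.1 and a hypothetical host `www.example.com`:

```python
# Raw text of one HTTP/1.1 request, as a browser or client would send it.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Connection: close\r\n"
    "\r\n"                 # blank line terminates the header section
)

# The first line (the request line) carries the method, path, and version.
method, path, version = request.splitlines()[0].split()
print(method, path, version)   # -> GET /index.html HTTP/1.1
```

The server replies with a similarly structured response (status line, headers, blank line, body); because each exchange is independent, HTTP is described as stateless.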
Network Protocols: FTP
FTP (File Transfer Protocol) is a standard way to transfer files between a computer and a server over
a network. It allows users to upload, download, and manage files, making it useful for website
maintenance and sharing data. FTP works using a client-server system and usually requires a
username and password, though it can also allow anonymous access for public file sharing.
Network Protocols: SMTP
SMTP (Simple Mail Transfer Protocol) is a protocol used for sending emails. It transfers messages
from your email client (like Outlook or Gmail) to the mail server and then to the recipient's server.
SMTP ensures your emails reach the correct destination reliably and efficiently.
Network Protocols: SNMP
SNMP (Simple Network Management Protocol) is a way for network administrators to manage and
monitor network devices like routers, switches, and servers. It helps check the health of the network
and fix problems by sending and receiving messages with these devices.
Network Protocols: DNS
DNS (Domain Name System) converts easy-to-remember domain names (like www.example.com)
into numerical IP addresses (like 192.168.1.1) that computers use to locate each other on the
network. This way, users don't have to remember complex numbers to access websites.
Error Detection & Correction
Error Detection & Correction in data communication ensures the accurate delivery of data by
identifying and fixing errors that occur during transmission. Techniques like Parity Check, Checksum,
and Cyclic Redundancy Check (CRC) detect errors, while methods such as Automatic Repeat reQuest
(ARQ) and Forward Error Correction (FEC) correct them. These processes maintain data integrity and
reliability in communication networks.
Hamming Distance
Hamming Distance is a way to measure how many bits are different between two binary strings. It
helps in error detection and correction by showing how many bit changes are needed to turn one
string into another. For example, the Hamming Distance between "1010" and "1001" is 2 because
they have two bits that are different.
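A direct translation of this definition (the `hamming_distance` name is ours):

```python
def hamming_distance(a: str, b: str) -> int:
    """Count the positions at which two equal-length bit strings differ."""
    if len(a) != len(b):
        raise ValueError("strings must have equal length")
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("1010", "1001"))   # -> 2  (the example from the text)
```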
Parity Check
A Parity Check is a simple method to detect errors in data. It adds an extra bit, called a parity bit, to
make the total number of 1s either even (even parity) or odd (odd parity). When the data is received,
the system checks if the parity matches. If it doesn't, an error is found, meaning the data may be
corrupted.
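An even-parity sketch: append a bit so the count of 1s is even, then re-check on receipt (the helper names are illustrative):

```python
def add_even_parity(bits: str) -> str:
    """Append a parity bit so the total number of 1s becomes even."""
    return bits + ("1" if bits.count("1") % 2 else "0")

def check_even_parity(codeword: str) -> bool:
    """A received word is accepted if its count of 1s is still even."""
    return codeword.count("1") % 2 == 0

sent = add_even_parity("1011")      # three 1s -> parity bit 1 -> '10111'
print(check_even_parity(sent))      # -> True
print(check_even_parity("00111"))   # -> False (one bit flipped in transit)
```

Note that a single parity bit detects any odd number of flipped bits but misses an even number of flips.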
Cyclic Redundancy Check
A Cyclic Redundancy Check (CRC) is a method to detect errors in data. It creates a short code from
the data before sending it. The receiver then generates a new code from the received data and
checks if it matches the original. If they match, the data is correct; if not, there's an error.
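A common way to compute the CRC code is modulo-2 (XOR) polynomial division; a sketch using a classic textbook data/generator pair (the specific bit strings are an example, not taken from the text above):

```python
def crc_remainder(data: str, generator: str) -> str:
    """CRC check bits: append zero bits to the data, then take the
    remainder of modulo-2 (XOR) division by the generator polynomial."""
    bits = [int(b) for b in data] + [0] * (len(generator) - 1)
    gen = [int(b) for b in generator]
    for i in range(len(bits) - len(gen) + 1):
        if bits[i]:                        # divide only where the leading bit is 1
            for j, g in enumerate(gen):
                bits[i + j] ^= g
    return "".join(map(str, bits[-(len(generator) - 1):]))

data, gen = "1101011011", "10011"          # classic textbook example pair
crc = crc_remainder(data, gen)
print(crc)                                 # -> 1110
# Receiver side: the codeword (data + CRC) leaves a zero remainder.
print(crc_remainder(data + crc, gen))      # -> 0000
```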
Checksum
A Checksum is a simple way to check for errors in data. It creates a numerical value from the data
before sending it. The receiver calculates a new checksum from the received data and compares it
with the original. If they match, the data is correct; if not, there's an error.
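One widely used variant is the 16-bit ones'-complement "Internet checksum" carried in IP, TCP, and UDP headers; a sketch (the `hello` payload is arbitrary):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement checksum, as used in IP/TCP/UDP headers."""
    if len(data) % 2:
        data += b"\x00"                        # pad to whole 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

payload = b"hello"                             # arbitrary example data
chk = internet_checksum(payload)

# Receiver check: the data words plus the checksum word sum to 0xFFFF.
words = payload + b"\x00"                      # same padding as the sender
total = chk + sum((words[i] << 8) | words[i + 1] for i in range(0, len(words), 2))
while total >> 16:
    total = (total & 0xFFFF) + (total >> 16)
print(hex(total))                              # -> 0xffff
```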