Networking
Data flow
Data flow refers to the movement of data within and between systems, including its sources, destinations,
and the path it takes.
NETWORKS
Distributed Processing
Distributed processing is a method used to spread the data processing load across multiple systems, which
are often geographically dispersed. This approach is designed to handle large-scale data and computational
tasks more efficiently than a single machine could.
Benefits
1. Efficiency: By distributing tasks across multiple nodes, distributed processing can significantly
reduce the time required for data processing.
2. Scalability: Distributed systems can easily scale out by adding more nodes, allowing them to handle
increased workloads.
3. Cost-Effective: Using a cluster of commodity hardware can be more cost-effective than investing in
a single, high-performance machine.
4. Fault Tolerance: Distributed systems are inherently more resilient, as the failure of one node does
not bring down the entire system.
Challenges
1. Complexity: Designing and managing distributed systems can be complex due to issues such as
synchronization, data consistency, and network latency.
2. Security: Ensuring data security and privacy across multiple nodes and over a network can be
challenging.
3. Latency: Communication between nodes can introduce latency, affecting the overall performance of
the system.
4. Data Consistency: Maintaining data consistency across distributed nodes requires careful
management and coordination.
Network Criteria
Network Criteria refer to the standards and performance metrics used to evaluate networks. These criteria
include:
1. Performance:
o Bandwidth: The maximum data transfer rate of a network.
o Throughput: The actual rate at which data is successfully transferred.
o Latency: The delay before a transfer of data begins following an instruction.
o Jitter: The variation in packet arrival times.
2. Reliability:
o Availability: The proportion of time a network is operational.
o Error Rate: The number of corrupted bits expressed as a percentage or fraction of the total
sent.
o Fault Tolerance: The ability of a network to continue functioning in the event of a failure.
3. Security:
o Confidentiality: Ensuring that data is not accessed by unauthorized users.
o Integrity: Ensuring that data is not altered during transit.
o Availability: Ensuring that authorized users have access to data and resources when needed.
4. Scalability: The ability of a network to grow and manage increased demand.
Physical Structures
Physical Structures refer to the physical layout and connections of network components.
Type of connection: A point-to-point connection provides a dedicated link between two devices; the entire
capacity of the link is reserved for transmission between those two devices. A multipoint (also called
multidrop) connection is one in which more than two specific devices share a single link.
These include:
1. Topology:
1. Bus Topology: In a bus topology, all devices are connected to a single central cable, called the
bus or backbone. Data sent from a device is broadcast to all devices on the network.
Advantages:
• Easy to Implement: Simple to set up and extend.
• Cost-Effective: Requires less cable than other topologies.
• Easy to Understand: Simple design and layout.
Disadvantages:
• Limited Length: The length of the bus is limited, which restricts the number of devices.
• Performance Issues: Heavy network traffic can slow down performance.
• Single Point of Failure: If the main cable (bus) fails, the entire network goes down.
• Difficult Troubleshooting: Identifying and isolating faults can be challenging.
2. Star Topology: In a star topology, all devices are connected to a central hub or switch. Data sent
from one device to another passes through the central hub.
Advantages:
• High Performance: Each device has a dedicated connection to the hub, reducing collisions and
increasing performance.
• Easy to Manage: Easy to install and configure. Faults are easy to detect and isolate.
• Scalability: Easy to add new devices without disrupting the network.
Disadvantages:
• Central Point of Failure: If the central hub fails, the entire network becomes inoperable.
• Higher Costs: Requires more cable and the cost of the hub.
3. Ring Topology: In a ring topology, each device is connected to two other devices, forming a
circular data path. Data travels in one direction (or sometimes in both directions in dual-ring
networks).
Advantages:
• Efficient Data Transfer: Data packets travel at high speed in one direction, reducing the
chances of collision.
• Orderly Network: All devices have equal access to resources, which prevents data packet
collision.
Disadvantages:
• Failure Impact: If one device or connection in the ring fails, the entire network can be affected.
• Difficult Troubleshooting: Identifying faults can be complicated.
• Network Expansion: Adding or removing devices can disrupt the network.
4. Mesh Topology: In a mesh topology, each device is connected to every other device in the
network. This can be a full mesh (where every device is connected to every other device) or a partial
mesh (where some devices are only connected to those they exchange data with most frequently).
Advantages:
• Redundancy and Reliability: Multiple paths for data transmission ensure the network is robust
and fault-tolerant.
• Scalability: Can handle high traffic loads and is scalable.
Disadvantages:
• High Cost: Requires a lot of cables and networking hardware, making it expensive.
• Complex Installation: Complicated setup and maintenance due to the large number of
connections.
5. Tree Topology: Tree topology is a hybrid topology that combines characteristics of star and bus
topologies. It consists of groups of star-configured networks connected to a linear bus backbone.
Advantages:
• Scalability: Easy to expand by adding new branches of star networks.
• Hierarchical Management: Suitable for hierarchical networks where different levels have
different functions.
Disadvantages:
• Complexity: More complex to configure and maintain than simpler topologies.
• Dependence on Backbone: The entire network depends on the main bus cable; if it fails, the whole
network can go down.
6. Hybrid Topology: Hybrid topology combines two or more different types of topologies, such as
star, bus, ring, or mesh, to leverage their benefits while minimizing their disadvantages.
Advantages:
• Flexibility: Can be designed to suit the specific needs of an organization.
• Reliable and Scalable: Combines the strengths of different topologies to create a more robust
and flexible network.
Disadvantages:
• Complexity and Cost: More complex to design and implement, often resulting in higher costs.
• Maintenance: Requires careful planning and management to ensure compatibility and
efficiency.
2. Transmission Media: Guided media (cables) and unguided (wireless) media, covered in detail in the Transmission Media section.
3. Hardware Components:
o Routers: Direct data packets between networks.
o Switches: Connect devices within a single network.
o Hubs: Basic devices that connect multiple Ethernet devices.
o Modems: Modulate and demodulate signals for transmission over telephone lines.
Network Models
Network Models are frameworks for understanding network functions and interactions. The two primary
models are:
1. OSI Model (Open Systems Interconnection):
o Layers: Physical, Data Link, Network, Transport, Session, Presentation, Application.
2. TCP/IP Model:
o Layers: Host-to-Network, Internet, Transport, Application (described in detail under Network Models below).
THE INTERNET
A Brief History
1. Early Beginnings (1960s-1970s):
o 1969: The first network, ARPANET, was developed by the U.S. Department of Defense to
connect research institutions.
o 1972: The first email was sent by Ray Tomlinson.
o 1973: Vint Cerf and Bob Kahn developed the Transmission Control Protocol (TCP), which
later combined with IP to form TCP/IP.
2. Expansion and Standardization (1980s):
o 1983: ARPANET adopted TCP/IP, and the network became known as the Internet.
o 1985: The National Science Foundation Network (NSFNET) was created, connecting several
supercomputing centers.
o 1986: The first Domain Name System (DNS) was introduced to translate domain names into
IP addresses.
3. Commercialization and Growth (1990s):
o 1991: Tim Berners-Lee proposed the World Wide Web (WWW) as a system for sharing
information through hypertext.
o 1993: The first graphical web browser, Mosaic, was released, making the web more
accessible to the general public.
o 1995: The National Science Foundation lifted restrictions on commercial use of the Internet,
leading to rapid growth of private networks and services.
4. Broadband and Web 2.0 (2000s):
o 2000: The dot-com bubble burst, but the Internet continued to expand.
o 2004: The term "Web 2.0" emerged, reflecting the shift towards user-generated content and
social media platforms like Facebook and YouTube.
o 2007: The iPhone's release accelerated mobile Internet use.
5. Recent Developments (2010s-2020s):
o 2010s: The rise of cloud computing, Internet of Things (IoT), and increased focus on data
privacy and cybersecurity.
o 2020s: Advances in 5G technology, artificial intelligence integration, and the growth of
decentralized technologies like blockchain.
The Internet Today
1. Global Reach:
o The Internet connects billions of devices and users worldwide, providing access to
information, communication, and various online services.
2. Key Services and Technologies:
o Web Browsing: Accessing information through websites and web applications.
o Email: Communication via electronic mail.
o Social Media: Platforms like Facebook, Twitter, and Instagram for social networking and
content sharing.
o Streaming Services: Platforms like Netflix and YouTube for video and music streaming.
o E-Commerce: Online shopping platforms like Amazon and Alibaba.
3. Mobile and Wireless Access:
o Smartphones and Tablets: These devices provide ubiquitous access to the Internet via cellular
networks and Wi-Fi.
o 5G Networks: The latest generation of mobile networks, offering high-speed and low-latency
connectivity.
4. Cloud Computing: Services like AWS, Google Cloud, and Microsoft Azure provide scalable
computing resources and storage solutions over the Internet.
5. Cybersecurity and Privacy: Increasing focus on protecting user data and maintaining secure online
interactions. Issues like data breaches, hacking, and privacy concerns are prominent.
6. Internet of Things (IoT): Growing number of interconnected devices, from smart home appliances
to industrial sensors, creating a network of "smart" devices.
7. Regulation and Policy: Governments and organizations are developing regulations to address issues
such as data privacy (e.g., GDPR), net neutrality, and digital rights.
8. Future Trends:
o Artificial Intelligence: Integration of AI technologies in various applications and services.
o Quantum Computing: Potential to revolutionize data processing and cryptography.
o Decentralized Technologies: Blockchain and other technologies aiming to create more secure
and transparent systems.
PROTOCOLS AND STANDARDS
Protocols, Standards, and Standards Organizations
Protocols are rules and conventions that define how data is exchanged over networks. They ensure that
different systems can communicate effectively. Key aspects of protocols include:
• Syntax: The term syntax refers to the structure or format of the data, meaning the order in which they
are presented.
• Semantics: The word semantics refers to the meaning of each section of bits.
• Timing: The term timing refers to two characteristics: when data should be sent and how fast they can
be sent.
Examples of Network Protocols:
o HTTP (Hypertext Transfer Protocol): Used for transferring web pages.
o FTP (File Transfer Protocol): Used for transferring files between systems.
o SMTP (Simple Mail Transfer Protocol): Used for sending email.
o TCP/IP (Transmission Control Protocol/Internet Protocol): Core protocols for Internet
communication, providing reliable data transfer and addressing.
Standards
Standards are established norms and guidelines that ensure compatibility and interoperability among
different systems and devices. They provide a common framework for:
1. Data communication standards fall into two categories: de facto (meaning "by fact" or "by
convention") and de jure (meaning "by law" or "by regulation").
• De facto: Standards that have not been approved by an organized body but have been adopted
as standards through widespread use are de facto standards. De facto standards are often
established originally by manufacturers who seek to define the functionality of a new product
or technology.
• De jure: Standards that have been legislated by an officially recognized body are de jure
standards.
2. Examples of Standards:
o IEEE 802.11: Standards for wireless networking (Wi-Fi).
o USB (Universal Serial Bus): Standard for connecting peripherals to computers.
o ISO/IEC 27001: Standard for information security management systems.
Standards Organizations
Standards Organizations are bodies responsible for developing and maintaining standards across various
industries. Key organizations include:
1. ISO (International Organization for Standardization):
o Develops and publishes international standards for a wide range of industries and
technologies.
2. IEC (International Electrotechnical Commission):
o Focuses on international standards for electrical, electronic, and related technologies.
3. IEEE (Institute of Electrical and Electronics Engineers):
o Develops standards related to electrical engineering, electronics, and telecommunications.
4. IETF (Internet Engineering Task Force):
o Develops and promotes voluntary Internet standards, particularly for protocols and best
practices.
5. ITU-T (International Telecommunication Union - Telecommunication Standardization Sector):
o Develops international standards for telecommunications and ICT to ensure the efficient and
compatible operation of communication systems.
6. ANSI (American National Standards Institute):
o Ensures that the U.S. standards system remains globally competitive and that American standards
are recognized internationally.
7. EIA (Electronic Industries Alliance):
o Developed standards to ensure the compatibility and interoperability of electronic components and
systems.
8. W3C (World Wide Web Consortium):
o Develops standards for web technologies to ensure compatibility and interoperability across
web platforms.
Internet Standards
Internet Standards are specific protocols and guidelines that govern how the Internet functions. They are
essential for ensuring consistent and reliable communication across the global network. Key standards
include the TCP/IP protocol suite and application protocols such as HTTP, SMTP, and FTP, which are
developed and published through the IETF's Request for Comments (RFC) process.
2. NETWORK MODELS
LAYERED TASKS
Layered tasks in data communication refer to the division of communication processes into distinct layers,
each with specific responsibilities. This approach helps manage the complexity of data communication,
ensuring efficient and reliable data transfer.
1. Sender:
o Role: The sender is the device that initiates the communication by transmitting data.
o Tasks:
▪ Data generation and encapsulation.
▪ Adding necessary headers for addressing and control information.
▪ Converting data into signals suitable for the transmission medium.
o Examples: Computers, smartphones, servers.
2. Receiver:
o Role: The receiver is the device that receives the transmitted data and processes it.
o Tasks:
▪ Signal reception and decoding.
▪ Stripping headers and extracting the original data.
▪ Presenting data to the application layer for use.
o Examples: Computers, smartphones, servers.
3. Carrier:
o Role: The carrier provides the medium through which data is transmitted between the sender
and receiver.
o Types of Media:
▪ Wired: Twisted pair cables, coaxial cables, fiber optic cables.
▪ Wireless: Radio waves, microwaves, infrared.
Hierarchical Structure
The layered model that dominated data communications and networking literature before 1990 was the Open
Systems Interconnection (OSI) model. Everyone believed that the OSI model would become the ultimate
standard for data communications, but this did not happen. The TCP/IP protocol suite became the dominant
commercial architecture because it was used and tested extensively in the Internet; the OSI model was never
fully implemented.
THE OSI MODEL
It was first introduced in the late 1970s by the ISO (International Organization for Standardization). An open
system is a set of protocols that allows any two different systems to communicate regardless of their
underlying architecture.
The purpose of the OSI model is to show how to facilitate communication between different systems
without requiring changes to the logic of the underlying hardware and software.
• Layered Architecture: The seven-layer model for network communication.
• Peer-to-Peer Processes: Interaction between corresponding layers on different systems.
• Encapsulation: Wrapping data with the necessary protocol information at each layer; the data portion
of a packet at level N-1 carries the whole packet (data, header, and possibly trailer) from level N. (A
toy sketch follows below.)
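To make the encapsulation idea concrete, here is a toy Python sketch (not any real protocol stack): each layer simply prepends an illustrative header string to the packet it receives from the layer above, and the receiver strips the headers in reverse order.

```python
# Toy illustration of encapsulation: each layer wraps the packet from the
# layer above with its own (purely illustrative) header string.
LAYERS = ["application", "presentation", "session",
          "transport", "network", "data link"]

def encapsulate(data: str) -> str:
    packet = data
    for layer in LAYERS:                    # moving down the stack at the sender
        packet = f"[{layer}-hdr]" + packet  # level N packet becomes the data of level N-1
    return packet

def decapsulate(frame: str) -> str:
    packet = frame
    for layer in reversed(LAYERS):          # moving up the stack at the receiver
        packet = packet.removeprefix(f"[{layer}-hdr]")
    return packet

frame = encapsulate("hello")
print(frame)               # [data link-hdr][network-hdr]...[application-hdr]hello
print(decapsulate(frame))  # hello
```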
LAYERS IN THE OSI MODEL
1. Physical Layer:
o Function: Deals with the physical connection between devices and the transmission and
reception of raw bit streams over a physical medium.
o Tasks: Signal transmission, bit synchronization, physical topology.
2. Data Link Layer:
o Function: Provides node-to-node data transfer and handles error detection and correction
from the physical layer.
o Tasks: Framing, MAC addressing, error checking.
3. Network Layer:
o Function: Manages logical addressing and routing of data packets between devices on
different networks.
o Tasks: IP addressing, packet forwarding, routing.
4. Transport Layer:
o Function: Ensures reliable end-to-end data transfer between systems.
o Tasks: Segmentation and reassembly, flow control, error correction.
5. Session Layer:
o Function: Manages sessions or connections between applications.
o Tasks: Session establishment, maintenance, and termination, synchronization.
6. Presentation Layer:
o Function: Translates data between the application layer and the network, ensuring data is in
a usable format.
o Tasks: Data encryption, compression, and translation (e.g., converting data formats).
7. Application Layer:
o Function: Provides network services directly to end-user applications.
o Tasks: Protocols and services for file transfers, email, remote login, and other network
software services.
TCP/IP PROTOCOL SUITE
The TCP/IP protocol suite, developed before the OSI model, consists of four original layers: host-to-
network, internet, transport, and application. However, in comparison to the OSI model, these layers can be
mapped as follows:
• Host-to-Network Layer: Equivalent to the physical and data link layers of the OSI model.
• Internet Layer: Equivalent to the network layer of the OSI model.
• Transport Layer: Handles some session layer duties and corresponds to the transport layer of the
OSI model.
• Application Layer: Combines the functions of the session, presentation, and application layers of
the OSI model.
TCP/IP is a hierarchical protocol suite made up of interactive modules, each providing a specific functionality.
The protocols within the layers are relatively independent and can be mixed and matched as needed.
PHYSICAL AND DATA LINK LAYERS:
• TCP/IP does not specify protocols for these layers, supporting all standard and proprietary protocols.
Networks can be local-area or wide-area networks.
NETWORK LAYER:
• Internetworking Protocol (IP):
o The transmission mechanism used by TCP/IP.
o Provides an unreliable, connectionless, best-effort delivery service.
o Transports data in packets called datagrams without guaranteeing error checking or tracking.
• Supporting Protocols:
o ARP (Address Resolution Protocol): Maps logical addresses to physical addresses.
o RARP (Reverse Address Resolution Protocol): Discovers a device's internet address from
its physical address.
o ICMP (Internet Control Message Protocol): Sends error messages and operational
information.
o IGMP (Internet Group Management Protocol): Manages group message transmissions (multicasting).
TRANSPORT LAYER:
• UDP (User Datagram Protocol):
o A simple, connectionless protocol providing basic error control and data length information.
• TCP (Transmission Control Protocol):
o A reliable, connection-oriented protocol that ensures ordered delivery of data segments.
o Manages sequence and acknowledgment numbers for data reordering.
• SCTP (Stream Control Transmission Protocol):
o Combines features of both UDP and TCP, supporting newer applications like voice over the
Internet.
APPLICATION LAYER:
• Equivalent to the session, presentation, and application layers of the OSI model.
• Hosts various standard protocols discussed in later chapters.
ADDRESSING
Physical Addresses:
• Also known as link addresses, these are defined by a node's LAN or WAN.
• Included in the frame used by the data link layer.
• Vary in size and format depending on the network (e.g., Ethernet uses a 6-byte address).
• Used within a local network for frame delivery.
Logical Addresses:
• Necessary for universal communication, independent of the underlying physical network.
• Currently a 32-bit (IPv4) address in the Internet, uniquely identifying a host.
• Used for routing packets across different networks.
Port Addresses:
• Used to identify specific processes within a host.
• A 16-bit label assigned to a process for communication.
• Allows multiple processes to communicate simultaneously over the Internet.
Specific Addresses:
• User-friendly addresses like email addresses and URLs.
• Translated to corresponding port and logical addresses by the sending computer.
3. DATA AND SIGNALS
ANALOG AND DIGITAL
Analog Data:
• Definition: Continuous information with infinitely many possible values.
• Example: An analog clock, which continuously moves its hands, or the continuous sound waves
produced by a human voice.
• Representation: Analog data can be captured and converted to an analog signal or sampled and
converted to a digital signal.
Analog Signals:
• Definition: Signals with infinitely many levels of intensity over time, represented by a continuous
wave.
• Characteristics: Smooth transitions between values, with the signal passing through an infinite
number of points.
Digital Data:
• Definition: Discrete information with a limited number of defined states.
• Example: A digital clock, which changes suddenly from 8:05 to 8:06, or data stored as 0s and 1s in
computer memory.
• Representation: Digital data can be converted to a digital signal or modulated into an analog signal
for transmission.
Digital Signals:
• Definition: Signals with a limited number of defined values, typically represented by discrete steps
or jumps.
• Characteristics: Sudden transitions between values, shown as vertical lines in signal plots.
Period and Frequency: Period refers to the amount of time, in seconds, a signal needs to complete one
cycle. Frequency refers to the number of periods in 1 s. Period is the inverse of frequency, and
frequency is the inverse of period: f = 1/T and T = 1/f.
Frequency is the rate of change with respect to time. Change in a short span of time means high
frequency; change over a long span of time means low frequency.
If a signal does not change at all, its frequency is zero. If a signal changes instantaneously, its
frequency is infinite.
PHASE
Phase describes the position of the waveform relative to time 0. Phase is measured in degrees or radians. A
phase shift of 360° corresponds to a shift of a complete period; a phase shift of 180° corresponds to a shift of
one-half of a period; and a phase shift of 90° corresponds to a shift of one-quarter of a period.
WAVELENGTH
Wavelength binds the period or the frequency of a simple sine wave to the propagation speed of the medium.
While the frequency of a signal is independent of the medium, the wavelength depends on both the
frequency and the medium. The wavelength is the distance a simple signal can travel in one period:
wavelength = propagation speed × period = propagation speed / frequency. (A numeric example follows below.)
• Bit Length: Bit length is the physical distance that a single bit occupies on the transmission
medium. It is determined by the propagation speed of the medium and the bit rate. (Units: meters
per bit)
Propagation Speed is the speed at which a signal travels through the medium.
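As a quick numeric check of these definitions, the sketch below assumes a propagation speed of 2 × 10^8 m/s (a common illustrative value for cable media; the exact figure depends on the medium).

```python
# Quick numeric check of period/frequency, wavelength, and bit length,
# assuming a propagation speed of 2e8 m/s (illustrative value).
propagation_speed = 2e8                      # metres per second (assumed)

period = 1e-3                                # T = 1 ms
frequency = 1 / period                       # f = 1/T = 1000 Hz

wavelength = propagation_speed / frequency   # distance travelled in one period
bit_rate = 1e6                               # 1 Mbps
bit_length = propagation_speed / bit_rate    # distance one bit occupies on the medium

print(frequency, wavelength, bit_length)     # 1000.0 200000.0 200.0
```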
DIGITAL SIGNAL AS COMPOSITE ANALOG SIGNAL:
• According to Fourier analysis, a digital signal can be seen as a composite of multiple analog signals.
• Infinite Bandwidth: A digital signal has an infinite bandwidth. This concept arises from the nature
of digital signals comprising connected vertical and horizontal line segments in the time domain.
o Vertical Line: Represents an infinite frequency (sudden change in time).
o Horizontal Line: Represents a zero frequency (no change in time).
o Transitioning from zero to infinity and vice versa suggests that all intermediate frequencies
are part of the signal.
Fourier Analysis:
• Decomposes digital signals into their frequency components.
• Periodic Digital Signal:
o Rare in data communications.
o When decomposed, it results in a frequency domain representation with discrete frequencies
and infinite bandwidth.
• Nonperiodic Digital Signal:
o More common in data communications.
o When decomposed, it results in a frequency domain representation with continuous
frequencies and infinite bandwidth.
TRANSMISSION OF DIGITAL SIGNALS
Methods and challenges.
TRANSMISSION IMPAIRMENT
• Attenuation, Distortion, Noise: Factors affecting signal quality.
Data Rate Limits
• Nyquist Bit Rate: Theoretical maximum bit rate for a noiseless channel.
• Shannon Capacity: Maximum bit rate for a noisy channel.
Performance
• Bandwidth, Throughput, Latency: Measures of network performance.
• Bandwidth-Delay Product, Jitter: Additional performance metrics. (A worked example of the Nyquist, Shannon, and bandwidth-delay calculations follows below.)
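A short worked example of these limits, assuming a 3000 Hz channel and illustrative SNR, bandwidth, and delay values (the formulas are the standard Nyquist and Shannon expressions; the numbers are not from any particular link).

```python
from math import log2

def nyquist_bit_rate(bandwidth_hz, levels):
    """Maximum bit rate of a noiseless channel with the given number of signal levels."""
    return 2 * bandwidth_hz * log2(levels)

def shannon_capacity(bandwidth_hz, snr):
    """Maximum bit rate of a noisy channel (SNR given as a plain ratio, not in dB)."""
    return bandwidth_hz * log2(1 + snr)

print(nyquist_bit_rate(3000, 2))      # 6000 bps for a 3000 Hz channel with 2 levels
print(shannon_capacity(3000, 3162))   # ~34,860 bps for an SNR of about 35 dB

# Bandwidth-delay product: the number of bits that can fill the link at one time
bandwidth_bps = 1_000_000             # 1 Mbps (assumed)
delay_s = 0.02                        # 20 ms propagation delay (assumed)
print(bandwidth_bps * delay_s)        # 20000.0 bits "in flight"
```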
4. DIGITAL TRANSMISSION
Digital-to-Digital Conversion
• Line Coding: Representing digital data on digital signals.
• Line Coding Schemes: Various methods of line coding.
• Block Coding, Scrambling: Techniques to improve signal integrity.
Analog-to-Digital Conversion
• Pulse Code Modulation (PCM): Converting analog signals to digital.
• Delta Modulation (DM): Simplified form of PCM.
Transmission Modes
• Parallel Transmission: Multiple bits transmitted simultaneously.
• Serial Transmission: Bits transmitted one after another.
5. ANALOG TRANSMISSION
Digital-to-Analog Conversion
• Amplitude Shift Keying, Frequency Shift Keying, Phase Shift Keying, Quadrature Amplitude
Modulation: Techniques for modulating digital signals into analog.
Analog-to-Analog Conversion
• Amplitude Modulation, Frequency Modulation, Phase Modulation: Techniques for modulating
analog signals.
6. BANDWIDTH UTILIZATION: MULTIPLEXING AND SPREADING
Multiplexing
Multiplexing is a technique used to combine multiple signals for transmission over a single communication
medium, thereby optimizing the use of available bandwidth. It allows for more efficient use of resources and
reduces the cost of transmission. There are several types of multiplexing techniques, each with its own
methods and applications.
• Frequency-Division Multiplexing (FDM), Wavelength-Division Multiplexing (WDM),
Synchronous Time-Division Multiplexing (STDM), Statistical Time-Division Multiplexing:
Techniques for combining multiple signals for transmission over a single medium.
Spread Spectrum
• Frequency Hopping Spread Spectrum (FHSS), Direct Sequence Spread Spectrum (DSSS):
Methods for spreading a signal across a wider bandwidth.
7. TRANSMISSION MEDIA
Transmission media are the physical pathways through which data is transmitted from one device to another.
They can be broadly categorized into two types: guided and unguided media.
1. GUIDED MEDIA
Guided media use physical cables or fibers to transmit data. The main types of guided media are:
• TWISTED PAIR CABLE:
o Description: Consists of pairs of insulated copper wires twisted together. One of the wires is
used to carry signals to the receiver, and the other is used only as a ground reference; the
receiver uses the difference between the two.
o Types:
▪ Unshielded Twisted Pair (UTP): Commonly used in Ethernet networks. Less
expensive, but more susceptible to interference.
▪ Shielded Twisted Pair (STP): Includes shielding to reduce electromagnetic
interference. Used in environments with high interference.
o Applications: Telephone networks, local area networks (LANs).
o Advantages:
▪ Cost-effective and widely available.
▪ Easy to install and configure.
▪ Flexible and lightweight.
o Disadvantages:
▪ Limited bandwidth and distance.
▪ Susceptible to electromagnetic interference (EMI) and crosstalk.
▪ Lower data transmission speeds compared to other media.
• COAXIAL CABLE:
o Description: A single copper conductor surrounded by insulation, a metallic shield, and an
outer cover.
o Applications: Cable television systems, broadband internet, some local area networks.
o Advantages:
▪ Higher bandwidth than twisted pair cables.
▪ Better protection from EMI.
▪ Suitable for longer distances without significant signal loss.
o Disadvantages:
▪ More expensive and bulkier than twisted pair cables.
▪ Difficult to install and maintain.
▪ Less flexible than twisted pair cables.
2. UNGUIDED MEDIA
Unguided media use wireless methods to transmit data through the air or space. The main types of unguided
media are:
• RADIO WAVES:
o Description: Electromagnetic waves with frequencies ranging from 3 kHz to 1 GHz.
o Applications: AM/FM radio, television broadcasts, wireless LANs, Bluetooth.
o Advantages:
▪ Can cover large areas and penetrate buildings.
▪ Suitable for mobile communication and broadcast services.
▪ No physical cables required.
o Disadvantages:
▪ Limited bandwidth compared to other wireless media.
▪ Vulnerable to interference and eavesdropping.
▪ Signal degradation due to obstacles and distance.
• MICROWAVES:
o Description: Electromagnetic waves with frequencies ranging from 1 GHz to 300 GHz.
o Types:
▪ Terrestrial Microwaves: Require line-of-sight between antennas. Used for point-to-
point communication.
▪ Satellite Microwaves: Use satellites to relay signals. Used for long-distance
communication, global positioning systems (GPS).
o Applications: Long-distance telephone calls, satellite television, satellite internet.
o Advantages:
▪ High bandwidth and data transfer rates.
▪ Suitable for long-distance communication.
▪ Can support point-to-point and point-to-multipoint transmissions.
o Disadvantages:
▪ Requires line-of-sight between transmitting and receiving antennas.
▪ Affected by weather conditions (e.g., rain, fog).
▪ Installation and maintenance can be challenging and expensive.
• INFRARED:
o Description: Electromagnetic waves with frequencies ranging from 300 GHz to 400 THz.
o Applications: Remote controls, short-range communication, wireless local area networks
(WLANs) within a room.
o Advantages:
▪ High data transmission rates for short distances.
▪ Secure from eavesdropping as it requires line-of-sight.
▪ No interference from other electronic devices.
o Disadvantages:
▪ Limited to short-range communication (within a room).
▪ Requires line-of-sight between devices.
▪ Can be blocked by obstacles like walls or furniture.
8. SWITCHING
A switched network consists of a series of interlinked nodes, called switches. Switches are devices capable
of creating temporary connections between two or more devices linked to the switch. In a switched network,
some of these nodes are connected to the end systems (computers or telephones, for example). Others are
used only for routing.
CIRCUIT-SWITCHED NETWORKS
A circuit-switched network is made of a set of switches connected by physical links, in which each link is
divided into n channels.
• Circuit Switching occurs at the physical layer and requires resource reservation before
communication begins.
• Resources include channels, switch buffers, and ports, and they remain dedicated throughout the
communication.
• Data Transfer is continuous, without packetization, and switches route data based on the reserved
channels or time slots.
In circuit-switched networks, communication involves three main phases: setup, data transfer, and teardown.
1. Setup Phase: Before communication starts, a dedicated path (circuit) is established. This involves
requesting and reserving channels through switches, with each switch along the path reserving
resources. End-to-end addressing is used for this purpose.
2. Data Transfer Phase: Once the circuit is established, data flows continuously between the end
systems without packetization. The dedicated resources ensure that there is no delay at each switch
during the data transfer.
3. Teardown Phase: After data transfer is complete, a signal is sent to release the reserved resources.
• Efficiency: Circuit-switched networks can be less efficient due to dedicated resources that remain
unavailable for other connections, particularly when there is no data transfer.
• Delay: These networks typically have low delay during data transfer since resources are dedicated
and no waiting time occurs at switches.
DATAGRAM NETWORKS
In packet-switched networks, messages are divided into packets, which are then transmitted through the
network without reserved resources. The size of the packet is determined by the network and the governing
protocol. In packet switching, there is no resource allocation for a packet. Datagram switching is normally
done at the network layer. Key characteristics of datagram networks, a type of packet-switched network:
1. Packet Division and Transmission: Messages are split into packets of fixed or variable size based
on network and protocol specifications. Unlike circuit-switched networks, datagram networks do not
reserve bandwidth or processing time for packets; resources are allocated on demand, often on a
first-come, first-served basis.
2. Datagram Approach:
o Each packet, or datagram, is treated independently, meaning packets from the same message
may take different paths and arrive out of order.
o There is no reservation of resources, which can lead to delays, packet loss, or out-of-order
delivery.
3. Routing and Addressing:
o Routing Tables: Each switch (or router) uses a dynamic routing table based on destination
addresses to forward packets. This table is updated periodically.
o Destination Address: Each packet carries a destination address used by the switch to
determine the appropriate output port. This address remains the same throughout the packet's
journey.
4. Connectionless Nature: Datagram networks do not have setup or teardown phases. Each packet is
treated independently without regard to other packets.
5. Efficiency and Delay:
o Efficiency: Generally better than circuit-switched networks since resources are only used
when needed and can be reallocated between packets.
o Delay: Can be higher due to potential waiting times at switches, variable paths, and lack of
resource reservation. Each packet may experience different delays based on its path and
processing time at switches.
VIRTUAL-CIRCUIT NETWORKS
A virtual-circuit network combines features of both circuit-switched and datagram networks:
1. Three Phases: Similar to circuit-switched networks, virtual-circuit networks involve setup, data
transfer, and teardown phases.
• Data Transfer Phase:
o Each switch uses a table with entries for virtual circuits to route packets, changing their
VCI as they pass through.
• Setup Phase:
o A setup request frame establishes a virtual circuit, creating and updating routing table
entries in each switch.
o An acknowledgment frame finalizes these entries and confirms the setup.
• Teardown Phase:
o After data transfer, a teardown request is sent to remove the virtual circuit entries from
the switches.
2. Resource Allocation: Resources can be reserved during the setup phase (like in circuit-switched
networks) or allocated on demand (like in datagram networks).
3. Packet Handling:
o Packetization: Data is divided into packets, each with a header containing a local address for
the next switch, not an end-to-end address.
o Path Consistency: All packets follow the same pre-established path, ensuring they travel
through the same sequence of switches.
4. Addressing:
o Global Address: Used to establish the virtual circuit and identify source and destination.
o Virtual-Circuit Identifier (VCI): Used for packet forwarding within the network. It changes
as packets pass through switches but is local to each switch.
5. Efficiency:
o Resource Reservation: If resources are reserved during setup, packets experience uniform
delay. If allocation is on demand, delays can vary.
6. Delay:
o Total Delay: Includes setup, data transfer, and teardown delays, along with transmission and
propagation times.
STRUCTURE OF A SWITCH
A packet switch has four components: input ports, output ports, the routing processor, and the switching
fabric.
Input Ports:
• Handle physical and data link functions, including signal reconstruction and error detection.
• Decapsulate packets from frames and buffer them before routing.
Output Ports:
• Perform the reverse of input ports: queue outgoing packets, encapsulate them in frames, and apply
physical layer functions to send the signal.
Routing Processor:
• Operates at the network layer, using the destination address to determine the next hop and the output
port. (aka table lookup)
• Newer designs often integrate this function into input ports for efficiency.
Switching Fabric:
• The core component responsible for transferring packets from input to output ports.
• Historically, used memory or bus systems; modern switches use specialized fabrics, including:
o Crossbar Switch: Directly connects each input port to each output port, allowing for simple
routing.
o Banyan Switch: A multistage switch with microswitches routing packets based on binary
representation of output ports. It includes multiple stages and handles routing based on each
bit of the binary address.
o Batcher-Banyan Switch: Combines a Batcher switch to sort incoming packets with a
Banyan switch to handle routing. Includes a trap module to prevent packet collisions and
manage simultaneous packets heading to the same destination.
9. USING TELEPHONE AND CABLE NETWORKS FOR DATA TRANSMISSION
TELEPHONE NETWORK
The telephone network, which began as an analog system for voice transmission, has evolved significantly.
Major Components
1. Local Loops:
o Twisted-pair cables connecting subscriber telephones to local offices.
o Bandwidth for voice is 4000 Hz. Local telephone numbers help identify the local office and
loop number.
2. Trunks:
o Transmission media connecting different offices, handling many connections through
multiplexing.
o Typically use optical fibers or satellite links.
3. Switching Offices:
o Include end offices, tandem offices, and regional offices.
o Facilitate connections between local loops and trunks without permanent physical links.
Service Areas
1. LATAs (Local Access Transport Areas):
o A US term for a geographic region where one or more telephone companies or Local
Exchange Companies (LECs) provide telecommunications services.
o Regions within the U.S. defined post-1984 divestiture for local services.
o Can overlap state boundaries.
2. Intra-LATA Services:
o Provided by local exchange carriers (LECs) within a LATA.
o Includes both incumbent local exchange carriers (ILECs) and competitive local exchange
carriers (CLECs).
3. Inter-LATA Services:
o Handled by interexchange carriers (IXCs) for communication between different LATAs.
o Includes major long-distance carriers like AT&T, MCI, and Verizon.
Network Interaction
• Points of Presence (POPs):
o IXCs must have POPs in each LATA to provide inter-LATA services.
o LECs ensure connectivity to all POPs.
Signaling
1. Early Signaling:
o Initially manual with human operators setting up connections.
o Evolved to automatic systems with rotary phones and digital signaling.
2. Modern Signaling:
o Uses separate signaling and data transfer networks.
o Rotary telephones were invented that sent a digital signal defining each digit in a multidigit
telephone number.
o Data Transfer Network: Primarily circuit-switched but can also be packet-switched.
▪ Function: Carries multimedia information.
▪ Protocols and Model: Uses protocols similar to other networks discussed.
o Signaling Network: A packet-switched network suited for managing signaling tasks.
Components:
▪ Signal Points (SPs): Connect user telephones or computers.
▪ Signal Transport Ports (STPs): Nodes that receive and forward signaling messages.
▪ Service Control Point (SCP): Oversees the operation of the signaling network.
▪ Database Center: May store information about the signaling network.
DIAL-UP MODEMS
Telephone Line Bandwidth
• Traditional Bandwidth: 300 to 3300 Hz (3000 Hz bandwidth).
• Effective Bandwidth for Data: 2400 Hz (600 to 3000 Hz).
• Usage:
o Voice: Full range, tolerates interference.
o Data: Requires accuracy, edges of range unused to avoid errors.
Modem Functionality
• Definition: Modem stands for modulator/demodulator.
o Modulator: Converts binary data to an analog signal.
o Demodulator: Converts the analog signal back to binary data.
• Communication: Bidirectional, both computers can send and receive data using the same process.
Modem Standards
• V.32 and V.32bis:
o V.32: Uses trellis-coded modulation (32-QAM) with a baud rate of 2400 and a data rate of 9600
bps (4 data bits per baud).
o V.32bis: Supports 14,400 bps using 128-QAM at 2400 baud (6 data bits per baud), with an
automatic speed adjustment feature. (The rate arithmetic is sketched below.)
• V.34bis:
o Provides 28,800 bps with 960-point constellation and 33,600 bps with 1664-point
constellation.
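The V.32/V.32bis figures above are just the baud rate multiplied by the number of data bits carried per signal element (trellis coding adds one redundant bit per symbol). A two-function sketch, for illustration only:

```python
# data rate = baud rate * data bits per symbol; trellis coding uses one
# redundant bit per symbol, so it is subtracted here.
def modem_data_rate(baud, bits_per_symbol, redundant_bits=1):
    return baud * (bits_per_symbol - redundant_bits)

print(modem_data_rate(2400, 5))   # V.32: 32-QAM (5 bits/symbol) -> 9600 bps
print(modem_data_rate(2400, 7))   # V.32bis: 128-QAM (7 bits/symbol) -> 14400 bps
```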
56K Modems (V.90)
• Data Rate: 56 kbps download, 33.6 kbps upload.
• Mechanism:
o Download: The provider's data is already digital, so it is not affected by quantization noise; the
higher effective SNR allows rates up to 56 kbps.
o Upload: The subscriber's analog signal must be sampled and quantized, so quantization noise
limits the rate to 33.6 kbps.
V.92 Modems
• Features:
o Can adjust speed based on line noise.
o Upload rate up to 48 kbps, download rate still 56 kbps.
o Can interrupt internet connection for incoming calls if call-waiting is enabled.
CABLE TV NETWORKS
Traditional Cable Networks
• Origin: Began in the late 1940s as Community Antenna TV (CATV) to provide broadcast signals to
areas with poor reception.
• Infrastructure:
o Headend: Central office receiving video signals from broadcasters.
o Coaxial Cables: Distributed signals to the community.
o Amplifiers: Used to renew signals weakened by distance, up to 35 amplifiers per line.
o Splitters and Taps: Split cables and connected to subscriber premises.
• Limitations:
o Unidirectional: Communication was one-way, from the headend to the subscribers.
o Signal Attenuation: Extensive use of amplifiers due to signal degradation.
Hybrid Fiber-Coaxial (HFC) Network
• Modernization: Combines fiber-optic and coaxial cables.
• Structure:
o Fiber Optic to Fiber Node: High-bandwidth optical fibers carry signals from the cable TV
office (regional cable head - RCH) to fiber nodes.
o Coaxial Cable to Subscribers: From fiber nodes, signals are split and sent through coaxial
cables to subscriber premises, serving up to 1000 subscribers per coaxial cable.
• Components:
o Regional Cable Head (RCH): Serves up to 400,000 subscribers.
o Distribution Hubs: Serve up to 40,000 subscribers each, handling signal modulation and
distribution.
o Fiber Nodes: Split analog signals to coaxial cables.
• Advantages:
o Reduced Amplifiers: Less than eight amplifiers needed due to reduced signal attenuation.
o Bidirectional Communication: Supports two-way communication, enhancing internet and
interactive services.
10. ERROR DETECTION AND CORRECTION
TYPES OF ERRORS
• Bits are subject to interference, causing signal changes and errors.
• Two types of errors: single-bit and burst errors.
i. Single-Bit Error:
• Only one bit in a data unit (byte, character, or packet) changes from 0 to 1 or 1 to 0.
• Example: 00000010 (ASCII STX) sent, but 00001010 (ASCII LF) received.
• Rarity: Uncommon in serial data transmission, because noise usually lasts longer than the
duration of a single bit (e.g., at 1 Mbps each bit lasts only 1 microsecond, so a noise burst
typically corrupts more than one bit).
ii. Burst Error:
• Two or more bits in a data unit change from 0 to 1 or 1 to 0.
• Example: 0100010001000011 sent, but 0101110101100011 received.
• Characteristics: Errors don't need to be in consecutive bits; measured from the first to the last
corrupted bit.
• Likelihood: More common than single-bit errors due to longer noise durations affecting
multiple bits.
• Impact: Depends on data rate and noise duration (e.g., a noise burst of 0.01 s affects 10 bits at a
1 kbps data rate but 10,000 bits at 1 Mbps; see the quick check below).
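The impact figures are simply the data rate multiplied by the noise duration; a quick check:

```python
# Bits corrupted by a noise burst ~= data rate (bps) * noise duration (s)
def bits_affected(data_rate_bps, noise_duration_s):
    return int(data_rate_bps * noise_duration_s)

print(bits_affected(1_000, 0.01))       # 10 bits at 1 kbps
print(bits_affected(1_000_000, 0.01))   # 10,000 bits at 1 Mbps
```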
Redundancy:
• Purpose: To detect or correct errors by sending extra (redundant) bits along with the data.
• Process: Redundant bits are added by the sender and removed by the receiver, helping to identify
and correct errors.
Detection vs. Correction:
• Error Detection: Identifies whether an error has occurred (simple yes/no).
• Error Correction: Determines the number and locations of errors, making it more complex than
detection.
Methods of Error Correction:
• Forward Error Correction: The receiver uses redundant bits to guess the original message,
effective for small errors.
• Retransmission: The receiver requests the sender to resend the message until it is received correctly.
Coding Schemes:
• Purpose: Achieve redundancy by adding extra bits in a way that forms a relationship with the data
bits.
• Types:
o Block Coding: Focus of this discussion.
o Convolution Coding: More complex and beyond the scope here.
• Process: Sender adds redundant bits through a coding generator; the receiver uses a checker to
validate the data.
Modular Arithmetic:
• Definition: Arithmetic using a limited range of integers (0 to N-1), where N is the modulus.
• Application: Common in various coding schemes.
• Example: Clock system using modulo-12 arithmetic.
Modulo-2 Arithmetic:
• Relevance: Used extensively in error detection and correction.
• Operations:
o Addition/Subtraction: Simple operations without carry, using XOR (exclusive OR).
BLOCK CODING
Block Coding Process:
• Datawords and Codewords: A message is divided into blocks of k bits called datawords. Each
dataword is augmented with r redundant bits, making the length n = k + r. These n-bit blocks are called
codewords.
• Validity: The coding process is one-to-one; the same dataword always maps to the same codeword.
Out of 2^n possible codewords, only 2^k are valid, making the remaining codewords invalid or illegal.
Error Detection:
• Criteria: Errors can be detected if:
1. The receiver has a list of valid codewords.
2. The received codeword is not valid (i.e., changed to an invalid one).
• Limitation: Block coding can detect single errors but may fail to detect multiple errors.
Error Correction:
• Complexity: More challenging than detection as it requires identifying the original codeword.
• Process: The receiver uses more redundant bits and a complex checker to identify and correct errors.
• Hamming Distance: The central concept in error control, representing the number of differing bits
between two codewords. For effective error detection, the minimum Hamming distance dmin must be
s+1, where s is the number of errors to be detected.
Examples:
• Error Detection Example:
o k = 2 and n = 3.
o Valid codewords: 000, 011, 101, 110.
o Detection: If a received codeword is not in the list, it is discarded.
o Limitation: Two corrupted bits may create an undetectable error.
• Error Correction Example:
o k=2 and n=5.
o Valid codewords: 00000, 01011, 10101, 11110.
o Correction: Receiver checks against valid codewords to correct up to one bit error.
o Hamming distance: The minimum Hamming distance for error correction is dmin =2t+1,
where t is the number of correctable errors.
Hamming Distance:
• Definition: Number of differing bits between two words of the same size.
• Calculation: Use XOR operation and count the number of 1s.
• Examples:
o Hamming distance between 000 and 011 is 2.
o Hamming distance between 10101 and 11110 is 3. (Both examples are checked in the sketch below.)
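A minimal sketch that checks the two Hamming-distance examples above and shows how a receiver could correct a single-bit error in the C(5, 2) code by choosing the closest valid codeword (illustrative only, not an efficient decoder).

```python
def hamming_distance(a: str, b: str) -> int:
    # XOR the two words bit by bit and count the 1s
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("000", "011"))      # 2
print(hamming_distance("10101", "11110"))  # 3

# Single-error correction in the C(5, 2) code (dmin = 3, so t = 1)
valid_codewords = ["00000", "01011", "10101", "11110"]

def correct(received: str) -> str:
    # pick the valid codeword closest to the received word
    return min(valid_codewords, key=lambda cw: hamming_distance(cw, received))

print(correct("01001"))   # 01011 with one flipped bit -> corrected back to 01011
```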
Parameters of a Coding Scheme:
• Three Parameters: Codeword size n, dataword size k, and minimum Hamming distance dmin.
• Notation: A coding scheme is denoted as C(n, k) with dmin.
Error Detection and Correction Capability:
• Detection: To detect up to s errors, dmin=s+1.
• Correction: To correct up to t errors, dmin=2t+1.
• Example: A code with dmin=4 can detect up to 3 errors and correct up to 1 error.
CYCLIC CODES
• Definition: Cyclic codes are special types of linear block codes where if a codeword is cyclically
shifted, the result is another codeword.
• Example: If 1011000 is a codeword, then 0110001, obtained by a cyclic left-shift, is also a
codeword.
Cyclic Redundancy Check (CRC):
• Purpose: CRC is a type of cyclic code used primarily for error detection in networks such as LANs
and WANs.
• Table of CRC Codes: Shows linear and cyclic properties of CRC codes.
• Encoder and Decoder Design:
o Encoder: The encoder augments the dataword by adding zeros, performs modulo-2 division
with a predefined divisor, and appends the remainder to the dataword to form the codeword.
o Decoder: The decoder performs the same division. If the syndrome (remainder) is all zeros,
the dataword is accepted; otherwise, it is discarded.
• Example: The division process in both the encoder and the decoder is done step by step using the XOR
operation. (A short software sketch follows below.)
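A compact software sketch of the encoder/decoder logic just described, using modulo-2 (XOR) long division. The dataword 1001 and divisor 1011 are illustrative; real protocols use standardized generator polynomials.

```python
def mod2_div(dividend: str, divisor: str) -> str:
    """Modulo-2 long division; returns the remainder as a bit string."""
    bits = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        if bits[i] == "1":                                    # divide only when the leading bit is 1
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))  # XOR step
    return "".join(bits[-(len(divisor) - 1):])                # last (degree) bits form the remainder

def crc_encode(dataword: str, divisor: str) -> str:
    augmented = dataword + "0" * (len(divisor) - 1)           # augment the dataword with zeros
    return dataword + mod2_div(augmented, divisor)            # codeword = dataword + remainder

def crc_check(codeword: str, divisor: str) -> bool:
    return set(mod2_div(codeword, divisor)) <= {"0"}          # all-zero syndrome -> accept

codeword = crc_encode("1001", "1011")
print(codeword)                       # 1001110
print(crc_check(codeword, "1011"))    # True
print(crc_check("1001010", "1011"))   # False (corrupted codeword is discarded)
```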
Hardware Implementation:
• Divisor: Repeatedly XORed with part of the dividend. The divisor can be hardwired for efficiency.
• Augmented Dataword: Bits are shifted to align with the divisor bits.
• Remainder: Stored in registers; the design simplifies to use fewer XOR devices and shift registers.
General Design:
• Encoder and Decoder: Use shift registers and XOR devices to process the dataword and check bits.
Polynomials:
• Representation: A binary pattern can be represented as a polynomial.
• Degree: Highest power of the polynomial.
• Operations: Addition/subtraction (modulo-2), multiplication, and division of polynomials are
straightforward.
• Shifting: A left-shift corresponds to multiplying by x^n, and a right-shift corresponds to dividing
by x^n.
Cyclic Code Analysis:
• Generator Polynomial: Denoted as g(x).
• Syndrome: If non-zero, errors are detected; if zero, either no error occurred or the errors are undetectable.
• Error Detection: Error patterns whose polynomial is divisible by g(x) are not detected.
Advantages of Cyclic Codes:
• Error Detection and Correction: Efficiently identify and correct burst errors.
• Simple Implementation: Hardware-friendly with shift registers and XOR gates.
• Systematic Codes: Simplifies encoding by appending calculated redundancy.
• Cyclic Property: Any cyclic shift of a codeword is also a codeword.
• Polynomial Representation: Simplifies encoding and decoding processes.
• Wide Applicability: Used in data storage, communication systems, and digital signal processing.
Other Cyclic Codes:
• Hamming Codes: Correct single-bit errors and detect two-bit errors; used in memory and storage
systems.
• BCH Codes: Correct multiple errors; customizable for different applications.
• Reed-Solomon Codes: Correct burst errors; used in CDs, DVDs, and QR codes.
• Golay Codes: Correct up to three errors in a 24-bit codeword; used in deep space communication.
• CRC (Cyclic Redundancy Check) Codes: Primarily for error detection; used in network
communication and data storage.
• LDPC (Low-Density Parity-Check) Codes: Near-optimal error correction with low complexity;
used in Wi-Fi and digital television.
CHECKSUM
• Usage: Utilized by several Internet protocols, though not at the data link layer.
• Redundancy-Based: Like linear and cyclic codes, it uses redundancy for error detection.
• Trend: Moving towards CRC (Cyclic Redundancy Check) for improved error detection.
Concept
• Basic Idea: The checksum is the sum of data values, and it is sent along with the data.
• Process: The receiver sums the data and compares it to the checksum to verify integrity.
Examples
1. Simple Sum:
o Data: (7, 11, 12, 0, 6)
o Sent: (7, 11, 12, 0, 6, 36)
o Check: Receiver sums data; if 36, data is accepted; otherwise, rejected.
2. Complemented Sum:
o Data: (7, 11, 12, 0, 6)
o Sent: (7, 11, 12, 0, 6, -36)
o Check: Receiver sums data including -36; if 0, data is accepted; otherwise, rejected.
One's Complement Arithmetic
• Purpose: To handle sums that exceed bit length.
• Method: Wrap extra bits and add to lower bits.
• Example:
o Number 21: Binary 10101 -> Wrap and add: (0101 + 1) = 0110 (or 6).
o Negative 6: Positive 6 is 0110; invert to 1001 (or 9).
Internet Checksum
• 16-bit Checksum: Used traditionally in Internet protocols.
• Sender:
1. Divide message into 16-bit words.
2. Set checksum word to 0.
3. Sum all words using one's complement addition.
4. Complement sum to create checksum.
5. Send checksum with data.
• Receiver:
1. Divide message (including checksum) into 16-bit words.
2. Sum all words using one's complement addition.
3. Complement sum to create new checksum.
4. If checksum is 0, accept message; otherwise, reject.
Example Calculation
• Text: "Forouzan"
o Sender:
▪ Convert to ASCII and sum: F = 46, o = 6F, ... -> Final checksum: 7038 -> Send 7038.
o Receiver:
▪ Sum and complement: If result is 0, message is accepted.
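A runnable sketch of the sender/receiver procedure above; it reproduces the "Forouzan" example (checksum 0x7038). Padding an odd-length message with a zero byte is a simplifying assumption here.

```python
def ones_complement_sum16(words):
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # wrap any carry back into the sum
    return total

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:                               # assumption: pad odd lengths with a zero byte
        data += b"\x00"
    words = [int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data), 2)]
    return ~ones_complement_sum16(words) & 0xFFFF   # checksum = complement of the sum

data = b"Forouzan"
checksum = internet_checksum(data)
print(hex(checksum))                                # 0x7038

# Receiver: summing the data words plus the checksum and complementing gives 0
print(internet_checksum(data + checksum.to_bytes(2, "big")))   # 0
```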
Performance
• Strength: Uses 16 bits, suitable for software implementation.
• Limitations: Less effective than CRC, susceptible to certain errors (e.g., matched
increment/decrement errors, multiples of 65535).
• Improvements: Weighted checksums (Fletcher and Adler) address some issues but CRC is preferred
for newer protocols.
The checksum method provides a simple yet effective means of error detection, especially useful in
software-based systems, though it is gradually being replaced by the more robust CRC in modern
applications.
11. DATA LINK CONTROL
FRAMING
Physical Layer
• Function: Transmits bits as signals from source to destination.
• Synchronization: Ensures sender and receiver align on bit durations and timing.
Data Link Layer
• Framing: Packages bits into distinguishable frames, similar to placing letters in envelopes.
o Addresses: Each frame includes sender and destination addresses for proper delivery and
acknowledgment.
o Size Management: Avoids using a single large frame for a message to improve flow and
error control efficiency.
Types of Framing
1. Fixed-Size Framing:
o Characteristics: Frames have a constant size.
o Example: ATM network frames (cells).
2. Variable-Size Framing:
o Requirement: Needs delimiters to mark frame boundaries.
o Methods:
▪ Character-Oriented Protocols:
▪ Data: Consists of 8-bit characters.
▪ Delimiters: Use special characters as flags to denote frame start and end.
▪ Byte Stuffing: Adds an escape character (ESC) when the flag pattern appears
in data, to avoid misinterpretation.
▪ Unicode Issue: Conflicts with modern coding systems using 16-bit or 32-bit
characters.
▪ Bit-Oriented Protocols:
▪ Data: Sequence of bits for various types of information.
▪ Delimiters: Use an 8-bit pattern (01111110) as flags.
▪ Bit Stuffing: Adds an extra 0 after a sequence of 0 followed by five
consecutive 1s to prevent flag pattern confusion.
Examples
• Character-Oriented Protocol: Adds byte stuffing to escape flag and ESC characters that appear in the data.
• Bit-Oriented Protocol: Adds a 0 bit after a sequence of 011111 in the data to avoid misinterpretation of the
flag pattern. (Both techniques are sketched in code below.)
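A minimal sketch of both stuffing techniques. The flag byte 0x7E and escape byte 0x7D are conventional values used here only for illustration; this is not a complete framing implementation.

```python
FLAG, ESC = 0x7E, 0x7D          # illustrative flag and escape bytes

def byte_stuff(payload: bytes) -> bytes:
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)     # escape any flag/ESC byte that appears in the data
        out.append(b)
    return bytes([FLAG]) + bytes(out) + bytes([FLAG])   # delimit the frame with flags

def bit_stuff(bits: str) -> str:
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:           # after five consecutive 1s, insert a 0
            out.append("0")
            ones = 0
    return "01111110" + "".join(out) + "01111110"       # add the flag pattern at both ends

print(byte_stuff(b"AB\x7eC").hex())   # 7e41427d7e437e
print(bit_stuff("0111111111"))        # a 0 is inserted after the five consecutive 1s, flags added
```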
NOISY CHANNELS
The Stop-and-Wait Protocol introduces flow control but assumes a noiseless channel, which is unrealistic.
To handle errors, we use the Stop-and-Wait Automatic Repeat Request (ARQ) protocol, which adds error
control to the original protocol.
Stop-and-Wait ARQ Overview:
1. Error Detection and Correction:
o Frames are assigned redundancy bits for error detection.
o Corrupted frames are discarded silently by the receiver.
o Lost frames are managed by numbering the frames, which helps in identifying and
retransmitting lost or duplicated frames.
o The sender keeps a copy of each sent frame and uses a timer. If the acknowledgment (ACK)
is not received within the timeout period, the frame is resent.
2. Sequence Numbers:
o Frames are numbered using a modulo-2 system (0, 1). This helps distinguish between frames
and manage retransmissions.
o Sequence numbers wrap around but are sufficient to ensure correct frame identification and
ordering.
3. Acknowledgment (ACK) Numbers:
o ACKs indicate the next expected frame. For instance, if frame 0 is received, ACK 1 is sent,
signaling the expectation of frame 1 next.
4. Protocol Design:
o Sender Algorithm:
▪ Maintains a copy of the last sent frame.
▪ Starts a timer and resends the frame if no ACK is received within the timeout.
▪ Handles events like frame sending, ACK receiving, and timeouts.
o Receiver Algorithm:
▪ Accepts and processes frames with the expected sequence number.
▪ Sends an ACK for the next expected frame, even if the current frame is corrupted or
out of order, to handle potential ACK loss.
5. Efficiency:
o The Stop-and-Wait ARQ protocol can be inefficient in channels with high bandwidth-delay
products (large bandwidth and long delay). This is because the channel capacity is
underutilized while waiting for acknowledgments.
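To see how severe the waste can be, the rough calculation below assumes illustrative example values (a 1 Mbps link, 1000-bit frames, 20 ms one-way delay) and ignores the ACK transmission time.

# Rough Stop-and-Wait utilization: transmit one frame, then idle for one round trip.
bandwidth = 1_000_000          # 1 Mbps link (assumed example value)
frame_bits = 1_000             # 1000-bit frames (assumed example value)
prop_delay = 0.02              # 20 ms one-way propagation delay (assumed example value)

t_frame = frame_bits / bandwidth                 # 1 ms to put the frame on the wire
utilization = t_frame / (t_frame + 2 * prop_delay)
print(f"utilization = {utilization:.1%}")        # about 2.4%; most of the capacity sits idle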
Pipelining is a technique in data transmission where multiple tasks are initiated before previous ones are
completed, enhancing efficiency. However, Stop-and-Wait ARQ does not use pipelining as it waits for an
acknowledgment before sending the next frame. In contrast, Go-Back-N ARQ uses pipelining to allow
several frames to be sent before receiving acknowledgments, improving throughput when the bandwidth-
delay product is large.
Go-Back-N ARQ:
• Sequence Numbers: Frames are sequentially numbered within a defined range based on the bit
length of the sequence number field (e.g., with 4 bits, numbers range from 0 to 15).
• Sliding Window:
o Send Window: Represents the range of sequence numbers for frames that can be sent or are
already sent but not yet acknowledged. Its size is 2^m − 1, where m is the
number of bits in the sequence number field.
o Receive Window: A single-slot window that ensures frames are received and acknowledged
in order. Frames outside this window are discarded.
• Timers: A single timer is used to track the first outstanding frame. When this timer expires, all
outstanding frames are resent.
• Acknowledgments:
o The receiver sends positive acknowledgments (ACKs) for correctly received frames. If a
frame is damaged or out of order, the receiver remains silent until the correct frame is
received.
o A cumulative acknowledgment can be sent for several frames, and the sender resends all
outstanding frames if the timer expires.
• Window Size Limitation: The send window size must be less than 2^m to avoid ambiguities
in frame retransmission. For example, with a window size of 3, duplicates are handled correctly,
while a size of 4 could lead to errors if acknowledgments are lost.
Algorithms:
• Sender Algorithm:
1. Handle events like frame transmission, acknowledgment arrival, and timeout.
2. Resend all outstanding frames upon timeout.
3. Slide the window and manage frame sending based on acknowledgments and data
availability.
• Receiver Algorithm:
1. Discard corrupted or out-of-order frames.
2. Deliver data and send acknowledgment for the next expected frame.
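The sender-side bookkeeping can be sketched as below. The variable names Sf and Sn follow the usual textbook convention, there is no real channel or timer, and the class is only meant to show the window arithmetic, not a complete implementation.

class GoBackNSender:
    def __init__(self, m=3):
        self.m = m
        self.window = 2 ** m - 1      # send-window size, as stated above
        self.Sf = 0                   # first outstanding (unacknowledged) frame
        self.Sn = 0                   # next frame to send
        self.buffer = {}              # copies of outstanding frames

    def can_send(self):
        # the number of outstanding frames must stay below the window size
        return (self.Sn - self.Sf) % (2 ** self.m) < self.window

    def send(self, data):
        assert self.can_send()
        self.buffer[self.Sn] = data   # keep a copy for a possible resend
        print(f"send frame {self.Sn}")
        self.Sn = (self.Sn + 1) % (2 ** self.m)

    def ack(self, ack_no):
        # cumulative ACK: slide Sf up to ack_no, discarding acknowledged copies
        while self.Sf != ack_no:
            self.buffer.pop(self.Sf, None)
            self.Sf = (self.Sf + 1) % (2 ** self.m)

    def timeout(self):
        # resend every outstanding frame, starting from the first unacknowledged one
        seq = self.Sf
        while seq != self.Sn:
            print(f"resend frame {seq}")
            seq = (seq + 1) % (2 ** self.m)

s = GoBackNSender()
s.send("frame A"); s.send("frame B")
s.timeout()       # resends frames 0 and 1
s.ack(2)          # cumulative ACK: both frames acknowledged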
Selective Repeat Automatic Repeat Request (ARQ)
It is a protocol used to handle errors in data transmission over noisy channels. Here’s a summary:
1. Comparison with Go-Back-N ARQ:
o Go-Back-N ARQ: The receiver only tracks one variable and discards out-of-order frames. If
a frame is damaged, all subsequent frames are resent, which can be inefficient on noisy links.
o Selective Repeat ARQ: Only the damaged frame is resent, making it more efficient for noisy
channels. However, it requires more complex processing at the receiver.
2. Window Sizes:
o Send Window: Size is 2^(m−1), half of 2^m. For example, if m = 4, the window size
is 8 (compared to 15 in Go-Back-N ARQ).
o Receive Window: Same size as the send window. This allows for out-of-order frames to be
stored until they can be delivered in sequence.
3. Design:
o Sender: Handles sending frames, managing timers, and resending frames as needed. It uses a
sliding window mechanism to keep track of which frames have been sent and acknowledged.
o Receiver: Stores out-of-order frames until all preceding frames have arrived. It manages the
receive window and ensures frames are delivered in order to the network layer.
4. Algorithms:
o Sender Algorithm: Involves waiting for events, sending frames, handling acknowledgments
(ACKs) and negative acknowledgments (NAKs), and managing timeouts for retransmission.
o Receiver Algorithm: Handles incoming frames, sends ACKs or NAKs as appropriate, and
manages the delivery of frames to the network layer only when in order.
5. Timers:
o Each frame sent or resent requires a timer. Timers are used to handle timeouts and
retransmissions efficiently.
6. Example:
o Frame Loss: If a frame (e.g., frame 1) is lost, only that frame is resent once it is detected.
The receiver stores and waits for the missing frame before delivering the set of frames to the
network layer.
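A minimal sketch of the receiver-side buffering, with sequence-number wraparound omitted so the window logic stays visible; the function and variable names are illustrative only.

def sr_receiver(frames_in_arrival_order, window_size=8):
    expected = 0
    buffer = {}
    delivered = []
    for seq, data in frames_in_arrival_order:
        if expected <= seq < expected + window_size:
            buffer[seq] = data                 # accept anything inside the receive window
        while expected in buffer:              # deliver in-order data to the network layer
            delivered.append(buffer.pop(expected))
            expected += 1
    return delivered

# Frame 1 arrives late; frames 2 and 3 are buffered until it shows up.
print(sr_receiver([(0, "a"), (2, "c"), (3, "d"), (1, "b")]))   # ['a', 'b', 'c', 'd']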
PIGGYBACKING
Piggybacking Technique:
• Concept: Piggybacking is a method used to enhance the efficiency of bidirectional data transmission
by combining data frames with control information. Instead of sending separate frames for control
information (ACK/NAK) and data, a frame carrying data from node A to node B can also carry
control information about frames received by node B, and vice versa.
• Implementation: Each node maintains both a send window and a receive window. It uses these to
handle events like frame requests, arrivals, and time-outs, integrating control information and data
transmission in one frame whenever possible.
Go-Back-N ARQ with Piggybacking:
• Design: Each node operates with two windows (send and receive) and uses timers. The arrival event
must process both data and control information, making the algorithm complex.
12
MULTIPLE ACCESS
RANDOM ACCESS
• Overview: In random access (or contention) methods, all stations have equal rights to access the
communication medium without any control from other stations. Transmission is not scheduled, and
stations compete to access the medium. The decision to send data depends on the medium's state
(idle or busy) and follows predefined procedures to avoid collisions.
ALOHA Protocols:
1. ALOHA Basics:
o Developed at the University of Hawaii in the 1970s for wireless LANs.
o Pure ALOHA: Stations send frames whenever they have data, leading to potential collisions
if multiple stations send simultaneously. Collisions result in frames being destroyed or
corrupted. Stations use acknowledgments and back-off strategies to manage collisions.
2. Pure ALOHA Details:
o Collisions: Frames may overlap, causing collisions.
o Back-off: After a collision, stations wait a random time before retrying. The back-off time
increases with each collision attempt.
o Vulnerable Time: The time during which a collision can occur is 2 times the frame
transmission time.
o Throughput: Maximum throughput is 18.4%, reached at G = 1/2 and calculated as
S = G × e^(−2G), where G is the average number of frames generated per frame time.
3. Slotted ALOHA:
o Improvement: Time is divided into slots, and stations can only send at the start of a slot,
reducing the vulnerable time to one frame transmission time.
o Throughput: Maximum throughput is 36.8%, reached at G = 1 and calculated as S = G × e^(−G).
Throughput Examples:
• Pure ALOHA: Throughput decreases as the load increases.
o For 1000 frames/second: 13.5% throughput.
o For 500 frames/second: 18.4% throughput.
o For 250 frames/second: 15.2% throughput.
• Slotted ALOHA: Throughput is higher than pure ALOHA due to reduced collision chances.
o For 1000 frames/second: 36.8% throughput.
o For 500 frames/second: 30.3% throughput.
o For 250 frames/second: 19.5% throughput.
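These figures follow directly from the two formulas. The sketch below assumes, as in the usual textbook example, a frame time of 1 ms, so that 1000 frames/second corresponds to G = 1.

from math import exp

for frames_per_second in (1000, 500, 250):
    G = frames_per_second / 1000          # load in frames per frame time (assumed 1 ms frame time)
    pure = G * exp(-2 * G)                # S = G e^(-2G)
    slotted = G * exp(-G)                 # S = G e^(-G)
    print(f"G={G}: pure ALOHA {pure:.1%}, slotted ALOHA {slotted:.1%}")
# G=1.0: 13.5% / 36.8%   G=0.5: 18.4% / 30.3%   G=0.25: 15.2% / 19.5%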
Carrier Sense Multiple Access (CSMA)
It is a network protocol that reduces the likelihood of collisions by requiring each station to check if the
communication channel is idle before sending data. This approach is based on the principle "sense before
transmit" or "listen before talk."
Key Points:
1. Collision Reduction: CSMA aims to minimize collisions by making stations listen to the medium
before transmitting. However, collisions can still occur due to propagation delay, where a station
may detect the medium as idle even if another station's signal is not yet fully propagated.
2. Vulnerable Time: The vulnerable time in CSMA is the propagation time Tp, which is the
time it takes for a signal to travel from one end of the medium to the other. During this time, a
collision can occur if another station starts transmitting.
3. Persistence Methods:
o 1-Persistent: The station sends its frame immediately if it finds the channel idle. This method
has a higher chance of collision as multiple stations may send simultaneously.
o Nonpersistent: The station waits a random amount of time before rechecking the channel if it
is busy. This method reduces the chance of collision but can lower efficiency due to idle
periods.
o p-Persistent: Used in time-slotted systems, where a station sends with probability p or
waits for the next time slot with probability 1 − p. This method balances collision
avoidance and channel utilization (a sketch follows this list).
4. CSMA with Collision Detection (CSMA/CD):
o Collision Detection: In CSMA/CD, stations continue to monitor the channel while
transmitting. If a collision is detected, the transmission is aborted, and the station retries after
a random backoff period.
o Minimum Frame Size: To ensure collision detection, the frame size must be large enough
for the transmission to last at least twice the maximum propagation time.
5. Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA):
o Collision Avoidance: Used in wireless networks where collision detection is less effective.
CSMA/CA includes strategies like waiting for an Interframe Space (IFS), using a contention
window with randomized delays, and acknowledging received frames to avoid collisions and
handle data corruption.
6. Throughput: CSMA/CD generally offers better throughput compared to ALOHA methods, with the
maximum throughput varying depending on the persistence method and network conditions.
CSMA/CA, while reducing collisions in wireless networks, includes additional mechanisms to
ensure successful data transmission.
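A minimal sketch of the p-persistent decision loop, with the medium faked by a random idle/busy function; every name here is an illustrative stand-in, not part of any real MAC implementation.

import random

def channel_is_idle():
    return random.random() < 0.7        # pretend the channel is idle 70% of the time

def p_persistent_decision(p=0.1, max_slots=100):
    """Return the slot number in which the station finally transmits."""
    for slot in range(max_slots):
        if not channel_is_idle():
            continue                    # busy: keep sensing, slot after slot
        if random.random() < p:
            return slot                 # idle, and the coin came up "send"
        # otherwise defer one slot (probability 1 - p) and repeat;
        # backoff after a busy re-check is omitted in this sketch
    return None

print("transmitted in slot", p_persistent_decision())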
CONTROLLED ACCESS
1. Reservation:
• Time is divided into intervals with a reservation frame at the start.
• Each station has a minislot in the reservation frame.
• Stations must make a reservation in their designated minislot to send data in that interval.
• Stations that make reservations are allowed to send data frames after the reservation frame.
2. Polling:
• Utilizes a primary device that controls the channel and secondary devices that respond to it.
• Select Function: The primary device notifies a secondary device that it has data to send and waits
for acknowledgment.
• Poll Function: The primary device queries secondary devices to check if they have data to send,
responding with NAK if not, or data and acknowledgment if yes.
3. Token Passing:
• Stations are organized in a logical ring with a token circulating among them.
• Possession of the token grants the right to access the channel and send data.
• After sending data, the station passes the token to the next station.
• Token management includes handling token loss, assigning priorities, and ensuring token release by
low-priority stations.
Physical Topologies for Token Passing:
• Physical Ring: Stations are physically connected in a ring. A failure in the medium can disrupt the
network.
• Dual Ring: Uses two rings (one for normal operation and one as backup) to provide redundancy and
improve reliability.
• Bus Ring: Stations are connected to a single bus but create a logical ring. The Token Bus LAN uses
this topology.
• Star Ring: A central hub acts as a connector with internal wiring forming a logical ring. This
topology enhances reliability and simplifies station management.
CHANNELIZATION
Channelization is a method for dividing the available bandwidth of a communication link among multiple
stations using time, frequency, or code. The main channelization protocols are:
1. Frequency-Division Multiple Access (FDMA):
o Concept: The total bandwidth is divided into separate frequency bands. Each station is
assigned a unique band for continuous data transmission.
o Usage: Each station's data is transmitted in its designated band, separated by guard bands to
prevent interference.
o Difference from FDM: FDMA is a data link layer method for access, whereas Frequency-
Division Multiplexing (FDM) is a physical layer technique for combining multiple signals
into a higher-bandwidth channel.
2. Time-Division Multiple Access (TDMA):
o Concept: Bandwidth is shared over time. Each station is allocated a specific time slot during
which it can transmit data.
o Synchronization: Stations must synchronize to ensure accurate slot timing. Guard times may
be added to account for propagation delays.
o Difference from TDM: TDMA is a data link layer method for channel access, while Time-
Division Multiplexing (TDM) is a physical layer technique for combining multiple signals
into one.
3. Code-Division Multiple Access (CDMA):
o Concept: All stations share the entire bandwidth simultaneously using unique codes. Data
from each station is encoded with a code, allowing simultaneous transmission over the same
channel.
o Encoding and Decoding: Stations use orthogonal codes to encode data, which are then
combined on the channel. Receivers use the same code to decode and retrieve data from other
stations.
o Code Properties: Codes are designed to be orthogonal, meaning their inner product is zero
when codes are different, ensuring minimal interference.
Each method optimizes bandwidth use in different ways, and their applications extend to technologies like
cellular phone systems.
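To make the CDMA encoding and decoding concrete, the sketch below uses 4-chip Walsh codes; the stations' data bits are illustrative values, and a silent station contributes 0.

# Each station multiplies its data bit (+1 for 1, -1 for 0, 0 if silent) by its
# orthogonal code; the chips add up on the channel; a receiver recovers one
# station's bit by taking the inner product with that station's code.
codes = [
    [+1, +1, +1, +1],
    [+1, -1, +1, -1],
    [+1, +1, -1, -1],
    [+1, -1, -1, +1],
]

bits = [+1, -1, 0, +1]   # station 1 sends 1, station 2 sends 0, station 3 is silent, station 4 sends 1

# What actually travels on the shared channel: the chip-by-chip sum.
channel = [sum(b * c[i] for b, c in zip(bits, codes)) for i in range(4)]

# Decoding station 2: inner product with its code, divided by the code length.
decoded = sum(x * y for x, y in zip(channel, codes[1])) / 4
print(channel, decoded)   # decoded is -1, i.e. station 2 sent a 0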
13
WIRED LANS: ETHERNET
IEEE STANDARDS
In 1985, the IEEE Computer Society initiated Project 802 to establish standards for interoperability among
equipment from different manufacturers. Project 802 focuses on defining functions for the physical and data
link layers of LAN protocols, rather than replacing existing OSI or Internet models.
Key Points of Project 802:
• Adoption: The standard was adopted by ANSI in 1987 and later by ISO as ISO 8802.
• Data Link Layer Division: Project 802 subdivides the data link layer into two sublayers:
o Logical Link Control (LLC): Handles framing, flow control, and error control. It provides a
unified protocol across different LAN technologies, making it possible to interconnect
various LANs.
o Media Access Control (MAC): Defines specific access methods for each LAN type (e.g.,
CSMA/CD for Ethernet, token-passing for Token Ring). It also handles part of the framing
function.
Physical Layer:
• The physical layer specifications vary depending on the LAN implementation and type of media
used.
Overall, Project 802 aims to ensure compatibility and interoperability between different LAN technologies
by clearly defining standards for the data link and physical layers.
STANDARD ETHERNET
The original Ethernet, developed in 1976 at Xerox PARC, has evolved through four generations: Standard
Ethernet (10 Mbps), Fast Ethernet (100 Mbps), Gigabit Ethernet (1 Gbps), and Ten-Gigabit Ethernet (10
Gbps). Here’s a summary of Standard Ethernet:
Generations:
1. Standard Ethernet (10 Mbps)
2. Fast Ethernet (100 Mbps)
3. Gigabit Ethernet (1 Gbps)
4. Ten-Gigabit Ethernet (10 Gbps)
MAC Sublayer:
• Function: Manages access methods, frames data from the upper layer, and passes it to the physical
layer.
• Frame Format: Consists of seven fields:
o Preamble: 56 bits for synchronization.
o Start Frame Delimiter (SFD): 1 byte indicating the start of the frame.
o Destination Address (DA): 6 bytes.
o Source Address (SA): 6 bytes.
o Length/Type: Indicates the protocol type or length of the data.
o Data: 46 to 1500 bytes.
o CRC: Error-checking information (CRC-32).
Frame Length:
• Minimum: 64 bytes (512 bits).
• Maximum: 1518 bytes (12,144 bits).
Addressing:
• Format: 6-byte (48-bit) addresses.
• Types:
o Unicast: Single recipient.
o Multicast: Group of recipients.
o Broadcast: All stations on the LAN.
Access Method:
• CSMA/CD: Used for managing access to the network medium. The slot time for detecting collisions
is set to 512 bits, allowing collision detection within this time frame.
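A quick check of how the 512-bit slot time and the 64-byte minimum frame fit together at 10 Mbps (simple arithmetic, no protocol details):

rate = 10_000_000                     # Standard Ethernet, 10 Mbps
slot_bits = 512                       # slot time expressed in bit times
slot_time = slot_bits / rate          # 51.2 microseconds
min_frame_bytes = slot_bits // 8      # 64 bytes: a frame must last at least one slot time
print(slot_time, min_frame_bytes)     # 5.12e-05 s, 64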
Physical Layer Implementations:
1. 10Base5 (Thick Ethernet): Uses thick coaxial cable with a maximum length of 500 meters.
2. 10Base2 (Thin Ethernet): Uses thinner coaxial cable with a maximum length of 185 meters.
3. 10Base-T (Twisted Pair Ethernet): Uses twisted-pair cables with a maximum length of 100 meters.
4. 10Base-F (Fiber Ethernet): Uses fiber-optic cables with a maximum length of 2000 meters.
Encoding and Decoding:
• All implementations use Manchester encoding for digital signaling.
Each implementation provides different physical media and network topologies, including coaxial cables,
twisted pairs, and fiber optics.
FAST ETHERNET
• Objective: Fast Ethernet, standardized as IEEE 802.3u, was developed to offer data rates 10 times
faster than Standard Ethernet, reaching 100 Mbps. It aims to be backward-compatible while
maintaining the same 48-bit address, frame format, and frame length as Standard Ethernet.
MAC Sublayer:
• Topology: Fast Ethernet supports star topology, either with half-duplex connections via a hub or
full-duplex connections via a switch with buffers.
• CSMA/CD: The CSMA/CD protocol is retained for backward compatibility in half-duplex mode,
but not needed in full-duplex mode.
Auto-negotiation:
• Function: Allows devices to negotiate their capabilities and data rates, facilitating compatibility
between different devices and enabling auto-adjustment based on capabilities.
Physical Layer:
• Topology: Fast Ethernet connects stations using point-to-point for two devices or star topology for
three or more devices.
• Implementations:
o 100Base-TX: Uses two pairs of category 5 UTP cables, employing MLT-3 encoding with
4B/5B block coding.
o 100Base-FX: Uses two fiber-optic cables, employing NRZ-I encoding with 4B/5B block
coding.
o 100Base-T4: Uses four pairs of category 3 UTP cables, employing 8B/6T encoding to handle
the lower bandwidth of category 3 cables.
GIGABIT ETHERNET
1. Goals:
o Upgrade data rate to 1 Gbps.
o Ensure compatibility with Standard and Fast Ethernet.
o Maintain the same 48-bit address, frame format, and frame lengths.
o Support autonegotiation as defined in Fast Ethernet.
2. MAC Sublayer:
o Full-Duplex: Uses a central switch with buffers at each port, eliminating collisions and
making the maximum cable length dependent on signal attenuation.
o Half-Duplex: Less common; uses CSMA/CD. Three methods to handle maximum network
length:
▪ Traditional: Minimum frame size of 512 bits results in a maximum length of 25 m.
▪ Carrier Extension: Increases minimum frame size to 512 bytes, extending maximum
length to 200 m.
▪ Frame Bursting: Multiple frames are sent with padding, making it appear as one
large frame to improve efficiency.
3. Physical Layer:
o Topologies: Supports point-to-point and star topologies, including hierarchical star
configurations.
o Implementations:
▪ 1000Base-SX: Short-wave fiber.
▪ 1000Base-LX: Long-wave fiber.
▪ 1000Base-CX: Shielded twisted pair (STP).
▪ 1000Base-T: Category 5 twisted-pair (UTP).
o Encoding:
▪ 1000Base-SX, LX, CX: Uses NRZ encoding with 8B/10B block encoding.
▪ 1000Base-T: Uses 4D-PAM5 encoding for four twisted pairs of UTP.
4. Summary:
o 1000Base-SX: Fiber, 2 wires, 550m max length.
o 1000Base-LX: Fiber, 2 wires, 5000m max length.
o 1000Base-CX: STP, 2 wires, 25m max length.
o 1000Base-T: UTP, 4 wires, 100m max length.
TEN-GIGABIT ETHERNET (IEEE 802.3AE)
1. Goals:
o Upgrade data rate to 10 Gbps.
o Ensure compatibility with Standard, Fast, and Gigabit Ethernet.
o Maintain the same 48-bit address, frame format, and frame lengths.
o Facilitate interconnection of LANs into MANs or WANs.
o Compatibility with Frame Relay and ATM technologies.
2. MAC Sublayer:
o Operates only in full-duplex mode, eliminating the need for CSMA/CD.
3. Physical Layer:
o Designed for fiber-optic cables over long distances.
o Implementations:
▪ 10GBase-S: Short-wave fiber, 300m max length.
▪ 10GBase-L: Long-wave fiber, 10 km max length.
▪ 10GBase-E: Extended-range fiber, 40 km max length.
14
WIRELESS LANS
IEEE 802.11
IEEE 802.11 defines wireless LANs, covering both physical and data link layers. It includes two main
service types: Basic Service Set (BSS) and Extended Service Set (ESS).
Basic Service Set (BSS)
• Ad Hoc Network: A BSS without an access point (AP) where stations connect directly without
infrastructure.
• Infrastructure Network: A BSS with an AP that provides centralized connectivity and network
access.
Extended Service Set (ESS)
• Comprises multiple BSSs interconnected through a distribution system (usually wired, like Ethernet).
• Allows mobility across BSSs within the ESS, similar to cellular networks with cells and base
stations.
Station Types
• No-Transition: Station stays within a single BSS.
• BSS-Transition: Station moves between BSSs within one ESS.
• ESS-Transition: Station moves between different ESSs.
MAC Sublayer
• Distributed Coordination Function (DCF): Uses CSMA/CA for collision avoidance due to
challenges like hidden stations and signal fading.
• Point Coordination Function (PCF): Optional, used in infrastructure networks for time-sensitive
transmissions with polling by the AP.
Frame Exchange
• CSMA/CA: Involves sensing the medium, sending RTS/CTS control frames to avoid collisions, and
acknowledging received frames.
Fragmentation
• Frames are fragmented into smaller parts to improve reliability and efficiency in noisy environments.
Frame Format
• Consists of fields like Frame Control, Addresses, Sequence Control, Frame Body, and Frame Check
Sequence (FCS).
Frame Types
• Management Frames: For initial communications and network management.
• Control Frames: For channel access and acknowledgments.
• Data Frames: For carrying actual data.
Addressing Mechanism
• Frames use up to four addresses based on the direction of the frame (to/from distribution system) and
the type of communication (e.g., station-to-station, station-to-AP).
Hidden and Exposed Station Problems
• Hidden Station Problem: Occurs when two stations cannot hear each other but can communicate
with a common station, leading to collisions.
• Exposed Station Problem: A station refrains from sending data when it could do so without
interfering with others' transmissions.
Physical Layer Specifications
• IEEE 802.11: Uses FHSS or DSSS with data rates up to 2 Mbps in the 2.4 GHz band.
• IEEE 802.11a: Uses OFDM in the 5 GHz band with data rates from 6 to 54 Mbps.
• IEEE 802.11b: Enhances DSSS in the 2.4 GHz band, supporting rates up to 11 Mbps.
• IEEE 802.11g: Combines DSSS and OFDM in the 2.4 GHz band, supporting rates up to 54 Mbps.
BLUETOOTH
Bluetooth is a wireless communication technology designed for short-range networking between various
devices such as phones, computers, printers, cameras, and even home appliances like coffee makers. It
enables the creation of a temporary, ad hoc network called a piconet. This piconet is formed spontaneously
as devices discover and connect with each other. Bluetooth networks are generally small and limited in
scale, handling at most eight active devices.
Key Concepts and Architecture:
1. Bluetooth Networks:
o Piconet: This is the basic unit of a Bluetooth network. A piconet consists of one primary device
and up to seven secondary devices. The primary device coordinates communication, while
secondaries synchronize their clocks and hopping sequences with it. A piconet can also include
up to eight additional devices in a parked state, which are not active but are synchronized with
the primary device.
o Scatternet: Multiple piconets can be interconnected to form a scatternet. In this setup, a
secondary device in one piconet can act as a primary in another piconet, enabling communication
between devices across different piconets. This allows for greater flexibility in connecting
devices but introduces complexity.
2. Bluetooth Devices:
o Devices use a 2.4 GHz ISM band and can transmit data at rates up to 1 Mbps. The frequency-
hopping spread spectrum (FHSS) technique is used to minimize interference from other devices
operating on the same band. Bluetooth devices hop between 79 channels at a rate of 1600 times
per second.
3. Bluetooth Architecture:
o Radio Layer: The radio layer is akin to the physical layer in the Internet model. It handles the
transmission of radio signals and operates at a frequency of 2.4 GHz using Gaussian Frequency
Shift Keying (GFSK) modulation.
o Baseband Layer: This layer manages the time-division multiplexing of data through Time
Division Multiple Access (TDMA). The communication in a piconet is organized into time slots,
and the length of these slots determines the timing for data transmission. The communication can
be between the primary and a secondary device, but not directly between secondary devices.
4. Communication Modes:
o Synchronous Connection-Oriented (SCO) Links: These are used where minimizing latency is
crucial, such as in real-time audio applications. SCO links reserve specific time slots for
communication but do not guarantee data integrity. If a packet is damaged, it is not retransmitted.
o Asynchronous Connection-Less (ACL) Links: These are used when data integrity is more
important than latency. ACL links allow for retransmission of corrupted data and can achieve
higher data rates. They handle error correction and data retransmission to ensure reliability.
5. Frame Formats:
o One-Slot Frame: Uses one time slot and can carry up to 366 bits of data.
o Three-Slot Frame: Occupies three time slots and can carry up to 1616 bits of data.
o Five-Slot Frame: Uses five time slots, allowing for up to 2866 bits of data. Each frame type has
specific lengths and uses slots for different types of data transmission.
6. L2CAP (Logical Link Control and Adaptation Protocol):
o The L2CAP layer provides services for data exchange, including multiplexing, segmentation and
reassembly, quality of service (QoS), and group management.
o Multiplexing: L2CAP handles multiple data streams from higher layers, framing them for
transmission and directing them to the appropriate baseband layer channels.
o Segmentation and Reassembly: Large data packets from higher layers are segmented into
smaller pieces to fit the baseband layer's frame size and then reassembled at the receiving end.
o QoS: Bluetooth supports quality-of-service levels, ensuring that data transmission meets the
required performance criteria.
o Group Management: Allows for the creation of logical groups of devices that can communicate
together, similar to multicasting.
15
CONNECTING LANS, BACKBONE NETWORKS, AND
VIRTUAL LANS
CONNECTING DEVICES
1. Devices Below the Physical Layer:
o Passive Hubs: Simply connect wires from different branches without processing signals.
They act as collision points in a star topology.
2. Devices at the Physical Layer:
o Repeaters: Regenerate and amplify weak or corrupted signals to extend the reach of a
network segment. They connect segments of the same LAN but not different LANs.
3. Devices at the Physical and Data Link Layers:
o Bridges: Regenerate signals and filter traffic based on MAC addresses. They connect and
filter between network segments to reduce unnecessary traffic.
4. Devices at the Physical, Data Link, and Network Layers:
o Routers: Direct packets based on IP addresses and connect different LANs or WANs. They
use dynamic routing tables updated by routing protocols.
5. Devices at All Five Layers:
o Gateways: Operate across all layers, translating messages between different network
architectures (e.g., OSI to Internet models) and providing additional functionalities like
security.
Detailed Device Functions:
• Passive Hubs: Connect wires without processing; function below the physical layer.
• Repeaters: Extend LAN length by regenerating signals. They are not used for interconnecting
different LANs or protocols.
• Active Hubs: Act as multi-port repeaters in a star topology, creating multiple hierarchical levels.
• Bridges: Operate at both the physical and data link layers. They regenerate signals and make
forwarding decisions based on MAC addresses, filtering traffic and learning addresses to improve
efficiency.
• Transparent Bridges: These are unaware of their presence in the network. They automatically build
forwarding tables and prevent loops using the Spanning Tree Protocol (STP), which maintains a
loop-free topology.
• Source Routing Bridges: Specify routes through the network within the frame itself, used primarily
in Token Ring LANs.
• Two-Layer Switches: Combine the functionality of a bridge with higher performance due to
multiple ports and faster frame processing.
• Three-Layer Switches: Function as advanced routers with improved performance and faster
switching capabilities, handling both routing and switching tasks.
• Gateways: Operate at all layers to interconnect networks using different protocols, handling
complex translations and security tasks.
BACKBONE NETWORKS
In this chapter, various connecting devices are used to create a backbone network that links multiple LANs
(Local Area Networks). The backbone network itself is a LAN that connects other LANs, with each
connection forming a separate LAN.
Backbone Network Architectures:
1. Bus Backbone:
o Topology: Linear bus configuration.
o Protocols: Examples include 10Base5 or 10Base2.
o Usage: Typically used to connect buildings or multiple LANs on a campus.
o Example: Connecting LANs in single- or multi-floor buildings. Each building's LAN
connects through a bus backbone, which links different buildings or floors.
2. Star Backbone:
o Topology: Star configuration with a central switch.
o Usage: Commonly used within buildings to connect LANs on different floors.
o Example: A switch acts as the backbone, connecting various LANs throughout the building.
The switch may be located in a central location like a basement or main floor.
Connecting Remote LANs:
• Remote Bridges: Used to connect LANs over long distances, often through leased lines or ADSL.
These bridges connect LANs via point-to-point links, functioning as a LAN without stations.
Overall, the backbone network allows for efficient interconnection of LANs within and between buildings,
using either bus or star topologies based on the specific needs and layout of the network.
VIRTUAL LANS
Virtual Local Area Network (VLAN) is a software-based method for creating logical LANs, allowing for
virtual rather than physical segmentation of network stations. Here’s a summary:
Concept of VLANs:
• Definition: VLANs are local area networks configured by software rather than physical wiring.
• Example: In an engineering firm with multiple workgroups, VLANs enable logical grouping of
stations across different physical LANs without the need for physical reconfiguration.
Advantages of VLANs:
1. Flexibility: VLANs allow easy reconfiguration by software, avoiding the need for physical network
changes when stations are moved between groups.
2. Broadcast Domains: VLANs create separate broadcast domains, so members of a VLAN only
receive broadcast messages intended for their VLAN.
3. Remote Grouping: VLANs enable grouping of stations across different switches and locations, like
in multi-building setups.
Configuration Methods:
• Manual: Administrators manually assign stations to VLANs.
• Automatic: VLAN membership is automatically assigned based on predefined criteria.
• Semiautomatic: Initial setup is manual, but changes are automatic.
Communication Between Switches:
• Table Maintenance: Switches maintain and periodically exchange tables showing VLAN
memberships.
• Frame Tagging: Frames are tagged with VLAN information as they travel between switches.
• Time-Division Multiplexing (TDM): Switch trunk connections are divided into time-shared
channels for different VLANs.
IEEE Standard:
• 802.1Q: Defines frame tagging format and standards for VLANs, enabling compatibility across
different vendors’ equipment.
Benefits:
• Cost and Time Reduction: Easier and cheaper to move stations between VLANs compared to
physical reconfigurations.
• Virtual Work Groups: Facilitates communication among members of virtual teams regardless of
their physical location.
• Security: Ensures that broadcast messages are only received by intended VLAN members,
enhancing network security.
16
WIRELESS WANS: CELLULAR TELEPHONE AND
SATELLITE NETWORKS
CELLULAR TELEPHONY
Cellular telephony allows communication between mobile units or between a mobile and a stationary unit. It
involves tracking callers, assigning channels, and managing handoffs as users move.
Cell Structure:
• Cells: Service areas are divided into cells, each with an antenna and a base station (BS) controlled by
a mobile switching center (MSC).
• Base Station (BS): Manages communication within its cell.
• Mobile Switching Center (MSC): Coordinates between base stations and the telephone central
office, handling call connections, information recording, and billing.
Frequency Reuse:
• Cells use different frequency sets to avoid interference. Frequencies are reused in non-adjacent cells
based on patterns with reuse factors (e.g., 4 or 7).
Call Handling:
• Placing a Call: The mobile station scans for a strong signal, sends the phone number to the nearest
base station, which forwards it to the MSC and then to the telephone central office.
• Receiving a Call: The MSC locates the mobile station through paging, sends a ringing signal, and
assigns a voice channel.
Handoff:
• Hard Handoff: Communication is broken with the old base station before establishing a new
connection.
• Soft Handoff: Allows simultaneous communication with two base stations to ensure smooth
transitions.
Roaming:
• Users can access services in different coverage areas through agreements between service providers.
Generations of Cellular Technology:
• First Generation (1G): Analog systems like AMPS for voice communication.
• Second Generation (2G): Digital systems including D-AMPS (TDMA), GSM (TDMA), and IS-95
(CDMA).
o D-AMPS: Evolved from AMPS with digital voice and TDMA.
o GSM: Uses TDMA with digital voice and operates in the 900 MHz band.
o IS-95: Uses CDMA and DSSS with a frequency reuse factor of 1.
• Third Generation (3G): Aims to provide high-quality voice and data services, including internet
access. Standards include IMT-2000, evolving from 2G technologies.
Each generation introduces improvements in voice quality, data rate, and service capabilities.
SATELLITE NETWORKS
• Definition: Satellite networks consist of nodes such as satellites, Earth stations, and user terminals,
facilitating communication between any two points on Earth.
• Preferred Satellites: Artificial satellites are used over natural ones (like the Moon) because they can
be equipped with electronic equipment to regenerate signals and manage communication delays.
Orbits:
• Types: Satellites can have equatorial, inclined, or polar orbits.
• Kepler’s Law: Determines the orbital period based on distance from Earth's center. For example,
satellites in geosynchronous orbit (35,786 km above Earth) have a 24-hour period, matching Earth's
rotation.
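Assuming the commonly used approximation Period ≈ (1/100) × d^1.5 (period in seconds, d the distance from the Earth's centre in kilometres) and an Earth radius of about 6,378 km, the geosynchronous figure can be checked:

earth_radius = 6378                       # km (assumed constant)
altitude = 35_786                         # km above the surface
d = earth_radius + altitude               # distance from the Earth's centre
period_s = (d ** 1.5) / 100               # approximate form of Kepler's law
print(period_s / 3600)                    # roughly 24 hours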
Footprint:
• Definition: The area a satellite’s signal covers. The signal strength is strongest at the center of the
footprint and diminishes towards the edges.
Categories of Satellites:
1. Geostationary Earth Orbit (GEO):
o Altitude: ~35,786 km.
o Characteristics: Satellites match Earth's rotation, appearing fixed over a spot. At least three
satellites are needed for global coverage.
2. Medium-Earth Orbit (MEO):
o Altitude: Between 5,000 and 15,000 km.
o Example: GPS satellites, which take 6-8 hours to orbit Earth and use trilateration for location
determination.
3. Low-Earth Orbit (LEO):
o Altitude: Between 500 and 2,000 km.
o Characteristics: Quick orbit (90-120 minutes), low propagation delay, and often used for
cellular-type access.
o Types:
▪ Little LEOs: Operate under 1 GHz, for low-data-rate messaging.
▪ Big LEOs: Operate between 1-3 GHz, examples include Globalstar and Iridium.
▪ Broadband LEOs: Provide high-speed communication, like Teledesic.
Frequency Bands:
• Ranges: Various bands are used for different types of communication, from L-band (1.5 GHz) to
Ka-band (20-30 GHz).
Notable Satellite Systems:
• Iridium: Uses 66 LEO satellites for global communication, providing voice, data, and navigation
services.
• Globalstar: Uses 48 LEO satellites for communication, requiring both satellites and Earth stations.
• Teledesic: Aims to provide high-speed broadband internet using 288 LEO satellites, with data rates
up to 1.2 Gbps.
17
SONET/SDH
ARCHITECTURE
• Signals: Types and structure of signals in SONET/SDH.
• SONET Devices: Devices used in SONET networks.
• Connections: Connections in SONET networks.
SONET LAYERS
The SONET (Synchronous Optical Network) standard includes four functional layers that correspond to
both the physical and data link layers: photonic, section, line, and path layers.
Path Layer
• Responsibility: Manages the movement of signals from the optical source to the optical destination.
• Functions: Converts signals from electronic to optical form, multiplexes them with other signals,
and encapsulates them in frames. At the destination, it demultiplexes the frames and converts them
back to electronic form.
• Devices: STS multiplexers provide path layer functions.
• Overhead: Path layer overhead is added to the frame at this layer.
Line Layer
• Responsibility: Handles the movement of signals across a physical line.
• Functions: Adds line layer overhead to the frame.
• Devices: STS multiplexers and add/drop multiplexers.
• Overhead: Line layer overhead is added to the frame at this layer.
Section Layer
• Responsibility: Manages the movement of signals across a physical section.
• Functions: Handles framing, scrambling, and error control.
• Overhead: Section layer overhead is added to the frame at this layer.
Photonic Layer
• Responsibility: Corresponds to the physical layer of the OSI model.
• Functions: Includes physical specifications for the optical fiber channel, sensitivity of the receiver,
and multiplexing functions.
• Encoding: Uses NRZ (Non-Return to Zero) encoding with the presence of light representing 1 and
the absence of light representing 0.
Device-Layer Relationships
• STS Multiplexer: A four-layer device.
• Add/Drop Multiplexer: A three-layer device.
• Regenerator: A two-layer device.
SONET FRAMES
• Format: Each synchronous transfer signal (STS) is composed of 8000 frames per second, with each
frame being a two-dimensional matrix of bytes (9 rows by 90 x n columns).
• Transmission: Bytes are transmitted from left to right, top to bottom. Each byte's bits are
transmitted from most significant to least significant.
Example Data Rates
• STS-1: 51.840 Mbps
• STS-3: 155.52 Mbps
• Frame Duration: 125 µs for any STS-n frame.
STS-1 Frame Format
• Structure: 9 rows by 90 columns (810 bytes), with the first three columns reserved for section and
line overhead, and the rest for the synchronous payload envelope (SPE) containing user data and path
overhead.
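The STS-1 rate follows directly from this geometry; the short calculation below just multiplies it out.

frames_per_second = 8000
bytes_per_frame = 9 * 90                       # 9 rows x 90 columns = 810 bytes
rate = frames_per_second * bytes_per_frame * 8
print(rate)                                    # 51,840,000 bits/s = 51.840 Mbps
# An STS-3 frame has 9 rows x 270 columns, tripling this to 155.52 Mbps.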
Overheads
1. Section Overhead: Consists of 9 bytes used for framing, synchronization, error checking, and
management.
2. Line Overhead: Consists of 18 bytes for error checking, management, automatic protection
switching, and pointers.
3. Path Overhead: Contains 9 bytes related to the user data, including error checking, signal labeling,
and status information.
Key Overhead Functions
• Alignment Bytes (A1, A2): Used for framing and synchronization.
• Parity Bytes (B1, B2, B3): Used for bit interleaved parity.
• Management Bytes (D1-D12): Used for operation, administration, and maintenance.
• Order Wire Byte (E1, E2): Communication between devices.
• Pointers (H1, H2, H3): Indicate the offset of the SPE in the frame.
• Automatic Protection Switching (K1, K2): For detecting problems in line-terminating equipment.
• Path Trace Byte (J1): Used for tracking the path.
Encapsulation
• Offsetting: Allows an SPE to span two frames using pointers (H1 and H2).
• Justification: Manages slight differences in transmission rates by inserting extra bytes or leaving
bytes empty using the H3 byte.
STS MULTIPLEXING
1. Multiplexing and Demultiplexing:
• Lower-rate frames are synchronously time-division multiplexed (TDM) into higher-rate frames.
• Example: Three STS-1 signals combine into one STS-3 signal.
• Synchronization is achieved using a master clock.
2. Byte Interleaving:
• Synchronous TDM in SONET uses byte interleaving.
• Example: Multiplexing three STS-1 signals into one STS-3 signal, with each set of 3 bytes in the
STS-3 signal corresponding to 1 byte from each STS-1 signal.
• Byte positions in STS-1 frames are maintained but shifted to different columns.
• Section and line overhead bytes are also interleaved, simplifying demultiplexing.
3. Higher Rate Multiplexing:
• Example: Four STS-3 signals are first demultiplexed into 12 STS-1 signals, which are then
multiplexed into an STS-12 signal.
4. Concatenated Signals:
• STS-n signals can be concatenated to form a single channel (e.g., STS-3c), which cannot be
demultiplexed into lower-rate signals.
• These are used for higher data rates and have a single path overhead.
5. Add/Drop Multiplexer (ADM):
• ADMs replace signals within an STS-n signal without traditional demultiplexing/multiplexing.
• Operates at the line layer, allowing insertion or removal of STS-1 signals within an STS-n stream.
SONET NETWORKS
1. Linear Networks:
• Point-to-Point Networks:
o Consist of an STS multiplexer, STS demultiplexer, and regenerators without ADMs.
o Can be unidirectional or bidirectional.
• Multipoint Networks:
o Use ADMs for communication between multiple terminals.
o ADMs remove and add signals belonging to connected terminals.
2. Automatic Protection Switching (APS):
• Provides redundancy to protect against failures.
• One-plus-One APS:
o Two lines (working and protection), both active.
o Receiver chooses the line with better quality.
• One-to-One APS:
o One working line and one protection line.
o Protection line used only upon failure.
• One-to-Many APS:
o One protection line for multiple working lines.
o Protection line replaces only one failed working line at a time.
3. Ring Networks:
• Unidirectional Path Switching Ring (UPSR):
o Two rings: one working and one protection.
o Data flows in both rings; receiver selects the better quality signal.
o Fast failure recovery but bandwidth-inefficient.
• Bidirectional Line Switching Ring (BLSR):
o Four rings: two working and two protection.
o Provides bidirectional communication.
o Efficient and robust failure recovery.
4. Mesh Networks:
• More scalable than ring networks.
• Use cross-connect switches for handling multiplexing and demultiplexing.
• Provide better performance for high-traffic networks.
VIRTUAL TRIBUTARIES
Purpose:
• SONET (Synchronous Optical Network) is designed to carry broadband payloads.
• Current digital hierarchy data rates (DS-1 to DS-3) are lower than STS-1.
• Virtual tributaries (VTs) are included in SONET to make it backward-compatible with the current
digital hierarchy.
Virtual Tributaries (VTs):
• A VT is a partial payload inserted into an STS-1 and combined with other partial payloads to fill the
frame.
• The Synchronous Payload Envelope (SPE) of an STS-1 can be subdivided into VTs.
Types of VTs:
1. VT1.5:
o Accommodates the U.S. DS-1 service (1.544 Mbps).
o Uses 3 columns in the STS-1 frame.
o Data rate: 1.728 Mbps.
2. VT2:
o Accommodates the European CEPT-1 service (2.048 Mbps).
o Uses 4 columns in the STS-1 frame.
o Data rate: 2.304 Mbps.
3. VT3:
o Accommodates the DS-1C service (fractional DS-1, 3.152 Mbps).
o Uses 6 columns in the STS-1 frame.
o Data rate: 3.456 Mbps.
4. VT6:
o Accommodates the DS-2 service (6.312 Mbps).
o Uses 12 columns in the STS-1 frame.
o Data rate: 6.912 Mbps.
Interleaving:
• When multiple tributaries are inserted into a single STS-1 frame, they are interleaved column by
column.
• SONET provides mechanisms for identifying each VT and separating them without demultiplexing
the entire stream.
Note:
• Detailed mechanisms and control issues for VTs are beyond the scope of this summary.
18
VIRTUAL-CIRCUIT NETWORKS: FRAME RELAY AND ATM
FRAME RELAY
• Frame Relay is a virtual-circuit WAN developed in response to the need for a new type of WAN in
the late 1980s and early 1990s.
• It addresses the limitations of the X.25 network used previously.
Limitations of X.25:
1. Low Data Rate:
o X.25 has a low data rate of 64 kbps, insufficient by the 1990s.
2. High Overhead:
o Extensive flow and error control at both the data link and network layers, leading to slow
transmission.
3. Encapsulation Overhead:
o Designed for private use, X.25 encapsulates user data, doubling the overhead when used with
the Internet.
Drawbacks of Leasing T-1/T-3 Lines:
1. Cost:
o Organizations with multiple branches need numerous lines, which are costly and
underutilized.
2. Fixed-Rate Limitation:
o T-1/T-3 lines are designed for fixed-rate data, unsuitable for bursty data requiring variable
bandwidth.
Frame Relay Features:
1. High Speed:
o Operates at 1.544 Mbps (T-1) and 44.736 Mbps (T-3).
2. Layer Operation:
o Operates at physical and data link layers, making it a suitable backbone for network layer
protocols like the Internet.
3. Supports Bursty Data:
o Allows data to be sent in bursts rather than at a constant rate.
4. Large Frame Size:
o Allows a frame size of 9000 bytes, accommodating all LAN frame sizes.
5. Cost-Effective:
o Less expensive than traditional WANs.
6. Simplified Error Control:
o Only error detection at the data link layer without flow control or error correction.
Architecture:
• Provides permanent and switched virtual circuits.
• Utilizes Data Link Connection Identifiers (DLCIs) for virtual circuit identification.
Types of Virtual Circuits:
1. Permanent Virtual Circuits (PVCs):
o Always available but costly and inflexible.
2. Switched Virtual Circuits (SVCs):
o Temporary connections established as needed.
Switches:
• Use tables to route frames based on incoming and outgoing port-DLCI combinations.
Frame Relay Layers:
• Operates at the physical and data link layers, with error detection but no flow or error control.
FRADs (Frame Relay Assembler/Disassembler):
• Devices that assemble/disassemble frames from other protocols to be carried by Frame Relay.
Voice Over Frame Relay (VOFR):
• Allows voice to be sent as data frames, though with lower quality compared to circuit-switched
networks.
Local Management Information (LMI):
• Protocol providing management features like keep-alive, multicast, and switch status checking.
Congestion Control and Quality of Service:
• Frame Relay includes features for congestion control and QoS, which will be discussed further in
later chapters.
ATM (ASYNCHRONOUS TRANSFER MODE)
• ATM is a cell relay protocol developed by the ATM Forum and adopted by ITU-T.
• It integrates with SONET for high-speed global network interconnectivity.
• ATM is seen as the backbone for international communications.
Design Goals:
1. High Data Rate Optimization: Utilize high-bandwidth media, like optical fiber, minimizing noise
issues.
2. Interfacing with Existing Systems: Seamless integration without reducing effectiveness or
necessitating replacements.
3. Cost-Effective Implementation: Affordable to ensure widespread adoption.
4. Support Existing Telecommunications Hierarchies: Compatibility with current local and long-
distance carriers.
5. Connection-Oriented: Ensures predictable and accurate delivery.
6. Hardware Implementation: Maximize hardware functions for speed, reducing reliance on software.
Problems with Existing Systems:
• Frame Networks: Variability in frame sizes causes inefficiencies and complex header management.
• Mixed Network Traffic: Unpredictable frame sizes complicate traffic management, often causing
delays in mixed data types like audio and video.
Cell Networks:
• ATM uses small, fixed-size cells (53 bytes) for uniform and predictable data exchange.
• Cells eliminate problems with mixed frame sizes and allow interleaving for consistent data flow.
• Asynchronous TDM: Multiplexing method where cells from different channels fill slots based on
availability, not synchronization.
Architecture:
• ATM networks use cells for data units, ensuring uniformity.
• Endpoints connect via user-to-network interfaces (UNI) and switches via network-to-network
interfaces (NNI).
• Data transmission involves physical connections (Transmission Paths), grouped into Virtual Paths
(VPs), which contain Virtual Circuits (VCs) for specific data routes.
Cell and Switching:
• Cell: 53 bytes (5-byte header, 48-byte payload).
• Switching: Involves updating VPI and VCI values to route cells correctly.
• Connection Types: Permanent Virtual Circuits (PVCs) and Switched Virtual Circuits (SVCs).
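A minimal sketch of the switching step: the table entries and identifier values are made up for illustration, but the lookup-and-rewrite pattern is the one described above.

switch_table = {
    # (in_port, in_VPI, in_VCI): (out_port, out_VPI, out_VCI)
    (1, 153, 67): (3, 140, 92),
    (2, 140, 11): (4, 141, 45),
}

def switch_cell(in_port, vpi, vci, payload):
    # look up the incoming identifiers, forward on the outgoing port,
    # and rewrite VPI/VCI before the cell leaves the switch
    out_port, out_vpi, out_vci = switch_table[(in_port, vpi, vci)]
    return out_port, (out_vpi, out_vci, payload)

print(switch_cell(1, 153, 67, b"48-byte payload"))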
ATM Layers:
1. Application Adaptation Layer (AAL): Manages payload types and segmentation.
2. ATM Layer: Handles cell creation, routing, traffic management, and multiplexing.
3. Physical Layer: Various carriers (including SONET), defining cell boundaries.
Header and Identifier:
• Headers: Different formats for UNI and NNI cells.
• VPI/VCI: Virtual Path and Circuit Identifiers for routing cells through networks.
AAL Sub-layers:
• Convergence Sublayer (CS): Prepares data for segmentation.
• Segmentation and Reassembly (SAR): Segments payload into cells and reassembles them at the
destination.
• AAL Types: AAL1 (constant bit rate, e.g., video, voice), AAL2 (variable bit rate), AAL3/4, and
AAL5 (common in data communications).
ATM LANS
• High Data Rate: 155 and 622 Mbps, ideal for high-speed LANs.
• Advantages:
o Supports permanent and temporary connections.
o Facilitates multimedia communication with guaranteed bandwidth.
o Easy expansion for organizations.
ATM LAN Architectures
1. Pure ATM LAN:
o Uses ATM switches to connect stations, similar to Ethernet switches.
o Stations use virtual path identifier (VPI) and virtual circuit identifier (VCI) instead of
addresses.
o Requires a new build, cannot upgrade existing LANs.
2. Legacy ATM LAN:
o ATM backbone connects traditional LANs.
o Data is exchanged through converters that adapt frame formats.
o Multiplexes outputs from various LANs for high data rate input to ATM switch.
3. Mixed Architecture:
o Combines pure and legacy ATM LANs.
o Allows gradual migration from traditional LANs to ATM LANs.
o Mixed stations can communicate using their respective formats.
LAN Emulation (LANE)
• Challenges:
o Connectionless (Ethernet) vs. connection-oriented (ATM) protocols.
o Physical addresses vs. virtual-circuit identifiers.
o Multicasting and broadcasting in traditional LANs vs. point-to-multipoint in ATM.
o Interoperability between legacy LAN stations and ATM-connected stations.
• Solution:
o LANE: Emulates connectionless service on a connection-oriented ATM network.
o Uses client/server model with:
▪ LAN Emulation Client (LEC): Installed on all ATM stations.
▪ LAN Emulation Configuration Server (LECS): Manages initial connections.
▪ LAN Emulation Server (LES): Establishes virtual circuits for data frames.
▪ Broadcast/Unknown Server (BUS): Handles multicasting, broadcasting, and
unknown destinations.
Mixed Architecture with Client/Server Model
• Servers and Clients:
o Servers (LECS, LES, BUS) connect to the ATM switch.
o Directly connected ATM stations and traditional LAN stations via converters act as LEC
clients.
19
NETWORK LAYER: LOGICAL ADDRESSING
IPV4 ADDRESSES
An IPv4 address is a 32-bit unique identifier for a device's connection to the Internet. Each address ensures
universal connectivity, with no two devices sharing the same address simultaneously. IPv4 addresses are
necessary for each network layer connection a device has.
Address Space:
IPv4's 32-bit address space consists of 2^32, or 4,294,967,296, addresses, though practical usage
restrictions reduce this number.
Notations:
• Binary Notation: 32 bits displayed directly.
• Dotted-Decimal Notation: Four decimal numbers separated by dots, e.g., 192.168.1.1.
Classful Addressing:
IPv4 initially used classful addressing, dividing the address space into five classes (A, B, C, D, E), each with
a fixed block size:
• Class A: Large blocks, 16,777,216 addresses.
• Class B: Midsize blocks, 65,536 addresses.
• Class C: Small blocks, 256 addresses.
• Class D: Used for multicasting.
• Class E: Reserved for future use.
Netid and Hostid:
In classes A, B, and C, addresses are divided into a network identifier (netid) and a host identifier (hostid).
Mask: A 32-bit number with contiguous 1s followed by contiguous 0s, used to distinguish the netid from the
hostid. Default masks for classful addressing are:
• Class A: 255.0.0.0 (/8)
• Class B: 255.255.0.0 (/16)
• Class C: 255.255.255.0 (/24)
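A small sketch tying the classes to their default masks; it relies on the standard first-byte ranges (0-127 for A, 128-191 for B, 192-223 for C, 224-239 for D, 240-255 for E).

def address_class(addr: str) -> str:
    first = int(addr.split(".")[0])    # the class is determined by the first byte
    if first < 128:  return "A"
    if first < 192:  return "B"
    if first < 224:  return "C"
    if first < 240:  return "D"
    return "E"

default_mask = {"A": "255.0.0.0", "B": "255.255.0.0", "C": "255.255.255.0"}
addr = "192.168.1.1"
print(address_class(addr), default_mask[address_class(addr)])   # C 255.255.255.0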
Subnetting:
Divides large address blocks into smaller groups (subnets), increasing the number of 1s in the mask.
Supernetting:
Combines multiple class C blocks into a larger address range, decreasing the number of 1s in the mask.
Address Depletion:
The fixed size of class blocks led to inefficiencies and depletion of available addresses, especially for classes
A and B.
Classless Addressing:
Implemented to address depletion issues, classless addressing allocates address blocks without predefined
classes, using:
• Address Blocks: Granted in powers of 2, ensuring contiguous addresses and simplified management.
• CIDR Notation: Uses a suffix (/n) to denote the number of bits in the network portion of the
address.
Classless addressing allows flexible and efficient allocation of IP addresses, overcoming the limitations of
classful addressing.
Address Block Definition
1. Mask Definition:
o An IPv4 mask is a 32-bit number where the first n bits are 1s and the remaining 32 - n bits are
0s.
o The mask is written in CIDR notation as /n.
2. Address Block Representation:
o An address block is represented as x.y.z.t/n, where x.y.z.t is an address within the block and n
is the mask length.
3. Finding Addresses:
o First Address: Set the last 32 - n bits of the address to 0.
o Last Address: Set the last 32 - n bits of the address to 1.
o Number of Addresses: Calculated as 2^(32 - n).
Examples:
1. Address Block 205.16.37.39/28:
o First Address: Convert 205.16.37.39 to binary and set last 4 bits to 0 (for /28). Result:
205.16.37.32.
o Last Address: Set the last 4 bits to 1. Result: 205.16.37.47.
o Number of Addresses: 2^(32 - 28) = 16.
2. Another Example Calculation:
o Mask: /28 or 11111111 11111111 11111111 11110000.
o First Address: Use AND operation with the address and mask. Result: 205.16.37.32.
o Last Address: Use OR operation with the address and the complement of the mask. Result:
205.16.37.47.
o Number of Addresses: 2^(32 - 28) = 16.
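The same /28 example can be checked with Python's ipaddress module, which performs exactly the AND/OR arithmetic described above.

import ipaddress

# strict=False because 205.16.37.39 is an address inside the block, not its first address
net = ipaddress.ip_network("205.16.37.39/28", strict=False)
print(net.network_address)      # 205.16.37.32  (last 4 bits set to 0)
print(net.broadcast_address)    # 205.16.37.47  (last 4 bits set to 1)
print(net.num_addresses)        # 16 = 2**(32 - 28)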
Network Addresses and Hierarchy
1. Network Address:
o The first address in a block often serves as the network address, representing the organization
on the internet.
2. Hierarchy:
o Two-Level Hierarchy: Consists of a network prefix and a host suffix. For example, in
x.y.z.t/n, the prefix is the first n bits, and the suffix is the remaining 32 - n bits.
o Three-Level Hierarchy (Subnetting): Allows further division of an address block into
subnets, each with its own prefix.
3. Subnetting Example:
o Given block 17.12.40.0/26 (64 addresses), subnets can be divided as follows:
▪ Subnet 1: /27 (32 addresses).
▪ Subnet 2: /28 (16 addresses).
▪ Subnet 3: /28 (16 addresses).
4. Address Allocation:
o ICANN assigns large blocks to ISPs, who then allocate smaller blocks to customers.
o Example Allocation: For a block 190.100.0.0/16:
▪ Group 1: 64 sub-blocks of /24 (256 addresses each).
▪ Group 2: 128 sub-blocks of /25 (128 addresses each).
▪ Group 3: 128 sub-blocks of /26 (64 addresses each).
o Remaining Addresses: After allocation, 24,576 addresses are available.
Network Address Translation (NAT):
Network Address Translation (NAT) is a technique used to address the shortage of IP addresses by allowing
a single public IP address to be shared among multiple devices within a private network. It enables multiple
internal devices to communicate with external networks using a single or a few public IP addresses.
Private vs. Public Addresses:
• Private Addresses: Reserved for use within private networks and cannot be routed on the global
Internet. They fall within three ranges:
o 10.0.0.0 to 10.255.255.255
o 172.16.0.0 to 172.31.255.255
o 192.168.0.0 to 192.168.255.255
• Public Addresses: Assigned to devices that need to be directly accessible over the Internet.
NAT Operation:
1. Address Translation:
o Outgoing Packets: The NAT router replaces the private IP address in the source field with a
public IP address.
o Incoming Packets: The router replaces the destination public IP address with the appropriate
private IP address.
2. Translation Table:
o A table that maps private addresses and ports to public addresses and ports. This table helps
the NAT router keep track of active connections and correctly route packets.
3. Types of NAT:
o Static NAT: Maps a single private IP address to a single public IP address.
o Dynamic NAT: Maps private IP addresses to a pool of public IP addresses.
o PAT (Port Address Translation) or NAT Overload: Maps multiple private IP addresses to
a single public IP address, distinguishing connections by port numbers.
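A minimal sketch of a PAT translation table: the public address, port numbers, and function names are made-up example values, and a real table would also record the external destination address and port.

PUBLIC_IP = "200.24.5.8"            # the one public address shared by the private network
table = {}                          # public_port -> (private_ip, private_port)
next_port = 40000

def outgoing(private_ip, private_port):
    global next_port
    public_port = next_port         # assign a fresh public port to this connection
    next_port += 1
    table[public_port] = (private_ip, private_port)
    return PUBLIC_IP, public_port   # the source address/port seen by the outside world

def incoming(public_port):
    return table[public_port]       # rewrite the reply back to the private host

src = outgoing("10.0.0.5", 5123)
print(src, incoming(src[1]))        # ('200.24.5.8', 40000) ('10.0.0.5', 5123)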
Challenges and Solutions:
• Single Public Address Limitation: With only one public IP, only one internal device can
communicate with the same external server simultaneously. This can be resolved by using a pool of
public IP addresses or by incorporating port numbers into the translation table.
• Communication Initiation: NAT typically requires that communication be initiated from within the
private network. External servers cannot directly initiate communication with private network hosts.
ISP Usage: ISPs often use NAT to manage large numbers of customers with a limited number of public IP
addresses. They translate many private addresses into a smaller number of public addresses, enabling
efficient IP address usage and conserving the global address space.
NAT is essential for managing the IP address shortage and facilitating secure internal network
communication with external networks.
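To make the PAT translation table described above concrete, the following is a minimal Python sketch, assuming a single public address; the address, port pool, and helper names are illustrative, not a real router implementation.

import itertools

PUBLIC_IP = "200.24.5.8"                      # assumed public address
_next_port = itertools.count(start=40001)     # pool of external port numbers
table = {}                                    # (private_ip, private_port) -> public_port

def translate_outgoing(private_ip, private_port):
    """Rewrite the source of an outgoing packet and remember the mapping."""
    key = (private_ip, private_port)
    if key not in table:
        table[key] = next(_next_port)
    return PUBLIC_IP, table[key]

def translate_incoming(public_port):
    """Map the destination of an incoming packet back to the private host."""
    for (private_ip, private_port), port in table.items():
        if port == public_port:
            return private_ip, private_port
    return None  # no active connection; externally initiated traffic is dropped

print(translate_outgoing("10.0.0.5", 5123))   # e.g. ('200.24.5.8', 40001)
print(translate_incoming(40001))              # ('10.0.0.5', 5123)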
IPV6 ADDRESSES
IPv6 was introduced to address the limitations of IPv4, such as address depletion and lack of support for
real-time applications, encryption, and authentication.
Address Structure:
• Length: IPv6 addresses are 128 bits long, divided into 16 bytes (octets).
• Notation: They are written in hexadecimal colon notation, with 8 sections of 4 hexadecimal digits
separated by colons (e.g., 2001:0db8:85a3:0000:0000:8a2e:0370:7334).
Abbreviation:
• Leading Zeros: Can be omitted within each 16-bit section (e.g., 0001 becomes 1).
• Consecutive Zero Sections: A series of contiguous zero sections can be replaced with a double
colon (::), but this can only be done once per address (e.g., 2001:0db8::2:1).
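A small sketch of the abbreviation rules using Python's standard ipaddress module; the sample address is the one quoted above.

import ipaddress

addr = ipaddress.ip_address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")
print(addr.compressed)   # 2001:db8:85a3::8a2e:370:7334 (leading zeros dropped, one ::)
print(addr.exploded)     # full form with all leading zeros restored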
Address Space:
• Size: IPv6 offers a vastly larger address space than IPv4, with 2^128 addresses.
• Prefix Categories: The address space is divided into categories based on the initial bits (type
prefixes), such as:
o Unicast Addresses: For identifying a single node.
o Multicast Addresses: For defining a group of nodes that should all receive a packet.
o Anycast Addresses: Assigned to a group of nodes, with packets delivered to the nearest
node.
Address Types:
1. Unicast Addresses:
o Provider-Based: Used by individual hosts, with fields for provider and subscriber identifiers.
2. Multicast Addresses:
o Group ID: Defines a group of hosts. Multicast addresses can be permanent or transient, with
scopes defining their reach (e.g., link-local, site-local, global).
3. Anycast Addresses:
o Packets are delivered to the nearest node in the group, not to all nodes.
Reserved Addresses:
• Unspecified: Used when a host does not know its address.
• Loopback: For internal testing within the host.
• IPv4-Compatible: For transition between IPv4 and IPv6.
• IPv4-Mapped: For communication between IPv6 and IPv4 systems.
Local Addresses:
• Link-Local: Used within a single subnet.
• Site-Local: Used within a site with multiple subnets.
20
NETWORK LAYER: INTERNET PROTOCOL
INTERNETWORKING
• The physical and data link layers handle local data delivery between nodes on a network.
• When data needs to traverse multiple networks, such as moving from host A to host D through
routers R1 and R3, the limitations of the data link layer become apparent.
Need for Network Layer:
• Purpose: The network layer is necessary for host-to-host delivery and routing packets through
routers or switches.
• Functionality: Adds routing information to packets and determines the correct outgoing interface for
packet delivery.
Network Layer Responsibilities:
1. At the Source:
o Packet Creation: Encapsulates data into packets with logical source and destination
addresses.
o Routing Information: Checks routing tables and handles fragmentation if needed.
2. At Routers:
o Routing: Consults routing tables to forward packets to the appropriate next hop.
o Header Updates: Modifies packet headers with routing information before forwarding.
3. At the Destination:
o Address Verification: Ensures the packet's destination address matches the host's address.
o Fragmentation Handling: Reassembles fragmented packets and delivers them to the
transport layer.
Internet as a Datagram Network:
• Packet-Switched Network: The Internet uses packet switching at the network layer, specifically the
datagram approach, where packets are routed independently based on universal addresses.
Internet as a Connectionless Network:
• Connectionless Service: Each packet is treated independently, with no relationship to other packets
from the same source. Packets may travel different paths to the destination.
• Reasons: The Internet's heterogeneous nature makes it impractical to establish end-to-end
connections in advance.
• Comparison with Connection-Oriented Service: Unlike connection-oriented services (where a
connection is established before data transfer, and packets follow a predetermined path), the
connectionless approach recalculates routes for each packet, making it more flexible for the diverse
Internet infrastructure.
IPV4
IPv4 is a fundamental part of the TCP/IP protocol suite, providing a delivery mechanism for data over the
internet. It is characterized as an unreliable, connectionless datagram protocol, meaning it does not provide
error or flow control (other than error detection on the header) and each datagram is treated independently. If
reliability is required, IPv4 must be paired with a reliable protocol such as TCP.
Key Characteristics:
• Unreliable & Connectionless: IPv4 does its best to deliver data without guarantees, similar to a
postal service that does not track or ensure delivery.
• Datagram-Based: Data packets in IPv4 are called datagrams, which consist of a header and data.
Each datagram can take a different route to its destination, and they may arrive out of order, or some
might get lost or corrupted.
Datagram Structure:
• Header: 20 to 60 bytes, containing essential routing and delivery information.
• Fields in the Header:
o Version (VER): Indicates the version of the protocol (currently 4).
o Header Length (HLEN): Specifies the length of the header.
o Services: Differentiated services for handling data.
o Total Length: Total length of the datagram including header and data.
o Identification, Flags, Fragmentation Offset: Used for fragmentation and reassembly of
datagrams.
o Time to Live (TTL): Limits the datagram's lifetime to prevent it from circulating
indefinitely.
o Protocol: Indicates the higher-level protocol (e.g., TCP, UDP) that the data is meant for.
o Header Checksum: Used for error-checking the header.
o Source & Destination Addresses: IPv4 addresses of the sender and receiver.
Fragmentation:
• IPv4 datagrams can be fragmented to pass through networks with smaller Maximum Transmission
Units (MTUs).
• Fragmentation fields include identification, flags, and fragmentation offset.
• Each fragment has its own header with the same identification number.
• Fragments are reassembled only at the destination.
Examples:
1. Invalid Header Length: If an IPv4 packet has a header length of 8 bytes, it is invalid and discarded.
2. Options in Header: If the HLEN value is 8, the header is 32 bytes, indicating 12 bytes of options.
3. Data Size Calculation: With HLEN of 5 and a total length of 40 bytes, the packet carries 20 bytes of
data.
4. TTL and Protocol: With TTL set to 1 and protocol field set to 2, the packet can only make one hop
and belongs to IGMP.
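The header-length arithmetic behind these examples can be sketched in a few lines of Python (HLEN is expressed in 4-byte words, and values below 5, i.e. 20 bytes, are invalid):

def header_bytes(hlen):
    return hlen * 4                       # HLEN counts 4-byte words

def data_bytes(hlen, total_length):
    return total_length - header_bytes(hlen)

print(header_bytes(8))     # 32-byte header, i.e. 12 bytes of options
print(data_bytes(5, 40))   # 20-byte header, 20 bytes of data
print(header_bytes(2))     # 8 bytes: below the 20-byte minimum, so invalid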
Checksum
• A checksum is a mechanism used to ensure data integrity by detecting errors in transmitted data.
• In IPv4, the checksum is applied to the packet header to verify its integrity.
Calculation:
1. Initial Setup: Set the checksum field to 0.
2. Division: Divide the header into 16-bit sections.
3. Summation: Add all 16-bit sections using one's complement arithmetic (any carry is wrapped back into the sum).
4. Complementation: Complement the resulting sum.
5. Insertion: Insert the complemented sum into the checksum field.
Scope:
• The checksum covers only the header, not the data.
• Higher-level protocols (which encapsulate the data) have their own checksums.
• Header changes with each router, necessitating checksum recalculation only for the header.
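A minimal Python sketch of the calculation steps above, assuming a 20-byte header with illustrative field values; the 16-bit words are added with the carry wrapped back in (one's complement sum) and the result is complemented.

def ipv4_checksum(header: bytes) -> int:
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]   # next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)    # wrap the carry back in
    return ~total & 0xFFFF                          # one's complement of the sum

# 20-byte header with the checksum field (bytes 10-11) set to zero; values are illustrative
header = bytes.fromhex("4500003c1c4640004006" "0000" "ac100a63ac100a0c")
print(hex(ipv4_checksum(header)))                   # value inserted into the checksum field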
Options
Structure:
• The IPv4 header consists of a fixed part (20 bytes) and a variable part (up to 40 bytes for options).
Purpose:
• Options are used for network testing and debugging but are not mandatory.
• All IPv4 implementations must handle options if present.
Types of Options:
1. Single-byte:
o No Operation: 1-byte option used as a filler.
o End of Option: 1-byte option used for padding at the end of the option field.
2. Multiple-byte:
o Record Route: Records the addresses of routers that handle the datagram.
o Strict Source Route: Predetermines a strict route for the datagram to follow.
o Loose Source Route: Predetermines a less strict route, allowing visits to other routers.
o Timestamp: Records the time of datagram processing by each router in milliseconds from
midnight (UTC/GMT).
Key Points:
• Record Route: Lists up to nine router addresses, useful for debugging and management.
• Strict Source Route: Ensures datagram visits only specified routers; errors occur if any listed
routers are missed.
• Loose Source Route: Allows visiting other routers along with the specified ones.
• Timestamp: Helps track router behavior and estimate datagram travel time.
IPV6
IPv6 (Internetworking Protocol version 6), also known as IPng (Internetworking Protocol next generation),
was developed to address the limitations of IPv4, such as:
1. Address Depletion: Despite temporary solutions like subnetting, classless addressing, and NAT,
IPv4 still faces long-term address depletion.
2. Real-Time Transmission: IPv4 lacks provisions for real-time audio and video transmissions, which
require minimum delay strategies and resource reservation.
3. Security: IPv4 does not provide built-in encryption or authentication.
Advantages of IPv6
1. Larger Address Space: IPv6 uses 128-bit addresses, significantly increasing the number of
available addresses.
2. Better Header Format: Options are separated from the base header, simplifying and speeding up
routing.
3. New Options: IPv6 includes new options for added functionalities.
4. Extension Allowance: IPv6 is designed to allow extensions for new technologies or applications.
5. Support for Resource Allocation: IPv6 includes a flow label mechanism for resource reservation,
useful for real-time traffic.
6. Enhanced Security: IPv6 provides built-in options for encryption and authentication.
IPv6 Packet Format
• Base Header: Fixed at 40 bytes, followed by optional extension headers and upper-layer data.
• Fields:
o Version: Identifies the IP version (6 for IPv6).
o Priority: Defines the packet's priority relative to other packets.
o Flow Label: Used for special handling of data flows.
o Payload Length: Specifies the length of the payload excluding the base header.
o Next Header: Indicates the type of the next header (extension header or upper-layer
protocol).
o Hop Limit: Similar to IPv4's TTL field.
o Source Address: The 128-bit address of the packet's origin.
o Destination Address: The 128-bit address of the packet's destination.
Priority and Flow Label
• Priority: IPv6 packets are assigned priorities to manage congestion and ensure quality of service for
different types of traffic.
o Congestion-Controlled Traffic: Priorities from 0 to 7, with higher numbers indicating
higher priority.
o Non-Congestion-Controlled Traffic: Priorities from 8 to 15, based on the redundancy of the
data.
• Flow Label: Used to identify and manage flows of packets requiring special handling, such as real-
time audio and video.
Comparison Between IPv4 and IPv6 Headers
1. Header Length: Fixed in IPv6, no header length field.
2. Service Type: Replaced by priority and flow label fields.
3. Total Length: Replaced by payload length.
4. Fragmentation Fields: Moved to the fragmentation extension header.
5. TTL: Renamed to hop limit.
6. Protocol Field: Replaced by the next header field.
7. Header Checksum: Eliminated, as upper-layer protocols provide checksum.
8. Options: Implemented as extension headers in IPv6.
Extension Headers
IPv6 supports up to six types of extension headers, which provide additional functionalities not covered by
the base header:
1. Hop-by-Hop Option: Information for all routers along the path.
2. Source Routing: Combines strict and loose source routing.
3. Fragmentation: Only the source can fragment packets.
4. Authentication: Ensures message sender validation and data integrity.
5. Encrypted Security Payload (ESP): Provides data confidentiality and protection against
eavesdropping.
6. Destination Option: Information for the destination node only.
21
Network Layer: Address Mapping, Error Reporting, and Multicasting
ADDRESS MAPPING
Internet Structure:
• Internet consists of multiple physical networks connected via routers.
• Packets traverse through various physical networks from source to destination.
• Devices (hosts and routers) are identified by logical (IP) addresses at the network level and physical
addresses at the data link layer.
Physical Address:
• Local address specific to a network.
• Examples include the 48-bit MAC address in Ethernet.
• Unique locally but not necessarily universally.
Logical vs. Physical Address:
• Logical (IP) address: Used for network-level identification.
• Physical address: Used for actual packet delivery within a local network.
• Mapping between these addresses is essential for packet delivery.
Static and Dynamic Mapping:
• Static Mapping:
o A table associates logical addresses with physical addresses.
o Limitations: Physical addresses can change due to NIC changes, reboots in certain LANs, or
mobility.
o Requires periodic updates, which can affect network performance.
• Dynamic Mapping:
o Uses protocols like ARP to dynamically find the corresponding address.
MAPPING LOGICAL TO PHYSICAL ADDRESS:
Address Resolution Protocol (ARP)
Function:
• Maps a logical (IP) address to a physical address.
• Hosts/routers send ARP query packets when they know the IP address but not the physical address.
Process:
1. Sender creates an ARP request with its IP and physical address and the receiver’s IP address.
2. The request is broadcasted on the network.
3. The target device recognizes its IP address in the request and sends back an ARP reply with its
physical address.
4. The sender receives the reply and can now send packets using the obtained physical address.
Optimization:
• ARP replies are cached to avoid frequent broadcasts for the same IP address, enhancing efficiency.
Packet Format:
• Contains fields like hardware type, protocol type, hardware length, protocol length, operation,
sender’s hardware and protocol addresses, and target’s hardware and protocol addresses.
Encapsulation:
• ARP packets are encapsulated in data link frames (e.g., Ethernet frames).
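For illustration, here is a sketch that packs the ARP request fields listed above into the standard 28-byte format used on Ethernet with IPv4 (hardware type 1, protocol type 0x0800); the MAC and IP addresses are made-up values.

import socket
import struct

def arp_request(sender_mac, sender_ip, target_ip):
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,                          # hardware type: Ethernet
        0x0800,                     # protocol type: IPv4
        6, 4,                       # hardware length, protocol length
        1,                          # operation: 1 = request, 2 = reply
        sender_mac,
        socket.inet_aton(sender_ip),
        b"\x00" * 6,                # target hardware address: unknown, filled by the reply
        socket.inet_aton(target_ip),
    )

packet = arp_request(b"\xaa\xbb\xcc\xdd\xee\xff", "10.0.0.1", "10.0.0.9")
print(len(packet))                  # 28 bytes, carried inside an Ethernet frame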
MAPPING PHYSICAL TO LOGICAL ADDRESS:
Reverse ARP (RARP), BOOTP, and DHCP
RARP:
• Used when a device knows its physical address but not its logical address.
• Broadcasts a request and receives a reply with the logical address.
• Limited by the need for a server on each network or subnet.
BOOTP:
• Client/server protocol for mapping physical to logical addresses.
• Supports clients and servers on different networks using a relay agent.
• Encapsulated in UDP/IP packets.
• Addresses static configurations.
DHCP:
• Extends BOOTP functionalities for both static and dynamic address allocation.
• Provides temporary (leased) IP addresses from a pool.
• Handles dynamic scenarios where devices frequently change networks or require temporary
addresses.
• Supports manual configuration for static addresses and automatic configuration for dynamic
addresses.
ICMPv6
• ICMPv6 (Internet Control Message Protocol for IPv6) is an updated version of ICMP (ICMPv4)
adapted for IPv6.
• It integrates some previously independent protocols and introduces new functionalities and message
types.
Comparison of Network Layers (IPv4 vs. IPv6):
• ARP and IGMP from IPv4 are now part of ICMPv6.
• RARP is removed as its functions are covered by BOOTP.
ICMPv6 Messages:
• Divided into two categories: Error Reporting and Query, similar to ICMPv4 but with more message
types.
Error Reporting:
• Types of Errors: Destination unreachable, packet too big, time exceeded, parameter problems, and
redirection.
• Key Differences:
o Source-quench message is removed (redundant due to IPv6's priority and flow label fields).
o Packet-too-big message added (IPv6 sender handles fragmentation; if incorrect, a router
sends this error).
Comparison of Error-Reporting Messages:
Type of Message                IPv4   IPv6
Destination unreachable        Yes    Yes
Source quench                  Yes    No
Packet too big                 No     Yes
Time exceeded                  Yes    Yes
Parameter problem              Yes    Yes
Redirection                    Yes    Yes
Query Messages:
• Diagnose network problems with echo request/reply, router solicitation/advertisement, neighbor
solicitation/advertisement, and group membership.
• Eliminated Messages: Time-stamp request/reply and address-mask request/reply (handled by other
protocols or unnecessary in IPv6).
Comparison of Query Messages:
Type of Message                             IPv4   IPv6
Echo request and reply                      Yes    Yes
Timestamp request and reply                 Yes    No
Address-mask request and reply              Yes    No
Router solicitation and advertisement       Yes    Yes
Neighbor solicitation and advertisement     ARP    Yes
Group membership                            IGMP   Yes
Specific Message Types:
• Echo Request and Reply: Same as IPv4.
• Router Solicitation and Advertisement: Same concept as IPv4.
• Neighbor Solicitation and Advertisement: Replaces ARP in IPv6.
• Group Membership: Integrates IGMP functionalities.
ICMPv6 enhances network layer error reporting and query capabilities, streamlining some processes by
integrating functions from older, separate protocols.
22
NETWORK LAYER: DELIVERY, FORWARDING, AND ROUTING
DELIVERY
Direct vs. Indirect Delivery:
1. Direct Delivery:
o Occurs when the destination host is on the same physical network as the sender.
o The sender can determine this by extracting the destination network address and comparing it
to its connected networks.
2. Indirect Delivery:
o Used when the destination host is on a different network.
o The packet is forwarded from router to router until it reaches a router connected to the
destination network.
o Always involves one direct delivery at the end.
FORWARDING
• Forwarding is the process of placing a packet on its route to the destination.
• Requires a routing table to determine the route.
Forwarding Techniques:
1. Next-Hop Method:
▪ The routing table stores only the address of the next hop.
2. Network-Specific Method:
▪ Reduces the routing table size by having entries for networks instead of individual
hosts.
3. Default Method:
▪ Uses a default route for destinations not in the routing table.
• Forwarding Process:
o Uses classless addressing with routing tables including masks.
o The table is searched using the network address derived from the destination address and
mask.
Examples:
1. Router R1 Configuration:
o Routing table includes masks and corresponding network addresses for routing decisions.
2. Forwarding Process Examples:
o Router applies masks sequentially to determine the route based on the network address
match.
Address Aggregation:
• Aggregates multiple smaller address blocks into a larger block to reduce routing table entries and
search time.
Longest Mask Matching:
• The routing table is sorted from the longest to the shortest mask.
• Ensures accurate routing by matching the most specific (longest) mask first.
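A small sketch of longest-mask matching using Python's ipaddress module; the table entries and next hops are illustrative, and the table is kept sorted from longest to shortest mask so that the first match wins.

import ipaddress

# (network, next hop / outgoing interface), sorted from longest to shortest mask
routing_table = [
    (ipaddress.ip_network("180.70.65.192/26"), "interface m2"),
    (ipaddress.ip_network("180.70.65.128/25"), "interface m0"),
    (ipaddress.ip_network("201.4.16.0/22"),    "interface m1"),
    (ipaddress.ip_network("0.0.0.0/0"),        "180.70.65.200"),   # default route
]

def forward(destination):
    dest = ipaddress.ip_address(destination)
    for network, next_hop in routing_table:    # first (longest) match wins
        if dest in network:
            return next_hop

print(forward("180.70.65.140"))    # matched by the /25 entry
print(forward("8.8.8.8"))          # falls through to the default route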
Hierarchical Routing:
• Creates hierarchy in routing tables to manage large address spaces.
• Utilizes levels of ISPs (local, regional, national) to structure routing efficiently.
Geographical Routing:
• Divides the address space into large blocks for different geographical regions.
• Simplifies routing tables by having fewer entries for large regions.
Routing Tables:
• Can be static or dynamic.
o Static Routing Table:
▪ Manually entered routes, suitable for small or unchanging networks.
o Dynamic Routing Table:
▪ Automatically updated using dynamic routing protocols (e.g., RIP, OSPF, BGP).
• Format:
o Includes fields such as mask, network address, next-hop address, interface, flags, reference
count, and use.
Utilities:
• Tools like netstat can be used to view the contents of routing tables on hosts and routers.
UNICAST ROUTING PROTOCOLS
Routing Tables
• Static Tables: Manually configured; do not change unless manually updated.
• Dynamic Tables: Automatically updated when changes occur in the network (e.g., router failures,
better routes found).
Routing Protocols
• Purpose: Enable routers to share information about network changes, ensuring dynamic updates of
routing tables.
• Optimization: Routers decide optimal paths based on metrics assigned to networks.
Metrics and Optimization
• Metrics: Cost assigned to traversing a network, varies by protocol:
o RIP (Routing Information Protocol): Uses hop count; each network traversed is counted as
one hop.
o OSPF (Open Shortest Path First): Allows different metrics based on the type of service
(e.g., maximum throughput, minimum delay).
o BGP (Border Gateway Protocol): Uses policies defined by the administrator to select paths.
Intra- and Interdomain Routing
• Intradomain Routing: Within a single autonomous system (AS), using protocols like distance
vector and link state.
• Interdomain Routing: Between autonomous systems, typically using path vector protocols.
Distance Vector Routing
• Basic Concept: Each node maintains a table of minimum distances to all other nodes, updated
periodically or when changes occur.
• Initialization: Each node starts with the distances to its immediate neighbors.
• Sharing: Nodes periodically share their tables with immediate neighbors.
• Updating: Nodes update their tables based on received information, adjusting costs and next-hop
entries.
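A minimal sketch of one distance vector update step, assuming each table maps a destination to a (cost, next hop) pair; the node names and link costs are illustrative.

INF = float("inf")

def dv_update(my_table, neighbor, link_cost, neighbor_table):
    """my_table / neighbor_table: dict destination -> (cost, next_hop)."""
    changed = False
    for dest, (cost_via_neighbor, _) in neighbor_table.items():
        new_cost = link_cost + cost_via_neighbor
        if new_cost < my_table.get(dest, (INF, None))[0]:
            my_table[dest] = (new_cost, neighbor)   # cheaper path through this neighbor
            changed = True
    return changed

a = {"A": (0, "-"), "B": (5, "B"), "C": (2, "C")}
c = {"C": (0, "-"), "B": (1, "B"), "D": (4, "D")}
dv_update(a, "C", 2, c)
print(a)   # B now reachable through C at cost 3, D at cost 6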
Instability and Solutions
• Instability Issues: Occur due to loops (e.g., two-node and three-node loops) where routing updates
cause incorrect path calculations.
• Solutions:
o Defining Infinity: Limits the maximum cost to a manageable number.
o Split Horizon: Prevents sending routing information back to the source.
o Split Horizon with Poison Reverse: Advertises a route with an infinite cost if the
information came from the neighbor.
RIP (Routing Information Protocol)
• Characteristics:
o Used within an autonomous system.
o Implements distance vector routing.
o Uses hop count as a metric; defines infinity as 16 hops.
o Routing tables list networks as destinations and the next-hop router for each network.
• Example: An autonomous system with seven networks and four routers demonstrates how routers
maintain and use routing tables to determine paths.
Link State Routing
Link State Routing (LSR) operates on the principle that each node in a network maintains a complete
topology of the network, including nodes, links, types, costs, and states (up or down) of the links. Using
Dijkstra's algorithm, each node can build a unique routing table based on its view of the topology, similar to
how different people use the same city map to find different routes.
The topology must be dynamic, with updates reflecting any changes in the network, such as links going
down. Although no node initially knows the full topology, each node gathers partial information about its
directly connected links. These pieces of information, when combined, provide a complete view of the
network.
Steps in Link State Routing:
1. Creation of Link State Packets (LSPs):
o LSPs contain data about the node identity, list of links, sequence number, and age.
o They are generated when there are topology changes or periodically to remove old
information.
2. Flooding of LSPs:
o Nodes disseminate LSPs to all other nodes in the network.
o A node discards older LSPs based on sequence numbers and forwards newer ones to all
interfaces except the one it received from.
3. Formation of Shortest Path Tree:
o Using Dijkstra's algorithm, each node creates a shortest path tree from the received LSPs,
with itself as the root.
4. Calculation of Routing Table:
o Each node uses the shortest path tree to build a routing table showing the cost to reach every
other node.
Example:
Consider a network with nodes A, B, C, D, and E:
• Each node knows the state and cost of its direct links.
• Nodes create LSPs and flood them across the network.
• Nodes receive LSPs, build a complete topology, and use Dijkstra's algorithm to form a shortest path
tree.
• Routing tables are then constructed from these trees.
Key Concepts:
• Link State Packet (LSP): Contains essential information for topology creation.
• Flooding: Ensures LSPs reach all nodes.
• Shortest Path Tree: Represents the shortest paths from the root node.
• Dijkstra's Algorithm: Used to build the shortest path tree.
Example Process:
For node A:
1. A creates an LSP with its links to B, C, and D.
2. A floods the LSP to its neighbors.
3. Each neighbor forwards the LSP, ensuring it reaches all nodes.
4. A receives LSPs from other nodes and constructs a complete topology.
5. A uses Dijkstra's algorithm to build its shortest path tree and routing table, determining the shortest
paths and costs to all other nodes.
Final Routing Table Example for Node A:
Node   Cost   Next Router
A      0      -
B      5      -
C      2      -
D      3      -
E      6      C
By maintaining up-to-date topology information and using efficient algorithms, link state routing enables
robust and dynamic routing decisions within a network.
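To make the shortest-path-tree step concrete, here is a small Dijkstra sketch using Python's heapq, with link costs chosen so that node A's results match the routing table above; the graph itself is an assumption, not taken from a figure.

import heapq

graph = {
    "A": {"B": 5, "C": 2, "D": 3},
    "B": {"A": 5, "C": 4, "E": 3},
    "C": {"A": 2, "B": 4, "E": 4},
    "D": {"A": 3, "E": 5},
    "E": {"B": 3, "C": 4, "D": 5},
}

def shortest_paths(root):
    dist, prev = {root: 0}, {}
    heap = [(0, root)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                       # stale heap entry
        for neighbor, cost in graph[node].items():
            if d + cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = d + cost
                prev[neighbor] = node      # parent in the shortest path tree
                heapq.heappush(heap, (d + cost, neighbor))
    return dist, prev

print(shortest_paths("A"))   # costs and parents form node A's routing table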
23
PROCESS-TO-PROCESS DELIVERY: UDP, TCP, AND SCTP
PROCESS-TO-PROCESS DELIVERY
Node-to-Node Delivery:
• Managed by the data link layer, handling frame delivery between neighboring nodes.
Host-to-Host Delivery:
• Managed by the network layer, responsible for delivering datagrams between two hosts.
Process-to-Process Delivery:
• Essential for real communication between application programs (processes) on the internet.
• Managed by the transport layer, ensuring data is delivered from one process on a source host to a
corresponding process on a destination host.
• Utilizes a client/server paradigm where a client process on the local host requests services from a
server process on a remote host.
Addressing in Process-to-Process Delivery:
1. MAC Address: Used at the data link layer to select nodes.
2. IP Address: Used at the network layer to select hosts.
3. Port Number: Used at the transport layer to select processes. Ports are 16-bit numbers (0-65,535):
o Well-known Ports (0-1023): Assigned by IANA.
o Registered Ports (1024-49,151): Not assigned but can be registered.
o Dynamic Ports (49,152-65,535): Free to use by any process.
Socket Address:
• A combination of an IP address and a port number uniquely identifies a process.
• Required for communication, with the client and server each having their unique socket addresses.
Multiplexing and Demultiplexing:
• Multiplexing: Multiple processes sending packets through a single transport layer protocol.
• Demultiplexing: The transport layer delivers packets to the correct process based on port numbers.
Service Types in Transport Layer:
1. Connectionless Service (UDP):
o No connection setup or teardown.
o No packet numbering, potential delays, loss, or out-of-sequence arrival.
o No acknowledgments.
2. Connection-Oriented Service (TCP, SCTP):
o Connection established before data transfer.
o Connection released after data transfer.
o Reliable, implementing flow and error control.
Reliability and Error Control:
• Necessary at the transport layer due to the unreliable nature of the network layer.
• Ensures end-to-end reliability, not just node-to-node.
Common Transport Layer Protocols:
1. UDP: Connectionless and unreliable.
2. TCP: Connection-oriented and reliable.
3. SCTP: A newer protocol, also connection-oriented and reliable.
USER DATAGRAM PROTOCOL (UDP)
• UDP is a connectionless, unreliable transport protocol.
• It offers process-to-process communication, unlike IP, which provides host-to-host communication.
• Performs limited error checking and does not guarantee delivery, order, or duplication protection.
Advantages:
• Simple and lightweight with minimal overhead.
• Suitable for small messages where reliability is not critical.
• Less interaction needed between sender and receiver compared to TCP or SCTP.
Well-Known Ports for UDP:
• Ports used by specific applications, e.g., Echo (7), Discard (9), Daytime (13), DNS (53), TFTP (69),
NTP (123), SNMP (161, 162).
User Datagram Format:
• Fixed-size 8-byte header.
• Fields include:
o Source Port Number: 16-bit number for the sending process.
o Destination Port Number: 16-bit number for the receiving process.
o Length: 16-bit field defining the total length of the datagram.
o Checksum: 16-bit field for error-checking the entire datagram (header and data).
Checksum Calculation:
• Includes a pseudoheader, the UDP header, and the data from the application layer.
• Optional: If the checksum is not computed, the field is filled with all 0s; a computed checksum that happens to be zero is transmitted as all 1s.
UDP Operation:
• Connectionless Service: Each datagram is independent, with no need for connection establishment
or termination.
• Flow and Error Control: No mechanisms beyond a checksum for error detection. No flow control,
potentially leading to overflow at the receiver.
• Encapsulation and Decapsulation: UDP encapsulates messages in IP datagrams for delivery.
• Queuing: Queues are associated with ports for managing outgoing and incoming messages.
Uses of UDP:
• Suitable for simple request-response communications with minimal error control.
• Ideal for processes with internal error and flow control mechanisms, e.g., TFTP.
• Used for multicasting, management processes (SNMP), and routing updates (RIP).
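A minimal sketch of UDP's connectionless service using Python's socket API: no connection setup, one datagram out and one back; the loopback address and port 9999 are illustrative choices.

import socket

# --- server side ---
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 9999))               # socket address = IP address + port

# --- client side ---
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", ("127.0.0.1", 9999))   # no connection establishment

data, client_addr = server.recvfrom(1024)      # demultiplexed by port number
server.sendto(data.upper(), client_addr)
print(client.recvfrom(1024)[0])                # b'HELLO'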
TCP
TCP is a connection-oriented, reliable transport protocol used for process-to-process communication over
the Internet. It ensures data is delivered in order and without errors, distinguishing it from the connectionless
UDP.
TCP Services:
• Process-to-Process Communication: Uses port numbers to facilitate communication between
application processes. Well-known ports include 80 for HTTP, 25 for SMTP, and 53 for DNS.
• Stream Delivery Service: TCP allows data to be sent and received as a continuous stream of bytes,
unlike UDP which sends messages with predefined boundaries.
• Full-Duplex Communication: Enables simultaneous bidirectional data flow.
• Connection-Oriented Service: Establishes a virtual connection through a three-step process:
connection establishment, data transfer, and connection termination.
• Reliable Service: Uses acknowledgments and retransmissions to ensure data is correctly delivered.
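The services above can be illustrated with a short Python socket sketch: connect() triggers the three-way handshake, and the bytes arrive in order as a stream; the loopback address and port 8888 are illustrative choices.

import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 8888))
srv.listen(1)                               # ready to complete handshakes

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 8888))         # SYN / SYN+ACK / ACK

conn, addr = srv.accept()                   # server side of the connection
conn.sendall(b"welcome to the stream")      # a stream of bytes, no message boundaries
print(client.recv(1024))                    # b'welcome to the stream'

conn.close(); client.close(); srv.close()   # FIN exchange tears the connection down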
Key Features:
• Numbering System: Uses sequence and acknowledgment numbers to track bytes in the data stream.
• Flow Control: Manages the rate of data transmission between sender and receiver to prevent
overwhelming the receiver.
• Error Control: Detects and corrects errors in data transmission.
• Congestion Control: Adjusts the rate of data transmission based on network congestion.
Segments and Data Transfer:
• Segments: TCP groups bytes into segments, adds headers, and sends them to the IP layer. Segments
can vary in size and may be sent out of order, requiring reassembly at the receiving end.
• Buffers: Both sending and receiving processes use buffers to manage data flow and storage.
• Connection Establishment: Achieved through a three-way handshake involving SYN and ACK
segments.
• Data Transfer: Segments can carry both data and acknowledgments, with control mechanisms like
the PSH (push) flag ensuring timely delivery to the receiving process.
Segment Format:
• Header Fields: Includes source and destination port addresses, sequence and acknowledgment
numbers, header length, control flags, window size, checksum, and urgent pointer.
• Control Flags: Indicate various states and actions, such as synchronization (SYN), acknowledgment
(ACK), and termination (FIN).
Security:
• SYN Flooding Attack: A type of denial-of-service attack that exploits the connection establishment
phase, overloading the server with fake SYN requests. Countermeasures include resource limits,
filtering unwanted addresses, and using cookies to defer resource allocation.
Flow Control
TCP uses a sliding window protocol to manage flow control, functioning as a hybrid between Go-Back-N
and Selective Repeat protocols. Here are the key points:
1. Sliding Window Protocol:
o Byte-Oriented: Unlike the data link layer which is frame-oriented, TCP's sliding window is
byte-oriented.
o Variable Size: The window size can change, controlled by the receiver.
o No NAKs: Unlike Go-Back-N, TCP does not use negative acknowledgments (NAKs).
o Out-of-Order Handling: Like Selective Repeat, the receiver holds out-of-order segments until
missing ones arrive.
2. Window Mechanics:
o Opening: Moving the right wall to the right allows more bytes to be sent.
o Closing: Moving the left wall to the right acknowledges bytes.
o Shrinking: Moving the right wall to the left is discouraged and often not allowed as it can
cause complications if bytes have already been sent.
3. Flow Control:
o The window size at one end is the lesser of the receiver window (rwnd) and the congestion
window (cwnd).
o Receiver Window (rwnd): Advertised by the receiver, indicating the buffer space available.
o Congestion Window (cwnd): Determined by the network to avoid congestion.
Error Control
TCP ensures reliable data delivery using several mechanisms:
1. Checksum:
o A 16-bit checksum in each segment detects corruption. If corrupted, the segment is discarded.
2. Acknowledgment (ACK):
o Confirms the receipt of data segments.
o Control segments (no data) that consume sequence numbers are also acknowledged.
3. Retransmission:
o Segments are retransmitted if a retransmission timer expires or three duplicate ACKs are
received.
o No retransmission for segments that do not consume sequence numbers, like ACK segments.
4. Out-of-Order Segments:
o Temporarily stored until the missing segment arrives, ensuring data is delivered to the
application in order.
Scenarios
1. Normal Operation:
o Bidirectional data transfer with timely acknowledgments.
2. Lost Segment:
o Lost or corrupted segments are retransmitted when the retransmission timer expires.
3. Fast Retransmission:
o Segments are retransmitted immediately upon receiving three duplicate ACKs, ensuring
faster recovery from lost segments.
SCTP
Stream Control Transmission Protocol (SCTP) is a reliable, message-oriented transport layer protocol
designed for newer Internet applications that need more sophisticated services than TCP can provide. These
applications include ISDN over IP (IUA), telephony signaling (M2UA, M3UA), media gateway control
(H.248), IP telephony (H.323, SIP), and others. SCTP offers enhanced performance and reliability compared
to TCP and UDP. Here’s a brief comparison of the three:
• UDP: Message-oriented, conserves message boundaries, unreliable (messages can be lost,
duplicated, or out of order), lacks congestion and flow control.
• TCP: Byte-oriented, reliable (detects duplicates, resends lost segments, delivers bytes in order),
includes congestion and flow control.
• SCTP: Combines the best features of UDP and TCP, reliable, message-oriented, preserves message
boundaries, detects lost, duplicate, and out-of-order data, includes congestion and flow control, and
offers additional features like multistreaming and multihoming.
Key Features of SCTP:
1. Process-to-Process Communication: Uses well-known ports in the TCP space.
2. Multiple Streams: Allows multiple streams in each connection (association). If one stream is
blocked, others can still deliver data.
3. Multihoming: Supports multiple IP addresses for each end of an association, enabling fault
tolerance.
4. Full-Duplex Communication: Data can flow in both directions simultaneously.
5. Connection-Oriented Service: Establishes an association before data transfer and terminates it
afterward.
6. Reliable Service: Uses acknowledgment mechanisms to ensure data is delivered safely and soundly.
7. Transmission Sequence Number (TSN): Numbers data chunks for transfer control.
8. Stream Identifier (SI): Identifies each stream within an association.
9. Stream Sequence Number (SSN): Orders data chunks within a stream.
10. Packets and Chunks: Data and control information are carried in chunks within packets. Control
chunks manage the association, and data chunks carry user data.
SCTP Packet Format:
• General Header: Includes source and destination port addresses, a verification tag, and a checksum.
• Chunks: Control and data information are carried in chunks. Control chunks precede data chunks in
a packet.
Association Establishment: SCTP uses a four-way handshake to establish an association, including an INIT
chunk, INIT ACK chunk, COOKIE ECHO chunk, and COOKIE ACK chunk. This process ensures that
resources are allocated only after verifying the client’s IP address.
Data Transfer: SCTP maintains message boundaries and supports bidirectional data transfer. Unlike TCP,
which treats data as a byte stream, SCTP treats data as distinct messages.
SCTP’s design addresses some of TCP’s limitations, making it well-suited for applications requiring
reliable, ordered, and timely message delivery.
Flow Control
Flow control in SCTP (Stream Control Transmission Protocol) is akin to that in TCP but involves handling
both bytes and chunks. Here’s a concise breakdown:
Receiver Site
• Buffer (queue): Stores received data chunks not yet read by the process.
• Variables:
o cumTSN: Last received TSN (Transmission Sequence Number).
o winSize: Available buffer size.
o lastACK: Last cumulative acknowledgment sent.
Process:
1. On receiving a data chunk:
▪ Store it in the buffer.
▪ Subtract chunk size from winSize.
▪ Update cumTSN.
2. On process reading a chunk:
▪ Remove it from the buffer.
▪ Add chunk size back to winSize.
3. On sending a SACK:
▪ Send cumulative TSN equal to cumTSN and the current winSize.
Sender Site
• Buffer (queue): Holds chunks ready to be sent or already sent but not yet acknowledged.
• Variables:
o curTSN: TSN of the next chunk to be sent.
o rwnd: Last advertised receiver window size.
o inTransit: Number of bytes sent but not yet acknowledged.
Process:
1. Send a chunk if its size is less than or equal to rwnd - inTransit:
▪ Increment curTSN.
▪ Increment inTransit by the chunk size.
2. On receiving a SACK:
▪ Remove acknowledged chunks from the queue.
▪ Reduce inTransit by the total size of discarded chunks.
▪ Update rwnd with the advertised window in the SACK.
Scenario
• Initial rwnd and winSize: 2000 bytes.
• Sender sends chunks, updates inTransit.
• Receiver acknowledges with SACK, sender updates inTransit and rwnd.
• Sender can send new chunks based on updated values.
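A minimal sketch of the sender-side rule in this scenario: a chunk is sent only if it fits inside rwnd - inTransit; variable names follow the text and the chunk sizes are illustrative.

rwnd, in_transit, cur_tsn = 2000, 0, 1      # initial values from the scenario

def try_send(chunk_size):
    global in_transit, cur_tsn
    if chunk_size <= rwnd - in_transit:
        in_transit += chunk_size
        cur_tsn += 1
        return True        # chunk goes out carrying TSN cur_tsn - 1
    return False           # must wait for a SACK to open the window

def on_sack(acked_bytes, advertised_window):
    global in_transit, rwnd
    in_transit -= acked_bytes        # acknowledged chunks leave the queue
    rwnd = advertised_window         # receiver's current winSize

print(try_send(1000), try_send(1000), try_send(500))   # True True False
on_sack(2000, 2000)
print(try_send(500))                                    # True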
Error Control
• Receiver Site:
o Stores all received chunks, including out-of-order ones, and leaves spaces for missing
chunks.
o Tracks duplicate chunks for reporting.
o Sends SACK to report buffer state, including cumTSN and out-of-order chunks.
• Sender Site:
o Two buffers: sending queue (for chunks to be sent) and retransmission queue (for lost
chunks).
o Uses retransmission timers and fast retransmission on receiving duplicate SACKs.
Congestion Control
• Similar strategies to TCP, including:
o Slow start: Exponential increase of sending rate.
o Congestion avoidance: Additive increase.
o Congestion detection: Multiplicative decrease.
o Fast retransmission and fast recovery.
24
CONGESTION CONTROL AND QUALITY OF SERVICE
DATA TRAFFIC
Understanding data traffic is crucial for implementing effective congestion control and ensuring quality of
service in networks. Here’s a summary of key concepts:
Traffic Descriptors
Traffic descriptors are qualitative values representing data flow, including:
• Average Data Rate: The total amount of data sent divided by the time period, indicating the average
bandwidth needed.
• Peak Data Rate: The highest data rate, showing the peak bandwidth required for traffic.
• Maximum Burst Size: The longest duration during which traffic is generated at the peak rate,
important for network capacity planning.
• Effective Bandwidth: A complex calculation involving average data rate, peak data rate, and
maximum burst size, indicating the bandwidth required by the network for traffic flow.
Traffic Profiles
Data flows can be categorized into three profiles:
• Constant Bit Rate (CBR): Data rate remains fixed. The average and peak data rates are the same,
making it easy for networks to handle due to its predictability.
• Variable Bit Rate (VBR): Data rate changes smoothly over time. Average and peak data rates
differ, with a usually small maximum burst size, making it moderately challenging for networks.
• Bursty: Data rate changes suddenly and sharply, with significant differences between average and
peak data rates and a large maximum burst size. This unpredictable profile is difficult for networks to
manage and often requires reshaping techniques.
Handling Traffic
• CBR: Predictable and straightforward for networks to allocate bandwidth.
• VBR: More challenging but typically manageable without reshaping.
• Bursty: Causes congestion due to its unpredictability, often necessitating traffic reshaping to prevent
network overloads.
Understanding these traffic characteristics helps in designing better congestion control mechanisms and
quality of service strategies to maintain network performance and reliability.
CONGESTION
Congestion in a packet-switched network occurs when the load (number of packets sent) exceeds the
network's capacity (number of packets the network can handle). Congestion control involves mechanisms to
keep the load below the network's capacity to prevent delays and packet loss.
Causes of Congestion
• Queuing at Routers: Routers have input and output queues for holding packets before and after
processing.
o Input Queue: Holds incoming packets waiting to be processed.
o Processing Module: Uses routing tables to determine the route for packets.
o Output Queue: Holds packets waiting to be sent out.
If packet arrival rate exceeds processing rate, input queues grow longer. Similarly, if packet departure rate is
less than processing rate, output queues grow longer.
Network Performance Factors
Congestion control focuses on two key performance metrics:
1. Delay: The time taken for packets to travel from source to destination.
2. Throughput: The number of packets passing through the network in a unit of time.
Delay vs. Load
• Low Load: Delay is minimal, primarily consisting of propagation and processing delays.
• High Load: As load approaches capacity, delay increases sharply due to queuing.
• Overload: When load exceeds capacity, delay can become infinite as queues grow indefinitely,
causing significant retransmission and worsening congestion.
Throughput vs. Load
• Proportional Increase: Throughput increases proportionally with load when below capacity.
• Sharp Decline: When load exceeds capacity, throughput declines sharply due to packet discarding
by routers.
o Packet Discarding: Full queues force routers to discard packets, leading to retransmissions
and further congestion.
Effective congestion control strategies are essential to maintain network performance by managing delay
and maximizing throughput while preventing overload conditions.
CONGESTION CONTROL
Congestion control techniques and mechanisms aim to either prevent congestion before it occurs (open-loop
control) or alleviate it after it happens (closed-loop control).
Open-Loop Congestion Control
Open-loop control involves policies to prevent congestion. These mechanisms are implemented by either the
source or destination.
• Retransmission Policy: Optimizing retransmission timers and policies to prevent unnecessary
retransmissions, which can contribute to congestion.
• Window Policy: Using Selective Repeat instead of Go-Back-N to avoid unnecessary retransmission
of packets that have already been correctly received.
• Acknowledgment Policy: Reducing the frequency of acknowledgments to decrease network load.
• Discarding Policy: Implementing policies for routers to discard less critical packets during potential
congestion, preserving the overall quality of transmission.
• Admission Policy: Checking resource requirements before admitting a flow into the network to
prevent overloading.
Closed-Loop Congestion Control
Closed-loop control mechanisms address congestion after it occurs, involving feedback from the network.
• Backpressure: A congested node stops receiving data from upstream nodes, propagating the
congestion signal back to the source, applicable only in virtual-circuit networks.
• Choke Packet: A congested router sends a packet directly to the source to inform it of congestion,
prompting the source to slow down.
• Implicit Signaling: The source infers congestion from symptoms like delayed acknowledgments and
slows down the data transmission.
• Explicit Signaling: Congestion signals are embedded within the data packets, either warning the
source (backward signaling) or the destination (forward signaling).
Key Methods in Closed-Loop Congestion Control:
1. Backpressure:
o Congested node stops accepting data from upstream nodes.
o Propagates the congestion signal back to the source.
o Suitable for virtual-circuit networks.
2. Choke Packet:
o Congested node sends a warning packet directly to the source.
o Intermediate nodes are not informed.
3. Implicit Signaling:
o Source infers congestion from network behavior, like delayed acknowledgments.
4. Explicit Signaling:
o Congestion signals are included in data packets.
o Can be either backward (source warning) or forward (destination warning) signaling.
TWO EXAMPLES
Congestion Control in TCP
TCP implements congestion control through a combination of flow control and congestion control
mechanisms to ensure efficient and reliable data transmission.
Congestion Window
The size of the sender's window in TCP is determined by two factors: the receiver-advertised window size
(rwnd) and the congestion window size (cwnd). The actual window size used by the sender is the minimum
of these two values: Actual window size = min(rwnd, cwnd).
Congestion Control Phases
1. Slow Start (Exponential Increase):
o Starts with a cwnd of 1 Maximum Segment Size (MSS).
o The size of the congestion window increases exponentially (doubles) for each
acknowledgment received.
o Continues until the cwnd reaches a threshold (ssthresh).
2. Congestion Avoidance (Additive Increase):
o Begins when cwnd reaches the ssthresh.
o The size of the congestion window increases linearly (by 1 MSS) for each round of
acknowledgments received.
o Slows down the growth to avoid congestion.
3. Congestion Detection (Multiplicative Decrease):
o Triggered by packet loss detected through a timeout or receipt of three duplicate ACKs.
o Timeout:
▪ Strong reaction: sets ssthresh to half of the current window size and cwnd to 1 MSS.
▪ Starts the slow start phase again.
o Three Duplicate ACKs:
▪ Weaker reaction: sets ssthresh to half of the current window size and cwnd to the
value of ssthresh.
▪ Enters the congestion avoidance phase (fast recovery).
Example Scenario
Consider a TCP connection with a maximum window size of 32 segments and an initial ssthresh of 16
segments:
• Slow Start: cwnd increases exponentially until it reaches 16 segments.
• Congestion Avoidance: cwnd increases linearly beyond 16 segments until a timeout or three
duplicate ACKs occur.
• Timeout at cwnd = 20: ssthresh is set to 10, and cwnd is reset to 1 MSS, starting slow start again.
• Three Duplicate ACKs at cwnd = 12: ssthresh is set to 6, and cwnd is set to 6 MSS, entering
congestion avoidance.
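The scenario can be sketched in a few lines of Python, with cwnd and ssthresh measured in segments (MSS); this is a simplified model of the three phases, not a full TCP implementation.

cwnd, ssthresh = 1, 16             # initial values from the scenario

def on_round_ack():
    """One round of acknowledgments arrived without loss."""
    global cwnd
    if cwnd < ssthresh:
        cwnd *= 2                  # slow start: exponential increase
    else:
        cwnd += 1                  # congestion avoidance: additive increase

def on_timeout():
    global cwnd, ssthresh
    ssthresh = cwnd // 2
    cwnd = 1                       # strong reaction: back to slow start

def on_three_dup_acks():
    global cwnd, ssthresh
    ssthresh = cwnd // 2
    cwnd = ssthresh                # weaker reaction: continue in congestion avoidance

for _ in range(8):
    on_round_ack()                 # cwnd: 2, 4, 8, 16, 17, 18, 19, 20
on_timeout()
print(cwnd, ssthresh)              # 1 10, matching the timeout case above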
Congestion Control in Frame Relay
Frame Relay uses congestion control mechanisms to maintain high throughput and low delay, even in the
presence of bursty data.
Congestion Avoidance
Frame Relay employs two bits in the frame to explicitly signal congestion:
• BECN (Backward Explicit Congestion Notification):
o Warns the sender of congestion.
o Can use response frames from the receiver or a predefined connection (DLCI = 1023) to send
special frames.
o The sender can reduce the data rate in response.
• FECN (Forward Explicit Congestion Notification):
o Warns the receiver of congestion.
o Assumes higher-level flow control between sender and receiver.
o The receiver can delay acknowledgments, prompting the sender to slow down.
Example Scenarios
• No Congestion: Both FECN and BECN bits are 0.
• Congestion in the Direction A to B: FECN bit is 1, and BECN bit is 0.
• Congestion in the Direction B to A: FECN bit is 0, and BECN bit is 1.
• Congestion in Both Directions: Both FECN and BECN bits are 1.
By using these mechanisms, Frame Relay helps manage and alleviate congestion, ensuring efficient network
performance.
QUALITY OF SERVICE
Quality of Service (QoS) refers to the overall performance of a network or a service, particularly in terms of
its ability to meet certain performance requirements for data transmission. QoS is essential for ensuring that
network services meet the needs of applications and users.
Flow Characteristics
QoS is often defined by four primary characteristics of data flows: reliability, delay, jitter, and bandwidth.
1. Reliability
o Reliability refers to the ability of the network to deliver packets accurately and without loss.
o High reliability is crucial for applications such as email, file transfer, and Internet access,
where packet loss can result in the need for retransmissions and potential data corruption.
o In contrast, applications like telephony and audio conferencing can tolerate some level of
packet loss without significant degradation in service quality.
2. Delay
o Delay, or latency, is the time it takes for a packet to travel from the source to the destination.
o Different applications have varying sensitivity to delay:
▪ Low delay is critical for real-time applications such as telephony, audio conferencing,
video conferencing, and remote log-in, where timely delivery of packets is essential
for maintaining interactivity and user experience.
▪ Higher delay can be tolerated by applications like file transfer and email, where the
immediacy of packet delivery is less critical.
3. Jitter
o Jitter refers to the variability in packet delay within the same flow.
o Consistent delay (low jitter) is crucial for audio and video applications, where variations can
lead to disruptions in playback and user experience.
o For example, if packets arrive at regular intervals, the application can process them smoothly.
However, if there are significant variations in arrival times, the application might experience
buffering issues or playback glitches.
4. Bandwidth
o Bandwidth is the capacity of the network to transmit data, measured in bits per second (bps).
o Different applications require different amounts of bandwidth:
▪ High-bandwidth applications include video conferencing and streaming, which need a
continuous flow of large amounts of data to ensure high-quality video and audio.
▪ Lower bandwidth is sufficient for applications like email, where the total data
transmitted is relatively small.
Flow Classes
Based on these characteristics, data flows can be categorized into different classes, each with specific QoS
requirements. While the categorization is not formal or universal, it helps in understanding the varying needs
of different applications and the QoS mechanisms that can be employed to meet those needs.
For instance, Asynchronous Transfer Mode (ATM) networks have defined specific QoS classes, but other
protocols and networks may use different classifications.
INTEGRATED SERVICES
Integrated Services (IntServ) is a flow-based Quality of Service (QoS) model designed to provide
guaranteed QoS over the Internet. IntServ allows applications to reserve resources (such as bandwidth and
buffer space) across a network to ensure the desired level of service. This is particularly important for real-
time applications like audio and video streaming, which require consistent performance.
Key Components of Integrated Services
1. Signaling Protocol: RSVP
o Resource Reservation Protocol (RSVP): RSVP is used to reserve resources along the data
path. It helps set up a flow by communicating the resource requirements to each router in the
path.
2. Flow Specification
o Rspec (Resource Specification): Defines the resources needed (e.g., bandwidth, buffer size).
o Tspec (Traffic Specification): Describes the characteristics of the traffic (e.g., data rate,
burstiness).
3. Admission Control
o Routers decide whether to accept or reject a flow based on current resource availability and
existing commitments.
4. Service Classes
o Guaranteed Service: Provides a guaranteed minimum end-to-end delay, suitable for real-
time applications needing strict delay limits.
o Controlled-Load Service: Offers a service quality comparable to an unloaded network,
suitable for applications that can tolerate some delay but are sensitive to congestion.
RSVP Details
• Multicast Trees: RSVP supports multicast by allowing reservations for multiple receivers.
Unicasting is a special case of multicasting with one receiver.
• Receiver-Based Reservation: Unlike some other protocols, RSVP reservations are initiated by
receivers, which aligns with the multicast model.
• RSVP Messages:
o Path Messages: Sent by the sender to establish the path and store information at each router.
o Resv Messages: Sent by the receivers to reserve resources along the path established by the
Path messages.
Reservation Merging and Styles
• Reservation Merging: When multiple receivers request different amounts of bandwidth, routers
merge these requests to reserve the maximum required bandwidth.
• Reservation Styles:
o Wild Card Filter Style: Single reservation for all senders, used when flows from different
senders do not occur simultaneously.
o Fixed Filter Style: Distinct reservations for each flow, used when multiple flows are likely to
occur simultaneously.
o Shared Explicit Style: A single reservation shared by a set of flows.
Soft State
• Soft State: The reservation information in routers needs periodic refreshing (every 30 seconds by
default). This contrasts with hard state protocols, where the flow information remains until explicitly
removed.
Problems with Integrated Services
• Scalability: IntServ requires each router to maintain state information for each flow, which can
become problematic as the number of flows increases.
• Service-Type Limitation: IntServ currently supports only two types of services (guaranteed and
controlled-load), which may not cover all possible application requirements.
DIFFERENTIATED SERVICES
Differentiated Services (DiffServ) is a class-based Quality of Service (QoS) model designed to overcome the
limitations of Integrated Services (IntServ). It was introduced by the IETF (Internet Engineering Task Force)
to address issues such as scalability and service type limitation.
Key Features of DiffServ
1. Edge Processing: The main processing and classification of packets are done at the edge of the
network, rather than at every router within the network. This alleviates the need for core routers to
maintain state information for every flow, enhancing scalability.
2. Per-Class Service: Unlike IntServ's per-flow service, DiffServ categorizes packets into classes. Each
packet is marked with a specific class, and routers handle packets based on their class, not individual
flows. This allows for a wider range of service types.
DS Field
• DS Field: In DiffServ, each packet contains a DS field, which replaces the TOS (Type of Service)
field in IPv4 or the class field in IPv6.
o DSCP (Differentiated Services Code Point): A 6-bit subfield within the DS field that
specifies the per-hop behavior (PHB) for the packet.
o CU (Currently Unused): A 2-bit subfield that is currently not used.
Per-Hop Behavior (PHB)
DiffServ defines specific behaviors for how routers should handle packets:
1. Default PHB (DE PHB): Equivalent to best-effort delivery, compatible with traditional TOS.
2. Expedited Forwarding PHB (EF PHB): Ensures low loss, low latency, and guaranteed bandwidth,
akin to a virtual circuit.
3. Assured Forwarding PHB (AF PHB): Provides reliable delivery as long as the traffic adheres to
the specified profile, with the possibility of discarding packets during congestion.
Traffic Conditioner
DiffServ uses traffic conditioners to manage traffic entering the network:
1. Meters: Check if the incoming flow matches the negotiated traffic profile, using tools like token
buckets.
2. Markers: Modify the DSCP value of packets based on the meter's results. Can down-mark packets
that exceed the profile but cannot up-mark them.
3. Shapers: Adjust the flow of packets to conform to the traffic profile, smoothing out bursts.
4. Droppers: Discard packets that exceed the traffic profile, acting like a shaper without buffering.
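To illustrate the meter step, here is a minimal token bucket sketch: tokens accumulate at a configured rate up to the bucket capacity, and a packet conforms to the profile only if enough tokens are available; all parameter values are illustrative.

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate            # tokens (bytes) added per second
        self.capacity = capacity    # maximum burst size in bytes
        self.tokens = capacity
        self.last = 0.0

    def conforms(self, packet_size, now):
        elapsed = now - self.last
        self.last = now
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        if packet_size <= self.tokens:
            self.tokens -= packet_size
            return True             # in profile: forward unchanged
        return False                # out of profile: down-mark, shape, or drop

meter = TokenBucket(rate=1000, capacity=2000)
print(meter.conforms(1500, now=0.0))   # True  (the burst fits the bucket)
print(meter.conforms(1500, now=0.1))   # False (only ~600 tokens remain)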
Comparison: Integrated Services vs. Differentiated Services
Integrated Services (IntServ):
• Flow-Based Model: Each flow requires resource reservations.
• Signaling Protocol: Uses RSVP for signaling and resource reservation.
• State Maintenance: Routers maintain state information for each flow, leading to scalability issues.
• Service Types: Limited to Guaranteed Service and Controlled-Load Service.
Differentiated Services (DiffServ):
• Class-Based Model: Packets are categorized into classes, with processing done at the network edge.
• No Per-Flow State: Routers do not maintain state information for each flow, enhancing scalability.
• Service Variety: Supports multiple service types through various per-hop behaviors.
• Traffic Management: Utilizes traffic conditioners (meters, markers, shapers, droppers) for effective
traffic management.
25
DOMAIN NAME SYSTEM
Domain Name System (DNS)
The Domain Name System (DNS) is a hierarchical and decentralized naming system used to translate
human-readable domain names (like www.example.com) into the numerical IP addresses that computers use
to identify each other on the network. DNS serves as a critical supporting program for many application
layer protocols, including e-mail.
DNS in Action: E-mail Example
When an e-mail is sent to an address at the domain wonderful.com, the e-mail application uses DNS to
resolve the domain name wonderful.com to its corresponding IP address (e.g., 200.200.200.5). The DNS
client (resolver) in the user's system sends a request to a DNS server to perform this translation.
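The same name-to-address lookup can be invoked from any host's stub resolver. A minimal Python sketch using the standard library ("example.com" is only a placeholder domain):

import socket

# Resolve a name with the system's stub resolver; "example.com" is a placeholder.
for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", None):
    print(family, sockaddr[0])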
NAMESPACE
To ensure unambiguous naming and efficient mapping between names and IP addresses, DNS uses a
hierarchical name space.
1. Flat Name Space: A flat name space assigns names without structure, making it impractical for
large-scale systems due to potential name collisions and central management requirements.
2. Hierarchical Name Space: A hierarchical name space organizes names into a tree structure. This
allows for decentralized control, where different levels of the hierarchy are managed by different
authorities, reducing the risk of name collisions.
DOMAIN NAMESPACE
The hierarchical structure of the DNS name space is depicted as an inverted tree with the root at the top.
1. Labels: Each node in the tree has a label, which can be up to 63 characters long. Sibling nodes
(nodes that share the same parent) must have unique labels.
2. Domain Names: A domain name is a sequence of labels separated by dots, read from the specific
node up to the root. The root node's label is an empty string, represented by a trailing dot in the
domain name.
o Fully Qualified Domain Name (FQDN): A domain name that includes all labels from the
specific node to the root, ending with a dot (e.g., challenger.atc.fhda.edu.). An FQDN
uniquely identifies a host.
o Partially Qualified Domain Name (PQDN): A domain name that does not end with a dot
and is not fully specified. It is often used within a local context, where the DNS client
appends a suffix to complete the name (e.g., challenger might become
challenger.atc.fhda.edu.).
DISTRIBUTION OF NAMESPACE
DNS employs a distributed approach to manage the vast amount of name-to-address mapping data. This
method ensures scalability and reliability.
1. Zones: The DNS name space is divided into zones, each managed by a different organization or
authority. Each zone is a portion of the domain tree, and a single DNS server (or a set of servers) is
responsible for managing it.
2. Domains: A domain is a subtree within the DNS hierarchy. The name of a domain corresponds to
the domain name of the top node in the subtree. Domains can be further divided into subdomains,
each managed by different entities.
DNS Hierarchy Example
26
REMOTE LOGGING, ELECTRONIC MAIL, AND FILE
TRANSFER
REMOTE LOGGING
In the Internet, users often need to run application programs on remote systems and transfer results back to
their local machines. This need can be fulfilled through general-purpose client/server programs that allow
users to log in to remote computers. TELNET is an example of such a client/server application, enabling
users to establish connections to remote systems as if their local terminal were directly connected to the
remote machine.
TELNET
TELNET (TErminaL NETwork) is the standard TCP/IP protocol for virtual terminal service, allowing users
to access remote systems and run applications as if they were on a local terminal. It operates by creating a
virtual terminal, making the local terminal appear to be part of the remote system.
Timesharing Environment
TELNET was designed for timesharing environments where a single computer supports multiple users
through terminals. Users can log into these systems using a user ID and password, gaining access to the
system's resources.
Logging Process
Local log-in: In a timesharing environment, users type commands on a local terminal or terminal emulator.
These commands are processed by the local operating system and passed to application programs.
Remote log-in: When accessing a remote system, the process involves the TELNET client and server. The
user's keystrokes are sent from the local terminal to the TELNET client, which converts them to the
Network Virtual Terminal (NVT) character set. These characters are transmitted over the Internet to the
TELNET server on the remote system, where they are interpreted and processed by the remote operating
system via a pseudoterminal driver.
Network Virtual Terminal (NVT)
To handle the heterogeneity of different computer systems, TELNET defines a universal interface called the
Network Virtual Terminal (NVT). The NVT character set consists of 8-bit bytes: the lower 7 bits follow
ASCII, and the highest-order bit is used for control characters.
• Data characters: 7-bit ASCII with the highest-order bit set to 0.
• Control characters: 8-bit characters with the highest-order bit set to 1.
Embedding Control Characters
TELNET uses a single TCP connection for both data and control characters. Control characters are
embedded in the data stream and are preceded by the Interpret As Control (IAC) character to distinguish
them from data.
Options and Option Negotiation
TELNET allows the negotiation of various options between the client and server to enhance functionality.
Common options include binary transmission, echo, suppress go-ahead signals, and terminal type. Option
negotiation uses specific control characters to request, enable, or disable options.
Example: To enable the echo option, the client sends a DO ECHO request, and the server responds with
WILL ECHO if it agrees.
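On the wire this negotiation is just a few bytes. A minimal Python sketch using the standard TELNET codes from RFC 854/857 (IAC = 255, DO = 253, WILL = 251, ECHO option = 1):

# Standard TELNET codes: IAC = 255, WILL = 251, DO = 253; the ECHO option is 1.
IAC, WILL, DO = 255, 251, 253
ECHO = 1
do_echo = bytes([IAC, DO, ECHO])        # client: "please perform echoing"
will_echo = bytes([IAC, WILL, ECHO])    # server: "I will perform echoing"
print(do_echo.hex(), will_echo.hex())   # fffd01 fffb01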
Modes of Operation
TELNET operates in three modes:
1. Default Mode: Echoing is done by the client; characters are sent only after a whole line is
completed.
2. Character Mode: Each character is sent immediately to the server, which echoes it back. This mode
can cause delays in echoing due to transmission time and increases network traffic.
3. Line Mode: Line editing is done by the client, and the entire line is sent to the server. This mode
reduces network traffic and delays.
ELECTRONIC MAIL
Electronic mail (e-mail) is one of the most popular Internet services, allowing users to send and receive
messages. Initially, e-mail messages were short text memos, but today they can include text, audio, and
video, and be sent to multiple recipients.
Architecture
E-mail architecture involves three main components: user agents, message transfer agents (MTAs), and
message access agents (MAAs). Four scenarios illustrate the architecture:
First Scenario
In the simplest scenario, both the sender and receiver are on the same system. Each user has a mailbox
where messages are stored. A user agent (UA) is used to send and retrieve messages.
Second Scenario
In this scenario, the sender and receiver are on different systems, requiring message transfer over the
Internet. Two user agents and a pair of message transfer agents (MTAs) (one client and one server) are used.
Third Scenario
Here, the sender is connected to their mail server via a WAN or LAN. Two pairs of MTAs are needed to
handle the transfer of messages between the sender's and receiver's systems.
Fourth Scenario
In the most common scenario, both the sender and receiver are connected to their mail servers via a WAN or
LAN. This setup requires user agents, two pairs of MTAs, and a pair of message access agents (MAAs) for
retrieving messages.
User Agent
A user agent (UA) helps users compose, read, reply to, forward messages, and handle mailboxes. There are
two types of user agents: command-driven and GUI-based.
Services Provided by a User Agent
1. Composing Messages: User agents help users create e-mail messages with templates and built-in
editors.
2. Reading Messages: User agents display summaries of incoming messages and allow users to read
them.
3. Replying to Messages: Users can reply to the original sender or all recipients.
4. Forwarding Messages: Messages can be forwarded to third parties.
5. Handling Mailboxes: User agents manage inboxes and outboxes for received and sent messages.
User Agent Types
• Command-Driven: Accepts commands from the keyboard (e.g., mail, pine, elm).
• GUI-Based: Uses graphical interfaces for interaction (e.g., Eudora, Outlook, Netscape).
Sending and Receiving Mail
E-mail messages consist of an envelope (sender and receiver addresses) and the message (header and body).
The user agent manages the sending and receiving process, displaying message summaries and handling
mailboxes.
Addresses
E-mail addresses have two parts: the local part (user mailbox) and the domain name (mail server).
Mailing List
A mailing list allows one alias to represent multiple e-mail addresses, enabling messages to be sent to
multiple recipients.
MIME
Multipurpose Internet Mail Extensions (MIME) allow non-ASCII data (e.g., binary files, audio, video) to be
sent via e-mail by transforming it to NVT ASCII data. MIME headers include MIME-Version, Content-
Type, Content-Transfer-Encoding, Content-Id, and Content-Description.
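A minimal Python sketch of building such a message with the standard email package (the addresses are placeholders); printing it shows the MIME-Version, Content-Type, and Content-Transfer-Encoding headers described above:

from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Build a simple MIME message; the addresses are placeholders.
msg = MIMEMultipart()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "MIME demo"
msg.attach(MIMEText("Hello, this is the text part.", "plain"))
print(msg.as_string())   # shows MIME-Version, Content-Type, and the multipart boundary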
Message Transfer Agent: SMTP
The Simple Mail Transfer Protocol (SMTP) is used for sending e-mail. SMTP involves commands and
responses between an MTA client and server. Mail transfer occurs in three phases: connection
establishment, mail transfer, and connection termination. SMTP is used between the sender's mail server and
the receiver's mail server.
Commands and Responses
SMTP commands (e.g., HELO, MAIL FROM, RCPT TO, DATA, QUIT) are sent from the client to the
server, and responses (e.g., 250 Request command completed, 354 Start mail input) are sent from the server
to the client.
Mail Transfer Phases
1. Connection Establishment: Establishing a connection between the client and server.
2. Mail Transfer: Transferring the message from the sender to the recipient.
3. Connection Termination: Ending the connection between the client and server.
This comprehensive system allows the efficient and reliable exchange of electronic mail across different
systems and networks.
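The three phases map directly onto a short client session. A minimal Python sketch with smtplib (the server name and addresses are placeholders):

import smtplib

# The host name and addresses below are placeholders.
with smtplib.SMTP("mail.example.com", 25) as smtp:       # connection establishment
    smtp.sendmail("alice@example.com",                    # MAIL FROM
                  ["bob@example.com"],                     # RCPT TO
                  "Subject: test\r\n\r\nHello from SMTP")  # DATA (headers, blank line, body)
# leaving the with-block issues QUIT (connection termination)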
FILE TRANSFER
Transferring files between computers is a fundamental task in networking. One of the most widely used
protocols for this purpose is the File Transfer Protocol (FTP).
File Transfer Protocol (FTP)
FTP is a standard network protocol used to transfer files from one host to another over a TCP/IP-based
network. It addresses several issues such as different file naming conventions, text and data representations,
and directory structures across systems.
Key Features of FTP:
• Dual Connections: FTP establishes two connections between hosts. One is for control information
(commands and responses) using port 21, and the other is for data transfer using port 20. This
separation enhances efficiency.
• Control and Data Connections: Control connections manage commands and responses, using
simple communication rules with 7-bit ASCII characters. Data connections handle the actual file
transfer with more complex rules.
FTP Components:
• Client Side: Includes the user interface, client control process, and client data transfer process.
• Server Side: Includes the server control process and server data transfer process.
• Connection Management: The control connection remains open throughout the session, while the
data connection opens and closes for each file transfer.
Communication Over Control Connection
FTP uses a method similar to SMTP for control connection communication. Commands and responses are
sent one line at a time, using a simple ASCII character set. Each command or response is terminated with a
carriage return and line feed.
Communication Over Data Connection
Data connection transfers files under the control of commands sent over the control connection. File transfer
can involve:
• Retrieving a File: Copying a file from the server to the client using the RETR command.
• Storing a File: Copying a file from the client to the server using the STOR command.
• Listing Directory/Files: Sending a list of directory or file names from the server to the client using
the LIST command.
Preparing for Data Transfer: Before transferring files, FTP defines:
• File Type: ASCII, EBCDIC, or image files.
• Data Structure: File structure, record structure, or page structure.
• Transmission Mode: Stream mode, block mode, or compressed mode.
Example FTP Session: An actual FTP session for retrieving a directory list:
1. Establish Connection: Client connects to FTP server.
2. Login: Client sends username and password.
3. Send Command: Client sends LIST command to retrieve directory listing.
4. Data Transfer: Server opens data connection, sends directory listing, and then closes data
connection.
5. Close Session: Client sends QUIT command to end session.
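A minimal Python sketch of the same session using the standard ftplib module (the host name is a placeholder); the control connection carries the commands, while the listing arrives over a separate data connection:

import ftplib

ftp = ftplib.FTP("ftp.example.com")   # control connection to port 21
ftp.login()                           # defaults to user "anonymous"
ftp.retrlines("LIST")                 # LIST reply arrives over a separate data connection
ftp.quit()                            # QUIT ends the control connection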
Anonymous FTP
For public file access without requiring a user account, FTP supports anonymous FTP. Users log in with the
username "anonymous" and a generic password like "guest." Access is limited to a subset of commands,
often restricted to file copying without directory navigation.
Example Anonymous FTP Session:
1. Connect to Server: Client connects to a public FTP server.
2. Login as Anonymous: User logs in with "anonymous" and "guest."
3. Execute Commands: User navigates directories and retrieves files.
4. Close Connection: Client closes the connection after completing file transfers.
27
WWW AND HTTP
The World Wide Web (WWW) is a vast repository of information that is linked together from various points
across the globe. It stands out from other Internet services due to its flexibility, portability, and user-friendly
features. The WWW project was initially started by CERN to create a system to handle distributed resources
necessary for scientific research.
ARCHITECTURE
The WWW operates as a distributed client-server service. A client, typically using a web browser, can
access services hosted on a server. These services are distributed over many locations, known as sites. Each
site holds one or more documents referred to as Web pages, which can contain links to other pages on the
same or different sites. A user can request a document by sending a URL through their browser, and the
server responds by sending the requested document.
Key Components:
• Client (Browser): The browser consists of three main parts: the controller, client protocol, and
interpreters. The controller manages user inputs and document access. The client protocol can be
FTP, HTTP, or others. Interpreters like HTML, Java, or JavaScript are used to display the document.
• Server: Servers store Web pages and respond to client requests. They often use caching to improve
efficiency and can handle multiple requests simultaneously through multithreading or
multiprocessing.
Uniform Resource Locator (URL)
A URL is a standard for specifying the location of information on the Internet. It defines four key elements:
protocol, host computer, port, and path.
1. Protocol: The client-server program used to retrieve the document (e.g., HTTP, FTP).
2. Host: The computer where the information is located, often an alias like "www".
3. Port: An optional part of the URL that specifies the server's port number.
4. Path: The pathname of the file where the information is located, which can include directories and
subdirectories.
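A minimal Python sketch splitting a URL into these four elements with the standard library (the URL itself is a made-up example):

from urllib.parse import urlparse

url = "http://www.example.com:8080/docs/chapter1.html"   # made-up example URL
parts = urlparse(url)
print(parts.scheme, parts.hostname, parts.port, parts.path)
# -> http www.example.com 8080 /docs/chapter1.html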
Cookies
The WWW was originally designed as a stateless entity where a server responds to a client's request without
retaining any session information. However, modern web functions require maintaining state information for
various purposes like user authentication, e-commerce, and personalized content.
Creation and Storage:
1. When a server receives a request, it stores information about the client in a file or string, which may
include the client's domain name, the contents of the cookie, a timestamp, and other details.
2. The server includes the cookie in its response to the client.
3. The client's browser stores the cookie in a directory, sorted by domain server name.
Using Cookies: When the client sends a request, the browser checks for a cookie from the server and
includes it in the request if found. This helps the server recognize returning clients.
Applications of Cookies:
1. Restricted Access: Sites that allow only registered users send a cookie upon registration. Only clients
with the correct cookie are granted access.
2. E-commerce: Online stores use cookies to track items in a shopping cart. The cookie updates with
each item added, and the final cookie helps calculate the total charge during checkout.
3. Web Portals: Cookies store user preferences for quick access to favorite pages.
4. Advertising: Advertising agencies use cookies to track user interactions with banner ads, compiling
profiles that can be sold to other parties. This usage is controversial due to privacy concerns.
WEB DOCUMENTS
The documents on the World Wide Web (WWW) are categorized into three broad groups based on when
their contents are determined: static, dynamic, and active.
Static Documents
Static documents are fixed-content documents stored on a server. When a client requests a static document,
the server sends a copy of the document to the client. The contents of these documents do not change upon
each request unless manually updated on the server. Examples include HTML files containing text and
images.
• HTML (Hypertext Markup Language): A language used to create web pages. HTML uses tags to
define the structure and format of the content. Tags such as <b> and </b> for bold text, <i> and </i>
for italic text, and <u> and </u> for underlining are commonly used. The HTML document consists
of the head (metadata, title) and the body (actual content).
Dynamic Documents
Dynamic documents are generated by a web server at the time of the client's request. The server runs a script
or program to produce the content, which can vary with each request.
• Common Gateway Interface (CGI): A standard for creating dynamic documents. CGI allows the
server to execute scripts written in languages like C, C++, Perl, or Shell scripts. The output of these
scripts is sent as the response to the client.
• Scripting Technologies: Used to embed scripts within HTML for dynamic content. Examples
include PHP, Java Server Pages (JSP), Active Server Pages (ASP), and ColdFusion. These scripts
run on the server to generate parts of the document dynamically.
Active Documents
Active documents require a program or script to be executed at the client side (browser) to display content
like animations or interactive elements.
• Java Applets: Small programs written in Java, embedded in web pages. The applet is downloaded
and run by the client's browser.
• JavaScript: A scripting language that runs directly in the browser. JavaScript can be embedded in
HTML and is interpreted by the browser to create interactive content.
HTTP
HTTP (Hypertext Transfer Protocol) is a protocol primarily used for accessing data on the World Wide
Web. It combines aspects of FTP (File Transfer Protocol) and SMTP (Simple Mail Transfer Protocol).
While HTTP transfers files like FTP using TCP services, it is simpler because it uses a single TCP
connection without a separate control connection. HTTP messages resemble SMTP messages and use
MIME-like headers, but unlike SMTP messages, HTTP messages are not intended to be read by humans but
by HTTP clients (browsers) and servers. HTTP messages are delivered immediately, unlike SMTP messages
which are stored and forwarded.
HTTP Transactions
An HTTP transaction between a client and server involves the client sending a request message and the
server responding with a response message. Although HTTP uses TCP services, it is a stateless protocol,
meaning each transaction is independent of others. HTTP typically uses well-known port 80 for these
transactions.
Request and Response Messages
Both request and response messages have similar structures consisting of:
1. Request/Status Line: The first line in the message, which varies slightly between request and
response messages.
2. Headers: Additional information about the request or response.
3. Body: Optional, contains the document or information being sent.
Request Line and Status Line
• Request Line: Includes the request type (method), URL, and HTTP version.
• Status Line: Includes the HTTP version, status code, and status phrase.
Common Request Methods
• GET: Requests a document from the server.
• HEAD: Requests information about a document without the document itself.
• POST: Sends information from the client to the server.
• PUT: Sends a document to the server.
• TRACE: Echoes the incoming request.
• CONNECT: Reserved for future use.
• OPTIONS: Inquires about the available options.
Common Status Codes
• 100-199: Informational
• 200-299: Success (e.g., 200 OK)
• 300-399: Redirection (e.g., 301 Moved Permanently)
• 400-499: Client Errors (e.g., 404 Not Found)
• 500-599: Server Errors (e.g., 500 Internal Server Error)
Headers
Headers provide additional information and are categorized into:
1. General Headers: Apply to both request and response messages.
2. Request Headers: Specific to request messages.
3. Response Headers: Specific to response messages.
4. Entity Headers: Provide information about the body of the document.
Example HTTP Requests and Responses
Example 1: Retrieving a Document (GET Method)
• Request:
GET /usr/bin/image1 HTTP/1.1
Accept: image/gif
Accept: image/jpeg
• Response:
HTTP/1.1 200 OK
Date: Mon, 07-Jan-05 13:15:14 GMT
Server: Challenger
MIME-version: 1.0
Content-length: 2048
Example 2: Sending Data to Server (POST Method)
• Request:
POST /cgi-bin/doc.pl HTTP/1.1
Accept: */*
Accept: image/gif
Accept: image/jpeg
Content-length: 50
(Input information)
• Response:
HTTP/1.1 200 OK
Date: Mon, 07-Jan-02 13:15:14 GMT
Server: Challenger
MIME-version: 1.0
Content-length: 2000
Example 3: Using Telnet for HTTP
• Connection and Request:
$ telnet www.mhhe.com 80
GET /engcs/compsci/forouzan HTTP/1.1
From: [email protected]
• Response:
HTTP/1.1 200 OK
Date: Thu, 28 Oct 2004 16:27:46 GMT
Server: Apache/1.3.9 (Unix)
MIME-version: 1.0
Content-Type: text/html
Content-length: 14230
Persistent vs. Nonpersistent Connections
• Nonpersistent Connection: A new TCP connection is made for each request/response pair, causing
high overhead on the server.
• Persistent Connection: The default in HTTP/1.1, where the connection remains open for multiple
requests/responses, reducing overhead.
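With HTTP/1.1 a client can therefore reuse one TCP connection for several request/response pairs. A minimal Python sketch with http.client (the host and paths are placeholders):

import http.client

# One TCP connection reused for two requests; host and paths are placeholders.
conn = http.client.HTTPConnection("www.example.com", 80)
for path in ("/", "/about.html"):
    conn.request("GET", path)
    response = conn.getresponse()
    body = response.read()                 # the body must be read before reusing the connection
    print(path, response.status, len(body))
conn.close()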
Proxy Servers
A proxy server caches responses to recent requests, reducing load on the original server, decreasing traffic,
and improving latency. Clients must be configured to use the proxy server instead of directly accessing the
target server.
28
NETWORK MANAGEMENT: SNMP
NETWORK MANAGEMENT SYSTEM
A Network Management System (NMS) performs several key functions to maintain and optimize the
operation of a network. These functions can be broadly divided into five categories: configuration
management, fault management, performance management, security management, and accounting
management.
Configuration Management
Configuration management involves maintaining information about network components and their
configurations. This ensures that any changes in the network, such as hardware replacements, software
updates, or user movements, are tracked and documented. Configuration management can be divided into
two subsystems: reconfiguration and documentation.
Reconfiguration involves adjusting network components and features, including:
• Hardware Reconfiguration: Changes to physical devices, like replacing desktop computers or
relocating routers.
• Software Reconfiguration: Updates or installations of software, which can often be automated.
• User-Account Reconfiguration: Managing user permissions and accounts, which may involve
adding new users or changing their privileges.
Documentation requires meticulous recording of all network configurations and changes. This includes:
• Hardware Documentation: Maps and specifications for each piece of equipment, detailing type,
serial number, vendor information, and warranty details.
• Software Documentation: Information on software types, versions, installation dates, and license
agreements.
• User Account Documentation: Access privileges and user details, ensuring these records are
updated and secured.
Fault Management
Fault management deals with identifying, isolating, correcting, and recording faults in the network. It can be
divided into two subsystems: reactive fault management and proactive fault management.
Reactive Fault Management:
• Detection: Identifying the exact location and nature of faults, such as damaged communication
media.
• Isolation: Minimizing the impact of faults by isolating affected components and notifying users.
• Correction: Repairing or replacing faulty components.
• Documentation: Recording fault details, causes, corrective actions, costs, and time taken to address
faults.
Proactive Fault Management:
• Prevention: Taking measures to prevent faults before they occur, such as replacing components
before their expected failure times or reconfiguring the network to avoid frequent faults.
Performance Management
Performance management ensures the network runs efficiently by monitoring and controlling various
performance metrics.
Capacity:
• Monitoring network capacity to ensure it is not exceeded, which could degrade performance.
Traffic:
• Measuring internal and external traffic to identify and manage periods of excessive use.
Throughput:
• Monitoring the data processing rate of individual devices or network segments to ensure efficient
operation.
Response Time:
• Measuring the time taken to respond to user requests, aiming to minimize delays and ensure prompt
service.
Security Management
Security management controls access to the network based on predefined policies. This involves ensuring
that only authorized users can access certain network resources and implementing measures to protect the
network from unauthorized access and threats.
Accounting Management
Accounting management involves monitoring and controlling users' access to network resources based on
usage. This can help:
• Prevent monopolization of limited resources.
• Ensure efficient use of the network.
• Aid in budgeting and resource planning.
SIMPLE NETWORK MANAGEMENT PROTOCOL (SNMP)
The Simple Network Management Protocol (SNMP) is a framework used for managing devices within an
internet that utilizes the TCP/IP protocol suite. SNMP provides fundamental operations necessary for
monitoring and maintaining an internet.
SNMP employs the concept of a manager and agents:
• Manager: Typically a host that controls and monitors a set of agents.
• Agent: Usually a router or host that the manager monitors.
SNMP is an application-level protocol allowing a few manager stations to control a set of agents. This setup
enables the protocol to monitor devices from different manufacturers installed on various physical networks.
SNMP abstracts management tasks from the physical characteristics of the managed devices and the
underlying networking technology, making it suitable for heterogeneous internet environments consisting of
different LANs and WANs connected by routers from various manufacturers.
Managers and Agents
• Manager (SNMP Client): A host running the SNMP client program.
• Agent (SNMP Server): A router or host running the SNMP server program.
The management process involves interactions between the manager and the agent:
1. Monitoring: The agent maintains performance information in a database, which the manager can
access. For example, the number of packets received and forwarded by a router.
2. Control: The manager can instruct the agent to perform specific actions, such as rebooting.
3. Notification: Agents can send warning messages (traps) to the manager in case of unusual situations.
Management Components
SNMP works in conjunction with two other protocols:
• Structure of Management Information (SMI): Defines the rules for naming objects, object types,
and encoding methods.
• Management Information Base (MIB): Defines the number and type of objects managed.
Together, SNMP, SMI, and MIB enable efficient network management.
Roles of Protocols
SNMP
• Defines the format of packets exchanged between a manager and an agent.
• Reads and changes the status (values) of objects (variables) in SNMP packets.
SMI
• Establishes rules for naming objects, defining object types (including range and length), and
encoding objects and values.
• Provides a collection of general rules for object management without defining the specific objects or
their values.
MIB
• Creates a collection of named objects, their types, and their relationships for each entity to be
managed.
An Analogy to Programming
The three network management components are analogous to programming:
• SMI: Similar to programming language syntax, defining variable structures and naming conventions.
• MIB: Analogous to variable declarations in a program.
• SNMP: Performs actions by storing, changing, and interpreting values of objects, similar to program
statements.
Management Process Overview
Consider a manager station (SNMP client) wanting to query an agent station (SNMP server) for the number
of UDP user datagrams received. The process involves:
1. MIB: Identifies the object holding the number of UDP user datagrams received.
2. SMI: Encodes the object's name.
3. SNMP: Creates and encapsulates a GetRequest message containing the encoded information.
Structure of Management Information (SMI)
Attributes
SMI emphasizes three attributes:
1. Name: Each managed object must have a unique name, defined by an object identifier (OID) in a
hierarchical structure.
2. Type: Defines the data type stored in an object, including simple types (e.g., INTEGER, OCTET
STRING) and structured types (e.g., SEQUENCE).
3. Encoding: Uses Basic Encoding Rules (BER) to encode data for network transmission.
Management Information Base (MIB)
Groups
MIB organizes objects into groups under the mib-2 object. Key groups include:
• System: General information about the node.
• Interface: Information about all interfaces of the node.
• Address Translation: Information about the ARP table.
• IP: Information related to IP, such as the routing table.
• ICMP: Information related to ICMP, such as packet counts and errors.
• TCP: General information related to TCP, such as connection tables.
• UDP: General information related to UDP, such as packet counts.
• SNMP: General information about SNMP itself.
Accessing MIB Variables
• Simple Variables: Accessed using the OID of the group followed by the variable ID.
• Tables: Accessed by specifying the table ID, then the entry (sequence) ID, and finally the field ID
within the entry.
Examples of Encoding
1. INTEGER 14:
o Tag: 02 (INTEGER)
o Length: 04 (4 bytes)
o Value: 0000000E (14 in hexadecimal)
2. OCTET STRING "HI":
o Tag: 04 (OCTET STRING)
o Length: 02 (2 bytes)
o Value: 48 49 ("H" and "I" in ASCII)
3. ObjectIdentifier 1.3.6.1:
o Tag: 06 (ObjectIdentifier)
o Length: 03 (3 bytes)
o Value: 2B 06 01 (the first two subidentifiers 1 and 3 are combined as 1 × 40 + 3 = 0x2B,
followed by 06 and 01)
4. IPAddress 131.21.14.8:
o Tag: 40 (IPAddress)
o Length: 04 (4 bytes)
o Value: 83 15 0E 08 (131.21.14.8 in IP address encoding)
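A minimal Python sketch reproducing the INTEGER and OCTET STRING encodings above; it follows the fixed 4-byte integer length used in these examples rather than the minimal-length form real BER encoders would choose:

def ber_integer_4byte(value):
    # Tag 0x02 (INTEGER), fixed length 0x04, value as four big-endian bytes.
    return bytes([0x02, 0x04]) + value.to_bytes(4, "big")

def ber_octet_string(text):
    # Tag 0x04 (OCTET STRING), one-byte length, then the ASCII bytes.
    data = text.encode("ascii")
    return bytes([0x04, len(data)]) + data

print(ber_integer_4byte(14).hex())    # 02040000000e
print(ber_octet_string("HI").hex())   # 04024849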
UDP Ports for SNMP:
• Port 161: Used by the SNMP agent (server). The agent listens on this port for requests from the
manager.
• Port 162: Used by the SNMP manager (client). The manager listens on this port for Trap messages
from the agent.
Communication:
• Request and Response Messages:
o From Manager (Client) to Agent (Server):
▪ The manager sends requests using an ephemeral (temporary) port as the source and
port 161 as the destination.
▪ The agent responds from port 161 to the ephemeral port of the manager.
o From Agent (Server) to Manager (Client):
▪ The agent initiates a connection using an ephemeral port to port 162 on the manager
when sending Trap messages. This is a one-way communication.
SNMP Versions and Security:
• SNMPv1 and SNMPv2:
o Basic security features, which are limited compared to SNMPv3.
• SNMPv3:
o Provides enhanced security features including message authentication, privacy (encryption),
and manager authorization.
o Allows remote configuration of security settings, meaning a manager can modify settings
without physical presence.
29
MULTIMEDIA
Categories of Internet Audio/Video Services:
1. Streaming Stored Audio/Video:
o Description: On-demand access to pre-recorded and compressed audio/video files.
o Examples:
▪ Audio: Songs, audiobooks, lectures.
▪ Video: Movies, TV shows, music videos.
2. Streaming Live Audio/Video:
o Description: Real-time broadcast of audio and video over the Internet.
o Examples:
▪ Audio: Internet radio broadcasts.
▪ Video: Internet TV (still emerging).
3. Interactive Audio/Video:
o Description: Real-time, interactive communication between users.
o Examples:
▪ Audio: Internet telephony.
▪ Video: Internet teleconferencing.
DIGITIZING AUDIO AND VIDEO
Digitizing Audio:
• Analog to Digital Conversion:
o Process: Analog audio signals are converted into digital signals.
o Sampling Rate (Nyquist Theorem): For a signal with a maximum frequency f, it must be
sampled at least 2f times per second.
o Voice Sampling: 8000 samples/second with 8 bits/sample → 64 kbps.
o Music Sampling: 44,100 samples/second with 16 bits/sample → 705.6 kbps (monaural) or
1.411 Mbps (stereo).
Digitizing Video:
• Frames and Refresh Rates:
o Concept: Video is a sequence of frames, creating the illusion of motion when displayed at a
high enough rate.
o Frame Rate: 25 frames per second is assumed in the calculation below; North American
broadcast television actually uses about 30 frames per second.
o Flickering Prevention: Each frame is refreshed (repainted) twice, which requires either
sending 50 frames per second or sending 25 frames per second and repainting each frame
from memory at the receiver.
• Resolution and Data Rate Calculation:
o Resolution Example: 1024 x 768 pixels.
o Data Rate Calculation:
▪ For color frames at 24 bits per pixel, 25 frames per second, and each frame repainted
twice:
Data Rate = 2 × 25 × 1024 × 768 × 24 ≈ 944 Mbps
▪ High data rates require advanced technologies like SONET (Synchronous Optical
Network) for transmission.
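The quoted figures follow directly from the sampling parameters and the frame size. A short Python check of the arithmetic:

# Check of the bit-rate figures quoted above.
voice = 8000 * 8                     # 64,000 bps = 64 kbps
music_mono = 44_100 * 16             # 705,600 bps = 705.6 kbps
music_stereo = music_mono * 2        # 1,411,200 bps ~= 1.411 Mbps
video = 2 * 25 * 1024 * 768 * 24     # 943,718,400 bps ~= 944 Mbps (frame repainted twice)
print(voice, music_mono, music_stereo, video)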
Compression Necessity:
• Reason: To reduce the data rate of video to make it feasible for transmission over the Internet, which
typically requires compression to handle lower-rate technologies effectively.
In short, digitization determines the raw data rates of audio and video, and compression makes those rates
manageable for delivery over the Internet.
AUDIO AND VIDEO COMPRESSION
To transmit audio and video effectively over the Internet, compression techniques are essential. Here's a
detailed look at how audio and video compression works:
Audio Compression
1. Predictive Encoding:
• Concept: This technique encodes the differences between audio samples rather than encoding every
sample value. It is often used for speech.
• Examples: GSM (13 kbps), G.729 (8 kbps), G.723.1 (6.3 or 5.3 kbps).
2. Perceptual Encoding (e.g., MP3):
• Concept: MP3 is based on perceptual encoding, which leverages psychoacoustics—the study of how
humans perceive sound. This technique compresses audio by taking advantage of the limitations of
human hearing.
• Mechanisms:
o Frequency Masking: Loud sounds can obscure softer sounds in different frequency ranges.
o Temporal Masking: Loud sounds can numb our hearing temporarily even after they stop.
• Process:
o Analyzes and divides audio spectrum into groups.
o Allocates zero bits to completely masked frequencies.
o Allocates fewer bits to partially masked frequencies.
o Allocates more bits to non-masked frequencies.
• Data Rates: MP3 compression produces files at rates such as 96 kbps, 128 kbps, and 160 kbps,
based on the original audio frequency range.
Video Compression
1. Image Compression (JPEG):
• Concept: JPEG compresses images by dividing them into 8x8 pixel blocks and transforming them to
reveal redundancies.
• Steps:
o Discrete Cosine Transform (DCT): Transforms the 8x8 pixel block into a set of values that
reveal redundancies.
o Quantization: Reduces the number of bits needed by dividing values by a constant and
truncating fractions. This step is lossy, meaning some data is irretrievably lost.
o Compression: Uses methods like zigzag scanning to arrange the values and remove
redundant zeros.
Examples:
o Case 1 (Uniform Gray): Only the DC value is non-zero.
o Case 2 (Two Sections): DC value and a few AC values are non-zero.
o Case 3 (Gradient): DC value and many AC values are non-zero.
2. Video Compression (MPEG):
• Concept: MPEG compresses video by addressing both spatial and temporal redundancy.
• Types of Frames:
o I-frames (Intra-coded Frames): Independent frames that do not rely on other frames.
Present at regular intervals to manage sudden changes and for viewers who tune in at any
time.
o P-frames (Predicted Frames): Contain changes from the preceding I-frame or P-frame.
They carry less data compared to I-frames.
o B-frames (Bidirectional Frames): Use both previous and future frames for encoding, and
are more efficient than P-frames in terms of data size.
• Versions:
o MPEG-1: Designed for CD-ROM with a data rate of 1.5 Mbps.
o MPEG-2: Designed for high-quality DVD with data rates of 3 to 6 Mbps.
STREAMING STORED AUDIO/VIDEO
1. Using a Web Server
o Description: The client (browser) downloads a compressed audio/video file from a Web
server using HTTP. The file must be fully downloaded before playback begins.
o Drawback: Large files mean significant wait times before playback.
2. Using a Web Server with Metafile
o Description: The Web server provides a metafile that contains information about the
audio/video file. The media player reads the metafile and then accesses the audio/video file
directly.
o Process:
1. GET request for metafile.
2. Download of metafile.
3. Media player uses the metafile’s URL to access the actual file.
4. File is downloaded and played.
o Advantage: Allows for better management of media files but still doesn’t involve true
streaming.
3. Using a Media Server
o Description: Similar to the second approach but uses a media server that supports UDP for
streaming, which is more suitable for real-time data.
o Process:
1. GET request for metafile.
2. Media player gets the metafile.
3. Media player uses the metafile URL to request the file from the media server.
4. Media server delivers the file via UDP.
4. Using a Media Server and RTSP
o Description: Uses the Real-Time Streaming Protocol (RTSP) for more control over playback
and streaming.
o Process:
1. GET request for metafile.
2. Media player gets the metafile and sends SETUP request to media server.
3. Media server responds, and media player sends PLAY request.
4. Media server streams the file using UDP.
5. Playback control is handled through RTSP commands like PLAY, PAUSE, and
TEARDOWN.
STREAMING LIVE AUDIO/VIDEO
• Description: Similar to traditional broadcasting but over the Internet.
• Characteristics:
o Communication is multicast rather than unicast.
o Ideally uses UDP and RTP, but currently often still relies on TCP and unicasting.
• Challenges:
o Managing delays and retransmissions is crucial for effective live streaming.
REAL-TIME INTERACTIVE AUDIO/VIDEO
• Characteristics:
o Time Relationship: Packets must preserve the time relationship to maintain real-time
communication.
o Jitter: Variability in packet arrival times can cause playback issues; solved with timestamps
and playback buffers.
o Playback Buffer: Stores data to handle jitter and maintain smooth playback.
o Ordering: Sequence numbers are used to ensure packets are played in the correct order.
o Multicasting: Often used in conferencing to manage traffic from multiple sources.
o Translation: Converts high-bandwidth signals to lower-quality signals suitable for the
recipient’s bandwidth.
o Mixing: Combines multiple streams into one single stream.
Transport Layer Protocols
• TCP: Not suitable for interactive traffic due to its error control and retransmission mechanisms.
• UDP: Preferred for interactive multimedia as it supports multicasting and does not have
retransmission, but lacks features like time-stamping and sequencing.
• RTP: Provides additional features needed for real-time multimedia, such as time-stamping,
sequencing, and mixing.
30
CRYPTOGRAPHY
INTRODUCTION
Cryptography is the science and art of transforming messages to make them secure and resistant to attacks. It
involves several key concepts and algorithms to ensure that information remains confidential and protected
from unauthorized access.
Key Concepts
1. Plaintext and Ciphertext
o Plaintext: The original message before encryption.
o Ciphertext: The transformed message after encryption.
2. Cipher
o Refers to the algorithms used for encryption and decryption. Ciphers can be categorized into
different types based on their structure and the number of keys they use.
3. Key
o A number or set of numbers that the cipher operates on. It is crucial for both encryption and
decryption processes.
4. Alice, Bob, and Eve
o Alice: The sender of the secure message.
o Bob: The recipient of the message.
o Eve: An interceptor who attempts to access or disrupt the communication between Alice and
Bob.
Categories of Cryptography
Cryptographic algorithms can be divided into two main categories:
1. Symmetric-Key Cryptography (Secret-Key Cryptography)
o Description: The same key is used for both encryption and decryption. The key must be kept
secret and shared between the sender and receiver.
o Example: Shift Cipher, DES (Data Encryption Standard).
2. Asymmetric-Key Cryptography (Public-Key Cryptography)
o Description: Uses a pair of keys—public and private. The public key is used for encryption,
and the private key is used for decryption. The public key is available to everyone, while the
private key is kept secret by the receiver.
o Example: RSA (Rivest-Shamir-Adleman), ECC (Elliptic Curve Cryptography).
Symmetric-Key Cryptography
Symmetric-key cryptography involves algorithms where the same key is used for both encryption and
decryption.
1. Traditional Ciphers
o Substitution Ciphers: Replace one symbol with another. Can be monoalphabetic (one-to-
one mapping) or polyalphabetic (one-to-many mapping).
o Transposition Ciphers: Rearrange the symbols in the plaintext without substituting them.
o Shift Cipher: A type of monoalphabetic cipher where characters are shifted by a fixed
number of positions in the alphabet. Known as the Caesar Cipher when used by Julius
Caesar (a short code sketch appears after this list).
2. Modern Ciphers
o XOR Cipher: Uses the XOR operation to encrypt and decrypt data.
o Rotation Cipher: Rotates the bits in the data block to the left or right.
o S-box (Substitution Box): Performs substitution at the bit level.
o P-box (Permutation Box): Performs transpositions at the bit level.
o DES (Data Encryption Standard): A block cipher that encrypts 64-bit blocks of plaintext
using a 56-bit key (64 bits including parity bits). DES was the standard for many years but
has been superseded by AES because its key is too short for modern attacks.
o Triple DES (3DES): An enhancement of DES that applies the DES algorithm three times to
each data block to increase security.
o AES (Advanced Encryption Standard): The current standard for symmetric-key
encryption. AES supports key sizes of 128, 192, or 256 bits and operates in multiple rounds
of encryption.
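As a concrete illustration of the shift cipher and XOR cipher listed above, a minimal Python sketch (the key values are arbitrary examples):

def shift_encrypt(text, key):
    # Shift (Caesar) cipher over uppercase letters; decrypt by shifting with -key.
    return "".join(chr((ord(c) - 65 + key) % 26 + 65) for c in text)

def xor_cipher(data, key):
    # XOR cipher: applying the same key twice restores the plaintext.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

print(shift_encrypt("HELLO", 3))                          # KHOOR
print(xor_cipher(xor_cipher(b"HELLO", b"key"), b"key"))   # b'HELLO'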
Comparison of Symmetric and Asymmetric Cryptography
• Symmetric-Key Cryptography:
o Pros: Fast and efficient for large volumes of data.
o Cons: Key distribution and management can be challenging.
• Asymmetric-Key Cryptography:
o Pros: Solves the key distribution problem and enables digital signatures.
o Cons: Generally slower than symmetric-key cryptography due to complex algorithms.
Cipher Feedback (CFB) Mode
CFB (Cipher Feedback) mode allows encryption of data units smaller (r bits) than the cipher's block size.
The previous ciphertext block (or the IV for the first unit) is encrypted with the block cipher, and r bits of the
result are XORed with the plaintext to produce the ciphertext, which is then fed back for the next unit.
Key characteristics of CFB mode:
• Changing the IV (Initialization Vector) leads to different ciphertexts for the same plaintext.
• Each ciphertext block depends on both the plaintext and the previous ciphertext block.
• Errors in one ciphertext block can affect subsequent blocks.
Output Feedback (OFB) Mode
OFB (Output Feedback) mode also encrypts smaller units of data but differs from CFB in that the
ciphertext is not used to generate the key stream. Instead, the feedback comes from the key stream itself.
Key characteristics of OFB mode:
• Changing the IV results in different ciphertexts for the same plaintext.
• Each ciphertext block is only dependent on the plaintext block and the key stream.
• Errors in ciphertext blocks do not propagate to other blocks.
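A minimal Python sketch of the OFB key-stream structure; a keyed SHA-256 hash stands in for the block-cipher encryption step purely to show the feedback path, so this is not a secure or standard implementation:

import hashlib

def stand_in_block_encrypt(key, block):
    # NOT a real block cipher: a keyed hash truncated to 16 bytes,
    # used only to show where the encryption step sits in the OFB structure.
    return hashlib.sha256(key + block).digest()[:16]

def ofb_xor(key, iv, data):
    # The feedback that gets re-encrypted is the key stream itself, never the
    # ciphertext, so a transmission error affects only the corrupted block.
    out, feedback = bytearray(), iv
    for i in range(0, len(data), 16):
        feedback = stand_in_block_encrypt(key, feedback)
        out += bytes(d ^ s for d, s in zip(data[i:i + 16], feedback))
    return bytes(out)

key, iv = b"0" * 16, b"1" * 16            # placeholder key and IV
ct = ofb_xor(key, iv, b"attack at dawn")
print(ofb_xor(key, iv, ct))               # the same operation decrypts: b'attack at dawn'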
ASYMMETRIC-KEY CRYPTOGRAPHY
Asymmetric-key (public-key) cryptography involves two keys: a public key and a private key. Two main
algorithms are discussed: RSA and Diffie-Hellman.
RSA Algorithm
RSA is a widely used public-key cryptosystem. It involves the following steps:
1. Key Generation:
o Choose two large prime numbers p and q.
o Compute n=p×q.
o Compute ϕ(n)=(p−1)×(q−1).
o Choose an integer e (public exponent) such that 1 < e < ϕ(n) and e is coprime with ϕ(n).
o Compute d (private exponent) such that d × e ≡ 1 (mod ϕ(n)).
o Public key is (e,n) and private key is (d,n).
2. Encryption:
o Convert the plaintext to an integer m (which must be less than n).
o Compute the ciphertext C = m^e mod n.
3. Decryption:
o Compute the plaintext m = C^d mod n.
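A toy Python walk-through of these steps with deliberately tiny primes (completely insecure, for tracing the arithmetic only; pow(e, -1, phi) requires Python 3.8 or later):

# Toy RSA with tiny primes; insecure, for tracing the arithmetic only.
p, q = 7, 11
n = p * q                    # 77
phi = (p - 1) * (q - 1)      # 60
e = 13                       # coprime with phi(n)
d = pow(e, -1, phi)          # 37, because 13 * 37 = 481 = 8 * 60 + 1
m = 5                        # plaintext as an integer, m < n
c = pow(m, e, n)             # encryption: C = m^e mod n
print(c, pow(c, d, n))       # -> 26 5  (decryption recovers the plaintext)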
Diffie-Hellman Key Exchange
Diffie-Hellman is used for securely exchanging cryptographic keys over a public channel. The procedure
involves:
1. Both parties agree on a large prime number p and a base g.
2. Each party selects a private key (x for Alice, y for Bob) and computes a public value (R1 = g^x mod p
for Alice, R2 = g^y mod p for Bob).
3. They exchange these public values.
4. Each party computes the shared secret key using the received public value and their own private key.
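A toy Python walk-through with small textbook-style numbers (insecure, for illustration only):

# Toy Diffie-Hellman with small numbers; insecure, illustration only.
p, g = 23, 5                 # public prime and base
x, y = 6, 15                 # Alice's and Bob's private keys
R1 = pow(g, x, p)            # Alice sends R1 = g^x mod p = 8
R2 = pow(g, y, p)            # Bob sends   R2 = g^y mod p = 19
print(pow(R2, x, p), pow(R1, y, p))   # both compute the shared key g^(xy) mod p = 2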
Man-in-the-Middle Attack: An attacker can intercept and alter the public values exchanged between Alice
and Bob to establish separate keys with each party. This can be mitigated by incorporating authentication
mechanisms.
Authentication
To prevent attacks like the man-in-the-middle attack, it is essential to authenticate the parties involved in the
key exchange process. This ensures that both parties are communicating with the intended party and not an
imposter.
31
NETWORK SECURITY
SECURITY SERVICES
Security Services
1. Message Confidentiality: Ensures that messages are readable only by the intended recipient.
Achieved through encryption.
2. Message Integrity: Ensures that messages are not altered during transmission. Uses mechanisms
like message digests.
3. Message Authentication: Confirms the identity of the sender and ensures that the message comes
from a legitimate source.
4. Message Nonrepudiation: Ensures that the sender cannot deny sending a message once it has been
sent.
5. Entity Authentication: Verifies the identity of users or entities before granting access to resources.
MESSAGE CONFIDENTIALITY
• Symmetric-Key Cryptography: Involves a shared secret key for both encryption and decryption.
Efficient but requires secure key exchange.
• Asymmetric-Key Cryptography: Uses a pair of keys (public and private). Public key encrypts,
private key decrypts. Suitable for scenarios where secure key exchange is challenging but less
efficient for long messages.
MESSAGE INTEGRITY
• Document and Fingerprint: Physical analogy where a document is verified by its fingerprint.
• Message and Message Digest: Electronic equivalent where a message is hashed to produce a digest.
The digest is sent with the message, and integrity is verified by recomputing and comparing the
digest.
Hash Functions
• Criteria:
o One-wayness: Difficult to recreate the original message from the digest.
o Weak Collision Resistance: Hard to find another message that produces the same digest.
o Strong Collision Resistance: Hard to find any two messages that produce the same digest.
SHA-1 Hash Algorithm
• Structure: Processes 512-bit blocks of a message, producing a 160-bit digest. The algorithm
includes steps for expanding the block, processing each block, and mangling data to create the final
digest.
Examples:
• Lossless Compression vs. Hashing: Compression methods are reversible, so they cannot be used as
hashing functions. Checksum methods can be used but do not always meet all hash function criteria.
• Hash Algorithms: SHA-1 is a common hashing algorithm, but newer algorithms like SHA-2 and
SHA-3 are often recommended due to vulnerabilities found in SHA-1.
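A minimal Python sketch of computing a digest with hashlib, using SHA-256 from the SHA-2 family as recommended above (the message text is arbitrary):

import hashlib

message = b"This is the message to be protected."
print(hashlib.sha256(message).hexdigest())
# Flipping even one character yields a completely different digest:
print(hashlib.sha256(b"This is the message to be protected?").hexdigest())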
MESSAGE AUTHENTICATION
Message Authentication Codes (MACs)
• Purpose: Ensure the integrity and authenticity of a message.
• Modification Detection Code (MDC): Uses a hash function to detect any changes in the message
but does not authenticate the sender.
• MAC (Message Authentication Code): Enhances MDC by including a symmetric key. This allows
the recipient to verify both the integrity of the message and its source.
o Process:
1. Alice generates a MAC using a symmetric key and a keyed hash function.
2. She sends the message along with the MAC to Bob.
3. Bob computes a new MAC using the same key and compares it with the received
MAC to verify authenticity.
HMAC (Hashed Message Authentication Code)
• Purpose: A specific type of MAC that uses keyless hash functions (e.g., SHA-1) to create a MAC.
• Process:
1. The symmetric key is combined with the message and hashed.
2. The result is hashed again with the key to produce the final HMAC.
3. The recipient verifies the HMAC by recalculating it and comparing it with the received
HMAC.
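A minimal Python sketch of the same idea with the standard hmac module (the key and message are placeholders); compare_digest performs a constant-time comparison of the received value:

import hashlib, hmac

key = b"shared-secret-key"                  # symmetric key known to Alice and Bob (placeholder)
message = b"transfer 100 to Bob"
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Bob recomputes the HMAC with the same key and compares in constant time.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))   # True: integrity and origin verified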
DIGITAL SIGNATURE
• Purpose: Authenticate the sender and ensure the integrity of the message. Unlike MACs, digital
signatures use asymmetric keys (public and private keys).
• Types of Signatures:
o Conventional Signature: Included in the document itself and verified by comparison with a
stored signature.
o Digital Signature: Sent as a separate document. The recipient verifies it using the sender's
public key.
• Signing Methods:
o Signing the Document: Encrypts the entire document with the sender's private key.
Verification is done by decrypting with the sender's public key.
o Signing the Digest: Signs a hash (digest) of the document instead of the whole document,
which is more efficient.
Key Differences
• Inclusion: Conventional signatures are part of the document, while digital signatures are separate.
• Verification Method: Conventional signatures require a stored copy for verification, while digital
signatures use cryptographic methods.
• Relationship: Conventional signatures have a one-to-many relationship; digital signatures are one-
to-one with each message.
• Duplicity: A copy of a conventionally signed document can be distinguished from the original, whereas
a copy of a digitally signed document is identical to the original unless a time factor such as a
timestamp is included.
• Need for Keys: Conventional signatures do not use cryptographic keys; digital signatures require a
public/private key pair.
Nonrepudiation
• Challenge: Ensuring that a sender cannot deny sending a message.
• Solution: Use a trusted third party that verifies and stores the message and signature. This party can
provide proof if the sender denies the action.
Signature Schemes
• Examples: RSA and DSS (Digital Signature Standard) are commonly used signature schemes; their
implementation details are beyond the scope of these notes.
ENTITY AUTHENTICATION
Entity authentication is a technique designed to allow one party (the verifier) to prove the identity of another
party (the claimant). This process is essential in various scenarios, such as when accessing a system or
performing transactions. Entity authentication differs from message authentication in several key ways:
1. Real-Time Interaction: Entity authentication requires real-time interaction between the claimant
and verifier, whereas message authentication can occur offline, with the claimant potentially not
present during the verification process.
2. Session Duration: Entity authentication typically authenticates the claimant for the entire duration of
a session, while message authentication validates each individual message.
Types of Witnesses for Entity Authentication
Entity authentication can rely on three kinds of witnesses:
1. Something Known: A secret known only by the claimant, such as a password, PIN, secret key, or
private key.
2. Something Possessed: An object the claimant possesses, like a passport, driver's license,
identification card, credit card, or smart card.
3. Something Inherent: An inherent characteristic of the claimant, such as a fingerprint, voice, facial
characteristic, retinal pattern, or handwriting.
Password-Based Authentication
Passwords are the simplest and oldest method of entity authentication, categorized into two types:
1. Fixed Password: A password that remains constant for each access. This method is susceptible to
several attacks, including eavesdropping, physical theft, file access, and guessing.
o Eavesdropping: Observing the password during entry or intercepting it during transmission.
o Stealing: Physically obtaining the password.
o Accessing a File: Hacking into the system to retrieve or alter stored passwords.
o Guessing: Attempting various combinations to find the correct password.
2. One-Time Password: A password used only once, making eavesdropping and theft ineffective but
introducing complexity in implementation.
Enhancing Password Security
To improve the security of fixed passwords, several methods can be employed:
1. Hashing: Storing the hash of the password rather than the plaintext password. While this prevents
direct password theft, it can still be vulnerable to dictionary attacks.
2. Salting: Adding a random string (salt) to the password before hashing. This increases the complexity
of dictionary attacks by expanding the number of potential hash values.
3. Combining Identification Techniques: Using two forms of identification, such as an ATM card
(something possessed) and a PIN (something known), to enhance security.
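A minimal Python sketch of hashing a password with a random salt (in practice a deliberately slow function such as PBKDF2, bcrypt, or scrypt is preferred over a single SHA-256 pass; the password shown is a placeholder):

import hashlib, os

def hash_password(password, salt=None):
    # Store the salt and hash(salt + password) instead of the plaintext password.
    salt = salt or os.urandom(16)
    return salt, hashlib.sha256(salt + password.encode()).hexdigest()

def verify_password(password, salt, stored):
    return hashlib.sha256(salt + password.encode()).hexdigest() == stored

salt, stored = hash_password("correct horse battery staple")            # placeholder password
print(verify_password("correct horse battery staple", salt, stored))    # True
print(verify_password("wrong guess", salt, stored))                     # False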
Challenge-Response Authentication
Challenge-response authentication allows the claimant to prove knowledge of a secret without revealing it.
The verifier sends a time-varying challenge (e.g., a random number or timestamp) to the claimant, who then
applies a function to the challenge and sends the result (response) back to the verifier. This process ensures
that the claimant knows the secret without exposing it to potential interception.
1. Using Symmetric-Key Ciphers: The challenge is encrypted with a shared secret key and the verifier
decrypts the response to verify the claimant.
2. Using Asymmetric-Key Ciphers: The challenge is encrypted with the claimant’s public key and
decrypted with the claimant’s private key, or signed by the claimant’s private key and verified with
the claimant’s public key.
3. Using Keyed-Hash Functions (MAC): The challenge is hashed with a secret key to produce a
response, ensuring both integrity and authenticity.
KEY MANAGEMENT
Key management is crucial for both symmetric and asymmetric cryptography. It involves distributing and
maintaining secret keys (for symmetric cryptography) and public/private key pairs (for asymmetric
cryptography).
1. Symmetric-Key Distribution: Involves sharing a secret key between two parties. The number of
keys required increases quadratically with the number of parties, leading to the N² problem. To
address this, a Key Distribution Center (KDC) can be used to manage and distribute keys.
2. Session Keys: Temporary keys used for a single session, established and authenticated by a KDC.
3. Kerberos Protocol: An authentication protocol using a KDC, where users authenticate with an
Authentication Server (AS) and obtain session keys from a Ticket-Granting Server (TGS) to access
real servers.
32
SECURITY IN THE INTERNET: IPSEC, SSL/TLS, PGP, VPN,
AND FIREWALLS
IP SECURITY (IPSEC)
Common Issues Across Protocols
• Message Authentication Code (MAC): A MAC is needed to ensure data integrity.
• Encryption: Messages and sometimes the MAC itself need encryption.
• Security Parameters: Before securing data, Alice and Bob need to agree on security parameters,
including algorithms and keys for authentication and encryption.
Key Protocols
1. IPSec (Internet Protocol Security)
o Purpose: Provides security at the IP layer, ensuring authentication and confidentiality.
o Modes:
▪ Transport Mode: Protects the payload of the IP packet (excluding the IP header).
▪ Tunnel Mode: Protects the entire IP packet, including the header, by encapsulating it
in a new IP packet (a byte-level comparison of the two modes follows this list).
o Protocols:
▪ Authentication Header (AH): Provides source authentication and data integrity but
not confidentiality. It uses a hash function and symmetric key.
▪ Encapsulating Security Payload (ESP): Provides source authentication, data
integrity, and confidentiality by encrypting the payload and adding authentication data
at the end.
2. SSL/TLS (Secure Sockets Layer / Transport Layer Security)
o Purpose: Secures data at the transport layer (TCP).
o Function: Establishes a secure connection through encryption and authentication
mechanisms.
3. PGP (Pretty Good Privacy)
o Purpose: Secures email communication over SMTP.
o Function: Provides encryption and authentication for email messages.
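The difference between the two IPSec modes is easiest to see at the packet level. The sketch below is deliberately simplified: the "headers" are readable placeholders rather than real IPv4 or ESP encodings, and the esp() helper is a hypothetical stand-in for encryption plus appended authentication data.

    inner_header = b"IP[10.0.1.5 -> 10.0.2.7]"       # original (possibly private) addresses
    payload      = b"TCP[port 443] application data"

    def esp(protected: bytes) -> bytes:
        # Stand-in for ESP processing: encrypt `protected` and append auth data.
        return b"ESP[" + protected + b"]"

    # Transport mode: the original IP header stays visible; only the payload is protected.
    transport_packet = inner_header + esp(payload)

    # Tunnel mode: the whole original packet, header included, is protected and
    # wrapped in a new outer header (e.g. between two VPN gateways).
    outer_header  = b"IP[gatewayA -> gatewayB]"
    tunnel_packet = outer_header + esp(inner_header + payload)

    print(transport_packet)
    print(tunnel_packet)

Routers between the two gateways only ever see the outer header, which is why tunnel mode underpins the VPNs described later in this section.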
Security Association (SA)
• Definition: A set of parameters established between two parties for securing communications.
• Types:
o Outbound SA: For securing datagrams sent to another party.
o Inbound SA: For securing datagrams received from another party.
• Establishment:
o Manual Configuration: Predefined parameters set manually.
o Automatic Key Management: Protocols like IKE (Internet Key Exchange) automate the
process.
Security Association Database (SADB)
• Definition: A database that holds information about all security associations for a system.
• Components:
o Security Parameter Index (SPI): A unique identifier for each security association.
o Inbound and Outbound SADBs: Separate databases for managing inbound and outbound
associations.
Summary of Services Provided by IPSec
Service                      AH    ESP
Access Control               Yes   Yes
Message Authentication       Yes   Yes
Entity Authentication        Yes   Yes
Confidentiality              No    Yes
Replay Attack Protection     Yes   Yes
Key Concepts
• Public-Key Cryptography: Used to securely exchange the initial set of security parameters.
• Replay Attack Protection: Achieved through sequence numbers and a fixed-size window to prevent
processing duplicate packets.
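A simplified version of the sequence-number-plus-window check can be sketched as follows. The 64-packet window and the set-based bookkeeping are illustrative simplifications; real IPSec implementations use a bitmap and also authenticate the sequence number as part of the packet.

    WINDOW = 64

    class ReplayWindow:
        def __init__(self) -> None:
            self.highest = 0          # highest sequence number accepted so far
            self.seen = set()         # sequence numbers accepted inside the window

        def accept(self, seq: int) -> bool:
            if seq + WINDOW <= self.highest:   # too old: falls left of the window
                return False
            if seq in self.seen:               # duplicate inside the window
                return False
            self.seen.add(seq)
            self.highest = max(self.highest, seq)
            # Drop bookkeeping for numbers that have slid out of the window.
            self.seen = {s for s in self.seen if s + WINDOW > self.highest}
            return True

    w = ReplayWindow()
    print(w.accept(1), w.accept(2), w.accept(2), w.accept(70), w.accept(3))
    # True True False True False  (2 is a replay; 3 now falls left of the window)

Packets older than the window or already recorded inside it are discarded, so a captured datagram cannot simply be resent later.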
Internet Key Exchange
Internet Key Exchange (IKE) is a crucial protocol used in the creation of Security Associations (SAs) for
IPSec, which ensures secure communication over IP networks. IKE itself is based on three other protocols:
Oakley, SKEME, and ISAKMP.
• Oakley Protocol: Developed by Hilarie Orman, Oakley enhances the Diffie-Hellman key exchange
method, focusing on key creation. It does not specify message formats, making it flexible but less
prescriptive.
• SKEME: Designed by Hugo Krawczyk, SKEME uses public-key encryption for entity
authentication in key exchange protocols.
• ISAKMP (Internet Security Association and Key Management Protocol): Created by the NSA,
ISAKMP defines the packet formats, protocols, and parameters necessary for IKE exchanges. It
ensures standardized, formatted messages for creating SAs.
How ISAKMP Works
ISAKMP packets can be carried over any underlying protocol, including the IP layer. When using IPSec,
ISAKMP packets are encapsulated within IP datagrams. While ISAKMP itself does not require additional
security for transport, it is the encapsulated content (SAs) that needs to be protected.
Virtual Private Network (VPN)
A Virtual Private Network (VPN) leverages the global Internet to create a private network for an
organization. It uses IPSec in tunnel mode to secure IP datagrams, offering authentication, integrity, and
confidentiality.
Types of Networks
1. Private Networks: Used internally within an organization to provide privacy and access to shared
resources. Examples include intranets (private LANs) and extranets (LANs with restricted external
access).
2. Hybrid Networks: Combine private internets for internal communication with global Internet access
for external communication. This setup maintains privacy while allowing global connectivity.
3. Virtual Private Networks: Use the Internet to create a private, secure network without the high
costs associated with private WANs. VPNs utilize IPSec in tunnel mode to ensure privacy and
security.
VPN Addressing
VPNs use a combination of public and private addressing. The public network (Internet) transports VPN
packets, while the private network addresses are encapsulated within the VPN datagrams.
SSL/TLS
Overview
Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are protocols designed to secure
communications over the Internet. SSL, developed by Netscape, has been succeeded by TLS, which is an
IETF standard.
SSL Services
SSL provides several key services:
1. Fragmentation: Data is divided into smaller blocks.
2. Compression: Data blocks are optionally compressed.
3. Message Integrity: Keyed-hash functions are used to create Message Authentication Codes
(MACs).
4. Confidentiality: Data and MACs are encrypted using symmetric-key cryptography.
5. Framing: A header is added to the encrypted data before passing it to the transport layer.
Security Parameters
SSL uses cipher suites, which name the algorithms used for key exchange, encryption, and hashing. For
example, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA specifies ephemeral Diffie-Hellman (authenticated with
RSA) for key exchange, 3DES in EDE-CBC mode for encryption, and SHA-1 for hashing.
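The negotiated suite is easy to inspect from a client. The sketch below uses Python's standard ssl module; the host name example.com is only an illustration, and a modern server will negotiate a TLS-era suite name rather than the SSL-era one quoted above.

    import socket
    import ssl

    context = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="example.com") as tls:
            name, protocol, secret_bits = tls.cipher()
            print("protocol:", tls.version())        # e.g. TLSv1.3
            print("cipher suite:", name)             # e.g. TLS_AES_256_GCM_SHA384
            print("symmetric strength:", secret_bits, "bits")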
Cryptographic Secrets
SSL generates cryptographic secrets through a complex process involving the exchange of random numbers,
a premaster secret, and the creation of a master secret using hash functions.
SSL Protocols
1. Record Protocol: Handles the transportation of data, applying fragmentation, compression, MAC,
and encryption.
2. Handshake Protocol: Negotiates cipher suites, authenticates parties, and exchanges cryptographic
information.
3. ChangeCipherSpec Protocol: Signals when cryptographic parameters are ready for use.
4. Alert Protocol: Reports errors and abnormal conditions.
TLS Differences from SSL
• Version: TLS 1.0 is compatible with SSLv3.0 but not vice versa.
• Cipher Suite: TLS does not support Fortezza cipher suites.
• Cryptographic Secrets: TLS uses a pseudorandom function (PRF) for generating secrets.
• Alert Protocol: TLS includes new alert messages and removes some from SSL.
• Handshake Protocol: TLS has modified some message details from SSL.
• Record Protocol: TLS uses HMAC instead of the MAC used in SSL.
PGP
PGP is an encryption program that provides cryptographic privacy and authentication for data
communication. It’s primarily used for securing emails, offering features like confidentiality, authentication,
and message integrity. PGP operates at the application layer, which makes it distinct from other protocols
like IPSec and SSL/TLS that function at lower layers in the TCP/IP suite.
PGP Services:
1. Plaintext: Sending messages without encryption or authentication.
2. Message Authentication: Signing the message with the sender's private key to verify authenticity.
3. Compression: Compressing the message to reduce size, which helps in traffic reduction but does not
enhance security.
4. Confidentiality with One-Time Session Key: Encrypting the message with a session key, which is
then encrypted with the recipient’s public key.
5. Code Conversion: Using Radix 64 conversion to translate binary data into ASCII characters,
making it suitable for email transmission.
6. Segmentation: Dividing the message into smaller segments to comply with email protocol limits.
PGP Process Overview:
1. Sender’s Actions (Alice):
o Session Key Encryption: Alice generates a session key and encrypts it with Bob’s public
key. She includes algorithm identifiers for both symmetric and public-key encryption.
o Message Authentication: Alice signs the message using her private key and includes
algorithm identifiers for the signature.
o Message Encryption: The message, along with the encrypted session key and signature, is
encrypted using the session key.
o Transmission: Alice combines all components into a PGP packet and sends it to Bob.
2. Receiver’s Actions (Bob):
o Session Key Decryption: Bob uses his private key to decrypt the session key and algorithm
identifiers.
o Message Decryption: Bob uses the session key to decrypt the message.
o Signature Verification: Bob decrypts the digest using Alice’s public key and compares it
with a newly computed digest of the message to verify authenticity.
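The sender/receiver steps above can be condensed into a short sketch of hybrid encryption plus a signature. It assumes the third-party Python cryptography package; the key sizes, the PKCS#1 v1.5 and OAEP padding choices, the way the signature is prepended to the message, and the Base64 step standing in for Radix-64 armoring are all illustrative, not PGP's actual packet format (real PGP also transmits algorithm identifiers, as listed below).

    import base64
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    alice_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    bob_priv   = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    alice_pub, bob_pub = alice_priv.public_key(), bob_priv.public_key()

    message = b"Meet at noon."

    # Alice: sign, encrypt with a one-time session key, wrap that key with
    # Bob's public key, then armor the result for email transport.
    signature   = alice_priv.sign(message, padding.PKCS1v15(), hashes.SHA256())
    session_key = Fernet.generate_key()
    ciphertext  = Fernet(session_key).encrypt(signature + message)
    wrapped_key = bob_pub.encrypt(
        session_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    armored = base64.b64encode(wrapped_key + ciphertext)     # Radix-64-style step

    # Bob: reverse every step.
    raw = base64.b64decode(armored)
    wrapped_key, ciphertext = raw[:256], raw[256:]           # 2048-bit RSA output = 256 bytes
    session_key = bob_priv.decrypt(
        wrapped_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    data = Fernet(session_key).decrypt(ciphertext)
    signature, plaintext = data[:256], data[256:]            # signature is also 256 bytes
    alice_pub.verify(signature, plaintext, padding.PKCS1v15(), hashes.SHA256())  # raises if forged
    print(plaintext)                                         # b'Meet at noon.'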
PGP Algorithms:
Algorithm    ID   Description
Public key   1    RSA (encryption or signing)
Public key   2    RSA (encryption only)
Public key   3    RSA (signing only)
Public key   17   DSS (signing)
Hash         1    MD5
Hash         2    SHA-1
Hash         3    RIPEMD
Encryption   0    No encryption
Encryption   1    IDEA
Encryption   2    Triple DES
Encryption   9    AES
Key Rings: PGP uses key rings for managing public and private keys. Each user has:
• A ring of their own private/public keys.
• A ring of public keys of others they interact with.
PGP Certificates: Unlike traditional hierarchical certificate authorities (CAs), PGP uses a web of trust
model. Users can issue certificates for each other, with trust levels assigned to these certificates. This
decentralized model allows for a flexible but trust-based approach to validating public keys.
Trust Levels in PGP:
• Introducer Trust: Trust assigned to the introducer of a certificate.
• Certificate Trust: Trust assigned to the certificate based on the introducer’s trust level.
• Key Legitimacy: Determined by the weighted trust levels of certificates from various sources.
Key Revocation: If a key is compromised or outdated, the owner can issue a revocation certificate. This
certificate, signed by the old key, is disseminated to inform others in the ring about the key's revocation.
FIREWALLS
Firewalls Overview: Firewalls are security devices that control incoming and outgoing network traffic
based on predetermined security rules. They serve as a barrier between a trusted internal network and
untrusted external networks.
Types of Firewalls:
1. Packet-Filter Firewall:
o Operation: Filters traffic based on IP addresses, ports, and protocols.
o Example: Can block packets from specific IP addresses or ports, or filter traffic based on
network layer and transport layer headers.
2. Proxy Firewall:
o Operation: Filters traffic at the application layer by acting as an intermediary between the
user and the destination server. It inspects the content of packets to enforce security policies.
o Example: Can restrict access to web pages based on content, ensuring only authorized users
can access specific resources.
Firewall Examples:
• Packet-Filter Example: Blocks incoming packets from certain networks or destined for specific
ports (a rule-table sketch follows these examples).
• Proxy Firewall Example: Inspects HTTP requests and only forwards those from authenticated
users, ensuring only legitimate traffic is allowed through.
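The packet-filter example can be mimicked with a small rule table. The sketch below uses Python's standard ipaddress module; the networks, ports, and rules themselves are purely illustrative, and a real filter would also consider protocol, direction, interface, and connection state.

    import ipaddress

    RULES = [
        # (source network,  destination port, action)
        ("131.34.0.0/16",   None,  "deny"),    # block everything from this network
        ("0.0.0.0/0",       23,    "deny"),    # block incoming Telnet from anywhere
        ("0.0.0.0/0",       80,    "allow"),   # allow incoming HTTP
    ]

    def filter_packet(src_ip: str, dst_port: int) -> str:
        """Check rules in order; the first match decides, otherwise deny."""
        src = ipaddress.ip_address(src_ip)
        for network, port, action in RULES:
            if src in ipaddress.ip_network(network) and (port is None or port == dst_port):
                return action
        return "deny"                           # default policy

    print(filter_packet("131.34.2.9", 80))      # deny  (matches the first rule)
    print(filter_packet("200.1.1.1", 23))       # deny  (Telnet blocked)
    print(filter_packet("200.1.1.1", 80))       # allow (HTTP permitted)

A proxy firewall would sit one layer higher, examining the application payload (for example the requested URL) before deciding whether to forward the request.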