
End Sem Communication Network

The document provides an overview of communication networks, focusing on routing algorithms and protocols, including TCP, UDP, RIP, OSPF, and BGP. It discusses the concepts of delivery, forwarding, and routing within and between autonomous systems, as well as multicast routing techniques. Additionally, it covers various routing algorithms such as Dijkstra's and Bellman-Ford, and highlights the importance of efficient bandwidth usage in network communication.


Communication Networks

Presented By
Tejswini K Lahane
[email protected]
Outline

• Routing algorithms and Routing protocols
• Transport Layer: TCP and UDP
• Congestion control
• BOOTP
• DHCP
• DNS
• SMTP and MIME
• FTP and TFTP
• World Wide Web and HTTP
Outline

• Telephone networks
• ISDN
• SS7
• SONET and SDH
• Frame Relay and ATM
Network Layer: Delivery, Forwarding and Routing
• Delivery refers to the way a packet is handled by the underlying networks under the control of the network layer.
• Forwarding refers to the way a packet is delivered to the next station (next-hop method versus route method).
• Routing refers to the way routing tables are created to help in forwarding.
Routing
Routing refers to the process of directing a data packet from one node to another.
A host or a router has a routing table with an entry for each destination, or a combination of destinations, to route IP packets.

Unicast routing

• A static routing table contains information entered manually.
• A dynamic routing table is updated periodically by one of the dynamic routing protocols such as RIP, OSPF, or BGP.
Intra- and Interdomain Routing

Today, an internet can be so large that one routing protocol cannot handle the task of updating the routing tables of all routers. For this reason, an internet is divided into autonomous systems. An autonomous system (AS) is a group of networks and routers under the authority of a single administration. Routing inside an autonomous system is referred to as intradomain routing; routing between autonomous systems is referred to as interdomain routing. Each autonomous system can choose one or more intradomain routing protocols to handle routing inside the autonomous system. However, only one interdomain routing protocol handles routing between autonomous systems.
• Routing Information Protocol (RIP) is an implementation of the distance vector protocol.
• Open Shortest Path First (OSPF) is an implementation of the link state protocol.
• Border Gateway Protocol (BGP) is an implementation of the path vector protocol.
Distance Vector Routing

• In distance vector routing, the least-cost route between any two nodes is the route with minimum distance.
• In this protocol, as the name implies, each node maintains a vector (table) of minimum distances to every node.
• All the nodes that are part of the network advertise their routing table to their adjacent nodes (nodes that are directly connected) at regular intervals.
• Because each router is updated only at regular intervals, it may take time for all the nodes to converge on the same accurate network view.
• Uses fixed-length subnets, so it is not well suited to scaling.
• Distance-vector routing protocols use the Bellman–Ford algorithm to calculate the best route.
The Routing Information Protocol (RIP)

RIP is an intradomain routing protocol used inside an autonomous system. It is a very simple protocol based on distance vector routing. RIP implements distance vector routing directly, with some considerations:
1. In an autonomous system, we are dealing with routers and networks (links). The routers have routing tables; networks do not.
2. The destination in a routing table is a network, which means the first column defines a network address.
3. The metric used by RIP is very simple; the distance is defined as the number of links (networks) to reach the destination. For this reason, the metric in RIP is called a hop count.
4. Infinity is defined as 16, which means that any route in an autonomous system using RIP cannot have more than 15 hops.
5. The next-node column defines the address of the router to which the packet is to be sent to reach its destination.
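The distance-vector update that RIP applies when a neighbor's advertisement arrives can be sketched as follows. This is a minimal Python sketch, not RIP's wire format: the table layout (network mapped to a hop count and next hop) and the router names are illustrative assumptions.

```python
INFINITY = 16  # RIP defines 16 as "infinity": such a route is unreachable

def rip_update(table, neighbor, advertised):
    """Merge a neighbor's advertised table (network -> hop count) into ours.
    `table` maps network -> (hop_count, next_hop)."""
    for net, hops in advertised.items():
        cost = min(hops + 1, INFINITY)          # one extra hop via the neighbor
        current = table.get(net)
        # Adopt the route if it is new, cheaper, or came from our current next hop
        if current is None or cost < current[0] or current[1] == neighbor:
            table[net] = (cost, neighbor)

table = {"N1": (1, "-")}                        # directly connected network
rip_update(table, "R2", {"N2": 2, "N1": 3})
print(table)                                    # {'N1': (1, '-'), 'N2': (3, 'R2')}
```

Note that the table keeps the cheaper one-hop route to N1 and learns a three-hop route to N2 via R2.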
2. Link State Routing

Link State Routing is another type of dynamic routing protocol, in which routers advertise their updated routing tables only when some new updates are added. This results in effective use of bandwidth. All the routers keep exchanging information dynamically regarding different links, such as cost and hop count, to find the best possible path.
• Uses a variable-length subnet mask, which is scalable and uses addressing more effectively.
• The algorithm used is Dijkstra's algorithm, which finds the shortest path.


The metrics used to measure the cost of travel from one node to another:
1. Hop Count: Hop count refers to the number of nodes a data packet has to traverse to reach its intended destination. Each transmission from one node to another counts as one hop. The goal is to minimize the hop count and find the shortest path.

2. Bandwidth Consumption: Bandwidth is the ability of a network to transmit data, typically measured in Kbps (kilobits per second), Mbps (megabits per second), or Gbps (gigabits per second). The bandwidth depends on several factors, such as the volume of data, traffic on the network, and network speed. Routing decisions are made in a way that ensures efficient bandwidth consumption.

3. Delay: Delay is the time it takes for a data packet to travel from the source node to its destination node. There are different types of delay, such as propagation delay, transmission delay, and queuing delay.

4. Load: Load refers to the network traffic on a certain path in the context of routing. A data packet will be routed to the path with a lesser load so that it reaches its destination in the specified time.

5. Reliability: Reliability refers to the assured delivery of the data packet to its intended destination. The stability and availability of a link in the network are considered before routing the data packet along a specific path.
Dijkstra Algorithm
OSPF

The Open Shortest Path First (OSPF) protocol is an intradomain routing protocol based on link state routing. Its domain is also an autonomous system.

Areas: To handle routing efficiently and in a timely manner,
• OSPF divides an autonomous system into areas.
• An area is a collection of networks, hosts, and routers all contained within an autonomous system.
• An autonomous system can be divided into many different areas.
• All networks inside an area must be connected.
• Routers inside an area flood the area with routing information.
• At the border of an area, special routers called area border routers summarize the information about the area and send it to other areas.
• Among the areas inside an autonomous system is a special area called the backbone; all the areas inside an autonomous system must be connected to the backbone.
• In other words, the backbone serves as a primary area and the other areas as secondary areas. This does not mean that the routers within areas cannot be connected to each other, however. The routers inside the backbone are called the backbone routers. Each area has an area identification; the area identification of the backbone is zero.

Metric: The OSPF protocol allows the administrator to assign a cost, called the metric, to each route. The metric can be based on a type of service (minimum delay, maximum throughput, and so on). As a matter of fact, a router can have multiple routing tables, each based on a different type of service.

Types of Links: In OSPF terminology, a connection is called a link. Four types of links have been defined: point-to-point, transient, stub, and virtual.
• A point-to-point link connects two routers without any other host or router in between.
• A transient link is a network with several routers attached to it. The data can enter through any of the routers and leave through any router. All LANs and some WANs with two or more routers are of this type.
• A stub link is a network connected to only one router; data packets enter and leave the network through that single router.
• When the link between two routers is broken, the administration may create a virtual link between them, using a longer path that probably goes through several routers.
Path Vector Routing

Path vector routing proved to be useful for interdomain routing.
• The principle of path vector routing is similar to that of distance vector routing.
• In path vector routing, we assume that there is one node in each autonomous system that acts on behalf of the entire autonomous system. Let us call it the speaker node.
• The speaker node in an AS creates a routing table and advertises it to speaker nodes in the neighboring ASs.
• The idea is the same as for distance vector routing, except that only speaker nodes in each AS can communicate with each other. However, what is advertised is different: a speaker node advertises the path, not the metric of the nodes, in its autonomous system or other autonomous systems.

Optimum path

We are looking for a path to a destination that is the best for the organization that runs the autonomous system. We definitely cannot include metrics in this route because each autonomous system that is included in the path may use a different criterion for the metric. One system may use, internally, RIP, which defines hop count as the metric; another may use OSPF with minimum delay defined as the metric. The optimum path is the path that fits the organization. In our previous figure, each autonomous system may have more than one path to a destination. For example, a path from AS4 to AS1 can be AS4-AS3-AS2-AS1, or it can be AS4-AS3-AS1. For the tables, we chose the one that had the smaller number of autonomous systems, but this is not always the case.
Border Gateway Protocol (BGP) – The Internet’s Routing Backbone

Border Gateway Protocol (BGP) is the routing protocol used to exchange routing information between different
Autonomous Systems (AS) on the Internet. It is known as a path-vector protocol and is essential for the operation of the
global Internet.

1. Why is BGP Needed?

•The Internet is a collection of multiple networks controlled by different organizations, called Autonomous Systems (AS).
•Internal routing protocols (like OSPF, RIP) work within an AS but cannot manage routing between different ASes.
•BGP allows different ASes to communicate and find the best paths for data packets.
shortest path routing algorithms

1.Bellman–Ford algorithm
The Bellman-Ford algorithm is used to find the shortest path from a single source vertex to all other vertices in a
weighted graph. It works even with negative weight edges and can detect negative weight cycles.
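A minimal Python sketch of the Bellman-Ford algorithm. The edge-list representation and the sample graph are illustrative assumptions, not part of any protocol.

```python
def bellman_ford(edges, num_nodes, source):
    """edges: list of (u, v, weight). Returns (dist, has_negative_cycle)."""
    INF = float("inf")
    dist = [INF] * num_nodes
    dist[source] = 0
    # Relax every edge |V| - 1 times
    for _ in range(num_nodes - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more pass: any further improvement means a negative weight cycle
    negative_cycle = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, negative_cycle

edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1), (2, 3, 5)]
dist, neg = bellman_ford(edges, 4, 0)
print(dist, neg)   # [0, 3, 1, 4] False
```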
shortest path routing algorithm

2.Dijkstra’s algorithm
• Dijkstra’s algorithm is an alternative algorithm for finding the shortest paths from a source node to all other
nodes in a network. It is generally more efficient than the Bellman-Ford algorithm but requires each link cost
to be positive, which is fortunately the case in communication networks.
• The idea of Dijkstra's algorithm is to keep identifying the closest nodes from the source node in order of
increasing path cost. The algorithm is iterative.
• At the first iteration the algorithm finds the closest node from the source node, which must be the
neighbor of the source node if link costs are positive.
• At the second iteration the algorithm finds the second-closest node from the source node. This node
must be the neighbor of either the source node or the closest node to the source node; otherwise,
there is a closer node.
• At the third iteration the third-closest node must be the neighbor of the first two closest nodes, and
so on.
• Thus at the kth iteration, the algorithm will have determined the k closest nodes from the source
node.
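The iterations above can be sketched with a priority queue, a standard formulation of Dijkstra's algorithm; the graph and node names are made up for illustration.

```python
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, cost), ...]} with non-negative link costs."""
    dist = {source: 0}
    pq = [(0, source)]            # (path cost, node)
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)            # u is now the next-closest node to the source
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

graph = {"A": [("B", 4), ("C", 1)], "C": [("B", 2), ("D", 5)], "B": [("D", 1)]}
print(dijkstra(graph, "A"))       # {'A': 0, 'B': 3, 'C': 1, 'D': 4}
```

At each pop, the node extracted from the queue is exactly the k-th closest node described above, which is why the positive-cost requirement matters: a negative edge could later undercut an already-finalized node.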
Comparison: Dijkstra's algorithm versus Bellman-Ford algorithm

Purpose — Dijkstra: finds the shortest path from a single source. Bellman-Ford: finds the shortest path from a single source and detects negative weight cycles.
Works with — Dijkstra: only positive weights. Bellman-Ford: both positive and negative weights.
Speed — Dijkstra: faster. Bellman-Ford: slower than Dijkstra.
Best used for — Dijkstra: large graphs with only positive weights (e.g., GPS, networks). Bellman-Ford: graphs with negative weights (e.g., financial graphs, loss minimization).
Algorithm type — Dijkstra: greedy algorithm (picks the best option at each step). Bellman-Ford: dynamic programming (tries all paths and updates costs).
Negative weight handling — Dijkstra: cannot handle negative weights. Bellman-Ford: handles negative weights.
Works on — Dijkstra: weighted directed and undirected graphs. Bellman-Ford: weighted directed graphs.
Example use case — Dijkstra: finding the fastest route on Google Maps. Bellman-Ford: stock price prediction where losses (negative weights) matter.
Unicast and Multicast Routing
• Broadcasting
In broadcast communication, the relationship between the source and the destination is one-to-all. There is only one source, but all the other hosts are the destinations. The Internet does not explicitly support broadcasting because of the huge amount of traffic it would create and because of the bandwidth it would need. Imagine the traffic generated in the Internet if one person wanted to send a message to everyone else connected to the Internet.
Why Do We Need Multicast Routing?

• Saves bandwidth: instead of sending multiple copies, one stream is shared.
• Reduces network load: less traffic compared to unicast.
• Efficient for streaming and group communication: used in online gaming, video conferences, stock market updates, etc.
In unicast routing, each router in the domain has a table that defines a shortest path tree to possible destinations.

In multicast routing, each involved router needs to construct a shortest path tree for each group. Forwarding of a single packet to members of a group requires a shortest path tree. If we have n groups, we may need n shortest path trees. We can imagine the complexity of multicast routing. Two approaches have been used to solve the problem: source-based trees and group-shared trees.

Source-Based Tree: In the source-based tree approach, each router needs to have one shortest path tree for each group.

Group-Shared Tree: In the group-shared tree approach, instead of each router having m shortest path trees, only one designated router, called the core (or rendezvous) router, takes the responsibility of distributing multicast traffic. The core has m shortest path trees in its routing table. The rest of the routers in the domain have none. If a router receives a multicast packet, it encapsulates the packet in a unicast packet and sends it to the core router.
DVMRP (Distance Vector Multicast Routing Protocol)

• Works like the distance vector routing protocol, but for multicast.
• Uses the flood-and-prune technique:
  • Flooding – initially, sends data to all routers.
  • Pruning – removes paths with no receivers.
• Flooding broadcasts packets, but it can create loops in the system.
• Uses Reverse Path Forwarding (RPF) to prevent loops.

Reverse Path Forwarding (RPF) is a technique used in multicast routing to prevent loops while efficiently delivering packets.
• It is a modified flooding strategy: instead of sending packets to all routers, it ensures that only one copy is forwarded along the shortest path, while duplicate copies are discarded.
• If a multicast packet arrives on the correct interface (the shortest path to the source), it is forwarded.
• If the packet arrives on the wrong interface (not the expected shortest path), it is discarded to prevent loops.
How RPF Works (Step by Step)
1. Packet arrival: a multicast packet arrives at a router through an incoming interface.
2. RPF check: the router checks its unicast routing table to determine the best path to reach the source of the packet.
   • If the packet arrives on the same interface that the router would use to send traffic back to the source, the packet passes the RPF check and is forwarded.
   • If the packet arrives on a different interface, the packet fails the RPF check and is discarded.
3. Forwarding the packet: if the RPF check is successful, the packet is forwarded to the next routers in the multicast tree.
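The RPF check itself reduces to a single lookup in the unicast routing table. This is a minimal Python sketch: the table structure (source mapped directly to its reverse-path interface) and the interface names are simplifying assumptions, not a real router's FIB.

```python
def rpf_check(packet_source, arrival_interface, unicast_table):
    """Pass the RPF check only if the packet arrived on the interface this
    router would use to send traffic *back* to the source."""
    expected = unicast_table.get(packet_source)
    return expected == arrival_interface

# Hypothetical unicast table: best interface back toward each source
unicast_table = {"10.0.0.0/8": "eth0"}

print(rpf_check("10.0.0.0/8", "eth0", unicast_table))  # True  -> forward
print(rpf_check("10.0.0.0/8", "eth1", unicast_table))  # False -> discard
```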
2. PIM (Protocol Independent Multicast)
PIM does not depend on a specific unicast protocol. It has two modes:
1. PIM Dense Mode (PIM-DM) – for groups with many receivers.
   • Uses the flood-and-prune method, like DVMRP.
   • Example: live sports streaming where many people watch.
2. PIM Sparse Mode (PIM-SM) – for groups with few receivers.
   • Uses Rendezvous Points (RP) to manage groups.
   • Example: a special webinar where only selected students join.
PIM is widely used today because it is scalable.

CBT
The Core-Based Tree (CBT) protocol is a group-shared protocol that uses a core as the root of the tree. The autonomous system is divided into regions, and a core (center router or rendezvous router) is chosen for each region.
Comparison of multicast routing protocols:

CBT (Core-Based Tree)
• Type: tree-based multicast
• Routing method: shared tree with a core
• Variants: uses a core router
• Loop prevention: prevents loops
• Scalability: scales well
• Best used for: group communication
• Real-world example: corporate video conferencing (e.g., Webex, Zoom) where a central server is used

PIM (Protocol Independent Multicast)
• Type: tree-based multicast
• Routing method: uses existing unicast routing
• Variants: PIM-SM (Sparse Mode), PIM-DM (Dense Mode)
• Loop prevention: uses RPF
• Scalability: highly scalable
• Best used for: large networks, Internet-wide multicast
• Real-world example: streaming live sports events on YouTube or Netflix using IP multicast

DVMRP (Distance Vector Multicast Routing Protocol)
• Type: distance vector multicast
• Routing method: Reverse Path Forwarding (RPF)
• Variants: uses the flood-and-prune mechanism
• Loop prevention: uses RPF to avoid loops
• Scalability: limited to small networks
• Best used for: early multicast networks
• Real-world example: the old MBONE (Multicast Backbone) network used for early multicast experiments

MOSPF (Multicast OSPF)
• Type: link-state multicast
• Routing method: uses OSPF's link-state database
• Variants: uses OSPF LSA updates
• Loop prevention: OSPF mechanism
• Scalability: limited to OSPF domains
• Best used for: OSPF-based networks
• Real-world example: multicasting stock market data updates within a bank's internal network
Transport Layer: TCP and UDP
• The transport layer is responsible for process-to-process delivery: the delivery of a packet, part of a message, from one process to another.
• Two processes communicate in a client/server relationship.
• Both processes (client and server) typically have the same name (for example, a Daytime client and a Daytime server).
• There are three protocols at the transport layer: UDP, TCP, and SCTP.
• A transport layer protocol can be either connectionless or connection-oriented.
• UDP is connectionless.
• TCP and SCTP are connection-oriented.
USER DATAGRAM PROTOCOL (UDP)

• The User Datagram Protocol (UDP) is called a connectionless, unreliable transport protocol. It does not add anything to the services of IP except to provide process-to-process communication.
• It performs very limited error checking.
• UDP is a very simple protocol using a minimum of overhead. If a process wants to send a small message and does not care much about reliability, it can use UDP.

User datagram format

Checksum: This field is used to detect errors over the entire user datagram (header plus data).

A checksum is a value used to verify the integrity of data during transmission or storage. It is a small, fixed-size numerical value derived from a block of data using a checksum algorithm. The sender calculates the checksum and transmits it along with the data, and the receiver recalculates it to check for errors.

Common checksum algorithms:
• Parity bit: a simple method that adds a single bit for error detection.
• CRC (Cyclic Redundancy Check): used in networking and storage devices.
• MD5 (Message Digest Algorithm 5): produces a 128-bit hash value.
• SHA (Secure Hash Algorithm): more secure; used in cryptographic applications.

Checksum use in IP, ICMP, and UDP:
The UDP checksum calculation is different from the one for IP and ICMP. Here the checksum includes three sections: a pseudoheader, the UDP header, and the data coming from the application layer.
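The three-section UDP checksum can be sketched with the standard 16-bit one's-complement Internet checksum. This is a simplified IPv4 sketch: the sample addresses and ports are made up, and edge cases (such as transmitting a computed checksum of zero as 0xFFFF) are ignored.

```python
import struct

def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum, as used by IP, ICMP, and UDP."""
    if len(data) % 2:
        data += b"\x00"                                   # pad to a 16-bit boundary
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                                    # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
    """Prepend the IPv4 pseudoheader: source, destination, zero, protocol 17, UDP length."""
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
    return internet_checksum(pseudo + udp_segment)

src = bytes([192, 168, 1, 1])                             # hypothetical addresses
dst = bytes([192, 168, 1, 2])
# UDP header (src port, dst port, length, checksum=0) plus 4 bytes of data
seg = struct.pack("!HHHH", 1234, 80, 12, 0) + b"hi!!"
print(hex(udp_checksum(src, dst, seg)))
```

A useful sanity check: if the receiver recomputes the checksum over the segment with the sender's checksum filled in, the result is 0, which is how errors are detected.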
Transmission Control Protocol (TCP)
• TCP is a connection-oriented protocol; it creates a virtual connection between two TCPs to send data.
• TCP uses flow and error control mechanisms at the transport level.
• In brief, TCP is called a connection-oriented, reliable transport protocol. It adds connection-oriented and reliability features to the services of IP.
• TCP is stream-oriented, meaning data is sent as a continuous stream of bytes.
• UDP is message-oriented: predefined messages (user datagrams) are sent individually.
• UDP adds a header to each message and sends it as an independent IP datagram.
• Neither IP nor UDP maintains relationships between datagrams.
• TCP simulates a continuous connection between sender and receiver, like an imaginary "tube."
• The sending process writes to the byte stream, while the receiving process reads from it.
• Sending and Receiving Buffers:
• Because the sending and the receiving processes may not write or read data at the same speed, TCP needs buffers for storage.
• There are two buffers, the sending buffer and the receiving buffer, one for each direction.
• Buffers support flow and error control, ensuring reliable transmission.
• Circular arrays are commonly used for buffer implementation.
• Buffer sizes vary, typically ranging from hundreds to thousands of bytes.
• The sending and receiving buffers may have different sizes, depending on the system's implementation.
• One way to implement a buffer is to use a circular array of 1-byte locations, as shown in Figure 23.14. For simplicity, we have shown two buffers of 20 bytes each.
• TCP uses segments to send data, as IP requires packets, not a byte stream.
• A segment is a group of bytes from the sending buffer, with a TCP header added.
• Segments are encapsulated in IP datagrams for transmission.
• The process is transparent to the receiving application.
• Segments may arrive out of order, be lost, or get corrupted.
• TCP handles retransmission and ordering, ensuring reliable delivery.

TCP Features
TCP has several features.
1. Numbering System
• TCP tracks transmitted and received segments but does not use segment numbers.
• No segment number field exists in the TCP header. Instead, there are two fields called the sequence number and the acknowledgment number.
• These two fields refer to the byte number, not the segment number.

2. Byte Number: TCP numbers all data bytes that are transmitted in a connection. Numbering is independent in each direction. When TCP receives bytes of data from a process, it stores them in the sending buffer and numbers them. The numbering does not necessarily start from 0. Instead, TCP generates a random number between 0 and 2^32 − 1 for the number of the first byte. For example, if the random number happens to be 1057 and the total data to be sent are 6000 bytes, the bytes are numbered from 1057 to 7056. Byte numbering is used for flow and error control. The bytes of data being transferred in each connection are numbered by TCP, and the numbering starts with a randomly generated number.

3. Sequence Number: After the bytes have been numbered, TCP assigns a sequence number to each segment that is being sent. The sequence number for each segment is the number of the first byte carried in that segment.

4. Acknowledgment Number: The acknowledgment number confirms receipt and specifies the next expected byte. Acknowledgment is cumulative: it ensures all bytes up to the given number have been received. Example: if the acknowledgment number is 5643, all bytes up to 5642 have been successfully received.
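The byte, sequence, and acknowledgment numbering can be illustrated with the numbers used in the examples above (ISN 1057, 6000 bytes, ACK 5643). The 1000-byte segment size is an assumption made purely for illustration.

```python
isn = 1057                 # randomly generated number for the first byte
data_len = 6000            # so bytes are numbered 1057 .. 7056
mss = 1000                 # assumed segment size for this sketch

# Sequence number of each segment = number of the first byte it carries
seq_numbers = [isn + off for off in range(0, data_len, mss)]
last_byte = isn + data_len - 1

# Cumulative acknowledgment: the next byte the receiver expects
ack_number = 5642 + 1      # all bytes up to 5642 received -> ACK 5643

print(seq_numbers)         # [1057, 2057, 3057, 4057, 5057, 6057]
print(last_byte)           # 7056
```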
Flow Control
TCP, unlike UDP, provides flow control. The receiver of the data controls the amount of data that is to be sent by the sender. This is done to prevent the receiver from being overwhelmed with data. The numbering system allows TCP to use a byte-oriented flow control.

Error Control
To provide reliable service, TCP implements an error control mechanism. Although error control considers a segment as the unit of data for error detection (lost or corrupted segments), error control is byte-oriented.

Congestion Control
TCP, unlike UDP, takes into account congestion in the network. The amount of data sent by a sender is not only controlled by the receiver (flow control), but is also determined by the level of congestion in the network.
Congestion occurs when too many packets are present in the network, exceeding its capacity. This results in delays, packet loss, and reduced performance.

Several factors contribute to congestion, including:
1. High traffic volume – too many devices sending data simultaneously.
2. Insufficient bandwidth – network capacity is not enough to handle the data load.
3. Network buffer overflows – packets are dropped when buffer space is exhausted.
4. Routing inefficiencies – suboptimal routing decisions can overload certain links.
5. Sudden bursts of data – large data transfers can create temporary congestion.
Congestion control techniques can be categorized into two types:
1. Open-loop congestion control (preventive):
   • Controls traffic entry into the network.
   • Examples: Leaky Bucket and Token Bucket algorithms.
2. Closed-loop congestion control (reactive):
   • Detects congestion and reacts accordingly.
   • Examples: TCP congestion control, Frame Relay congestion notification.
Open-Loop Congestion Control
In open-loop congestion control, policies are applied to prevent congestion before it
happens. In these mechanisms, congestion control is handled by either the source or the
destination.
Brief list of policies that can prevent congestion.

1. Retransmission Policy
Retransmission is sometimes unavoidable. If the sender feels that a sent packet is lost
or corrupted, the packet needs to be retransmitted. Retransmission in general may
increase congestion in the network. However, a good retransmission policy can prevent
congestion.
The retransmission policy and the retransmission timers must be designed
to optimize efficiency and at the same time prevent congestion. For example, the
retransmission policy used by TCP is designed to prevent or alleviate
congestion.
2.Window Policy
The type of window at the sender may also affect congestion. The Selective Repeat window
is better than the Go-Back-N window for congestion control. In the Go-Back-N window,
when the timer for a packet times out, several packets may be resent, although some may
have arrived safe and sound at the receiver. This duplication may make the
congestion worse. The Selective Repeat window, on the other hand, tries to send the specific
packets that have been lost or corrupted.

3.Acknowledgment Policy
The acknowledgment policy imposed by the receiver may also affect congestion. If the
receiver does not acknowledge every packet it receives, it may slow down the sender
and help prevent congestion. Several approaches are used in this case. A receiver may
send an acknowledgment only if it has a packet to be sent or a special timer expires. A
receiver may decide to acknowledge only N packets at a time. We need to know that
the acknowledgments are also part of the load in a network. Sending fewer
acknowledgments means imposing less load on the network.
4. Discarding Policy
A good discarding policy by the routers may prevent congestion and at the same time
may not harm the integrity of the transmission. For example, in audio transmission, if
the policy is to discard less sensitive packets when congestion is likely to happen, the
quality of sound is still preserved and congestion is prevented or alleviated.

5. Admission Policy
An admission policy, which is a quality-of-service mechanism, can also prevent congestion in virtual-circuit networks. Switches in a flow first check the resource requirement of a flow before admitting it to the network. A router can deny establishing a virtual-circuit connection if there is congestion in the network or if there is a possibility of future congestion.
Key Mechanism:
• Admission Control
• Policing
• Traffic Shaping
• Traffic Scheduling
Admission Control
• Admission control is open loop preventive scheme
• Admission control refers to the mechanism used by a router, or a switch, to accept or reject a flow based on
predefined parameters called flow specifications.
• Before a router accepts a flow for processing, it checks the flow specifications to see if its capacity (in terms of
bandwidth, buffer size, CPU speed, etc.) and its previous commitments to other flows can handle the new flow.
• It determines whether the network can handle the new traffic without compromising Quality of Service.

Key Flow Characteristics:

1. Bandwidth
2. Delay (latency)
3. Jitter
   • Variation in packet delay over time.
   • High jitter causes voice/video distortions, affecting QoS-sensitive applications.
4. Packet loss
   • The percentage of packets lost during transmission.
5. Priority (traffic class)
   • Some flows (e.g., emergency calls, real-time video) have higher priority than others (e.g., file downloads).
   • Admission control may prioritize critical flows while rejecting lower-priority ones.
• Policing

The network monitors traffic flows continuously to ensure they meet their traffic contract. When a packet violates the contract, the network can discard the packet or tag it, giving it lower priority. If congestion occurs, tagged packets are discarded first.
• The Leaky Bucket algorithm is the most commonly used policing mechanism:
  – The bucket has a specified leak rate for the average contracted rate.
  – The bucket has a specified depth to accommodate variations in the arrival rate.
  – An arriving packet is conforming if it does not result in overflow.
Leaky Bucket Algorithm
•Concept: The leaky bucket algorithm regulates data flow in a network, similar to water being
poured into a bucket with a small hole at the bottom.

•Working Principle:
•Data packets enter the bucket irregularly (like water poured randomly).
•The bucket has a fixed leak rate (steady packet output).
•If the bucket overflows, excess packets are discarded (prevents congestion).

•Key Features:
•Ensures a smooth, constant data transmission rate.
•Prevents sudden bursts of data that could overwhelm the network.
•Works as a traffic policing mechanism to enforce bandwidth limits.
•Analogy:
•Water = Incoming network packets.
•Bucket = Buffer or queue to hold packets.
•Hole at the bottom = Regulated packet transmission rate.
•Overflow = Packet loss when traffic exceeds capacity.
Leaky Bucket Algorithm
The input rate can vary, but the output rate remains constant. Similarly, in networking, the leaky bucket technique can smooth out bursty traffic: bursty chunks are stored in the bucket and sent out at an average rate.

Function: Enforces a strict traffic rate limit.
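The policing behavior can be sketched as follows. This is a minimal Python sketch under simplifying assumptions: the bucket leaks continuously at the contracted rate, packet sizes are abstract units, and a non-conforming packet is simply discarded.

```python
class LeakyBucketPolicer:
    """Bucket drains `rate` units per unit time; a packet conforms only if
    adding it does not overflow `depth`, otherwise it is dropped (policing)."""

    def __init__(self, rate: float, depth: float):
        self.rate, self.depth, self.level = rate, depth, 0.0

    def arrive(self, size: float, elapsed: float) -> bool:
        self.level = max(0.0, self.level - self.rate * elapsed)  # leak first
        if self.level + size <= self.depth:
            self.level += size
            return True          # conforming: forward the packet
        return False             # overflow: discard the packet

p = LeakyBucketPolicer(rate=1.0, depth=3.0)
# Four back-to-back packets, then one after 5 time units of silence
results = [p.arrive(1, t) for t in (0, 0, 0, 0, 5)]
print(results)                   # [True, True, True, False, True]
```

The fourth packet overflows the depth-3 bucket and is dropped; after the idle period the bucket has drained, so the fifth packet conforms again.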


Traffic Shaping

• Another method of congestion control is to "shape" the traffic before it enters the network.
• Traffic shaping controls the rate at which packets are sent (not just how many). It is used in ATM and Integrated Services networks.
• At connection set-up time, the sender and carrier negotiate a traffic pattern (shape).
• Two traffic shaping algorithms are:
  – Leaky Bucket traffic shaper
  – Token Bucket traffic shaper
Leaky Bucket Traffic Shaper

• Function: Smooths out traffic before transmission.


• Working Mechanism:
• Incoming packets are queued in the bucket.
• Packets are released at a constant rate.
• Instead of discarding excess packets, it buffers them.
• Effect on Traffic:
• Bursty traffic is delayed but not discarded.
• Provides smoother data flow and better Quality of Service (QoS).
• Use Case: Traffic shaping for a steady data rate, preventing congestion
without data loss.
Key Differences

Feature                    | Leaky Bucket Policing Algorithm              | Leaky Bucket Traffic Shaper
Purpose                    | Enforces rate limit, drops excess packets    | Smooths traffic flow, buffers excess packets
Handling of Bursty Traffic | Drops excess packets                         | Buffers and transmits later
Rate of Packet Release     | Constant, but excess is rejected             | Constant, but excess is stored temporarily
Effect on Traffic          | Strict rate enforcement, possible data loss  | Smoothens transmission, avoids packet loss
Use Case                   | Policing traffic, preventing overuse         | Shaping traffic for consistent flow

• Policing = Hard limit (drops extra packets).
• Shaping = Soft smoothing (buffers excess and sends later).
Token Bucket Algorithm
The leaky bucket algorithm enforces output at the average rate, no matter how bursty
the traffic is. To handle heavier traffic without losing data, we need a more flexible
algorithm. One such approach is the token bucket algorithm.
The Token Bucket algorithm is a traffic shaping mechanism used to regulate data transmission
rates while allowing bursts of data within a limit. It ensures that traffic adheres to a specified
rate while permitting temporary bursts when enough tokens are available.
How Does It Work?
1. Tokens Represent Permission to Send Data:
   • A bucket holds tokens (small units of permission to send data).
   • Each token allows the transmission of one packet (or a certain number of bytes).
2. Tokens Are Added at a Fixed Rate:
   • Tokens are added into the bucket at a constant rate (e.g., 10 tokens per second).
   • If the bucket is full, extra tokens are discarded.
3. Sending Data Requires Tokens:
   • A device can only send a packet if enough tokens are available.
   • If tokens are not available, the data must wait or be discarded.
4. Allows Bursts Within Limits:
   • If tokens have accumulated, the system can send a burst of data.
   • However, after a burst, new tokens must be generated before sending more data.
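The steps above can be sketched as a small Python class. This is illustrative only; the rate and capacity values are assumed, and one token per packet is used for simplicity.

```python
# Token-bucket sketch: tokens accrue at a fixed rate up to the bucket
# capacity; sending a packet consumes one token. Accumulated tokens
# allow a burst; without tokens the packet must wait or be discarded.

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum tokens the bucket can hold
        self.tokens = capacity    # start full, so an initial burst is allowed
        self.last_time = 0.0

    def try_send(self, now):
        # Add tokens for elapsed time; excess above capacity is discarded.
        self.tokens = min(self.capacity, self.tokens + (now - self.last_time) * self.rate)
        self.last_time = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # token available: packet may be sent
        return False      # no token: packet waits (or is discarded)

tb = TokenBucket(rate=2, capacity=4)
# Burst of 5 back-to-back packets at t=0: only the 4 accumulated tokens pass.
print([tb.try_send(0.0) for _ in range(5)])  # [True, True, True, True, False]
print(tb.try_send(1.0))  # True: 2 tokens accrued during the 1-second gap
```

Note how the burst of four packets passes immediately, which a leaky-bucket shaper would instead spread out at the constant output rate.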
Feature              | Leaky Bucket Traffic Shaper                                 | Token Bucket Traffic Shaper
Basic Concept        | Controls data flow by allowing a constant rate of output.   | Controls data flow by allowing bursts while maintaining an average rate.
Bucket Content       | Stores packets (data units).                                | Stores tokens.
Rate of Transmission | Constant output rate, regardless of input rate.             | Variable output rate, allowing bursts when tokens are available.
Handling Bursts      | Does not allow bursts; enforces strict rate control.        | Allows bursts if enough tokens are accumulated.
Token Mechanism      | No tokens are used; packets are processed at a fixed rate.  | Tokens accumulate at a fixed rate and are consumed when packets are sent.
Traffic Nature       | Smooths out traffic completely, enforcing strict shaping.   | Smooths traffic but permits bursts when tokens are available.
Implementation Use   | Used in traffic shaping to regulate flow and prevent congestion. | Used in traffic policing to allow flexible data transmission.
Flexibility          | Less flexible; discards excess packets.                     | More flexible; allows short bursts of high-speed transmission.
Traffic Scheduling
Traffic Scheduling in Networking
Traffic scheduling is a technique used in networks to manage data transmission efficiently,
ensuring fair bandwidth allocation, minimizing delays, and prioritizing critical data.
Key Traffic Scheduling Algorithms:
1.First-Come, First-Served (FCFS):
1. Packets are processed in the order they arrive.
2. Simple but does not differentiate between high- and low-priority traffic.
2.Priority Queuing (PQ):
1. Assigns different priority levels to packets.
2. Higher-priority packets are transmitted first.
3. Can lead to starvation of low-priority packets.
3.Weighted Fair Queuing (WFQ):
1. Each flow gets a fair share of bandwidth based on assigned weights.
2. Ensures proportional resource allocation.
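Priority Queuing from the list above can be illustrated with Python's heapq. This is a minimal sketch; the packet names and the convention that priority 0 is highest are assumptions for the example.

```python
# Priority queuing sketch: higher-priority packets are always dequeued
# first (priority 0 = highest here). A sequence number breaks ties so
# packets of equal priority keep FCFS order.
import heapq

queue = []
seq = 0  # monotonically increasing tie-breaker

def enqueue(priority, packet):
    global seq
    heapq.heappush(queue, (priority, seq, packet))
    seq += 1

def dequeue():
    return heapq.heappop(queue)[2]

enqueue(2, "bulk-transfer")
enqueue(0, "voip-frame")    # delay-sensitive traffic, highest priority
enqueue(1, "web-request")

print(dequeue())  # voip-frame
print(dequeue())  # web-request
print(dequeue())  # bulk-transfer
```

As the slide notes, a steady stream of high-priority arrivals would starve the low-priority "bulk-transfer" class under this discipline.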
Closed-Loop Congestion Control:
Closed-loop congestion control mechanisms try to alleviate congestion after it happens.

Several mechanisms have been used by different protocols as follows:

1. Backpressure
The technique of backpressure refers to a congestion control mechanism in which a
congested node stops receiving data from the immediate upstream node or nodes. This
may cause the upstream node or nodes to become congested, and they, in turn, reject
data from their upstream node or nodes, and so on. Backpressure is a node-to-node
congestion control that starts with a node and propagates, in the opposite direction of
data flow, to the source. The backpressure technique can be applied only to virtual-circuit
networks, in which each node knows the upstream node from which a flow of data is
coming. Figure 24.6 shows the idea of backpressure.
2. Choke Packet

A choke packet is a packet sent by a node to the source to inform it of congestion.

Note the difference between the backpressure and choke packet methods. In backpressure,
the warning is from one node to its upstream node, although the warning may
eventually reach the source station. In the choke packet method, the warning is from the
router that has encountered congestion to the source station directly. The intermediate
nodes through which the packet has traveled are not warned. We have seen an example
of this type of control in ICMP: when a router in the Internet is overwhelmed with IP
datagrams, it may discard some of them, but it informs the source.
3. Implicit Signaling
In implicit signaling, there is no communication between the congested node or nodes
and the source. The source guesses that there is congestion somewhere in the network
from other symptoms. For example, when a source sends several packets and there is
no acknowledgment for a while, one assumption is that the network is congested. The
delay in receiving an acknowledgment is interpreted as congestion in the network; the
source should slow down.

4. Explicit Signaling
The node that experiences congestion can explicitly send a signal to the source or
destination.
The explicit signaling method, however, is different from the choke packet
method. In the choke packet method, a separate packet is used for this purpose; in the
explicit signaling method, the signal is included in the packets that carry data. Explicit
signaling, as we will see in Frame Relay congestion control, can occur in either the
forward or the backward direction.
Backward Signaling: A bit can be set in a packet moving in the direction opposite
to the congestion. This bit can warn the source that there is congestion and that it
needs to slow down to avoid the discarding of packets.
Forward Signaling: A bit can be set in a packet moving in the direction of the
congestion. This bit can warn the destination that there is congestion. The receiver
in this case can use policies, such as slowing down the acknowledgments, to
alleviate the congestion.
Congestion Control in TCP:
TCP uses a closed-loop congestion control mechanism to dynamically adjust
its transmission rate based on network conditions.
Congestion Window policy
The sender has two pieces of information: the receiver-advertised window size and
the congestion window size. The actual size of the window is the minimum of these two.
Actual window size= minimum (rwnd, cwnd)
The receiver window (rwnd) is the value advertised by the opposite end in a segment containing
acknowledgment. It is the number of bytes the other end can accept before its buffer overflows and data
are discarded

The congestion window (cwnd) is a value determined by the network to avoid congestion.
TCP has a mechanism for congestion control.
This mechanism is applied at the sender.
The key mechanisms in TCP congestion control include:

1.Slow Start: Starts with a small congestion window (CWND) and increases exponentially.

2.Congestion Avoidance: Uses additive increase to prevent excessive growth.

3.Fast Retransmit: Detects packet loss using duplicate ACKs.

4.Fast Recovery: Adjusts CWND without resetting to 1 MSS.


TCP Congestion Policy
•TCP handles congestion in three phases: Slow Start, Congestion Avoidance,
and Congestion Detection.
•Slow Start Phase:
• The sender starts with a very small data transmission rate.
• The data rate increases rapidly until it reaches a threshold.
•Congestion Avoidance Phase:
• Once the threshold is reached, the sender slows down the data rate to
prevent congestion.
•Congestion Detection Phase:
• If congestion is detected, TCP reduces the transmission rate.
• Depending on how congestion is detected, TCP either returns to Slow
Start or Congestion Avoidance.
Slow Start (Exponential Growth)
•Initially, the congestion window (cwnd) starts with one Maximum Segment
Size (MSS).
•Each time an acknowledgment is received, cwnd increases by 1 MSS.
•This causes exponential growth in the data transmission rate.
•Example:
• Start with cwnd = 1 MSS (send 1 segment).
• After acknowledgment, cwnd = 2 MSS (send 2 segments).
• After the next acknowledgments, cwnd = 4 MSS, then 8 MSS, and so on.
•The process continues until congestion is detected or a threshold is reached.
Congestion Avoidance (Additive Increase)
•When TCP reaches a certain threshold, it switches from exponential
growth to a slower, controlled increase.
•This phase helps prevent congestion before it occurs.
•Instead of doubling the congestion window (cwnd) like in Slow Start,
TCP increases cwnd gradually.
•How it works:
• After each full round of acknowledgments (when all sent data is
acknowledged), cwnd increases by 1 MSS instead of doubling.
• This results in a steady, linear increase rather than exponential
growth.
•Goal: To keep increasing data transmission while avoiding sudden
congestion.
Congestion Detection (Multiplicative Decrease)
•When TCP detects congestion, it reduces the congestion window size to control
data flow.
•Congestion detection happens in two ways:
• Timeout Occurs (Strong Reaction)
• No acknowledgment received, meaning severe congestion.
• Action Taken:
• Reduce the congestion threshold to half of the current window
size.
• Set congestion window (cwnd) to 1 MSS (restart transmission
cautiously).
• Restart Slow Start to build up the window size again.
• Three Duplicate ACKs (Weaker Reaction)
• Some segments still reach the receiver, meaning mild congestion.
• Action Taken:
• Reduce the congestion threshold to half of the current window
size.
• Set cwnd = threshold (some implementations add 3 MSS).
• Restart Congestion Avoidance instead of Slow Start.
•Overall Goal: Adjust transmission rate based on congestion severity to maintain
network stability.
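The three phases above can be illustrated with a toy simulation in Python. This is a sketch in MSS units only, not a full TCP implementation; the event sequence and initial threshold are invented for illustration.

```python
# Toy model of TCP congestion control (cwnd and ssthresh in MSS units):
# - slow start: cwnd doubles per round until it reaches ssthresh
# - congestion avoidance: cwnd grows by 1 MSS per round (additive increase)
# - timeout: ssthresh = cwnd / 2, cwnd restarts at 1 (multiplicative decrease)
# - three duplicate ACKs: ssthresh = cwnd / 2, cwnd = ssthresh (weaker reaction)

def tcp_round(cwnd, ssthresh, event):
    if event == "timeout":              # strong reaction: restart slow start
        return 1, max(cwnd // 2, 2)
    if event == "3dupacks":             # weaker reaction: restart congestion avoidance
        new_thresh = max(cwnd // 2, 2)
        return new_thresh, new_thresh
    if cwnd < ssthresh:                 # slow start: exponential growth
        return min(cwnd * 2, ssthresh), ssthresh
    return cwnd + 1, ssthresh           # congestion avoidance: additive increase

cwnd, ssthresh = 1, 8
trace = []
for event in ["ack", "ack", "ack", "ack", "ack", "timeout", "ack"]:
    cwnd, ssthresh = tcp_round(cwnd, ssthresh, event)
    trace.append(cwnd)
print(trace)  # [2, 4, 8, 9, 10, 1, 2]
```

The trace shows exponential growth up to the threshold (1→2→4→8), linear growth afterwards (9, 10), and the reset to 1 MSS after the timeout.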
There are several TCP congestion control algorithms used today:
• TCP Tahoe: Resets CWND to 1 MSS after packet loss.
• TCP Reno: Introduces Fast Recovery for better performance.
• TCP New Reno: Further improves retransmission efficiency.
• Cubic TCP: Used in modern high-speed networks.
Congestion control in Frame relay
Congestion in Frame Relay
Frame Relay is a packet-switched network protocol that efficiently handles bursty traffic. However,
congestion can occur when network resources (bandwidth, buffers, processing power) are
insufficient to handle incoming traffic.
Causes of Congestion in Frame Relay:
1. Traffic Overload: Too many frames are transmitted, exceeding network capacity.
2. Insufficient Bandwidth: The available bandwidth is not enough for peak traffic loads.
3. Slow Downstream Links: If one link is slower than another, it creates bottlenecks.
4. Buffer Overflow: If routers or switches cannot store incoming frames, they may drop packets.
Congestion Indications in Frame Relay:
Frame Relay uses explicit congestion notification mechanisms:
• FECN (Forward Explicit Congestion Notification): Set by a Frame Relay switch to inform the receiver
that congestion is occurring along the path.
• BECN (Backward Explicit Congestion Notification): Sent in the reverse direction to inform the
sender to slow down transmission.
• DE (Discard Eligibility) Bit: Frames marked with DE can be dropped first in case of congestion.
Aspect                 | Congestion in TCP                                                                           | Congestion in Frame Relay
Protocol Type          | Connection-oriented, reliable (IP networks).                                                | Connection-oriented, but does not guarantee reliability.
Cause of Congestion    | Packet loss due to network overload, buffer overflow, or insufficient bandwidth.            | Frame loss due to high traffic, buffer overflow, and limited bandwidth.
Congestion Control     | Uses mechanisms like Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery.  | Uses FECN, BECN, and DE bits for congestion notification.
Reaction to Congestion | Reduces transmission rate dynamically (e.g., TCP window size adjustment).                   | Relies on applications and upper layers to adjust traffic flow based on congestion notifications.
Packet Loss Handling   | Retransmits lost packets automatically (ensuring reliable delivery).                        | Does not retransmit frames (higher-layer protocols must handle it).
Quality of Service (QoS) in Networking:
Quality of Service (QoS) refers to the ability of a network to manage traffic and
provide differentiated service levels to ensure reliable performance for critical
applications. It controls network parameters such as bandwidth, latency, jitter, and
packet loss to meet the requirements of various services like video streaming, VoIP,
and online gaming.

Key Objectives of QoS


Guaranteed Performance – Ensuring smooth service for real-time applications.
Efficient Resource Allocation – Managing bandwidth and prioritizing traffic.
Traffic Prioritization – Assigning higher priority to delay-sensitive applications.
Congestion Management – Preventing excessive traffic from degrading
performance.
Key QoS Parameters

Parameter       | Definition                                                        | Impact on Network Performance
Bandwidth       | The maximum data rate a network can handle.                       | Insufficient bandwidth leads to slow speeds and buffering.
Latency (Delay) | The time taken for a packet to travel from source to destination. | High latency affects real-time applications like VoIP.
Jitter          | Variability in packet arrival time.                               | Causes distortion in audio/video communication.
Packet Loss     | Percentage of packets that do not reach their destination.        | Leads to degraded voice and video quality.
Throughput      | Actual data transfer rate in a network.                           | Lower throughput results in slower downloads and streaming.
QoS Mechanisms
1. Traffic Classification & Marking
Identifies packets based on protocol, source, or destination.
2. Traffic Shaping & Policing
Traffic Shaping: Smooths out bursty traffic to control flow rates.
Traffic Policing: Drops or marks packets that exceed the allowed bandwidth.
Example: Token Bucket Algorithm for rate limiting.
3. Congestion Management
Uses queuing techniques to manage network traffic efficiently.
FIFO (First-In, First-Out) – Simple, but no prioritization.
Priority Queuing (PQ) – High-priority traffic is always processed first.
Weighted Fair Queuing (WFQ) – Assigns bandwidth proportionally.
4. Congestion Avoidance
• Random Early Detection (RED): Drops packets randomly before congestion occurs.
Explicit Congestion Notification (ECN): Alerts sender about congestion without
dropping packets.
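Random Early Detection from the list above can be sketched as follows. The minimum/maximum queue thresholds and the maximum drop probability are assumed example values, not from the slides.

```python
# RED sketch: drop probability is 0 below the minimum queue threshold,
# rises linearly between the two thresholds, and is 1 above the maximum.
import random

MIN_TH, MAX_TH, MAX_P = 5, 15, 0.1  # illustrative parameters

def red_drop_probability(avg_queue_len):
    if avg_queue_len < MIN_TH:
        return 0.0                    # below min threshold: never drop
    if avg_queue_len >= MAX_TH:
        return 1.0                    # above max threshold: always drop
    # Linear ramp between the thresholds.
    return MAX_P * (avg_queue_len - MIN_TH) / (MAX_TH - MIN_TH)

def should_drop(avg_queue_len):
    # Drop packets randomly *before* the queue is full, so TCP senders
    # back off early instead of all at once.
    return random.random() < red_drop_probability(avg_queue_len)

print(red_drop_probability(3))    # 0.0
print(red_drop_probability(10))   # 0.05
print(red_drop_probability(20))   # 1.0
```

Because drops begin before the buffer overflows, sources slow down gradually rather than synchronizing their backoff.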
APPLICATION LAYER Protocol

BOOTP (Bootstrap Protocol)


• BOOTP (Bootstrap Protocol) is a network protocol used to assign IP
addresses and other network configuration settings to diskless
computers and network devices. It operates at the Application Layer
(Layer 7) of the OSI model and uses UDP (User Datagram Protocol)
on port 67 (server) and port 68 (client).
Key Features of BOOTP:
1.IP Address Assignment:
1. BOOTP allows a client (e.g., diskless workstation) to request an IP
address from a BOOTP server.
2. The server assigns a static IP address from a predefined list.
2.Other Configuration Information:
1. Along with the IP address, BOOTP provides:
1.Subnet mask
2.Default gateway
3.DNS server address
3.Diskless Booting Support:
1. Originally designed for diskless workstations to boot using network
resources.
2. The BOOTP server provides the TFTP (Trivial File Transfer Protocol)
server address to download the bootloader.
4.Uses UDP (Connectionless Protocol):
1. BOOTP clients send requests using UDP to reduce overhead and
simplify communication.
BOOTP Process
1. Client Broadcasts Request:
   • The BOOTP client sends a broadcast message (DISCOVER) on UDP port 68
     requesting an IP address.
2. BOOTP Server Responds:
   • The BOOTP server checks its database and assigns an IP address to the client.
   • It sends a reply (OFFER) on UDP port 67 with IP details and boot file information.
3. Client Receives Configuration:
   • The client configures itself using the provided IP and other settings.
   • If diskless, it downloads the OS boot file from the TFTP server.
DHCP (Dynamic Host Configuration Protocol)
• DHCP (Dynamic Host Configuration Protocol) is a network protocol
used to automatically assign IP addresses and other network
configuration settings to devices on a network. It is an improvement
over BOOTP and operates at the Application Layer (Layer 7) of the
OSI model, using UDP (port 67 for the server and port 68 for the
client).
Key Features of DHCP:
1.Automatic IP Address Assignment
1. DHCP dynamically assigns IP addresses to devices, eliminating the need for manual
configuration.
2. Supports dynamic, static, and automatic allocation.
2.Additional Network Configuration
1. Along with IP addresses, DHCP provides:
1. Subnet mask
2. Default gateway
3. DNS server
4. WINS server
3.Lease-Based Address Allocation
1. IP addresses are assigned for a specific lease duration.
2. When the lease expires, the address can be renewed or reassigned.
4.Relay Agent Support
1. Allows DHCP servers to assign IPs across different subnets using a relay agent.
5.Scalability and Efficiency
1. Suitable for large networks, reducing the administrative burden of manually assigning IPs.
DHCP Process
1. DHCPDISCOVER (Client to Server - Broadcast)
   • The client broadcasts a DHCPDISCOVER message to find available DHCP servers.
2. DHCPOFFER (Server to Client - Unicast/Broadcast)
   • DHCP servers respond with a DHCPOFFER, offering an IP address and
     configuration settings.
3. DHCPREQUEST (Client to Server - Broadcast)
   • The client selects one offer and requests the IP address using a DHCPREQUEST
     message.
4. DHCPACK (Server to Client - Unicast/Broadcast)
   • The server confirms the allocation with a DHCPACK message, and the client
     configures itself with the provided details.
Feature               | BOOTP (Bootstrap Protocol)                                      | DHCP (Dynamic Host Configuration Protocol)
Function              | Assigns static IP addresses from a predefined list              | Assigns dynamic, automatic, or static IP addresses
IP Address Allocation | Pre-configured (manual mapping of MAC to IP)                    | Dynamic and lease-based (IP addresses can change)
Flexibility           | Limited (requires manual configuration)                         | More flexible (supports dynamic IP assignment, renewal, and reallocation)
Lease Mechanism       | Permanent IP assignment                                         | Temporary lease-based assignment (can be renewed or released)
Additional Parameters | Provides only IP address, default gateway, and subnet mask      | Provides IP address, subnet mask, gateway, DNS, WINS, TFTP server, and more
Communication Method  | Uses UDP ports 67 (server) and 68 (client)                      | Also uses UDP ports 67 and 68, but with more advanced messaging
Relay Agent Support   | Limited to same subnet (cannot assign IPs across routers)       | Supports DHCP relay agents for assigning IPs across different subnets
Usage                 | Used for diskless workstations that need to boot from a network | Used for modern networks to dynamically assign IPs to devices
Scalability           | Suitable for small networks                                     | Suitable for large networks
Security              | Less vulnerable (static mapping)                                | More vulnerable to rogue DHCP attacks (requires security measures)
DNS- Domain Name System

The Domain Name System (DNS) is a critical part of the Internet that translates
human-readable domain names (e.g., www.google.com) into machine-readable
IP addresses (e.g., 142.250.183.206). Without DNS, users would have to
memorize complex numerical IP addresses for every website they visit.

Need for DNS

Initially, a host file was used for mapping names to IP addresses, but as
the internet grew, maintaining a single host file became impractical. DNS
was introduced as a scalable and decentralized system for name
resolution.
• There are two types of name spaces:
  • Flat Name Space – Simple names with no hierarchical structure (not scalable).
  • Hierarchical Name Space – Names are organized in levels (e.g., www.example.com).
Label
Each node in the tree has a label, which is a string with a maximum of 63
characters. The root label is a null string (empty string). DNS requires that
children of a node(nodes that branch from the same node) have different
labels, which guarantees the
uniqueness of the domain names.

Domain Name
Each node in the tree has a domain name. A full domain name is a sequence
of labels separated by dots (.). The domain names are always read from the
node up to the root.The last label is the label of the root (null). This means
that a full domain name always ends in a null label, which means the last
character is a dot because the null string is nothing. Figure 25.3 shows some
domain names.
Fully Qualified Domain Name (FQDN)
•A Fully Qualified Domain Name (FQDN) is a complete domain name that uniquely identifies a
host on the internet.
•It contains all labels, from the most specific (hostname) to the most general (root).
•The name must end with a null label, represented by a dot (.) at the end.
•Example: challenger.atc.tbda.edu.
•challenger → Hostname
•atc → Subdomain (Advanced Technology Center)
•tbda → Subdomain of edu
•edu → Top-Level Domain (TLD)
•. → Root
•A DNS server can only match an FQDN to an IP address for resolution.
Partially Qualified Domain Name (PQDN)

•A Partially Qualified Domain Name (PQDN) does not include the full hierarchy up to the root.
•It starts from a node but does not end at the root (.).
•Used when the name belongs to the same site as the client, allowing the resolver to add the
missing part (suffix).
•Example:
•A user at jhda.edu. types challenger
•The DNS resolver adds the suffix → challenger.atc.jhda.edu.
•DNS clients typically hold a list of suffixes to complete PQDNs into FQDNs automatically.
•The null suffix (.) is added when an FQDN is explicitly defined.
DNS Servers and Zones
•Root Servers: Handle requests for top-level domains.
•TLD Servers: Store records for each top-level domain (e.g.,
.com, .org).
•Authoritative Name Servers: Store records for specific
domains.
•Recursive Resolvers: Handle user queries and forward
requests if needed​.
•Zones: A DNS server is responsible for a particular zone,
which is a portion of the domain name space​
Zone in DNS
A DNS Zone is a part of the Domain Name System (DNS) that a specific DNS
server manages. Think of it like a section in a big address book, where one
group is responsible for handling certain addresses (domain names).

Types of DNS Zones (With Simple Examples)


1.Primary Zone (Master Zone)
•The main zone where all domain name records are stored.
•Example: The primary zone for example.com contains records like:
•www.example.com → 192.168.1.10 (A Record)
•mail.example.com → 192.168.1.20 (MX Record)
2.Secondary Zone (Backup Zone)
•A copy of the primary zone, used as a backup.
•If the primary DNS server goes down, the secondary zone still answers queries.
3.Stub Zone
•A shortcut zone that contains only information about another DNS server.
•It helps speed up queries.
4.Forward Lookup Zone
•Converts a domain name (like google.com) into an IP address (like 142.250.183.206).
5.Reverse Lookup Zone
•Does the opposite of a forward lookup zone.
•Converts an IP address into a domain name. Example: 192.168.1.1 → server1.example.com
Types of DNS Domains

• Generic Domains (gTLDs) – .com, .org, .edu, etc.
• Country-Code Domains (ccTLDs) – .us, .in, .de, etc.
• Inverse Domain – Used for mapping an IP address back to a domain
name (PTR records).
Country Domains
• The country domains section uses two-character country abbreviations (e.g., in for
• India). Second labels can be organizational, or they can be more specific,
• state designations. India, for example, uses state abbreviations as a
• subdivision of us (e.g., mh.in.).

Inverse Domain
•Purpose: The Inverse Domain is used to map an IP address to a domain name
(reverse lookup).
•Use Case:
• A server receives a request from a client.
• The server has a list of authorized clients, but only their IP addresses are
stored.
• The server asks DNS to convert the IP address into a hostname to check
authorization.
How Inverse Domain Works
•Uses a special type of DNS query called a Pointer (PTR) query.
•The inverse domain is part of the domain name space with the arpa top-level domain.
•The second-level domain is called in-addr, which stands for inverse address lookup.
•The IP address is written in reverse order, followed by in-addr.arpa.
Example:
•Given IP address: 132.34.45.121 (Class B address, 132.34 is the network ID).
•Reverse lookup format: 121.45.34.132.in-addr.arpa.
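Building the reverse-lookup name from the example above can be expressed in a few lines of Python:

```python
# Constructing the in-addr.arpa name used for a PTR (reverse) query:
# the IPv4 octets are written in reverse order, followed by the
# in-addr.arpa suffix and the trailing root dot.

def ptr_name(ip):
    octets = ip.split(".")
    return ".".join(reversed(octets)) + ".in-addr.arpa."

print(ptr_name("132.34.45.121"))  # 121.45.34.132.in-addr.arpa.
```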

Hierarchy of Inverse Domain


•The structure follows the network ID → subnet ID → host ID
hierarchy.
•Higher-level servers handle network-wide lookups.
•Lower-level servers handle specific subnets.
•This structure looks inverted compared to standard domain
names.
RESOLUTION
Mapping a name to an address or an address to a name is called name-
address resolution

1. Recursive Resolution
• The client (resolver) requests a recursive answer from a DNS server.
• The DNS server is responsible for finding the final answer for the client.
• If the server knows the answer, it responds immediately.
• If it does not know, it forwards the query to another server (usually a
parent server).
• This process continues until an authoritative DNS server provides the
correct answer.
• Once resolved, the answer travels back through the chain to the original
client.
• Advantage: The client does not need to contact multiple servers.
Disadvantage: Increases load on the DNS server.
2. Iterative Resolution
• The client requests information, but the DNS server does not fetch the
final answer.
• If the DNS server knows the answer, it responds.
• If it does not know, it gives the client the IP address of another DNS server
that might have the answer.
• The client is responsible for repeating the query to the next server.
• The process continues until the authoritative server provides the correct
answer.
• Advantage: Reduces load on individual DNS servers.
Disadvantage: The client must handle multiple queries.
3. Caching in DNS
• Caching improves DNS efficiency by storing previously resolved queries.
• When a DNS server gets a response, it stores the mapping in cache memory.
• If another client asks for the same query, the server provides the cached
answer instantly instead of querying again.
• Problem with caching:
• If the cached data becomes outdated, it may return an incorrect IP address.
Solution:
• DNS servers use Time-to-Live (TTL) for each cached record.
• TTL defines how long the DNS server can keep the information before
refreshing it.
• Once TTL expires, the DNS server must request fresh data from the
authoritative server.
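The TTL rule above can be sketched as a small dictionary-based cache in Python. The names, IP addresses, and timestamps are illustrative only.

```python
# DNS-style cache with TTL: a cached answer is served only while its TTL
# has not expired; after that the entry is evicted and the resolver must
# re-query the authoritative server.
import time

cache = {}  # name -> (ip, expiry_timestamp)

def cache_put(name, ip, ttl, now=None):
    now = time.time() if now is None else now
    cache[name] = (ip, now + ttl)

def cache_get(name, now=None):
    now = time.time() if now is None else now
    entry = cache.get(name)
    if entry and now < entry[1]:
        return entry[0]      # still fresh: answer from cache
    cache.pop(name, None)    # expired (or absent): force a fresh lookup
    return None

cache_put("example.com", "192.168.1.10", ttl=300, now=1000)
print(cache_get("example.com", now=1100))  # 192.168.1.10 (within TTL)
print(cache_get("example.com", now=1400))  # None (TTL expired at t=1300)
```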
When a user types www.example.com, the following steps occur:

Step-by-Step DNS Resolution


1.User Request (Local DNS Query)
•The user's device checks its local DNS cache to see if it already knows the IP.
2.Recursive DNS Resolver
•If not found, the request goes to the Recursive Resolver (usually provided by an ISP).
3.Root Name Server
•The resolver queries the root server, which directs it to the TLD server.
4.TLD Name Server
•The .com TLD server responds with the address of the Authoritative Name Server for
example.com.
5.Authoritative Name Server
•The actual IP address of www.example.com is returned.
6.Final Response
•The resolver caches the result and provides the IP to the user’s browser​.
Example Query Flow:
www.example.com → Root Server → .com TLD → example.com Name Server → Returns IP
(192.168.1.10)
Types of DNS Records
DNS uses different record types for handling queries.

Record Type  | Purpose                                  | Example
A Record     | Maps domain to an IPv4 address           | example.com → 192.168.1.10
AAAA Record  | Maps domain to an IPv6 address           | example.com → 2001:db8::1
CNAME Record | Creates an alias for another domain      | www.example.com → example.com
MX Record    | Specifies mail servers for email routing | mail.example.com
PTR Record   | Used for reverse DNS lookup              | 192.168.1.10 → example.com
DNS Message and Format
• DNS communication happens through DNS messages, which are
exchanged between clients and servers using the UDP or TCP
protocol over port 53.

1. Types of DNS Messages


There are two types of DNS messages:
1.Query Message
1. Sent by the client (resolver) to request information.
2. Asks for the IP address of a domain (forward lookup) or domain name of
an IP address (reverse lookup).
2.Response Message
1. Sent by the DNS server in reply to a query.
2. Contains the resolved IP address or an error message if the query fails.
2. DNS Message Format

A DNS message consists of five sections:

Section    | Description
Header     | Contains important control information, including the number of queries and responses.
Question   | Holds the domain name the client is querying for.
Answer     | Contains the resolved IP address or other DNS record information.
Authority  | Specifies the authoritative name server for the queried domain.
Additional | Provides extra information, such as additional DNS records that may help resolve the query.
3. DNS Message Header Format
The header is a 12-byte section present in both query and response
messages. It contains:

Field   | Size    | Description
ID      | 16 bits | A unique identifier for the request.
Flags   | 16 bits | Defines the type of query/response (e.g., standard, recursive, or authoritative).
QDCOUNT | 16 bits | Number of questions in the Question section.
ANCOUNT | 16 bits | Number of answers in the Answer section.
NSCOUNT | 16 bits | Number of authority records in the Authority section.
ARCOUNT | 16 bits | Number of additional records in the Additional section.
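The 12-byte header can be packed and unpacked with Python's struct module: six 16-bit big-endian fields. The ID value and flag bits below are a hand-made example query, not a captured packet.

```python
# The DNS header is six unsigned 16-bit fields in network (big-endian)
# byte order: ID, Flags, QDCOUNT, ANCOUNT, NSCOUNT, ARCOUNT.
import struct

# Example query header: recursion-desired flag set, one question.
header = struct.pack("!6H", 0x1a2b, 0x0100, 1, 0, 0, 0)

ident, flags, qd, an, ns, ar = struct.unpack("!6H", header)
print(hex(ident))      # 0x1a2b
print(qd, an, ns, ar)  # 1 0 0 0

# The TC (Truncated) bit in Flags signals that a UDP response exceeded
# 512 bytes and the resolver should retry over TCP.
tc_bit = (flags >> 9) & 1
print(tc_bit)          # 0
```

Checking the TC bit is exactly how a resolver decides to fall back from UDP to TCP, as described later in the encapsulation section.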
Registrars in DNS
• Registrars are commercial entities accredited by ICANN (Internet Corporation
for Assigned Names and Numbers) that manage the registration of new domain
names.
• They ensure that each domain name is unique before adding it to the DNS
database.
• A fee is charged for domain registration and renewal.

How Are New Domains Added to DNS?


1. The organization contacts a registrar (e.g., GoDaddy, Namecheap, Google Domains).
2.The registrar checks if the domain name is available.
3.If available, the registrar registers the domain in the DNS database.
4.The organization provides DNS information, including:
•Domain Name (e.g., ws.wonderful.com).
•Server Name (e.g., ws).
•IP Address (e.g., 200.200.200.5).
5.The domain becomes active and accessible over the internet.
Dynamic Domain Name System (DDNS)
• The Dynamic Domain Name System (DDNS) is an extension of DNS
that allows automatic updates of domain-to-IP mappings without
manual intervention.

Why is DDNS Needed?


•Originally, DNS required manual updates whenever a host was
added, removed, or changed.
•With millions of devices constantly connecting and disconnecting,
manual updates are impractical.
•DDNS automates this process, ensuring real-time updates in the
DNS database.
How DDNS Works?
1.A new host or IP change occurs.
2.DHCP (Dynamic Host Configuration Protocol) assigns a new IP address.
3.The DHCP server sends the updated information to the Primary DNS Server.
4.The Primary DNS Server updates the DNS zone with the new IP address.
5.Secondary DNS Servers are notified of the changes in two ways:
1. Active Notification: The primary server informs secondary servers immediately.
2. Passive Notification: Secondary servers periodically check for updates.
6.Secondary servers perform a zone transfer to synchronize records.
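The steps above can be sketched as a tiny in-memory model: a DHCP event updates the primary server's zone, which actively notifies its secondaries. All class and host names here are illustrative, not part of any real DNS implementation:

```python
# Minimal sketch of the DDNS update/notify flow (illustrative only).

class PrimaryDNS:
    def __init__(self):
        self.zone = {}          # hostname -> IP address
        self.secondaries = []   # registered secondary servers

    def dhcp_update(self, hostname: str, ip: str) -> None:
        """Steps 2-4: DHCP reports a new lease; the zone is updated."""
        self.zone[hostname] = ip
        self.notify_secondaries()

    def notify_secondaries(self) -> None:
        """Step 5 (active notification): push changes immediately."""
        for sec in self.secondaries:
            sec.zone_transfer(dict(self.zone))

class SecondaryDNS:
    def __init__(self, primary: PrimaryDNS):
        self.zone = {}
        primary.secondaries.append(self)

    def zone_transfer(self, zone_copy: dict) -> None:
        """Step 6: synchronize records with the primary."""
        self.zone = zone_copy

primary = PrimaryDNS()
secondary = SecondaryDNS(primary)
primary.dhcp_update("host1.example.com", "192.168.1.50")
print(secondary.zone)  # {'host1.example.com': '192.168.1.50'}
```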
Encapsulation in DNS
• DNS uses both UDP and TCP for communication. The well-known port
number 53 is used by the DNS server in both cases.

1. DNS Over UDP (Default Mode)


•Used when the response size is less than 512 bytes.
•UDP is faster and requires less overhead because it is connectionless.
•Example: A simple domain name resolution query (e.g.,
www.example.com → 192.168.1.10).
2. DNS Over TCP (When Needed)
• Used when the response size is more than 512 bytes (e.g., large DNS
records or zone transfers).
• TCP is connection-oriented and ensures reliable delivery.
• There are two cases where TCP is used:
Case 1: Prior Knowledge of Large Response
• If the resolver already knows the response will be larger than 512 bytes, it
directly uses TCP.
• Example:
• A secondary DNS server requesting a zone transfer from a primary server (zone
data is usually large).
Case 2: UDP First, Then TCP (Fallback Mechanism)
•If the resolver does not know the response size, it first tries UDP.
•If the response exceeds 512 bytes, the DNS server truncates the response and sets the
TC (Truncated) bit in the header.
•The resolver detects this and switches to TCP to request the full response again.
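The fallback check can be sketched in Python: a resolver inspects the TC bit in the flags field of the response header and, if it is set, repeats the query over TCP. The sample headers below are hand-built for illustration:

```python
import struct

TC_BIT = 0x0200  # the Truncated bit within the 16-bit DNS flags field

def is_truncated(response: bytes) -> bool:
    """Return True if the TC (Truncated) bit is set in a DNS response.

    The flags field occupies bytes 2-3 of the 12-byte DNS header.
    """
    (flags,) = struct.unpack("!H", response[2:4])
    return bool(flags & TC_BIT)

# Illustrative headers: a normal response and a truncated one.
normal = struct.pack("!HHHHHH", 1, 0x8180, 1, 1, 0, 0)
truncated = struct.pack("!HHHHHH", 1, 0x8180 | TC_BIT, 1, 0, 0, 0)

# A resolver would retry the same query over TCP when this is True.
print(is_truncated(normal), is_truncated(truncated))  # False True
```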
ELECTRONIC MAIL
Main Components of Email System
The email system consists of three main components that work together to send, transfer, and access
messages efficiently. These components are:
1.User Agent (UA)
2.Message Transfer Agent (MTA)
3.Message Access Agent (MAA)
1. User Agent (UA) – Email Client

The User Agent (UA) is the software or application that allows users to compose, send, receive, and
read emails. It provides the interface between the user and the email system.
• Functions of User Agent:
Composing and sending emails
Receiving and reading emails
Managing inbox, drafts, and sent items
Attaching files to emails
Organizing messages (folders, labels, spam filtering)
Examples of User Agents:
• Web-based Email Clients: Gmail, Yahoo Mail, Outlook Web
• Desktop Email Clients: Microsoft Outlook, Mozilla Thunderbird, Apple Mail
2. Message Transfer Agent (MTA) – Email Server
The Message Transfer Agent (MTA) is responsible for transferring emails from the sender to the recipient. It operates
in the background and handles the actual email delivery process.
Functions of Message Transfer Agent:
Receives outgoing emails from the sender’s User Agent
Determines the recipient’s mail server
Transfers the email using SMTP (Simple Mail Transfer Protocol)
Stores emails temporarily if the recipient’s server is unavailable
Examples of Message Transfer Agents:
•Postfix
•Sendmail
•Microsoft Exchange Server
How It Works:
1.The sender writes an email in a User Agent (e.g., Gmail).
2.The email is sent to an SMTP server (part of the MTA).
3.The MTA forwards the email to the recipient’s mail server.
3. Message Access Agent (MAA) – Email Retrieval System
The Message Access Agent (MAA) allows users to retrieve and access their emails from the email server. It ensures
that emails are delivered from the recipient's mail server to their User Agent.
Functions of Message Access Agent:
Retrieves emails from the mail server
Supports protocols like POP3 (Post Office Protocol) and IMAP (Internet Message Access Protocol)
Manages the synchronization of emails across multiple devices
Protocols Used:
•POP3 (Post Office Protocol v3):
• Downloads emails from the server to the device
• Emails are deleted from the server after download
•IMAP (Internet Message Access Protocol):
• Emails remain stored on the server
• Allows access from multiple devices (smartphones, laptops, etc.)
Examples of Message Access Agents:
•Dovecot
•Cyrus IMAP Server
Mailing List
•A mailing list uses an alias to send messages to multiple recipients.
•When an email is sent to an alias, the system:
•Checks if a mailing list exists.
•Sends individual copies to each member.
•If no alias exists, the email is delivered normally.
•Example: Sending an email to [email protected] sends it to all team members.

MIME (Multipurpose Internet Mail Extensions)


•MIME extends email capabilities beyond ASCII text.
•Allows sending multimedia content like images, audio, and video.
•How it works:
• At the sender's end, MIME encodes non-ASCII data into ASCII format.
• The email is sent through the Internet.
• At the receiver’s end, MIME decodes the data back to its original form.
MIME introduces additional headers to manage different content types:

1.MIME-Version: Identifies MIME version used (e.g., 1.0).


2.Content-Type: Specifies the type of file (e.g., text/html, image/jpeg).
3.Content-Transfer-Encoding: Defines encoding format (Base64, Quoted-Printable).
4.Content-ID: Unique identifier for embedded content (e.g., images in
HTML emails).
5.Content-Description: Short description of the attached file.
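These headers can be seen in action with Python's standard email package — a sketch only: a non-ASCII body forces a non-trivial Content-Transfer-Encoding, illustrating headers 1–3 above:

```python
from email.mime.text import MIMEText

# A body with non-ASCII characters cannot travel as plain 7-bit ASCII,
# so the email package encodes it and sets the MIME headers for us.
msg = MIMEText("Grüße aus München", "plain", "utf-8")
msg["Subject"] = "MIME demo"

print(msg["MIME-Version"])               # 1.0
print(msg["Content-Type"])               # text/plain; charset="utf-8"
print(msg["Content-Transfer-Encoding"])  # base64
```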
Message Transfer Agent: SMTP
• The actual mail transfer is done through message transfer agents. To send mail, a system must
have the client MTA, and to receive mail, a system must have a server MTA.
• The formal protocol that defines the MTA client and server in the Internet is called the Simple
Mail Transfer Protocol (SMTP).
• Works on the store-and-forward model.
• SMTP is a push protocol (emails are pushed from client to server).
• SMTP is used two times: between the sender and the sender's mail server, and between the two mail servers.
• SMTP simply defines how commands and responses must be sent back and forth. Each network is free to choose a software package for implementation.
• Responses are sent from the server to the client. A response is a three-digit code that may be
followed by additional textual information. Table 26.8 lists some of the responses.

As the table shows, responses are divided into four categories. The leftmost digit of
the code (2, 3, 4, and 5) defines the category.

Mail Transfer Phases

The process of transferring a mail message occurs in three phases: connection establishment, mail transfer, and connection termination.
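The reply-code categories and the three phases can be sketched together in Python. The dialogue below shows typical SMTP commands with their usual reply codes (the addresses and host names are hypothetical):

```python
# Classify an SMTP reply code by its leading digit, as described above.
CATEGORIES = {
    "2": "positive completion",
    "3": "positive intermediate",
    "4": "transient negative completion",
    "5": "permanent negative completion",
}

def reply_category(code: str) -> str:
    return CATEGORIES.get(code[0], "unknown")

# Typical dialogue across the three phases (commands, usual reply codes).
dialogue = [
    ("<connect>", "220"),                         # connection establishment
    ("HELO client.example", "250"),
    ("MAIL FROM:<alice@client.example>", "250"),  # mail transfer begins
    ("RCPT TO:<bob@server.example>", "250"),
    ("DATA", "354"),                              # intermediate: send the body
    ("<message body>.", "250"),
    ("QUIT", "221"),                              # connection termination
]

for command, code in dialogue:
    print(f"{command:40s} -> {code} ({reply_category(code)})")
```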
Message Access Agent: POP and IMAP

Role of Message Access Agent (MAA)


• The first two stages of email transmission use SMTP, which is a push
protocol (sends emails from sender to recipient's mail server).
• The third stage (retrieving emails from the server to the user's device)
requires a pull protocol because the client must pull messages from
the mail server.
• Message Access Agent (MAA) facilitates email retrieval.
• The two main Message Access Protocols:
• POP3 (Post Office Protocol, version 3)
• IMAP4 (Internet Message Access Protocol, version 4)
• POP3 (Post Office Protocol, version 3) is a simple and limited email retrieval protocol.
• Used to download emails from the mail server to the client device.
• Once emails are downloaded, they are usually deleted from the server (default behavior).
• Suitable for users accessing email from a single device.
POP3 Components
Client POP3 Software
• Installed on the recipient’s computer.
• Communicates with the POP3 server to retrieve emails.
Server POP3 Software
• Installed on the mail server.
• Stores and manages emails before they are downloaded by the client.
How POP3 Works
1. The client initiates a connection to the server using TCP port 110.
2. The client sends its username and password for authentication.
3. The user can list and retrieve emails one by one.
4. After downloading, emails are either deleted from the server (delete mode) or kept for future
access (keep mode).
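The four steps above can be sketched with Python's standard poplib module. The session function is only defined here, never run (host and credentials are placeholders); the small helper checks POP3's +OK/-ERR status lines:

```python
import poplib

def is_ok(status: bytes) -> bool:
    """POP3 replies begin with +OK (success) or -ERR (failure)."""
    return status.startswith(b"+OK")

def fetch_all(host: str, user: str, password: str) -> list:
    """Sketch of the four-step POP3 session (placeholders, not executed)."""
    conn = poplib.POP3(host)      # step 1: TCP connection to port 110
    conn.user(user)               # step 2: authentication
    conn.pass_(password)
    count, _size = conn.stat()    # step 3: list and retrieve messages
    messages = [conn.retr(i + 1)[1] for i in range(count)]
    # Delete mode would call conn.dele(i + 1) here; keep mode leaves
    # the messages on the server for future access.
    conn.quit()                   # step 4: end the session
    return messages

print(is_ok(b"+OK 2 messages"))        # True
print(is_ok(b"-ERR invalid password")) # False
```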
IMAP4 (Internet Mail Access Protocol, version 4) is a powerful and flexible email
retrieval protocol.
•Unlike POP3, IMAP keeps emails on the server and allows users to manage emails
remotely.
•Ideal for users who access email from multiple devices.
•Works over TCP port 143 (default) and port 993 for secure IMAP (IMAPS) using
SSL/TLS.
How IMAP4 Works
1.The client connects to the email server using TCP port 143.
2.The user logs in with their username and password.
3.Emails are not downloaded; instead, they remain on the server and can be accessed as
needed.
4.Users can view, search, organize, and delete emails directly on the server.
Key Features of IMAP4
1.Check Email Headers Before Downloading
• Users can preview email subject, sender, and size before downloading.
2. Search Email Content on the Server
• Users can search for specific words or phrases in emails without downloading
them.
3.Partial Download of Emails
• Useful when bandwidth is limited or emails contain large multimedia
attachments.
4.Organize Emails on the Server
• Users can create, delete, rename folders on the email server.
• Allows hierarchical folder structure for better email organization.
5. Synchronize Emails Across Multiple Devices
• Emails stay on the server, so they are accessible from multiple devices (PC,
phone, tablet).
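These features can be sketched with Python's standard imaplib module. The function below is only defined, never run (host and credentials are placeholders); it previews message headers server-side without downloading the bodies, illustrating features 1 and 2:

```python
import imaplib

IMAP_PORT, IMAPS_PORT = 143, 993  # plain IMAP vs. IMAPS over SSL/TLS

def preview_inbox(host: str, user: str, password: str) -> None:
    """Sketch of server-side mail access (placeholders, not executed)."""
    conn = imaplib.IMAP4_SSL(host, IMAPS_PORT)
    conn.login(user, password)
    conn.select("INBOX")                    # folders live on the server
    _, data = conn.search(None, "UNSEEN")   # server-side search, no download
    for num in data[0].split():
        # BODY.PEEK fetches only selected headers, leaving the message
        # unread on the server (preview before downloading).
        _, hdr = conn.fetch(num, "(BODY.PEEK[HEADER.FIELDS (FROM SUBJECT)])")
        print(hdr)
    conn.logout()
```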
4. IMAP4 vs. POP3 – Comparison

Feature             IMAP4                                    POP3
Storage             Emails remain on the server              Emails are downloaded and deleted
Access              Access from multiple devices             Access from only one device
Email Organization  Folders and hierarchy on the server      No server-side organization
Synchronization     Emails stay synchronized across devices  No synchronization
Security            More secure (email stays on the server)  Less secure (email stored locally)
Search & Filtering  Can search emails before downloading     Must download to search
Introduction to FTP
• File Transfer Protocol (FTP) is a standard mechanism for transferring files between
hosts in a TCP/IP network.
• Used for uploading and downloading files between a client and a server.
• Addresses compatibility issues related to file names, text formats, and directory
structures between different systems.
• Operates using a client-server model.

How FTP Works


• FTP uses two separate TCP connections between the client and server:
• Control Connection (Port 21): Used for sending commands and responses.
• Data Connection (Port 20): Used for transferring actual files.
• The separation of control and data connections enhances efficiency.
Key Features of FTP
• Supports both active and passive modes for data transfer.
• Allows authentication (username & password) for secure access
• Supports different file transfer modes (ASCII & Binary).
• Facilitates large file transfers across different operating systems.
• Allows directory management (creating, renaming, and deleting files/folders).
FTP Connection Modes
• Active Mode:
• The client opens a command channel on Port 21 and requests a connection.
• The server initiates the data transfer using Port 20.
• Firewall issues may arise since the server initiates the data connection.
• Passive Mode (PASV):
• The client requests a data connection, and the server provides a random
port.
• The client then connects to this port for file transfer.
• Firewall-friendly as the client initiates all connections.
FTP Transfer Modes
1.ASCII Mode: Transfers text files while converting between different character
encodings.
2.Binary Mode: Transfers files without modification, preserving the original
format (used for images, videos, executables).
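Putting the modes together, a passive-mode binary download can be sketched with Python's standard ftplib module. The function is only defined here, never executed (all arguments are placeholders):

```python
import ftplib

def download_binary(host: str, user: str, password: str,
                    remote: str, local: str) -> None:
    """Sketch of a passive-mode binary FTP download (not executed)."""
    ftp = ftplib.FTP(host)      # control connection on port 21
    ftp.login(user, password)   # authentication (username & password)
    ftp.set_pasv(True)          # passive mode: client opens the data conn
    with open(local, "wb") as f:
        # Binary mode preserves bytes exactly (images, executables, ...).
        ftp.retrbinary(f"RETR {remote}", f.write)
    ftp.quit()

print(ftplib.FTP_PORT)  # 21 — the well-known control port
```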
Trivial File Transfer Protocol (TFTP)
Overview
• TFTP (Trivial File Transfer Protocol) is a simplified version of FTP.
• Used for basic file transfers in network environments, such as booting operating
systems or firmware updates.
• Does not require authentication.
• Works on the UDP protocol, making it faster but less reliable than FTP.

How TFTP Works


1.The client sends a request to the TFTP server.
2.The server sends or receives the requested file.
3.TFTP uses UDP port 69 for communication.
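The request in step 1 is tiny, which is what makes TFTP "trivial". A read request (RRQ) is just a 2-byte opcode followed by the filename and transfer mode as NUL-terminated strings, per the RFC 1350 layout; the filename below is illustrative:

```python
import struct

OPCODES = {"RRQ": 1, "WRQ": 2, "DATA": 3, "ACK": 4, "ERROR": 5}

def build_rrq(filename: str, mode: str = "octet") -> bytes:
    """Build a TFTP read request: opcode 1, then filename and transfer
    mode, each as a NUL-terminated ASCII string (RFC 1350)."""
    return (struct.pack("!H", OPCODES["RRQ"])
            + filename.encode("ascii") + b"\x00"
            + mode.encode("ascii") + b"\x00")

pkt = build_rrq("boot.img")
# The client would send this datagram to UDP port 69 on the server.
print(pkt)  # b'\x00\x01boot.img\x00octet\x00'
```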
Key Differences: FTP vs. TFTP

Feature         FTP                                     TFTP
Protocol Used   TCP                                     UDP
Port Number     21 (Control), 20 (Data)                 69
Authentication  Required                                Not required
Security        Can use FTPS/SFTP                       Less secure
Reliability     Reliable (error-checking)               Unreliable (no error-checking)
Use Case        Large file transfers, website mgmt.     Firmware updates, network booting
When to Use FTP vs. TFTP?

Use FTP when authentication, security, and file management are required.

Use TFTP for fast, lightweight, and automatic file transfers (e.g., network booting).
WWW

WWW Architecture
• The World Wide Web (WWW) is a distributed system that provides access
to interconnected documents and resources via the Internet.
• Based on the client-server model where web browsers (clients) request
resources from web servers.
• Uses the Hypertext Transfer Protocol (HTTP/HTTPS) for communication.
• Web resources are identified by Uniform Resource Locators (URLs).
• Consists of three key components:
• Web Clients (Browsers) – Display web pages (e.g., Chrome, Firefox, Edge).
• Web Servers – Store and serve web pages (e.g., Apache, Nginx, IIS).
• Web Documents – Static (HTML, CSS) or dynamic (JavaScript, PHP).
Client (Browser)
• Web browsers are software applications that interpret and display web documents.
• Various commercial browsers exist, including Google Chrome, Mozilla Firefox, Microsoft Edge,
and Safari.
• Architecture of a Web Browser:
• Controller:
• Receives input from the keyboard or mouse.
• Uses client programs to access web documents.
• Client Protocol:
• Handles communication with web servers using protocols like HTTP, HTTPS, and FTP.
• Interpreters:
• Process different types of web content:

HTML Interpreter – Renders structured web pages.
Java Interpreter – Executes Java applets.
JavaScript Interpreter – Runs dynamic scripts for interactivity.
• The controller fetches the web document, the client protocol retrieves it, and the appropriate
interpreter renders the content.
Server
• The Web page is stored at the server.
• Each time a client request arrives, the corresponding document is sent to the client.
• To improve efficiency, servers store requested files in a cache in memory, as memory is faster to access than
disk.
• A server can become more efficient through multithreading or multiprocessing.
• With multithreading/multiprocessing, a server can handle multiple requests simultaneously.
Uniform Resource Locator (URL)
• A client that wants to access a Web page needs the address. To facilitate the access
of documents distributed throughout the world, HTTP uses locators. The uniform
resource locator (URL) is a standard for specifying any kind of information on the
Internet.
• The URL defines four things: protocol, host computer, port, and path.
• General form: protocol://hostname:port/path?query_string#fragment
•Components of a URL:
1.Protocol: Specifies the communication method (e.g., HTTP, HTTPS,
FTP).
2.Hostname (Domain): Identifies the web server (e.g.,
www.example.com).
3.Port (Optional): Default is 80 for HTTP, 443 for HTTPS.
4.Path: Specifies the file location (e.g., /about.html).
5.Query String (Optional): Contains parameters for dynamic content
(?id=123).
6.Fragment (Optional): A specific section within a page (#section1).
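The six components listed above map directly onto Python's standard urllib.parse module; the URL below reuses the example values from the list:

```python
from urllib.parse import urlparse, parse_qs

url = "https://www.example.com:443/about.html?id=123#section1"
parts = urlparse(url)

print(parts.scheme)           # https            (1. protocol)
print(parts.hostname)         # www.example.com  (2. hostname)
print(parts.port)             # 443              (3. port)
print(parts.path)             # /about.html      (4. path)
print(parse_qs(parts.query))  # {'id': ['123']}  (5. query string)
print(parts.fragment)         # section1         (6. fragment)
```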
Cookies
• Cookies are small pieces of data stored on a user's browser by a web server.
• Used to store user preferences, session data, and tracking information.
• Types of Cookies:
• Session Cookies: Temporary; deleted when the browser is closed.
• Persistent Cookies: Stored for a specified duration, even after closing the browser.
• Third-party Cookies: Set by external domains (used for ads & tracking).
• Common Uses of Cookies:
User authentication (e.g., remembering login details).
Personalizing user experience (e.g., language preferences).
Tracking user activity for analytics and targeted ads.
• Privacy Concerns:
Can be used for tracking user behavior without consent.
Some browsers block third-party cookies to enhance privacy.
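How a server builds its Set-Cookie headers can be sketched with Python's standard http.cookies module: a session cookie carries no expiry, while a Max-Age attribute makes a cookie persistent (the names and values below are illustrative):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "abc123"    # session cookie: gone when browser closes
cookie["lang"] = "en"
cookie["lang"]["max-age"] = 86400  # persistent cookie: kept for one day

session_header = cookie["session_id"].OutputString()
lang_header = cookie["lang"].OutputString()
print(session_header)  # session_id=abc123
print(lang_header)     # lang=en; Max-Age=86400
```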
Web Document
• A Web document is a digital file stored on a web server and accessed through a web
browser.
• Web documents are primarily written in HTML (Hypertext Markup Language) but
can also include other formats like CSS, JavaScript, images, and multimedia files.
• Web documents can be static (fixed content) or dynamic (content generated in real-
time using scripts or databases).
• A static web document is delivered to the user exactly as stored on the server.
• A dynamic web document is generated by a web application, often using languages
like PHP, JavaScript, or Python.
• Web documents are identified and accessed using a Uniform Resource Locator
(URL).
• A web document may include hyperlinks, allowing users to navigate to other pages
or resources on the web.
1. What is HTML?
HTML stands for Hypertext Markup Language. It is the basic language used to create webpages.
Think of HTML as the building blocks of a webpage, just like bricks are the building blocks of a
house.
• 2. Why is HTML Important?
Every webpage you see on the internet is built using HTML. It structures the content by defining
headings, paragraphs, images, links, and more.

HTML is a Markup Language


•Unlike programming languages (like Python or Java), HTML does not perform
calculations or logic.
•It is only used to structure and display content on webpages.
2. HTML Works with Tags
•HTML uses tags to tell the browser how to display the content.
•Syntax: <tagname> Content </tagname>
•Example (HTML):
•<p>This is a paragraph.</p>
•<p> is the opening tag.
•</p> is the closing tag.
•The text in between appears as a paragraph on the webpage.
•HTTP (Hypertext Transfer Protocol)
•HTTP is a protocol primarily used for accessing data on the World Wide
Web.
•Combines functionalities of FTP and SMTP:
•Similar to FTP as it transfers files and uses TCP services.
•Simpler than FTP as it uses only one TCP connection (no separate control
connection).
•Similar to SMTP as the data format is controlled by MIME-like headers.
•Key Differences from SMTP:
•HTTP messages are not meant to be read by humans; they are processed by
web browsers and servers.
•Unlike SMTP, HTTP messages are delivered immediately instead of being
stored and forwarded.
•Message Format:
•Request Message: Contains client commands embedded in the request.
•Response Message: Contains requested file contents or other information.
•Uses TCP on well-known port 80.
• Common HTTP methods:
• GET: Requests data from a resource.
• POST: Sends data to be processed by the server.
• PUT: Updates an existing resource.
• DELETE: Removes a resource.

• HTTP response codes:


• 200 OK: Request was successful.
• 404 Not Found: The requested resource does not exist.
• 500 Internal Server Error: Server encountered an error processing the request.
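As a sketch, Python's standard http.client module can issue such a request, and the http.HTTPStatus enum names the response codes listed above. The GET helper is only defined here, never run (its host argument would be a placeholder):

```python
from http import HTTPStatus
import http.client

def get(host: str, path: str = "/"):
    """Sketch of an HTTP GET over TCP port 80 (not executed here)."""
    conn = http.client.HTTPConnection(host, 80)
    conn.request("GET", path)          # request message with the GET method
    resp = conn.getresponse()          # response message from the server
    body = resp.read()
    conn.close()
    return resp.status, body

# The standard library names the common response codes:
print(int(HTTPStatus.OK), HTTPStatus.OK.phrase)        # 200 OK
print(int(HTTPStatus.NOT_FOUND), HTTPStatus.NOT_FOUND.phrase)
print(int(HTTPStatus.INTERNAL_SERVER_ERROR))           # 500
```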
Thank YOU
