
Question Bank (Modules 3 & 4)

Module 3:
2 marks

Outline the main responsibility of a switch device.

Outline the main responsibility of a bridge device.
Outline the main responsibility of a router device.
Identify three ways to handle headers for source routing.
List the attractive properties of switch devices.
Distinguish classful addressing from classless addressing.
List the different approaches used to forward data from source to destination in a switched
network.
Draw the header format of Asynchronous Transfer Mode (ATM). (pg. 184)
What is source routing in a switched network? (pg. 186)
Write any two challenges for mobile networking.
Give an illustration of learning bridges. (pg. 190)
Discuss IP datagram fragmentation and reassembly. (pg. 210)
Draw the address formats of Class A, Class B, and Class C. (pg. 215)
List the main points to bear in mind while forwarding an IP datagram. (pg. 216)
What is a subnet address?
What is subnetting?
What is classless addressing?

10 marks

Explain characteristics of datagram and virtual circuit switching.


Characteristics of Datagram Networks (Connectionless Switching)

1. No Setup Required: Hosts can send packets anytime without prior connection setup.
Packets are forwarded immediately based on the forwarding table.
2. Independent Routing: Each packet is forwarded independently, meaning two packets
from the same source to the same destination may take different paths.
3. No Delivery Guarantee: The sender has no assurance that the packet will be delivered
or that the destination is active, as no acknowledgment is built-in.
4. Highly Scalable: Datagram networks do not require a dedicated path, making them
suitable for large-scale and dynamic environments like the internet.
5. Failure Tolerance: If a switch or link fails, packets can take alternate routes, ensuring
continued communication without major disruptions.
Characteristics of Virtual Circuit Switching (Connection-Oriented Switching)

1. Predefined Path: A logical connection is established before data transfer, ensuring all
packets follow the same path.
2. Reliable & Ordered Delivery: Packets arrive in sequence since they travel through the
same path, reducing the need for reordering.
3. Resource Reservation: Bandwidth and switch capacity are reserved during connection
setup, ensuring stable performance.
4. Lower Overhead After Setup: Once established, packets carry a virtual circuit
identifier instead of a full destination address, reducing processing time.
5. Failure Requires Re-establishment: If a switch or link fails, the entire virtual circuit is
disrupted, requiring a new connection setup.

Discuss the connectionless (datagram) approach in switched networks. (Refer to the answer above.)

In a connectionless network, packets are transmitted independently without requiring a
pre-established communication path between the sender and receiver. This approach is
commonly used in datagram networks, such as the Internet using IP (Internet Protocol).

A connectionless network allows data packets to be sent independently without establishing a
dedicated path. Each packet is routed separately, and they may take different paths to the
destination. This approach provides flexibility but does not guarantee order or delivery.

Examples:

• IP Networks (Internet Protocol): Each packet is forwarded independently, making it
efficient for large-scale communication like web browsing and email.
• UDP (User Datagram Protocol): Used in real-time applications such as video streaming
and online gaming, where speed is prioritized over reliability.
• Ethernet Networks: Frames are transmitted without pre-established connections,
making LAN communication fast and efficient.

Discuss the connection-oriented approach in switched networks with the help of examples. (pg. 207)
A connection-oriented network requires establishing a dedicated communication
path between sender and receiver before data transmission begins. All packets
follow the same route, ensuring ordered and reliable delivery.
Examples of Connection-Oriented Networks:

• Telephone Networks: Traditional circuit-switched telephone networks establish a
dedicated connection before communication starts.
• TCP in the Internet: The TCP protocol ensures reliable, ordered data transfer in
applications like web browsing, file transfers (FTP), and email (SMTP, IMAP,
POP3).
• MPLS (Multiprotocol Label Switching): A networking technology that establishes
predefined label-switched paths for efficient and reliable data forwarding.

Connection-oriented networks are best suited for applications requiring high reliability and
sequential data transfer, such as video conferencing, online banking, and voice calls.
Discuss source routing in a switched network. (pg. 219)
Source Routing in a Switched Network

Source routing is a switching technique in which the entire route a packet must take through the
network is determined by the source host. Instead of relying on switches to determine forwarding
paths dynamically, the packet itself carries all necessary routing information in its header.

How It Works:

1. The source host includes an ordered list of switch ports in the packet header.
2. Each switch reads the next port number in the header and forwards the packet
accordingly.
3. Some implementations rotate or strip the header entries after each hop to ensure the
correct path is followed.
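
The rotation variant in step 3 can be sketched in Python. This is an illustrative sketch only: the packet is modelled as a plain dictionary and send_on_port is a hypothetical transmit primitive, not a real API.

def forward(packet, send_on_port):
    # packet["route"] is the ordered list of output ports chosen by the source host
    ports = packet["route"]
    out_port = ports[0]                       # next hop is always the head of the list
    packet["route"] = ports[1:] + [out_port]  # rotate: keep the full route intact end to end
    send_on_port(out_port, packet)            # hypothetical transmit primitive

# Example: a route of three switch ports
# packet = {"route": [2, 7, 1], "data": b"hello"}
# forward(packet, send_on_port)  -> sent on port 2, route becomes [7, 1, 2]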

Types of Source Routing:

• Strict Source Routing: Every switch along the path must be explicitly listed.
• Loose Source Routing: Only key waypoints are specified, leaving intermediate routing
flexible.

Examples:

• IP Source Routing: An optional feature in IPv4 that allows packets to follow a pre-
determined path.
• Token Ring Networks: IBM’s Token Ring used source routing to guide packets through
the network.
• Multiprotocol Label Switching (MPLS): Uses label stacks, a concept similar to source
routing, for efficient packet forwarding.

Source routing is useful in controlled environments like data centers but is rarely used in large-
scale public networks due to scalability and security concerns.

List the importance of routing. Provide the steps and the protocols that follow the Distance Vector algorithm, with examples.
Importance of Routing

Routing is crucial in networking for the following reasons:

1. Efficiency: Ensures data packets take the most optimal path, reducing latency and
improving speed.
2. Reliability: Adapts to network changes, ensuring communication even in case of
failures.
3. Scalability: Supports network expansion by efficiently managing data traffic.
4. Load Balancing: Distributes network traffic effectively, preventing congestion.
5. Security: Some routing protocols include authentication mechanisms to prevent
unauthorized access.

Steps in the Distance Vector Algorithm

The Distance Vector Algorithm follows these steps:

1. Initial Setup:
o Each router maintains a routing table with entries for directly connected
neighbors.
o Example: Router A has two neighbors, B and C, so its initial table contains direct
paths to B and C.
2. Periodic Updates:
o Routers exchange routing tables with their immediate neighbors at regular
intervals.
o Example: Router A shares its table with B and C, and vice versa.
3. Distance Calculation:
o Each router updates its routing table by adding the cost (e.g., hop count) to each
destination received from its neighbours.
o Example: If A → B has cost 1 and B → D has cost 2, then A can reach D via B
with cost 3.
4. Routing Table Update:
o If a router discovers a shorter path, it updates its table and propagates the
information to neighbors.
o Example: If A initially had a route to D with a cost of 5 but learns from B that a
path exists with a cost of 3, it updates its table and informs other routers.
5. Convergence:
o The process continues until all routers have the most efficient path to each
destination.
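
The update in steps 3 and 4 can be sketched in Python as one Bellman-Ford-style round. This is an illustrative sketch, not RIP itself; the router names and costs follow the example in the text.

INF = float("inf")

def dv_update(my_table, neighbour_tables, link_cost):
    # my_table: destination -> best known cost from this router
    # neighbour_tables: neighbour -> the table that neighbour advertised
    # link_cost: neighbour -> cost of the direct link to that neighbour
    changed = False
    for nbr, nbr_table in neighbour_tables.items():
        for dest, dist in nbr_table.items():
            candidate = link_cost[nbr] + dist          # cost to nbr + nbr's cost to dest
            if candidate < my_table.get(dest, INF):
                my_table[dest] = candidate             # shorter path found via nbr
                changed = True
    return changed                                     # True means: advertise the new table

# Example from the text: A-B costs 1 and B reaches D at cost 2, so A reaches D at cost 3.
table_A = {"A": 0, "B": 1, "C": 1}
dv_update(table_A, {"B": {"D": 2}}, {"B": 1})
print(table_A["D"])   # 3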

Protocols Following Distance Vector Algorithm

1. Routing Information Protocol (RIP)


o Uses hop count as a metric (max hop count = 15).
o Updates routing tables every 30 seconds.
o Example: In a small network with routers A, B, and C, A updates its table based
on information from B and C every 30 seconds.
2. Interior Gateway Routing Protocol (IGRP)
o Uses additional metrics like bandwidth and delay.
o Updates routing tables every 90 seconds.
o Example: In a large enterprise network, IGRP helps routers find better paths using
multiple factors, not just hop count.

What is routing? Discuss the link state algorithm with an example.


Routing is the process of selecting the best paths in a network along which to send data packets
from one point to another.
It involves various techniques and protocols to ensure data reaches its destination efficiently
and accurately.

Routing

Routing is the process of determining the best path for data packets to travel from a source to a
destination across a network. It involves selecting efficient routes based on network conditions,
cost, and performance. Routers play a key role in forwarding packets based on routing tables.

Link State Algorithm

The Link State Algorithm is a more advanced and efficient routing technique than the Distance
Vector Algorithm. It is used primarily in large and complex networks.
Key Components of the Link State Algorithm:

1. Link: Represents a network connection between two routers.


2. State: Includes information like cost (metric), bandwidth, and delay of the link.

Steps in the Link State Algorithm:

1. Initial State:
o Each router knows the state of its directly connected links and assigns a cost
metric to each.
o Example: Router A connects to Router B with a cost of 5 and Router C with a
cost of 3.
2. Link State Advertisements (LSAs):
o Each router generates an LSA containing details of its directly connected links
and floods this information to all routers in the network.
o Example: Router A sends LSAs to B and C, and they pass it to other routers.
3. Building a Topology Map:
o Routers collect LSAs from all other routers and construct a complete network
topology.
o Example: After receiving LSAs from all routers, A knows the full network layout.
4. Shortest Path Calculation:
o Each router applies Dijkstra's Algorithm to compute the shortest path to all
destinations based on the topology map.
o Example: If Router A wants to reach Router D, it selects the shortest route based
on cumulative link costs.

Example of Link State Algorithm:

Consider a network with four routers: A, B, C, and D.

Link Cost
A→B 5
A→C 3
B→D 2
C→D 4

• Step 1: Each router generates LSAs and shares with all routers.
• Step 2: Every router builds the complete topology map.
• Step 3: Using Dijkstra’s Algorithm, each router computes the shortest paths:
o A to D: A → B → D (Cost = 5 + 2 = 7)
o A to C: Direct (Cost = 3)
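
A minimal Dijkstra sketch in Python over the four-router example above (the links are assumed to be bidirectional with the same cost in both directions):

import heapq

graph = {
    "A": {"B": 5, "C": 3},
    "B": {"A": 5, "D": 2},
    "C": {"A": 3, "D": 4},
    "D": {"B": 2, "C": 4},
}

def shortest_paths(source):
    dist = {source: 0}
    heap = [(0, source)]                      # priority queue of (cost so far, node)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                          # stale entry, a better path was already found
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

print(shortest_paths("A"))   # {'A': 0, 'B': 5, 'C': 3, 'D': 7}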

Protocols Using Link State Algorithm:

1. Open Shortest Path First (OSPF):
o Uses LSAs and Dijkstra’s Algorithm.
o Commonly used in enterprise networks.
2. Intermediate System to Intermediate System (IS-IS):
o Similar to OSPF but more scalable for service provider networks.

Advantages of Link State Algorithm:

• Faster Convergence: Quickly adapts to network changes.
• Accurate Network Map: Each router has a full view of the network.
• Scalability: Suitable for large networks.

Disadvantages:

• Complexity: More difficult to implement than Distance Vector.
• Resource Intensive: Requires more memory and CPU power.

Write a note on IPv4.

Note on IPv4 (Internet Protocol Version 4)

IPv4 is the fourth version of the Internet Protocol, used to identify devices on a network using
a unique address. It is the most widely used version of IP.

Key Features:

• 32-bit address system, written as four numbers (e.g., 192.168.1.1)


• Provides about 4.3 billion unique addresses
• Supports connectionless communication (no need to set up a connection before data
transfer)
• Uses protocols like ICMP, TCP, and UDP
• Header includes fields like:
o Version
o Source and Destination IP addresses
o Time to Live (TTL)
o Protocol
o Header Checksum

Limitations:
• Limited address space
• No built-in support for security or encryption
• No native support for mobility or auto-configuration

Write a note on IPv6.

b) With a diagram, discuss the IPv6 header format.

Note on IPv4 (Internet Protocol Version 4)

IPv4 stands for Internet Protocol version 4. It is the fourth version of the Internet Protocol
and the first widely used version to connect devices on the internet. IPv4 is a connectionless
protocol used to identify devices through an IP address.

Introduction

IPv4 (Internet Protocol Version 4) is the fourth version of the Internet Protocol (IP) and the most
widely used protocol for identifying devices on a network. It operates at the network layer of
the OSI model and provides unique addressing for devices to communicate over the internet.

Key Features of IPv4

1. 32-bit Addressing – Uses a 32-bit address, allowing approximately 4.3 billion (2³²)
unique addresses.
2. Decimal Notation – Represented in dotted decimal format, e.g., 192.168.1.1.
3. Classful Addressing – Divided into five classes (A, B, C, D, E) based on network size.
4. Subnetting and CIDR – Introduces subnetting for better IP management and CIDR for
efficient address allocation.
5. Connectionless Protocol – Does not guarantee delivery, as it relies on TCP for
reliability.
6. Broadcast Communication – Supports broadcasting to send data to all devices in a
network.
7. Fragmentation – Allows large packets to be divided into smaller fragments for
transmission.

Limitations of IPv4

• Limited Address Space – The 4.3 billion IP addresses are insufficient due to the rapid
growth of internet-connected devices.
• Security Concerns – IPv4 does not have built-in encryption or authentication.
• Lack of QoS Support – IPv4 does not efficiently support real-time services like VoIP.
• Address Exhaustion – Due to the growing demand, IPv4 addresses are nearly
exhausted.
Transition to IPv6

To overcome IPv4's limitations, IPv6 was introduced, featuring:

• 128-bit addressing (supports 2¹²⁸ addresses).


• Improved security (IPSec is mandatory).
• Better routing efficiency (no need for NAT).

Conclusion

IPv4 remains the backbone of the internet, but due to address exhaustion and security concerns,
the transition to IPv6 is gradually taking place.

IPv6 has a simpler and more efficient header format compared to IPv4. The fixed header is 40
bytes long and contains essential information required for routing and delivery.

Fields in the IPv6 Header:


1. Version (4 bits)
o Indicates the version of IP protocol.
o Always set to 6 for IPv6.
2. Traffic Class (8 bits)
o Similar to IPv4’s Type of Service (ToS).
o Used for priority handling of packets, like QoS (Quality of Service).
3. Flow Label (20 bits)
o Identifies packet flows that require special handling, such as real-time audio or
video.
o Helps routers recognize packets from the same flow.
4. Payload Length (16 bits)
o Specifies the length of the payload (i.e., the data + any extension headers).
o Does not include the fixed 40-byte IPv6 header.
5. Next Header (8 bits)
o Indicates the type of header immediately following this one.
o Can point to TCP, UDP, ICMPv6, or an extension header.
6. Hop Limit (8 bits)
o Replaces the “Time to Live (TTL)” field in IPv4.
o Tells routers how many hops the packet can take before being discarded.
7. Source Address (128 bits)
o The IPv6 address of the sender.
o A full 128-bit address to support a vastly larger address space than IPv4.
8. Destination Address (128 bits)
o The IPv6 address of the receiver.
9. Extension Headers / Data (variable)
o This is where the extension headers and actual data (like TCP or UDP) go.
o Extension headers are used for additional features such as routing, fragmentation,
etc.
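
To make the field sizes concrete, the following Python sketch packs the 40-byte fixed header from illustrative values (the addresses use the documentation prefix, and Next Header 17 assumes a UDP payload):

import socket
import struct

version, traffic_class, flow_label = 6, 0, 0
payload_len, next_header, hop_limit = 0, 17, 64      # 17 = UDP
src = socket.inet_pton(socket.AF_INET6, "2001:db8::1")
dst = socket.inet_pton(socket.AF_INET6, "2001:db8::2")

# First 32-bit word: 4-bit version, 8-bit traffic class, 20-bit flow label.
first_word = (version << 28) | (traffic_class << 20) | flow_label
header = struct.pack("!IHBB", first_word, payload_len, next_header, hop_limit) + src + dst
print(len(header))   # 40 bytes: the fixed IPv6 header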

Advantages of IPv6 Header Format:

• Simplified header speeds up processing by routers.


• No checksum (relies on upper-layer checks).
• Supports extension headers for added features without bloating the basic header.
• Larger address space for better scalability.

Compare classful and classless addresses.


Comparison of Classful and Classless Addressing
Definition
• Classful: Uses predefined IP address classes (A, B, C, D, E).
• Classless (CIDR): Eliminates fixed classes, allowing flexible address allocation.

Subnet Mask
• Classful: Default subnet masks (A: 255.0.0.0, B: 255.255.0.0, C: 255.255.255.0).
• Classless (CIDR): Uses Variable-Length Subnet Masks (VLSM) for efficient IP usage.

Address Utilization
• Classful: Often results in wastage of IP addresses.
• Classless (CIDR): Allows efficient allocation by assigning only the required number of addresses.

Routing Table Size
• Classful: Larger, due to multiple entries for networks.
• Classless (CIDR): Smaller, as CIDR allows route aggregation.

Scalability
• Classful: Limited due to fixed class sizes (e.g., Class B always has 65,536 addresses).
• Classless (CIDR): Highly scalable because custom prefix lengths can be used.

IP Assignment Example
• Classful: Class A: 10.0.0.0/8 (16 million IPs); Class B: 172.16.0.0/16 (65,536 IPs); Class C: 192.168.1.0/24 (256 IPs).
• Classless (CIDR): 192.168.1.0/27 (32 IPs); can allocate variable-sized blocks (e.g., /28 for 16 IPs, /30 for 4 IPs).

Broadcast and Subnetting
• Classful: Uses default subnet masks, leading to more broadcast traffic.
• Classless (CIDR): Supports custom subnet masks, reducing unnecessary broadcasts.

Used In
• Classful: Older networks, pre-CIDR era.
• Classless (CIDR): Modern internet addressing (post-1993, with CIDR adoption).

Key Takeaways

• Classful addressing is rigid, leading to inefficient address usage.


• Classless addressing (CIDR) allows flexible subnetting, reducing waste and routing
table size.
• CIDR is the standard today, making classful addressing largely obsolete.
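
Python's standard ipaddress module can be used to check the CIDR figures quoted above, for example:

import ipaddress

# Classless (CIDR) block from the comparison: a /27 gives 32 addresses.
net = ipaddress.ip_network("192.168.1.0/27")
print(net.num_addresses)   # 32
print(net.netmask)         # 255.255.255.224

# A classful Class C block for comparison: fixed /24, 256 addresses.
print(ipaddress.ip_network("192.168.1.0/24").num_addresses)   # 256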

Explain an algorithm which addresses the problem of loops in bridges.
Discuss the steps in the spanning tree algorithm with the help of examples.

To address the problem of loops among bridges in an extended LAN, the Spanning Tree
Algorithm is used. The following explains how it prevents frames from looping indefinitely:
Spanning Tree Algorithm – Explained

Problem:

When multiple bridges are used to connect LAN segments, loops can be formed unintentionally
or intentionally (for redundancy). In such cases, broadcast and unknown-destination frames
can circulate endlessly, causing network congestion or broadcast storms.

Solution: Spanning Tree Algorithm (STA)

The Spanning Tree Algorithm, developed by Radia Perlman, ensures that the network of
bridges is loop-free by logically organizing it into a spanning tree — a subset of the network
graph that:

• Connects all the LANs.


• Contains no cycles (loops).

This is implemented in practice via the Spanning Tree Protocol (STP), part of the IEEE
802.1D standard.

How the Algorithm Works

1. Select a Root Bridge


o Each bridge has a unique ID.
o All bridges communicate using special messages (called Bridge Protocol Data
Units – BPDUs) to elect the bridge with the smallest ID as the root bridge.
2. Compute the Shortest Path to the Root
o Each bridge determines the shortest path (in terms of cost) to reach the root
bridge.
o This path is based on the number of hops or link cost.
3. Select Root Ports
o Each bridge selects one port (called the root port) that gives it the best (shortest)
path to the root bridge.
o Only that port will be used to forward frames toward the root.
4. Select Designated Ports
o On each LAN segment, the bridge closest to the root is selected as the
designated bridge.
o The port it uses to reach that LAN is the designated port and is used to forward
frames away from the root onto that LAN.
5. Disable Other Ports (Blocking State)
o Any ports that are neither root ports nor designated ports are put into a
blocking state.
o These ports do not forward frames and effectively break loops in the topology.

Effect of the Algorithm

By keeping only one active path between any two LAN segments, the spanning tree eliminates
loops, ensuring:

• No frame is forwarded infinitely.


• The network remains connected.
• Redundant paths can be reactivated dynamically if active paths fail (the spanning tree
reconfigures automatically).

Example:

Suppose bridges B1, B4, and B6 form a loop:

• Without STP, a broadcast from one host could endlessly circulate between B1, B4, and
B6.
• With STP, only one path among B1, B4, and B6 will be selected to be part of the
spanning tree; the others will be disabled, breaking the loop.

Summary:

The Spanning Tree Algorithm is a distributed protocol that enables bridges to organize
themselves into a loop-free logical topology by electing a root bridge, computing shortest paths,
and disabling unnecessary ports.
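
A toy centralised sketch of the idea in Python (real STP is a distributed BPDU exchange; the three-bridge topology and IDs are assumed for illustration): elect the lowest-ID bridge as root, keep only each bridge's shortest path toward it (its root port), and treat every remaining link as blocked.

from collections import deque

links = {"B1": ["B4", "B6"], "B4": ["B1", "B6"], "B6": ["B1", "B4"]}

root = min(links)                  # lowest bridge ID wins the election
parent, seen = {}, {root}
queue = deque([root])
while queue:                       # breadth-first search = fewest hops to the root
    bridge = queue.popleft()
    for nbr in links[bridge]:
        if nbr not in seen:
            seen.add(nbr)
            parent[nbr] = bridge   # the link toward the root (root port)
            queue.append(nbr)

print(root, parent)   # B1 {'B4': 'B1', 'B6': 'B1'} -> the B4-B6 link is not used (blocked)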

Discuss routing on mobile IP.


Mobile IP is a communication protocol that allows mobile devices (like smartphones, laptops)
to move between networks while maintaining the same IP address.

Why Mobile IP is Needed:

Normally, IP addresses are tied to a fixed location. But mobile users move across different
networks (like from Wi-Fi to mobile data). Without Mobile IP, communication would break
when the device’s network changes.

How Routing Works in Mobile IP:

1. When the Mobile Node is at Home:


o It communicates normally using its home IP address.
2. When the Mobile Node Moves to a Foreign Network:
o It gets a Care-of Address (CoA) from the Foreign Agent.
o The Mobile Node registers its new CoA with the Home Agent.
3. Communication Flow:
o Any data sent to the mobile node goes first to the Home Agent.
o The Home Agent forwards it to the Care-of Address using tunneling
(encapsulation).
o The Foreign Agent delivers the packet to the Mobile Node.
4. Reverse Path (from Mobile Node to Others):
o The mobile node can send packets directly to other hosts.

Write a note on: DHCP


DHCP stands for Dynamic Host Configuration Protocol. It is a network management protocol used
to automatically assign IP addresses and other important network configuration settings to devices on
a network.

Discuss Global Addresses.


Global addresses are IP addresses that are unique and accessible across the entire internet. They are
used to identify devices on a global scale, allowing communication between different networks around
the world.

Describe how datagram forwards in IP.


How Datagram Forwarding Works in IP

In IP networks, datagram forwarding means sending data packets (called datagrams) from one
device to another through routers until it reaches its destination.

Step-by-Step Process:

1. Source Sends Datagram:


o The sender creates a datagram with the destination IP address.
o The datagram is handed over to the local router (also called the default gateway).
2. Router Checks Destination IP:
o The router receives the datagram and examines its destination IP address.
3. Routing Table Lookup:
o The router checks its routing table, which contains paths to different IP
networks.
o Based on the destination address, it picks the best next hop (the next router or
final device).
4. Forwarding to Next Hop:
o The router sends the datagram to the next hop.
o If the destination is directly connected, it sends it directly there.
5. Repeat Until Destination:
o Each router along the way repeats this process.
o Finally, the datagram reaches the device with the matching IP address.

Example:

• You send a message to a friend whose IP is 192.168.10.5.


• Your PC sends the datagram to your router.
• Your router looks up its table and forwards it to another router.
• This continues until the datagram reaches 192.168.10.5.
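
The routing-table lookup in step 3 amounts to a longest-prefix match. A small Python sketch of that lookup follows; the prefixes and next-hop addresses are made up for illustration.

import ipaddress

routes = {                                  # destination prefix -> next hop
    "192.168.10.0/24": "10.0.0.2",
    "192.168.0.0/16":  "10.0.0.3",
    "0.0.0.0/0":       "10.0.0.1",          # default route
}

def next_hop(dest_ip):
    addr = ipaddress.ip_address(dest_ip)
    best = None
    for prefix, hop in routes.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, hop)               # keep the most specific (longest) match
    return best[1]

print(next_hop("192.168.10.5"))             # 10.0.0.2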

Write a note on: ARP.

Write a note on: ICMP.
Write a note on: Virtual networks.
Virtual Networks

A Virtual Network is a network created using software, allowing devices to communicate with
each other as if they are physically connected, even though they may be located in different
places. It operates on top of physical networks and helps in managing, segmenting, and securing
network traffic.

Key Features:

• Logical, not physical — exists in software.


• Devices appear to be on the same local network.
• Can be easily created, configured, and managed.

Types of Virtual Networks:

1. VPN (Virtual Private Network):


Creates a secure tunnel over the internet, allowing users to access private networks
remotely.
2. VLAN (Virtual Local Area Network):
Segments a physical network into smaller logical groups to improve performance and
security.

Advantages:
• Cost-effective (no need for physical wiring).
• Flexible and scalable.
• Enhances security by isolating traffic.
• Ideal for remote access and cloud services.

Example:

When employees work from home, they use a VPN to connect to the office network securely.
Though they are not in the office physically, the virtual network makes it appear like they are.

List challenges for mobile networking.

Challenges for Mobile Networking

1. Limited Bandwidth:
Wireless networks often have lower data transfer rates compared to wired networks.
2. Signal Interference:
Physical obstacles, weather, or other electronic devices can cause signal loss or disruption.
3. Mobility Management:
Ensuring seamless handover when users move between different network areas is complex.
4. Battery Constraints:
Mobile devices rely on battery power, so energy-efficient communication is important.
5. Security Issues:
Wireless communication is more prone to attacks like eavesdropping and data theft.
6. Network Congestion:
High user traffic can lead to slow connections or dropped data.
7. Variable Connectivity:
Signal strength and network availability can change frequently, affecting performance.
8. Latency:
Time delay in sending and receiving data may increase due to weak or busy networks.
9. Limited Hardware Resources:
Mobile devices have less processing power and storage than desktops or servers.
10. Data Cost:
Mobile data usage can be expensive, especially in regions with limited or costly internet
access.
Module 4:
2 marks

List issues in resource allocation.

Give a glimpse of slow start.

Write the characteristics of FIFO.

Write the characteristics of Fair Queuing.

Draw the header formats of UDP and TCP.

Draw the UDP message queue.

Discuss how TCP manages a byte stream.

Discuss the three-way handshake.

List TCP congestion avoidance mechanisms.

Compare congestion avoidance with congestion control mechanisms.

What is congestion?

List issues that cause congestion in wired networks.

List issues that cause congestion in wireless networks.

10 marks
Discuss the state transition diagram in a reliable byte stream.
Discuss how connection establishment and termination are done in TCP to transmit data (three-way handshake).

State Transition Diagram in a Reliable Byte Stream (TCP)

A state transition diagram for a reliable byte stream protocol like TCP illustrates how a TCP
connection progresses through various states during its lifecycle. It helps in understanding how
TCP establishes, maintains, and terminates connections reliably.

1. Connection Establishment (Three-Way Handshake)

• CLOSED → LISTEN: A server process enters the LISTEN state, waiting for
connection requests.
• CLOSED → SYN_SENT: A client initiates a connection by sending a SYN segment
(Active Open).
• LISTEN → SYN_RCVD: The server receives SYN, responds with SYN + ACK.
• SYN_SENT → ESTABLISHED: Client receives SYN + ACK, sends ACK, and the
connection is established.
• SYN_RCVD → ESTABLISHED: Server receives the final ACK, completing the
handshake.

2. Data Transfer State

• ESTABLISHED: The connection is active, and both parties can send/receive data.

FIN (Finish) is a flag in the TCP header used to terminate a connection between two
devices.

3. Connection Termination (Four-Way Handshake)

• ESTABLISHED → FIN_WAIT_1: A party initiates connection termination by sending


FIN.
• ESTABLISHED → CLOSE_WAIT: The other party receives FIN, acknowledges it
(ACK), and moves to CLOSE_WAIT to handle any remaining tasks.
• FIN_WAIT_1 → FIN_WAIT_2: After receiving ACK for the first FIN, the sender
moves to FIN_WAIT_2.
• CLOSE_WAIT → LAST_ACK: Once the remaining tasks are completed, the receiver
sends a final FIN.
• FIN_WAIT_2 → TIME_WAIT: After receiving the FIN, an ACK is sent and TCP waits
before fully closing to ensure the other side also closes.
• TIME_WAIT → CLOSED: The connection is completely closed after a timeout (to
handle lost ACKs).
• LAST_ACK → CLOSED: The server, after sending FIN, waits for an ACK. Once
received, it moves to CLOSED state.

Key States in TCP Connection

1. CLOSED
o The initial state where no connection exists.
o A connection is created when an application initiates a connection request.
2. LISTEN
o The server waits for an incoming connection request from a client.
o The server enters this state after executing socket() and listen().
3. SYN-SENT
o The client sends a SYN (synchronize) segment to initiate a connection.
o The client moves to this state after executing connect().
4. SYN-RECEIVED
o The server receives a SYN request and responds with SYN + ACK.
o The server transitions here after receiving a connection request.
5. ESTABLISHED
o Both client and server successfully exchange SYN and ACK segments.
o This state allows data transmission between both ends.
6. FIN-WAIT-1
o The connection is being closed from one end (active close).
o The client sends a FIN (Finish) segment and waits for an acknowledgment.
7. FIN-WAIT-2
o The other party acknowledges the FIN and prepares to close its end.
o The client moves here after receiving an ACK for its FIN.
8. CLOSING
o Both sides send a FIN simultaneously, leading to a transition to this state.
9. TIME-WAIT
o The sender of the final ACK waits for a period (typically 2 × MSL) to ensure the
other end received it.
o This prevents old duplicate packets from interfering with a new connection.
10. CLOSE-WAIT

• The receiving side acknowledges the FIN but still has data to send.
• The application will close the connection when all pending data is sent.

11. LAST-ACK

• The receiving side sends its own FIN after sending all remaining data.
• The connection is finally closed when the other end acknowledges it.
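
A compact way to read the diagram is as a table of (state, event) -> next state. The following Python sketch walks the client side through an open, data transfer, and active close; the event names are informal labels for this illustration, not protocol fields.

TRANSITIONS = {
    ("CLOSED",      "active open / send SYN"):  "SYN_SENT",
    ("SYN_SENT",    "recv SYN+ACK / send ACK"): "ESTABLISHED",
    ("ESTABLISHED", "close / send FIN"):        "FIN_WAIT_1",
    ("FIN_WAIT_1",  "recv ACK"):                "FIN_WAIT_2",
    ("FIN_WAIT_2",  "recv FIN / send ACK"):     "TIME_WAIT",
    ("TIME_WAIT",   "2*MSL timeout"):           "CLOSED",
}

state = "CLOSED"
for event in ["active open / send SYN", "recv SYN+ACK / send ACK",
              "close / send FIN", "recv ACK", "recv FIN / send ACK", "2*MSL timeout"]:
    state = TRANSITIONS[(state, event)]
    print(f"{event:28s} -> {state}")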

Discuss the simple demultiplexer (UDP).

Simple Demultiplexer (UDP)

A simple demultiplexer is a system that takes incoming data and distributes it to the correct
destination without additional processing or modifications.

In networking, UDP (User Datagram Protocol) acts as a simple demultiplexer because:

1. It only uses port numbers to determine where to send data.


2. It does not establish a connection before sending data.
3. It does not check for errors or retransmit lost packets—it just delivers data to the
application.

The User Datagram Protocol (UDP) is a simple, connectionless transport protocol that extends
host-to-host delivery into process-to-process communication. It provides minimal overhead and
direct data transmission without establishing a connection.
Key Features of UDP

1. Connectionless Protocol
o Unlike TCP, UDP does not establish a connection before sending data.
o It simply forwards packets to the destination without ensuring reliability or order.
2. Minimal Overhead
o UDP adds only an 8-byte header, containing:
▪ Source Port (16 bits)
▪ Destination Port (16 bits)
▪ Length (16 bits)
▪ Checksum (16 bits)
o Since there is no need for handshaking or connection management, it is faster
than TCP.
3. Process-to-Process Communication (Demultiplexing)
o UDP enables multiple processes on a single host to share the network.
o It uses port numbers to identify sending and receiving processes.
o The combination of port and host address acts as a demultiplexing key.
4. No Error Handling or Flow Control
o UDP does not guarantee packet delivery, order, or error correction.
o If a packet is lost or arrives out of order, UDP does nothing to recover it.
o It relies on the application layer to handle errors if needed.
5. Independent Packets (Datagrams)
o Each UDP packet is independent and carries its own destination information.
o The receiver cannot assume any relationship between received packets.
6. Checksum for Data Integrity
o UDP includes a checksum to verify the correctness of the data.
o The checksum covers the UDP header, message body, and a pseudoheader (from
the IP header).
o If the checksum is incorrect, the packet is discarded.

How UDP Works

1. A client process sends a message to a server process using a destination port.


2. The server receives the message, extracts the source port, and can send a reply.
3. If the client doesn’t know the server’s port, it contacts a well-known port (e.g., DNS at
port 53).
4. Some systems use a port mapper service to dynamically assign ports for services.
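
A minimal Python echo sketch of this demultiplexing: the operating system hands each arriving datagram to whichever socket is bound to the destination port, and the reply reuses the source port carried in the datagram. Port 9999 is an arbitrary choice for illustration.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))                     # this process now receives datagrams for port 9999

data, (src_ip, src_port) = sock.recvfrom(2048)   # no connection, no handshake
print(f"from {src_ip}:{src_port}: {data!r}")
sock.sendto(b"ack", (src_ip, src_port))          # reply to the sender's source port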

Use Cases of UDP

• Real-time applications (e.g., video streaming, online gaming, VoIP) where speed is
more important than reliability.
• DNS (Domain Name System) queries, where fast lookup is needed.
• Broadcast and multicast applications, such as network discovery protocols.
Conclusion

UDP acts as a simple demultiplexer, allowing processes to communicate using port numbers
with minimal overhead. It is fast but unreliable, making it suitable for applications that
prioritize speed over accuracy.

Justify why UDP is a simple demultiplexer.

Why is UDP a Simple Demultiplexer?

UDP (User Datagram Protocol) is called a simple demultiplexer because it delivers data to the
correct application using only port numbers without additional processing. Here’s why:

1. Uses Only Port Numbers for Delivery

• UDP identifies applications solely based on port numbers (Source Port & Destination
Port).
• When data arrives, UDP checks the destination port number and forwards it to the
appropriate application.
• Unlike TCP, it does not establish a connection or maintain session states.

2. No Connection Establishment

• UDP does not perform a handshake (unlike TCP's three-way handshake).


• It sends packets immediately, making it faster and more efficient.
• This makes UDP a direct "pass-through" mechanism for data.

3. No Error Handling or Retransmission

• UDP does not check for lost, duplicate, or out-of-order packets.


• It simply delivers packets as they arrive, making it lightweight and low-latency.

4. Independent Datagram Processing

• Each UDP packet (datagram) is treated separately.


• The receiver cannot assume any relationship between packets.
• There is no sequencing or acknowledgment, unlike TCP.

5. Minimal Overhead

• UDP adds only an 8-byte header, keeping the protocol simple.


• No need for extra control mechanisms like flow control or congestion control.

Conclusion

UDP is a simple demultiplexer because it directly forwards packets to the correct


application using only port numbers. It does not track state, perform error correction, or
ensure delivery order, making it a fast, lightweight, and efficient transport protocol.

Discuss the reliable byte stream (TCP).

Reliable Byte Stream (TCP)

Introduction

The Transmission Control Protocol (TCP) is a connection-oriented transport protocol that


ensures reliable, in-order, and error-free delivery of data. Unlike User Datagram Protocol
(UDP), which offers an unreliable, connectionless service, TCP provides stream-based
communication, meaning it delivers data as a continuous flow of bytes rather than discrete
packets.

Key Features of TCP

1. Reliable and Ordered Delivery


o Ensures that all transmitted data reaches the destination without errors, in the
correct order, and without duplication.
o Uses sequence numbers to track and reorder packets if they arrive out of order.
2. Connection-Oriented Communication
o Requires a three-way handshake for connection establishment before data
exchange begins.
o Ensures a proper connection termination process to free up resources after
communication ends.
3. Full-Duplex Communication
o Allows simultaneous two-way data transmission (bi-directional byte stream).
o Each TCP connection maintains two separate streams, one for each direction.
4. Flow Control
o Ensures that the sender does not overwhelm the receiver with too much data at
once.
o Implements a sliding window mechanism, where the sender adjusts its
transmission rate based on the receiver’s available buffer space.
5. Congestion Control
o Regulates the rate of data transmission to prevent network congestion.
o Uses mechanisms like slow start, congestion avoidance, fast retransmit, and
fast recovery to manage network load.
6. Error Detection and Retransmission
o Uses checksums to detect corrupted packets.
o Implements automatic retransmission of lost or corrupted packets to ensure data
integrity.
7. Multiplexing and Demultiplexing
o Uses port numbers to allow multiple applications to communicate
simultaneously over the same network.
o Ensures that data is delivered to the correct process on the receiving host.
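
A minimal Python client illustrating the connection-oriented byte stream; the host name and request are illustrative only. connect() performs the three-way handshake, and recv() returns in-order bytes with no message boundaries.

import socket

with socket.create_connection(("example.com", 80)) as s:   # three-way handshake happens here
    s.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    reply = b""
    while True:
        chunk = s.recv(4096)                # ordered bytes from the stream, not discrete packets
        if not chunk:
            break                           # peer closed its direction (FIN received)
        reply += chunk
print(reply[:60])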

Explain FIFO and Fair Queuing disciplines with real-world applications.
Discuss queuing disciplines. (pg. 20)
Write a note on: FIFO.

A "queuing discipline" is the set of rules that determines the order in which packets are
selected for transmission from a queue, essentially deciding which packet gets served next
when multiple packets are waiting to be sent. Common queuing disciplines include First Come
First Served (FCFS), Priority Queuing (PQ), Fair Queuing (FQ), and Weighted Fair Queuing (WFQ).

Provide a scenario for congestion in a network and provide an avoidance mechanism.

Scenario: Network Congestion in an Online Exam Portal

Scenario Description

Imagine a university conducting an online exam where thousands of students are simultaneously
submitting their answers. The exam portal is hosted on a cloud server, but due to
the high traffic load, congestion occurs.

What Happens?

• The server becomes overwhelmed, leading to slow responses or even timeouts.


• Some students experience failed submissions due to packet loss.
• The network delay causes frustration as students struggle to confirm whether their
answers were successfully submitted.

Congestion Avoidance Mechanism: Random Early Detection (RED)

To prevent congestion before it worsens, the server can implement Random Early Detection
(RED), a congestion avoidance mechanism.

How RED Works in This Scenario?

1. Monitor Traffic Load:


o The network continuously checks buffer occupancy (how full the router queue
is).
2. Early Packet Dropping:
o If the queue is lightly loaded, all packets are accepted.
o If the queue starts filling up, packets are randomly dropped before reaching full
capacity.
3. Effect on Users:
o When a few packets are dropped early, students' devices (TCP connections)
reduce their sending rate proactively.
o This prevents sudden congestion buildup, ensuring smoother traffic flow.
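
The drop decision can be sketched as below (a simplified Python view; the thresholds and maximum drop probability are assumed values, and real RED also tracks a weighted average queue length and the count of packets since the last drop):

import random

MIN_TH, MAX_TH, MAX_P = 20, 60, 0.1          # assumed thresholds (packets) and max drop probability

def red_accept(avg_queue_len):
    if avg_queue_len < MIN_TH:
        return True                          # lightly loaded: accept everything
    if avg_queue_len >= MAX_TH:
        return False                         # nearly full: drop everything
    # In between: drop probability grows linearly with the average queue length.
    p = MAX_P * (avg_queue_len - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() > p

print(red_accept(10), red_accept(40), red_accept(70))   # True, usually True, False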

Alternative Avoidance Mechanisms

1. Traffic Shaping (Token Bucket)


o The university can limit the number of submissions per second using a token
bucket algorithm, preventing a traffic surge.
2. Load Balancing
o Distribute traffic among multiple servers, preventing overload on a single
system.
3. Quality of Service (QoS) Policies
o Prioritize exam submissions over less critical activities (e.g., background app
updates).

Conclusion

Network congestion during an online exam is a real-world problem that can be mitigated using
RED, traffic shaping, and load balancing. These techniques ensure fair bandwidth
distribution, prevent server crashes, and provide a smooth user experience.

Write a note on: Fair Queuing.

Fair Queuing (FQ)

Fair Queuing (FQ) is a packet scheduling algorithm that ensures fair bandwidth distribution
among multiple data flows in a network. It prevents any single flow from monopolizing the
network, ensuring equal access for all.

Working of Fair Queuing


1. Separate Queues for Each Flow
o Each data flow (e.g., a video stream, a file download) gets a separate queue in
the router.
2. Round-Robin Scheduling
o Packets are transmitted one from each queue in turn, ensuring fairness.
3. Bit-by-Bit Fairness
o Instead of sending one full packet at a time, FQ simulates sending one bit from
each flow to distribute bandwidth evenly.
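
A simplified round-robin sketch of the idea in Python (true fair queuing computes bit-by-bit finish times; the flows and packet names here are assumptions for illustration):

from collections import deque

queues = {                                    # one queue per flow
    "video":    deque(["v1", "v2", "v3"]),
    "download": deque(["d1", "d2"]),
    "voip":     deque(["p1"]),
}

def transmit_all():
    while any(queues.values()):
        for flow, q in queues.items():        # visit each flow in turn
            if q:
                print("send", flow, q.popleft())

transmit_all()   # v1 d1 p1 v2 d2 v3 -- no single flow monopolises the link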

Advantages of Fair Queuing

• Prevents Bandwidth Monopolization → No single flow can dominate the network.


• Fair Resource Allocation → Ensures equal bandwidth distribution.
• Better Performance for All Users → Low-latency applications (e.g., VoIP, gaming) are
not delayed by large data transfers.

Disadvantages of Fair Queuing

• Higher Complexity → Requires multiple queues and scheduling calculations.


• Not Suitable for High-Speed Routers → As the number of flows increases, managing
multiple queues becomes challenging.

Comparison with Other Queuing Techniques

Queuing Type | Fairness | Priority Handling         | Complexity | Use Case
FIFO         | No       | No                        | Low        | Simple networks
PQ           | No       | Yes (high-priority first) | Moderate   | Real-time applications
FQ           | Yes      | No                        | High       | Balanced traffic handling

Explain FIFO and Fair Queuing Disciplines with real world application.

Queuing Disciplines: FIFO & Fair Queuing

Queuing disciplines determine how packets are processed when multiple flows compete for
network resources. Two important queuing disciplines are FIFO (First In, First Out) and Fair
Queuing (FQ).

1. FIFO (First In, First Out) Queuing


Concept

• FIFO is the simplest queuing discipline where packets are processed in the order they
arrive (like a queue in a grocery store).
• There is no priority or differentiation among packets.
• Once the queue is full, new incoming packets are dropped (tail drop).

Real-World Application: FIFO

Highway Toll Booth

• Vehicles arrive at a toll booth in order and are served one by one.
• The first car in line gets processed first, without considering urgency.
• If traffic is high, later cars have to wait (just like network congestion).

Advantages

✔ Simple and easy to implement
✔ Low processing overhead

Disadvantages

✘ No fairness (one large flow can dominate bandwidth)
✘ No priority for urgent packets (e.g., emergency calls vs. regular web browsing)

2. Fair Queuing (FQ) Discipline


Concept

• Unlike FIFO, Fair Queuing creates separate queues for each flow.
• It transmits packets in a round-robin fashion (one from each flow at a time).
• Ensures equal bandwidth distribution among active flows.

Real-World Application: Fair Queuing

Microphone Sharing in a Group Discussion

• Each participant (data flow) gets an equal chance to speak (send packets).
• No single speaker (flow) dominates the conversation.
• The discussion moves forward smoothly, just like FQ prevents one data stream from
monopolizing the bandwidth.

Advantages

✔ Prevents starvation of small flows
✔ Ensures fair bandwidth allocation
✔ Helps maintain low-latency performance for applications like video calls and VoIP

Disadvantages

✘ Higher complexity (requires multiple queues)
✘ Harder to scale in high-speed networks

Comparison Table
Feature            | FIFO                     | Fair Queuing (FQ)
Processing Order   | First Come, First Served | Equal sharing among flows
Priority Handling  | No priority              | Ensures fairness
Complexity         | Simple                   | Complex (needs multiple queues)
Real-World Example | Toll booth queue         | Microphone sharing
Suitability        | Low-traffic networks     | Balanced network usage

Write a note on: AIMD, slow start, fast retransmit, and fast recovery / discuss TCP congestion control mechanisms.

TCP Congestion Control Mechanisms

Transmission Control Protocol (TCP) uses several congestion control mechanisms to ensure
efficient data transmission while preventing network congestion. The key mechanisms include
Additive Increase, Multiplicative Decrease (AIMD), Slow Start, Fast Retransmit, and Fast
Recovery. These mechanisms work together to optimize network performance and ensure fair
bandwidth distribution.

1. Additive Increase, Multiplicative Decrease (AIMD)

AIMD is a congestion control algorithm used by TCP to adjust the congestion window (cwnd)
dynamically.

Working Mechanism:

• Additive Increase (AI):


o When no congestion is detected, TCP gradually increases cwnd.
o The increase is linear: cwnd = cwnd + 1 per Round Trip Time (RTT).
o This allows efficient bandwidth utilization without sudden congestion.
• Multiplicative Decrease (MD):
o When congestion occurs (e.g., packet loss is detected), TCP reduces cwnd
exponentially.
o The decrease follows: cwnd = cwnd × 0.5.
o This ensures rapid response to congestion, preventing excessive data flow.

Need for AIMD:

• Ensures efficient bandwidth usage while preventing congestion.


• Prevents a single sender from dominating the bandwidth.
• Achieves fairness among multiple users.

Example:

• Initially, cwnd = 10 MSS (Maximum Segment Size).


• TCP increases cwnd by 1 MSS per RTT.
• If congestion occurs (packet loss detected), cwnd is halved.
• This cycle continues, allowing steady bandwidth usage.

2. Slow Start

Slow Start is a congestion control mechanism used at the beginning of a connection or after a
timeout to avoid overloading the network.

How It Works:

1. Initial State:
o TCP starts with a small cwnd (typically 1 MSS).
o The sender gradually increases cwnd exponentially.
2. Exponential Growth:
o For every acknowledged packet, cwnd increases by 1 MSS.
o This results in growth: 1, 2, 4, 8, 16… packets per RTT.
3. Threshold Check:
o When cwnd reaches a threshold (ssthresh), TCP switches to AIMD.
4. If Congestion Occurs:
o TCP reduces cwnd to cwnd / 2 and enters congestion avoidance mode.

Example (Video Streaming Startup):

• When you press "Play" on YouTube/Netflix, the video starts with low data flow.
• TCP doubles the transmission rate each RTT.
• Once ssthresh is reached, TCP transitions to AIMD to stabilize the connection.

3. Fast Retransmit

Fast Retransmit is a mechanism that detects lost packets before a timeout occurs and retransmits
them quickly.

How It Works:

1. The sender transmits packets (e.g., P1, P2, P3, P4, P5).
2. If P3 is lost, the receiver acknowledges P2 multiple times (ACK P2, ACK P2, ACK P2).
3. When the sender receives three duplicate ACKs, it assumes P3 is lost.
4. TCP immediately retransmits P3, avoiding long timeouts.

Example:

• Sender: P1 → P2 → P3 (Lost) → P4 → P5
• Receiver: ACK P1 → ACK P2 → ACK P2 → ACK P2
• Sender detects loss and quickly retransmits P3.

4. Fast Recovery

Fast Recovery prevents unnecessary Slow Start after a packet loss and ensures smooth traffic
flow.

How It Works:

1. When Fast Retransmit resends the lost packet, TCP reduces cwnd instead of resetting it.
2. Instead of restarting Slow Start (cwnd = 1 MSS), TCP cuts cwnd in half.
3. The growth becomes linear (AIMD mode) instead of exponential.

Example:

• Without Fast Recovery: TCP resets cwnd to 1 MSS, causing a slow restart.
• With Fast Recovery: TCP reduces cwnd by half and resumes AIMD.
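
The interplay of slow start, AIMD, and fast recovery can be traced with a toy simulation (Python; the loss point, initial ssthresh, and RTT count are assumed values, and cwnd is counted in segments):

def simulate(rtts, loss_at, ssthresh=16):
    cwnd = 1
    for rtt in range(rtts):
        print(f"RTT {rtt}: cwnd = {cwnd}")
        if rtt == loss_at:
            ssthresh = max(cwnd // 2, 1)      # multiplicative decrease on loss
            cwnd = ssthresh                   # fast recovery: resume from half, not from 1
        elif cwnd < ssthresh:
            cwnd *= 2                         # slow start: exponential growth per RTT
        else:
            cwnd += 1                         # congestion avoidance: additive increase

simulate(rtts=10, loss_at=6)   # cwnd trace: 1 2 4 8 16 17 18 | loss | 9 10 11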

Real-World Applications (Streaming, Video Calls, Gaming)

• Streaming Services (YouTube/Netflix):


o Slow Start helps ramp up the video bitrate smoothly.
o AIMD adjusts quality dynamically (e.g., 1080p → 720p → 480p).
o Fast Retransmit/Recovery prevents unnecessary buffering.
• Video Calls (Zoom, Microsoft Teams):
o AIMD prevents call drops by adjusting data rates.
o Fast Retransmit ensures lost audio packets are resent quickly.
• Online Gaming:
o Fast Recovery minimizes lag during packet loss.
o AIMD ensures fair bandwidth usage for multiple players.

Conclusion

TCP congestion control mechanisms like AIMD, Slow Start, Fast Retransmit, and Fast
Recovery ensure efficient data transmission, prevent congestion, and optimize network
performance. These mechanisms are critical for maintaining smooth streaming, video calls,
and online applications while ensuring fair bandwidth distribution across users.

Write a note on: DECbit, Random Early Detection, and source-based congestion avoidance (TCP Vegas) / discuss TCP congestion avoidance mechanisms.
