Question Bank 3 & 4
2 marks
10 marks
Characteristics of Datagram Switching (Connectionless Switching)
1. No Setup Required: Hosts can send packets anytime without prior connection setup.
Packets are forwarded immediately based on the forwarding table.
2. Independent Routing: Each packet is forwarded independently, meaning two packets
from the same source to the same destination may take different paths.
3. No Delivery Guarantee: The sender has no assurance that the packet will be delivered
or that the destination is active, as no acknowledgment is built-in.
4. Highly Scalable: Datagram networks do not require a dedicated path, making them
suitable for large-scale and dynamic environments like the internet.
5. Failure Tolerance: If a switch or link fails, packets can take alternate routes, ensuring
continued communication without major disruptions.
Characteristics of Virtual Circuit Switching (Connection-Oriented Switching)
1. Predefined Path: A logical connection is established before data transfer, ensuring all
packets follow the same path.
2. Reliable & Ordered Delivery: Packets arrive in sequence since they travel through the
same path, reducing the need for reordering.
3. Resource Reservation: Bandwidth and switch capacity are reserved during connection
setup, ensuring stable performance.
4. Lower Overhead After Setup: Once established, packets carry a virtual circuit
identifier instead of a full destination address, reducing processing time.
5. Failure Requires Re-establishment: If a switch or link fails, the entire virtual circuit is
disrupted, requiring a new connection setup.
Examples:
Connection-oriented networks are best suited for applications requiring high reliability and
sequential data transfer, such as video conferencing, online banking, and voice calls.
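To make the contrast concrete, here is a minimal Python sketch (all addresses, port numbers, and VCIs are made-up illustrative values): a datagram switch looks up the full destination address for every packet, while a virtual-circuit switch looks up only the short VCI assigned during connection setup.
```python
# Minimal sketch contrasting datagram and virtual-circuit forwarding.
# Addresses, ports, and VCIs below are made-up illustrative values.

# Datagram switch: every packet carries a full destination address,
# and the switch consults its forwarding table independently per packet.
datagram_table = {"HostA": 1, "HostB": 2, "HostC": 3}   # destination -> output port

def forward_datagram(dest_addr):
    return datagram_table[dest_addr]                    # lookup repeated for each packet

# Virtual-circuit switch: the table is filled during connection setup;
# packets carry only a small VCI instead of the full address.
vc_table = {(0, 5): (1, 11), (2, 7): (3, 4)}            # (in_port, in_VCI) -> (out_port, out_VCI)

def forward_vc(in_port, in_vci):
    out_port, out_vci = vc_table[(in_port, in_vci)]     # packet is relabelled with the outgoing VCI
    return out_port, out_vci

print(forward_datagram("HostB"))    # -> 2
print(forward_vc(0, 5))             # -> (1, 11)
```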
Discuss Source Routing in a switched network. Pg: 219
Source Routing in a Switched Network
Source routing is a switching technique in which the entire route a packet must take through the
network is determined by the source host. Instead of relying on switches to determine forwarding
paths dynamically, the packet itself carries all necessary routing information in its header.
How It Works:
1. The source host includes an ordered list of switch ports in the packet header.
2. Each switch reads the next port number in the header and forwards the packet
accordingly.
3. Some implementations rotate or strip the header entries after each hop to ensure the
correct path is followed.
• Strict Source Routing: Every switch along the path must be explicitly listed.
• Loose Source Routing: Only key waypoints are specified, leaving intermediate routing
flexible.
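The header-rotation idea described above can be sketched as follows (the port numbers and packet format are illustrative, not taken from any particular protocol):
```python
# Sketch of strict source routing with header rotation (illustrative ports).
def switch_forward(packet):
    """Each switch reads the first port in the header, forwards the packet
    out of that port, and rotates the entry to the end of the list."""
    header = packet["ports"]
    out_port = header[0]
    packet["ports"] = header[1:] + [out_port]   # rotate so the next switch sees its port first
    return out_port

packet = {"ports": [3, 0, 2], "payload": "hello"}   # path chosen entirely by the source host
for hop in range(3):
    print("hop", hop, "-> output port", switch_forward(packet))
```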
Examples:
• IP Source Routing: An optional feature in IPv4 that allows packets to follow a pre-
determined path.
• Token Ring Networks: IBM’s Token Ring used source routing to guide packets through
the network.
• Multiprotocol Label Switching (MPLS): Uses label stacks, a concept similar to source
routing, for efficient packet forwarding.
Source routing is useful in controlled environments like data centers but is rarely used in large-
scale public networks due to scalability and security concerns.
Advantages of Routing
1. Efficiency: Ensures data packets take the optimal path, reducing latency and
improving speed.
2. Reliability: Adapts to network changes, ensuring communication even in case of
failures.
3. Scalability: Supports network expansion by efficiently managing data traffic.
4. Load Balancing: Distributes network traffic effectively, preventing congestion.
5. Security: Some routing protocols include authentication mechanisms to prevent
unauthorized access.
Working of the Distance Vector Routing Algorithm
1. Initial Setup:
o Each router maintains a routing table with entries for directly connected
neighbors.
o Example: Router A has two neighbors, B and C, so its initial table contains direct
paths to B and C.
2. Periodic Updates:
o Routers exchange routing tables with their immediate neighbors at regular
intervals.
o Example: Router A shares its table with B and C, and vice versa.
3. Distance Calculation:
o Each router updates its routing table by adding the cost (e.g., hop count) to each
destination received from its neighbors.
o Example: If A → B has cost 1 and B → D has cost 2, then A can reach D via B
with cost 3.
4. Routing Table Update:
o If a router discovers a shorter path, it updates its table and propagates the
information to neighbors.
o Example: If A initially had a route to D with a cost of 5 but learns from B that a
path exists with a cost of 3, it updates its table and informs other routers.
5. Convergence:
o The process continues until all routers have the most efficient path to each
destination.
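A minimal sketch of one distance-vector update at router A, using the costs from the example above (the code and data structures are only illustrative):
```python
# Sketch of one distance-vector update at router A (illustrative costs).
# A's current table: cost to each destination and the next hop used.
table_A = {"B": (1, "B"), "C": (1, "C")}        # destination -> (cost, next_hop)
cost_to_B = 1                                    # cost of the direct link A-B

# B advertises its own distance vector to A.
vector_from_B = {"A": 1, "C": 2, "D": 2}

for dest, cost_via_B in vector_from_B.items():
    if dest == "A":
        continue
    new_cost = cost_to_B + cost_via_B            # cost to reach dest if we go through B
    if dest not in table_A or new_cost < table_A[dest][0]:
        table_A[dest] = (new_cost, "B")          # shorter path found -> update and later re-advertise

print(table_A)   # {'B': (1, 'B'), 'C': (1, 'C'), 'D': (3, 'B')} -> A reaches D via B with cost 3
```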
Routing
Routing is the process of determining the best path for data packets to travel from a source to a
destination across a network. It involves selecting efficient routes based on network conditions,
cost, and performance. Routers play a key role in forwarding packets based on routing tables.
The Link State Algorithm is a more advanced and efficient routing technique than the Distance
Vector Algorithm. It is used primarily in large and complex networks.
Key Components of the Link State Algorithm:
1. Initial State:
o Each router knows the state of its directly connected links and assigns a cost
metric to each.
o Example: Router A connects to Router B with a cost of 5 and Router C with a
cost of 3.
2. Link State Advertisements (LSAs):
o Each router generates an LSA containing details of its directly connected links
and floods this information to all routers in the network.
o Example: Router A sends LSAs to B and C, and they pass it to other routers.
3. Building a Topology Map:
o Routers collect LSAs from all other routers and construct a complete network
topology.
o Example: After receiving LSAs from all routers, A knows the full network layout.
4. Shortest Path Calculation:
o Each router applies Dijkstra's Algorithm to compute the shortest path to all
destinations based on the topology map.
o Example: If Router A wants to reach Router D, it selects the shortest route based
on cumulative link costs.
Example Topology (Link Costs):
Link      Cost
A → B     5
A → C     3
B → D     2
C → D     4
• Step 1: Each router generates LSAs and shares with all routers.
• Step 2: Every router builds the complete topology map.
• Step 3: Using Dijkstra’s Algorithm, each router computes the shortest paths:
o A to D: A → B → D (Cost = 5 + 2 = 7)
o A to C: Direct (Cost = 3)
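For reference, a compact sketch of Dijkstra's Algorithm applied to the link costs above (the implementation details are illustrative):
```python
import heapq

# Dijkstra's shortest-path computation on the link costs from the example above.
graph = {
    "A": {"B": 5, "C": 3},
    "B": {"A": 5, "D": 2},
    "C": {"A": 3, "D": 4},
    "D": {"B": 2, "C": 4},
}

def dijkstra(graph, source):
    dist = {source: 0}
    pq = [(0, source)]                     # priority queue of (distance, node)
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue                       # stale entry
        for neighbor, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(pq, (nd, neighbor))
    return dist

print(dijkstra(graph, "A"))   # {'A': 0, 'B': 5, 'C': 3, 'D': 7} -> A reaches D with cost 7
```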
Disadvantages:
• Requires more memory and processing at each router to store the full topology and run Dijkstra's Algorithm.
• Flooding of LSAs consumes bandwidth, especially in large or unstable networks.
• More complex to configure and troubleshoot than distance vector routing.
IPv4 is the fourth version of the Internet Protocol, used to identify devices on a network using
a unique address. It is the most widely used version of IP.
Key Features:
• 32-bit addresses written in dotted decimal notation (e.g., 192.168.1.1)
• Connectionless, best-effort packet delivery
Limitations:
• Limited address space
• No built-in support for security or encryption
• No native support for mobility or auto-configuration
IPv4 stands for Internet Protocol version 4. It is the fourth version of the Internet Protocol
and the first widely used version to connect devices on the internet. IPv4 is a connectionless
protocol used to identify devices through an IP address.
Introduction
IPv4 (Internet Protocol Version 4) is the fourth version of the Internet Protocol (IP) and the most
widely used protocol for identifying devices on a network. It operates at the network layer of
the OSI model and provides unique addressing for devices to communicate over the internet.
Key Features of IPv4
1. 32-bit Addressing – Uses a 32-bit address, allowing approximately 4.3 billion (2³²)
unique addresses.
2. Decimal Notation – Represented in dotted decimal format, e.g., 192.168.1.1.
3. Classful Addressing – Divided into five classes (A, B, C, D, E) based on network size.
4. Subnetting and CIDR – Introduces subnetting for better IP management and CIDR for
efficient address allocation.
5. Connectionless Protocol – Does not guarantee delivery; reliability is left to higher-layer
protocols such as TCP.
6. Broadcast Communication – Supports broadcasting to send data to all devices in a
network.
7. Fragmentation – Allows large packets to be divided into smaller fragments for
transmission.
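Two of these features, dotted-decimal notation and CIDR, can be illustrated with Python's standard ipaddress module (the addresses chosen are arbitrary examples):
```python
import ipaddress

# Dotted-decimal notation: a 32-bit number written as four 8-bit fields.
addr = ipaddress.IPv4Address("192.168.1.1")
print(int(addr))                            # 3232235777 -> the same address as a 32-bit integer
print(ipaddress.IPv4Address(3232235777))    # 192.168.1.1

# CIDR: the /24 prefix says the first 24 bits identify the network.
net = ipaddress.ip_network("192.168.1.0/24")
print(net.netmask)               # 255.255.255.0
print(net.num_addresses)         # 256 addresses in this block
print(addr in net)               # True -> 192.168.1.1 belongs to 192.168.1.0/24
```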
Limitations of IPv4
• Limited Address Space – The 4.3 billion IP addresses are insufficient due to the rapid
growth of internet-connected devices.
• Security Concerns – IPv4 does not have built-in encryption or authentication.
• Lack of QoS Support – IPv4 does not efficiently support real-time services like VoIP.
• Address Exhaustion – Due to the growing demand, IPv4 addresses are nearly
exhausted.
Transition to IPv6
IPv6 uses 128-bit addresses, providing a vastly larger address space, and adds features such as built-in IPsec support and stateless auto-configuration. Because IPv4 and IPv6 must coexist, the transition is gradual, relying on techniques such as dual-stack operation and tunneling IPv6 traffic over IPv4 networks.
Conclusion
IPv4 remains the backbone of the internet, but due to address exhaustion and security concerns,
the transition to IPv6 is gradually taking place.
IPv6 has a simpler and more efficient header format compared to IPv4. The fixed header is 40
bytes long and contains essential information required for routing and delivery.
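A rough sketch of how those 40 bytes are laid out, assuming the standard fixed-header fields (Version, Traffic Class, Flow Label, Payload Length, Next Header, Hop Limit, and the 128-bit source and destination addresses); the field values used are arbitrary examples:
```python
import struct, socket

# Sketch of packing the 40-byte IPv6 fixed header (field values are arbitrary examples).
version, traffic_class, flow_label = 6, 0, 0
payload_length, next_header, hop_limit = 20, 6, 64          # next_header 6 = TCP
src = socket.inet_pton(socket.AF_INET6, "2001:db8::1")      # 16-byte source address
dst = socket.inet_pton(socket.AF_INET6, "2001:db8::2")      # 16-byte destination address

first_word = (version << 28) | (traffic_class << 20) | flow_label
header = struct.pack("!IHBB", first_word, payload_length, next_header, hop_limit) + src + dst
print(len(header))   # 40 -> the fixed IPv6 header is always 40 bytes
```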
Key Takeaways
IPv6 replaces IPv4's variable-length header with a fixed 40-byte header and moves optional information into extension headers, which simplifies processing at routers.
To address the problem of loops among bridges in an extended LAN, the Spanning Tree
Algorithm is used. The following explains how the Spanning Tree Algorithm prevents frames
from looping indefinitely:
Spanning Tree Algorithm – Explained
Problem:
When multiple bridges are used to connect LAN segments, loops can be formed unintentionally
or intentionally (for redundancy). In such cases, broadcast and unknown-destination frames
can circulate endlessly, causing network congestion or broadcast storms.
The Spanning Tree Algorithm, developed by Radia Perlman, ensures that the network of
bridges is loop-free by logically organizing it into a spanning tree — a subset of the network
graph that:
• spans (covers) every LAN segment in the extended LAN, and
• contains no loops, so there is exactly one active path between any two segments.
This is implemented in practice via the Spanning Tree Protocol (STP), part of the IEEE
802.1D standard.
By keeping only one active path between any two LAN segments, the spanning tree eliminates
loops, ensuring:
• no broadcast storms or endlessly circulating frames,
• no duplicate delivery of frames to a segment, and
• consistent, predictable forwarding paths.
Example:
• Without STP, a broadcast from one host could endlessly circulate between B1, B4, and
B6.
• With STP, only one path among B1, B4, and B6 will be selected to be part of the
spanning tree; the others will be disabled, breaking the loop.
Summary:
The Spanning Tree Algorithm is a distributed protocol that enables bridges to organize
themselves into a loop-free logical topology by electing a root bridge, computing shortest paths,
and disabling unnecessary ports.
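A toy sketch of those two core steps, root-bridge election and shortest-path selection, on the B1/B4/B6 loop mentioned above (the bridge IDs, equal link costs, and tie-breaking are simplified assumptions; real STP also exchanges configuration BPDUs and elects designated bridges per segment):
```python
# Minimal sketch of two core steps of the Spanning Tree Algorithm:
# electing a root bridge and keeping only shortest-path links to it.
# Bridge IDs and topology are illustrative (loosely based on B1/B4/B6 above).
import heapq

links = {                      # bridge -> {neighbor: link cost}
    "B1": {"B4": 1, "B6": 1},
    "B4": {"B1": 1, "B6": 1},
    "B6": {"B1": 1, "B4": 1},
}

root = min(links)              # step 1: the bridge with the smallest ID becomes the root
print("root bridge:", root)    # -> B1

# Step 2: shortest paths from the root; the parent link of each bridge stays active.
dist, parent = {root: 0}, {}
pq = [(0, root)]
while pq:
    d, b = heapq.heappop(pq)
    if d > dist.get(b, float("inf")):
        continue
    for nbr, cost in links[b].items():
        if d + cost < dist.get(nbr, float("inf")):
            dist[nbr], parent[nbr] = d + cost, b
            heapq.heappush(pq, (d + cost, nbr))

active = {tuple(sorted((b, p))) for b, p in parent.items()}
all_links = {tuple(sorted((a, b))) for a in links for b in links[a]}
print("active links :", active)              # e.g. {('B1', 'B4'), ('B1', 'B6')}
print("blocked links:", all_links - active)  # e.g. {('B4', 'B6')} -> the loop is broken
```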
Normally, IP addresses are tied to a fixed location. But mobile users move across different
networks (like from Wi-Fi to mobile data). Without Mobile IP, communication would break
when the device’s network changes.
In IP networks, datagram forwarding means sending data packets (called datagrams) from one
device to another through routers until it reaches its destination.
Step-by-Step Process:
1. The sending host checks whether the destination is on its own network; if so, it delivers the
datagram directly, otherwise it sends the datagram to its default router.
2. Each router looks up the datagram's destination address in its forwarding table to find the
next hop.
3. The datagram is forwarded hop by hop in this way until it reaches a router attached to the
destination network, which delivers it to the destination host.
Example:
A host sending a packet to a server on another network hands it to its default router; each
router along the way consults its forwarding table and passes the packet on until it reaches the
server's network.
A Virtual Network is a network created using software, allowing devices to communicate with
each other as if they are physically connected, even though they may be located in different
places. It operates on top of physical networks and helps in managing, segmenting, and securing
network traffic.
Key Features:
• Created and managed in software, running on top of the existing physical network.
• Logically segments and isolates traffic (e.g., through VLANs or VPN tunnels).
Advantages:
• Cost-effective (no need for physical wiring).
• Flexible and scalable.
• Enhances security by isolating traffic.
• Ideal for remote access and cloud services.
Example:
When employees work from home, they use a VPN to connect to the office network securely.
Though they are not in the office physically, the virtual network makes it appear like they are.
1. Limited Bandwidth:
Wireless networks often have lower data transfer rates compared to wired networks.
2. Signal Interference:
Physical obstacles, weather, or other electronic devices can cause signal loss or disruption.
3. Mobility Management:
Ensuring seamless handover when users move between different network areas is complex.
4. Battery Constraints:
Mobile devices rely on battery power, so energy-efficient communication is important.
5. Security Issues:
Wireless communication is more prone to attacks like eavesdropping and data theft.
6. Network Congestion:
High user traffic can lead to slow connections or dropped data.
7. Variable Connectivity:
Signal strength and network availability can change frequently, affecting performance.
8. Latency:
Time delay in sending and receiving data may increase due to weak or busy networks.
9. Limited Hardware Resources:
Mobile devices have less processing power and storage than desktops or servers.
10. Data Cost:
Mobile data usage can be expensive, especially in regions with limited or costly internet
access.
Module 4:
2 marks
What is congestion?
List issues which affect the network to get congestion in wired networks.
List issues which affect the network to get congestion in wireless networks.
10 marks
Discuss state transition diagram in reliable byte stream.
Discuss how connection establishment and termination are done in TCP to transmit data
(3-way handshake).
State Transition Diagram in Reliable Byte Stream (TCP)
A state transition diagram for a reliable byte stream protocol like TCP illustrates how a TCP
connection progresses through various states during its lifecycle. It helps in understanding how
TCP establishes, maintains, and terminates connections reliably.
• CLOSED → LISTEN: A server process enters the LISTEN state, waiting for
connection requests.
• CLOSED → SYN_SENT: A client initiates a connection by sending a SYN segment
(Active Open).
• LISTEN → SYN_RCVD: The server receives SYN, responds with SYN + ACK.
• SYN_SENT → ESTABLISHED: Client receives SYN + ACK, sends ACK, and the
connection is established.
• SYN_RCVD → ESTABLISHED: Server receives the final ACK, completing the
handshake.
• ESTABLISHED: The connection is active, and both parties can send/receive data.
FIN (Finish) is a flag in the TCP header used to terminate a connection between two
devices.
1. CLOSED
o The initial state where no connection exists.
o A connection is created when an application initiates a connection request.
2. LISTEN
o The server waits for an incoming connection request from a client.
o The server enters this state after executing socket() and listen().
3. SYN-SENT
o The client sends a SYN (synchronize) segment to initiate a connection.
o The client moves to this state after executing connect().
4. SYN-RECEIVED
o The server receives a SYN request and responds with SYN + ACK.
o The server transitions here after receiving a connection request.
5. ESTABLISHED
o Both client and server successfully exchange SYN and ACK segments.
o This state allows data transmission between both ends.
6. FIN-WAIT-1
o The connection is being closed from one end (active close).
o The client sends a FIN (Finish) segment and waits for an acknowledgment.
7. FIN-WAIT-2
o The other party acknowledges the FIN and prepares to close its end.
o The client moves here after receiving an ACK for its FIN.
8. CLOSING
o Both sides send a FIN simultaneously, leading to a transition to this state.
9. TIME-WAIT
o The sender of the final ACK waits for a period (typically 2 × MSL) to ensure the
other end received it.
o This prevents old duplicate packets from interfering with a new connection.
10. CLOSE-WAIT
• The receiving side acknowledges the FIN but still has data to send.
• The application will close the connection when all pending data is sent.
11. LAST-ACK
• The receiving side sends its own FIN after sending all remaining data.
• The connection is finally closed when the other end acknowledges it.
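One way to visualize the diagram is as a lookup table of (state, event) pairs; the sketch below covers only the transitions listed above and uses illustrative event names:
```python
# Sketch of key TCP state transitions as a lookup table:
# (current_state, event) -> next_state.  Only the transitions described
# above are included; a real implementation has more events and actions.
TRANSITIONS = {
    ("CLOSED",      "passive_open"):      "LISTEN",
    ("CLOSED",      "active_open/SYN"):   "SYN_SENT",
    ("LISTEN",      "recv_SYN/SYN+ACK"):  "SYN_RCVD",
    ("SYN_SENT",    "recv_SYN+ACK/ACK"):  "ESTABLISHED",
    ("SYN_RCVD",    "recv_ACK"):          "ESTABLISHED",
    ("ESTABLISHED", "close/FIN"):         "FIN_WAIT_1",
    ("ESTABLISHED", "recv_FIN/ACK"):      "CLOSE_WAIT",
    ("FIN_WAIT_1",  "recv_ACK"):          "FIN_WAIT_2",
    ("FIN_WAIT_2",  "recv_FIN/ACK"):      "TIME_WAIT",
    ("CLOSE_WAIT",  "close/FIN"):         "LAST_ACK",
    ("LAST_ACK",    "recv_ACK"):          "CLOSED",
}

# Walk the client's side of the three-way handshake and an active close.
state = "CLOSED"
for event in ["active_open/SYN", "recv_SYN+ACK/ACK",
              "close/FIN", "recv_ACK", "recv_FIN/ACK"]:
    state = TRANSITIONS[(state, event)]
    print(event, "->", state)
# ends in TIME_WAIT, after which the client waits 2 x MSL before returning to CLOSED
```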
A simple demultiplexer is a system that takes incoming data and distributes it to the correct
destination without additional processing or modifications.
The User Datagram Protocol (UDP) is a simple, connectionless transport protocol that extends
host-to-host delivery into process-to-process communication. It provides minimal overhead and
direct data transmission without establishing a connection.
Key Features of UDP
1. Connectionless Protocol
o Unlike TCP, UDP does not establish a connection before sending data.
o It simply forwards packets to the destination without ensuring reliability or order.
2. Minimal Overhead
o UDP adds only an 8-byte header, containing:
▪ Source Port (16 bits)
▪ Destination Port (16 bits)
▪ Length (16 bits)
▪ Checksum (16 bits)
o Since there is no need for handshaking or connection management, it is faster
than TCP.
3. Process-to-Process Communication (Demultiplexing)
o UDP enables multiple processes on a single host to share the network.
o It uses port numbers to identify sending and receiving processes.
o The combination of port and host address acts as a demultiplexing key.
4. No Error Handling or Flow Control
o UDP does not guarantee packet delivery, order, or error correction.
o If a packet is lost or arrives out of order, UDP does nothing to recover it.
o It relies on the application layer to handle errors if needed.
5. Independent Packets (Datagrams)
o Each UDP packet is independent and carries its own destination information.
o The receiver cannot assume any relationship between received packets.
6. Checksum for Data Integrity
o UDP includes a checksum to verify the correctness of the data.
o The checksum covers the UDP header, message body, and a pseudoheader (from
the IP header).
o If the checksum is incorrect, the packet is discarded.
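A small sketch of building the 8-byte header described in point 2 (the port numbers and payload are arbitrary; setting the checksum field to zero, which IPv4 permits, is an assumption made to keep the example short):
```python
import struct

# Sketch of building the 8-byte UDP header described above
# (port numbers and payload are arbitrary examples).
src_port, dst_port = 5000, 53          # e.g. an application querying DNS
payload = b"example query"
length = 8 + len(payload)              # header (8 bytes) + data
checksum = 0                           # 0 = checksum not computed (permitted over IPv4)

header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
datagram = header + payload
print(len(header), "byte header,", len(datagram), "byte datagram")   # 8 byte header
```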
Typical Uses of UDP
• Real-time applications (e.g., video streaming, online gaming, VoIP) where speed is
more important than reliability.
• DNS (Domain Name System) queries, where fast lookup is needed.
• Broadcast and multicast applications, such as network discovery protocols.
Conclusion
UDP acts as a simple demultiplexer, allowing processes to communicate using port numbers
with minimal overhead. It is fast but unreliable, making it suitable for applications that
prioritize speed over accuracy.
UDP (User Datagram Protocol) is called a simple demultiplexer because it delivers data to the
correct application using only port numbers without additional processing. Here’s why:
• UDP identifies applications solely based on port numbers (Source Port & Destination
Port).
• When data arrives, UDP checks the destination port number and forwards it to the
appropriate application.
• Unlike TCP, it does not establish a connection or maintain session states.
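A minimal sketch of this port-based dispatch (the port-to-process bindings are invented for illustration):
```python
# Sketch of UDP-style demultiplexing: the destination port alone decides
# which application process receives the message (ports are illustrative).
bindings = {53: "dns_server", 69: "tftp_server", 5000: "game_server"}   # port -> process

def demultiplex(dst_port, data):
    app = bindings.get(dst_port)
    if app is None:
        return "no process bound to this port -> datagram discarded"
    return f"delivered {len(data)} bytes to {app}"

print(demultiplex(53, b"query"))      # delivered 5 bytes to dns_server
print(demultiplex(9999, b"oops"))     # no process bound to this port -> datagram discarded
```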
2. No Connection Establishment
o UDP sends datagrams without any prior handshake, so there is no connection state to set up
or tear down.
5. Minimal Overhead
o Only the 8-byte UDP header (source port, destination port, length, checksum) is added to
each message.
Conclusion
UDP simply hands each arriving datagram to the process bound to the destination port number,
which is why it is described as a simple demultiplexer.
Explain FIFO and Fair Queuing Disciplines with real world application.
Discuss Queuing Disciplines. Pg: 20
Write a note on: FIFO
"queuing discipline" refers to the set of rules that determine the order in which packets are
selected for transmission from a queue, essentially deciding which packet gets served next
when multiple packets are waiting to be sent; common queuing disciplines include First Come
First Served (FCFS), Priority Queuing, Fair Queuing (FQ), and Weighted Fair Queuing (WFQ).
Scenario Description
A large number of students connect to an online exam server at the same time, sending requests
over the same network links.
What Happens?
The server's link and its queue fill up, packets are delayed or dropped, and students experience
lag, failed submissions, or disconnections.
To prevent congestion before it worsens, the server can implement Random Early Detection
(RED), a congestion avoidance mechanism.
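A simplified sketch of RED's drop decision, using an exponentially weighted average queue length and a drop probability that rises linearly between two thresholds (the threshold, weight, and maximum-probability values are illustrative, and the count-based refinement of real RED is omitted):
```python
import random

# Simplified sketch of the RED (Random Early Detection) drop decision
# (thresholds and weights below are illustrative, not standard values).
MIN_TH, MAX_TH, MAX_P, WEIGHT = 20, 60, 0.1, 0.2

avg_qlen = 0.0                                   # exponentially weighted average queue length

def red_arrival(current_qlen):
    """Return True if the arriving packet should be dropped."""
    global avg_qlen
    avg_qlen = (1 - WEIGHT) * avg_qlen + WEIGHT * current_qlen
    if avg_qlen < MIN_TH:
        return False                             # queue is short: never drop
    if avg_qlen >= MAX_TH:
        return True                              # queue is long: always drop
    # In between: drop with a probability that grows linearly with the average.
    p = MAX_P * (avg_qlen - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p

for qlen in [5, 30, 50, 80, 80]:
    dropped = red_arrival(qlen)
    print(f"instantaneous queue={qlen:3d}  average={avg_qlen:5.1f}  drop={dropped}")
```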
Conclusion
Network congestion during an online exam is a real-world problem that can be mitigated using
RED, traffic shaping, and load balancing. These techniques ensure fair bandwidth
distribution, prevent server crashes, and provide a smooth user experience.
Fair Queuing (FQ) is a packet scheduling algorithm that ensures fair bandwidth distribution
among multiple data flows in a network. It prevents any single flow from monopolizing the
network, ensuring equal access for all.
Explain FIFO and Fair Queuing Disciplines with real world application.
Queuing disciplines determine how packets are processed when multiple flows compete for
network resources. Two important queuing disciplines are FIFO (First In, First Out) and Fair
Queuing (FQ).
• FIFO is the simplest queuing discipline where packets are processed in the order they
arrive (like a queue in a grocery store).
• There is no priority or differentiation among packets.
• Once the queue is full, new incoming packets are dropped (tail drop).
Real-World Example (Toll Booth):
• Vehicles arrive at a toll booth in order and are served one by one.
• The first car in line gets processed first, without considering urgency.
• If traffic is high, later cars have to wait (just like network congestion).
Advantages
• Very simple to implement, with minimal processing overhead.
Disadvantages
• No differentiation or fairness: an aggressive flow can fill the queue and starve others, and
important packets get no priority.
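A minimal sketch of FIFO with tail drop (the buffer size and packet names are illustrative):
```python
from collections import deque

# Sketch of a FIFO queue with tail drop (buffer size is illustrative).
BUFFER_SIZE = 3
queue = deque()

def enqueue(packet):
    if len(queue) >= BUFFER_SIZE:
        print("buffer full -> tail drop:", packet)   # newest packet is discarded
        return
    queue.append(packet)

def transmit():
    if queue:
        print("transmitting:", queue.popleft())      # packets leave in arrival order

for p in ["p1", "p2", "p3", "p4"]:   # p4 arrives when the buffer is already full
    enqueue(p)
transmit()                            # -> p1 (first in, first out)
```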
• Unlike FIFO, Fair Queuing creates separate queues for each flow.
• It transmits packets in a round-robin fashion (one from each flow at a time).
• Ensures equal bandwidth distribution among active flows.
Real-World Example (Microphone Sharing in a Group Discussion):
• Each participant (data flow) gets an equal chance to speak (send packets).
• No single speaker (flow) dominates the conversation.
• The discussion moves forward smoothly, just like FQ prevents one data stream from
monopolizing the bandwidth.
Advantages
• Ensures fairness: every active flow gets its share of the bandwidth, and one flow cannot
monopolize the link.
Disadvantages
• More complex to implement, since the router must maintain and schedule a separate queue
for each flow.
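A minimal sketch of fair queuing as plain round-robin over per-flow queues (real FQ approximates bit-by-bit round robin using finish times; the flows and packets here are illustrative):
```python
from collections import deque

# Sketch of fair queuing as simple round-robin over per-flow queues.
flows = {
    "flow_A": deque(["A1", "A2", "A3", "A4"]),   # a heavy flow
    "flow_B": deque(["B1"]),                     # a light flow
    "flow_C": deque(["C1", "C2"]),
}

schedule = []
while any(flows.values()):
    for name, q in flows.items():                # visit each active flow in turn
        if q:
            schedule.append(q.popleft())
print(schedule)   # ['A1', 'B1', 'C1', 'A2', 'C2', 'A3', 'A4'] -> no flow monopolizes the link
```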
Comparison Table
Feature                FIFO                        Fair Queuing (FQ)
Processing Order       First Come, First Served    Equal sharing among flows
Priority Handling      No priority                 Ensures fairness
Complexity             Simple                      Complex (needs multiple queues)
Real-World Example     Toll booth queue            Microphone sharing
Suitability            Low-traffic networks        Balanced network usage
Transmission Control Protocol (TCP) uses several congestion control mechanisms to ensure
efficient data transmission while preventing network congestion. The key mechanisms include
Additive Increase, Multiplicative Decrease (AIMD), Slow Start, Fast Retransmit, and Fast
Recovery. These mechanisms work together to optimize network performance and ensure fair
bandwidth distribution.
1. Additive Increase, Multiplicative Decrease (AIMD)
AIMD is a congestion control algorithm used by TCP to adjust the congestion window (cwnd)
dynamically.
Working Mechanism:
1. Additive Increase: For every round-trip time (RTT) that passes without loss, TCP increases
cwnd by roughly one MSS, gently probing for additional bandwidth.
2. Multiplicative Decrease: When a loss is detected, TCP assumes congestion and cuts cwnd
in half.
Example:
If cwnd is 16 MSS, it grows to 17, 18, 19 … as ACKs arrive; if a loss occurs when cwnd is
20 MSS, cwnd is halved to 10 MSS and additive growth resumes from there.
2. Slow Start
Slow Start is a congestion control mechanism used at the beginning of a connection or after a
timeout to avoid overloading the network.
How It Works:
1. Initial State:
o TCP starts with a small cwnd (typically 1 MSS).
o The sender gradually increases cwnd exponentially.
2. Exponential Growth:
o For every acknowledged packet, cwnd increases by 1 MSS.
o This results in growth: 1, 2, 4, 8, 16… packets per RTT.
3. Threshold Check:
o When cwnd reaches a threshold (ssthresh), TCP switches to AIMD.
4. If Congestion Occurs:
o TCP reduces cwnd to cwnd / 2 and enters congestion avoidance mode.
• When you press "Play" on YouTube/Netflix, the video starts with low data flow.
• TCP doubles the transmission rate each RTT.
• Once ssthresh is reached, TCP transitions to AIMD to stabilize the connection.
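A small sketch of how cwnd might evolve per RTT under Slow Start and AIMD together (the initial ssthresh, the loss point, and the use of Fast Recovery on loss are assumptions chosen for illustration):
```python
# Sketch of how cwnd evolves per RTT under Slow Start and AIMD
# (values are in MSS units; ssthresh and the loss point are illustrative).
cwnd, ssthresh = 1, 16

def on_rtt(loss=False):
    """Update cwnd once per round-trip time."""
    global cwnd, ssthresh
    if loss:
        ssthresh = max(cwnd // 2, 1)   # multiplicative decrease
        cwnd = ssthresh                # (with Fast Recovery; a timeout would reset cwnd to 1)
    elif cwnd < ssthresh:
        cwnd *= 2                      # slow start: exponential growth
    else:
        cwnd += 1                      # congestion avoidance: additive increase

history = []
for rtt in range(10):
    on_rtt(loss=(rtt == 7))            # pretend a loss is detected in RTT 7
    history.append(cwnd)
print(history)   # [2, 4, 8, 16, 17, 18, 19, 9, 10, 11]
```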
3. Fast Retransmit
Fast Retransmit is a mechanism that detects lost packets before a timeout occurs and retransmits
them quickly.
How It Works:
1. The sender transmits packets (e.g., P1, P2, P3, P4, P5).
2. If P3 is lost, the receiver acknowledges P2 multiple times (ACK P2, ACK P2, ACK P2).
3. When the sender receives three duplicate ACKs, it assumes P3 is lost.
4. TCP immediately retransmits P3, avoiding long timeouts.
Example:
• Sender: P1 → P2 → P3 (Lost) → P4 → P5
• Receiver: ACK P1 → ACK P2 → ACK P2 → ACK P2
• Sender detects loss and quickly retransmits P3.
4. Fast Recovery
Fast Recovery prevents unnecessary Slow Start after a packet loss and ensures smooth traffic
flow.
How It Works:
1. When Fast Retransmit resends the lost packet, TCP reduces cwnd instead of resetting it.
2. Instead of restarting Slow Start (cwnd = 1 MSS), TCP cuts cwnd in half.
3. The growth becomes linear (AIMD mode) instead of exponential.
Example:
• Without Fast Recovery: TCP resets cwnd to 1 MSS, causing a slow restart.
• With Fast Recovery: TCP reduces cwnd by half and resumes AIMD.
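A toy sketch of Fast Retransmit and Fast Recovery driven by duplicate ACKs (the ACK strings, starting cwnd value, and the simplified bookkeeping are illustrative):
```python
# Sketch of Fast Retransmit: three duplicate ACKs trigger an immediate
# retransmission, and Fast Recovery halves cwnd instead of resetting it.
DUP_ACK_THRESHOLD = 3
cwnd = 16                              # in MSS units, illustrative

def receive_acks(acks):
    global cwnd
    last_ack, dup_count = None, 0
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == DUP_ACK_THRESHOLD:
                print(f"3 duplicate ACKs for {ack} -> retransmit the next segment now")
                cwnd //= 2             # Fast Recovery: halve cwnd, then continue with AIMD
                print("cwnd reduced to", cwnd, "(no Slow Start restart)")
        else:
            last_ack, dup_count = ack, 0

# Receiver keeps acknowledging P2 because P3 was lost.
receive_acks(["ACK P1", "ACK P2", "ACK P2", "ACK P2", "ACK P2"])
```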
Conclusion
TCP congestion control mechanisms like AIMD, Slow Start, Fast Retransmit, and Fast
Recovery ensure efficient data transmission, prevent congestion, and optimize network
performance. These mechanisms are critical for maintaining smooth streaming, video calls,
and online applications while ensuring fair bandwidth distribution across users.
Write a note on: DECbit, Random Early Detection, source-based congestion avoidance (TCP
Vegas) / discuss TCP congestion avoidance mechanisms.