
CSN-341

Assignment-4
Group Members:

1. Vaibhavi Makwana (22114051)
2. Jagrati Kaushik (22114040)
3. Anjalika Singh (22114007)
4. Ishaan Jain (22114039)
5. Rubaan Hasan (22114080)
6. Partha Kaushik (21114069)

Ques-1 Why is flow control important in the transport layer, and how does TCP
implement flow control through mechanisms like sliding windows? Discuss
how improper flow control can lead to issues like buffer overflow and network
congestion, and explain the potential impact on overall network performance.
Solution-
Flow control in the transport layer ensures that the sender does not transmit
more data than the receiver can buffer and process at once.
How TCP Handles Flow Control:
TCP uses a mechanism called the sliding window:
• The sliding window limits how much unacknowledged data the sender may have
in flight before it must pause and wait for the receiver.
• The receiver advertises its receive window, i.e. how much free buffer space
it currently has, in the segments it sends back.
• As the receiver processes data, it returns acknowledgments (ACKs) carrying
an updated window, telling the sender it may transmit more.
If the receiver's buffer is getting full, the advertised window shrinks and the
sender slows down. Once space is freed up, the window grows and the sender can
speed up again.
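The window mechanism above can be sketched as a toy simulation (illustrative only: the byte counts and window size here are made up, and real TCP tracks individual byte sequence numbers with a window that the receiver re-advertises on every ACK):

```python
def transfer(total_bytes, recv_window):
    """Count round trips when the sender is bounded by the receiver's window."""
    acked, rounds = 0, 0
    while acked < total_bytes:
        # The sender may have at most `recv_window` unacknowledged bytes in flight.
        burst = min(recv_window, total_bytes - acked)
        acked += burst      # receiver consumes the burst and ACKs it
        rounds += 1
    return rounds

# 1000 bytes through a 100-byte advertised window takes 10 round trips;
# a 400-byte window cuts that to 3.
print(transfer(1000, 100))  # -> 10
print(transfer(1000, 400))  # -> 3
```

A smaller advertised window protects the receiver's buffer at the cost of more round trips, which is exactly the throughput trade-off discussed below.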
What Happens Without Proper Flow Control:
1. Buffer Overflow: If the sender sends too much data too quickly, the
receiver’s buffer could get full, causing it to lose data. This is like spilling water
because the glass is too full.
2. Network Congestion: When multiple senders overload the network by
sending too much data at once, it can clog the system, causing delays and
lost data, like a traffic jam on a busy road.
Impact on Overall Network Performance:
• Increased Latency: Improper flow control can lead to packet loss and
retransmissions, which increases the round-trip time (RTT) and overall latency
in communication.
• Decreased Throughput: Buffer overflows and retransmissions can
significantly reduce network throughput, as the sender must resend lost
packets, reducing the efficiency of data transmission.
• Unreliable Communication: In extreme cases, inadequate flow control can
lead to dropped connections, data corruption, or unreliable communication as
data transmission becomes erratic and unpredictable.

Ques-2 Why are port numbers significant in the transport layer, and how do
they facilitate multiplexing and demultiplexing of data streams? Provide
examples of specific port numbers associated with well-known services, such
as HTTP and DNS, and discuss the potential issues that can arise due to port
number conflicts or misuse.
Solution- Port numbers are essential in the transport layer (e.g., TCP and UDP) for
identifying specific processes or services running on a host. They help in
multiplexing and demultiplexing data streams, allowing multiple communication
channels between hosts to occur simultaneously.
Multiplexing and Demultiplexing:
• Multiplexing: This is when multiple data streams (from different applications)
are combined and sent over the same network connection. For example,
when you browse the web, listen to music online, and send an email, all of
these activities use the same network connection, but each uses a different
port number.
• Demultiplexing: When the data reaches your computer, the transport layer
uses the port numbers to "sort" the data and send it to the right application.
For example, web data goes to the browser (port 80 or 443), while email data
goes to the email client.
Examples of Well-Known Port Numbers
• HTTP: Port 80 (TCP)
o HTTP is the protocol for web traffic. Browsers typically communicate
with web servers over this port.
• HTTPS: Port 443 (TCP)
o Used for secure web traffic using SSL/TLS encryption.
• DNS: Port 53 (UDP)
o DNS queries typically use UDP to resolve domain names to IP addresses;
TCP on port 53 is used for zone transfers and responses too large for a
single datagram.
• SMTP: Port 25 (TCP)
o Used for sending emails.
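These registered mappings can be checked from the standard library: Python's `socket.getservbyname` reads the system's services database, so the lookups below assume a typical Unix-like `/etc/services` file ("domain" is the registered service name for DNS):

```python
import socket

# Look up well-known ports by their registered service names.
print(socket.getservbyname("http", "tcp"))    # -> 80
print(socket.getservbyname("https", "tcp"))   # -> 443
print(socket.getservbyname("domain", "udp"))  # -> 53
print(socket.getservbyname("smtp", "tcp"))    # -> 25
```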
Port Number Conflicts and Misuse
1. Port Number Conflicts: When two applications attempt to use the same port
on a host, a conflict arises. Since only one application can bind to a particular
port at a time, this can cause one of the applications to fail or not start
properly. For example, if two web servers try to use port 80, one will encounter
a "port already in use" error.
2. Misuse of Ports: Malicious actors may exploit well-known ports for attacks,
like:
o Port Scanning: Attackers scan a host’s open ports to identify
vulnerabilities. Open ports with insecure services are a common entry
point for attacks.
o Port Hijacking: A malicious application could bind to a port that is
expected to be used by a legitimate service, redirecting traffic and
potentially stealing data.
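The conflict described in point 1 is easy to reproduce: the OS allows only one socket to bind a given (address, port) pair, so a second bind fails with "Address already in use". A minimal sketch, using port 0 so the OS picks a free port for the first socket:

```python
import socket

s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1.bind(("127.0.0.1", 0))           # port 0: let the OS choose a free port
port = s1.getsockname()[1]

s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s2.bind(("127.0.0.1", port))    # same port -> EADDRINUSE
    conflict = False
except OSError as e:
    conflict = True
    print("conflict:", e)
finally:
    s1.close()
    s2.close()

print("second bind failed:", conflict)  # -> True
```

This is exactly the "port already in use" error a second web server hits when it tries to bind port 80.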

Ques-3 Compare and contrast the Selective-Repeat and Go-Back-N protocols
in the context of reliability and efficiency. How does piggybacking in
bidirectional communication improve efficiency, and what are the practical
considerations when implementing piggybacking in real-world applications?
Solution- Both Selective-Repeat (SR) and Go-Back-N (GBN) are sliding window
protocols used to ensure reliable data transmission by handling packet loss and
errors. However, they differ in how they manage acknowledgment (ACK) and
retransmission of packets.
1. Go-Back-N (GBN):
• Reliability: If a packet is lost or arrives with an error, GBN requires the sender
to retransmit that packet and all subsequent packets, even if some of those
later packets were received correctly. This ensures reliability, but it can result
in unnecessary retransmissions.
• Efficiency: GBN can be inefficient, especially over unreliable networks,
because the sender might resend many packets that the receiver had already
received correctly. The larger the window size, the more unnecessary
retransmissions occur after an error.
2. Selective-Repeat (SR):
• Reliability: In SR, only the specific packet that was lost or corrupted is
retransmitted. The receiver stores correctly received packets in a buffer until
the missing packet arrives. Once all packets are in order, they are delivered to
the application.
• Efficiency: SR is more efficient than GBN because it minimizes unnecessary
retransmissions. However, it requires more complex receiver-side logic to
manage out-of-order packets and buffering.
Comparison:
• Reliability: Both protocols ensure reliable transmission, but GBN achieves
this by retransmitting multiple packets after a single error, while SR focuses
only on the problematic packet.
• Efficiency: SR is generally more efficient than GBN, especially in networks
with higher error rates, because it avoids unnecessary retransmissions.
However, SR's efficiency comes at the cost of more complexity in handling
out-of-order packets.
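A back-of-the-envelope model makes the efficiency gap concrete (the packet count, window size, and loss position below are made-up illustrative numbers, and the model assumes a single drop):

```python
def gbn_retransmissions(n_packets, lost, window):
    """Go-Back-N resends the lost packet plus every later packet already in flight."""
    in_flight_after_loss = min(window - 1, n_packets - lost - 1)
    return 1 + in_flight_after_loss

def sr_retransmissions(n_packets, lost, window):
    """Selective-Repeat resends only the one lost packet."""
    return 1

# One drop with a window of 4: GBN resends 4 packets, SR resends just 1.
print(gbn_retransmissions(10, lost=2, window=4))  # -> 4
print(sr_retransmissions(10, lost=2, window=4))   # -> 1
```

The gap widens with the window size: the more packets GBN has in flight when a loss occurs, the more correctly received packets it throws away and resends.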

Piggybacking in Bidirectional Communication:
Piggybacking is a technique used in bidirectional communication to improve
efficiency by combining an acknowledgment (ACK) with a data packet going in the
opposite direction. Instead of sending a separate packet just to acknowledge receipt
of data, the ACK is "piggybacked" onto a data packet being sent back to the sender.
How Piggybacking Improves Efficiency:
• Fewer Packets: By combining an ACK with a data packet, piggybacking
reduces the total number of packets transmitted, which reduces overhead and
saves bandwidth.
• Less Overhead: Each packet carries overhead (header information).
Piggybacking allows the system to avoid sending additional packets that
would have been used solely for ACKs, thus lowering the overall
communication cost.
Practical Considerations in Real-World Applications:
• Timing: If one side has no data to send but still needs to acknowledge
received data, it cannot wait indefinitely to piggyback the ACK. In such cases,
a separate ACK packet must be sent to avoid communication delays.
• Resource Constraints: Piggybacking can reduce efficiency in systems where
the sender or receiver has limited buffer space or processing power. The
additional complexity of managing combined packets could offset the benefits
in resource-constrained environments.
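The timing consideration above can be sketched as a frame builder: piggyback the pending ACK when outgoing data exists, otherwise fall back to a standalone ACK rather than delay it. The frame fields here are invented for illustration:

```python
def build_frame(pending_ack, outgoing_data=None):
    """Attach the pending ACK to a data frame if one is going out anyway."""
    if outgoing_data is not None:
        return {"data": outgoing_data, "ack": pending_ack}  # piggybacked
    # No data ready: send a standalone ACK rather than hold it indefinitely.
    return {"data": None, "ack": pending_ack}

print(build_frame(7, "reply payload"))  # one frame carries both data and ACK
print(build_frame(7))                   # ACK travels alone
```

A real implementation would add a short timer: wait briefly for outgoing data to piggyback on, then send the standalone ACK when the timer expires.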
Ques-4 In scenarios where low latency is critical, how would you evaluate the
trade-offs between using TCP and UDP? Consider the impact on reliability,
error correction, and the specific needs of applications: a) online gaming b)
video conferencing c) stock trading.
Solution-
a) Online Gaming
In online gaming, low latency is crucial for real-time interaction, especially in fast-
paced games (e.g., first-person shooters or real-time strategy games). Players must
receive updates as quickly as possible to react in real time.
• Preferred protocol: UDP
o Why UDP?: Online gaming prioritizes speed over reliability. Missing a
few packets is generally acceptable since new data (player positions,
actions) will soon arrive. Retransmitting old data can cause delays and
make the game unresponsive.
o UDP strengths: Low latency, minimal overhead.
o Trade-off: Some packet loss is tolerated, but overall gameplay is
smoother without the delays caused by TCP's retransmissions.
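This "speed over reliability" choice shows up directly in the socket API: a UDP datagram needs no handshake and gets no ACK. A loopback sketch, with the port chosen by the OS (delivery over loopback is effectively reliable, which is not true on a real network):

```python
import socket

# Receiver: bind a UDP socket; the OS assigns a free ephemeral port.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
recv.settimeout(1.0)
addr = recv.getsockname()

# Sender: no connect(), no handshake -- just fire the datagram.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"player position update", addr)

data, _ = recv.recvfrom(1024)
print(data)  # arrived over loopback; a real network gives no such guarantee
recv.close()
send.close()
```

A TCP version of the same exchange would first pay for a three-way handshake and would retransmit a stale position update that the game no longer wants.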
b) Video Conferencing
For video conferencing, a balance between latency and quality is necessary. Users
expect a real-time experience with minimal delay, but dropping too many packets
can reduce the quality of the audio and video feed.
• Preferred protocol: UDP (with application-level error handling)
o Why UDP?: In video conferencing, slight packet loss is often less
disruptive than delays caused by retransmitting lost packets. UDP
allows for continuous stream delivery, even if some packets are
missing, which helps maintain real-time interaction.
o UDP strengths: Low latency. Many video conferencing applications
handle errors at the application layer (e.g., Forward Error Correction or
interpolation of missing frames).
o Trade-off: Reduced quality when packets are dropped, but the
conversation continues smoothly without long pauses.
• TCP can be used in video conferencing, but it can lead to undesirable delays,
especially during poor network conditions.
c) Stock Trading
In stock trading, both low latency and reliability are critical. Traders depend on
real-time market data, and a delay of milliseconds can impact decision-making. At
the same time, data integrity is paramount, as incorrect or missing information
could lead to financial losses.
• Preferred protocol: TCP
o Why TCP?: The financial data transmitted in stock trading must be
accurate. TCP ensures that every packet is delivered reliably, which is
critical for trades to be executed correctly and for traders to make
informed decisions.
o TCP strengths: Guarantees data reliability, which is more important
than minimizing latency at all costs.
o Trade-off: Slightly higher latency due to the connection setup and
error-handling mechanisms. However, the integrity of the data is non-
negotiable in financial transactions, making TCP more suitable.

Ques-5 Discuss the importance of sequence numbers and acknowledgments
in TCP. How do these mechanisms ensure reliable data transmission and
prevent issues of: a) packet duplication b) out-of-order delivery c) Provide an
example of how TCP recovers from lost or out-of-order packets.
Solution- Sequence numbers and acknowledgments are fundamental mechanisms in
TCP that ensure reliable data transmission. Here’s a breakdown of their importance
and how they help prevent various issues.
Importance of Sequence Numbers and Acknowledgments
1. Sequence Numbers:
o Each byte of data transmitted in a TCP connection is assigned a
unique sequence number. This numbering allows the receiver to
reassemble the data in the correct order, regardless of the order in
which packets are received.
o Sequence numbers also help in identifying duplicate packets, as each
packet has a unique number.
2. Acknowledgments (ACKs):
o The receiver sends back ACKs to the sender to confirm receipt of
packets. An ACK includes the sequence number of the next expected
byte, indicating that all preceding bytes have been received
successfully.
o If the sender does not receive an ACK within a certain timeframe, it
assumes the packet was lost and will retransmit it.
Preventing Issues with Sequence Numbers and ACKs
a) Packet Duplication
• How it's prevented:
o When a receiver gets a packet with a sequence number that has
already been processed, it recognizes it as a duplicate and discards it,
only acknowledging the next expected byte.
• Example:
o If the receiver already acknowledged byte 100, receiving another
packet with byte 100 again would be ignored, as the receiver knows it
has already processed that data.
b) Out-of-Order Delivery
• How it's prevented:
o Sequence numbers allow the receiver to place packets in the correct
order. If packets arrive out of order, the receiver can hold onto them
until the missing packets arrive.
• Example:
o If packets with sequence numbers 1, 2, and 4 arrive but packet 3 is
missing, the receiver will buffer packets 1 and 2 and wait for packet 3.
Once packet 3 arrives, all packets can be delivered in the correct order.
c) Recovering from Lost or Out-of-Order Packets
• Example of recovery:
1. Lost Packet Scenario:
▪ Suppose a sender transmits packets with sequence numbers 1,
2, 3, and 4, but packet 2 is lost in transit.
▪ The receiver gets packets 1, 3, and 4. It delivers packet 1,
buffers packets 3 and 4, and keeps sending ACKs indicating that
packet 2 is the next one expected.
▪ The sender, not receiving an ACK that covers packet 2 within a
timeout period (or after several duplicate ACKs), assumes it was
lost and retransmits packet 2.
▪ Once the retransmitted packet 2 arrives, the receiver fills the
gap, delivers packets 2, 3, and 4 in order, and sends a cumulative
ACK covering all four, allowing the sender to continue.
2. Out-of-Order Packet Scenario:
▪ If the sender transmits packets 1, 2, and 3, and packet 3 arrives
before packet 2 at the receiver, the receiver will buffer packet 3
and send an ACK indicating that packet 2 is still the next one
expected.
▪ When packet 2 eventually arrives, the receiver fills the gap,
sends a cumulative ACK covering packets 1 through 3, and delivers
them to the application layer in the correct order.
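All three behaviours (discarding duplicates, buffering out-of-order packets, cumulative ACKs) fit in one toy receiver. Unlike real TCP, this sketch numbers whole packets rather than individual bytes:

```python
class Receiver:
    """Toy TCP-style receiver: in-order delivery with cumulative ACKs."""
    def __init__(self):
        self.expected = 1     # next in-order sequence number
        self.buffer = {}      # out-of-order packets held back
        self.delivered = []   # data handed to the application, in order

    def on_packet(self, seq, data):
        if seq < self.expected:
            return self.expected            # duplicate: discard and re-ACK
        self.buffer[seq] = data
        while self.expected in self.buffer: # deliver any in-order run
            self.delivered.append(self.buffer.pop(self.expected))
            self.expected += 1
        return self.expected                # cumulative ACK: next expected seq

r = Receiver()
print(r.on_packet(1, "a"))  # -> 2
print(r.on_packet(3, "c"))  # -> 2 (gap at 2, packet 3 buffered)
print(r.on_packet(1, "a"))  # -> 2 (duplicate, discarded)
print(r.on_packet(2, "b"))  # -> 4 (gap filled, 2 and 3 delivered)
print(r.delivered)          # -> ['a', 'b', 'c']
```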
