
Final Exam

Computer Networks Fall 2022


Prof. Cheng-Fu Chou
Question 1: "Quickies" (25%) Answer each of the following questions briefly, i.e., in at most a few sentences.

a) (10%) Where can queueing occur in a router? Briefly explain the conditions that lead to such queueing.
Queueing can occur at both the input ports and the output ports of a router. Queueing occurs at an
output port when the rate at which packets arrive for the outgoing link exceeds the link capacity.
Queueing occurs at an input port when the rate at which packets arrive exceeds the switch fabric's
capacity; head-of-the-line (HOL) blocking can also cause queueing at the input ports.
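
As an aside (not part of the original answer), a tiny toy calculation illustrates the output-port condition: whenever packets arrive for an outgoing link faster than the link can transmit them, the backlog grows without bound. The numbers below are made up for illustration.

# Illustrative sketch only: an output port queues packets whenever the rate of
# packets arriving for its outgoing link exceeds the rate the link can drain.
def output_port_backlog(arrival_pps, link_pps, seconds):
    """Backlog (in packets) after `seconds` of sustained arrivals."""
    backlog = 0.0
    for _ in range(seconds):
        backlog += arrival_pps              # packets switched to this output
        backlog -= min(backlog, link_pps)   # packets the link drains per second
    return backlog

# 1500 pkt/s arriving for a link that can send 1000 pkt/s: the queue keeps growing.
print(output_port_backlog(arrival_pps=1500, link_pps=1000, seconds=10))  # ~5000.0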
b) (5%) Two hosts simultaneously send data through a link of capacity 1 Mbps. Host A generates data with
a rate of 1 Mbps and uses TCP. Host B uses UDP and transmits a 100-byte packet every 1 ms. Which host
will obtain higher throughput?
Host B. UDP does not reduce its sending rate in response to loss, whereas TCP's congestion control backs off whenever loss occurs, so Host B keeps its full sending rate and obtains the higher throughput.
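
As a quick arithmetic check (not part of the original solution), Host B's offered load can be computed directly:

# Host B (UDP) offers one 100-byte packet every 1 ms, regardless of loss.
packet_bits = 100 * 8             # 800 bits per packet
interval_s = 1e-3                 # one packet per millisecond
udp_rate_bps = packet_bits / interval_s
print(udp_rate_bps)               # 800000.0 bps = 0.8 Mbps

# Host A (TCP) backs off whenever loss occurs, so on the shared 1 Mbps link it
# is left with roughly the remaining ~0.2 Mbps.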

c) (10%) Suppose a web server has 500 ongoing TCP connections. How many server-side sockets are
used? How many server-side port numbers are used? Briefly (two sentences at most each) explain your
answer.
Ans: If there are 500 ongoing connections, and nothing else happening on the server, there will be 501
sockets in use – the single welcoming socket and the 500 sockets in use for server-to-client
communication. The ONLY server-side port number in use at the server will be the single port number
associated with the welcoming socket (e.g., port 80 on a web server).

Question 2: High Speed TCP (20%)


Consider TCP over long fat pipes.
a) (10%) Please derive the TCP throughput as a function of the loss rate (L), the round-trip time (RTT)
and the maximum segment size (MSS).
In steady state the congestion window oscillates between W/2 and W segments, so the average window is 0.75*W and the TCP throughput eqn = 0.75*W*MSS/RTT. One segment is lost per cycle of roughly (3/8)*W^2 segments, so L ~= 1/((3/8)*W^2) and W ~= sqrt(8/(3L)). Substituting gives the TCP throughput eqn = 0.75*sqrt(8/(3L))*MSS/RTT ~= 1.22*MSS/(RTT*sqrt(L)).
b) (10%) If we want to achieve an average 20 Gbps throughput by using 1000-byte segments over a 100 ms
RTT connection, what is the average size of W (the average congestion window size)? What is the
segment loss probability that today's TCP congestion-control algorithm could tolerate?
By using the TCP throughput eqn = 0.75*W*MSS/RTT with MSS = 1000 bytes = 8000 bits and RTT = 100 ms,
W at the time of loss ~= 333,333 segments, so the average size of W = 3/4 * 333,333 ~= 250,000 segments.
By using the TCP throughput eqn = 1.22*MSS/(RTT*sqrt(L)), sqrt(L) = 1.22*MSS/(RTT * 20 Gbps) ~= 4.9*10^-6,
so L ~= 2.4*10^-11.
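
As a quick check of these numbers (a sketch; the variable names are illustrative), a short calculation reproduces both the average window size and the tolerable loss rate:

target_bps = 20e9             # desired average throughput: 20 Gbps
mss_bits = 1000 * 8           # 1000-byte segments
rtt_s = 0.1                   # 100 ms round-trip time

# Throughput = 0.75 * W * MSS / RTT  ->  W (in segments) at the time of loss
w_at_loss = target_bps * rtt_s / (0.75 * mss_bits)
w_avg = 0.75 * w_at_loss
print(round(w_at_loss), round(w_avg))   # ~333333 and ~250000 segments

# Throughput = 1.22 * MSS / (RTT * sqrt(L))  ->  tolerable loss probability L
loss = (1.22 * mss_bits / (rtt_s * target_bps)) ** 2
print(f"{loss:.1e}")                    # ~2.4e-11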

Question 3: Congestion Control and TCP (30%)

a) (5%) What is the difference between congestion control and flow control?
Flow control is about matching the speed of a sender to the capabilities of the receiver. Congestion
occurs when senders overutilize the resources within the network.
b) (5%) It is said that a TCP connection “probes” the network path it uses for available bandwidth. What
is meant by that?
TCP keeps increasing its send rate (by increasing its window size) until loss occurs (at which point
congestion has set in). TCP then sets its rate lower but again begins increasing its send rate to
determine the point at which congestion sets in. In this sense, TCP is constantly probing the
network to see how much bandwidth it can use.
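
As an aside (not part of the original answer), a toy additive-increase/multiplicative-decrease loop illustrates this probing: the window climbs until it overshoots the available capacity, is cut back, and climbs again.

# Toy AIMD sketch (illustration only): the window grows by one segment per RTT
# until it exceeds the path capacity ("loss"), is halved, and starts climbing again.
def aimd_trace(capacity_segments, rtts):
    cwnd, trace = 1, []
    for _ in range(rtts):
        trace.append(cwnd)
        if cwnd > capacity_segments:       # loss: congestion has set in
            cwnd = max(1, cwnd // 2)       # multiplicative decrease
        else:
            cwnd += 1                      # additive increase: probing upward
    return trace

print(aimd_trace(capacity_segments=10, rtts=25))
# [1, 2, ..., 11, 5, 6, ..., 11, 5, ...]  -- the familiar sawtooth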
c) Suppose that in TCP, the sender window is of size N, the base of the window is at sequence number x,
and that the sender has just sent a complete window’s worth of segments. Let RTT be the sender-to-
receiver-to-sender round trip time, and let MSS be the segment size.
i. (10%) Is it possible that there are ACK segments in the receiver-to-sender channel for segments
with sequence numbers lower than x? Justify your answer.
It is possible. Suppose that the window size is N=1. The sender sends packet x-1, which is
delayed, so the sender times out and retransmits x-1. There are now two copies of x-1 in the
network. The receiver receives the first copy of x-1 and ACKs it, then receives the second copy
of x-1 and ACKs it as well. The sender receives the first ACK and sets its window base to x. At
this point, there is still an ACK for x-1 propagating back to the sender.
ii. (5%) Assuming no loss, what is the throughput (in packets/sec) of the sender-to-receiver
connection?
Assume that N is measured in segments. The sender can thus send N segments, each of size
MSS bytes, every RTT seconds. The throughput is thus N*MSS/RTT.
iii. (5%) Suppose TCP is in its congestion avoidance phase. Assuming no loss, what will the window
size be after the N segments are ACKed?
N+1. In congestion avoidance the window grows by one segment per RTT, so once the full window of N segments has been ACKed the window is N+1.

Question 4: A Distributed Transaction Processing System (25%)


Consider a distributed transaction processing system of a client and a remote server. The client receives
transaction requests from local users. These transaction requests must be communicated to the server, which
will execute the transaction request and return its result. (You can think of a transaction as requesting
or updating an account balance from the server database, and the response as containing the resulting
balance.) The client and server communicate over a medium that can lose and delay messages; the maximum
delay in the medium is not known. The medium will not corrupt or reorder messages.
The client should receive requests from local users (via the event callbyuser(request)) and
return results to users (via the event returndatatouser(data)) in the order in which the requests
were generated. The server receives messages from the client via the messagefromclient(clientmsg) event,
executes a transaction via a call: result = execute(clientmsg) and sends messages to the client via the
messagetoclient(servermsg) event, where clientmsg and servermsg are messages (that you define) sent from
the client and server, respectively. The client receives messages from the server via the
messagefromserver(servermsg) event.

Give an FSM description of the client and server. Describe the format of the messages sent from client-to-
server and from server-to-client. Your protocol should be minimalist in the sense that it should not contain
any functionality that is not strictly needed to meet the above requirements.

Answer: The solution to this problem is essentially the stop-and-wait protocol. We need a timer because
messages can be lost and the maximum delay is not bounded. The client holds the timer and resends a
copy of the previous request on timeout; the server does not need a timer.
Since there may be premature timeouts at the client, there will be duplicate messages, and the client will
need to use the sequence number to make sure that it returns the data to the user only once.
A checksum is not needed since messages cannot be corrupted. The FSMs are shown on the following
page.
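
The FSM figures themselves are on a page not reproduced here, so the sketch below is only one way of coding up the stop-and-wait behaviour described above. The message formats (clientmsg = (seq, request), servermsg = (seq, result)), the helper names send_to_server / start_timer / stop_timer, and the stub bodies are illustrative assumptions; only the event names come from the question. Letting the server remember its last reply is an extra design choice so that a retransmitted request is answered without executing the transaction twice.

# Stop-and-wait sketch. Stand-ins for the unreliable medium, the timer, and the
# user/database callbacks named in the question (all placeholders):
def send_to_server(clientmsg):   print("-> server:", clientmsg)
def messagetoclient(servermsg):  print("-> client:", servermsg)
def returndatatouser(data):      print("to user:", data)
def execute(request):            return "result of " + str(request)
def start_timer():               pass
def stop_timer():                pass

class Client:
    def __init__(self):
        self.seq = 0              # 1-bit sequence number
        self.pending = []         # user requests not yet sent (kept in order)
        self.outstanding = None   # request currently awaiting its reply

    def callbyuser(self, request):            # event: request from a local user
        self.pending.append(request)
        self._send_next()

    def _send_next(self):
        if self.outstanding is None and self.pending:
            self.outstanding = self.pending.pop(0)
            send_to_server((self.seq, self.outstanding))   # clientmsg
            start_timer()

    def timeout(self):                        # resend a copy of the previous request
        send_to_server((self.seq, self.outstanding))
        start_timer()

    def messagefromserver(self, servermsg):   # event: reply from the server
        seq, data = servermsg
        if self.outstanding is not None and seq == self.seq:
            stop_timer()
            returndatatouser(data)            # delivered to the user exactly once
            self.seq ^= 1
            self.outstanding = None
            self._send_next()
        # otherwise: duplicate reply carrying the old seq -- ignore it

class Server:
    def __init__(self):
        self.expected = 0         # seq of the next new request
        self.last_reply = None    # cached servermsg, for retransmitted requests

    def messagefromclient(self, clientmsg):   # event: request from the client
        seq, request = clientmsg
        if seq == self.expected:              # new request: execute it exactly once
            self.last_reply = (seq, execute(request))
            self.expected ^= 1
        messagetoclient(self.last_reply)      # (re)send the reply; no timer needed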
