CS6223 Distributed Systems: Tutorial 2
Tutorial 2: Q1 Ans.
In the single-threaded case, a cache hit takes 15 msec and a
cache miss takes 90 msec. The weighted average is 2/3 ×
15 + 1/3 × 90 = 40 msec per request, so the server can handle
25 requests per second. For a multithreaded server, all the
waiting for the disk is overlapped, so every request effectively
takes 15 msec, and the server can handle 66 2/3 requests per
second.
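A minimal Python sketch to check the arithmetic (the timings and hit ratio are the ones given in the problem; everything else is illustrative):

```python
# Timings given in the problem statement.
hit_time_ms = 15      # request served from the cache
miss_time_ms = 90     # request that must wait for the disk
hit_ratio = 2 / 3     # two thirds of requests hit the cache

# Single-threaded server: requests are serialized, so throughput is
# limited by the weighted-average service time.
mean_ms = hit_ratio * hit_time_ms + (1 - hit_ratio) * miss_time_ms
print(1000 / mean_ms)       # 25.0 requests per second

# Multithreaded server: disk waits overlap with other requests, so
# only the 15 msec of non-disk work limits throughput.
print(1000 / hit_time_ms)   # ~66.67 requests per second
```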
Tutorial 2: Q2 Ans.
An important advantage is that separate processes are
protected against each other, which may prove necessary, as
in the case of a superserver handling completely independent
services. On the other hand, spawning a process is a relatively
costly operation that is avoided when using a multithreaded
server. Also, if the processes need to communicate, using
threads is much cheaper, since in many cases we can avoid
involving the kernel in the communication (note: threads in the
same process share an address space).
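A minimal sketch of the two designs, using Python's standard threading and multiprocessing modules (handle_request is a placeholder for servicing one request):

```python
import threading
import multiprocessing

def handle_request(req):
    print("handling", req)   # placeholder for servicing one request

if __name__ == "__main__":
    # Thread per request: cheap to create; all handlers share one
    # address space, so they can communicate through shared objects.
    t = threading.Thread(target=handle_request, args=("r1",))
    t.start(); t.join()

    # Process per request: costlier to spawn, but each handler runs in
    # its own address space, protected from the others.
    p = multiprocessing.Process(target=handle_request, args=("r2",))
    p.start(); p.join()
```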
Tutorial 2: Q3 Ans.
Consider a synchronous send primitive. A simple
implementation is to send a message to the server using
asynchronous communication, and subsequently let the caller
continuously poll for an incoming acknowledgement or
response from the server. If we assume that the local
operating system stores incoming messages into a local
buffer, then an alternative implementation is to block the
caller until it receives a signal from the operating system that
a message has arrived, after which the caller does an
asynchronous receive.
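Both variants in a minimal Python sketch, with a queue.Queue standing in for the operating system's local message buffer and async_send as a hypothetical placeholder for the underlying asynchronous primitive:

```python
import queue

incoming = queue.Queue()   # stands in for the OS buffer of arrived messages

def async_send(dest, msg):
    pass                   # placeholder: fire-and-forget transmission

# Variant 1: send asynchronously, then continuously poll for the reply.
def sync_send_polling(dest, msg):
    async_send(dest, msg)
    while True:
        try:
            return incoming.get_nowait()   # poll the local buffer
        except queue.Empty:
            pass                           # busy-wait until a reply arrives

# Variant 2: send asynchronously, block until the OS signals that a
# message has arrived, then do an (asynchronous) receive.
def sync_send_blocking(dest, msg):
    async_send(dest, msg)
    return incoming.get(block=True)        # caller sleeps until signaled
```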
Tutorial 2: Q4 Ans.
This situation is actually simpler. An asynchronous send is
implemented by having a caller append its message to a
buffer that is shared with a process that handles the actual
message transfer. Each time a client appends a message to
the buffer, it wakes up the send process, which subsequently
removes the message from the buffer and sends it to its
destination using a blocking call to the original send primitive.
The receiver is implemented similarly by offering a buffer that
can be checked for incoming messages by an application.
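A minimal sketch of this construction in Python, with a daemon thread playing the role of the separate sender process and blocking_send as a placeholder for the original primitive:

```python
import threading
import queue

outgoing = queue.Queue()   # buffer shared between callers and the sender

def blocking_send(dest, msg):
    pass                   # placeholder: the original synchronous primitive

def sender_loop():
    # The dedicated sender sleeps until a message is appended, then
    # removes it from the buffer and ships it with the blocking send.
    while True:
        dest, msg = outgoing.get()
        blocking_send(dest, msg)

threading.Thread(target=sender_loop, daemon=True).start()

def async_send(dest, msg):
    # The caller merely appends its message and continues immediately.
    outgoing.put((dest, msg))
```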
Tutorial 2: Q5 Ans.
Yes, but only on a hop-to-hop basis, in which a process
managing a queue passes a message to the next queue
manager by means of an RPC. Effectively, the service
offered by one queue manager to another is the storage of a
message. The calling queue manager is offered a proxy
implementation of the interface to the remote queue, possibly
receiving a status indicating the success or failure of each
operation. In this way, even the queue managers see only
queues and no further communication.
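A hypothetical sketch of the idea: each queue manager exposes a single store() operation, and the client side sees only a proxy with the same interface (all names and the transport are assumptions for illustration):

```python
class RemoteQueueProxy:
    # Client-side stub for the store() RPC of the next queue manager.
    def __init__(self, address):
        self.address = address

    def store(self, msg):
        # Marshal msg, invoke the remote store() operation, and return
        # the status reported by the remote manager (transport omitted).
        return "OK"

class QueueManager:
    def __init__(self, next_hop=None):
        self.messages = []
        self.next_hop = next_hop          # a RemoteQueueProxy, or None

    def store(self, msg):                 # the service offered to peers
        self.messages.append(msg)
        return "OK"

    def forward(self):
        # Pass the oldest message one hop along via the proxy; keep it
        # in the local queue if the RPC reports failure.
        if self.messages and self.next_hop:
            if self.next_hop.store(self.messages[0]) == "OK":
                self.messages.pop(0)
```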
Tutorial 2: Q6 Ans.
The problem is limited geographical scalability. Because
synchronous communication requires the caller to be
blocked until its message is received, it may take a long time
before the caller can continue when the receiver is far away.
The only way to solve this problem is to design the calling
application so that it has other useful work to do while
communication takes place, effectively establishing a form of
asynchronous communication.
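A minimal sketch of that design, using a Python thread pool to overlap local work with a long round trip (remote_call and the workload are illustrative stand-ins):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def remote_call(server, request):
    time.sleep(1.0)          # stands in for a long round trip to a far server
    return "reply"

def do_other_useful_work():
    pass                     # local computation that does not need the reply

with ThreadPoolExecutor() as pool:
    future = pool.submit(remote_call, "far-away-server", "request")
    do_other_useful_work()   # overlap useful work with the communication
    reply = future.result()  # block only when the reply is actually needed
```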
Tutorial 2: Q7 Ans.
In synchronous communication, a blocking send delays the
sending process until the message is received, which cannot
happen until the receiving process has executed a
corresponding receive operation.
The delay can be considerable when the communicating
processes run on different computers.
The advantage of a non-blocking send is that it avoids
delaying the sending process, allowing it to proceed with work
in parallel with the receiving process.
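A small timing sketch of the difference (the 0.1-second transfer time and all names are illustrative):

```python
import threading
import time

def deliver(msg):
    time.sleep(0.1)   # stands in for network transfer plus the remote receive

def blocking_send(msg):
    deliver(msg)      # the sender is delayed for the whole transfer

def nonblocking_send(msg):
    # The transfer proceeds in parallel; the sender returns at once.
    threading.Thread(target=deliver, args=(msg,)).start()

start = time.time()
blocking_send("m1")
print(f"blocking send delayed the sender {time.time() - start:.2f} s")

start = time.time()
nonblocking_send("m2")
print(f"non-blocking send returned after {time.time() - start:.4f} s")
```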