Distributed System Answer Key
Question 18 - Distributed Mutual Exclusion Algorithm: a. Ricart-Agrawala algorithm, b. Token ring algorithm
Ricart-Agrawala algorithm
Token ring algorithm
Question 21 - Replica placement – 3 types of replicas: a. Client driven, b. Server driven, c. Permanent
Primary-based remote-write protocol with a fixed server to which all read and write operations are forwarded
Primary-backup protocol in which the primary migrates to the process wanting to perform an update
22. Primary-based consistency protocols
23. Fault tolerance – Different types of failures
24. Design issues – Failure masking (Process Resilience)
25. Five different classes of failure in an RPC system - Solution along with the listing
a. Scalable reliable multicasting
b. Hierarchical vs. nonhierarchical scalable multicasting
Fault_Tolerance.pdf, slides 26-34
26. Explain virtual synchrony
The logical organization of a distributed system to distinguish between message receipt and message delivery
27. General CORBA architecture and CORBA services
28. Messaging – Interoperability
29. DCOM – its client-server architecture
QP-Sep2010. With the supporting diagram, explain DCOM in detail
30. Globe Object Model – Architecture and Services
Question 31 - NFS – Architecture (basic) and the file system operations supported, OR: Explain the basic NFS architecture for Unix systems and list any eight file system operations supported by NFS
Question 32 - Naming scheme in NFS with different types of mounting
33. Caching and replication scheme in NFS
34. Organization of the CODA file system – Fault tolerance and security
QP-Sep2010 - With reference to the CODA file system, explain communication, processes and server replication
35. DSM systems – Different algorithms, granularity and page replacement
36. List and explain load distribution algorithms in distributed systems and their 4 components
37. Sender/receiver-initiated distributed algorithms
38. Adaptive load distribution algorithms
Questions 39 to 42 and their answers follow DS-Secu.pdf
43. Explain advantages of distributed systems
QP-Mar2010 - Explain the architectural model of a distributed system
44. Access protocol – Security for bus and ring topology – CDMA
45. Message passing models used to develop communication primitives
QP-Sep2010. Write short notes on memory coherence
QP-Sep2010. Explain in detail the block cipher DES
Appendix
Questions from old question papers (Test 1)
Introduction
RPC
DNS
Model
RMI
Mobile
Communication
44. Access protocol – Security for bus and ring topology – CDMA
45. Message passing models used to develop communication primitives
46. Compatibility/Resource management for specific issues
Exam Questions
QP-Sep2010. Briefly explain reliable client-server communication
QP-Sep2010. Write short notes on memory coherence
QP-Sep2010. Explain in detail the block cipher DES
Solutions
Question 1 - Goals of distributed system. Define distributed system.
Question 2 - Different forms of transparency
Question 3 - Multiprocessor and Multicomputer
Flynn’s Classification of multiple-processor machines:
{SI, MI} x {SD, MD} = {SISD, SIMD, MISD, MIMD}
1. Retry request message: retransmit the request message until either a reply is received or the server is assumed to have failed
2. Duplicate filtering: filter out duplicate requests at the server when retransmissions are used
A single mechanism for communication: procedure calls (but with doors, it is not transparent)
Question 9 - RMI – static Vs Dynamic
Reference : Page 3 of http://www.cs.gmu.edu/~setia/cs707/slides/rmi-imp.pdf
This question is related to ‘Implementation of Name Resolution’. Let us take an example of ftp://ftp.cs.vu.nl/pub/globe/index.txt.
There are now two ways to implement name
resolution.
In iterative name resolution, a name resolver hands over the complete name to the root name server. It is assumed that the
address where the root server can be contacted, is well known. The root server will resolve the path name as far as it can, and
return the result to the client. In our example, the root server can resolve only the label nl, for which it will return the address of
the associated name server.
Recursive Name Resolution
An alternative to iterative name resolution is to use recursion during name resolution. Instead of returning each intermediate
result back to the client’s name resolver, with recursive name resolution, a name server passes the result to the next name server
it finds. So, for example, when the root name server finds the address of the name server implementing the node named nl, it
requests that name server to resolve the path name nl:<vu, cs, ftp, pub, globe, index.txt>. Using recursive name resolution as
well, this next server will resolve the complete path and eventually return the file index.txt to the root server, which, in turn, will
pass that file to the client’s name resolver.
Comparison
1. The main drawback of recursive name resolution is that it puts a higher performance demand on each name server
2. There are two important advantages to recursive name resolution:
a. Caching results is more effective compared to iterative name resolution
b. Communication costs may be reduced
3. With iterative name resolution, caching is necessarily restricted to the client’s name resolver.
Reference: http://www.cs.vu.nl/~ast/books/ds1/04.pdf
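The two resolution styles above can be contrasted with a small sketch. The server table, the labels, and the returned "address" string below are illustrative toy data, not real DNS; the point is only who walks the hierarchy (the client in the iterative case, the servers themselves in the recursive case):

```python
# Toy name-server tree: each "server" maps one label either to the next
# server in the hierarchy or to a final result (both names are invented).
SERVERS = {
    "root": {"nl": "nl-server"},
    "nl-server": {"vu": "vu-server"},
    "vu-server": {"cs": "cs-server"},
    "cs-server": {"ftp": "address-of-ftp.cs.vu.nl"},
}

def resolve_iterative(path):
    """The client contacts each server in turn; every server answers with a referral."""
    server, steps = "root", 0
    for label in path:
        server = SERVERS[server][label]  # referral (or final result) back to client
        steps += 1                       # one client<->server exchange per label
        if server not in SERVERS:        # final result reached
            return server, steps
    return server, steps

def resolve_recursive(path, server="root", steps=0):
    """Each server forwards the remaining path to the next server itself."""
    label, rest = path[0], path[1:]
    nxt = SERVERS[server][label]
    if nxt not in SERVERS or not rest:
        return nxt, steps + 1
    return resolve_recursive(rest, nxt, steps + 1)
```

Both styles reach the same result; they differ in where the intermediate traffic flows, which is exactly the caching/communication trade-off listed in the comparison.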
DNS -
1. Distributed Synchronization
a. Communication between processes in a distributed system can have unpredictable delays
b. No common clock or other precise global time source exists for distributed algorithms
c. Requirement: we have to establish causality, i.e., each observer must see event 1 before event 2
2. Why do we need to synchronize clocks?
Purpose
If you need a uniform time (without a UTC-receiver per computer), but you cannot establish a central time-server:
1. Peers elect a master
2. The master polls all nodes, asking each to report its clock time
3. The master estimates the local times of all nodes, taking the message transfer times into account
4. The master uses the estimated local times to build the arithmetic mean
a. Add fault tolerance
Averaging algorithm
a) The time daemon asks all the other machines for their clock values.
b) The machines answer.
c) The Time daemon tells everyone how to adjust their clock
Ref:http://www.cis.upenn.edu/~lee/07cis505/Lec/lec-ch6-synch1-PhysicalClock-v2.pdf
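The polling-and-averaging steps above (essentially Berkeley-style time synchronization) can be sketched as follows. The numeric times and the uniform `transfer_delay` compensation are illustrative assumptions of the sketch:

```python
# Sketch of the averaging step: the master polls every node, compensates
# for the message transfer delay, averages, and tells each participant
# (including itself) how to adjust its clock.

def berkeley_adjustments(master_time, node_times, transfer_delay=0.0):
    """Return the clock adjustment each participant should apply."""
    # Estimate each node's current local time, offsetting the message delay.
    estimates = [t + transfer_delay for t in node_times]
    mean = (master_time + sum(estimates)) / (len(estimates) + 1)
    adjustments = {"master": mean - master_time}
    for i, est in enumerate(estimates):
        adjustments[f"node{i}"] = mean - est
    return adjustments
```

For example, with the master at 180 minutes and two nodes at 205 and 170, the mean is 185, so the master advances by 5, node0 goes back 20, and node1 advances 15.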
Lamport’s Algorithm
Each process Pi maintains a local counter Ci and adjusts this counter according to the following rules:
1. For any two successive events that take place within Pi, Ci is incremented by 1.
2. Each time a message m is sent by process Pi, the message receives a timestamp Tm = Ci.
3. Whenever a message m is received by a process Pj, Pj adjusts its local counter Cj to max(Cj, Tm) + 1.
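The three rules can be sketched as a minimal clock class with a hypothetical two-process exchange; the sketch is not tied to any particular messaging system:

```python
# Minimal Lamport clock following the three rules above.

class LamportClock:
    def __init__(self):
        self.c = 0

    def local_event(self):
        # Rule 1: increment the counter for each local event.
        self.c += 1
        return self.c

    def send(self):
        # Rule 2: sending is an event; the message carries Tm = Ci.
        self.c += 1
        return self.c

    def receive(self, tm):
        # Rule 3: Cj = max(Cj, Tm) + 1.
        self.c = max(self.c, tm) + 1
        return self.c

# Two-process demo: p sends a message to q.
p, q = LamportClock(), LamportClock()
tm = p.send()            # p's clock becomes 1; message carries Tm = 1
q_after = q.receive(tm)  # q's clock becomes max(0, 1) + 1 = 2
```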
Fidge’s Algorithm
1. Each process maintains a vector of logical clock values, one entry per process, all initially zero.
2. The local clock value is incremented at least once before each primitive event in a process.
3. The current value of the entire logical clock vector is delivered to the receiver for every outgoing message.
4. Upon receiving a message, the receiver sets the value of each entry in its local timestamp vector to the maximum of the two corresponding values in the local vector and in the remote vector received. The element corresponding to the sender is a special case; it is set to one greater than the value received, but only if the local value is not greater than that received.
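As a sketch, here is a common vector-clock formulation close to the rules above: entrywise maximum on receipt, followed by an increment of the receiver's own entry. Folding Fidge's sender-entry special case into that plain increment is a simplifying assumption of this sketch:

```python
# Sketch of a vector clock for n processes; pid is this process's index.

class VectorClock:
    def __init__(self, pid, n):
        self.pid = pid
        self.v = [0] * n          # one entry per process, initially zero

    def tick(self):
        # Increment the process's own entry before each local event.
        self.v[self.pid] += 1

    def send(self):
        # The whole vector travels with every outgoing message.
        self.tick()
        return list(self.v)

    def receive(self, remote):
        # Entrywise maximum of local and remote, then count the receive event.
        self.v = [max(a, b) for a, b in zip(self.v, remote)]
        self.tick()

# Two-process demo: p0 sends a message to p1.
p0, p1 = VectorClock(0, 2), VectorClock(1, 2)
m = p0.send()      # p0's vector becomes [1, 0]
p1.receive(m)      # p1's vector becomes [1, 1]
```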
Example
Assign the Fidge’s logical clock values for all the events in the below timing diagram. Assume that each process’s logical clock is
set to 0 initially.
Question 16 - Lamport logical clock [problems with this approach]
Assign the Lamport’s logical clock values for all the events in the below timing diagram. Assume that each process’s logical clock
is set to 0 initially.
Solution
Question 17. Election of a coordinator - a. Bully algorithm and b. Ring algorithm
Bully algorithm
The bully algorithm is a method in distributed computing for dynamically selecting a coordinator by process
ID number.
When a process P determines that the current coordinator is down because of message timeouts or failure of
the coordinator to initiate a handshake, it performs the following sequence of actions:
1. P broadcasts an election message (inquiry) to all other processes with higher process IDs.
2. If P hears from no process with a higher process ID than it, it wins the election and broadcasts
victory.
3. If P hears from a process with a higher ID, P waits a certain amount of time for that process to
broadcast itself as the leader. If it does not receive this message in time, it re-broadcasts the
election message.
Note that if P receives a victory message from a process with a lower ID number, it immediately initiates a
new election. This is how the algorithm gets its name - a process with a higher ID number will bully a lower ID
process out of the coordinator position as soon as it comes online.
http://www.scribd.com/doc/6919757/BULLY-ALGORITHM
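One round of the election described above can be sketched as follows; processes are plain integer IDs, and the `alive` set stands in for "responds before the timeout" (both illustrative simplifications):

```python
# Sketch of a bully election round: P asks everyone with a higher ID;
# if nobody higher answers, P wins, otherwise the lowest live higher
# process takes over and runs its own election.

def bully_election(initiator, processes, alive):
    """Return the ID that ends up as coordinator when `initiator` starts."""
    higher = [p for p in processes if p > initiator and p in alive]
    if not higher:
        return initiator          # no higher process answered: initiator wins
    return bully_election(min(higher), processes, alive)
```

With processes 1..5 and process 5 crashed, an election started by process 2 propagates up to process 4, the highest live ID, which wins.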
Ring algorithm
http://www2.cs.uregina.ca/~hamilton/courses/330/notes/distributed/distributed.html
☞ We assume that the processes are arranged in a logical ring; each process knows the address of one other process, which is
its neighbour in the clockwise direction.
☞ The algorithm elects a single coordinator, which is the process with the highest identifier.
☞ Election is started by a process which has noticed that the current coordinator has failed. The process places its identifier in an
election message that is passed to the following process.
☞ When a process receives an election message it compares the identifier in the message with its own. If the arrived identifier is
greater, it forwards the received election message to its neighbour; if the arrived identifier is smaller it substitutes its own identifier
in the election message before forwarding it.
☞ If the received identifier is that of the receiver itself ⇒ this will be the coordinator. The new coordinator sends an elected
message through the ring.
Example:
Suppose that we have four processes arranged in a ring: P1 → P2 → P3 → P4 → P1 …
P4 is coordinator
Suppose P1 + P4 crash
Suppose P2 detects that coordinator P4 is not responding
P2 sets active list to [ ]
P2 sends “Elect(2)” message to P3; P2 sets active list to [2]
P3 receives “Elect(2)”
This message is the first message seen, so P3 sets its active list to [2,3]
P3 sends “Elect(3)” towards P4 and then sends “Elect(2)” towards P4
The messages pass P4 + P1 and then reach P2
P2 adds 3 to active list [2,3]
P2 forwards “Elect(3)” to P3
P2 receives the “Elect(2) message
P2 chooses P3 as the highest process in its list [2, 3] and sends an “Elected(P3)” message
P3 receives the “Elect(3)” message
P3 chooses P3 as the highest process in its list [2, 3] + sends an “Elected(P3)” message
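The circulating-identifier rule traced above can be sketched as follows. Crashed sites are simply skipped, which abstracts away the per-message forwarding (an illustrative simplification):

```python
# Sketch of a ring election: the election message goes around the live
# ring, each site keeps the larger of the arriving identifier and its
# own, and the message stops when it returns to the highest identifier.

def ring_election(start, ring, alive):
    """ring: IDs in clockwise order; returns the elected coordinator."""
    live = [p for p in ring if p in alive]
    i = live.index(start)
    best = start
    for k in range(1, len(live) + 1):
        p = live[(i + k) % len(live)]
        if p == best:             # message came back to the highest ID
            return best
        best = max(best, p)
    return best
```

Replaying the trace above: with P1 and P4 crashed and P2 starting, the surviving highest identifier is P3, which becomes coordinator.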
1. Non-token-based: each process freely and equally competes for the right to use the
shared resource; requests are arbitrated by a central control site or by distributed agreement.
2. Token-based: a logical token representing the access right to the shared resource is passed
in a regulated fashion among the processes; whoever holds the token is allowed to enter the
critical section.
Ricart-Agrawala algorithm
The Ricart-Agrawala Algorithm is an algorithm for mutual exclusion on a distributed system.
Terminology
● A site is any computing device which is running the Ricart-Agrawala Algorithm
● The requesting site is the site which is requesting entry into the critical section.
● The receiving site is every other site which is receiving the request from the requesting site.
Algorithm
Requesting Site:
● Sends a message to all sites. This message includes the site's name, and the current timestamp of the
system according to its logical clock (which is assumed to be synchronized with the other sites)
Receiving Site:
● Upon reception of a request message, immediately send a timestamped reply message if and only if:
● the receiving process is not currently interested in the critical section OR
● the receiving process has a lower priority (usually this means having a later timestamp)
● Otherwise, the receiving process will defer the reply message. This means that a reply will be sent only
after the receiving process has finished using the critical section itself.
Critical Section:
● Requesting site enters its critical section only after receiving all reply messages.
● Upon exiting the critical section, the site sends all deferred reply messages.
Problems
The algorithm is expensive in terms of message traffic; it requires 2(n-1) messages for entering a CS: (n-1) requests and (n-1)
replies.
The failure of any process involved makes progress impossible if no special recovery measures are taken.
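The receiving site's defer-or-reply rule and the 2(n-1) message cost can be sketched as below; representing priorities as (timestamp, site_id) pairs so that ties break deterministically is an assumption of this sketch:

```python
# Sketch of the Ricart-Agrawala receiving-site rule: reply immediately
# iff the receiver is not interested in the CS, or has lower priority
# (a later, i.e. larger, timestamp); otherwise the reply is deferred.

def should_reply(receiver_requesting, receiver_ts, incoming_ts):
    """True = send the timestamped reply now; False = defer it."""
    if not receiver_requesting:
        return True
    return receiver_ts > incoming_ts   # smaller timestamp = higher priority

def messages_per_entry(n):
    """(n-1) requests plus (n-1) replies for one critical-section entry."""
    return 2 * (n - 1)
```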
☞ The logical ring topology is unrelated to the physical interconnections between the computers.
The algorithm
☞ It can take from 1 to n-1 messages to obtain a token. Messages are sent around the ring even when no process requires the
token ⇒ additional load on the network.
The algorithm works well in heavily loaded situations, when there is a high probability that the process which gets the token wants
to enter the CS. It works poorly in lightly loaded cases.
☞ If a process fails, no progress can be made until a reconfiguration is applied to extract the process from the ring.
☞ If the process holding the token fails, a unique process has to be picked to regenerate the token and pass it along the ring; this requires an election algorithm.
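The 1-to-(n-1) message cost for obtaining the token can be illustrated with a small helper; numbering the sites 0..n-1 around the ring, and counting zero hops when the requester already holds the token, are assumptions of this sketch:

```python
# Sketch: on a logical ring of n sites, the token travels clockwise from
# its current holder to the requesting site.

def hops_to_token(holder, requester, n):
    """Messages needed for the token to reach `requester` from `holder`."""
    return (requester - holder) % n
```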
Strong consistency models: Operations on shared data are synchronized (models not using synchronization operations):
Weak consistency models: Synchronization occurs only when shared data is locked and unlocked (models with synchronization
operations):
1. General weak consistency
2. Release consistency
3. Entry consistency
Observation: The weaker the consistency model, the easier it is to build a scalable solution
The result of any execution is the same as if the (read and write) operations by all processes on the data store were
executed in some sequential order, and the operations of each individual process appear in this sequence in the order
specified by its program.
Observations
1. When processes run concurrently on possibly different machines, any valid interleaving of read and write operations is
acceptable behavior
2. All processes see the same interleaving of executions.
3. Nothing is said about time
4. A process “sees” writes from all processes but only its own reads
Goal: Show how we can perhaps avoid system-wide consistency, by concentrating on what specific clients want, instead of what
should be maintained by servers.
Most large-scale distributed systems (e.g., databases) apply replication for scalability, but can support only weak consistency.
Example
1. DNS: Updates are propagated slowly, and inserts may not be immediately visible.
2. News: Articles and reactions are pushed and pulled throughout the Internet, such that reactions can be seen before the postings they respond to.
3. Lotus Notes: Geographically dispersed servers replicate documents, but make no attempt to keep (concurrent) updates mutually consistent.
4. WWW: Caches all over the place, but there need be no guarantee that you are reading the most recent version of a page.
Important
● Client-centric consistency provides guarantees for a single client concerning the consistency of access to a data store by
that client
● No guarantees are given concerning concurrent accesses by different clients
Monotonic-Read Consistency
Example 1: Automatically reading your personal calendar updates from different servers. Monotonic Reads guarantees that the
user sees all updates, no matter from which server the automatic reading takes place.
Example 2: Reading (not modifying) incoming mail while you are on the move. Each time you connect to a different e-mail
server, that server fetches (at least) all the updates from the server you previously visited.
Monotonic-Write Consistency
Example 1: Updating a program at server S2, and ensuring that all components on which compilation and linking depend are also placed at S2.
Example 2: Maintaining versions of replicated files in the correct order everywhere (propagate the previous version to the server where the newest version is installed).
Update Propagation
•Important design issues in update propagation:
1. Propagate only a notification/invalidation of the update (often used for caches)
2. Transfer the modified data from one copy to another
3. Propagate the update operation to other copies (also called active replication)
Observation: No single approach is the best, but depends highly on available bandwidth and read-to-write ratio at replicas.
•Pushing updates: a server-initiated approach, in which the update is propagated regardless of whether the target asked for it.
•Pulling updates: a client-initiated approach, in which the client requests to be updated.
Observation: We can dynamically switch between pulling and pushing using leases: a contract in which the server promises to push updates to the client until the lease expires.
Issue: Make the lease expiration time dependent on the system’s behavior (adaptive leases):
•Age-based leases: an object that hasn’t changed for a long time will not change in the near future, so provide a long-lasting lease.
•Renewal-frequency-based leases: the more often a client requests a specific object, the longer the expiration time for that client (for that object) will be.
•State-based leases: the more loaded a server is, the shorter the expiration times become.
22. Primary-based consistency protocols
•A message sent to the group is delivered to all of the “copies” of the process (the group members), and then only one of them performs the required service.
•If one of the processes fails, it is assumed that one of the others will still be able to function (and service any pending request or operation).
Flat Groups versus Hierarchical Groups
(a) Communication in a flat group. (b) Communication in a simple hierarchical group.
Failure Masking and Replication
•By organizing a fault-tolerant group of processes, we can protect a single vulnerable process.
Primary-backup protocols: a primary coordinates all write operations. If it fails, the others hold an election to replace the primary.
Replicated-write protocols: active replication as well as quorum-based protocols. Corresponds to a flat group.
A system is said to be k fault tolerant if it can survive faults in k components and still meet its specifications.
For fail-silent components, k+1 are enough to be k fault tolerant.
For Byzantine failures, at least 2k+1 components are needed to achieve k fault tolerance.
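The replication counts above, written as a tiny helper:

```python
# k+1 replicas tolerate k fail-silent faults (any survivor is correct);
# 2k+1 are needed for k Byzantine faults (correct replicas must outvote
# the faulty ones).

def replicas_needed(k, byzantine=False):
    """Minimum group size to be k fault tolerant."""
    return 2 * k + 1 if byzantine else k + 1
```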
Requires atomic multicasting: all requests arrive at all servers in same order.
•An appropriate exception-handling mechanism can deal with a missing server. However, such technologies tend to be very language-specific, and they also tend to be non-transparent (which is a big DS “no-no”).
•Lost request messages can be dealt with easily using timeouts. If no ACK arrives in time, the message is resent. Of course, the server needs to be able to deal with the possibility of duplicate requests.
•Server crashes are dealt with by implementing one of three possible implementation philosophies:
1. At least once semantics: a guarantee is given that the RPC occurred at least once, but possibly more than once.
2. At most once semantics: a guarantee is given that the RPC occurred at most once, but possibly not at all.
3. No semantics: nothing is guaranteed; client and servers take their chances!
•It has proved difficult to provide exactly once semantics.
Server Crashes (1)
•Remote operation: print some text and (when done) send a completion message.
•Why was there no reply? Is the server dead, slow, or did the reply just go missing?
•A request that can be repeated any number of times without any nasty side effects is said to be idempotent. (For example, a read of a static web page is idempotent.)
•Nonidempotent requests (for example, the electronic transfer of funds) are a little harder to deal with. A common solution is to employ unique sequence numbers. Another technique is the inclusion of additional bits in a retransmission to identify it as such to the server.
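The unique-sequence-number technique for nonidempotent requests can be sketched as follows; the fund-transfer operation, field names, and replay-the-old-result behavior are illustrative choices of this sketch:

```python
# Sketch of duplicate filtering at the server: each request carries a
# (client_id, seq) pair; a retransmission of an already-executed request
# is recognized and NOT re-applied.

class DedupServer:
    def __init__(self):
        self.seen = set()       # (client_id, seq) pairs already executed
        self.balance = 0

    def transfer(self, client_id, seq, amount):
        """Apply a nonidempotent transfer at most once per (client, seq)."""
        if (client_id, seq) in self.seen:
            return self.balance          # duplicate: filtered, not re-applied
        self.seen.add((client_id, seq))
        self.balance += amount
        return self.balance

srv = DedupServer()
first  = srv.transfer("c1", 1, 100)   # applied: balance 100
replay = srv.transfer("c1", 1, 100)   # retransmission filtered: still 100
second = srv.transfer("c1", 2, 50)    # new sequence number: balance 150
```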
Several receivers have scheduled a request for retransmission, but the first retransmission request leads to the suppression of the others.
1. Unordered multicasts
2. FIFO-ordered multicasts
3. Causally-ordered multicasts
4. Totally-ordered multicasts
Three communicating processes in the same group. The ordering of events per process is shown along the vertical axis.
Four processes in the same group with two different senders, and a possible delivery order of messages under FIFO-ordered multicasting
(b) Process 6 sends out all its unstable messages, followed by a flush message
(c) Process 6 installs the new view when it has received a flush message from everyone else
27. General CORBA architecture and CORBA services
Interoperability
DCOM Services
CORBA Service     DCOM/COM+ Service                 Windows 2000 Service
Collection        ActiveX Data Objects              -
Query             None                              -
Concurrency       Thread concurrency                -
Transaction       COM+ Automatic Transactions       Distributed Transaction Coordinator
Event             COM+ Events                       -
Notification      COM+ Events                       -
Externalization   Marshaling utilities              -
Life cycle        Class factories, JIT activation   -
Licensing         Special class factories           -
Naming            Monikers                          Active Directory
Property          None                              Active Directory
Trading           None                              Active Directory
Persistence       Structured storage                Database access
Relationship      None                              Database access
Security          Authorization                     SSL, Kerberos
Time              None                              None
NFS: Client Caching
• Potential for inconsistent versions at different clients.
• Solution approach:
– Whenever file cached, timestamp of last modification on server is cached as
well.
– Validation: Client requests latest timestamp from server (getattributes), and
compares against local timestamp. If fails, all blocks are invalidated.
• Validation check:
– at file open
– whenever server contacted to get new block
– after timeout (3s for file blocks, 30s for directories)
• Writes:
– block marked dirty and scheduled for flushing.
– flushing: when file is closed, or a sync occurs at client.
• Time lag for change to propagate from one client to other:
– delay between write and flush
– time to next cache validation
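The validation rule above can be sketched as a single predicate; the parameter names and the 3-second default are modeled on the description here, not on actual NFS client code:

```python
# Sketch of NFS-style client cache validation: within the timeout the
# cached copy is trusted; afterwards the client fetches the server's
# last-modification timestamp (getattributes) and invalidates on mismatch.

def cache_valid(cached_mtime, server_mtime, last_check, now, timeout=3.0):
    """True if the cached blocks may still be used without refetching."""
    if now - last_check < timeout:
        return True                      # within timeout: skip the check
    return cached_mtime == server_mtime  # else validate against the server
```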
Replica Servers
• NFS version 4 supports replication
• Entire file systems must be replicated
• FS_LOCATION attribute for each file
• Replicated servers: implementation specific
CODA: Communication
• Interprocess communication using RPC2
(http://www.coda.cs.cmu.edu/doc/html/rpc2_manual.html)
• RPC2 provides reliable RPC over UDP.
• Support for Side Effects
– RPC connections may be associated with Side-Effects to allow
application-specific network optimizations to be performed. An
example is the use of a specialized protocol for bulk transfer of large
files. Detailed information pertinent to each type of side effect is
specified in a Side Effect Descriptor.
– Adding support for a new type of side effect is analogous to adding a
new device driver in Unix. To allow this extensibility, the RPC code
has hooks at various points where side-effect routines will be called.
Global tables contain pointers to these side effect routines. The basic
RPC code itself knows nothing about these side-effect routines.
• Support for MultiRPC (enables for parallel calls, e.g.
invalidations)
Coda: Processes
• Clear distinction between client and server processes
• Venus processes represent clients.
• Vice processes represent servers.
• All processes realized as collection of user-level threads.
• Additional low-level thread handles I/O operations (why?)
(Replication incomplete)
Issues:
Keeping track of remote data locations
Overcoming/reducing communication delays and protocol overheads when accessing
remote data.
Making shared data concurrently available to improve performance.
Types of algorithms:
Central-server
Data migration
Read-replication
Full-replication
Migration Algorithm
If map table points to a remote page, migrate the page before mapping it to the requesting process’s
address space.
–System can be made fault tolerant through replication of data and services
–Data can be files and directories and services can be the processes that provide functionality
•Modular expandability
–New hardware and software can be easily added without replacing the existing system
Minicomputer Model
•DS consists of several minicomputers
–e.g. VAX processors
•Each machine supports multiple users and share resources
•Ratio of no. of processors to no. of users is usually less than one
Workstation Model
•Consists of several workstations ( up to several thousands)
•Each user has a workstation at his disposal, which consist of powerful processor, memory and display
•With the help of DFS, users can access data regardless of its location
•Ratio of no. of processors to no. of users is usually 1
•e.g. Athena and Andrew
B) Issues in DS
Global Knowledge
Naming
Scalability
Compatibility
Process Synchronization
Resource Management
Security
Structuring
Client-Server Model
Global Knowledge
Lack of global shared memory, global clock, unpredictable message delays
Lead to unpredictable global state, difficult to order events (A sends to B, C sends to D:
may be related)
Naming
Need for a name service: to identify objects (files, databases), users, services (RPCs).
Replicated directories? Updates may be a problem.
Need for name to (IP) address resolution.
Distributed directory: algorithms for update, search, ...
Scalability
System requirements should (ideally) increase linearly with the number of computer
systems
Includes: overheads for message exchange in algorithms used for file system updates,
directory management...
Compatibility
Binary level: Processor instruction level compatibility
Execution level: same source code can be compiled and executed
Protocol level: Mechanisms for exchanging messages, information (e.g., directories)
understandable.
Process Synchronization
Distributed shared memory: difficult.
Resource Management
Data/object management: Handling migration of files, memory values. To achieve a
transparent view of the distributed system.
Main issues: consistency, minimization of delays, ..
Security
Authentication and authorization
Structuring
Monolithic kernel: not everything is needed everywhere; e.g., full file management is not needed on diskless workstations.
Collective kernel: distributed functionality on all systems.
Micro kernel + set of OS processes
Micro kernel: functionality for task, memory, processor management. Runs on all
systems.
OS processes: set of tools. Executed as needed.
Object-oriented system: services as objects.
Object types: process, directory, file, …
Operations on the objects: encapsulated data can be manipulated
•Depending on the network topology, a link may connect more than two sites in the computer network.
•It is possible that several sites will want to transmit information over a link simultaneously.
•This difficulty occurs mainly in a ring or multiaccess bus network. In this case, the transmitted information may become scrambled and must be discarded.
•Several techniques have been developed to avoid repeated collisions, including collision detection, token
passing, and message slots.
–CSMA/CD:
•Before transmitting a message over a link, a site must listen to determine whether another message is currently being transmitted over that link; this technique is called carrier sense with multiple access (CSMA). If the link is free, the site can start transmitting.
•Otherwise it must wait (and continue to listen) until the link is free. If two or more sites begin transmitting at exactly the same time (each thinking that no other site is using the link), then they will register a collision detection (CD) and will stop transmitting.
Token Passing:
–A unique message type, known as a token, continuously circulates in the system (usually a ring
structure)
–A site that wants to transmit information must wait until the token arrives. It removes the token from the
ring and begins to transmit its messages
–When the site completes its round of message passing, it transmits the token
–This action, in turn, allows another site to receive and remove the token, and to start its message
transmission. If the token gets lost, the system must detect the loss and generate a new token
–They usually do that by declaring an election, to elect a unique site where a new token will be generated.
A token-passing scheme has been adopted by the IBM and HP/Apollo systems. The benefit of a token-
passing network is that performance is constant.
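The wait-for-token, transmit, pass-on cycle can be sketched as follows. This is a simplified model under stated assumptions: the holder drains its whole queue before releasing the token (real token rings bound the hold time), and loss detection/election is omitted.

```python
from collections import deque

def token_ring(sites, frames):
    """Toy token ring: the token visits sites in ring order; the current
    holder transmits its queued frames, then passes the token to its
    successor.  `frames` maps a site name to its list of messages."""
    queues = {s: deque(frames.get(s, [])) for s in sites}
    log = []                                    # (sender, message) in wire order
    holder = 0                                  # index of the site holding the token
    while any(queues.values()):
        site = sites[holder]
        while queues[site]:                     # transmit while holding the token
            log.append((site, queues[site].popleft()))
        holder = (holder + 1) % len(sites)      # release the token to the successor
    return log
```

Because only the token holder may transmit, collisions are impossible and the transmission order is deterministic, which is why token-passing performance is constant under load.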
•Message Slots:
–A number of fixed-length message slots continuously circulate in the system (usually a ring structure)
–Each slot can hold a fixed-sized message and control information (such as what the source and destination are, and
whether the slot is empty or full)
–A site that is ready to transmit must wait until an empty slot arrives. It then inserts its message into the slot, setting
the appropriate control information
–The slot with its message then continues in the network. When it arrives at a site, that site inspects the control
information to determine whether the slot contains a message for this site. If not, that site re-circulates the slot and
message. Otherwise, it removes the message, resetting the control information to indicate that the slot is empty.
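A minimal sketch of the slotted-ring behaviour, under illustrative assumptions (two slots, a fixed number of laps, and one visit per site per lap stand in for the continuous circulation):

```python
def slotted_ring(sites, frames, nslots=2, max_laps=10):
    """Toy slotted ring: fixed slots circulate past each site in turn.
    A ready sender claims the first empty slot that passes it; the
    destination removes the message and marks the slot empty again.
    `frames` is a list of (src, dst, payload) tuples."""
    slots = [None] * nslots                 # None means the slot is empty
    pending = {s: [f for f in frames if f[0] == s] for s in sites}
    delivered = []
    for _ in range(max_laps):
        for site in sites:                  # each lap, slots pass every site once
            for i, slot in enumerate(slots):
                if slot is not None and slot[1] == site:
                    delivered.append(slot)  # message addressed to us: remove it
                    slots[i] = None         # reset control info: slot is empty
                elif slot is None and pending[site]:
                    slots[i] = pending[site].pop(0)  # claim the empty slot
        if not any(pending.values()) and all(s is None for s in slots):
            break
    return delivered
```

Note that a message may ride the ring for part of a lap before reaching its destination, which is where the "re-circulates the slot" step in the description comes in.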
•There are two widely accepted models for developing distributed operating systems:
–Message Passing
–Remote procedure call.
•In the unbuffered option, data is copied from one user buffer to another user buffer directly.
•With blocking primitives, the send primitive does not return control to the user program until the
message has been sent (an unreliable blocking primitive) or until an acknowledgment has been received
(a reliable blocking primitive).
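The reliable blocking variant can be sketched with two in-process queues standing in for the network (the channel objects, function names, and timeout value are assumptions made for this example):

```python
import queue
import threading

def reliable_blocking_send(msg, channel, ack_channel, timeout=1.0):
    """Sketch of a reliable blocking send: the call does not return to the
    caller until an acknowledgement arrives (or the wait times out)."""
    channel.put(msg)                             # hand the message to the 'network'
    try:
        ack = ack_channel.get(timeout=timeout)   # block until the ack is received
    except queue.Empty:
        raise TimeoutError(f"no ack for {msg!r}")
    return ack

def receiver(channel, ack_channel):
    """Receiver side: consume one message and acknowledge it."""
    msg = channel.get()
    ack_channel.put(("ack", msg))
```

An unreliable blocking send would simply return after `channel.put(msg)`; the difference is only in how long the sender remains blocked.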
–Data representation.
•Any parameters are passed to the remote machine where the procedure will execute.
•On completion, the results are passed back from the server to the client, which resumes execution as if it
had called a local procedure.
RPC System Components
•Message module
–IPC module of Send/Receive/Reply
–responsible for packing and unpacking of arguments and results (this is also referred to as “marshaling”)
–these procedures are automatically generated by “stub generators” or “protocol compilers” (more later)
•Client stub
–packs the arguments with the procedure name or ID into a message
–sends the msg to the server and then awaits a reply msg
–unpacks the results and returns them to the client
•Server stub
–receives a request msg
–unpacks the arguments and calls the appropriate server procedure
–when it returns, packs the result and sends a reply msg back to the client
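The stub pair above can be sketched in a few lines, with JSON standing in for the marshaling format and a direct function call standing in for the message module's Send/Receive/Reply (the procedure name "add" and the `transport` wiring are assumptions for the example, not part of any real RPC system):

```python
import json

def client_stub(proc, args, transport):
    """Client stub: marshal the procedure name and arguments into a
    message, send it, block for the reply, and unmarshal the result."""
    request = json.dumps({"proc": proc, "args": args})   # pack / marshal
    reply = transport(request)                           # send msg, await reply
    return json.loads(reply)["result"]                   # unpack the result

def server_stub(request, procedures):
    """Server stub: unmarshal the request, dispatch to the named server
    procedure, then marshal the result into a reply message."""
    call = json.loads(request)
    result = procedures[call["proc"]](*call["args"])
    return json.dumps({"result": result})

# Wiring the stubs together directly stands in for the network transport.
procedures = {"add": lambda a, b: a + b}
transport = lambda req: server_stub(req, procedures)
```

In a real system the two stubs would be generated automatically by a stub generator / protocol compiler, and `transport` would be the IPC Send/Receive/Reply module.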
Triple DES or 3DES: Security concerns over DES led to the creation of Triple DES (3DES). In 3DES, the
plaintext is first encrypted using the key K1, then decrypted using the key K2, and finally encrypted
once again using the key K3. This is called three-key Triple DES.
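The encrypt-decrypt-encrypt (EDE) composition can be illustrated as follows. Note the loud assumption: a toy single-byte XOR "cipher" stands in for DES here purely to show the key composition; it offers no security whatsoever.

```python
# Toy stand-in for the DES block cipher: XOR every byte with a key byte.
# (XOR is its own inverse, so E and D have the same body.)
def E(key, block):
    """'Encrypt' the block under the given one-byte key."""
    return bytes(b ^ key for b in block)

def D(key, block):
    """'Decrypt' the block under the given one-byte key."""
    return bytes(b ^ key for b in block)

def triple_encrypt(k1, k2, k3, block):
    """3DES EDE composition: encrypt with K1, decrypt with K2, encrypt with K3."""
    return E(k3, D(k2, E(k1, block)))

def triple_decrypt(k1, k2, k3, block):
    """Inverse composition: the operations are undone in reverse order."""
    return D(k1, E(k2, D(k3, block)))
```

A side effect of the EDE arrangement (as opposed to encrypt-encrypt-encrypt) is backward compatibility: setting K1 = K2 cancels the first two stages, so the construction degrades to a single encryption under K3.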
Appendix
Introduction
1. With Suitable examples explain the fields of application of distributed systems?
2. What is global state? Explain the mechanism of distributed snapshot.
3. What are Goals of DS? Explain briefly.
4. Explain the different system architectures of DS.
5. Enumerate the fundamental characteristics required for DS.
6. How does a DS operating system differ from a normal OS?
7. Discuss about various characteristics of DS.
RPC
1. Describe Remote Procedure call RPC with example.
2. What is a Distributed object based system? Discuss how an object-based system differs from a
conventional RPC system.
3. Explain the mechanism of RPC with diagram.
DNS
1. Write a note on i) DNS ii) X.500
2. Explain about Directory and Discovery services of Name Services.
3. Discuss the problems raised by the uses of aliases in a name service and indicate how, if
at all these may be overcome.
Model
1. Describe thread synchronization, thread scheduling and thread implementation in
Distributed OS.
2. What is fundamental model? Discuss the features.
3. Explain about the Architecture Models of DS.
4. What is the Architectural Model? Discuss in brief the Client-server model.
RMI
1. Describe RMI. Discuss the design issues of RMI.
Mobile
1. Write a note on i) Mobile code ii) Mobile Agent
2. Differentiate between Mobile agents and codes.
Communication
1. Discuss in brief the Client –Server communication process.
2. Discuss the architecture of CORBA.
3. With a supporting diagram explain the general organization of an internet search engine,
showing three different layers (UI layer, processing layer and data layer).