Module 5 Previous Year Questions With Solution

The document outlines six data-centric consistency models: Strict, Sequential, Linearizability, Causal, FIFO, and Weak Consistency, each with varying levels of guarantees on data visibility and operation ordering. It also discusses design and implementation issues of Distributed Shared Memory (DSM), including granularity, memory coherence, and thrashing. Additionally, it covers fault tolerance in distributed systems, detailing types of failures such as transient, intermittent, and permanent faults, as well as various failure models like crash, omission, and Byzantine failures.


Q. Explain any five data-centric consistency models with example data stores.

1. Strict Consistency Model:


• Any read on a data item x returns a value corresponding to the result of the most recent write on x.
• This is the strongest form of consistency model.
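
As an illustration (a minimal Python sketch, not taken from the document; the class name and interface are made up), a store that keeps a single authoritative copy trivially satisfies strict consistency, because every read observes the most recent write immediately:

# Single-copy store: every read returns the result of the most recent write.
class StrictStore:
    def __init__(self):
        self._data = {}

    def write(self, item, value):
        self._data[item] = value      # instantly becomes the "most recent write"

    def read(self, item):
        return self._data.get(item)   # always reflects the latest write

store = StrictStore()
store.write("x", 1)
print(store.read("x"))                # 1 -- the value of the most recent write on x

In a real distributed store this behaviour would require an absolute global notion of "most recent", which is why strict consistency is generally considered impossible to implement across a network.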

2. Sequential Consistency Model


In this model, the result of any execution is the same as if the (read and write) operations by all processes on the data store were executed in some sequential order, and the operations of each individual process appear in this sequence in the order specified by its program.
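
This definition can be checked mechanically for small executions. The sketch below (illustrative Python, not from the document) takes each process's program-order history and reports whether some interleaving that preserves every program order makes each read return the value of the latest preceding write:

# Each operation is ('W', variable, value written) or ('R', variable, value observed).
P1 = [('W', 'x', 'a')]
P2 = [('W', 'x', 'b')]
P3 = [('R', 'x', 'b'), ('R', 'x', 'a')]
P4 = [('R', 'x', 'b'), ('R', 'x', 'a')]

def interleavings(histories):
    """All merges of the per-process histories that keep each program order."""
    if all(not h for h in histories):
        yield []
        return
    for i, h in enumerate(histories):
        if h:
            rest = histories[:i] + [h[1:]] + histories[i + 1:]
            for tail in interleavings(rest):
                yield [h[0]] + tail

def legal(order):
    """True if every read returns the value of the most recent preceding write."""
    store = {}
    for op, var, val in order:
        if op == 'W':
            store[var] = val
        elif store.get(var) != val:
            return False
    return True

print(any(legal(o) for o in interleavings([P1, P2, P3, P4])))  # True: sequentially consistent

Here both reading processes see b before a, which the single order W(x)b, W(x)a can explain; if one process saw b then a while another saw a then b, no single order would work and the check would print False.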
3. Linearizability Consistency Model
• The operations of each individual process appear in this sequence in the order specified by its program.
• If tsOP1(x) < tsOP2(y), then operation OP1(x) should precede OP2(y) in this sequence.
• Linearizable = sequentially consistent + operations ordered according to global time.

4. Causal Consistency Model


• In this model, all processes see memory reference operations that are potentially causally related in the same (causal) order; operations that are not causally related may be seen in different orders by different processes.
• A memory reference operation is said to be causally related to another memory reference operation if the first operation may have been influenced by the second operation.
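
One common way to obtain causal ordering (a sketch under the assumption that writes are propagated as broadcast messages tagged with vector timestamps; all names are illustrative) is to hold back an incoming write at a replica until every write that causally precedes it has been applied:

# Sketch: causally ordered write propagation using vector clocks.
class Replica:
    def __init__(self, pid, n):
        self.pid = pid
        self.V = [0] * n       # per-process count of writes applied at this replica
        self.store = {}
        self.pending = []      # writes that arrived before their causal predecessors

    def local_write(self, var, val):
        self.V[self.pid] += 1
        self.store[var] = val
        return (self.pid, list(self.V), var, val)    # message to send to other replicas

    def receive(self, msg):
        self.pending.append(msg)
        self._apply_ready()

    def _apply_ready(self):
        progress = True
        while progress:
            progress = False
            for msg in list(self.pending):
                sender, ts, var, val = msg
                ready = ts[sender] == self.V[sender] + 1 and all(
                    ts[k] <= self.V[k] for k in range(len(ts)) if k != sender)
                if ready:                 # every causally earlier write has been applied
                    self.store[var] = val
                    self.V[sender] = ts[sender]
                    self.pending.remove(msg)
                    progress = True

r0, r1, r2 = (Replica(i, 3) for i in range(3))
m1 = r0.local_write("x", 1)     # W(x)1 at P0
r1.receive(m1)
m2 = r1.local_write("x", 2)     # W(x)2 causally depends on W(x)1
r2.receive(m2)                  # arrives first, so it is held back at P2
r2.receive(m1)                  # now both writes apply in causal order
print(r2.store["x"])            # 2

Writes that are not causally related carry incomparable timestamps, so different replicas may apply them in different orders, which is exactly what causal consistency allows.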
5. FIFO Consistency Model:
• It is weaker than causal consistency.
• This model ensures that all write operations performed by a single process are seen by all other processes in the order in which they were performed, as though the writes from each process pass through a pipeline.
• In other words, writes done by a single process are seen by all other processes in the order in which they were issued, but writes from different processes may be seen in a different order by different processes.
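
A sketch of one way to obtain FIFO consistency (an assumed mechanism with illustrative names, not from the document): tag each process's writes with a per-sender sequence number, and have every replica apply a given sender's writes strictly in that order, holding back anything that arrives early.

# Minimal sketch of FIFO-consistent write propagation using per-sender sequence numbers.
class FifoReplica:
    def __init__(self):
        self.store = {}
        self.next_seq = {}        # sender id -> next sequence number expected
        self.held = {}            # sender id -> out-of-order writes waiting to be applied

    def receive(self, sender, seq, var, val):
        self.held.setdefault(sender, {})[seq] = (var, val)
        expected = self.next_seq.get(sender, 0)
        # apply this sender's writes strictly in the order they were issued
        while expected in self.held[sender]:
            v, value = self.held[sender].pop(expected)
            self.store[v] = value
            expected += 1
        self.next_seq[sender] = expected

r = FifoReplica()
r.receive("P1", 1, "x", "b")      # arrives early: held back
r.receive("P1", 0, "x", "a")      # now both of P1's writes apply in issue order: a then b
print(r.store["x"])               # "b"

No ordering is enforced between writes from different senders, which is the sense in which FIFO consistency is weaker than causal consistency.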

6. Weak Consistency Model


• The basic idea behind the weak consistency model is enforcing consistency on a group of memory reference operations rather than on individual operations.
• It uses a special variable called a synchronization variable, which is used to synchronize memory.
• When a process accesses a synchronization variable, the entire memory is synchronized by making the changes made to the memory visible to all other processes.
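
A minimal sketch of the synchronization-variable idea (it assumes a globally reachable backing store; the names are illustrative): ordinary writes stay local, and consistency is enforced only when a process touches the synchronization variable.

# Sketch: weak consistency -- consistency is enforced only at synchronization points.
shared_store = {}                  # stands in for the (assumed) globally reachable memory

class WeakProcess:
    def __init__(self):
        self.local = dict(shared_store)
        self.dirty = {}            # writes not yet visible to other processes

    def write(self, var, val):
        self.local[var] = val
        self.dirty[var] = val      # other processes do not see this yet

    def read(self, var):
        return self.local.get(var)

    def synchronize(self):
        # accessing the synchronization variable: publish own writes,
        # then pick up everything other processes have published
        shared_store.update(self.dirty)
        self.dirty.clear()
        self.local = dict(shared_store)

p, q = WeakProcess(), WeakProcess()
p.write("x", 42)
print(q.read("x"))                 # None -- p has not synchronized yet
p.synchronize(); q.synchronize()
print(q.read("x"))                 # 42 -- visible after synchronization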
Q. Discuss design and implementation issues of distributed shared memory.
Design & Implementation Issues:

1. Granularity
Granularity refers to the block size of a DSM system, that is, to the unit of
sharing and the unit of data transfer across the network when a network
block fault occurs. Possible units are a few words, a page, or a few pages.
Selecting proper block size is an important part of the design of a DSM
system because block size is usually a measure of the granularity of
parallelism explored and the amount of network traffic generated by
network block faults.
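
For instance, a page-based choice of granularity can be sketched as follows (the block size and the remote-fetch interface are assumptions for illustration): every access is mapped to a block, and a miss causes the whole block to be transferred, not just the requested word.

BLOCK_SIZE = 4096                      # assumed page-sized unit of sharing

def block_of(addr):
    return addr // BLOCK_SIZE          # which shared block this address falls in

class DsmNode:
    def __init__(self, fetch_remote_block):
        self.cached = {}                       # block number -> local copy (bytes)
        self.fetch_remote_block = fetch_remote_block

    def read_byte(self, addr):
        b = block_of(addr)
        if b not in self.cached:               # "network block fault"
            # the entire block travels over the network: larger blocks mean
            # fewer faults but more traffic per fault, and vice versa
            self.cached[b] = self.fetch_remote_block(b)
        return self.cached[b][addr % BLOCK_SIZE]

node = DsmNode(lambda b: bytes(BLOCK_SIZE))    # stub remote fetch returning a zeroed block
print(node.read_byte(5000))                    # faults in block 1, then returns 0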
2. Structure of shared-memory space:
Structure refers to the layout of the shared data in memory. The structure of
the shared-memory space of a DSM system is normally dependent on the
type of applications that the DSM system is intended to support.
3. Heterogeneity:
The DSM systems built for homogeneous systems need not address the
heterogeneity issue. However, if the underlying system environment is
heterogeneous, the DSM system must be designed to take care of
heterogeneity so that it functions properly with machines having different
architectures.
4. Memory coherence and access synchronization:
In a DSM system that allows replication of shared data items, copies of shared data items may simultaneously be available in the main memories of a number of nodes. Keeping these copies consistent (memory coherence) and synchronizing concurrent accesses to the shared data are therefore central design issues.
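
One widely used coherence protocol for replicated blocks is write-invalidate; it is named here as a standard technique rather than something stated in this document. Before a node writes, all other copies are invalidated, so no node can later read a stale value. A rough sketch:

# Sketch of write-invalidate coherence for one replicated block.
class CoherentBlock:
    def __init__(self, nodes):
        self.copies = {n: None for n in nodes}   # node -> cached value (None = invalid)

    def write(self, node, value):
        for other in self.copies:                # invalidate every other copy first
            if other != node:
                self.copies[other] = None
        self.copies[node] = value                # only the writer holds a valid copy

    def read(self, node, fetch_from_owner):
        if self.copies[node] is None:            # invalid copy: refetch the current value
            self.copies[node] = fetch_from_owner()
        return self.copies[node]

blk = CoherentBlock(["A", "B"])
blk.write("A", 10)
print(blk.read("B", fetch_from_owner=lambda: blk.copies["A"]))   # 10, never a stale value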
5. Thrashing:
In a DSM system, data blocks migrate between nodes on demand.
Therefore, if two nodes compete for write access to a single data item, the
corresponding data block may be transferred back and forth at such a high
rate that no real work can get done. A DSM system must use a policy to
avoid this situation (usually known as thrashing).
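
One simple policy discussed in the DSM literature for reducing thrashing is to let a node keep a block for a minimum amount of time before it may migrate away; the sketch below assumes such a hold-time policy (the constant and names are illustrative).

import time

MIN_HOLD_SECONDS = 0.05          # assumed minimum useful ownership period

class BlockHolder:
    def __init__(self):
        self.acquired_at = {}

    def acquire(self, block):
        self.acquired_at[block] = time.monotonic()

    def may_migrate(self, block):
        # refuse to ship the block away until this node has had time to use it,
        # so two competing writers cannot bounce it back and forth endlessly
        return time.monotonic() - self.acquired_at[block] >= MIN_HOLD_SECONDS

holder = BlockHolder()
holder.acquire("b1")
print(holder.may_migrate("b1"))   # False right after acquiring
time.sleep(MIN_HOLD_SECONDS)
print(holder.may_migrate("b1"))   # True once the hold period has elapsed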
6. Data location and access:
To share data in a DSM system, it should be possible to locate and retrieve
the data accessed by a user process. Therefore, a DSM system must
implement some form of data block locating mechanism in order to service
network data block faults to meet the requirement of the memory
coherence semantics being used.
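
A common data-locating approach (described in standard DSM texts as the centralized-server algorithm; the sketch below is illustrative) keeps one manager that maps each block to its current owner, so a faulting node asks the manager where to fetch the block from.

# Sketch of a centralized-manager lookup for locating DSM blocks.
class BlockManager:
    def __init__(self):
        self.owner = {}                    # block number -> node currently holding it

    def register(self, block, node):
        self.owner[block] = node

    def locate(self, block):
        return self.owner.get(block)       # a faulting node asks here, then fetches

manager = BlockManager()
manager.register(7, "node-B")
print(manager.locate(7))                   # "node-B": fetch block 7 from node-B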
7. Replacement strategy:
If the local memory of a node is full, a cache miss at that node implies not
only a fetch of the accessed data block from a remote node but also a
replacement. That is, a data block of the local memory must be replaced by
the new data block. Therefore, a cache replacement strategy is also
necessary in the design of a DSM system.
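
For example, a least-recently-used (LRU) replacement of cached blocks, a common choice though not specified in this document, can be sketched with an ordered dictionary:

from collections import OrderedDict

# Sketch: LRU replacement for the locally cached DSM blocks of one node.
class LruBlockCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()          # block number -> data, oldest first

    def access(self, block, fetch):
        if block in self.blocks:
            self.blocks.move_to_end(block)   # mark as most recently used
        else:
            if len(self.blocks) >= self.capacity:
                evicted, _ = self.blocks.popitem(last=False)   # replace the LRU block
                # in a real DSM the evicted block would be flushed back to its owner here
            self.blocks[block] = fetch(block)
        return self.blocks[block]

cache = LruBlockCache(capacity=2)
cache.access(1, fetch=lambda b: f"data-{b}")
cache.access(2, fetch=lambda b: f"data-{b}")
cache.access(3, fetch=lambda b: f"data-{b}")     # evicts block 1 (least recently used)
print(list(cache.blocks))                        # [2, 3]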

Q. What is fault tolerance and describe different types of failure models.


• A Distributed System should be fault-tolerant. It should be able to
continue functioning in the presence of faults.
• Fault tolerance is related to dependability.
• Dependability is a term that covers a number of useful requirements.
These requirements include:
1. Availability: Availability is the property that a system is ready to be used immediately; a highly available system is one that is most likely to be working correctly at any given instant in time.
2. Reliability: This is the ability of a computer system to run continuously without failure. Unlike availability, reliability is defined over a time interval instead of at an instant in time. A highly reliable system works for a long period of time without interruption.
3. Safety: Safety means that even when a system temporarily fails to carry out its processes correctly and its operations are incorrect, nothing catastrophic happens.
4. Maintainability: Maintainability refers to how easily a failed system can be repaired. A highly maintainable system also tends to show a high degree of availability, especially if failures can be detected and fixed automatically.
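
As a worked illustration of the availability/reliability distinction (the figures below are made up for the example), steady-state availability is commonly computed as MTTF / (MTTF + MTTR):

mttf_hours = 1000.0    # mean time to failure (hypothetical)
mttr_hours = 1.0       # mean time to repair (hypothetical)

availability = mttf_hours / (mttf_hours + mttr_hours)
print(f"availability = {availability:.4f}")   # ~0.9990: likely usable "at a given instant"

A long MTTF makes a system reliable over an interval, while a short MTTR lets a system that does fail occasionally still be highly available.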
Types of faults:
• Transient faults occur once and then disappear. If the operation is
repeated, the fault goes away.
• An intermittent fault occurs, then vanishes of its own accord, then
reappears, and so on.
• A permanent fault is one that continues to exist until the faulty
component is replaced.
Failure Models
• A crash failure occurs when a server prematurely halts, but was working
correctly until it stopped. An important aspect of crash failures is that
once the server has halted, nothing is heard from it anymore. A typical
example of a crash failure is an operating system that comes to a
grinding halt, and for which there is only one solution: reboot it.
• An omission failure occurs when a server fails to respond to a request. Several things might go wrong. In the case of a receive omission failure, possibly the server never got the request in the first place. Note that it may well be the case that the connection between a client and a server has been correctly established, but that there was no thread listening to incoming requests.
• A send omission failure happens when the server has done its work but somehow fails in sending the response. Such a failure may happen, for example, when a send buffer overflows while the server was not prepared for such a situation.
• Timing failures occur when the response lies outside a specified real-
time interval.
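
Crash, omission, and timing failures are typically masked on the client side with a timeout and retries; the sketch below is illustrative (call_with_retries, flaky_server, and the parameters are placeholders, not from the source).

import time

def call_with_retries(call_server, request, timeout_s=2.0, retries=3):
    """Retry a request so a lost reply (omission) or a late reply (timing failure)
    does not immediately surface as a client-visible failure."""
    for attempt in range(retries):
        try:
            return call_server(request, timeout=timeout_s)
        except TimeoutError:
            time.sleep(0.1 * (attempt + 1))      # brief back-off before retrying
    raise TimeoutError(f"no reply after {retries} attempts")

# Hypothetical transport: a stub that never answers, to exercise the retry path.
def flaky_server(request, timeout):
    raise TimeoutError

try:
    call_with_retries(flaky_server, "get /status")
except TimeoutError as e:
    print(e)                                     # "no reply after 3 attempts"

Note that retrying is only safe when the request is idempotent; otherwise the server may end up executing it more than once.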
• A serious type of failure is a response failure, by which the server's
response is simply incorrect. Two kinds of response failures may happen.
• In the case of a value failure, a server simply provides the wrong reply to
a request. For example, a search engine that systematically returns Web
pages not related to any of the search terms used has failed.
• The other type of response failure is known as a state transition failure. This kind of failure happens when the server reacts unexpectedly to an incoming request. For example, if a server receives a message it cannot recognize, a state transition failure happens if no measures have been taken to handle such messages.
• Byzantine failures : In distributed systems, Byzantine failures represent
the most challenging type of fault, where components can behave
unpredictably or maliciously, sending conflicting or incorrect data,
making it difficult to achieve consensus and maintain system integrity.
• Arbitrary failures are also known as Byzantine failures. In effect, when arbitrary failures occur, clients should be prepared for the worst. In particular, it may happen that a server produces output it should never have produced, but which cannot be detected as being incorrect. Worse yet, a faulty server may even be maliciously working together with other servers to produce intentionally wrong answers.
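
A classical result connected with Byzantine failures (in the standard synchronous, unauthenticated setting) is that agreement is possible only if fewer than one third of the processes are faulty, i.e. n >= 3f + 1 for n processes of which f may fail arbitrarily. A small illustrative check:

def byzantine_agreement_possible(n, faulty):
    # classical bound: agreement is achievable only if n >= 3f + 1
    return n >= 3 * faulty + 1

print(byzantine_agreement_possible(4, 1))   # True: 4 processes can tolerate 1 Byzantine fault
print(byzantine_agreement_possible(3, 1))   # False: 3 processes are not enough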
