
*Distributed System*

1] Introduction
Q-1] What is a Distributed Computing System?
Q-2] Distributed computing system models.
Q-3] Why are distributed computing systems gaining popularity?
Q-4] What is a distributed operating system?
Q-5] Explain Distributed Computing Environment?
Q-6] Explain Message Passing System and its features.
Q-7] Explain:
1] Synchronization 2] Buffering
Q-8] Explain Process Addressing.

2] Remote Procedure Calls


Q-1] Explain RPC.
Q-2] Explain implementation of RPC mechanism.
Q-3] Explain RPC messages.
Q-4] Explain parameter passing semantics.
Q-5] Explain Call semantics and its types.
Q-6] Explain communication protocol for RPCs.
Q-7] Explain Client-Server Binding.

3] Distributed Shared Memory


Q-1] Explain DSM and architecture of DSM.
Q-2] Design and implementation issues of DSM.
Q-3] Structure of shared memory space.
Q-4] Explain consistency model.
Q-5] Explain Thrashing.
Q-6] Explain Synchronization and its issues.
4] Resource Management
Q-1] Explain Resource management.
Q-2] Features of good global scheduling algorithm.
Q-3] Explain:
1] Task Assignment Approach:
2] Load Balancing Approach:
Q-4] Explain process migration and its features.

5] Distributed File System


Q-1] What is a distributed file system and what are the features of a good distributed file system?
Q-2] Explain file models.
Q-3] Explain file accessing models.
Q-4] Atomic transaction and properties.
Q-5] Explain design principles.

1] Introduction

Q-1] What is a Distributed Computing System?


A distributed computing system is basically a collection of processors interconnected
by a communication network in which each processor has its own local memory and the
communication between any two processors of the system takes place by message passing
over the communication network.
Evolution of distributed computing systems:
1] Early computers were very expensive and very large in size.
2] Only a few computers were available, mostly in research laboratories of universities
and industries.
3] Batching similar jobs improved CPU utilization.
4] Automatic job sequencing.
5] Multiprogramming - organizing jobs.
6] Time sharing - multiple users execute simultaneously.
Computer architectures are of two types:
1] Tightly Coupled Systems
2] Loosely Coupled Systems

1] Tightly coupled systems-


1] In these systems, there is a single systemwide primary memory that is shared
by all the processors.
2] Communication between the processors takes place through the shared
memory.
2] Loosely coupled systems-
1] In these systems, the processors do not share memory, and each processor has
its own local memory.
2] Such systems are referred to as distributed computing systems.

Q-2] Distributed computing system models.


1] Minicomputer Model:
1] The minicomputer model is a simple extension of the centralized time-sharing
system.
2] This model consists of a few minicomputers interconnected by a
communication network.
3] The minicomputer model may be used when resource sharing with remote users
is desired.
4] ARPAnet is based on the minicomputer model.
2] Workstation Model:
1] A distributed computing system based on
the workstation model consists of several
workstations interconnected by a
communication network.
2] This model is not simple to implement.
3] The Sprite system is an example of an experimental system based on the
workstation model.

3] Workstation-Server Model:
1] The workstation model is a network of personal
workstations.
2] A distributed computing system based on the
workstation server model consists of a few
minicomputers and several workstations
interconnected by a communication network.
3] Each minicomputer is used as a server machine.
4] The V-System is based on the workstation-server
model.

4] Processor-Pool Model:
1] The processor-pool model consists of a large number of minicomputers.
2] In this model, a run server and a file server are attached to the
communication network.

5] Hybrid Model:
1] The hybrid model combines the workstation-server and processor-pool models.
2] The processors in the pool can be allotted dynamically.
3] The hybrid model gives guaranteed response to interactive jobs.
4] The hybrid model is more expensive to implement.

Q-3] Why are distributed computing systems gaining popularity?


The installation and use of distributed computing systems are rapidly increasing.
1] Information Sharing among Distributed Users:
1] A major motivation for distributed computing systems was the desire for an
efficient facility for sharing information among distributed users.
2] This facility may be useful in many ways.
2] Resource Sharing:
1] Information is not the only thing that can be shared in a distributed computing
system.
2] Software resources such as software libraries and databases, as well as hardware
resources, can be shared in a distributed computing system.
3] Better price-performance ratio:
This is one of the most important reasons for the growing popularity of distributed
computing systems.
4] Shorter Response Times and Higher Throughput:
The most commonly used performance metrics are response time and throughput of user
processes.
5] Higher Reliability:
A reliable system prevents loss of information even in the event of component failures.
6] Extensibility and Incremental Growth:
Advantage of distributed computing systems is they are capable of incremental growth.
Q-4] What is a distributed operating system?
1] An operating system is a program that controls the resources of a computer
system.
2] The operating systems used for distributed computing systems can be classified into
two types:
1] Network operating systems
2] Distributed operating systems.
3] The three most important features commonly used to differentiate between
these two types of operating systems:
1] System Image:
1] Distributed operating system hides the existence of multiple computers and
provides a single-system image to its users.
2] With a network operating system, users are aware of the existence of multiple computers.
2] Autonomy:
1] The set of system calls that an OS supports is implemented by a set of
programs called the kernel of the computer system.
2] The kernel manages and controls the hardware of the computer system.
3] Fault tolerance:
A network OS provides no fault tolerance, whereas in a distributed OS fault tolerance
is very high.

Issues in designing a distributed operating system:


1] Transparency
2] Reliability
3] Flexibility
4] Performance
5] Security

Q-5] Explain Distributed Computing Environment?


1] DCE stands for Distributed Computing Environment.
2] DCE means “It is not an operating system, nor is it an application. Rather, it is an
integrated set of services and tools that can be installed as a coherent environment on top
of existing operating systems”.
3] A primary goal of DCE is vendor independence.
4] It runs on many different kinds of computers and operating systems produced by
different vendors.
For example, OSF/1, Windows.
5] DCE is a middleware software layered between the DCE applications layer and the
operating system and networking layer.

DCE Components:
1] Threads Package
2] Remote Procedure Call
3] Name Services
4] Security Services
5] Distributed File Service
6] Distributed Time Service

Q-6] Explain Message Passing System and its features.


1] Inter-process communication requires information sharing among two or more
processes.
2] The two methods for information sharing are as follows:
1] Shared-data approach
2] Message-passing approach

Message Passing system:


A message system is a subsystem of a distributed operating system that provides a set
of message-based IPC Protocols.
Features of Good message passing system:
1] Simplicity:
1] A message-passing system should be simple and easy to use.
2] It should be straightforward for new applications to communicate with existing ones.
2] Uniform Semantics:
A message-passing system may be used for the following two types of inter-process
communication:

1] Local communication
2] Remote communication
3] Efficiency:
1] Efficiency is critical issues for message passing.
2] Message passing system is not efficient.
4] Correctness:
1] Correctness is feature related to IPC protocols for Group communication.
2] Issues related to correctness as follows:
1] Atomicity
2] Ordered delivery
5] Security:
A good message-passing system should be capable of providing secure end-to-end communication.
6] Portability:
The message-passing system should itself be portable.

Issues in IPC by Message passing:


1] A message is a block of information.
2] It consists of a fixed-length header and a variable-size collection of typed data
objects.

Header consists of following elements:

1] Address:
1] It contains characters that uniquely identify the sending and receiving processes
in the network.
2] It has 2 parts:
1] Sending process address
2] Receiving process address
2] Sequence number:
It is useful for identifying lost message and duplicate message.
3] Structural information:
1] This element has two parts.
2] One part specifies the length of the variable-size message data.
In the design of an IPC protocol for a message-passing system, the following issues need
to be considered:
1] Who is the sender?
2] Who is the receiver?
3] Is there one receiver or many receivers?
These issues are addressed by message-oriented IPC protocol.
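
The header elements above can be pictured with a small Python sketch; the field names and example values are illustrative assumptions, not a standard message layout.

from dataclasses import dataclass

# Minimal sketch of a fixed-length header plus variable-size data, as described above.
@dataclass
class MessageHeader:
    sender_address: str     # address of the sending process
    receiver_address: str   # address of the receiving process
    sequence_number: int    # detects lost and duplicate messages
    data_length: int        # structural information: length of the variable-size data

@dataclass
class Message:
    header: MessageHeader
    data: bytes = b""       # variable-size collection of typed data objects

msg = Message(MessageHeader("nodeA:proc1", "nodeB:proc7", 42, 5), b"hello")
print(msg.header.sequence_number, len(msg.data))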

Q-7] Explain:
1] Synchronization:
1] A central issue in the communication structure is the synchronization
imposed on the communicating processes by the communication primitives.
2] The semantics used for synchronization may be broadly classified as
blocking and non-blocking types.

3] The synchronization imposed on the communicating processes basically
depends on one of the two types of semantics used for the send and receive
primitives.
4] When both the send and receive primitives of a communication between two
processes use blocking semantics, the communication is said to be synchronous;
otherwise, it is asynchronous.
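
A minimal sketch contrasting blocking and non-blocking receive primitives, using an in-process queue as a stand-in for the communication channel (an assumption; real IPC would use the network).

import queue, threading

channel = queue.Queue()

def blocking_receive():
    # Blocking semantics: the receiver is suspended until a message arrives,
    # so matching send/receive pairs behave synchronously.
    return channel.get()

def nonblocking_receive():
    # Non-blocking semantics: return immediately, with or without a message.
    try:
        return channel.get_nowait()
    except queue.Empty:
        return None   # caller must poll or be notified later

threading.Thread(target=lambda: channel.put("request")).start()
print(blocking_receive())      # waits until "request" arrives
print(nonblocking_receive())   # returns None if nothing is pending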

2] Buffering:
1] Messages can be transmitted from one
process to another by copying the body of the
message from the address space of the sending
process to the address space of the receiving
process.
2] In inter process communication, the
message-buffering strategy is strongly related to
synchronization strategy.
The three types of buffering strategies are:

1] No buffering
2] Single message buffer
3] Multiple message buffer
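
A minimal sketch of the single-message and multiple-message buffering strategies, again using in-process queues as stand-ins for receiver-side message buffers (an assumption).

import queue

single_buffer = queue.Queue(maxsize=1)   # single-message buffer: one pending message
multi_buffer  = queue.Queue(maxsize=8)   # multiple-message buffer: bounded mailbox

def send(buf, msg):
    try:
        buf.put_nowait(msg)
        return True
    except queue.Full:
        # With limited buffering, a send fails (or blocks) when the receiver
        # has not yet consumed the previously buffered message(s).
        return False

print(send(single_buffer, "m1"))                            # True
print(send(single_buffer, "m2"))                            # False: buffer already full
print(send(multi_buffer, "m1"), send(multi_buffer, "m2"))   # True True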

Q-8] Explain Process Addressing.


A message-passing system usually supports two types of process addressing:
1] Explicit addressing:
The process with which communication is desired is explicitly named as a parameter
in the communication primitive used.
2] Implicit addressing:
1] A process willing to communicate does not explicitly name a process for
communication.
2] The sender names a service instead of a process. This type of primitive is
useful in client-server communications when the client is not concerned with which
particular server services its request.
3] This type of process addressing is also known as functional addressing.

These are the two basic types of process addressing used in communication primitives; a small sketch of both follows.
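
A minimal sketch of explicit versus implicit (functional) addressing; the registry, service name, and process identifiers are illustrative assumptions.

service_registry = {"time_service": ["nodeA:proc3", "nodeB:proc9"]}

def send_explicit(process_id, message):
    # Explicit addressing: the destination process is named directly.
    print(f"deliver {message!r} to {process_id}")

def send_implicit(service_name, message):
    # Implicit (functional) addressing: the sender names a service; the
    # system picks any server currently providing that service.
    process_id = service_registry[service_name][0]
    send_explicit(process_id, message)

send_explicit("nodeB:proc9", "what time is it?")
send_implicit("time_service", "what time is it?")
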
2] Remote Procedure Calls

Q-1] Explain RPC.


1] RPC stands for Remote Procedure Call.
2] Distributed systems provided one of the primary motivations for developing the RPC facility.
3] The RPC has become a widely accepted IPC mechanism in distributed systems.
4] The popularity of RPC is due to the following features:
1] Simple call syntax
2] Familiar semantics
3] Its efficiency

Types of RPC:

• Call-back RPC
• Broadcast RPC
• Batch-mode RPC

The RPC model:


The RPC model is similar to the procedure call model used for the transfer of control and
data within a program.

1] The caller sends a call message to the callee and waits for a reply message.
2] The request message contains the remote procedure's parameters.
3] The server process executes the procedure and then returns the result of procedure
execution in a reply message to the client process.
4] Once the reply message is received, the result of procedure execution is extracted, and
the caller's execution is resumed.
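
The call/reply exchange described above can be sketched in Python; the JSON encoding, port number, and procedure names here are illustrative assumptions, not a real RPC protocol.

import json, socket, socketserver

PROCEDURES = {"add": lambda a, b: a + b}   # procedures the server exports

class RPCHandler(socketserver.BaseRequestHandler):
    def handle(self):
        call = json.loads(self.request.recv(4096))        # receive call message
        result = PROCEDURES[call["proc"]](*call["args"])   # execute the procedure
        self.request.sendall(json.dumps({"result": result}).encode())   # reply message

def rpc_call(host, port, proc, *args):
    with socket.create_connection((host, port)) as s:
        s.sendall(json.dumps({"proc": proc, "args": args}).encode())   # call message
        return json.loads(s.recv(4096))["result"]          # caller resumes with result

# Server side: socketserver.TCPServer(("", 9000), RPCHandler).serve_forever()
# Client side: print(rpc_call("localhost", 9000, "add", 2, 3))   # -> 5
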
Transparency of RPC:
A transparent RPC mechanism is one in which local procedures and remote procedures
are indistinguishable to programmers.
This requires the following two types of transparencies:
1. Syntactic transparency
2. Semantic transparency

Q-2] Explain implementation of RPC mechanism.


1] The implementation of an RPC mechanism is based on the concept of stubs.
2] A separate stub procedure is associated with each of the two processes.
3] Implementation of an RPC mechanism involves the following five elements of program:
1. The client 2. The client stub 3. The RPC Runtime
4. The server stub 5. The Server

1] Client:
The client is a user process that initiates a remote procedure call. To make a remote
procedure call, the client makes a perfectly normal local call that invokes a corresponding
procedure in the client stub.
2] Client Stub:
The client stub is responsible for carrying out the following two tasks:
1] On receipt of a call request from the client, it packs a specification of the target
procedure and the arguments into a message and asks the local RPCRuntime to send it to
the server stub.
2] On receipt of the result of procedure execution, it unpacks the result and passes it
to the client.
3] RPC Runtime:
The RPC Runtime handles transmission of messages across the network between
client and server machines. It is responsible for retransmissions, packet routing, and
encryption.
4] Server Stub:
The job of the server stub is very similar to that of the client stub.
5] Server:
On receiving a call request from the server stub, the server executes the appropriate
procedure and returns the result to the server stub.
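
A minimal sketch of how the five elements cooperate, with the network replaced by a direct function call so the stub roles stand out; all names are illustrative assumptions.

def server_procedure(a, b):                  # 5] the server's actual procedure
    return a * b

def server_stub(call_message):               # 4] unpack args, call server, pack result
    proc, args = call_message
    return ("reply", server_procedure(*args))

def rpc_runtime_transmit(call_message):      # 3] RPCRuntime: message transmission
    # In a real system this sends the message over the network and handles
    # retransmission; here it simply forwards to the server stub.
    return server_stub(call_message)

def client_stub(proc, *args):                # 2] pack args (marshaling), unpack reply
    reply_kind, result = rpc_runtime_transmit((proc, args))
    return result

def client():                                # 1] makes a perfectly normal local call
    return client_stub("multiply", 6, 7)

print(client())   # -> 42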

Q-3] Explain RPC messages.


1] Any remote procedure call involves a client process and a server process
that are possibly located on different computers.
2] Based on this mode of interaction, the two types of messages involved in the
implementation of an RPC system are as follows:
1] Call Message:
Since a call message is used to request execution of a particular remote procedure,
the two basic components necessary in a call message are as follows:
1. The identification information of the remote procedure to be executed
2. The arguments necessary for the execution of the procedure

2] Reply Message:
When the server of an RPC receives a call message from a client, it could be faced
with one of several conditions; the reply message returns either the result of successful
procedure execution or an indication of why the call could not be serviced.
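
A minimal sketch of the two message types; the field names are illustrative assumptions, not a standard wire format.

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class CallMessage:
    message_id: int          # lets the client match the reply to this call
    remote_procedure: str    # identification of the procedure to be executed
    arguments: tuple         # arguments necessary for the execution

@dataclass
class ReplyMessage:
    message_id: int          # copied from the call message it answers
    success: bool            # whether the procedure executed normally
    result: Optional[Any] = None   # result of execution, or None on failure

call = CallMessage(1, "add", (2, 3))
reply = ReplyMessage(call.message_id, True, 5)
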
Q-4] Explain parameter passing semantics.
1] The choice of parameter-passing semantics is crucial to the design of an RPC
mechanism.
2] The two choices are call-by-value and call-by-reference.
1] Call by value:
In call by value method, all parameters are copied into a message that is transmitted
from the client to the server through the intervening network.
This poses no problems for simple compact types such as integers, counters, small
arrays, and so on.
However, copying large amounts of data into a message is time and memory consuming;
therefore, this method is not suitable for passing parameters involving voluminous
data.
2] Call by reference:
Since the client and server run in separate address spaces, passing pointers is
difficult, so most RPC mechanisms use the call-by-value semantics for parameter passing.
In object-based systems, the call-by-reference semantics is known as
call-by-object-reference.
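
A minimal sketch of call-by-value passing, assuming JSON as an illustrative marshaling format: the arguments are copied into the call message, so the server works on copies rather than on the caller's own variables.

import json

def marshal_call(proc, *args):
    return json.dumps({"proc": proc, "args": args})   # copies the values into the message

def unmarshal_call(message):
    call = json.loads(message)
    return call["proc"], call["args"]

numbers = [1, 2, 3]
msg = marshal_call("sum", numbers)
proc, (server_copy,) = unmarshal_call(msg)
server_copy.append(4)    # modifies only the server's copy
print(numbers)           # [1, 2, 3]: the caller's list is unchanged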

Q-5] Explain Call semantics and its types.


1] In RPC, the caller and the callee processes are possibly located on different
nodes.
2] Failure of communication links between the caller and the callee nodes is also
possible
3] The call semantics of an RPC system determines how often the remote
procedure may be executed under fault conditions; it depends on how the RPCRuntime
handles lost messages and retransmissions.
4] The different types of call semantics used in RPC systems are described
below.
1] Possibly or May-be Call Semantics
2] Last-One Call Semantics
3] Last-of-Many Call Semantics
4] At-least-Once Call Semantics
5] Exactly-Once Call Semantics
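
A minimal sketch of at-least-once semantics built on retransmission; the timeout, retry count, and message format are illustrative assumptions. Because the call message may be retransmitted, the remote procedure can execute more than once under failures.

import json, socket

def call_at_least_once(host, port, proc, *args, retries=5, timeout=1.0):
    request = json.dumps({"proc": proc, "args": args}).encode()
    for attempt in range(retries):
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.sendall(request)                 # (re)transmit the call message
                return json.loads(s.recv(4096))    # reply received: return result
        except (socket.timeout, OSError):
            continue                               # lost message or reply: try again
    raise RuntimeError("no reply after retries")   # may-be semantics would simply give up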

Q-6] Explain communication protocol for RPCs.


Based on the needs of different systems, several communication protocols have been
proposed for use in RPCs.
1] The request Protocol:
1] This protocol is also known as the R protocol.
2] It is used in RPCs in which the called procedure has nothing to return as the
result of procedure execution.
3] The protocol provides may-be call semantics and requires no retransmission of
request messages.

2] The Request/Reply Protocol:


1] This protocol is also known as the RR protocol.
2] It is useful for the design of systems involving simple RPCs.
3] The protocol is based on the idea of using implicit acknowledgment to
eliminate explicit acknowledgment messages.

3] The Request/Reply/Acknowledge-Reply Protocol:


1] This protocol is also known as the RRA protocol.
2] The implementation of exactly-once call semantics with RR protocol requires
the server to maintain a record of the replies in its reply cache.
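
A minimal sketch of the server-side reply cache used with the RR/RRA protocols; the message layout and client identifiers are illustrative assumptions. A retransmitted request gets the cached reply instead of a second execution, while a newer request implicitly acknowledges the previous reply.

reply_cache = {}   # client_id -> (last_request_id, cached_reply)

def handle_request(client_id, request_id, proc, args):
    last = reply_cache.get(client_id)
    if last and last[0] == request_id:
        # Retransmitted request: resend the cached reply instead of
        # executing the procedure a second time.
        return last[1]
    # A newer request from the same client implicitly acknowledges the
    # previous reply, so its cache entry is simply overwritten below
    # (no explicit acknowledgment message is needed).
    reply = {"request_id": request_id, "result": proc(*args)}
    reply_cache[client_id] = (request_id, reply)
    return reply

print(handle_request("clientA", 1, lambda a, b: a + b, (2, 3)))
print(handle_request("clientA", 1, lambda a, b: a + b, (2, 3)))   # duplicate: cached reply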

Q-7] Explain Client-Server Binding.


1] It is necessary for a client to know the location of a server before a remote
procedure call can take place between them.
2] The process by which a client becomes associated with a server so that calls
can take place is known as binding. Client-server binding involves two issues:
1] Server Naming
2] Server Locating
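
A minimal sketch of binding through a name server (binding agent); the service names and addresses are illustrative assumptions. Servers register the service they export, and a client looks the service up to locate a server before making RPCs.

binding_agent = {}   # service name -> (server host, port)

def register(service, host, port):
    binding_agent[service] = (host, port)   # server exports its interface

def lookup(service):
    return binding_agent[service]           # client locates the server

register("math_service", "nodeB", 9000)
host, port = lookup("math_service")          # client is now bound to nodeB:9000
print(host, port)
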
3] Distributed Shared Memory

Q-1] Explain DSM and architecture of DSM.


1] DSM provides a virtual address space shared among processes on loosely coupled
processors.
2] DSM is sometimes also referred to as
Distributed Shared Virtual Memory (DSVM).
Advantages of DSM:
1] Simpler Abstraction
2] Better portability
3] Better performance
4] Flexibility

Q-2] Design and implementation issues of DSM.


Important issues involved in the design and implementation of DSM systems are as
follows:
1] Granularity:
Granularity refers to the block size of a DSM system, that is, to the unit of sharing
and the unit of data transfer across the network when a block fault occurs.
Selecting a proper block size is an important part of the design of a DSM system.
2] Structure of shared-memory space:
Structure refers to the layout of the shared data in memory. The structure of the
shared-memory space of a DSM system is normally dependent on the type of applications
that the DSM system is intended to support.
3] Memory coherence and access synchronization.
4] Data location and access:
To share data in a DSM system, it should be possible to locate and retrieve the data
accessed by a user process.
5] Replacement strategy:
A cache replacement strategy is necessary in the design of a DSM system.
6] Thrashing:
In a DSM system, data blocks migrate between nodes on demand. When blocks keep
moving back and forth between nodes at a high rate, the situation is known as thrashing,
and a DSM system must use a policy to avoid it.
Q-3] Structure of shared memory space.
Structure defines the abstract view of the shared-memory space to be presented to the
application programmers of a DSM system.
The structure and granularity of a DSM system are closely related.
The three commonly used approaches for structuring the shared memory space of a DSM
system are as follows:
1] No structuring:
Most DSM systems do not structure their shared-memory space. In these systems,
the shared-memory space is simply a linear array of words. Therefore, it is simple and easy
to design such a DSM system.
2] Structuring by data type:
In this method, the shared-memory space is structured either as a collection of
objects and collection of variables in the source language and Midway. The granularity in
DSM systems is an object or a variable.
3] Structuring as a database:
Another method is to structure the shared memory like a database. Its
shared-memory space is ordered as an associative memory called a tuple space.
Therefore, access to shared data is nontransparent, whereas in most other systems
access to shared data is transparent.

Q-4] Explain consistency model.


A consistency model refers to the degree of consistency that has to be maintained for the shared-memory data.
1] Strict Consistency Model:
The strict consistency model is the strongest form of memory coherence, having the
most stringent consistency requirement.
A shared-memory system is said to support the strict consistency model if the value
returned by a read operation on a memory address is always the same as the value written
by the most recent write operation to that address.
2] Sequential Consistency Model:
The sequential consistency model was proposed by Lamport.
A shared-memory system is said to support the sequential consistency model if all
processes see the same order of all memory access operations on the shared memory.
3] Causal Consistency Model:
The causal consistency model was proposed by Hutto and Ahamad.
In the causal consistency model, all processes see memory reference operations that are
potentially causally related in the same order.
4] Processor Consistency Model:
The processor consistency model, proposed by Goodman, is very similar to the
PRAM consistency model.
That is, a processor consistent memory is both coherent and adheres to the PRAM
consistency model.

Q-5] Explain Thrashing.


Thrashing is said to occur when the system spends a large amount of time transferring
shared data blocks from one node to another.
Thrashing may occur in the following situations:
1. When interleaved data accesses are made by processes on two or more nodes
2. When blocks with read-only permissions are repeatedly invalidated
The following methods may be used to solve the thrashing problem in DSM
systems:
1. Providing application-controlled locks.
2. Nailing a block to a node for a minimum amount of time.
3. Tailoring the coherence algorithm to the shared-data usage patterns.

Q-6] Explain Synchronization and its issues.


Distributed systems require synchronization mechanisms suited to their environment.
In particular, the following synchronization-related issues are described:
• Clock synchronization • Event ordering • Mutual exclusion
• Deadlock • Election algorithms
1] Clock Synchronization:
Every computer needs a timer mechanism to keep track of current time and also for
various accounting purposes such as calculating the time spent by a process.
In a distributed system, an application may have processes that run concurrently on
multiple nodes of the system.
2] Event Ordering:
Keeping the clocks in a distributed system synchronized to within 5 or 10 msec is an
expensive and nontrivial task.
The happened-before relation (denoted by →) on a set of events satisfies the following
conditions:
1. If a and b are events in the same process and a occurs before b, then a → b.
2. If a is the event of sending a message by one process and b is the event of the
receipt of the same message by another process, then a → b.
3. If a → b and b → c, then a → c (happened-before is a transitive relation).
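
Lamport's logical clocks implement the happened-before relation. The sketch below, with illustrative process names, shows the clock rules: increment on a local event or send, and on a receive advance the clock past the timestamp carried by the message.

class Process:
    def __init__(self, name):
        self.name = name
        self.clock = 0

    def local_event(self):
        self.clock += 1
        return self.clock

    def send(self):
        self.clock += 1
        return self.clock                        # timestamp travels with the message

    def receive(self, msg_timestamp):
        self.clock = max(self.clock, msg_timestamp) + 1
        return self.clock

p, q = Process("P"), Process("Q")
t_send = p.send()              # event a: send on P
t_recv = q.receive(t_send)     # event b: receipt on Q, so a → b and C(a) < C(b)
print(t_send, t_recv)          # 1 2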

3] Mutual exclusion:
An algorithm for implementing mutual exclusion must satisfy the following requirements:
1. Mutual exclusion: Given a shared resource accessed by multiple concurrent
processes, at any time only one process should access the resource.
2. No starvation: If every process that is granted the resource eventually releases it,
every request must be eventually granted.
4] Deadlock:
The sequence of events required to use a resource by a process is as follows:
1. Request: The process first makes a request for the resource.
2. Allocate: The system allocates the resource to the requesting process as soon as
possible.
3. Release: After the process has finished using the allocated resource, it releases the
resource to the system.
5] Election algorithm:
Election algorithms are based on the following assumptions:
1. Each process in the system has a unique priority number.
2. Whenever an election is held, the process having the highest priority number
among the currently active processes is elected as the coordinator.
3. On recovery, a failed process can take appropriate actions to re-join the set of
active processes.
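
A minimal sketch of the selection rule implied by these assumptions: the highest-priority currently active process becomes coordinator. Priority numbers and states are illustrative, and the messages a real election algorithm (such as the bully algorithm) exchanges to reach this decision are omitted.

processes = {11: "alive", 7: "alive", 23: "crashed", 5: "alive"}

def elect_coordinator(procs):
    active = [pid for pid, state in procs.items() if state == "alive"]
    return max(active)   # highest priority number among the active processes

print(elect_coordinator(processes))   # 11

# On recovery, a failed process re-joins the set of active processes and a
# new election may be held:
processes[23] = "alive"
print(elect_coordinator(processes))   # 23
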
5] Distributed File System

Q-1] What is a distributed file system and what are the features of a good distributed file system?
1] A file system is a subsystem of an operating system that performs file management
activities such as organization, storing, retrieval, sharing, and protection of files.
2] A file system provides an abstraction of a storage device.
A distributed file system supports the following:
1] Remote information sharing
2] User mobility
3] Availability

A distributed file system provides following types of services:


1] Storage service 2] Name service
Features of good distributed file system:
1] Transparency:
1] Structure transparency 2] Naming transparency
3] Replication transparency

2] User mobility:
In a distributed system, a user should not be forced to work on a specific node but
should have the flexibility to work on different nodes at different times.
3] Performance:
The performance of a file system is usually measured as the average amount of time
needed to satisfy client requests.
4] Simplicity:
Several issues influence the simplicity and ease of use of a distributed file system.
The most important issue is that the semantics of the distributed file system should be
easy to understand.
5] Scalability:
A good distributed file system should be designed to easily cope with the growth of
nodes and users in the system.
6] High availability:
A distributed file system should continue to function even when one or more of its
components fail.
Q-2] Explain file models.
Different file systems use different conceptual models of a file.
The two most commonly used criteria for file modelling are structure and
modifiability.
1] Unstructured and Structured Files:
1] In the unstructured model, a file is an unstructured sequence of data.
2] In this model, there is no substructure known to the file server and the contents
of each file of the file system appears to the file server as an uninterpreted
sequence of bytes.
3] The operating system is not interested in the information stored in the files.
4] UNIX and MS-DOS use this file model.
5] Another file model that is rarely used nowadays is the structured file model. In
this model, a file appears to the file server as an ordered sequence of records.
6] Records of different files of the same file system can be of different size.
7] Therefore, many types of files exist in a file system, each having different
properties.
2] Mutable and Immutable Files:
1] According to the modifiability criteria, files are of two types: mutable and
immutable.
2] Most existing operating systems use the mutable file model.
3] In this model, an update performed on a file overwrites its old contents to
produce the new contents.
4] A file is represented as a single stored sequence that is altered by each update
operation.

Q-3] Explain file accessing models.


1] How a client's request to access a file is serviced depends on the file-accessing
model used by the file system.
2] The file-accessing model of a distributed file system depends on two
factors-the method used for accessing remote files and the unit of data access.
1] Accessing Remote Files:
A distributed file system may use one of the following models to service a client's
file access request when the accessed file is a remote file:
1] Remote service model:
1] In this model, the processing of the client's request is performed at the
server's node.
2] The client's request for file access is delivered to the server, the server
machine performs the access request, and finally the result is forwarded back to the
client.
2] Data-caching model:
1] In the remote service model, every remote file access request results in
network traffic.
2] The data-caching model attempts to reduce the amount of network traffic by
taking advantage of the locality feature found in file accesses.
2] Unit of Data Transfer:
Unit of data transfer refers to the fraction of a file data that is transferred to and from
clients as a result of a single read or write operation.
The four commonly used data transfer models as follows:
1] File-level transfer model: In this model, when an operation requires file data
to be transferred across the network in either direction between a client and a
server, the whole file is moved.
2] Block-level transfer model: In this model, file data transfers across the
network between a client and a server take place in units of file blocks.
3] Byte-level transfer model: In this model, file data transfers across the
network between a client and a server take place in units of bytes. This model
provides maximum flexibility.
4] Record-level transfer model: In this model, file data transfers across the
network between a client and a server take place in units of records.

Q-4] Atomic transaction and properties.


An atomic transaction is a collection of operations that take place indivisibly in the
presence of failures and concurrent computations.
Transactions have the following properties:
1] Atomicity:
This property ensures that to the outside world all the operations of a transaction
appear to have been performed indivisibly. Two essential requirements for atomicity are
atomicity with respect to failures and atomicity with respect to concurrent access.
2] Serializability:
This property ensures that concurrently executing transactions do not interfere with
each other. This property is also known as the isolation property.
3] Permanence:
This property is also known as the durability property. It ensures that once a
transaction completes successfully, the results of its operations become permanent and
cannot be lost.
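
A minimal sketch of the all-or-nothing property over an illustrative in-memory key-value store: updates are buffered and become visible only at commit, and an abort leaves the store unchanged, so partial results are never exposed.

store = {"A": 100, "B": 50}

def run_transaction(operations):
    tentative = dict(store)             # work on a private copy of the data
    try:
        for key, delta in operations:
            tentative[key] += delta     # an unknown key raises KeyError -> abort
        store.update(tentative)         # commit: all updates appear at once
        return "committed"
    except Exception:
        return "aborted"                # abort: the store is left unchanged

print(run_transaction([("A", -30), ("B", +30)]), store)   # committed
print(run_transaction([("A", -30), ("C", +30)]), store)   # aborted, store unchanged
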
Q-5] Explain design principles.
Principles for designing distributed file systems as follows:
1] Clients have cycles to burn:
This principle says that, if possible, it is always preferable to perform an operation on
a client's machine rather than performing it on a server machine. This is because a server is
a common resource for all clients. This principle aims at enhancing the scalability of the
design.
2] Cache whenever possible:
Better performance, scalability and autonomy motivate this principle. Caching of
data at clients' sites improves overall system performance. Caching also enhances
scalability.
3] Exploit usage properties:
This principle says that, depending on usage properties, files should be grouped into
a small number of easily identifiable classes, and then class-specific properties should be
exploited for independent optimization for improved performance.
4] Minimize systemwide knowledge and change:
This principle is aimed at enhancing the scalability of design. The use of hierarchical
system structure is an application of this principle.
5] Trust the fewest possible entities:
This principle is aimed at enhancing the security of the system. For example, it is
much simpler to ensure security based on the integrity of the much smaller number of
servers rather than trusting thousands of clients.
6] Batch if possible:
Batching helps in improving performance greatly. For example, grouping operations
together can improve throughput.

4] Resource Management

Q-1] Explain Resource management.


1] Distributed systems are characterized by resource multiplicity and system transparency.
2] Every distributed system consists of a number of resources interconnected by a network.
3] A resource can be logical, such as a shared file, or physical, such as a CPU.
4] The techniques used for scheduling the processes of a distributed system can be broadly
classified into three types:
1. Task assignment approach
2. Load-balancing approach
3. Load-sharing approach

Q-2] Features of good global scheduling algorithm.


1] Dynamic in Nature:
A good process-scheduling algorithm should be able to take care of the dynamically
changing load of the various nodes of the system.
Process assignment decisions should be based on the current load of the system.
2] Quick Decision-Making Capability:
A good process-scheduling algorithm must make quick decisions.
This is an extremely important aspect of scheduling algorithms.
3] Balanced System Performance and Scheduling:
Several global scheduling algorithms collect global state information and use this
information in making process assignment decisions.
4] Scalability:
A scheduling algorithm should be capable of handling small as well as large
networks.
5] Fault Tolerance:
A good scheduling algorithm should not be disabled by the crash of one or more
nodes of the system.

Q-3] Explain:
1] Task Assignment Approach:
1] In this approach, a process is considered to be composed of multiple tasks
and the goal is to find an optimal assignment policy for the tasks of an individual
process.
Typical assumptions found in task assignment work are as follows:
• A process has already been split into pieces called tasks.
• The amount of computation required by each task and the speed of each processor
are known.
• The cost of processing each task on every node of the system is known.
• The inter-process communication costs between every pair of tasks are known.
2] Load Balancing Approach:
1] The scheduling algorithms using this
approach are known as load-balancing
algorithms.
2] These algorithms are also known as
load-leveling algorithms.
A Taxonomy of Load-Balancing Algorithms:
1] Static versus Dynamic
2] Deterministic versus Probabilistic
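
A minimal sketch of a dynamic load-balancing decision that places a new process on the currently least-loaded node; node names and load values (for example, queue lengths) are illustrative assumptions.

node_load = {"node1": 4, "node2": 1, "node3": 3}   # current load of each node

def place_process(loads):
    target = min(loads, key=loads.get)   # dynamic decision based on current state
    loads[target] += 1                   # the new process adds to that node's load
    return target

print(place_process(node_load))   # node2 (load was lowest)
print(place_process(node_load))   # node2 again while it remains the least loaded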

Q-4] Explain process migration and its features.


1] Process migration is the relocation of a process from its current location to
another node.
2] A process may be migrated either before it starts executing on its source
node or during the course of its execution.

Process migration involves the following major steps:


1. Selection of a process that should be migrated
2. Selection of the destination node to which the selected process should be
migrated
3. Actual transfer of the selected process to the destination node
Desirable Features of a Good Process Migration Mechanism:
1] Transparency:
Transparency is an important requirement for a system that supports process
migration.
The following levels of transparency can be identified:
1. Object access level.
2. System call and inter process communication level.
2] Efficiency:
Efficiency is another major issue in implementing process migration.
The main sources of inefficiency involved with process migration are as follows:
• The time required for migrating a process
• The cost of locating an object
• The cost of supporting remote execution once the process is migrated
3] Robustness:
The process migration mechanism must also be robust in the sense that the failure
of a node other than the one.
4] Communication between Co-processes of a Job:
When the processes of a single job are distributed over several nodes for parallel
processing, the migration mechanism should allow these co-processes to continue
communicating with one another even after one of them is migrated.
