Module 4b


Multiprocessor OS

The functional capabilities often required in an OS for a multiprogrammed computer include resource allocation and management schemes, memory and dataset protection, prevention of system deadlocks, and handling of abnormal process termination. The OS also needs techniques for efficient utilization of resources and must provide I/O and load-balancing schemes.

The presence of more than one processing unit in the system introduces a new dimension into the design of the OS.

Three organizations have been utilized in the design of operating systems for multiprocessors: master-slave configuration, separate supervisor for each processor, and floating supervisor control.

In master-slave mode, one processor, called the master, maintains the status of all processors in the system and apportions the work to all the slave processors.
An example of the master-slave mode is the Cyber-170, where the OS is executed by one peripheral processor, P0. All other processors are treated as slaves to P0.
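
The following is a minimal Python sketch of the master-slave idea (not the Cyber-170 code itself). Threads stand in for processors, and the task values, worker count, and function names are illustrative assumptions.

```python
# Minimal sketch of the master-slave organization: one "master" thread
# maintains the work and apportions it to "slave" worker threads through
# a shared queue. Threads stand in for processors; the task list and
# worker count are illustrative only.
import queue
import threading

task_queue = queue.Queue()

def slave(worker_id):
    while True:
        task = task_queue.get()
        if task is None:              # sentinel: master tells this slave to stop
            break
        print(f"slave {worker_id} executing task {task}")

def master(num_slaves, tasks):
    workers = [threading.Thread(target=slave, args=(i,)) for i in range(num_slaves)]
    for w in workers:
        w.start()
    for t in tasks:                   # master apportions the work
        task_queue.put(t)
    for _ in workers:                 # one stop sentinel per slave
        task_queue.put(None)
    for w in workers:
        w.join()

master(num_slaves=3, tasks=range(10))
```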

When there is a separate supervisor system (kernel) running in each processor, the OS characteristics are very different from those of master-slave systems.
This is similar to the approach taken by computer networks, where each processor contains a copy of a basic kernel. Resource sharing occurs at a higher level. Each processor services its own needs.

Since there is some interaction between the processors, it is necessary for some of the supervisory code to be reentrant or replicated to provide separate copies for each processor. Although each supervisor has its own set of private tables, some tables are common and shared by the whole system, which creates table-access problems.
The method used in accessing the shared resources depends on the degree of coupling among the processors. A separate-supervisor OS is not as sensitive to a catastrophic failure as a master-slave system.

The floating-supervisor control scheme treats all the processors, as well as the other resources, symmetrically, as an anonymous pool of resources. This is the most difficult mode of operation and the most flexible.
In this mode the supervisor routine floats from one processor to another, although several processors may be executing supervisory service routines simultaneously. This type of system can attain better load balancing over all types of resources.

Examples of operating systems that execute in this mode are MVS and VM on the IBM 3081, and Hydra on C.mmp.
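
Below is a hedged sketch of the floating-supervisor idea, not of MVS, VM, or Hydra themselves. There is no dedicated master: every worker thread (standing in for a processor) pulls both user tasks and "supervisory" service requests from one anonymous shared pool, so supervisory work runs on whichever processor is free. The task kinds and payloads are invented for illustration.

```python
# Sketch of floating-supervisor control: no master processor exists; any
# processor may execute supervisory service routines. Threads stand in
# for processors and draw work from a single anonymous pool.
import queue
import random
import threading

pool = queue.Queue()

def processor(pid):
    while True:
        kind, payload = pool.get()
        if kind == "stop":
            break
        if kind == "supervisor":
            print(f"processor {pid} running supervisory routine: {payload}")
        else:
            print(f"processor {pid} running user task: {payload}")

procs = [threading.Thread(target=processor, args=(i,)) for i in range(4)]
for p in procs:
    p.start()

# A mix of user work and supervisory service requests, in arbitrary order.
work = [("user", n) for n in range(8)] + \
       [("supervisor", s) for s in ("page fault", "I/O completion")]
random.shuffle(work)
for item in work:
    pool.put(item)
for _ in procs:
    pool.put(("stop", None))
for p in procs:
    p.join()
```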

Software requirements for multiprocessors


Program control structures are provided to aid programmers in developing efficient parallel algorithms. Three basic nonsequential program control structures have been identified. These control structures are characterized by the fact that the programmer need only focus on a small program unit and not on the overall control of the computation.

The first is the message-based organization, which was used in the Cm* OS. Here computation is performed by multiple homogeneous processes that execute independently and interact via messages; the grain size of a typical process depends on the system. The second is the chore structure, in which all code is broken into small units. The process that executes a unit of code is called a chore. An important characteristic of a chore is that once it begins execution, it runs to completion; to avoid long waits, chores are kept small.
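
The following is a small Python sketch of the message-based organization, not the Cm* implementation. Threads play the role of homogeneous processes, per-process mailboxes are plain queues, and the two-stage pipeline (square, then print) is an invented example.

```python
# Sketch of a message-based organization: independent worker "processes"
# (threads here) interact only by exchanging messages through mailboxes.
import queue
import threading

mailbox_square = queue.Queue()    # mailbox of the "square" process
mailbox_print = queue.Queue()     # mailbox of the "print" process

def square_process():
    while True:
        msg = mailbox_square.get()
        if msg is None:
            mailbox_print.put(None)    # forward the shutdown message downstream
            break
        mailbox_print.put(msg * msg)   # interaction happens only via messages

def print_process():
    while True:
        msg = mailbox_print.get()
        if msg is None:
            break
        print("received", msg)

threads = [threading.Thread(target=square_process),
           threading.Thread(target=print_process)]
for t in threads:
    t.start()
for n in range(5):
    mailbox_square.put(n)
mailbox_square.put(None)
for t in threads:
    t.join()
```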

The third control structure is the production system, now often used in AI. Productions are expressions of the form <antecedent, consequent>. Whenever the Boolean antecedent evaluates to true, the consequent may be performed. In contrast to chores, production consequents may or may not include code that blocks.

In a production system, four scheduling strategies are required: (a) to control the selection of the antecedents to be evaluated next, (b) to order the execution of the selected antecedents, (c) to select the subset of runnable consequents to be executed, and (d) to order the execution of the selected consequents.
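
A minimal sequential sketch of a production system follows. The working memory, the two rules, and the halting condition are invented for illustration, and the four scheduling decisions are collapsed to simple "in order, first enabled" strategies, which a real parallel production system would refine.

```python
# Minimal sketch of a production system: each production is an
# (antecedent, consequent) pair over a shared working memory. Each cycle,
# antecedents are evaluated and one enabled consequent is executed.
working_memory = {"x": 0, "done": False}

productions = [
    (lambda m: m["x"] < 3,                        # antecedent: Boolean test
     lambda m: m.update(x=m["x"] + 1)),           # consequent: action
    (lambda m: m["x"] >= 3 and not m["done"],
     lambda m: m.update(done=True)),
]

while True:
    enabled = [cons for ante, cons in productions if ante(working_memory)]
    if not enabled:                # no antecedent is true: computation halts
        break
    enabled[0](working_memory)     # scheduling strategy: run first enabled consequent
    print(working_memory)
```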

OS requirements

Basic goals of an OS:

-- Provide a programmer interface to the machine
-- Manage resources
-- Provide mechanisms to implement policies
-- Facilitate matching applications to the machine

The sharing of the multiple processors may be achieved by placing several processes together in shared memory and providing a mechanism for rapidly switching the attention of a processor from one process to another. This operation is often called context switching.

Sharing of the processors introduces three subordinate problems:
1. The protection of the resources of one process from willful or accidental damage by other processes.
2. The provision for communication among processes and between user processes and supervisor processes.
3. The allocation of resources among processes so that resource demands can always be fulfilled.

Exploiting concurrency for multiprocessing

A parallel program for a multiprocessor consists of two or more interacting processes. A process is a sequential program that executes concurrently with other processes.

Language features to exploit parallelism

Processes are concurrent if their executions overlap in time. No prior knowledge is available about the speeds at which concurrent processes are executed.

One way to denote concurrency is to use FORK and JOIN statements.
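
As a sketch of the FORK/JOIN idea in Python: starting a thread plays the role of FORK (the new activity runs concurrently with its creator), and waiting for the thread plays the role of JOIN. The two branch functions and the result dictionary are illustrative placeholders.

```python
# Sketch of FORK/JOIN-style concurrency using threads: "FORK" corresponds
# to starting a thread, "JOIN" to waiting for its completion.
import threading

results = {}

def branch_a():
    results["a"] = sum(range(1000))          # one independent computation

def branch_b():
    results["b"] = sum(i * i for i in range(100))

t = threading.Thread(target=branch_b)
t.start()        # FORK: branch_b now runs concurrently with this code
branch_a()       # the forking process continues with its own branch
t.join()         # JOIN: wait until the forked branch has finished
print(results["a"] + results["b"])
```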

A very common problem occurs when two or more concurrent processes share data that is modifiable. If a process is allowed to access a set of variables that is being updated by another process concurrently, erroneous results will occur in the computation.

Controlled access to the shared variables is therefore required, to guarantee that a process has mutually exclusive access to those sections of program and data that are nonreentrant or modifiable. Such program segments are called critical sections.

Following are the assumptions regarding critical sections:
1. Mutual exclusion: at most one process can be in a critical section at a time.
2. Termination: the critical section is executed in a finite time.
3. Fair scheduling: a process attempting to enter the critical section will eventually do so in a finite time.
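
The sketch below illustrates a critical section protected by a lock, satisfying the assumptions above; the shared counter, thread count, and iteration count are illustrative.

```python
# Sketch of a critical section: the increment of the shared counter is
# modifiable, nonreentrant code, so at most one thread executes it at a
# time (mutual exclusion), and each pass through it is short and finite
# (termination). Without the lock, updates could be lost.
import threading

counter = 0
lock = threading.Lock()

def worker():
    global counter
    for _ in range(100_000):
        with lock:            # enter critical section; released on block exit
            counter += 1      # shared, modifiable data

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                # 400000 with the lock; may be less without it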

The deadlock occurs because the two processes enter their critical sections in opposite order, creating a situation in which each process waits indefinitely for the completion of a region within the other process. This circular wait is a condition for deadlock. Deadlock is possible because it is assumed that a resource cannot be released by a process that is waiting for the allocation of another resource.
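
A brief sketch of the scenario: if one process acquired lock_a then lock_b while the other acquired lock_b then lock_a, each could hold one resource while waiting forever for the other. Acquiring the locks in the same global order in both processes, as below, removes the circular wait. The lock and function names are illustrative.

```python
# Sketch of avoiding the circular wait: both processes acquire the two
# locks in the same order (a, then b), so neither can hold one lock while
# waiting indefinitely for the other.
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def process_1():
    with lock_a:              # acquire in the global order a, b
        with lock_b:
            print("process 1 inside both critical sections")

def process_2():
    with lock_a:              # same order, so no circular wait can form
        with lock_b:
            print("process 2 inside both critical sections")

t1 = threading.Thread(target=process_1)
t2 = threading.Thread(target=process_2)
t1.start(); t2.start()
t1.join(); t2.join()
```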

From this technique, an algorithm can be designed to find a subset of resources that would incur the minimum cost if preempted. This approach means that after each preemption, the detection algorithm must be reinvoked to check whether a deadlock still exists.

A process which has a resource preempted from it must make a subsequent request for the resource to be reallocated to it.
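
The following is a simplified sketch of such a detection-and-preemption step on a wait-for graph, assuming each process waits for at most one other process; the process names, wait-for edges, and preemption costs are hypothetical. After the victim is preempted, detection would be rerun on the updated graph.

```python
# Sketch of deadlock detection on a wait-for graph: an edge p -> q means
# process p is waiting for a resource held by q. A cycle means deadlock;
# the victim chosen for preemption is the cycle member with the lowest
# (hypothetical) preemption cost.
def find_cycle(wait_for):
    for start in wait_for:
        path, node = [], start
        while node in wait_for and node not in path:
            path.append(node)
            node = wait_for[node]
        if node in path:                      # walked back onto the path: cycle
            return path[path.index(node):]
    return None

wait_for = {"P1": "P2", "P2": "P3", "P3": "P1", "P4": "P1"}   # hypothetical state
cost = {"P1": 5, "P2": 2, "P3": 9, "P4": 1}                   # hypothetical costs

cycle = find_cycle(wait_for)
if cycle:
    victim = min(cycle, key=cost.get)
    print("deadlock among", cycle, "- preempt", victim)
```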

Synchronization is a general term for timing constraints of this kind imposed on interactions between concurrent processes.
The simplest form of interaction is an exchange of timing signals between two processes.

An example is the use of interrupts to signal the completion of asynchronous peripheral operations to the processor. Another type of timing signal, called an event, was used in early multiprocessing systems to synchronize concurrent processes.
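
As a sketch of this simplest interaction: one thread plays the role of an asynchronous peripheral operation and sets an event on completion, while another blocks until that timing signal arrives, much as a processor waits on a completion interrupt. The delay and function names are illustrative.

```python
# Sketch of an exchange of timing signals: the "peripheral" thread signals
# completion of an asynchronous operation; the consumer waits for it.
import threading
import time

io_complete = threading.Event()

def peripheral_operation():
    time.sleep(0.1)           # stand-in for an asynchronous I/O transfer
    io_complete.set()         # timing signal: completion "interrupt"

def consumer():
    io_complete.wait()        # block until the timing signal is received
    print("I/O completion signalled, consumer proceeds")

threading.Thread(target=peripheral_operation).start()
consumer()
```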

Program and algorithm restructuring


Two major issues in decomposing an algorithm can be identified: partitioning and assignment. Partitioning is the division of an algorithm into procedures, modules, and processes. Assignment refers to the allocation of these units to processors.
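
A small sketch of both steps on a toy algorithm (summing an array): the work is partitioned into contiguous chunks, and each chunk is statically assigned to one "processor", represented here by a thread. The data size, chunk scheme, and processor count are illustrative assumptions.

```python
# Sketch of partitioning and assignment: the algorithm is partitioned into
# units (chunks of the array) and each unit is assigned to a processor
# (a thread here) that computes a partial result.
import threading

data = list(range(1_000_000))
num_processors = 4
partial = [0] * num_processors

def worker(pid, chunk):
    partial[pid] = sum(chunk)                     # unit of work on one processor

chunk_size = (len(data) + num_processors - 1) // num_processors
threads = []
for pid in range(num_processors):
    chunk = data[pid * chunk_size:(pid + 1) * chunk_size]    # partitioning
    t = threading.Thread(target=worker, args=(pid, chunk))   # assignment
    threads.append(t)
    t.start()
for t in threads:
    t.join()
print(sum(partial) == sum(data))                  # True: same result as sequential sum
```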
