
6) Unit II - Multiprocessor Scheduling

The document discusses scheduling principles for multiprocessor operating systems. It describes different approaches for synchronizing processes across multiple processors including no explicit synchronization, static assignment to processors, and dynamic load balancing. It also covers methods for assigning processes to processors such as master/slave and peer models. The key points are that scheduling depends on application granularity and number of processors, and that dynamic scheduling can help balance loads to improve processor utilization.


Operating Systems: Internals and Design Principles
Multiprocessor Scheduling
Synchronization Granularity and Processes

• No explicit synchronization among processes
  • each represents a separate, independent application or job
• Typical use is in a time-sharing system
• The approach taken will depend on the degree of granularity of applications and the number of processors available
• A disadvantage of static assignment is that one processor can be idle, with an empty queue, while another processor has a backlog
  • to prevent this situation, a common queue can be used
  • another option is dynamic load balancing
• Both dynamic and static methods require some way of assigning a process to a processor
• Approaches:
  • Master/Slave
  • Peer
Master/Slave:
• Key kernel functions always run on a particular processor
• Master is responsible for scheduling
• Slave sends service request to the master
• Is simple and requires little enhancement to a uniprocessor multiprogramming operating system
• Conflict resolution is simplified because one processor has control of all memory and I/O resources
Peer:
• Kernel can execute on any processor
• Each processor does self-scheduling from the pool of available processes
• Usually processes are not dedicated to processors
• A single queue is used for all processors
  • if some sort of priority scheme is used, there are multiple queues based on priority
• Thread execution is separated from the rest of the definition of a process
• An application can be a set of threads that cooperate and execute concurrently in the same address space
• In a multiprocessor system, kernel-level threads can be used to exploit true parallelism in an application
• Dramatic gains in performance are possible in multiprocessor systems
• Small differences in thread management and scheduling can have an impact on applications that require significant interaction among threads
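As a concrete illustration of kernel-level threads exploiting true parallelism, here is a toy POSIX-threads program (not from the slides) that splits a summation across four threads, each of which the kernel can schedule on a different processor:

/* parallel_sum.c - kernel-level threads letting one application
 * run on several processors at once.  Compile: cc parallel_sum.c -lpthread */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define N        1000000

struct part { long lo, hi; long long sum; };

static void *partial_sum(void *arg)
{
    struct part *p = arg;
    p->sum = 0;
    for (long i = p->lo; i < p->hi; i++)
        p->sum += i;                /* each thread works on its own slice */
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    struct part p[NTHREADS];
    long chunk = N / NTHREADS;
    long long total = 0;

    for (int i = 0; i < NTHREADS; i++) {
        p[i].lo = i * chunk;
        p[i].hi = (i == NTHREADS - 1) ? N : (i + 1) * chunk;
        pthread_create(&t[i], NULL, partial_sum, &p[i]);
    }
    for (int i = 0; i < NTHREADS; i++) {
        pthread_join(t[i], NULL);
        total += p[i].sum;
    }
    printf("sum 0..%d = %lld\n", N - 1, total);
    return 0;
}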
Approaches to thread scheduling:
• Load Sharing: processes are not assigned to a particular processor
• Gang Scheduling: a set of related threads scheduled to run on a set of processors at the same time, on a one-to-one basis
• Dedicated Processor Assignment: provides implicit scheduling defined by the assignment of threads to processors
• Dynamic Scheduling: the number of threads in a process can be altered during the course of execution
Load Sharing:
• Simplest approach and carries over most directly from a uniprocessor environment
• Versions of load sharing (a selection sketch follows the list):
  • first-come-first-served
  • smallest number of threads first
  • preemptive smallest number of threads first
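A sketch of how the first two policies differ when choosing the next job from the ready queue; the job structure and function names here are hypothetical, not from the slides:

/* load_sharing.c - picking the next job under two load-sharing
 * policies.  Compile: cc load_sharing.c */
#include <stdio.h>

struct job { int id; int nthreads; };

/* FCFS: the queue is kept in arrival order, so take index 0. */
static int pick_fcfs(const struct job *q, int n)
{
    (void)q; (void)n;
    return 0;
}

/* Smallest number of threads first: linear scan for the minimum. */
static int pick_fewest_threads(const struct job *q, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (q[i].nthreads < q[best].nthreads)
            best = i;
    return best;
}

int main(void)
{
    struct job queue[] = { {1, 8}, {2, 2}, {3, 5} };   /* arrival order */
    printf("FCFS picks job %d\n", queue[pick_fcfs(queue, 3)].id);
    printf("fewest-threads picks job %d\n",
           queue[pick_fewest_threads(queue, 3)].id);
    return 0;
}

The preemptive variant additionally lets an arriving job with fewer threads displace running ones, which the sketch omits.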
Disadvantages of load sharing:
• Central queue occupies a region of memory that must be accessed in a manner that enforces mutual exclusion
  • can lead to bottlenecks
• Preempted threads are unlikely to resume execution on the same processor
  • caching can become less efficient
Gang Scheduling:
• Simultaneous scheduling of the threads that make up a single process
• Useful for medium-grained to fine-grained parallel applications whose performance severely degrades when any part of the application is not running while other parts are ready to run
• Also beneficial for any parallel application
[Figure: Example of Scheduling Groups with Four and One Threads]

• Suppose that we have N processors and M applications, each of which has N or fewer threads. Then each application could be given 1/M of the available time on the N processors, using time slicing.
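Plugging in the four-and-one-thread case named in the figure above (an assumption: the specific numbers below reconstruct that example, since only the setup survives in this text), with N = 4 processors, uniform 1/M slices leave 3 of the 4 processors idle during the single-thread application's slice, whereas weighting the slices by thread count shrinks the waste:

/* gang_waste.c - idle-processor fractions for the four-and-one-thread
 * example under two ways of dividing time.  Compile: cc gang_waste.c */
#include <stdio.h>

int main(void)
{
    const double N = 4.0;                   /* processors */
    /* Uniform slicing: each application gets half the time; while the
     * 1-thread application runs, 3 of 4 processors are idle. */
    double waste_uniform  = 0.5 * (N - 1) / N;          /* = 37.5% */
    /* Slicing weighted by thread count: 4/5 vs 1/5 of the time. */
    double waste_weighted = (1.0 / 5.0) * (N - 1) / N;  /* = 15%   */
    printf("idle fraction, uniform slices:  %.1f%%\n", 100 * waste_uniform);
    printf("idle fraction, weighted slices: %.1f%%\n", 100 * waste_weighted);
    return 0;
}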
Dedicated Processor Assignment:
• When an application is scheduled, each of its threads is assigned to a processor that remains dedicated to that thread until the application runs to completion
• If a thread of an application is blocked waiting for I/O or for synchronization with another thread, then that thread's processor remains idle
  • there is no multiprogramming of processors
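One concrete mechanism for dedicating a processor to a thread is a CPU-affinity mask. The sketch below uses the Linux-specific pthread_attr_setaffinity_np(), a GNU extension shown as one possible mechanism, not something the slides prescribe:

/* affinity.c - pinning a thread to one processor on Linux.
 * Compile: cc affinity.c -lpthread */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *work(void *arg)
{
    (void)arg;
    /* sched_getcpu() reports which processor this thread is on. */
    printf("running on CPU %d\n", sched_getcpu());
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_attr_t attr;
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(0, &set);              /* dedicate processor 0 to the thread */
    pthread_attr_init(&attr);
    pthread_attr_setaffinity_np(&attr, sizeof set, &set);
    pthread_create(&t, &attr, work, NULL);
    pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}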
Dynamic Scheduling:
• For some applications it is possible to provide language and system tools that permit the number of threads in the process to be altered dynamically
  • this would allow the operating system to adjust the load to improve utilization
• Both the operating system and the application are involved in making scheduling decisions
• This approach is superior to gang scheduling or dedicated processor assignment for applications that can take advantage of it
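A minimal sketch of the dynamic idea, assuming POSIX threads and using sysconf() as a stand-in for a richer OS interface: the application sizes its worker pool from what the system reports at run time instead of hard-coding it.

/* dynamic.c - sizing a worker pool from the processor count the
 * system reports at run time.  Compile: cc dynamic.c -lpthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define MAX_WORKERS 64

static void *worker(void *arg)
{
    printf("worker %ld started\n", (long)arg);
    return NULL;
}

int main(void)
{
    /* Ask the OS how many processors are online and adapt to it. */
    long nproc = sysconf(_SC_NPROCESSORS_ONLN);
    if (nproc < 1)
        nproc = 1;                          /* fall back to one worker */

    pthread_t t[MAX_WORKERS];
    long nworkers = nproc < MAX_WORKERS ? nproc : MAX_WORKERS;
    for (long i = 0; i < nworkers; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (long i = 0; i < nworkers; i++)
        pthread_join(t[i], NULL);
    printf("ran %ld workers for %ld online processors\n", nworkers, nproc);
    return 0;
}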
