UNIT-II
Basic Concepts
Almost all programs have some alternating cycle of CPU number crunching
and waiting for I/O of some kind. ( Even a simple fetch from memory takes a
long time relative to CPU speeds. )
In a simple system running a single process, the time spent waiting for I/O is
wasted, and those CPU cycles are lost forever.
A scheduling system allows one process to use the CPU while another is
waiting for I/O, thereby making full use of otherwise lost CPU cycles.
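As a rough, hypothetical illustration of this point ( the burst and wait times in the
sketch below are made-up numbers, not measurements ), a process that computes for 10 ms
and then waits 30 ms for I/O keeps the CPU only 25% busy by itself; letting a second,
similar process run during those waits roughly doubles the utilization:

/* Back-of-the-envelope sketch with assumed, illustrative timings: with one
   process the CPU idles during every I/O wait; with two processes whose
   bursts interleave, one can compute while the other waits. */
#include <stdio.h>

int main(void) {
    double cpu_burst = 10.0;   /* ms of computation per cycle ( assumed ) */
    double io_wait   = 30.0;   /* ms of I/O wait per cycle ( assumed )    */

    /* One process: the CPU is busy only during that process's bursts. */
    double single = cpu_burst / (cpu_burst + io_wait);

    /* Two processes: while one waits for I/O the other can compute,
       so busy time roughly doubles ( capped at 100% ). */
    double two = 2.0 * cpu_burst / (cpu_burst + io_wait);
    if (two > 1.0)
        two = 1.0;

    printf("utilization with 1 process : %.0f%%\n", single * 100.0);
    printf("utilization with 2 processes: %.0f%%\n", two * 100.0);
    return 0;
}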
The challenge is to make the overall system as "efficient" and "fair" as
possible, subject to varying and often dynamic conditions, and where
"efficient" and "fair" are somewhat subjective terms, often subject to shifting
priority policies.
CPU bursts vary from process to process, and from program to program, but an extensive study
shows frequency patterns similar to that shown in Figure 6.2:
Figure 6.2 Histogram of CPU-burst Duration
Whenever the CPU becomes idle, it is the job of the CPU Scheduler
( a.k.a. the short-term scheduler ) to select another process from the
ready queue to run next.
The ready queue is not necessarily a FIFO queue; both its storage
structure and the algorithm used to select the next process can vary.
There are several alternatives to choose from, as well as numerous
adjustable parameters for each algorithm, which is the basic subject of
this entire chapter.
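To make that point concrete, here is a minimal sketch of a ready queue as "a set of
runnable processes plus a selection rule." The pcb structure and the
pick-the-shortest-predicted-burst rule are illustrative assumptions, not a description
of any particular kernel; a FIFO or priority rule would plug into the same spot:

#include <stddef.h>

/* Illustrative process control block; field names are made up for this sketch. */
struct pcb {
    int pid;
    int predicted_burst;      /* e.g. an estimate based on past bursts */
    struct pcb *next;         /* linked list of ready processes        */
};

/* Return the process the scheduler would run next ( NULL if the queue is empty ).
   This particular rule picks the smallest predicted burst; swapping in a
   different rule changes the scheduling algorithm, not the rest of the system. */
struct pcb *select_next(struct pcb *ready_head) {
    struct pcb *best = ready_head;
    for (struct pcb *p = ready_head; p != NULL; p = p->next)
        if (p->predicted_burst < best->predicted_burst)
            best = p;
    return best;
}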
6.1.4 Dispatcher
The dispatcher is the module that gives control of the CPU to the
process selected by the scheduler. This function involves:
o Switching context.
o Switching to user mode.
o Jumping to the proper location in the newly loaded program.
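The compilable sketch below only illustrates that sequence of steps; the helper
functions are stand-ins that print what a real dispatcher ( architecture-specific code
inside the kernel ) would actually do, and all names here are invented for the example:

#include <stdio.h>

struct pcb { int pid; unsigned long program_counter; };   /* illustrative only */

static void save_context(struct pcb *p)    { printf("save registers of pid %d\n", p->pid); }
static void restore_context(struct pcb *p) { printf("load registers of pid %d\n", p->pid); }
static void enter_user_mode(void)          { printf("switch to user mode\n"); }

void dispatch(struct pcb *prev, struct pcb *next) {
    save_context(prev);        /* switch context: save the old process's state */
    restore_context(next);     /* ... and restore the newly selected one       */
    enter_user_mode();         /* leave kernel mode                            */
    printf("jump to PC 0x%lx of pid %d\n", next->program_counter, next->pid);
}

int main(void) {
    struct pcb a = { 1, 0x400000UL }, b = { 2, 0x401000UL };
    dispatch(&a, &b);          /* hand the CPU from process 1 to process 2 */
    return 0;
}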
6.2 Scheduling Criteria
There are several different criteria to consider when trying to select the "best"
scheduling algorithm for a particular situation and environment, including the
following ( a short worked sketch follows the list ):
o CPU utilization - Ideally the CPU would be busy 100% of the time, so
as to waste 0 CPU cycles. On a real system CPU usage should range
from 40% ( lightly loaded ) to 90% ( heavily loaded. )
o Throughput - Number of processes completed per unit time. May range
from 10 / second to 1 / hour depending on the specific processes.
o Turnaround time - Time required for a particular process to complete,
from submission time to completion. ( Wall clock time. )
o Waiting time - How much time processes spend in the ready queue
waiting their turn to get on the CPU.
( Load average - The average number of processes sitting in the
ready queue waiting their turn to get onto the CPU. Reported in 1-
minute, 5-minute, and 15-minute averages by "uptime" and
"w". )
o Response time - The time taken in an interactive program from the
issuance of a command to the start of a response to that command.
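As a small worked sketch of these definitions ( the arrival, start, and completion times
below are invented sample values for three single-burst processes ), each criterion can
be computed directly once the schedule is known:

#include <stdio.h>

/* Times for one process, all in the same arbitrary time unit. */
struct proc { int arrival, first_run, completion, burst; };

int main(void) {
    struct proc p[3] = {
        /* arrival, first time on the CPU, completion, total CPU burst */
        { 0,  0, 10, 10 },
        { 1, 10, 14,  4 },
        { 2, 14, 20,  6 },
    };
    double sum_tat = 0, sum_wait = 0, sum_resp = 0;
    for (int i = 0; i < 3; i++) {
        int tat  = p[i].completion - p[i].arrival;   /* turnaround time                    */
        int wait = tat - p[i].burst;                 /* ready-queue time ( no I/O assumed ) */
        int resp = p[i].first_run - p[i].arrival;    /* response time                      */
        sum_tat += tat;  sum_wait += wait;  sum_resp += resp;
    }
    printf("average turnaround %.2f, waiting %.2f, response %.2f\n",
           sum_tat / 3, sum_wait / 3, sum_resp / 3);
    return 0;
}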
In general one wants to optimize the average value of a criterion ( maximize
CPU utilization and throughput, and minimize all the others. ) However, sometimes
one wants to do something different, such as minimizing the maximum
response time.
Sometimes it is more desirable to minimize the variance of a criterion than its
actual value, i.e. users are more accepting of a consistent, predictable system
than an inconsistent one, even if it is a little bit slower.
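As a tiny numerical illustration ( the waiting times are fabricated for the example ),
two schedules can have the same average waiting time while one is far less predictable
than the other; by the argument above, most users would prefer the consistent one:

#include <stdio.h>

/* Print the mean and variance of a set of waiting times. */
static void stats(const char *name, const double w[], int n) {
    double mean = 0, var = 0;
    for (int i = 0; i < n; i++) mean += w[i];
    mean /= n;
    for (int i = 0; i < n; i++) var += (w[i] - mean) * (w[i] - mean);
    var /= n;
    printf("%s: mean wait %.1f, variance %.1f\n", name, mean, var);
}

int main(void) {
    double a[] = { 1, 1, 1, 17 };   /* usually fast, occasionally terrible */
    double b[] = { 5, 5, 5,  5 };   /* consistently moderate               */
    stats("schedule A", a, 4);      /* mean 5.0, variance 48.0             */
    stats("schedule B", b, 4);      /* mean 5.0, variance  0.0             */
    return 0;
}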
6.3 Scheduling Algorithms
The following subsections will explain several common scheduling strategies, looking
at only a single CPU burst each for a small number of processes. Obviously real
systems have to deal with a lot more simultaneous processes executing their CPU-I/O
burst cycles.
First-come, first-served ( FCFS ) scheduling is very simple - just a FIFO queue, like
customers waiting in line at the bank or the post office or at a copying machine.
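A minimal sketch of how little machinery FCFS needs ( the burst times here are
arbitrary sample values, separate from the example that follows ): each process simply
waits for the sum of the bursts queued ahead of it.

#include <stdio.h>

int main(void) {
    int burst[] = { 8, 2, 4 };       /* assumed CPU bursts, in FIFO arrival order */
    int n = 3, elapsed = 0;
    double total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d\n", i + 1, elapsed);   /* time spent behind earlier jobs */
        total_wait += elapsed;
        elapsed    += burst[i];                     /* this burst runs to completion  */
    }
    printf("average waiting time = %.2f\n", total_wait / n);
    return 0;
}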
Unfortunately, however, FCFS can yield some very long average wait
times, particularly if the first process to get there takes a long time. For
example, consider the following three processes: