Unit 04
AVK 1
Operating System (22516) Unit No.4
4.1 Concept:
In a single-processor system, only one process can run at a time; any others must wait
until the CPU is free and can be rescheduled. The objective of multiprogramming is to
have some process running at all times, to maximize CPU utilization. The idea is
relatively simple. A process is executed until it must wait for the completion of some I/O
request. In a simple computer system, the CPU then just sits idle. All this waiting time is
wasted; no useful work is accomplished. With multiprogramming, we try to use this time
productively. Several processes are kept in memory at one time. When one process has to
wait, the operating system takes the CPU away from that process and gives the CPU to
another process. This pattern continues. Every time one process has to wait, another
process can take over use of the CPU.
Scheduling of this kind is a fundamental operating-system function. Almost all computer
resources are scheduled before use. The CPU is one of the primary computer resources.
Thus, its scheduling is central to operating-system design.
CPU-I/O Burst Cycle
The success of CPU scheduling depends on an observed property of processes: Process
execution consists of a cycle of CPU execution and I/O wait. Processes alternate between
these two states. Process execution begins with a CPU burst. That is followed by an I/O
burst, which is followed by another CPU burst, then another I/O burst, and so on.
Eventually, the final CPU burst ends with a system request to terminate execution. This is
shown in the figure.
The durations of CPU bursts have been measured extensively. Although they vary
greatly from process to process and from computer to computer, an I/O-bound
program typically has many short CPU bursts, while a CPU-bound program might have a
few long CPU bursts.
CPU Scheduler
Whenever the CPU becomes idle, the operating system must select one of the processes
in the ready queue to be executed. The selection process is carried out by the short-
term scheduler (or CPU scheduler). The scheduler selects a process from the
processes in memory that are ready to execute and allocates the CPU to that process.
Note that the ready queue is not necessarily a first-in, first-out (FIFO) queue.
In the various scheduling algorithms, a ready queue can be implemented as a FIFO
queue, a priority queue, a tree, or simply an unordered linked list. All the processes in
the ready queue are lined up waiting for a chance to run on the CPU. The records in the
queues are generally process control blocks (PCBs) of the processes.
Preemptive and Nonpreemptive Scheduling
CPU-scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for
example, as the result of an I/O request)
2. When a process switches from the running state to the ready state (for
example, when an interrupt occurs)
3. When a process switches from the waiting state to the ready state (for
example, at completion of I/O)
4. When a process terminates
When scheduling takes place only under circumstances 1 and 4, the scheduling scheme
is nonpreemptive; otherwise, it is preemptive.
Scheduling Criteria
CPU utilization. We want to keep the CPU as busy as possible.
Throughput. If the CPU is busy executing processes, then work is being done. One
measure of work is the number of processes that are completed per time unit, called
throughput. For long processes, this rate may be one process per hour; for short
transactions, it may be 10 processes per second.
Turnaround time. From the point of view of a particular process, the important criterion
is how long it takes to execute that process. The interval from the time of submission of a
process to the time of completion is the turnaround time. Turnaround time is the sum of
the periods spent waiting to get into memory, waiting in the ready queue, executing on
the CPU, and doing I/O.
Waiting time. The CPU scheduling algorithm does not affect the amount of time during
which a process executes or does I/O; it affects only the amount of time that a process
spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in
the ready queue.
Response time. In an interactive system, turnaround time may not be the best criterion.
Often, a process can produce some output fairly early and can continue computing new
results while previous results are being output to the user. Thus, another measure is the
time from the submission of a request until the first response is produced. This measure,
called response time, is the time it takes to start responding, not the time it takes to output
the response. The turnaround time is generally limited by the speed of the output device.
It is desirable to maximize CPU utilization and throughput and to minimize turnaround
time, waiting time, and response time.
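These criteria can be tied together with a tiny sketch (the helper name is ours, not from the syllabus, and it assumes the process does no I/O): turnaround time is completion minus arrival, and waiting time is then turnaround minus the CPU burst.

```python
# Turnaround and waiting time derived from a process's timestamps
# (simplified: assumes the process spends no time doing I/O).
def metrics(arrival, burst, completion):
    turnaround = completion - arrival   # total time in the system
    waiting = turnaround - burst        # time spent ready but not running
    return turnaround, waiting

# A process arriving at t = 0 with a 24 ms burst that completes at t = 30:
print(metrics(0, 24, 30))  # (30, 6)
```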
Scheduling Algorithms
First-Come, First-Served Scheduling
In FCFS scheduling, the process that requests the CPU first is allocated the CPU first.
The implementation is easily managed with a FIFO queue.
Consider the following set of processes that arrive at time 0, with the length of the CPU
burst given in milliseconds:
Process Burst Time
P1 24
P2 3
P3 3
If the processes arrive in the order P1, P2, P3 and this algorithm is used, then the result
is shown in the following Gantt chart:
P1 P2 P3
0 24 27 30
The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27
milliseconds for process P3. Thus, the average waiting time is (0 + 24 + 27)/3 = 17
milliseconds. If the processes arrive in the order P2, P3, P1, however, the results will be
as shown in the following Gantt chart:
P2 P3 P1
0 3 6 30
The average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds.
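The FCFS arithmetic above can be checked with a short sketch (a simplified model assuming all processes arrive at time 0; the function name is ours, not the textbook's):

```python
# Minimal FCFS sketch: processes run in arrival order; each one's
# waiting time is the sum of the bursts that ran before it.
def fcfs_waiting_times(bursts):
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)  # time spent waiting in the ready queue
        elapsed += burst
    return waits

# Order P1, P2, P3 with bursts 24, 3, 3 ms:
print(fcfs_waiting_times([24, 3, 3]))           # [0, 24, 27]
print(sum(fcfs_waiting_times([24, 3, 3])) / 3)  # 17.0
# Reversing the arrival order drops the average to 3.0 ms:
print(sum(fcfs_waiting_times([3, 3, 24])) / 3)  # 3.0
```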
Shortest-Job-First (SJF) Scheduling
This algorithm associates with each process the length of the process's next CPU burst.
When the CPU is available, it is assigned to the process that has the smallest next CPU
burst; scheduling depends on the length of the next CPU burst of a process, rather than
its total length. We use the term SJF because most people and textbooks use this term
to refer to this type of scheduling.
Non-preemptive SJF
Process Arrival Time CPU Burst Time (ms)
P0 0 5
P1 1 3
P2 2 8
P3 3 6
P0 P1 P3 P2
0 5 8 14 22
Waiting time of each process is as follows:
P0: 0 - 0 = 0 ms, P1: 5 - 1 = 4 ms, P2: 14 - 2 = 12 ms, P3: 8 - 3 = 5 ms.
The average waiting time is (0 + 4 + 12 + 5)/4 = 5.25 ms.
As an example of SJF scheduling, consider the following set of processes, with the length
of the CPU burst given in milliseconds:
Process Burst Time
P1 6
P2 8
P3 7
P4 3
Using SJF scheduling, we would schedule these processes according to the following
Gantt chart:
P4 P1 P3 P2
0 3 9 16 24
The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9
milliseconds for process P3, and 0 milliseconds for process P4. Thus, the average waiting
time is (3 + 16 + 9 + 0)/4 = 7 milliseconds. By comparison, if we were using the FCFS
scheduling scheme, the average waiting time would be 10.25 milliseconds.
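For processes that all arrive at time 0, non-preemptive SJF amounts to sorting by burst length. A minimal sketch (the function name is assumed) reproduces the averages above:

```python
# Non-preemptive SJF for processes that all arrive at time 0:
# run them in order of increasing burst length.
def sjf_waiting_times(bursts):
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed       # waiting time = start time here
        elapsed += bursts[i]
    return waits

# P1 = 6, P2 = 8, P3 = 7, P4 = 3 (the example above):
waits = sjf_waiting_times([6, 8, 7, 3])
print(waits)                    # [3, 16, 9, 0]
print(sum(waits) / len(waits))  # 7.0
```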
Preemptive SJF (Shortest-Remaining-Time-First) Scheduling
As an example, consider the following four processes, with the length of the CPU burst
given in milliseconds:
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
If the processes arrive at the ready queue at the times shown and need the indicated burst
times, then the resulting preemptive SJF or shortest remaining Time schedule is as
depicted in the following Gantt chart:
P1 P2 P4 P1 P3
0 1 5 10 17 26
Process P1 is started at time 0, since it is the only process in the queue. Process P2
arrives at time 1. The remaining time for process P1 (7 milliseconds) is larger than the
time required by process P2 (4 milliseconds), so process P1 is preempted, and process P2
is scheduled. The average waiting time for this example is ((10 - 1) + (1 - 1) + (17 - 2) +
(5 - 3))/4 = 26/4 = 6.5 milliseconds. Nonpreemptive SJF scheduling would result in an
average waiting time of 7.75 milliseconds.
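The shortest-remaining-time schedule can be simulated one millisecond at a time; this sketch (a simplified model with 1 ms ticks, names assumed) reproduces the 6.5 ms average above:

```python
# Preemptive SJF (shortest-remaining-time-first): at each 1 ms tick,
# run the arrived process with the least remaining burst.
def srtf_waiting_times(arrival, burst):
    n = len(burst)
    remaining, finish, t = list(burst), [0] * n, 0
    while any(remaining):
        ready = [i for i in range(n) if arrival[i] <= t and remaining[i] > 0]
        if not ready:
            t += 1
            continue
        i = min(ready, key=lambda j: remaining[j])
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            finish[i] = t
    # waiting time = turnaround time - burst time
    return [finish[i] - arrival[i] - burst[i] for i in range(n)]

# P1..P4 from the example: arrivals 0, 1, 2, 3 and bursts 8, 4, 9, 5
waits = srtf_waiting_times([0, 1, 2, 3], [8, 4, 9, 5])
print(waits)                    # [9, 0, 15, 2]
print(sum(waits) / len(waits))  # 6.5
```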
Priority Scheduling
The SJF algorithm is a special case of the general priority scheduling algorithm. A
priority is associated with each process, and the CPU is allocated to the process with the
highest priority. Equal-priority processes are scheduled in FCFS order. An SJF algorithm
is simply a priority algorithm where the priority (p) is the inverse of the (predicted) next
CPU burst. The larger the CPU burst, the lower the priority, and vice versa.
As an example, consider the following set of processes, assumed to have arrived at time
0, in the order P1, P2, P3, P4, and P5, with the length of the CPU burst given in
milliseconds:
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Using priority scheduling (here a low number indicates a high priority), we would
schedule these processes according to the following Gantt chart:
P2 P5 P1 P3 P4
0 1 6 16 18 19
The average waiting time is (6 + 0 + 16 + 18 + 1)/5 = 8.2 milliseconds.
Priority scheduling can be either preemptive or nonpreemptive. When a process arrives
at the ready queue, its priority is compared with the priority of the currently running
process. A preemptive priority scheduling algorithm will preempt the CPU if the priority
of the newly arrived process is higher than the priority of the currently running process.
A nonpreemptive priority scheduling algorithm will simply put the new process
at the head of the ready queue.
A major problem with priority scheduling algorithms is starvation, or indefinite
blocking. A process that is ready to run but waiting for the CPU can be considered blocked.
A priority scheduling algorithm can leave some low-priority processes waiting indefinitely
for the CPU. In a heavily loaded computer system, a steady stream of higher-priority
processes can prevent a low-priority process from ever getting the CPU.
A solution to the problem of indefinite blockage of low-priority processes is aging.
Aging is a technique of gradually increasing the priority of processes that wait in the
system for a long time.
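Aging can be sketched as follows (a toy model: the aging rate and the data layout are assumptions for illustration; lower numbers mean higher priority, as in the example above). Each time the scheduler picks a process, every process still waiting has its priority number reduced, so a long-waiting low-priority process eventually rises to the top.

```python
# Priority scheduling with aging to prevent starvation.
AGING_STEP = 1  # assumed rate: priority number improves by 1 per pass

def pick_next(ready):
    # ready: list of [name, priority] pairs; choose the best (lowest) number
    ready.sort(key=lambda p: p[1])
    chosen = ready.pop(0)
    for p in ready:                      # age everyone still waiting
        p[1] = max(0, p[1] - AGING_STEP)
    return chosen

# The five processes from the priority example above:
queue = [["P1", 3], ["P2", 1], ["P3", 4], ["P4", 5], ["P5", 2]]
order = []
while queue:
    order.append(pick_next(queue)[0])
print(order)  # ['P2', 'P5', 'P1', 'P3', 'P4']
```

Even P4, with the worst priority, is guaranteed to run eventually, since its number keeps shrinking while it waits.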
Round-Robin Scheduling
The round-robin (RR) scheduling algorithm is designed especially for time-sharing
systems. It is similar to FCFS scheduling, but preemption is added to enable the system
to switch between processes. A small unit of time, called a time quantum or time slice,
is defined. The ready queue is treated as a circular queue: the CPU scheduler goes
around the ready queue, allocating the CPU to each process for an interval of up to
1 time quantum.
Consider again the processes P1 (24 ms), P2 (3 ms), and P3 (3 ms), arriving at time 0,
with a time quantum of 4 milliseconds. The resulting RR schedule is:
P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30
The average waiting time is ((10 - 4) + 4 + 7)/3 = 17/3 = 5.66 milliseconds. In the RR
scheduling algorithm, no process is allocated the CPU for more than 1 time quantum in
a row (unless it is the only runnable process). If a process's CPU burst exceeds 1 time
quantum, that process is preempted and is put back in the ready queue. The RR
scheduling algorithm is thus preemptive.
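A round-robin sketch with a FIFO ready queue (simplified: all processes arrive at time 0; the names are assumed) reproduces the waiting times above:

```python
from collections import deque

# Round-robin sketch: a FIFO ready queue and a fixed time quantum.
def rr_waiting_times(bursts, quantum):
    n = len(bursts)
    remaining, waits, t = list(bursts), [0] * n, 0
    queue = deque(range(n))        # all processes arrive at time 0
    last_ran = [0] * n             # when each process last left the CPU
    while queue:
        i = queue.popleft()
        waits[i] += t - last_ran[i]      # time spent in the ready queue
        slice_ = min(quantum, remaining[i])
        t += slice_
        remaining[i] -= slice_
        last_ran[i] = t
        if remaining[i] > 0:
            queue.append(i)              # preempted: back of the queue
    return waits

# P1 = 24, P2 = 3, P3 = 3 with a 4 ms quantum:
waits = rr_waiting_times([24, 3, 3], 4)
print(waits)                    # [6, 4, 7]
print(sum(waits) / len(waits))  # 5.666...
```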
The performance of the RR algorithm depends on the size of the time quantum. At
one extreme, if the time quantum is extremely large, the RR policy is the same as the
FCFS policy. If the time quantum is extremely small (say, 1 millisecond), the RR
approach is called processor sharing. In software, we need also to consider the effect
of context switching on the performance of RR scheduling. Let us assume that we have
only one process of 10 time units. If the quantum is 12 time units, the process finishes
in less than 1 time quantum, with no overhead. If the quantum is 6 time units, however,
the process requires 2 quanta, resulting in a context switch. If the time quantum is 1
time unit, then nine context switches will occur, slowing the execution of the process
accordingly (Figure).
Although the time quantum should be large compared with the context-switch time, it
should not be too large. If the time quantum is too large, RR scheduling degenerates to
FCFS policy. A rule of thumb is that 80 percent of the CPU bursts should be shorter than
the time quantum.
Multilevel Queue Scheduling
A multilevel queue scheduling algorithm partitions the ready queue into several separate
queues. The processes are permanently assigned to one queue, generally based on some
property of the process, such as memory size, process priority, or process type. Each
queue has its own scheduling algorithm.
Figure: Multilevel queue scheduling (queues ordered from highest to lowest priority).
For example, separate queues might be used for foreground and background processes.
The foreground queue might be scheduled by an RR algorithm, while the background
queue is scheduled by an FCFS algorithm.
An example of a multilevel queue scheduling algorithm with five queues, listed below in
order of priority:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
Each queue has absolute priority over lower-priority queues. No process in the batch
queue, for example, could run unless the queues for system processes, interactive processes,
and interactive editing processes were all empty. If an interactive editing process entered
the ready queue while a batch process was running, the batch process would be preempted.
Another possibility is to time-slice among the queues. Here, each queue gets a certain
portion of the CPU time, which it can then schedule among its various processes. For
instance, in the foreground/background queue example, the foreground queue can be given
80 percent of the CPU time for RR scheduling among its processes, whereas the
background queue receives 20 percent of the CPU to give to its processes on an FCFS
basis.
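The strict-priority dispatch between queues can be sketched like this (queue names taken from the five-queue example above; the dispatcher itself is an assumed illustration, not from the notes):

```python
from collections import deque

# Multilevel queue sketch: fixed queues in strict priority order;
# always dispatch from the highest-priority non-empty queue.
queues = {
    "system": deque(), "interactive": deque(),
    "editing": deque(), "batch": deque(), "student": deque(),
}
LEVELS = ["system", "interactive", "editing", "batch", "student"]

def dispatch():
    for level in LEVELS:
        if queues[level]:
            return queues[level].popleft()
    return None  # all queues empty: CPU idle

queues["batch"].append("payroll")
queues["interactive"].append("shell")
print(dispatch())  # shell  (interactive outranks batch)
print(dispatch())  # payroll
```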
Deadlocks
In a multiprogramming environment, several processes may compete for a finite number
of resources. A process requests resources; if the resources are not available at that
time, the process enters a waiting state. Sometimes, a waiting process is never again
able to change state, because the resources it has requested are held by other waiting
processes. This situation is called a deadlock.
System Model
A system consists of a finite number of resources to be distributed among a number of
competing processes. The resources are divided into several types, such as Memory
space, CPU cycles, files, and I/O devices (such as printers and DVD drives).
A process must request a resource before using it and must release the resource after
using it. A process may request as many resources as it requires to carry out its
designated task. The number of resources requested may not exceed the total number of
resources available in the system; that is, a process cannot request three printers if the
system has only two.
Under the normal mode of operation, a process may utilize a resource in only the
following sequence:
1. Request. If the request cannot be granted immediately (for example, if the resource is
being used by another process), then the requesting process must wait until it can acquire
the resource.
2. Use. The process can operate on the resource (for example, if the resource is a printer,
the process can print on the printer).
3. Release. The process releases the resource.
Necessary Conditions
A deadlock can arise only if the following four conditions hold simultaneously in the
system:
1. Mutual exclusion. At least one resource must be held in a nonsharable mode; that is,
only one process at a time can use the resource.
2. Hold and wait. A process must be holding at least one resource and waiting to acquire
additional resources that are currently held by other processes.
3. No preemption. Resources cannot be preempted; a resource can be released only
voluntarily by the process holding it, after that process has completed its task.
4. Circular wait. The processes in the system form a circular list or chain where
each process in the list is waiting for a resource held by the next process in the list.
Deadlock Handling
We can deal with the deadlock problem in one of three ways:
• We can use a protocol to prevent or avoid deadlocks, ensuring that the system will
never enter a deadlock state.
• We can allow the system to enter a deadlock state, detect it, and recover.
• We can ignore the problem altogether and pretend that deadlocks never occur in the
system.
To ensure that deadlocks never occur, the system can use either a deadlock-prevention
or a deadlock-avoidance scheme. Deadlock prevention provides a set of methods for
ensuring that at least one of the necessary conditions cannot hold. These methods prevent
deadlocks by constraining how requests for resources can be made.
Deadlock Prevention
For a deadlock to occur, each of the four necessary conditions must hold. By ensuring
that at least one of these conditions cannot hold, we can prevent the occurrence of a
deadlock.
1.Mutual Exclusion
The mutual-exclusion condition must hold for nonsharable resources. For example, a
printer cannot be simultaneously shared by several processes. Sharable resources, in
contrast, do not require mutually exclusive access and thus cannot be involved in a
deadlock. Read-only files are a good example of a sharable resource. If several
processes attempt to open a read-only file at the same time, they can be granted
simultaneous access to the file. A process never needs to wait for a sharable resource. In
general, however, we cannot prevent deadlocks by denying the mutual-exclusion
condition, because some resources are intrinsically nonsharable.
2. Hold and Wait
To ensure that the hold-and-wait condition never occurs in the system, we must guarantee
that, whenever a process requests a resource, it does not hold any other resources. One
protocol that can be used requires each process to request and be allocated all its
resources before it begins execution.
An alternative protocol allows a process to request resources only when it has none. A
process may request some resources and use them. Before it can request any additional
resources, however, it must release all the resources that it is currently allocated.
Both these protocols have two main disadvantages. First, resource utilization may be low,
since resources may be allocated but unused for a long period.
Second, starvation is possible. A process that needs several popular resources may have
to wait indefinitely, because at least one of the resources that it needs is always allocated
to some other process.
3. No Preemption
The third necessary condition for deadlocks is that there be no preemption of resources
that have already been allocated. To ensure that this condition does not hold, we can use
the following protocol. If a process is holding some resources and requests another
resource that cannot be immediately allocated to it (that is, the process must wait), then
all resources currently being held are preempted. The preempted resources are added to
the list of resources for which the process is waiting. The process will be restarted only
when it can regain its old resources, as well as the new ones that it is requesting.
4. Circular Wait
The fourth and final condition for deadlocks is the circular-wait condition. One way to
ensure that this condition never holds is to impose a total ordering of all resource types
and to require that each process requests resources in an increasing order of enumeration.
A process cannot request a resource whose number is lower than that of any resource it
is already holding. Because every process acquires resources in the same global order,
no cycle of waiting processes can form.
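The resource-ordering protocol can be sketched with two locks (the numbering is an assumed example): no matter which order a task asks for them, they are always acquired in increasing order of their number, so no circular wait can occur.

```python
import threading

# Resources are numbered globally; locks are always taken in increasing
# order of that number, which rules out circular wait.
locks = {1: threading.Lock(), 2: threading.Lock()}  # e.g. 1 = printer, 2 = scanner

def acquire_in_order(ids):
    for rid in sorted(ids):      # the total ordering is enforced here
        locks[rid].acquire()

def release_all(ids):
    for rid in ids:
        locks[rid].release()

# A task that asks for the scanner first still locks 1 before 2:
acquire_in_order([2, 1])
print(all(locks[r].locked() for r in locks))  # True
release_all([2, 1])
```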
Deadlock Avoidance
In deadlock avoidance, the request for any resource will be granted if the resulting state
of the system doesn't cause deadlock in the system. The state of the system will
continuously be checked for safe and unsafe states.
In order to avoid deadlocks, each process must tell the OS the maximum number of
resources it may request to complete its execution.
The simplest and most useful approach states that the process should declare the
maximum number of resources of each type it may ever need. The Deadlock avoidance
algorithm examines the resource allocations so that there can never be a circular wait
condition.
Deadlock avoidance requires that the operating system be given in advance additional
information concerning which resources a process will request and use during its lifetime.
With this additional knowledge, it can decide for each request whether or not the process
should wait. To decide whether the current request can be satisfied or must be delayed,
the system must consider the resources currently available, the resources currently
allocated to each process, and the future requests and releases of each process.
Banker's Algorithm
The name was chosen because the algorithm could be used in a banking system to ensure
that the bank never allocated its available cash in such a way that it could no longer
satisfy the needs of all its customers.
When a new process enters the system, it must declare the maximum number of
instances of each resource type that it may need. This number may not exceed the total
number of resources in the system. When a user requests a set of resources, the system
must determine whether the allocation of these resources will leave the system in a safe
state. If it will, the resources are allocated; otherwise, the process must wait until some
other process releases enough resources.
Several data structures must be maintained to implement the banker's algorithm. These
data structures encode the state of the resource-allocation system.
This algorithm calculates the resources allocated, required, and available before
allocating resources to any process, to avoid deadlock. It maintains two matrices that
change dynamically: Matrix A contains the resources allocated to the different processes
at a given time, and Matrix B contains the resources still required by those processes at
the same time.
The algorithm also maintains a vector F of free (available) resources. It proceeds as
follows:
Step 1: When a process requests for a resource, the OS allocates it on a trial basis.
Step 2: After trial allocation, the OS updates all the matrices and vectors. This updating
can be done by the OS in a separate work area in the memory.
Step 3: The OS compares the vector F with each row of Matrix B.
Step 4: If F is smaller than every row of Matrix B (that is, even if all the free resources
were given to any one process, not a single process could complete its task), then the OS
concludes that the system is in an unsafe state.
Step 5: If F is greater than or equal to some row of Matrix B, the OS allocates all the
required resources to that process on a trial basis, on the assumption that after
completion the process will release all the resources allocated to it. These resources are
then added back to the free vector F.
Safety Algorithm
We can now present the algorithm for finding out whether or not a system is in a safe
state. This algorithm can be described as follows:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize
Work = Available and Finish[i] = false for all i.
2. Find an index i such that both Finish[i] == false and Need(i) <= Work. If no such i
exists, go to step 4.
3. Set Work = Work + Allocation(i) and Finish[i] = true, and go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state; otherwise, it is unsafe.
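The safety check at the heart of the banker's algorithm can be sketched as below. The five-process data set is a standard illustrative example, not taken from these notes.

```python
# Banker's safety-algorithm sketch: the system is safe if every process
# can finish in some order using only the currently free resources plus
# what each finishing process gives back.
def is_safe(available, allocation, need):
    work = list(available)
    finish = [False] * len(allocation)
    while True:
        progressed = False
        for i, row in enumerate(need):
            if not finish[i] and all(n <= w for n, w in zip(row, work)):
                # process i can run to completion and release its allocation
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                progressed = True
        if not progressed:
            return all(finish)

# Classic 5-process, 3-resource-type data set (assumed for illustration):
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
need = [[m - a for m, a in zip(mr, ar)] for mr, ar in zip(maximum, allocation)]
print(is_safe(available, allocation, need))  # True: safe order P1, P3, P4, P0, P2
```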