Chapter 5 and Chapter 6 - OS
Introduction
CPU scheduling is the basis of multiprogrammed operating systems.
By switching the CPU among processes, the operating system can
make the computer more productive.
When a computer is multiprogrammed, it frequently has multiple
processes competing for the CPU at the same time.
This situation occurs whenever two or more processes are
simultaneously in the ready state.
If only one CPU is available, a choice has to be made as to which
process to run next.
The part of the operating system that makes the choice is called the
scheduler, and the algorithm it uses is called the scheduling
algorithm.
CPU Scheduling
Back in the old days of batch systems with input in the form of
card images on a magnetic tape, the scheduling algorithm was simple:
just run the next job on the tape.
Cont…
Preemptive Scheduling
CPU scheduling decisions may take place under the
following four circumstances:
1. When a process switches from the running state to the
waiting state (for example, an I/O request, or invocation of wait
for the termination of one of the child processes).
2. When a process switches from the running state to the ready
state (for example, when an interrupt occurs).
3. When a process switches from the waiting state to the ready
state (for example, completion of I/O).
4. When a process terminates.
When scheduling takes place only under circumstances 1 and 4, the
scheme is non-preemptive; otherwise, it is preemptive.
Cont…
Dispatcher
The dispatcher is the module that gives control of the CPU to the
process selected by the scheduler. This involves switching context,
switching to user mode, and jumping to the proper location in the
user program to restart it.
CPU Scheduling :- Scheduling Criteria
◦ Max CPU utilization
◦ Max throughput
◦ Min turnaround time
◦ Min waiting time
◦ Min response time
CPU Scheduling
Scheduling Algorithms
1. First-Come, First-Served (FCFS) Scheduling
The simplest of all scheduling algorithms is the non-preemptive FCFS
scheduling algorithm.
Processes are assigned the CPU in the order they request it.
When the first job enters the system from the outside in the
morning, it is started immediately and allowed to run as long as it
wants to.
As other jobs come in, they are put onto the end of the queue.
When the running process blocks, the first process on the queue
is run next.
When a blocked process becomes ready, like a newly arrived job,
it is put on the end of the queue.
CPU Scheduling
Example:
Process Burst time
P1 = 24, P2 = 3, P3 = 3
Suppose that the processes arrive in the order: P1, P2,
P3.
The Gantt chart for this schedule is:

| P1 (0–24) | P2 (24–27) | P3 (27–30) |

The waiting times are 0, 24, and 27, so the average waiting time is
(0 + 24 + 27) / 3 = 17, and the average turnaround time is
(24 + 27 + 30) / 3 = 27.
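As an illustration (not part of the original slides), a minimal C sketch
that computes these averages, with the burst times hardcoded from the
table above and all processes assumed to arrive at time 0:

#include <stdio.h>

/* Minimal FCFS sketch: each process waits until all earlier
   arrivals have finished, then runs to completion. */
int main(void) {
    int burst[] = {24, 3, 3};            /* P1, P2, P3 */
    int n = 3, start = 0;
    double total_wait = 0, total_turnaround = 0;

    for (int i = 0; i < n; i++) {
        total_wait += start;              /* waits until the CPU is free */
        total_turnaround += start + burst[i];
        start += burst[i];
    }
    printf("Avg waiting time: %.2f\n", total_wait / n);          /* 17.00 */
    printf("Avg turnaround time: %.2f\n", total_turnaround / n); /* 27.00 */
    return 0;
}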
CPU Scheduling
2. Shortest Job First (SJF) Scheduling
It is a non-preemptive batch algorithm that assumes the run times
are known in advance.
Associate with each process the length of its next CPU burst. Use
these lengths to schedule the process with the shortest time.
When several equally important jobs are sitting in the input queue
waiting to be started, the scheduler picks the shortest job first.
For example, consider four jobs A, B, C, and D with run
times of 8, 4, 4, and 4 minutes, respectively.
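Running them shortest-first (B, C, D, A) gives turnaround times of 4, 8,
12, and 20 minutes, a mean of 11 minutes, versus a mean of 14 minutes if
they run in the order A, B, C, D. A minimal C sketch of this calculation,
assuming all four jobs arrive together:

#include <stdio.h>
#include <stdlib.h>

/* Non-preemptive SJF sketch for the four-job example above
   (A=8, B=4, C=4, D=4 minutes, all arriving at time 0). */
static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int burst[] = {8, 4, 4, 4};
    int n = 4, clock = 0;
    double total_turnaround = 0;

    qsort(burst, n, sizeof burst[0], cmp);   /* shortest job first */
    for (int i = 0; i < n; i++) {
        clock += burst[i];                   /* job finishes at 'clock' */
        total_turnaround += clock;
    }
    printf("Avg turnaround: %.2f min\n", total_turnaround / n); /* 11.00 */
    return 0;
}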
CPU Scheduling
3. Shortest Remaining Time First (SRTF) Scheduling
A preemptive version of SJF: the scheduler always chooses the process
whose remaining run time is shortest, so a newly arrived job with a
shorter remaining time preempts the running process.
CPU Scheduling
4. Round-Robin (RR) Scheduling
One of the oldest, simplest, fairest, and most widely used algorithms.
Each process is assigned a time interval called its quantum, during
which it is allowed to run.
If the process is still running at the end of the quantum, the CPU is
preempted and given to another process.
It is easy to implement: all the scheduler needs to do is maintain a list
of runnable processes.
When a process uses up its quantum, it is put on the end of the list.
The only interesting issue with round robin is the length of the
quantum.
CPU Scheduling
Fig. Round-robin scheduling. (a) The list of runnable processes. (b) The
list of runnable processes after B uses up its quantum.
Setting the quantum too short causes too many process switches
and lowers CPU efficiency, but setting it too long may cause
poor response to short interactive requests. A quantum of around
20–50 msec. is often a reasonable compromise.
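A minimal C sketch of round-robin with quantum q (values assumed for
illustration; the burst times reuse the earlier FCFS example, and new
arrivals are ignored for simplicity):

#include <stdio.h>

/* Round-robin sketch: cycle through runnable processes, giving
   each at most one quantum q per pass. */
int main(void) {
    int remaining[] = {24, 3, 3};   /* P1, P2, P3 */
    int n = 3, q = 4, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] <= 0) continue;
            int slice = remaining[i] < q ? remaining[i] : q;
            time += slice;                 /* run one quantum (or less) */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                printf("P%d finishes at t=%d\n", i + 1, time);
                done++;
            }
        }
    }
    return 0;
}

With q = 4 this prints completions at t = 7 (P2), t = 10 (P3), and
t = 30 (P1).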
CPU Scheduling
5. Priority Scheduling
Round-Robin scheduling makes the implicit assumption that all
processes are equally important.
The need to take external factors into account leads to priority
scheduling.
The basic idea is that each process is assigned a priority, and the
runnable process with the highest priority is allowed to run
(here, the smallest integer means the highest priority).
It can be preemptive or non-preemptive.
Problem: starvation (or indefinite blocking) – low-priority processes
may never execute.
Solution: aging – as time progresses, increase the priority of processes
that have been waiting; a sketch follows.
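A minimal C sketch of priority selection with aging; the priorities and
ready flags are made-up values, and smaller numbers mean higher
priority, matching the text:

#include <stdio.h>

/* Priority-scheduling sketch: pick the ready process with the
   smallest priority number; aging raises waiters over time. */
#define N 3
int priority[N] = {3, 1, 2};    /* hypothetical priorities */
int ready[N]    = {1, 0, 1};    /* 1 = waiting in the ready queue */

int pick_next(void) {
    int best = -1;
    for (int i = 0; i < N; i++)
        if (ready[i] && (best < 0 || priority[i] < priority[best]))
            best = i;
    return best;
}

void age_waiters(void) {
    for (int i = 0; i < N; i++)  /* aging: raise priority of waiters */
        if (ready[i] && priority[i] > 0)
            priority[i]--;
}

int main(void) {
    printf("next process: P%d\n", pick_next() + 1);        /* P3 */
    age_waiters();
    printf("after aging, P1 priority: %d\n", priority[0]); /* 2 */
    return 0;
}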
CPU Scheduling
6. Multilevel Queue
Ready queue is partitioned into separate queues.
Example: foreground (interactive), background (batch)
Each queue has its own scheduling algorithm.
Example: foreground – RR, background - FCFS
Scheduling must be done between the queues.
(Possibility of starvation).
Time slice - each queue gets a certain amount of CPU time
which it can schedule amongst its processes.
Example: 80% to foreground in RR, 20% to background in
FCFS.
CPU Scheduling
Example of multilevel feedback queue
Three queues:
◦ Q0 - time quantum 8 milliseconds
◦ Q1 - time quantum 16 milliseconds
◦ Q2 - FCFS
Scheduling: a new job enters queue Q0, which is served RR with an
8-millisecond quantum. If it does not finish within 8 milliseconds, it
is moved to the tail of Q1. Jobs in Q1 are served RR with a
16-millisecond quantum (only when Q0 is empty); a job that still does
not complete is moved to Q2, which is served FCFS only when Q0 and Q1
are empty. A simplified sketch follows.
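A much-simplified C sketch of the demotion logic, with hypothetical
burst times and new arrivals ignored:

#include <stdio.h>

/* MLFQ sketch for the three queues above: Q0 (RR, q=8),
   Q1 (RR, q=16), Q2 (FCFS, runs to completion). */
#define NJOBS 3
int main(void) {
    int remaining[NJOBS] = {5, 20, 40};  /* hypothetical burst times */
    int quantum[3] = {8, 16, 0};         /* 0 = run to completion */

    for (int level = 0; level < 3; level++) {
        for (int i = 0; i < NJOBS; i++) {
            if (remaining[i] <= 0) continue;
            int slice = quantum[level];
            if (slice == 0 || remaining[i] <= slice) {
                printf("job %d completes in Q%d\n", i, level);
                remaining[i] = 0;
            } else {
                remaining[i] -= slice;   /* used its full quantum */
                printf("job %d demoted to Q%d\n", i, level + 1);
            }
        }
    }
    return 0;
}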
CPU Scheduling
Summary
FCFS is simple but causes short jobs to wait for long jobs.
SJF is optimal, giving the shortest waiting time, but needs to know
the length of the next burst.
SJF is a type of priority scheduling – it may suffer from starvation,
which can be prevented using aging.
RR gives good response time and is preemptive. Problem:
selecting the quantum.
Multilevel queue algorithms use the best of each algorithm by
having more than one queue. Feedback queues allow jobs to
move from queue to queue.
CPU Scheduling
Exercise
Suppose that the following jobs arrive as indicated for scheduling and
execution on a single CPU.
Job  Arrival time  Size (msec.)  Priority
J1   0             12            1 (Gold)
J2   2             4             3 (Bronze)
J3   5             2             1 (Gold)
J4   8             10            3 (Bronze)
J5   10            6             2 (Silver)

1. Draw a Gantt chart showing FCFS and calculate the average waiting
time and average turnaround time.
2. Draw a Gantt chart showing non-preemptive SJF and calculate the
average waiting time and average turnaround time.
3. Draw a Gantt chart showing SRTF and calculate the average waiting
time and average turnaround time.
4. Draw a Gantt chart showing RR (q = 4 msec.) and calculate the
average waiting time and average turnaround time.
5. Draw a Gantt chart showing preemptive priority scheduling and
calculate the average waiting time and average turnaround time.
Chapter 6:- Concurrency Control
Unit Structure
Objectives
Principle of Concurrency
Race Condition
Mutual Exclusion
Semaphores
Monitors
Summary
Model Question
Objectives
After going through this unit, you will be able to:
Understand concurrency control, race conditions, and the
critical-section problem, whose solutions can be used to ensure
the consistency of shared data.
Describe both software and hardware solutions to the
critical-section problem.
Principle of concurrency
A cooperating process is one that can affect or be affected by
the other processes executing in the system.
Cooperating processes may either directly share a logical
address space (that is, both code and data), or be allowed to share
data only through files. The former case is achieved through the
use of lightweight processes or threads. Concurrent access to
shared data may result in data inconsistency.
In this lecture, we discuss various mechanisms to ensure the
orderly execution of cooperating processes that share a logical
address space, so that data consistency is maintained.
Cont…
The concurrent processes executing in the operating system may be either
independent processes or cooperating processes.
A process is independent if it cannot affect or be affected by the other
processes executing in the system.
On the other hand, a process is cooperating if it can affect or be affected
by the other processes executing in the system.
Processes (or threads) that cooperate to solve problems must exchange
information. Two approaches:
◦ Shared memory (Shared memory is more efficient (no copying), but
isn’t always possible)
◦ Message passing (copying information from one process address space
to another)
There are several reasons for providing an environment that allows
process cooperation:
Information sharing
Computation speedup
Modularity
Convenience
Race condition
A situation where several processes access and manipulate the
same data concurrently, and the outcome of the execution
depends on the particular order in which the accesses take
place, is called a race condition.
A race condition occurs when multiple processes or
threads read and write data items so that the final result
depends on the order of execution of instructions in the
multiple processes.
Cont…
Suppose that two processes, P1 and P2, share the global variable a. At
some point in its execution, P1 updates a to the value 1, and at some point
in its execution, P2 updates a to the value 2. Thus, the two tasks are in a
race to write variable a. In this example the "loser" of the race (the process
that updates last) determines the final value of a.
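A minimal C/pthreads sketch of a race (an unsynchronized counter rather
than the text's exact a = 1 / a = 2 scenario): two threads perform
non-atomic read-modify-write updates, so the final value depends on
the interleaving. Compile with -pthread:

#include <pthread.h>
#include <stdio.h>

long counter = 0;                /* shared, unprotected data */

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;               /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* usually prints less than 2000000 because updates were lost */
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}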
Because of such races, the operating system is concerned with the following:
1. The operating system must be able to keep track of the various
processes
2. The operating system must allocate and deallocate various resources for
each active process.
3. The operating system must protect the data and physical resources of
each process against unintended interference by other processes.
4. The functioning of a process, and the output it produces, must be
independent of the speed at which its execution is carried out relative to
the speed of other concurrent processes.
The Critical-Section Problem
The important feature of the system is that, when one
process is executing in its critical section, no other process
is to be allowed to execute in its critical section. Thus, the
execution of critical sections by the processes is mutually
exclusive in time.
The critical-section problem is to design a protocol that the
processes can use to cooperate: each process must request
permission to enter its critical section.
Mutual Exclusion
A critical section is the code that accesses shared data or
resources.
A solution to the critical section problem must ensure that only
one process at a time can execute its critical section (CS).
Two separate shared resources can be accessed concurrently.
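One classic software solution for two processes is Peterson's
algorithm; a C sketch (on modern hardware it would also need memory
fences, omitted here for clarity):

#include <stdbool.h>

/* Peterson's algorithm for two processes (ids 0 and 1). */
bool flag[2] = {false, false};  /* flag[i]: process i wants to enter */
int turn = 0;                   /* whose turn it is to wait */

void enter_region(int self) {
    int other = 1 - self;
    flag[self] = true;          /* announce interest */
    turn = other;               /* give the other process priority */
    while (flag[other] && turn == other)
        ;                       /* busy-wait until it is safe to enter */
}

void leave_region(int self) {
    flag[self] = false;         /* done with the critical section */
}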
Other Mechanisms for Mutual Exclusion
Semaphores
The solutions to the critical-section problem presented so far are
not easy to generalize to more complex problems. To overcome this
difficulty, we can use a synchronization tool called a semaphore.
A semaphore is an integer variable (S) which can only be
accessed in the following ways:
Initialize (S)
P(S) // {wait(S)}
V(S) // {signal(S)}
A semaphore S is an integer variable that, apart from
initialization, is accessed only through two standard atomic
operations: wait and signal. These operations were originally
termed P (for proberen, to test) and V (for verhogen, to increment).
The operating system must ensure that all operations are
indivisible, and that no other access to the semaphore variable
is allowed.
Cont…
The Classical definition of wait and signal are
wait(S)
{
  while (S <= 0)
    ; // busy wait
  S = S - 1;
}
signal(S)
{
  S = S + 1;
}
Modifications to the integer value of the semaphore in the wait and
signal operations must be executed indivisibly. That is, when one
process modifies the semaphore value, no other process can
simultaneously modify that same semaphore value.
In addition, in the case of wait(S), the testing of the integer value
of S (S <= 0), and its possible modification (S = S - 1), must also be
executed without interruption.
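For illustration, a sketch using POSIX semaphores on Linux, where
sem_wait plays the role of P/wait and sem_post the role of V/signal;
a semaphore initialized to 1 acts as a mutex (compile with -pthread):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t mutex;
int shared = 0;

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);   /* P(S): enter critical section */
        shared++;
        sem_post(&mutex);   /* V(S): leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);          /* binary semaphore */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared); /* always 200000 */
    sem_destroy(&mutex);
    return 0;
}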
Semaphores are not provided by hardware. But they have
several attractive properties:
1. Semaphores are machine independent.
2. Semaphores are simple to implement.
3. Correctness is easy to determine.
4. Can have many different critical sections with different
semaphores.
5. Semaphores can acquire many resources simultaneously.
Drawback of Semaphore
1. They are essentially shared global variables.
2. Access to semaphores can come from anywhere in a program.
3. There is no control or guarantee of proper usage.
4. There is no linguistic connection between the semaphore and
the data to which the semaphore controls access.
5. They serve two purposes, mutual exclusion and scheduling
constraints.
Monitors
The monitor is a programming-language construct that provides
equivalent functionality to that of semaphores and that is easier
to control.
The monitor construct has been implemented in a number of
programming languages, including Concurrent Pascal, Pascal-
Plus, Modula-2, Modula-3, and Java. It has also been
implemented as a program library. This allows programmers to
put monitor locks on any object.
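C has no monitor construct, but a mutex paired with a condition
variable gives equivalent behavior; a hypothetical bounded-counter
sketch:

#include <pthread.h>

/* Monitor-style counter: the mutex provides mutual exclusion over
   the shared data, the condition variable lets callers wait. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t nonzero;
    int count;
} counter_monitor;

counter_monitor m = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0
};

void monitor_increment(counter_monitor *cm) {
    pthread_mutex_lock(&cm->lock);      /* enter the "monitor" */
    cm->count++;
    pthread_cond_signal(&cm->nonzero);  /* wake one waiter */
    pthread_mutex_unlock(&cm->lock);    /* leave the "monitor" */
}

void monitor_decrement(counter_monitor *cm) {
    pthread_mutex_lock(&cm->lock);
    while (cm->count == 0)              /* wait until condition holds */
        pthread_cond_wait(&cm->nonzero, &cm->lock);
    cm->count--;
    pthread_mutex_unlock(&cm->lock);
}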
Summary
A critical section is code that only one process at a time can
be executing. The critical-section problem is to design an algorithm
that allows at most one process into the critical section at a
time, without deadlock. A solution to the critical-section problem
must satisfy mutual exclusion, progress, and bounded waiting.
A semaphore is a synchronization variable that takes on non-negative
integer values. Binary semaphores are those that have only two
values, 0 and 1. Semaphores are not provided by hardware.
Semaphores are used to solve the critical-section problem.
A monitor is a software module consisting of one or more
procedures, an initialization sequence and local data.
Components of monitors are shared data declaration, shared
data initialization, operations on shared data and
synchronization statement.
Model Question
Q.1 Explain in brief what a race condition is.
Q.2 Define the term critical section.
Q.3 What are the requirements for a solution to the critical-section problem?
Q.4 Write a short note on:
a) Semaphore
b) Monitors
Q.5 What are semaphores? How do they implement mutual
exclusion?
Q.6 Describe a hardware solution to the critical-section problem.
CHAPTER 7 :- DEADLOCK
Unit Structure
• Introduction
• Deadlock Characterization
• Method for Handling Deadlock
• Deadlock Prevention
• Deadlock Avoidance
• Deadlock Detection
• Recovery from Deadlock
• Summary
• Model Question
Objectives
Introduction
In a multiprogramming environment, several processes may compete
for a finite number of resources. A process requests resources; if the
resources are not available at that time, the process enters a wait
state. It may happen that waiting processes will never again change
state, because the resources they have requested are held by other
waiting processes. This situation is called a deadlock.
If a process requests an instance of a resource type, the allocation of
any instance of the type will satisfy the request. If it will not, then
the instances are not identical, and the resource type classes have not
been defined properly.
A process must request a resource before using it, and must release
the resource after using it. A process may request as many resources
as it requires to carry out its designated task.
Cont…
Under the normal mode of operation, a process may utilize a
resource in only the following sequence:
1. Request: If the request cannot be granted immediately, then the
requesting process must wait until it can acquire the resource.
2. Use: The process can operate on the resource.
3. Release: The process releases the resource
Deadlock Characterization
In a deadlock, processes never finish executing and system
resources are tied up, preventing other jobs from ever
starting.
Necessary Conditions: a deadlock situation can arise only if the
following four conditions hold simultaneously.
Cont…
1. Mutual exclusion: At least one resource must be held in a non-
sharable mode; that is, only one process at a time can use the
resource. If another process requests that resource, the requesting
process must be delayed until the resource has been released.
2. Hold and wait : There must exist a process that is holding at least
one resource and is waiting to acquire additional resources that are
currently being held by other processes.
3. No preemption : Resources cannot be preempted; that is, a
resource can be released only voluntarily by the process holding it,
after that process has completed its task.
4. Circular wait: There must exist a set {P0, P1, ..., Pn } of waiting
processes such that P0 is waiting for a resource that is held by P1,
P1 is waiting for a resource that is held by P2, …., Pn-1 is waiting
for a resource that is held by Pn, and Pn is waiting for a resource
that is held by P0.
Methods for Handling Deadlocks
Principally, we can deal with the deadlock problem in one of three
ways: use a protocol to prevent or avoid deadlocks, ensuring that the
system never enters a deadlocked state; allow the system to enter a
deadlocked state, detect it, and recover; or ignore the problem
altogether. Deadlock prevention is a set of methods for ensuring that
at least one of the necessary conditions cannot hold.
Cont…
Deadlock avoidance, on the other hand, requires that the operating
system be given in advance additional information concerning which
resources a process will request and use during its lifetime. With this
additional knowledge, we can decide for each request whether or not
the process should wait. Each request requires that the system
consider the resources currently available, the resources currently
allocated to each process, and the future requests and releases of
each process, to decide whether the current request can be satisfied
or must be delayed.
If a system does not employ either a deadlock-prevention or a
deadlock-avoidance algorithm, then a deadlock situation may occur.
If a system does not ensure that a deadlock will never occur, and also
does not provide a mechanism for deadlock detection and recovery,
then we may arrive at a situation where the system is in a deadlock
state yet has no way of recognizing what has happened.
Deadlock Prevention
For a deadlock to occur, each of the four necessary conditions
must hold. By ensuring that at least one of these conditions
cannot hold, we can prevent the occurrence of a deadlock.
◦ Mutual Exclusion – not required for sharable resources; must
hold for non sharable resources
◦ Hold and Wait – must guarantee that whenever a process requests
a resource, it does not hold any other resources.
◦ No Preemption – if a process that is holding some resources
requests another resource that cannot be immediately allocated to
it, then all resources currently being held are released.
◦ Circular Wait – impose a total ordering of all resource types, and
require that each process requests resources in an increasing order
of enumeration, as in the sketch below.
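A C sketch of the circular-wait rule (illustrative names; every thread
must lock lower-numbered resources first, so a cycle of waiters cannot
form):

#include <pthread.h>

pthread_mutex_t resource[2] = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
};

/* Acquire two resources, always in increasing order of enumeration. */
void acquire_both(int a, int b) {
    int lo = a < b ? a : b;
    int hi = a < b ? b : a;
    pthread_mutex_lock(&resource[lo]);  /* lower-numbered first */
    pthread_mutex_lock(&resource[hi]);
}

void release_both(int a, int b) {
    pthread_mutex_unlock(&resource[a]);
    pthread_mutex_unlock(&resource[b]);
}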
Deadlock Avoidance
Requires that the system has some additional a priori information
available.
Simplest and most useful model requires that each process declare the
maximum number of resources of each type that it may need.
The deadlock-avoidance algorithm dynamically examines the
resource-allocation state to ensure that there can never be a circular-
wait condition.
Resource-allocation state is defined by the number of available and
allocated resources, and the maximum demands of the processes.
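One well-known avoidance scheme is the banker's algorithm; a sketch of
its safety check with made-up numbers (three processes, two resource
types):

#include <stdbool.h>
#include <stdio.h>

#define P 3
#define R 2

/* A state is safe if every process can finish in some order,
   given the currently available resources. */
bool is_safe(int available[R], int alloc[P][R], int need[P][R]) {
    int work[R];
    bool finished[P] = {false};
    for (int r = 0; r < R; r++) work[r] = available[r];

    for (int done = 0; done < P; ) {
        bool progress = false;
        for (int p = 0; p < P; p++) {
            if (finished[p]) continue;
            bool can_run = true;
            for (int r = 0; r < R; r++)
                if (need[p][r] > work[r]) { can_run = false; break; }
            if (can_run) {           /* p can finish; reclaim resources */
                for (int r = 0; r < R; r++) work[r] += alloc[p][r];
                finished[p] = true;
                progress = true;
                done++;
            }
        }
        if (!progress) return false; /* no process can proceed */
    }
    return true;
}

int main(void) {
    int available[R] = {1, 1};
    int alloc[P][R] = {{1, 0}, {1, 1}, {0, 1}};
    int need[P][R]  = {{1, 1}, {0, 1}, {2, 0}};
    printf("state is %s\n",
           is_safe(available, alloc, need) ? "safe" : "unsafe");
    return 0;
}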
Deadlock Detection
If a system does not employ either a deadlock-prevention or
a deadlock avoidance algorithm, then a deadlock situation
may occur. In this environment, the system must provide:
◦ An algorithm that examines the state of the system to determine
whether a deadlock has Occurred.
◦ An algorithm to recover from the deadlock
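For single-instance resources, detection reduces to finding a cycle in
a wait-for graph; a C sketch with illustrative edges:

#include <stdbool.h>
#include <stdio.h>

/* waits_for[i][j] is true if process i waits for a resource held
   by process j. A cycle in this graph means deadlock. */
#define N 3
bool waits_for[N][N];

static bool has_cycle_from(int p, bool visiting[], bool done[]) {
    if (visiting[p]) return true;    /* back edge: cycle found */
    if (done[p]) return false;
    visiting[p] = true;
    for (int q = 0; q < N; q++)
        if (waits_for[p][q] && has_cycle_from(q, visiting, done))
            return true;
    visiting[p] = false;
    done[p] = true;
    return false;
}

int main(void) {
    waits_for[0][1] = true;          /* P0 waits for P1 */
    waits_for[1][2] = true;          /* P1 waits for P2 */
    waits_for[2][0] = true;          /* P2 waits for P0: circular wait */

    bool visiting[N] = {false}, done[N] = {false};
    for (int p = 0; p < N; p++)
        if (has_cycle_from(p, visiting, done)) {
            printf("deadlock detected\n");
            return 0;
        }
    printf("no deadlock\n");
    return 0;
}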
Summary
A deadlocked state occurs when two or more processes are waiting
indefinitely for an event that can be caused only by one of the other
waiting processes. There are three principal methods for dealing with
deadlocks:
◦ Use some protocol to prevent or avoid deadlocks, ensuring that the system
will never enter a deadlocked state.
◦ Allow the system to enter a deadlocked state, detect it, and then recover.
◦ Ignore the problem altogether and pretend that deadlocks never occur in
the system.
Deadlock prevention is a set of methods for ensuring that at least one
of the necessary conditions cannot hold.
Deadlock avoidance requires additional information about how
resources are to be requested.
A deadlock-avoidance algorithm dynamically examines the resource-
allocation state to ensure that a circular-wait condition can never exist.
Deadlocks occur only when some process makes a request that cannot
be granted immediately.
Model Question