Chapter 5 and Chapter 6 - OS

Uploaded by

Hana Yaregal

Chapter Five: CPU Scheduling

Introduction
 CPU scheduling is the basis of multiprogrammed operating systems.
By switching the CPU among processes, the operating system can
make the computer more productive.
 When a computer is multiprogrammed, it frequently has multiple
processes competing for the CPU at the same time.
 This situation occurs whenever two or more processes are
simultaneously in the ready state.
 If only one CPU is available, a choice has to be made as to which
process to run next.
 The part of the operating system that makes the choice is called the
scheduler and the algorithm it uses is called the scheduling
algorithm.
1
CPU Scheduling
 Back in the old days of batch systems with input in the form of

card images on a magnetic tape, the scheduling algorithm was


simple: just run the next job on the tape.
 With timesharing systems, the scheduling algorithm became more
complex because there were generally multiple users waiting for
service.
Basic Concepts
 The idea of multiprogramming is relatively simple. A process is
executed until it must wait, typically for the completion of some
I/O request. In a simple computer system, the CPU would then
just sit idle.
 Scheduling is a fundamental operating-system function. Almost all
computer resources are scheduled before use.
2
Cont…
 CPU - I/O Burst Cycle
The success of CPU scheduling depends on the following
observed property of processes: Process execution consists of a
cycle of CPU execution and I/O wait. Processes alternate back and
forth between these two states.
 Context Switch
To give each process on a multi programmed machine a fair share
of the CPU, a hardware clock generates interrupts periodically.
This allows the operating system to schedule all processes in main
memory (using scheduling algorithm) to run on the CPU at equal
intervals. Each switch of the CPU from one process to another is
called a context switch.

3
Cont…
 Preemptive Scheduling
CPU scheduling decisions may take place under the
following four circumstances:
1. When a process switches from the running state to the
waiting state (for. example, I/O request, or invocation of wait
for the termination of one of the child processes).
2. When a process switches from the running state to the ready
state (for example, when an interrupt occurs).
3. When a process switches from the waiting state to the ready
state (for example, completion of I/O).
4. When a process terminates.

4
Cont…
 Dispatcher

Dispatcher module gives control of the CPU to the process


selected by the short-term scheduler; this involves:
◦ switching context
◦ switching to user mode
◦ jumping to the proper location in the user program to restart
that program
◦ Dispatch latency - time it takes for the dispatcher to stop one
process and start another running.

5
CPU Scheduling :- Scheduling Criteria

 Different CPU scheduling algorithms have different properties and may


favor one class of processes over another.
 In choosing which algorithm to use in a particular situation, we must
consider the properties of the various algorithms.
 Many criteria have been suggested for comparing CPU scheduling
algorithms.
 Criteria that are used include the following:
◦ CPU utilization - keep the CPU as busy as possible
◦ Throughput - number of processes that complete their execution per time unit
◦ Turnaround time - amount of time to execute a particular process
◦ Waiting time - amount of time a process has been waiting in the ready queue
◦ Response time - amount of time it takes from when a request was submitted until
the first response is produced, not output (for time sharing environment)
◦ Fairness – comparable processes should get comparable services (gives each
process a fair share of the CPU).
6
CPU Scheduling:- Optimization
◦ Max CPU utilization

◦ Max throughput

◦ Minimum turnaround time

◦ Minimum waiting time

◦ Minimum response time

7
CPU Scheduling
 Scheduling Algorithms
1. First-Come, First-Served (FCFS) Scheduling
 The simplest of all scheduling algorithms is the non-preemptive FCFS
scheduling algorithm.
 Processes are assigned the CPU in the order they request it.
 When the first job enters the system from the outside in the
morning, it is started immediately and allowed to run as long as it
wants to.
 As other jobs come in, they are put onto the end of the queue.
 When the running process blocks, the first process on the queue
is run next.
 When a blocked process becomes ready, like a newly arrived job,
it is put on the end of the queue.
8
CPU Scheduling
 Example:
 Process Burst time
 P1 = 24, P2 = 3, P3 = 3
 Suppose that the processes arrive in the order: P1, P2,
P3.
A diagram to show this schedule is:

Waiting time for:


P1 = 0
P2 = 24
P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
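The average above can be checked with a short sketch; all three processes are assumed to arrive at time 0, as in the slide's example:

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS when all arrive at t = 0."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)  # a process waits while every earlier one runs
        elapsed += burst
    return waits

# Slide example: P1 = 24, P2 = 3, P3 = 3, arriving in that order
waits = fcfs_waiting_times([24, 3, 3])
avg_wait = sum(waits) / len(waits)
print(waits, avg_wait)  # [0, 24, 27] 17.0
```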

9
CPU Scheduling
2. Shortest Job First (SJF) Scheduling
It is a non-preemptive batch algorithm that assumes the run times
are known in advance.
Associate with each process the length of its next CPU burst. Use
these lengths to schedule the process with the shortest time.
When several equally important jobs are sitting in the input queue
waiting to be started, the scheduler picks the shortest job first.
For example: here we found four jobs A, B, C, and D with run
times of 8, 4, 4, and 4 minutes respectively.

10
CPU Scheduling

In the case of (a), the turnaround time for A is 8 minutes, for B is
12 minutes, for C is 16 minutes, and for D is 20 minutes, for an
average of 14 minutes. In the case of (b), the turnaround time for B
is 4 minutes, for C is 8 minutes, for D is 12 minutes, and for A is
20 minutes, for an average of 11 minutes.
Shortest Job First is provably optimal. It is worth pointing out that shortest
job first is only optimal when all the jobs are available simultaneously. E.g.,
five processes A–E have run times of 2, 4, 1, 1, and 1 respectively. Their arrival
times are 0, 0, 3, 3, and 3. Check the difference in the average turnaround time
if they run in the order A, B, C, D, E and in the order B, C, D, E, A.
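The two averages can be verified with a small helper; the function below is an illustrative sketch, not part of the original text:

```python
def avg_turnaround(bursts):
    """Average turnaround time for jobs run in the given order,
    all arriving at t = 0 (turnaround = completion time)."""
    elapsed, total = 0, 0
    for burst in bursts:
        elapsed += burst
        total += elapsed
    return total / len(bursts)

# (a) original order A, B, C, D with run times 8, 4, 4, 4 minutes
# (b) shortest-job-first order B, C, D, A
print(avg_turnaround([8, 4, 4, 4]))  # 14.0
print(avg_turnaround([4, 4, 4, 8]))  # 11.0
```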

11
CPU Scheduling

12
CPU Scheduling
3. Shortest Remaining Time First (SRTF) Scheduling
A preemptive version of SJF: the scheduler always runs the process
whose remaining run time is shortest.

13
CPU Scheduling

4. Round-Robin (RR)Scheduling
One of the oldest, simplest, fairest, and most widely used algorithms.
Each process is assigned a time interval called its quantum, which it
is allowed to run.
If the process is still running at the end of the quantum, the CPU is
preempted and given to another process.
It is easy to implement, all the scheduler needs to do is maintain a list
of runnable processes.
When the process uses up its quantum, it is put on the end of the list.
The only interesting issue with round robin is the length of the
quantum.
14
CPU Scheduling

Fig. Round-Robin scheduling (a) the list of runnable processes. (b) the list of
runnable processes after B uses up its quantum.

Setting the quantum too short causes too many process switches
and lowers CPU efficiency, but setting it too long may cause
poor response to short interactive requests. A quantum of around
20–50 msec is often a reasonable compromise.
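A minimal round-robin simulation sketch; the burst times and quantum below are illustrative, and processes are assumed to arrive at time 0 and never block on I/O:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Completion time of each process under round-robin scheduling."""
    queue = deque((i, b) for i, b in enumerate(bursts))
    finish = [0] * len(bursts)
    clock = 0
    while queue:
        i, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            queue.append((i, remaining - run))  # quantum used up: end of the list
        else:
            finish[i] = clock                   # process completed
    return finish

print(round_robin([24, 3, 3], 4))  # [30, 7, 10]
```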

15
CPU Scheduling

Example: with time quantum = 20

16
CPU Scheduling
5. Priority Scheduling
Round-Robin scheduling makes the implicit assumption that all
processes are equally important.
The need to take external factors into account leads to priority
scheduling.
The basic idea is each process is assigned a priority, and the runnable
process with the highest priority is allowed to run.
(smallest integer means highest priority).
It can be preemptive or non-preemptive.
Problem: starvation (or indefinite blocking) – low priority processes
may never execute.
Solution: aging – as time progresses, increase the priority of waiting processes.
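The aging idea can be sketched as follows; the job names, bursts, priorities, and aging rate are illustrative assumptions, not from the slides:

```python
def priority_schedule(jobs, aging_rate=1):
    """Non-preemptive priority scheduling with aging (smaller number =
    higher priority). Every time a job is passed over, its effective
    priority improves, so low-priority jobs cannot starve forever."""
    pending = [[prio, name] for name, prio in jobs]
    order = []
    while pending:
        pending.sort(key=lambda j: j[0])   # highest priority (smallest) first
        prio, name = pending.pop(0)
        order.append(name)
        for job in pending:                # aging: waiting jobs gain priority
            job[0] -= aging_rate
    return order

print(priority_schedule([("A", 3), ("B", 1), ("C", 2)]))  # ['B', 'C', 'A']
```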
17
CPU Scheduling
6. Multilevel Queue
 Ready queue is partitioned into separate queues.
 Example: foreground (interactive), background (batch)
 Each queue has its own scheduling algorithm.
 Example: foreground – RR, background - FCFS
 Scheduling must be done between the queues.

o Fixed priority scheduling


 Example: serve all from foreground then from background.

(Possibility of starvation).
 Time slice - each queue gets a certain amount of CPU time
which it can schedule amongst its processes.
 Example: 80% to foreground in RR, 20% to background in
FCFS.

18
CPU Scheduling

7. Multilevel Feedback Queue


A process can move between the various queues; aging can be
implemented this way.
Multilevel-feedback-queue scheduler defined by the following
parameters:
 number of queues
 method used to determine when to upgrade a process
 method used to determine when to demote a process
 method used to determine which queue a process will enter
when that process needs service

19
CPU Scheduling
 Example of multilevel feedback queue
 Three queues:
◦ Q0 - time quantum 8 milliseconds
◦ Q1 - time quantum 16 milliseconds
◦ Q2 - FCFS
 Scheduling

A new job enters queue Q0, which is served FCFS. When it
gains the CPU, the job receives 8 milliseconds. If it does not finish in 8
milliseconds, the job is moved to queue Q1. At Q1, the job is again
served FCFS and receives 16 additional milliseconds. If it still
does not complete, it is preempted and moved to queue Q2.
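The demotion rule of this three-queue example can be sketched as follows; the burst lengths used below are illustrative:

```python
def mlfq_demotions(burst, quanta=(8, 16)):
    """Which queues a CPU burst passes through in the slide's example:
    Q0 (8 ms quantum), Q1 (16 ms quantum), then Q2 (FCFS) for the rest."""
    path, remaining = [], burst
    for level, q in enumerate(quanta):
        path.append(f"Q{level}")
        remaining -= q
        if remaining <= 0:
            return path            # finished within this queue's quantum
    path.append(f"Q{len(quanta)}")  # still unfinished: demoted to the FCFS queue
    return path

print(mlfq_demotions(5))   # ['Q0']
print(mlfq_demotions(20))  # ['Q0', 'Q1']
print(mlfq_demotions(40))  # ['Q0', 'Q1', 'Q2']
```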

20
CPU Scheduling
 Summary

 FCFS is simple but causes short jobs to wait for long jobs.
 SJF is optimal, giving the shortest waiting time, but needs to know
the length of the next burst.
 SJF is a type of priority scheduling and may suffer from starvation
– prevent it using aging.
 RR gives good response time and is preemptive. The problem is
selecting the quantum.
 Multilevel queue algorithms use the best of each algorithm by
having more than one queue. Feedback queues allow jobs to
move from queue to queue.

21
CPU Scheduling
 Exercise
Suppose that the following jobs arrive as indicated for scheduling and
execution on a single CPU.
Job Arrival time Size (msec.) Priority
J1 0 12 1 (Gold)
J2 2 4 3 (Bronze)
J3 5 2 1 (Gold)
J4 8 10 3 (Bronze)
J5 10 6 2 (Silver)
1. Draw a Gantt chart showing FCFS and calculate avg. waiting time and
avg. turnaround time.
2. Draw a Gantt chart showing non-preemptive SJF and calculate avg. waiting
time and avg. turnaround time.
3. Draw a Gantt chart showing SRTF and calculate avg. waiting time and
avg. turnaround time.
4. Draw a Gantt chart showing RR (q = 4 msec.) and calculate avg. waiting
time and avg. turnaround time.
5. Draw a Gantt chart showing preemptive priority and calculate avg. waiting
time and avg. turnaround time.
22
Chapter 6:- Concurrency Control

 Unit Structure
Objectives
 Principal of Concurrency
 Race Condition
 Mutual Exclusion
 Semaphores
Monitors
Summary
Model Question

23
Objectives
After going through this unit, you will be able to:
 To introduce concurrency control, race conditions, and the
critical-section problem, whose solutions can be used to
ensure the consistency of shared data.
 To present both software and hardware solutions to the
critical-section problem.

24
Principle of concurrency
A cooperating process is one that can affect or be affected by
the other processes executing in the system.
 Cooperating processes may either directly share a logical
address space(that is, both code and data), or be allowed to share
data only through files. The former case is achieved through the
use of lightweight processes or threads. Concurrent access to
shared data may result in data inconsistency.
 In this lecture, we discuss various mechanisms to ensure the
orderly execution of cooperating processes that share a logical
address space, so that data consistency is maintained.

25
Cont…
 The concurrent processes executing in the operating system may be either
independent processes or cooperating processes.
 A process is independent if it cannot affect or be affected by the other
processes executing in the system.
 On the other hand, a process is cooperating if it can affect or be affected
by the other processes executing in the system.
 Processes (or threads) that cooperate to solve problems must exchange
information. Two approaches:
◦ Shared memory (Shared memory is more efficient (no copying), but
isn’t always possible)
◦ Message passing (copying information from one process address space
to another)
 There are several reasons for providing an environment that allows process
cooperation
 Information sharing
 Computation speedup
 Modularity
 Convenience
26
Race condition
When several processes access and manipulate the
same data concurrently, and the outcome of the
execution depends on the particular order in which the
accesses take place, we have a race condition.
A race condition occurs when multiple processes or
threads read and write data items so that the final result
depends on the order of execution of instructions in the
multiple processes.

27
Cont…
 Suppose that two processes, P1 and P2, share the global variable a. At
some point in its execution, P1 updates a to the value 1, and at some point
in its execution, P2 updates a to the value 2. Thus, the two tasks are in a
race to write variable a. In this example the "loser" of the race (the process
that updates last) determines the final value of a.
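A race of this kind can be reproduced with threads; the shared counter below is illustrative. With the lock held, each read-modify-write is atomic; without it, updates may be lost:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    """Add 1 to the shared counter n times; the lock keeps each
    read-modify-write from interleaving with another thread's."""
    global counter
    for _ in range(n):
        with lock:          # critical section: one thread at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000; without the lock the result may fall short
```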
 Therefore, the operating system is concerned with the following:
1. The operating system must be able to keep track of the various
processes
2. The operating system must allocate and deallocate various resources for
each active process.
3. The operating system must protect the data and physical resources of
each process against unintended interference by other processes.
4. The functioning of a process, and the output it produces, must be
independent of the speed at which its execution is carried out relative to
the speed of other concurrent processes.

28
Cont…

 Process Interaction can be defined as


• Processes unaware of each other
• Processes indirectly aware of each other
• Processes directly aware of each other
 Concurrent processes come into conflict with each other
when they are competing for the use of the same resource.
 Two or more processes need to access a resource during the
course of their execution. Each process is unaware of the
existence of the other processes. There is no exchange of
information between the competing processes.

29
The Critical-Section Problem
 The important feature of the system is that, when one
process is executing in its critical section, no other process
is to be allowed to execute in its critical section. Thus, the
execution of critical sections by the processes is mutually
exclusive in time.
 The critical-section problem is to design a protocol that the

processes can use to cooperate.


 Each process must request permission to enter its critical
section.

30
Cont…

A solution to the critical-section problem must satisfy the


following three requirements:
1. Mutual Exclusion: If process Pi is executing in its critical section, then
no other processes can be executing in their critical sections.
2. Progress: If no process is executing in its critical section and there exist
some processes that wish to enter their critical sections, then only those
processes that are not executing in their remainder sections can participate
in the decision of which will enter its critical section next, and this
selection cannot be postponed indefinitely.
3. Bounded Waiting: There exists a bound on the number of times that other
processes are allowed to enter their critical sections after a process has
made a request to enter its critical section and before that request is
granted.

31
Mutual Exclusion
A critical section is the code that accesses shared data or
resources.
 A solution to the critical section problem must ensure that only
one process at a time can execute its critical section (CS).
 Two separate shared resources can be accessed concurrently.

32
Other Mechanisms for Mutual Exclusion

 Spinlocks: a busy-waiting solution in which a process


wishing to enter a critical section continuously tests some
lock variable to see if the critical section is available.
Implemented with various machine-language instructions
 Disable interrupts before entering CS, enable after leaving

33
Semaphores
 The solutions to the critical-section problem presented so far are
not easy to generalize to more complex problems. To
overcome this difficulty, we can use a synchronization tool called
a semaphore.
 A semaphore is an integer variable (S) which can only be
accessed in the following ways:
 Initialize (S)
 P(S) // {wait(S)}
 V(S) // {signal(S)}
 A semaphore S is an integer variable that, apart from
initialization, is accessed only through two standard atomic
operations: wait and signal. These operations were originally
termed P (for test) and V (for signal).
 The operating system must ensure that all operations are
indivisible, and that no other access to the semaphore variable
is allowed. 34
Cont…
 The classical definitions of wait and signal are:
wait(S)
{
    while (S <= 0)
        ;            // busy wait until the semaphore is positive
    S = S - 1;
}
signal(S)
{
    S = S + 1;
}
 The wait and signal operations on the integer value of the semaphore must be
executed indivisibly. That is, when one process modifies the semaphore
value, no other process can simultaneously modify that same semaphore
value.
 In addition, in the case of wait(S), the testing of the integer value of S
(S <= 0), and its possible modification (S = S - 1), must also be executed without
interruption. 35
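Most languages ship semaphores ready-made; a sketch using Python's `threading.Semaphore`, initialized to 1 so it acts as a mutual-exclusion lock (the shared list and worker function are illustrative):

```python
import threading

sem = threading.Semaphore(1)   # S initialized to 1: a binary semaphore
shared = []

def worker(item):
    sem.acquire()              # wait(S): blocks while S == 0, then S = S - 1
    try:
        shared.append(item)    # critical section on the shared list
    finally:
        sem.release()          # signal(S): S = S + 1, waking one waiter

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared))  # [0, 1, 2, 3, 4]
```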
 Semaphores are not provided by hardware. But they have
several attractive properties:
1. Semaphores are machine independent.
2. Semaphores are simple to implement.
3. Correctness is easy to determine.
4. Can have many different critical sections with different
semaphores.
5. Can acquire many resources simultaneously.
 Drawback of Semaphore
1. They are essentially shared global variables.
2. Access to semaphores can come from anywhere in a program.
3. There is no control or guarantee of proper usage.
4. There is no linguistic connection between the semaphore and
the data to which the semaphore controls access.
5. They serve two purposes, mutual exclusion and scheduling
constraints.
36
Monitors
 The monitor is a programming-language construct that provides
equivalent functionality to that of semaphores and that is easier
to control.
 The monitor construct has been implemented in a number of
programming languages, including Concurrent Pascal, Pascal-
Plus, Modula-2, Modula-3, and Java. It has also been
implemented as a program library. This allows programmers to
put monitor locks on any object.

37
Summary
 A critical section is code that only one process at a time can
be executing. The critical-section problem is to design an algorithm
that allows at most one process into the critical section at a
time, without deadlock. A solution to the critical-section problem
must satisfy mutual exclusion, progress, and bounded waiting.
 A semaphore is a synchronization variable that takes on non-negative
integer values. Binary semaphores are those that have only two
values, 0 and 1. Semaphores are not provided by hardware.
Semaphores are used to solve the critical-section problem.
 A monitor is a software module consisting of one or more
procedures, an initialization sequence and local data.
Components of monitors are shared data declaration, shared
data initialization, operations on shared data and
synchronization statement.
38
Model Question
Q.1 Explain in brief race condition?
Q.2 Define the term critical section?
Q.3 What are the requirements for the critical section problem?
Q.4 Write a short note on:
a) Semaphore
b) Monitors
Q.5 What are semaphores? How do they implement mutual
exclusion?
Q.6 Describe hardware solution to the critical section problem?

39
CHAPTER 7 :- DEADLOCK
 Unit Structure
• Introduction
• Deadlock Characterization
• Method for Handling Deadlock
• Deadlock Prevention
• Deadlock Avoidance
• Deadlock Detection
• Recovery from Deadlock
• Summary
• Model Question
40
Objectives

After going through this unit, you will be able to:


◦ To develop a description of deadlocks, which prevent
sets of concurrent processes from completing their
tasks.
◦ To present a number of different methods for preventing
or avoiding deadlocks in a computer system.

41
Introduction
 In a multiprogramming environment, several processes may compete
for a finite number of resources. A process requests resources; if the
resources are not available at that time, the process enters a wait
state. It may happen that waiting processes will never again change
state, because the resources they have requested are held by other
waiting processes. This situation is called a deadlock.
 If a process requests an instance of a resource type, the allocation of
any instance of the type will satisfy the request. If it will not, then
the instances are not identical, and the resource type classes have not
been defined properly.
 A process must request a resource before using it, and must release
the resource after using it. A process may request as many resources
as it requires to carry out its designated task.

42
Cont…
 Under the normal mode of operation, a process may utilize a
resource in only the following sequence:
1. Request: If the request cannot be granted immediately, then the
requesting process must wait until it can acquire the resource.
2. Use: The process can operate on the resource.
3. Release: The process releases the resource

43
Deadlock Characterization
 In a deadlock, processes never finish executing and system
resources are tied up, preventing other jobs from ever
starting.
 Necessary Conditions

A deadlock situation can arise if the following four


conditions hold simultaneously in a system:
1. Mutual exclusion
2. Hold and wait
3. No preemption
4. Circular wait

44
Cont…
1. Mutual exclusion: At least one resource must be held in a non-
sharable mode; that is, only one process at a time can use the
resource. If another process requests that resource, the requesting
process must be delayed until the resource has been released.
2. Hold and wait : There must exist a process that is holding at least
one resource and is waiting to acquire additional resources that are
currently being held by other processes.
3. No preemption : Resources cannot be preempted; that is, a
resource can be released only voluntarily by the process holding it,
after that process, has completed its task.
4. Circular wait: There must exist a set {P0, P1, ..., Pn } of waiting
processes such that P0 is waiting for a resource that is held by P1,
P1 is waiting for a resource that is held by P2, …., Pn-1 is waiting
for a resource that is held by Pn, and Pn is waiting for a resource
that is held by P0.

45
Methods for Handling Deadlocks

 Principally, there are three different methods for dealing
with the deadlock problem:
 We can use a protocol to ensure that the system will
never enter a deadlock state.
 We can allow the system to enter a deadlock state and
then recover.
 Ignore the problem and pretend that deadlocks never
occur in the system; used by most operating systems,
including UNIX.

46
Cont…
 Deadlock avoidance, on the other hand, requires that the operating
system be given in advance additional information concerning which
resources a process will request and use during its lifetime. With this
additional knowledge, we can decide for each request whether or not
the process should wait. Each request requires that the system
consider the resources currently available, the resources currently
allocated to each process, and the future requests and releases of
each process, to decide whether the current request can be satisfied
or must be delayed.
 If a system does not employ either a deadlock-prevention or a
deadlock-avoidance algorithm, then a deadlock situation may occur.
If a system does not ensure that a deadlock will never occur, and also
does not provide a mechanism for deadlock detection and recovery,
then we may arrive at a situation where the system is in a deadlock
state yet has no way of recognizing what has happened.

47
Deadlock Prevention
 For a deadlock to occur, each of the four necessary conditions
must hold. By ensuring that at least one of these conditions
cannot hold, we can prevent the occurrence of a deadlock.
◦ Mutual Exclusion – not required for sharable resources; must
hold for non sharable resources
◦ Hold and Wait – must guarantee that whenever a process requests
a resource, it does not hold any other resources.
◦ No Preemption – If a process that is holding some resources
requests another resource that cannot be immediately allocated to
it, then all resources currently being held are released.
◦ Circular Wait – impose a total ordering of all resource types, and
require that each process requests resources in an increasing order
of enumeration.
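Imposing a total order on resources can be sketched abstractly; the resource names and ranks below are hypothetical:

```python
def acquisition_order(requested, rank):
    """Sort a process's requested resources by a global rank; if every
    process acquires in this increasing order, no cycle of processes
    waiting on each other can form (circular wait is broken)."""
    return sorted(requested, key=lambda name: rank[name])

# Hypothetical resources and their global ordering
rank = {"printer": 0, "disk": 1, "tape": 2}

# Two processes that would risk deadlock if they locked in opposite orders:
p1 = acquisition_order(["disk", "printer"], rank)
p2 = acquisition_order(["printer", "disk"], rank)
print(p1, p2)  # both acquire the printer before the disk
```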

48
Deadlock Avoidance
 Requires that the system has some additional a priori information
available.
 Simplest and most useful model requires that each process declare the
maximum number of resources of each type that it may need.
 The deadlock-avoidance algorithm dynamically examines the
resource-allocation state to ensure that there can never be a circular-
wait condition.
 Resource-allocation state is defined by the number of available and
allocated resources, and the maximum demands of the processes.

49
Deadlock Detection
 If a system does not employ either a deadlock-prevention or
a deadlock avoidance algorithm, then a deadlock situation
may occur. In this environment, the system must provide:
◦ An algorithm that examines the state of the system to determine
whether a deadlock has Occurred.
◦ An algorithm to recover from the deadlock
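Such an algorithm amounts to cycle detection in a wait-for graph; a depth-first-search sketch (the example graphs below are illustrative):

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph, given as a dict mapping each
    process to the processes it is waiting on. A cycle means the
    circular-wait condition holds: the processes are deadlocked."""
    color = {p: 0 for p in wait_for}   # 0 = unvisited, 1 = on current path, 2 = done

    def visit(p):
        color[p] = 1
        for q in wait_for[p]:
            if color[q] == 1 or (color[q] == 0 and visit(q)):
                return True            # back edge into the current path: cycle
        color[p] = 2
        return False

    return any(color[p] == 0 and visit(p) for p in wait_for)

print(has_deadlock({"P0": ["P1"], "P1": ["P2"], "P2": ["P0"]}))  # True
print(has_deadlock({"P0": ["P1"], "P1": ["P2"], "P2": []}))      # False
```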

50
Summary
A deadlocked state occurs when two or more processes are waiting
indefinitely for an event that can be caused only by one of the waiting
processes. There are three principal methods for dealing with
deadlocks:
◦ Use some protocol to prevent or avoid deadlocks, ensuring that the system
will never enter a deadlocked state.
◦ Allow the system to enter a deadlocked state, detect it, and then recover.
◦ Ignore the problem altogether and pretend that deadlocks never occur in
the system.
 Deadlock prevention is a set of methods for ensuring that at least one
of the necessary conditions cannot hold.
 Deadlock avoidance requires additional information about how
resources are to be requested.
 A deadlock-avoidance algorithm dynamically examines the resource-
allocation state to ensure that a circular-wait condition can never exist.
 Deadlocks occur only when some process makes a request that cannot
be granted immediately. 51
Model Question

Q.1 Write a short note on deadlock?


Q.2 Explain the characteristic of deadlock?
Q.3 Describe various methods for deadlock prevention?
Q.4 Explain how deadlocks are detected and corrected?
Q.5 What are the difference between a deadlock
prevention and deadlock Avoidance?

52
