Unit 3 (With Page Number)

Uploaded by Yashi Upadhyay

Unit- III CPU Scheduling

Lecture 7
Process Concept:

Process:

 A process is an executing program, including the current values of the program
counter, registers, and variables.
 The difference between a process and a program is that a program is a passive
group of instructions, whereas a process is the activity of executing them.
 A process can be described as:
 I/O Bound Process – spends more time doing I/O than computation.
 CPU Bound Process – spends more time doing computation.

Process States:

 Start : The process has just arrived.


 Ready : The process is waiting to grab the processor.
 Running : The process has been allocated the processor.
 Waiting : The process is doing I/O work or blocked.
 Halted : The process has finished and is about to leave the system.

Lecture 8

Process/Task Control Block (PCB):

In the OS, each process is represented by its PCB (Process Control Block). The PCB,
generally contains the following information:

• Process State: The state may be new, ready, running, waiting, halted, and so on.
• Process ID
• Program Counter (PC) value: The counter indicates the address of the next
instruction to be executed for this process.

• Register values: The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and general-
purpose registers, plus any condition-code information. Along with the program counter,
this state information must be saved when an interrupt occurs, to allow the process to be
continued correctly afterward.

• Memory Management Information (page tables, base/bound registers etc.):


• Processor Scheduling Information ( priority, last processor burst time etc.)
• I/O Status Info (outstanding I/O requests, I/O devices held, etc.)
• List of Open Files
• Accounting Info.: This information includes the amount of CPU and
real time used, time limits, account numbers, job or process numbers, and so on.
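The PCB fields listed above can be sketched as a simple data structure. This is a minimal illustration only, not any real kernel's layout; all field names here are assumptions:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    HALTED = auto()

@dataclass
class PCB:
    pid: int                                         # process ID
    state: State = State.NEW                         # process state
    program_counter: int = 0                         # address of next instruction
    registers: dict = field(default_factory=dict)    # saved register values
    priority: int = 0                                # scheduling information
    open_files: list = field(default_factory=list)   # list of open files
    cpu_time_used: float = 0.0                       # accounting information

pcb = PCB(pid=42)
pcb.state = State.READY    # the OS updates the state as the process moves between queues
```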

If we have a single processor in our system, there is only one running process at a time.
Other ready processes wait for the processor.

In multiprogramming systems, the processor can be switched from one process to


another. Note that when an interrupt occurs, PC and register contents for the running
process (which is being interrupted) must be saved so that the process can be continued
correctly afterwards. Switching between processes occurs as depicted below.

Lecture 9

Operations on Processes:

A. Process Creation

 Parent process create children processes, which, in turn create other processes,
forming a tree of processes

 Resource sharing possibilities:

i. Parent and children share all resources

ii. Children share a subset of the parent's resources

iii. Parent and child share no resources

 Execution possibilities:

i. Parent and children execute concurrently

ii. Parent waits until children terminate

B. Process Termination

 Process executes last statement and asks the operating system to delete it
(exit)

i. Output data from child to parent (via wait)

ii. Process’ resources are deallocated by operating system

 Parent may terminate execution of children processes (abort)

i. Child has exceeded allocated resources

ii. Task assigned to child is no longer required

iii. If parent is exiting

 Some operating systems do not allow a child to
continue if its parent terminates (cascading termination)

Process Address Space:

 The process address space consists of the linear address range presented to each
process. Each process is given a flat 32- or 64-bit address space, with the size
depending on the architecture. The term "flat" describes the fact that the address
space exists in a single range. (As an example, a 32-bit address space extends
from the address 0 to 4294967295.)
 Some operating systems provide a segmented address space, with addresses
existing not in a single linear range, but instead in multiple segments. Modern
virtual memory operating systems generally have a flat memory model and not
a segmented one.

 A memory address is a given value within the address space, such as 4021f000.
The process can access a memory address only in a valid memory area.
Memory areas have associated permissions, such as readable, writable, and
executable, that the associated process must respect. If a process accesses a
memory address not in a valid memory area, or if it accesses a valid area in an
invalid manner, the kernel kills the process with the dreaded "Segmentation
Fault" message.
 Memory areas can contain all sorts of goodies, such as

 A memory map of the executable file's code, called the text section.
 A memory map of the executable file's initialized global variables,
called the data section.
 A memory map of the zero page (a page consisting of all zeros, used for
purposes such as this) containing uninitialized global variables, called
the bss section
 A memory map of the zero page used for the process's user-space stack
(do not confuse this with the process's kernel stack, which is separate
and maintained and used by the kernel)
 An additional text, data, and bss section for each shared library, such as
the C library and dynamic linker, loaded into the process's address space.
 Any memory mapped files
 Any shared memory segments
 Any anonymous memory mappings, such as those associated with
malloc().

Process Identification Information

 Process Identifier (process ID or PID) is a number used by most
operating system kernels (such as those of UNIX, Mac OS X, or
Microsoft Windows) to uniquely identify an active process.
 This number may be used as a parameter in various function calls
allowing processes to be manipulated, such as adjusting the process's
priority or killing it altogether.
 In Unix-like operating systems, new processes are created by the fork()
system call. The PID is returned to the parent enabling it to refer to the
child in further function calls. The parent may, for example, wait for the
child to terminate with the waitpid() function, or terminate the process
with kill().
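The fork/wait interaction described above can be demonstrated with Python's os module. A minimal sketch for a Unix-like system (os.fork is unavailable on Windows); the exit status 7 is an arbitrary value chosen for illustration:

```python
import os

pid = os.fork()                      # create a child process; fork() returns 0 in the child
if pid == 0:
    # child process: do its work, then terminate with an exit status
    os._exit(7)
else:
    # parent: the returned PID lets it wait for that specific child
    child, status = os.waitpid(pid, 0)
    print("child", child, "exited with status", os.WEXITSTATUS(status))
```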

Lecture 10
Threads:

Introduction to Thread:

 A thread is a basic unit of CPU utilization; it comprises a thread ID,
a program counter, a register set, and a stack. It shares with other
threads belonging to the same process its code section, data section,
and other operating-system resources, such as open files and signals.
 A traditional process has a single thread of control. If a process has
multiple threads of control, it can perform more than one task at a time.
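The sharing described above can be illustrated with Python's threading module: every thread has its own stack, but all of them update one variable in the process's shared data section. A minimal sketch (thread counts and loop sizes are arbitrary):

```python
import threading

counter = 0                      # shared data: lives in the process's data section
lock = threading.Lock()          # threads share resources, so access must be coordinated

def worker(n):
    global counter
    for _ in range(n):
        with lock:               # each thread has its own stack, but counter is shared
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:                # wait for all threads of the process to finish
    t.join()

print(counter)                   # all four threads updated the same variable
```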
Benefits of Threads:

A. Responsiveness. Multithreading an interactive application may allow a program
to continue running even if part of it is blocked or is performing a lengthy operation.
B. Resource sharing. Threads share the memory and the resources of the process to
which they belong. The benefit of sharing code and data is that it allows an
application to have several different threads of activity within the same address
space.
C. Economy. Allocating memory and resources for process creation is costly.
Because threads share resources of the process to which they belong, it is more
economical to create and context- switch threads.
D. Utilization of multiprocessor architectures. The benefits of multithreading can be
greatly increased in a multiprocessor architecture, where threads may be running in
parallel on different processors. A single-threaded process can run on only one
CPU, no matter how many are available.
Multithreading Models (Management of Threads):

Threads may be provided either at the user level, for user threads, or by the kernel,
for kernel threads. User threads are supported above the kernel and are managed
without kernel support, whereas kernel threads are supported and managed directly
by the operating system. There must exist a relationship between user threads and
kernel threads. There are three common ways of establishing this relationship.
A. Many-to-One Model:

The many-to-one model maps many user-level threads to one kernel thread.
Thread management is done by the thread library in user space, so it is
efficient; but the entire process will block if a thread makes a blocking
system call. Also, because only one thread can access the kernel at a time,
multiple threads are unable to run in parallel on multiprocessors.

B. One-to-One Model:

The one-to-one model maps each user thread to a kernel thread. It provides
more concurrency than the many-to-one model by allowing another thread to
run when a thread makes a blocking system call; it also allows multiple
threads to run in parallel on multiprocessors. The only drawback to this
model is that creating a user thread requires creating the corresponding
kernel thread.

C. Many-to-Many Model:

The many-to-many model multiplexes many user-level threads to a smaller or
equal number of kernel threads. The many-to-one model lets the developer
create as many user threads as desired, but they cannot run in parallel;
the one-to-one model allows for greater concurrency, but the developer must
be careful not to create too many kernel threads. The many-to-many model
suffers from neither of these shortcomings: developers can create as many
user threads as necessary, and the corresponding kernel threads can run in
parallel on a multiprocessor. Also, when a thread performs a blocking system
call, the kernel can schedule another thread for execution.

Lecture 11

CPU Scheduling Concept:

The main objective of CPU scheduling is to maximize CPU utilization, and
process scheduling is the means to achieve it. Scheduling rests on the
following observation:

CPU-I/O Burst Cycle:


The success of CPU scheduling depends on an observed property of processes:
process execution consists of a cycle of CPU execution and I/O wait, and
processes alternate between these two states. Process execution begins with
a CPU burst. That is followed by an I/O burst, which is followed by another
CPU burst, then another I/O burst, and so on. Eventually, the final CPU
burst ends with a system request to terminate execution.

Scheduling Queues: A scheduling queue is generally stored as a linked list. A ready-queue
header contains pointers to the first and final PCBs in the list. Each PCB includes a pointer
field that points to the next PCB in the ready queue.
 Job queue – set of all processes in the system

 Ready queue – set of all processes residing in main memory,


ready and waiting to execute

 Device queues – set of processes waiting for an I/O device

 Processes migrate among the various queues

Schedulers: A process migrates among the various scheduling queues throughout its
lifetime. The operating system must select, for scheduling purposes, processes from
these queues in some fashion. The selection process is carried out by the
appropriate scheduler. Schedulers have two types:
1. Long Term Scheduler (LTS):
   A. Selects which processes should be brought into the ready queue.
   B. Invoked very infrequently (seconds, minutes).
   C. Controls the degree of multiprogramming (number of processes in memory).
2. Short Term Scheduler (STS):
   A. Selects which process should be executed next and allocates the CPU.
   B. Invoked very frequently (milliseconds).

Medium-Term Scheduler:
Some operating systems, such as time-sharing systems, may introduce an
additional, intermediate level of scheduling. The key idea behind the
medium-term scheduler is that sometimes it can be advantageous to remove
processes from memory (and from active contention for the CPU) and thus
reduce the degree of multiprogramming. Later, the process can be reintroduced
into memory, and its execution can be continued where it left off. This
scheme is called swapping.

Dispatcher:

Dispatcher is the module that gives control of the CPU to the process selected by
the short-term scheduler. This function involves:

 Switching Context
o When CPU switches to another process, the system must save the
state of the old process and load the saved state for the new process.
o Context-switch time is overhead; the system does no useful work while
switching.
o Time dependent on hardware support.
 Switching to user mode
 Jumping to the proper location in the user program to restart that program.

The dispatcher should be as fast as possible, given that it is invoked during every
process switch. The time it takes for the dispatcher to stop one process and start
another running is known as dispatch latency.

Q. Write short note on Preemptive Scheduling & Non-Preemptive Scheduling:

Ans. Preemptive scheduling: The CPU can be taken away from a running process.
For example, in priority-based preemptive scheduling, the highest-priority
ready process is always the one currently running.

Non-preemptive scheduling: Once a process enters the running state, it is not
removed from the processor until it finishes its service time or blocks.

Scheduling decisions may take place when a process (1) switches from running
to waiting, (2) switches from running to ready, (3) switches from waiting to
ready, or (4) terminates. Scheduling under circumstances 1 and 4 only is
non-preemptive; otherwise, the scheduling scheme is preemptive.

Scheduling Performance Criteria:

 CPU Utilization: We want to keep the CPU as busy as possible.


Conceptually, CPU utilization can range from 0 to 100 percent. In a real
system, it should range from 40 percent (for a lightly loaded system) to 90
percent (for a heavily used system).

Processor Utilization = (Processor Busy Time / (Processor Busy Time + Processor Idle Time)) * 100

 Throughput: the number of processes that are completed per time unit,
called throughput.

Throughput = No. of Process Completed / Time Unit

 Turnaround Time: The amount of time to execute a particular


process is called turnaround time.

Turnaround Time = T(Process Completed) – T(Process Submitted)
 Waiting Time: the amount of time that a process spends waiting in the ready
queue.

Waiting Time = Turnaround Time – Processing Time

 Response Time: time from the submission of a request until the


first response is produced. This measure, called response time.

Response Time = T(First Response) – T(Submission of Request)

 Optimization Criteria:
 Max CPU utilization
 Max throughput
 Min turnaround time
 Min waiting time
 Min response time

Scheduling Algorithms:

A. First-Come, First-Served Scheduling


B. Shortest-Job-First Scheduling
C. Priority Scheduling
D. Round-Robin Scheduling
E. Multilevel Queue Scheduling
F. Multilevel Feedback Queue Scheduling
G. Multiple Processor Scheduling
H. Real Time Scheduling

A. First-Come, First-Served Scheduling (FCFS):

With this scheme, the process that requests the CPU first is allocated the CPU first.
The implementation of the FCFS policy is easily managed with a FIFO queue. When
a process enters the ready queue, its PCB is linked onto the tail of the queue. When
the CPU is free, it is allocated to the process at the head of the queue. The running
process is then removed from the queue.
Example:
Process p1,p2,p3,p4,p5 having arrival time of 0,2,3,5,8 microseconds and processing
time 3,3,1,4,2 microseconds, Draw Gantt Chart & Calculate Average Turn Around
Time, Average Waiting Time, CPU Utilization & Throughput using FCFS.
Processes Arrival Time Processing Time T.A.T. W.T.
T(P.C.)-T(P.S.) TAT- T(Proc.)
P1 0 3 3-0=3 3-3=0
P2 2 3 6-2=4 4-3=1
P3 3 1 7-3=4 4-1=3
P4 5 4 11-5=6 6-4=2
P5 8 2 13-8=5 5-2=3
GANTT CHART:

P1 P2 P3 P4 P5
0 3 6 7 11 13
Average T.A.T. =(3+4+4+6+5)/5 = 22/5 = 4.4 Microsecond
Average W.T. = (0+1+3+2+3)/5 =9/5 = 1.8 Microsecond
CPU Utilization = (13/13)*100 = 100%
Throughput = 5/13 ≈ 0.38 processes per microsecond
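The FCFS numbers above can be checked with a short simulation, a sketch assuming processes are served strictly in arrival order:

```python
def fcfs(procs):
    """procs: list of (name, arrival, burst), sorted by arrival time."""
    time, rows = 0, []
    for name, arrival, burst in procs:
        time = max(time, arrival)       # CPU may sit idle until the process arrives
        time += burst                   # completion time of this process
        tat = time - arrival            # turnaround = completion - submission
        wt = tat - burst                # waiting = turnaround - processing
        rows.append((name, tat, wt))
    return rows, time

rows, end = fcfs([("P1", 0, 3), ("P2", 2, 3), ("P3", 3, 1), ("P4", 5, 4), ("P5", 8, 2)])
avg_tat = sum(t for _, t, _ in rows) / len(rows)   # 22/5 = 4.4
avg_wt = sum(w for _, _, w in rows) / len(rows)    # 9/5 = 1.8
```

Running it reproduces the table: turnaround times 3, 4, 4, 6, 5 and waiting times 0, 1, 3, 2, 3, with the last completion at time 13.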

Lecture 12

B. Shortest-Job-First Scheduling (SJF):

 Associate with each process the length of its next CPU burst. Use
these lengths to schedule the process with the shortest time
 Two schemes:
i. nonpreemptive – once CPU given to the process it cannot be
preempted until completes its CPU burst
ii. preemptive – if a new process arrives with CPU burst length less
than remaining time of current executing process, preempt. This
scheme is known as the Shortest-Remaining-Time-First (SRTF)
 SJF is optimal – gives minimum average waiting time for a given set of
processes

Example:
Process p1,p2,p3,p4 having burst time of 6,8,7,3 microseconds. Draw Gantt Chart
& Calculate Average Turn Around Time, Average Waiting Time, CPU Utilization &
Throughput using SJF.
Processes Burst Time T.A.T. W.T.
T(P.C.)-T(P.S.) TAT- T(Proc.)
P4 3 3-0=3 3-3=0
P1 6 9-0=9 9-6=3
P3 7 16-0=16 16-7=9
P2 8 24-0=24 24-8=16
GANTT CHART

P4 P1 P3 P2
0 3 9 16 24
Average T.A.T. = (3+9+16+24)/4 = 52/4 = 13 microseconds
Average W.T. = (0+3+9+16)/4 = 28/4 = 7 microseconds
CPU Utilization = (24/24)*100 = 100%
Throughput = 4/24 ≈ 0.17
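Non-preemptive SJF with all arrivals at time 0 reduces to serving the processes in order of burst time; a quick check (a minimal sketch):

```python
def sjf(bursts):
    """bursts: dict of name -> burst time; all processes arrive at t=0."""
    time, rows = 0, []
    for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        time += burst                             # run the shortest job to completion
        rows.append((name, time, time - burst))   # (name, turnaround, waiting)
    return rows

rows = sjf({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
avg_tat = sum(t for _, t, _ in rows) / 4   # (3+9+16+24)/4 = 13
avg_wt = sum(w for _, _, w in rows) / 4    # (0+3+9+16)/4 = 7
```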

Example: Non-Preemptive SJF (see figure)

Example: Preemptive SJF (see figure)
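Since the preemptive SJF (SRTF) example refers to a figure, here is a small simulation instead, using as an assumed illustration the process set from Problem 15 below (arrivals 0, 1, 2, 3; bursts 8, 4, 9, 5):

```python
def srtf(procs):
    """procs: list of (name, arrival, burst). Preempt whenever a shorter job is ready."""
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: arr for name, arr, _ in procs}
    done, time = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                                # CPU idle until the next arrival
            time = min(arrival[n] for n in remaining)
            continue
        n = min(ready, key=lambda x: remaining[x])   # shortest remaining time first
        remaining[n] -= 1                            # run one time unit, then re-evaluate
        time += 1
        if remaining[n] == 0:
            done[n] = time                           # record completion time
            del remaining[n]
    return {n: done[n] - arrival[n] for n in done}   # turnaround times

procs = [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]
tat = srtf(procs)
wt = {n: tat[n] - b for n, _, b in procs}   # waiting = turnaround - burst
```

P1 is preempted at t=1 when the shorter P2 arrives; P2 finishes at 5, then P4, P1, and P3 run, giving turnaround times 17, 4, 24, 7.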

Lecture 13

C. Priority Scheduling:

 A priority number (integer) is associated with each process

 The CPU is allocated to the process with the highest priority
(smallest integer = highest priority)
 Problem: Starvation – low-priority processes may never execute
 Solution: Aging – as time progresses, increase the priority of the process

Example: Process p1,p2,p3,p4,p5 having burst time of 10,1,2,1,5 microseconds
and priorities 3,1,4,5,2. Draw Gantt Chart & Calculate Average Turn Around
Time, Average Waiting Time, CPU Utilization & Throughput using Priority
Scheduling.

Processes Priority Processing Time T.A.T. W.T.


T(P.C.)-T(P.S.) TAT- T(Proc.)
P2 1 1 1-0=1 1-1=0
P5 2 5 6-0=6 6-5=1
P1 3 10 16-0=16 16-10=6
P3 4 2 18-0=18 18-2=16
P4 5 1 19-0=19 19-1=18

GANTT CHART:

P2 P5 P1 P3 P4
0 1 6 16 18 19

Average T.A.T. = (1+6+16+18+19)/5 = 60/5 = 12 microseconds
Average W.T. = (0+1+6+16+18)/5 = 41/5 = 8.2 microseconds
CPU Utilization = (19/19)*100 = 100%
Throughput = 5/19 ≈ 0.26
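The priority example can be checked the same way. A sketch assuming all processes arrive at t=0 and a smaller number means higher priority, as above:

```python
def priority_schedule(procs):
    """procs: list of (name, priority, burst); all arrive at t=0."""
    time, rows = 0, []
    for name, prio, burst in sorted(procs, key=lambda p: p[1]):
        time += burst                             # run the highest-priority job next
        rows.append((name, time, time - burst))   # (name, turnaround, waiting)
    return rows

rows = priority_schedule([("P1", 3, 10), ("P2", 1, 1), ("P3", 4, 2),
                          ("P4", 5, 1), ("P5", 2, 5)])
avg_tat = sum(t for _, t, _ in rows) / 5   # (1+6+16+18+19)/5 = 12
avg_wt = sum(w for _, _, w in rows) / 5    # (0+1+6+16+18)/5 = 8.2
```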

Lecture 14

D. Round-Robin Scheduling:
 Each process gets a small unit of CPU time (time quantum), usually 10-100
milliseconds. After this time has elapsed, the process is preempted and added
to the end of the ready queue.

 If there are n processes in the ready queue and the time quantum is q, then
each process gets 1/n of the CPU time in chunks of at most q time units at
once. No process waits more than (n-1)q time units.
 Used for time-sharing & multiuser operating systems.
 RR is essentially FCFS with preemption added.

Example:
Process p1,p2,p3 having processing time of 24,3,3 milliseconds.
Draw Gantt Chart & Calculate Average Turn Around Time, Average
Waiting Time, CPU Utilization & Throughput using Round Robin with
time slice of 4milliseconds.
Processes Processing T.A.T. W.T.
Time
T(P.C.)-T(P.S.) TAT- T(Proc.)
P1 24 30-0=30 30-24=6
P2 3 7-0=7 7-3=4
P3 3 10-0=10 10-3=7

GANTT CHART

P1 P2 P3 P1 P1 P1 P1 P1

0 4 7 10 14 18 22 26 30
Average T.A.T. =(30+7+10)/3 = 47/3 = 15.67 millisecond
Average W.T. = (6+4+7)/3 =17/3 = 5.67 millisecond

CPU Utilization = (30/30)*100 = 100%


Throughput = 3/30=0.1
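The Round-Robin trace can be reproduced with a small queue-based simulation, a sketch assuming all three processes are in the ready queue at t=0:

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (name, burst); all arrive at t=0."""
    queue = deque(procs)
    time, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)               # run one quantum, or until done
        time += run
        if remaining > run:
            queue.append((name, remaining - run))   # back to the tail of the ready queue
        else:
            completion[name] = time
    return completion

done = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
# with arrival at t=0, turnaround time equals completion time
avg_tat = sum(done.values()) / 3
avg_wt = sum(done[n] - b for n, b in [("P1", 24), ("P2", 3), ("P3", 3)]) / 3
```

This yields completions 30, 7, and 10, matching the Gantt chart, with average waiting time 17/3 ≈ 5.67 ms.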

E. Multilevel Queue Scheduling

 Ready queue is partitioned into separate queues, e.g. foreground
(interactive) and background (batch)
 Each queue has its own scheduling algorithm: foreground – RR;
background – FCFS
 Scheduling must be done between the queues:
Fixed-priority scheduling (i.e., serve all from
foreground, then from background); possibility of
starvation.
Time slice – each queue gets a certain amount of CPU time which it
can schedule amongst its processes; e.g. 80% to foreground in RR,
20% to background in FCFS

F. Multilevel Feedback Queue Scheduling
 A process can move between the various queues; aging can be implemented this
way
 Multilevel-feedback-queue scheduler defined by the following parameters:
 number of queues
 scheduling algorithms for each queue
 method used to determine when to upgrade a process
 method used to determine when to demote a process
 method used to determine which queue a process will enter
when that process needs service
Example of Multilevel Feedback Queue
 Three queues:
Q0 – RR with time quantum 8 milliseconds
Q1 – RR time quantum 16 milliseconds
Q2 – FCFS
 Scheduling
A new job enters queue Q0 which is served FCFS. When it gains
CPU, job receives 8 milliseconds. If it does not finish in 8
milliseconds, job is moved to queue Q1.

At Q1 the job is again served FCFS and receives 16 additional
milliseconds. If it still does not complete, it is preempted and moved
to queue Q2.
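The three-queue scheme above can be sketched for a single job, a minimal illustration tracing how a job is demoted through Q0 (quantum 8 ms), Q1 (quantum 16 ms), and Q2 (FCFS):

```python
def mlfq_single_job(burst):
    """Trace one job through Q0 (RR, q=8), Q1 (RR, q=16), Q2 (FCFS).
    Returns (completion time, index of the queue where the job finished)."""
    time, level = 0, 0
    for quantum in (8, 16):            # Q0, then Q1
        run = min(quantum, burst)
        time += run
        burst -= run
        if burst == 0:
            return time, level         # finished within this queue's quantum
        level += 1                     # did not finish: demoted to the next queue
    time += burst                      # Q2 is FCFS: the job runs to completion
    return time, 2

# a 30 ms job: 8 ms in Q0, 16 ms in Q1, and the final 6 ms in Q2
finish, final_queue = mlfq_single_job(30)
```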

G. Multiple Processor Scheduling
 CPU scheduling more complex when multiple CPUs are available
 Homogeneous processors within a multiprocessor
 Load sharing
 Asymmetric multiprocessing – only one processor accesses the system data
structures

H. Real Time Scheduling


 Hard real-time systems – required to complete a critical task
within a guaranteed amount of time

 Soft real-time computing – requires that critical processes receive
priority over less fortunate ones.

Problem 1: Process p1,p2,p3 having burst time of 24,3,3 microseconds. Draw
Gantt Chart & Calculate Average Turn Around Time, Average Waiting Time, CPU
Utilization & Throughput using FCFS.
[Ans. Average TAT = 27 microsecond, Average WT = 17 microseconds, CPU
Utilization = 100%, Throughput = 0.1]

Problem 2: Consider the set of process A,B,C,D,E having arrival time of 0,2,3,3.5,4
and execution time 4,7,3,3,5 and the following scheduling algorithms:
a. FCFS
b. Round Robin (quantum=2)
c. Round Robin (quantum=1)
If there is a tie between processes, it is broken in favour of the oldest process.
i) draw the GANTT Chart and find the average waiting time & response time for the
algorithms. Comment on your result which one is better and why?

ii) If the scheduler takes 0.2 unit of CPU time in context switch for a completed
job & 0.1 unit of additional CPU time for incomplete jobs for saving their context,
calculate the percentage of CPU time wasted in each case.

Problem 3: Processes A,B,C,D,E having arrival time 0,0,1,2,2 and execution time
10,2,3,1,4 and priority 3,1,3,5,2. Draw the Gantt Chart and find average waiting time
and response time of the process set.

Problem 4: Process p1,p2,p3 having burst time 7,3,9 and priority 1,2,3 and
arrival time 0,4,7.
Calculate turn around time and average waiting time using
i) SJF
ii) priority. (both preemptive)

Problem 5: Process p1,p2,p3,p4 having arrival time 0,1,2,3 and burst time 8,4,9,5.
Calculate turn around time and waiting time using SJF, FCFS.

Lecture 15

Deadlock: A set of blocked processes each holding a resource and waiting to acquire a
resource held by another process in the set.
Deadlock Problem: Bridge Crossing Example

a) Traffic flows in only one direction.
b) Each section of the bridge can be viewed as a resource.
c) If a deadlock occurs, it can be resolved if one car backs up (preempt
resources and rollback).
d) Several cars may have to be backed up if a deadlock occurs.
e) Starvation is possible.

System Model:
A system consists of a finite number of resources to be distributed among a
number of competing processes. The resources are partitioned into several types,
each consisting of some number of identical instances. Resource types include
memory space, CPU cycles, files, and I/O devices (such as printers and DVD drives).

If a system has two CPUs, then the resource type CPU has two instances. Similarly,
the resource type printer may have five instances.
 Resource types R1, R2, . . ., Rm ( CPU cycles, memory space, I/O devices)
 Each resource type Ri has Wi instances.
 Each process utilizes a resource as follows:
i. Request: If the request cannot be granted immediately (for
example, if the resource is being used by another process), then the
requesting process must wait until it can acquire the resource.
ii. Use: The process can operate on the resource (for example, if the
resource is a printer, the process can print on the printer).
iii. Release: The process releases the resource.

Deadlock Characterization: Deadlock can arise if four conditions hold


simultaneously.
i. Mutual exclusion: only one process at a time can use a resource.
ii. Hold and wait: a process holding at least one resource is waiting to acquire
additional resources held by other processes.
iii. No preemption: a resource can be released only voluntarily by the process
holding it, after that process has completed its task.
iv. Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such
that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource
that is held by P2, …, Pn–1 is waiting for a resource that is held
by Pn, and Pn is waiting for a resource that is held by P0.

Resource-Allocation Graph:
Deadlocks can be described more precisely in terms of a directed graph called a
system resource-allocation graph. A set of vertices V and a set of edges E.
 V is partitioned into two types:
i. P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.
ii. R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.
 request edge – directed edge Pi → Rj
 assignment edge – directed edge Rj → Pi

Example of a Resource Allocation Graph Resource Allocation Graph With A

Deadlock

Lecture 16

Deadlock Prevention: Restrain the ways request can be made


i. Mutual Exclusion – not required for sharable resources; must hold for
nonsharable resources.
ii. Hold and Wait – must guarantee that whenever a process requests a
resource, it does not hold any other resources.
 Require process to request and be allocated all its resources
before it begins execution, or allow process to request resources only
when the process has none.
 Low resource utilization; starvation possible.
iii. No Preemption –
 If a process that is holding some resources requests another
resource that cannot be immediately allocated to it, then all resources
currently being held are released.

 Preempted resources are added to the list of resources for which the
process is waiting.

 Process will be restarted only when it can regain its old resources, as
well as the new ones that it is requesting.
iv. Circular Wait – impose a total ordering of all resource types, and require that
each process requests resources in an increasing order of enumeration.

Lecture 17

Deadlock Avoidance: Requires that the system has some additional a priori information
available.
 Simplest and most useful model requires that each process declare the maximum
number of resources of each type that it may need.

 The deadlock-avoidance algorithm dynamically examines the resource-allocation


state to ensure that there can never be a circular-wait condition.

 Resource-allocation state is defined by the number of available and allocated


resources, and the maximum demands of the processes.

A. Safe State:
 When a process requests an available resource, system must decide
if immediate allocation leaves the system in a safe state.
 System is in a safe state if there exists a sequence <P1, P2, …, Pn> of
ALL the processes in the system such that for each Pi, the resources that
Pi can still request can be satisfied by the currently available resources
plus the resources held by all the Pj, with j < i.
 That is:
i. If Pi's resource needs are not immediately available, then Pi can wait
until all Pj have finished.
ii. When Pj is finished, Pi can obtain needed resources, execute, return
allocated resources, and terminate.
iii. When Pi terminates, Pi+1 can obtain its needed resources, and so on.
 If a system is in a safe state ⇒ no deadlocks.
 If a system is in an unsafe state ⇒ possibility of deadlock.
 Avoidance ⇒ ensure that a system will never enter an unsafe state.

B. Avoidance Algorithm
A. Single instance of a resource type: Use a resource-allocation graph
B. Multiple instances of a resource type: Use the banker’s algorithm

Resource-Allocation Graph Scheme


 Claim edge Pi → Rj indicates that process Pi may request resource Rj;
represented by a dashed line.
 Claim edge converts to request edge when a process requests a resource.
 Request edge converted to an assignment edge when the resource is
allocated to the process.
 When a resource is released by a process, assignment edge reconverts to a claim
edge.
 Resources must be claimed a priori in the system.

Resource-Allocation Graph Unsafe State In Resource-Allocation


Graph

Resource-Allocation Graph Algorithm
 Suppose that process Pi requests a resource Rj
 The request can be granted only if converting the request edge
to an assignment edge does not result in the formation of a cycle
in the resource allocation graph
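The cycle test just described can be sketched with a standard depth-first search. A minimal illustration, assuming the graph is a dict of directed edges over process and resource vertices (request edges P → R plus assignment edges R → P):

```python
def has_cycle(graph):
    """graph: dict mapping each vertex to the list of vertices it points to."""
    WHITE, GRAY, BLACK = 0, 1, 2       # unvisited / on current DFS path / finished
    color = {v: WHITE for v in graph}

    def dfs(v):
        color[v] = GRAY
        for w in graph.get(v, ()):
            if color.get(w, WHITE) == GRAY:        # back edge: a cycle exists
                return True
            if color.get(w, WHITE) == WHITE and dfs(w):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and dfs(v) for v in graph)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1 -> deadlock cycle
g = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
```

Here `has_cycle(g)` is true, so granting the requests would create a cycle; a graph where P1 merely requests a free R1 has no cycle.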

Banker’s Algorithm
Discussed in separate ppt file.
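Since the details are covered in a separate file, here is only a minimal sketch of the safety check at the heart of the banker's algorithm (Need = Maximum − Allocated), fed with the snapshot from question 8 below as an assumed reading of that table:

```python
def is_safe(available, allocated, maximum):
    """Banker's safety check: find an order in which every process can finish."""
    n, m = len(allocated), len(available)
    need = [[maximum[i][j] - allocated[i][j] for j in range(m)] for i in range(n)]
    work, finished, order = list(available), [False] * n, []
    while len(order) < n:
        for i in range(n):
            # a process can run if its remaining need fits in the available resources
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocated[i][j]   # it finishes and releases everything
                finished[i] = True
                order.append(i)
                break
        else:
            return False, order                  # no process can proceed: unsafe
    return True, order

safe, order = is_safe(available=[7, 7, 10],
                      allocated=[[2, 2, 3], [2, 0, 3], [1, 2, 4]],
                      maximum=[[3, 6, 8], [4, 3, 3], [3, 4, 4]])
```

With this snapshot every process's need fits within the available vector, so the system is safe and <P1, P2, P3> is a safe sequence.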

Deadlock Detection
In this environment, the system must provide:
• An algorithm that examines the state of the system to determine whether a
deadlock has occurred
• An algorithm to recover from the deadlock.

A. Single Instance of Each Resource Type

Resource-Allocation Graph Wait-for Graph

B. Several Instances of a Resource Type


i. Available: A vector of length m indicates the number of available resources of
each type.
ii. Allocation: An n x m matrix defines the number of resources
of each type currently allocated to each process.
iii. Request: An n x m matrix indicates the current request of each process. If
Request[i][j] = k, then process Pi is requesting k more instances of resource type Rj.
Lecture 18
Recovery from Deadlock:

A. Process Termination:

 Abort all deadlocked processes.


 Abort one process at a time until the deadlock cycle is eliminated.
Some other factors are:
1. What is the priority of the process?
2. How long has the process computed, and how much longer will it compute
before completing its designated task?
3. How many and what type of resources has the process used?
(for example, whether the resources are simple to preempt)
4. How many more resources does the process need in order to complete?
5. How many processes will need to be terminated?
6. Is the process interactive or batch?

B. Resource Preemption:

1. Selecting a victim: Which resources and which processes are to be preempted?

2. Rollback: A process whose resources are preempted cannot continue with its
normal execution; it is missing some needed resource. We must roll back the
process to some safe state and restart it from that state.
3. Starvation: How do we guarantee that resources will not always be preempted
from the same process?

IMPORTANT QUESTIONS

1 Explain threads.
2 What do you understand by Process? Explain various states of process with suitable diagram. Explain process
control block.
3 What is a deadlock? Discuss the necessary conditions for deadlock with examples.
4 Describe Banker’s algorithm for safe allocation.
5 What are the various scheduling criteria for CPU scheduling?
6 What is the use of inter-process communication and context switching?
7 Discuss the usage of the wait-for graph method.
8
Consider the following snapshot of a system:

Process   Allocated        Maximum          Available
          R1  R2  R3       R1  R2  R3       R1  R2  R3
P1        2   2   3        3   6   8        7   7   10
P2        2   0   3        4   3   3
P3        1   2   4        3   4   4

Answer the following questions using the banker’s algorithm:

1) What is the content of the matrix need?


2) Is the system in a safe state?

9 Is it possible to have a deadlock involving only a single process? Explain


10 Describe the typical elements of the process control block.

11 What are the various scheduling criteria for CPU scheduling?


12 What is a safe state and an unsafe state?
13 Define Process. Explain various steps involved in change of a process state with neat transition diagram.
14
Consider the following process:

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

What is the average waiting and turn around time for these process with:

FCFS Scheduling
Preemptive SJF Scheduling

15
Consider the following process:

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

Draw Gantt chart and find the average waiting time and average turnaround time:

iii. FCFS Scheduling


iv. SRTF Scheduling

Consider the following process:

Process   Arrival Time   Burst Time   Priority
P1        0              6            3
P2        1              4            1
P3        2              5            2
P4        3              8            4

Draw Gantt chart and find the average waiting time and average turnaround time:
(i) SRTF Scheduling
(ii) Round robin (time quantum:3)
16 What is the need for Process Control Block (PCB)?
17 Draw process state transition diagram
18
Define the multilevel feedback queues scheduling.
19 Discuss the performance criteria for CPU Scheduling.
