Chapter 2 Processes and Process Management
2.1 Process
11/03/2023 School of Computing, DDUIoT 1
Contents
Process and Thread
The concept of multithreading
Interprocess communication
Race conditions
Critical sections and mutual exclusion
Process scheduling
Preemptive and non-preemptive scheduling
Scheduling policies
Deadlock
Deadlock prevention
Deadlock detection
Deadlock avoidance
Process concept
• Early systems
– One program was executed at a time, and that single
program had complete control of the system.
• Modern operating systems allow multiple programs to be
loaded into memory and executed concurrently.
• This requires firm control over the execution of
programs.
• The notion of a process emerged to control the
execution of programs.
• Conceptually, each process has its own virtual CPU. In
reality, of course, the real CPU switches back and forth
from process to process.
Processes and Threads
Similarities:
• Both share the CPU, and only one thread/process is
active (running) at a time.
• Like processes, threads within a process execute
sequentially.
• Like processes, a thread can create children.
• Like processes, if one thread is blocked, another thread
can run.
Differences:
• Unlike processes, threads are not independent of one
another.
• Unlike processes, all threads can access every address
in the task.
• Unlike processes, threads are designed to assist one
another.
Thread usage
o There are several reasons for having multiple threads:
o In many applications, multiple activities are going on
at once. By decomposing such an application into multiple
sequential threads that run in quasi-parallel, the
programming model becomes simpler.
o Threads are lighter weight than processes, so they are
easier (i.e., faster) to create and destroy than processes.
o Having multiple threads within an application can provide
higher performance.
• If there is substantial computing and also substantial I/O,
threads allow these activities to overlap, thus
speeding up the application.
o Threads are useful on systems with multiple CPUs.
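The overlap argument can be illustrated with a small sketch: one thread blocks on (simulated) I/O while a second thread computes. The task functions, names, and timings below are invented for illustration only.

```python
import threading
import time

def io_task(results):
    # Simulate a blocking I/O operation (e.g., waiting for a disk read).
    time.sleep(0.2)
    results["io"] = "done"

def compute_task(results):
    # Simulate CPU-bound work that proceeds while the other thread blocks.
    results["compute"] = sum(i * i for i in range(100_000))

results = {}
t1 = threading.Thread(target=io_task, args=(results,))
t2 = threading.Thread(target=compute_task, args=(results,))
start = time.time()
t1.start(); t2.start()
t1.join(); t2.join()
elapsed = time.time() - start
# The elapsed time is close to the longer of the two activities,
# not their sum, because the I/O wait and the computation overlap.
print(results["io"], elapsed)
```

A single-threaded version would pay the sleep and the computation back to back; the threaded version pays roughly the maximum of the two.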
[Figure: the medium-term scheduler swaps a process out of
memory (ready queue) to the job queue on disk.]
Degree of multi-programming is the number of processes
that are placed in the ready queue waiting for execution
by the CPU.
[Figure: the long-term scheduler moves jobs from the job queue
on disk into memory, and the medium-term scheduler swaps
processes between memory and disk; the number of processes in
memory (Process 1 … Process 5) is the degree of
multiprogramming.]
CPU scheduling
CPU scheduling is the method of selecting a process from the
ready queue to be executed by the CPU whenever the CPU
becomes idle.
o CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready state
4. Terminates
Scheduling criteria
Turnaround time:
For a particular process, it is the total time needed for process
execution (from the time of submission to the time of completion).
It is the sum of the process execution time and its waiting times
(to get memory, perform I/O, …).
Waiting time:
The waiting time for a specific process is the sum of all periods
it spends waiting in the ready queue.
Response time:
It is the time from the submission of a process until the first
response is produced (the time the process takes to start
responding).
A scheduler should maximize:
CPU utilization.
System throughput.
And minimize:
Turnaround time.
Waiting time.
Response time.
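Turnaround and waiting time follow directly from the definitions above, given each process's arrival, burst, and completion times. A minimal sketch (process names and numbers are hypothetical):

```python
def metrics(arrival, burst, completion):
    # Turnaround time = completion time - submission (arrival) time;
    # waiting time = turnaround time - CPU execution (burst) time.
    tat = {p: completion[p] - arrival[p] for p in arrival}
    wt = {p: tat[p] - burst[p] for p in arrival}
    return tat, wt

# Hypothetical three-process workload (all numbers illustrative).
arrival = {"P1": 0, "P2": 0, "P3": 0}
burst = {"P1": 24, "P2": 3, "P3": 3}
completion = {"P1": 24, "P2": 27, "P3": 30}
tat, wt = metrics(arrival, burst, completion)
print(tat)  # {'P1': 24, 'P2': 27, 'P3': 30}
print(wt)   # {'P1': 0, 'P2': 24, 'P3': 27}
```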
FCFS Scheduling
Consider processes P1 (burst time 24 ms), P2 (3 ms), and P3
(3 ms), arriving in the order P1, P2, P3, all at time 0.
Gantt chart: P1 (0-24), P2 (24-27), P3 (27-30)

Process   Waiting Time (WT)   Turnaround Time (TAT)
P1        0                   24
P2        24                  27
P3        27                  30

Hence, average waiting time = (0+24+27)/3 = 17 milliseconds
Repeat the previous example, assuming that the processes arrive
in the order P2, P3, P1, all at time 0.
Gantt chart: P2 (0-3), P3 (3-6), P1 (6-30)

Process   Waiting Time (WT)   Turnaround Time (TAT)
P2        0                   3
P3        3                   6
P1        6                   30

Hence, average waiting time = (0+3+6)/3 = 3 milliseconds
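Both FCFS runs above can be reproduced with a short simulation. This is a sketch (the dictionary-based representation is my own); it assumes all processes arrive at time 0, as in the examples.

```python
def fcfs(order, burst):
    # Serve processes strictly in arrival order; all arrive at time 0.
    t = 0
    wt, tat = {}, {}
    for p in order:
        wt[p] = t        # waiting time = time spent before first run
        t += burst[p]
        tat[p] = t       # turnaround = completion time (arrival is 0)
    return wt, tat

burst = {"P1": 24, "P2": 3, "P3": 3}
wt1, _ = fcfs(["P1", "P2", "P3"], burst)
wt2, _ = fcfs(["P2", "P3", "P1"], burst)
print(sum(wt1.values()) / 3)  # 17.0
print(sum(wt2.values()) / 3)  # 3.0
```

The same workload gives very different average waiting times depending on arrival order, which is exactly the convoy effect FCFS suffers from.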
Shortest-Job-First (SJF) scheduling
SJF selects from the ready queue the process with the smallest
CPU burst (execution) time.
[Figure: processes with execution times 10, 5, 18, and 7 in the
ready queue are serviced by SJF in the order 5, 7, 10, 18.]
Consider the following set of processes, with the length of the
CPU burst time given in milliseconds:

Process   Burst Time
P1        6
P2        8
P3        7
P4        3

The processes arrive in the order P1, P2, P3, P4, all at time 0.
1. Using FCFS
Gantt chart: P1 (0-6), P2 (6-14), P3 (14-21), P4 (21-24)

Process   Waiting Time (WT)   Turnaround Time (TAT)
P1        0                   6
P2        6                   14
P3        14                  21
P4        21                  24
2. Using SJF
Gantt chart: P4 (0-3), P1 (3-9), P3 (9-16), P2 (16-24)

Process   Waiting Time (WT)   Turnaround Time (TAT)
P1        3                   9
P2        16                  24
P3        9                   16
P4        0                   3

Hence, average waiting time = (3+16+9+0)/4 = 7 milliseconds

Now consider a set of processes with different arrival times:

Process   Arrival Time   Burst Time
P1        0              7
P2        2              4
P3        4              1
P4        5              4

1. Using non-preemptive SJF
Gantt chart: P1 (0-7), P3 (7-8), P2 (8-12), P4 (12-16)
Hence, average waiting time = (0+6+3+7)/4 = 4 milliseconds

2. Using SRTF (preemptive SJF)
Gantt chart: P1 (0-2), P2 (2-4), P3 (4-5), P2 (5-7), P4 (7-11),
P1 (11-16)
Hence, average waiting time = (9+1+0+2)/4 = 3 milliseconds
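The SRTF result can be checked with a unit-by-unit simulation. This is a sketch; breaking ties by earlier arrival is an assumption of the sketch, not something the slides specify.

```python
def srtf(procs):
    # procs: {name: (arrival, burst)}; advance time one unit at a
    # time, always running the ready process with the shortest
    # remaining time (preempting whoever was running).
    remaining = {p: b for p, (a, b) in procs.items()}
    finish = {}
    t = 0
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= t]
        if not ready:
            t += 1
            continue
        p = min(ready, key=lambda q: (remaining[q], procs[q][0]))
        remaining[p] -= 1
        t += 1
        if remaining[p] == 0:
            finish[p] = t
            del remaining[p]
    # waiting time = completion - arrival - burst
    return {p: finish[p] - procs[p][0] - procs[p][1] for p in procs}

procs = {"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)}
wt = srtf(procs)
print(wt, sum(wt.values()) / 4)  # average waiting time = 3.0
```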
Round Robin scheduling
Round Robin is one of the oldest, simplest, fairest, and most
widely used scheduling algorithms.
The CPU is allocated for one quantum of time (also called a time
slice) Q to each process in the ready queue.
If the process blocks or finishes before its quantum has elapsed,
the CPU switch is done when the process blocks, of course.
This scheme is repeated until all processes are finished.
A new process is added to the end of the ready queue.
Setting the quantum too short causes too many process switches
and lowers CPU efficiency, but setting it too long may cause poor
response to short interactive requests.
Consider the following set of processes, with the length of the
CPU burst time given in milliseconds:

Process   Burst Time
P1        24
P2        3
P3        3

The processes arrive in the order P1, P2, P3, all at time 0.
Use RR scheduling with Q=2 and Q=4.

RR with Q=2
Gantt chart: P1 (0-2), P2 (2-4), P3 (4-6), P1 (6-8), P2 (8-9),
P3 (9-10), P1 (10-30)

Process   Waiting Time (WT)   Turnaround Time (TAT)
P1        6                   30
P2        6                   9
P3        7                   10

RR with Q=4
Gantt chart: P1 (0-4), P2 (4-7), P3 (7-10), P1 (10-30)

Process   Waiting Time (WT)   Turnaround Time (TAT)
P1        6                   30
P2        4                   7
P3        7                   10

Hence, average waiting time = (6+6+7)/3 ≈ 6.33 milliseconds for
Q=2, and (6+4+7)/3 ≈ 5.67 milliseconds for Q=4.
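The RR bookkeeping above can be sketched with a FIFO queue. This sketch is valid only because all processes arrive at time 0; it does not model the ordering of a preempted process relative to newly arriving ones.

```python
from collections import deque

def round_robin(order, burst, q):
    # All processes arrive at time 0, queued in the given order;
    # a preempted process goes to the back of the ready queue.
    queue = deque(order)
    remaining = dict(burst)
    t = 0
    finish = {}
    while queue:
        p = queue.popleft()
        run = min(q, remaining[p])   # run one quantum, or less if done
        t += run
        remaining[p] -= run
        if remaining[p] == 0:
            finish[p] = t
        else:
            queue.append(p)
    # waiting time = turnaround (= finish, arrival 0) - burst
    return {p: finish[p] - burst[p] for p in order}

burst = {"P1": 24, "P2": 3, "P3": 3}
wt2 = round_robin(["P1", "P2", "P3"], burst, 2)
wt4 = round_robin(["P1", "P2", "P3"], burst, 4)
print(wt2)  # {'P1': 6, 'P2': 6, 'P3': 7}
print(wt4)  # {'P1': 6, 'P2': 4, 'P3': 7}
```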
Priority scheduling
The CPU is allocated to the process with the highest priority.
Problem: starvation; a very low priority process may wait
indefinitely while higher priority processes keep arriving.
Solution: aging; gradually increase the priority of processes
that have been waiting in the system for a long time.
Consider the following set of processes, with the length of the
CPU burst time given in milliseconds (a smaller priority number
means a higher priority):

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

Gantt chart: P2 (0-1), P5 (1-6), P1 (6-16), P3 (16-18), P4 (18-19)
Hence, average waiting time = (6+0+16+18+1)/5 = 8.2 milliseconds
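With all processes present at time 0, non-preemptive priority scheduling reduces to sorting by priority. A sketch using a classic textbook process set consistent with the 8.2 ms average (the convention that a lower number means higher priority is an assumption of this sketch):

```python
def priority_schedule(procs):
    # procs: {name: (burst, priority)}; lower number = higher priority.
    # All processes arrive at time 0; run them in priority order.
    order = sorted(procs, key=lambda p: procs[p][1])
    t = 0
    wt = {}
    for p in order:
        wt[p] = t          # waiting time before this process starts
        t += procs[p][0]
    return wt

procs = {"P1": (10, 3), "P2": (1, 1), "P3": (2, 4),
         "P4": (1, 5), "P5": (5, 2)}
wt = priority_schedule(procs)
print(wt, sum(wt.values()) / 5)  # average waiting time = 8.2
```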
Multi-level queue scheduling
• The ready queue is partitioned into separate queues, e.g.:
• foreground (interactive)
• background (batch)
• Each queue has its own scheduling algorithm, e.g.:
• foreground - RR
• background - FCFS
• Scheduling must also be done between the queues:
• Fixed priority scheduling (i.e., serve all from foreground,
then from background); possibility of starvation.
• Time slice: each queue gets a certain amount of CPU
time which it can schedule amongst its processes; e.g.,
80% to foreground in RR; 20% to background in FCFS.
There are two types:
Without feedback: processes cannot move between queues.
With feedback: processes can move between queues.
Multi-level queue scheduling without feedback:
• Divide the ready queue into several queues.
• Each queue has a specific priority and its own
scheduling algorithm (FCFS, …).
[Figure: three queues (Queue 0, Queue 1, Queue 2) in decreasing
order of priority.]
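Fixed-priority selection between queues can be sketched as follows. The queue contents and names are invented; queue 0 is assumed to be the highest priority.

```python
from collections import deque

def pick_next(queues):
    # Fixed-priority selection: always serve the highest-priority
    # non-empty queue (index 0 is highest). Note that a busy queue 0
    # starves the lower queues, as the slides point out.
    for level, q in enumerate(queues):
        if q:
            return level, q.popleft()
    return None  # all queues empty

queues = [deque(["sys1", "sys2"]),   # queue 0: system processes
          deque(["batch1"]),         # queue 1: batch jobs
          deque(["bg1"])]            # queue 2: background jobs
print(pick_next(queues))  # (0, 'sys1') -- queue 0 is served first
```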
Deadlock
Deadlock: a set of processes is deadlocked if each process in
the set is waiting for an event (such as the release of a
resource) that only another process in the set can cause.
[Figure: Process A holds one resource and requests the resource
held by Process B, while Process B requests the resource held by
Process A.]
Hence, blocked processes will never change state (explain why?):
because the resource each has requested is held by another
waiting process.
Resources
o A major class of deadlocks involves resources.
o Deadlocks can occur when processes have been granted
exclusive access to devices, data records, files, and so forth.
o In general, the objects granted to a process are referred to as
resources.
o A resource can be a hardware device (e.g., a tape drive) or a
piece of information (e.g., a locked record in a database).
o A process must request a resource before using it and release
it after making use of it. Each process utilizes a resource as
follows:
Request: a process requests an instance of a resource type.
If the resource is free, the request is granted; otherwise
the process must wait until it acquires the resource.
Use: the process uses the resource for its operations.
Release: the process releases the resource.
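The request/use/release protocol maps directly onto a mutual-exclusion lock. A minimal sketch in Python (the "printer" resource and task body are hypothetical):

```python
import threading

printer = threading.Lock()   # a hypothetical exclusive resource

def use_printer(results):
    printer.acquire()        # Request: block until the resource is free
    try:
        results.append("printing")   # Use: exclusive access here
    finally:
        printer.release()    # Release: let the next waiter proceed

results = []
threads = [threading.Thread(target=use_printer, args=(results,))
           for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # ['printing', 'printing', 'printing']
```

The `try`/`finally` ensures the Release step happens even if the Use step fails, which is exactly why held-and-never-released resources are a deadlock ingredient.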
Resource types
[Figure: with 27 KB of memory available, allocating 10 KB to a
low-priority process leaves 17 KB available, while allocating
20 KB to a high-priority process leaves 7 KB available.]
Deadlock conditions
1. Mutual exclusion: only one process can use a resource at a
time.
2. Hold and wait: a process holding at least one resource is
waiting for additional resources held by other processes.
3. No preemption: a resource is released only by the process
holding it, after it has completed its task.
4. Circular wait: a set of processes each waits for another one
in a circular fashion.
Note: all four conditions must hold for a deadlock to occur; if
one condition is absent, a deadlock cannot exist.
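Since all four conditions are necessary, removing any one of them prevents deadlock. A common way to remove circular wait is to impose a global ordering on resources; a sketch (lock names and thread bodies are invented):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def task(first, second, log, name):
    # Both threads acquire the two locks in the same global order
    # (lock_a before lock_b), so a cycle of waits can never form
    # and the circular-wait condition is broken.
    with first:
        with second:
            log.append(name)

log = []
# Each thread needs both locks; both request lock_a, then lock_b.
t1 = threading.Thread(target=task, args=(lock_a, lock_b, log, "t1"))
t2 = threading.Thread(target=task, args=(lock_a, lock_b, log, "t2"))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(log))  # ['t1', 't2'] -- both threads completed
```

Had one thread taken lock_b first while the other took lock_a first, each could end up holding one lock and waiting forever for the other.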
Circular wait
There exists a set {P0, P1, P2, …, Pn} of waiting processes such
that P0 is waiting for a resource held by P1, P1 is waiting for a
resource held by P2, …, and Pn is waiting for a resource held by
P0.
Safe and Unsafe States
• A state is said to be safe if there is some scheduling order
in which every process can run to completion even if all of
them suddenly request their maximum number of resources
immediately.
• A state is said to be unsafe if no such order exists that
guarantees every process can complete.
• If a system is in a safe state, then there are no deadlocks.
Deadlock Avoidance Algorithms (contd.)
Banker’s Algorithm
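The safety check at the heart of the Banker's Algorithm repeatedly looks for a process whose remaining need can be met from the currently available resources, lets it finish, and reclaims its allocation. A sketch (the single-resource example numbers are a classic textbook illustration, not from these slides):

```python
def is_safe(available, allocation, maximum):
    # Banker's safety check: the state is safe iff some order exists
    # in which every process can obtain its remaining need and finish.
    need = {p: [m - a for m, a in zip(maximum[p], allocation[p])]
            for p in allocation}
    work = list(available)
    finished = set()
    progress = True
    while progress:
        progress = False
        for p in allocation:
            if p not in finished and all(
                    n <= w for n, w in zip(need[p], work)):
                # p can run to completion, then releases its allocation.
                work = [w + a for w, a in zip(work, allocation[p])]
                finished.add(p)
                progress = True
    return len(finished) == len(allocation)

# Classic single-resource example: 10 instances total, 3 free.
available = [3]
allocation = {"A": [3], "B": [2], "C": [2]}
maximum = {"A": [9], "B": [4], "C": [7]}
print(is_safe(available, allocation, maximum))  # True (order B, C, A)
```

With only 1 instance free instead of 3, no process's remaining need can be met, so the same check reports the state as unsafe.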
Communication deadlock
o Another kind of deadlock can occur in communication systems
(e.g., networks), in which two or more processes communicate by
sending messages: each process waits for something that another
process must send, and no message ever arrives.
o A common arrangement is that process A sends a request message
to process B, and then blocks until B sends back a reply message.
o Communication deadlocks cannot be prevented by ordering the
resources (since there are none) or avoided by careful scheduling
(since there are no moments when a request could be postponed).
o The technique that can usually be employed to break
communication deadlocks is the timeout.
o In most network communication systems, whenever a message is
sent to which a reply is expected, a timer is also started.
o If the timer goes off before the reply arrives, the sender of
the message assumes that the message has been lost and sends it
again (and again and again if needed).
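The timeout-and-retransmit technique can be sketched with in-memory queues standing in for the network. All names, the "flaky" receiver, and the timings are invented for this sketch.

```python
import queue
import threading

def send_with_retry(channel, replies, message, timeout, max_tries):
    # Send, then wait for a reply; if the timer expires before the
    # reply arrives, assume the message was lost and retransmit.
    for attempt in range(1, max_tries + 1):
        channel.put(message)
        try:
            return replies.get(timeout=timeout), attempt
        except queue.Empty:
            continue  # timer went off: retransmit
    raise TimeoutError("no reply after retries")

channel, replies = queue.Queue(), queue.Queue()

def flaky_receiver():
    # Hypothetical peer that "loses" the first request (consumes it
    # without replying) and acknowledges the second.
    channel.get()            # first message lost
    channel.get()
    replies.put("ack")

threading.Thread(target=flaky_receiver).start()
reply, attempts = send_with_retry(channel, replies, "req",
                                  timeout=0.2, max_tries=5)
print(reply, attempts)  # ack 2
```

Without the timeout, the sender would block forever on the lost first message, which is precisely the communication deadlock described above.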
Livelock
o In some situations, polling (busy waiting) is used to enter a
critical region or access a resource.
o Consider a pair of processes (process A and process B) using
two resources.
o Process A holds resource 1 and requests resource 2, while
process B holds resource 2 and requests resource 1.
o If the processes wait for the required resource by polling
rather than blocking, this situation is called livelock.
o Thus we do not have a deadlock (because no process is blocked),
but we have something functionally equivalent to deadlock: the
processes make no further progress.