Process
• Deadlocks : system model, deadlock characterization, methods for handling deadlocks, deadlock
prevention, deadlock avoidance, deadlock detection, recovery from deadlock.
Process concept
• Process is a dynamic entity
– Program in execution
• Program code
– Contains the text section
• Program becomes a process when
– the executable file is loaded into memory
– various resources are allocated
• Processor, registers, memory, files, devices
• One program code may create several processes
– One user opened several MS Word instances
– Equivalent code/text section
– Other resources may vary
[Figure: a user program is loaded from disk into memory]
Process State
• As a process executes, it changes state
– new: The process is being created
– ready: The process is waiting to be assigned to
a processor
– running: Instructions are being executed
– waiting: The process is waiting for some event
to occur
– terminated: The process has finished execution
Process State diagram
[Figure: state-transition diagram – new, ready, running, waiting, terminated; new processes wait in the job pool; a single processor runs one process at a time under multitasking/time sharing]
How to represent a process?
• Process is a dynamic entity
– Program in execution
• Program code
– Contains the text section
• Program counter (PC)
• Values of different registers
– Stack pointer (SP) (maintains process stack)
• Return address, Function parameters
– Program status word (PSW): flag bits C Z O S I K
– General purpose registers
• Main Memory allocation
– Data section
• Variables
– Heap
• Dynamic allocation of memory during process execution
Process Control Block (PCB)
• Process is represented in the operating system
by a Process Control Block
Information associated with each process
• Process state
• Program counter
• CPU registers
– Accumulator, index reg., stack pointer, general-purpose reg., Program Status Word (PSW)
• CPU scheduling information
– Priority info, pointer to scheduling queue
• Memory-management information
– Memory information of a process
– Base register, Limit register, page table, segment table
• Accounting information
– CPU usage time, Process ID, Time slice
• I/O status information
– List of open files=> file descriptors
– Allocated devices
Process Representation in Linux
Represented by the C structure task_struct:
pid_t pid;                    /* process identifier */
long state;                   /* state of the process */
unsigned int time_slice;      /* scheduling information */
struct task_struct *parent;   /* this process's parent */
struct list_head children;    /* this process's children (doubly linked list) */
struct files_struct *files;   /* list of open files */
struct mm_struct *mm;         /* address space of this process */
CPU Switch From Process to Process
• Context switch
Ready Queue And Various I/O Device Queues
• Job queue
• Ready queue
• Device queue
• Queues are linked lists of PCBs
[Figure: device queue in which many processes are waiting for the disk]
Process Scheduling
• We have various queues
• Single processor system
– Only one CPU => only one running process
• Selection of one process from a group of
processes
– Process scheduling
Process Scheduling
• Scheduler
– Selects a process from a set of processes
• Two kinds of schedulers
1. Long term schedulers, job scheduler
– A large number of processes are submitted (more than
memory capacity)
– Stored in disk
– Long term scheduler selects process from job pool and
loads in memory
2. Short term scheduler, CPU scheduler
– Selects one process among the processes in the
memory (ready queue)
– Allocates to CPU
Representation of Process Scheduling
[Queueing diagram: the long-term scheduler admits jobs from the job pool; the CPU scheduler selects a process, which the dispatcher dispatches to the CPU; a parent process may block at wait()]
Dispatcher
• Dispatcher module gives control of the CPU to
the process selected by the short-term
scheduler; this involves:
– switching context
– switching to user mode
– jumping to the proper location in the user
program to restart that program
[Figure: a shell creates a child process – fork() creates the initial child PCB, exec() updates the PCB with the new program, then a context switch occurs]
Context switch
ISR for context switch (current = PCB of the currently running process):
Context_switch()
{
    disable interrupts;
    switch to kernel mode;
    Save_PCB(current);
    Insert(ready_queue, current);
    next = CPU_Scheduler(ready_queue);
    Remove(ready_queue, next);
    Dispatcher(next);
    switch to user mode;
    enable interrupts;
}
Dispatcher(next)
{
    Load_PCB(next);   /* updates the PC */
}
Interprocess Communication
• Processes within a system may be independent or cooperating
Utility of the CPU scheduler
• CPU-bound process – few, long CPU bursts
• I/O-bound process – many short CPU bursts
• Typical distribution: a large number of short CPU bursts and a small number of long CPU bursts
Preemptive and non-preemptive
• Selects from among the processes in ready
queue, and allocates the CPU to one of them
– Queue may be ordered in various ways (not
necessarily FIFO)
• CPU scheduling decisions may take place when
a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
• Scheduling under 1 and 4 is nonpreemptive
• All other scheduling is preemptive
Preemptive scheduling
• Issues with cooperating processes:
– Consider access to shared data
• Needs process synchronization to avoid race conditions
– Consider preemption while in kernel mode
• e.g., while updating the ready or device queue
• If preempted mid-update and another process then runs "ps -el", it may read inconsistent queue data (race condition)
Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible
Performance evaluation
• Ideally many processes with several CPU and I/O bursts
FCFS example: P1 (burst 24), P2 (burst 3), P3 (burst 3) arrive in the order P1, P2, P3
• Gantt chart: P1 [0–24], P2 [24–27], P3 [27–30]
• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order: P2, P3, P1
• The Gantt chart for the schedule is: P2 [0–3], P3 [3–6], P1 [6–30]
• Average waiting time: (6 + 0 + 3)/3 = 3
SJF (nonpreemptive) example
• Gantt chart: P4 [0–3], P1 [3–9], P3 [9–16], P2 [16–24]
Shortest-remaining-time-first (preemptive SJF) example
• Gantt chart: P1 [0–1], P2 [1–5], P4 [5–10], P1 [10–17], P3 [17–26]
• Average waiting time = [(10-1)+(1-1)+(17-2)+(5-3)]/4 = 26/4 = 6.5 msec
• Commonly, α set to ½
Examples of Exponential Averaging
(τn+1 = α tn + (1 − α) τn, where tn is the length of the nth CPU burst and τn the predicted value)
• α = 0
– τn+1 = τn
– Recent burst time does not count
• α = 1
– τn+1 = tn
– Only the actual last CPU burst counts
• If we expand the formula, we get:
τn+1 = α tn + (1 − α) α tn−1 + … + (1 − α)^j α tn−j + … + (1 − α)^(n+1) τ0
• Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor
Prediction of the Length of the Next CPU Burst
[Figure: measured CPU burst lengths ti and predictions τi plotted over time]
Priority Scheduling
• A priority number (integer) is associated with each process
• The CPU is allocated to the process with the highest priority
(smallest integer ≡ highest priority)
• Set priority value (e.g., with nice in UNIX)
– Internal (time limit, memory req., ratio of I/O vs CPU burst)
– External (importance, funds, etc.)
• SJF is priority scheduling where priority is the inverse of predicted
next CPU burst time
• Two types
– Preemptive
– Nonpreemptive
• Example Gantt chart: P2 [0–1], P5 [1–6], P1 [6–16], P3 [16–18], P4 [18–19]
Round Robin example (q = 4), bursts P1 = 24, P2 = 3, P3 = 3
• Gantt chart: P1 [0–4], P2 [4–7], P3 [7–10], P1 [10–14], [14–18], [18–22], [22–26], [26–30]
• Very large q => behaves like FCFS: no context-switch overhead, but poor response time
• Very small q => too much context-switch overhead, slowing the execution time
• q must be large with respect to context-switch time, otherwise overhead is too high
• q usually 10 ms to 100 ms; context-switch time < 10 microsec
Effect on Turnaround Time
• TT depends on the time quantum and CPU burst time
• Better if most processes complete their next CPU burst in a single q
• Large q => processes in the ready queue suffer (long waits)
• Small q => completion will take more time
Example (q = 6), bursts P1 = 6, P2 = 3, P3 = 1, P4 = 7
• Gantt chart: P1 [0–6], P2 [6–9], P3 [9–10], P4 [10–16], P4 [16–17]
• Average turnaround time = (6 + 9 + 10 + 17)/4 = 10.5
Process classification
• Foreground process
– Interactive
– Frequent I/O request
– Requires low response time
• Background Process
– Less interactive
– Like batch process
– Allows high response time
• Can use different scheduling algorithms for two types
of processes ?
Multilevel Queue
• Ready queue is partitioned into separate queues, eg:
– foreground (interactive)
– background (batch)
• Process permanently assigned to a given queue
– Based on process type, priority, memory req.
• Each queue has its own scheduling algorithm:
– foreground – RR
– background – FCFS
• Scheduling must be done between the queues:
– Fixed priority scheduling; (i.e., serve all from foreground
then from background).
– Possibility of starvation.
Multilevel Queue Scheduling
• No process in the batch queue could run unless the upper queues are empty
Another possibility
• Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR, 20% to background in FCFS
Multilevel Feedback Queue
• In a plain multilevel queue, a process is permanently assigned to a queue when it enters the system
– It does not move
• Flexibility!
– Multilevel-feedback-queue scheduling
• A process can move between the various queues
• Separate processes based on their CPU bursts
– Process using too much CPU time can be moved to lower
priority
– Interactive process => Higher priority
• Move process from low to high priority
– Implement aging
Example of Multilevel Feedback Queue
• Three queues:
– Q0 – RR with time quantum 8 milliseconds
– Q1 – RR with time quantum 16 milliseconds
– Q2 – FCFS
• Scheduling
– A new job enters queue Q0
• When it gains the CPU, the job receives 8 milliseconds
• If it does not finish in 8 milliseconds, the job is moved to queue Q1
– At Q1 the job again receives 16 milliseconds
• If it still does not complete, it is preempted and moved to queue Q2
Multilevel Feedback Queues
Combine round-robin and priority scheduling in such a way that the system
executes the highest-priority process and runs processes with the same
priority using round-robin scheduling (q=2).
Solution 1
Problem 2
Consider three processes (process id 0, 1, 2 respectively) with compute time bursts 2, 4 and 8
time units. All processes arrive at time zero. Consider the longest remaining time first (LRTF)
scheduling algorithm. In LRTF ties are broken by giving priority to the process with the lowest
process id. Compute average turn around time
Process AT BT TAT
P0 0 2
P1 0 4
P2 0 8
Solution 2
2. Consider three processes (process ID 0,1,2 respectively) with compute time bursts 2,4 and 8 time units. All processes
arrive at time zero. Consider the longest remaining time first (LRTF) scheduling algorithm. In LRTF, ties are broken by
giving priority to the process with the lowest process ID. The average turnaround time is:
P2 P2 P2 P2 P1 P2 P1 P2 P0 P1 P2 P0 P1 P2  (time units 0–14)
Turnaround times: P0 = 12, P1 = 13, P2 = 14; average TAT = (12 + 13 + 14)/3 = 13
Circular (bounded) buffer
• Producer writes at index "in"; consumer reads at index "out"
• Buffer empty => in == out
• Buffer full => (in + 1) % BUFFER_SIZE == out
[Figure: circular buffer of slots 0–5 holding items A, B, C between "out" and "in"]
• Shared data
#define BUFFER_SIZE 10
typedef struct {
...
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
Bounded-Buffer – Producer
while (true) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing -- no free buffers */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}
Bounded Buffer – Consumer
while (true) {
    while (in == out)
        ; /* do nothing -- nothing to consume */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
}
Race condition example: print spooler
[Figure: processes A and B both read Next_free_slot = 7 from the spooler directory; both write their file name into slot 7, then the counter advances to 8 – one of the two entries is silently lost]
• The outcome depends on the exact interleaving of the processes
– Hard to debug
Critical Section Problem
• Critical region
– Part of the program where the shared memory is
accessed
• Mutual exclusion
– Prohibit more than one process from reading and
writing the shared data at the same time
Critical Section Problem
• Consider system of n processes {p0, p1, … pn-1}
• Each process has critical section segment of code
– Process may be changing common variables, updating
table, writing file, etc
– When one process in critical section, no other may be
in its critical section
• Critical section problem is to design protocol to
solve this
• Each process must ask permission to enter critical
section in entry section, may follow critical
section with exit section, then remainder section
Critical Section Problem
do {
entry section
critical section
exit section
remainder section
} while (TRUE);
2. Progress–
• If no process is executing in its critical section
• and there exist some processes that wish to enter their critical
section
• then only the processes outside remainder section (i.e. the
processes competing for critical section, or exit section) can
participate in deciding which process will enter CS next
3. Bounded Waiting - A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has
made a request to enter its critical section and before that request is
granted
Assume that each process executes at a nonzero speed
No assumption concerning relative speed of the n processes
Critical Section Problem
• Disable interrupt
– After entering critical region, disable all interrupts
– Since clock is just an interrupt, no CPU
preemption can occur
– Disabling interrupt is useful for OS itself, but not
for users…
Mutual Exclusion with busy waiting
• Lock variable
– A software solution
– A single, shared variable (lock)
– before entering critical region, programs test the
variable,
– if 0, enter CS;
– if 1, the critical region is occupied
while (true)
{
    while (lock != 0);   /* busy wait */
    lock = 1;
    CS();
    lock = 0;
    Non_CS();
}
– What is the problem?
Concepts
• Busy waiting
– Continuously testing a variable until some value
appears
• Spin lock
– A lock using busy waiting is called a spin lock
• Process Pi
do {
while (TestAndSet(lock)) ;
critical section
lock = false;
remainder section
}
Swap Instruction
Sleep and Wakeup
Producer-Consumer Problem
• Lost wakeup (signal missing)
– Shared variable: counter
– The consumer reads counter == 0 but is preempted before it falls asleep
– The producer's wakeup signal is then lost; the consumer sleeps forever
typedef struct {
int value;
struct process *list;
} semaphore;
Semaphore
wait(S)   (S: a semaphore variable)
• When a process P executes the wait(S) and finds
• S==0
– Process must wait => block()
– Places the process into a waiting queue associated with S
– Switch from running to waiting state
• Otherwise decrement S
Signal(S)
When a process P executes the signal(S)
– Check, if some other process Q is waiting on the semaphore S
– Wakeup(Q)
– Wakeup(Q) changes the process from waiting to ready state
• Otherwise increment S
Semaphore (wait and signal)
• Implementation of wait (atomic/indivisible):
wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;   /* list of PCBs */
        block();
    }
}
• Implementation of signal (atomic/indivisible):
signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);   /* which process is picked for unblocking may depend on policy */
    }
}
Usage of Semaphore
• Counting semaphore – integer value can range over
an unrestricted domain
– Control access to a shared resource with finite elements
– Wish to use => wait(S)
– Releases resource=>signal(S)
– Used for synchronization
• Binary semaphore – integer value can range only
between 0 and 1
– Also known as mutex locks
– Used for mutual exclusion
Ordering Execution of Processes using
Semaphores (Synchronization)
• Execute statement B in Pj only after statement A
executed in Pi
• Use semaphore flag initialized to 0
• Code:
Pi:  Stmt. A;  signal(flag)
Pj:  wait(flag);  Stmt. B
• Shared data:
semaphore mutex; /* initially mutex = 1 */
• Process Pi:
do {
wait(mutex);
critical section
signal(mutex);
remainder section
} while (1);
Producer-consumer problem
: Semaphore
• Solve producer-consumer problem
– Full: counting the slots that are full; initial value
0
– Empty: counting the slots that are empty, initial
value N
– Mutex: prevent access the buffer at the same
time, initial value 1 (binary semaphore)
Readers-Writers Problem
• A database is shared among a number of concurrent
processes
– Readers – only read the data set; they do not perform any updates
– Writers – can both read and write
• Shared Data
– Database
– Semaphore mutex initialized to 1
– Semaphore wrt initialized to 1
– Integer readcount initialized to 0
Readers-Writers Problem
• Task of the writer: just lock the dataset and write
• Task of the first reader: lock the dataset (blocking out writers)
• Task of the last reader: release the lock and wake up any waiting writer
[Figure: one writer vs. several concurrent readers]
Readers-Writers Problem (Cont.)
• The structure of a writer process
do {
wait (wrt) ;
// writing is performed
signal (wrt) ;
} while (TRUE);
Readers-Writers Problem (Cont.)
• The structure of a reader process
do {
wait (mutex) ;
readcount ++ ;
if (readcount == 1)
wait (wrt) ;
signal (mutex) ;
// reading is performed
wait (mutex) ;
readcount - - ;
if (readcount == 0)
signal (wrt) ;
signal (mutex) ;
} while (TRUE);
Dining Philosophers Problem
• Ensure that two neighboring philosophers do not seize the same fork
First solution: each fork is implemented as a semaphore
[Code figure: philosopher i waits on both fork semaphores, eats, signals both forks, then thinks, inside a while (TRUE) loop]
Second solution
• After taking the left fork, the philosopher checks to see if the right fork is available
• If not, puts down the left fork
Limitation
• Random delay (exponential backoff) is not going to help for critical systems
Dining Philosophers Problem
Third solution
• Guard the fork pick-up/put-down with a single mutex: wait(mutex); ... signal(mutex);
• Keep a state array recording each philosopher's state
Final solution
[Code figure: the complete solution using the state array and one semaphore per philosopher]
The Sleeping Barber Problem
Barber sleeps on “Customer”
Customer sleeps on “Barber”
For customer:
Checking the waiting
room and informing
the barber makes its
critical section
Deadlock
The Deadlock Problem
• A set of blocked processes each holding a resource and
waiting to acquire a resource held by another process in
the set
• Example
– System has 2 disk drives
– P1 and P2 each hold one disk drive and each needs another one
• Example
– semaphores A and B, initialized to 1
P0:           P1:
wait (A);     wait (B);
wait (B);     wait (A);
Introduction To Deadlocks
• Pi requests an instance of Rj: request edge Pi → Rj
• Pi is holding an instance of Rj: assignment edge Rj → Pi
Example of a Resource Allocation Graph
No cycle; No
deadlock
One Resource of Each Type
• Figure 6-5. (a) A resource graph. (b) A cycle
extracted from (a).
P3 requests R2
Graph With A Cycle But No Deadlock
P1->R1->P3->R2->P1
• If the resource allocation graph does not have a
cycle
• System is not in a deadlocked state
• If there is a cycle
• May or may not be in a deadlocked state
Deadlock Modeling
• Figure 6-4. An example of how deadlock occurs and how it can be avoided.
[Figure sequence: one request ordering leads the resource graph into a cycle – deadlock; suspending process B at the right moment avoids it]
Deadlock Handling
Strategies for dealing with deadlocks:
1. Detection and recovery. Let deadlocks occur, detect them, take action.
2. Dynamic avoidance by careful resource allocation.
3. Prevention, by structurally negating one of the four required conditions.
4. Just ignore the problem.
Tutorials
The Sleeping Barber Problem
Challenges
• Actions taken by barber and customer takes unknown amount of time
(checking waiting room, entering shop, taking waiting room chair)
• Scenario 1
– Customer arrives, observe that barber busy
– Goes to waiting room
– While he is on the way, barber finishes the haircut
– Barber checks the waiting room
– Since no one there, Barber sleeps
– The customer reaches the waiting room and waits forever
• Scenario 2
– Two customers arrive at the same time
– Barber is busy
– Both customers try to occupy the same chair!
Barber sleeps on "customer"; customer sleeps on "barber"
• Semaphore customer: the customer informs the barber, "I have arrived; waiting for your service" – with no customer, the barber falls asleep
• Semaphore barber: the barber signals the customer that he is ready
• Mutex: ensures that only one of the participants can change state at once
Problem 1
We want to use semaphores to implement a shared critical
section (CS) among three processes T1, T2, and T3. We want to
enforce the execution in the CS in this order: First T2 must execute
in the CS. When it finishes, T1 will then be allowed to enter the
CS; and when it finishes T3 will then be allowed to enter the CS;
when T3 finishes then T2 will be allowed to enter the CS, and so
on, (T2, T1, T3, T2, T1, T3,…).
P2()
{
D = 2 * B;
B = D - 1;
}
The number of distinct values that B can possibly take after the execution
Problem 4
Consider the reader-writer problem with designated readers. There are n reader
processes, where n is known beforehand. There are one or more writer processes.
Items are stored in a buffer. Every item is written by a writer and is designated for a
particular reader.
Deadlock Detection
• Detection algorithm
• Recovery scheme
Single Instance of Each Resource Type
• In the resource graph, edges Pi → R and R → Pj collapse to a wait-for edge Pi → Pj
[Figure: Allocation matrix and Request matrix for processes P0–P2 over resources R0–R3]
• We say X ≤ Y iff X[i] ≤ Y[i] for all i = 1, 2, …, n
Detection Algorithm
1. Let Work and Finish be vectors of length m and n, respectively
Initialize:
(a) Work = Available
(b) For i = 1, 2, …, n, if Allocationi ≠ 0, then Finish[i] = false; otherwise, Finish[i] = true
• Sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i
Example (Cont.)
• P2 requests an additional instance of type C
Request
ABC
P0 0 0 0
P1 2 0 2
P2 0 0 1
P3 1 0 0
P4 0 0 2
• State of system?
– Can reclaim resources held by process P0, but insufficient resources to fulfill the other processes' requests
– Deadlock exists, consisting of processes P1, P2, P3, and P4
Home work
[Figure: another allocation/request state with its Available vector]
Detection-Algorithm Usage
• When, and how often, to invoke depends on:
– How often a deadlock is likely to occur?
– How many processes will be affected by deadlock?
• If deadlock frequent
– Invoke detection algo frequently
• That is:
– If Pi's resource needs are not immediately available, then Pi can wait until all Pj have finished
– When Pj is finished, Pi can obtain needed resources, execute, return
allocated resources, and terminate
– When Pi terminates, Pi +1 can obtain its needed resources, and so on
Basic Facts
• If a system is in a safe state => no deadlocks
Example: 12 instances of resource R
• State at time t0: free resources = 3 (safe state)
• State at time t1: allocate one more resource to P2 => the state may become unsafe
Avoidance algorithms
Claim edge
Resource-Allocation Graph
Algorithm
• Suppose that process Pi requests a resource Rj
• Grant the request only if converting the request edge Pi → Rj to an assignment edge does not create a cycle
• If no cycle
– Safe state
Unsafe State In Resource-Allocation Graph
[Figure: resource-allocation graph in an unsafe state]
[Figure: Allocation matrix and Max matrix for processes P0–P2 over resources R0–R3]
Deadlock avoidance: flow chart for Pi
• Pi requests resources (Requesti[])
• Provisionally allocate the resources => temporary new state
• Take decision (safety algorithm) on the new state:
– Safe state? yes => make the allocation permanent
– Safe state? no => undo the provisional allocation; Pi must wait
Example (Cont.)
• The content of the matrix Need is defined to be Max – Allocation
Need (A B C):
P0  7 4 3
P1  1 2 2
P2  6 0 0
P3  0 1 1
P4  4 3 1
• The system is in a safe state since the sequence <P1, P3, P4, P2, P0> satisfies the safety criteria
Example: P1 Requests (1,0,2)
• Check that Request ≤ Available: (1,0,2) ≤ (3,3,2) => true
     Allocation   Need    Available
     A B C        A B C   A B C
P0   0 1 0        7 4 3   2 3 0
P1   3 0 2        0 2 0
P2   3 0 2        6 0 0
P3   2 1 1        0 1 1
P4   0 0 2        4 3 1
• Executing the safety algorithm shows that sequence <P1, P3, P4, P0, P2> satisfies the safety requirement
• Can a request for (3,3,0) by P4 be granted?
• Can a request for (0,2,0) by P0 be granted?
Deadlock Prevention
Restrain the ways request can be made
[Figure: processes P0 … Pn and resources R0 … Rn arranged in a chain, illustrating the circular-wait condition]
     Allocation   Request
P0   1 2 1        1 0 3
P1   2 0 1        0 1 2
P2   2 2 1        1 2 0
• Step: With the instances currently available, only the requirement of process P1 can be satisfied. So process P1 is allocated the requested resources; it completes its execution and then frees up the instances of resources it holds.
(Then, Available = [0 1 2] + [2 0 1] = [2 1 3])