Process
• Deadlocks: system model, deadlock characterization, methods for handling deadlocks, deadlock prevention, deadlock avoidance, deadlock detection, recovery from deadlock.
Process concept
[Figure: a user's program on disk is loaded into memory]
• Process is a dynamic entity
– Program in execution
• Program code
– Contains the text section
• Program becomes a process when
– the executable file is loaded into memory
– various resources are allocated
• Processor, registers, memory, files, devices
• One program code may create several processes
– One user opens several MS Word instances
– Equivalent code/text section
– Other resources may vary
Process State
• As a process executes, it changes state
– new: The process is being created
– ready: The process is waiting to be assigned to a processor
– running: Instructions are being executed
– waiting: The process is waiting for some event to occur
– terminated: The process has finished execution
Process State diagram
[Figure: process state diagram, with the job pool feeding new processes; a single processor runs one process at a time under multitasking/time sharing]
How to represent a process?
• Process is a dynamic entity
– Program in execution
• Program code
– Contains the text section
• Program counter (PC)
• Values of different registers
– Stack pointer (SP) (maintains process stack)
• Return address, Function parameters
– Program status word (PSW): flag bits (C, Z, O, S, I, K)
– General purpose registers
• Main Memory allocation
– Data section
• Variables
– Heap
• Dynamic allocation of memory during process execution
Process Control Block (PCB)
• Process is represented in the operating system
by a Process Control Block
Information associated with each process
• Process state
• Program counter
• CPU registers
– Accumulator, index reg., stack pointer, general-purpose reg., Program Status Word (PSW)
• CPU scheduling information
– Priority info, pointer to scheduling queue
• Memory-management information
– Memory information of a process
– Base register, Limit register, page table, segment table
• Accounting information
– CPU usage time, Process ID, Time slice
• I/O status information
– List of open files=> file descriptors
– Allocated devices
Process Representation in Linux
Represented by the C structure task_struct:
pid_t pid; /* process identifier */
long state; /* state of the process */
unsigned int time_slice; /* scheduling information */
struct task_struct *parent; /* this process's parent */
struct list_head children; /* this process's children (doubly linked list) */
struct files_struct *files; /* list of open files */
struct mm_struct *mm; /* address space of this process */
CPU Switch From Process to Process
[Figure: context switch — save the state of one process into its PCB, load the saved state of another]
Scheduling queues: job queue, ready queue, device queues
Ready Queue And Various I/O Device Queues
• Queues are linked lists of PCBs
[Figure: ready queue and device queues; many processes waiting for the disk sit in its device queue]
Process Scheduling
• We have various queues
• Single processor system
– Only one CPU=> only one running process
• Selection of one process from a group of
processes
– Process scheduling
Process Scheduling
• Scheduler
– Selects a process from a set of processes
• Two kinds of schedulers
1. Long-term scheduler (job scheduler)
– A large number of processes are submitted (more than memory capacity)
– Stored on disk (job pool)
– The long-term scheduler selects processes from the job pool and loads them into memory
2. Short-term scheduler (CPU scheduler)
– Selects one process from the processes in memory (ready queue)
– Allocates the CPU to it
Long Term Scheduler / CPU Scheduler
[Figure: job pool on disk → memory (long-term scheduler); ready queue → CPU (short-term scheduler)]
Representation of Process Scheduling
[Figure: queueing diagram — the CPU scheduler selects a process, which is dispatched to the CPU (task of the dispatcher); the parent waits at wait()]
Dispatcher
• Dispatcher module gives control of the CPU to
the process selected by the short-term
scheduler; this involves:
– switching context
– switching to user mode
– jumping to the proper location in the user
program to restart that program
[Figure: shell creates a child — an initial child PCB is created; exec() updates the PCB]
Context switch
• Swapper: ISR for context switch
• current <- PCB of the running process

Context_switch()
{
    disable interrupts;
    switch to kernel mode;
    Save_PCB(current);
    Insert(ready_queue, current);
    next = CPU_Scheduler(ready_queue);
    Remove(ready_queue, next);
    Dispatcher(next);
    switch to user mode;
    enable interrupts;
}

Dispatcher(next)
{
    Load_PCB(next);  /* updates PC, registers */
}
Interprocess Communication
• Processes within a system may be independent or cooperating
Utility of CPU scheduler
[Figure: CPU-burst histogram — a large number of short CPU bursts and a small number of long CPU bursts]
• CPU-bound process: few, long CPU bursts
• I/O-bound process: many, short CPU bursts
Preemptive and nonpreemptive
• Selects from among the processes in ready
queue, and allocates the CPU to one of them
– Queue may be ordered in various ways (not
necessarily FIFO)
• CPU scheduling decisions may take place when
a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
• Scheduling under 1 and 4 is nonpreemptive
• All other scheduling is preemptive
Preemptive scheduling
• Raises issues for cooperating processes:
– Consider access to shared data
• Process synchronization is needed
– Consider preemption while in kernel mode
• e.g., while updating the ready or device queue
• Preempted while a process runs "ps -el" => race condition
Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible
Performance evaluation
• Ideally, many processes with several CPU and I/O bursts
FCFS Scheduling
• Processes P1 (burst 24), P2 (burst 3), P3 (burst 3) arrive at time 0, in the order P1, P2, P3
• The Gantt chart for the schedule is:
P1 P2 P3
0 24 27 30
• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order:
P2, P3, P1
• The Gantt chart for the schedule is:
P2 P3 P1
0 3 6 30
• Waiting time for P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3, much better than before
SJF Scheduling (nonpreemptive example)
P4 P1 P3 P2
0 3 9 16 24
Preemptive SJF (shortest-remaining-time-first)
P1 P2 P4 P1 P3
0 1 5 10 17 26
• Average waiting time = [(10-1)+(1-1)+(17-2)+(5-3)]/4 = 26/4 = 6.5 msec
• Commonly, α set to ½
Examples of Exponential Averaging
τn+1 = α tn + (1 − α) τn
• α = 0
– τn+1 = τn
– Recent burst time does not count
• α = 1
– τn+1 = tn
– Only the actual last CPU burst counts
• If we expand the formula, we get:
τn+1 = α tn + (1 − α) α tn−1 + …
+ (1 − α)^j α tn−j + …
+ (1 − α)^(n+1) τ0
• Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor
Prediction of the Length of the Next CPU Burst
[Figure: predicted (τi) versus actual (ti) CPU burst lengths over successive bursts]
Priority Scheduling
• A priority number (integer) is associated with each process
• The CPU is allocated to the process with the highest priority
(smallest integer highest priority)
• Set priority value (e.g., with nice)
– Internal (time limit, memory req., ratio of I/O vs. CPU burst)
– External (importance, funds, etc.)
• SJF is priority scheduling where priority is the inverse of predicted
next CPU burst time
• Two types
– Preemptive
– Nonpreemptive
Example Gantt chart:
P2 P5 P1 P3 P4
0 1 6 16 18 19
Round Robin example (q = 4):
P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30
• Very large q => behaves like FCFS: no context-switch overhead, but poor response time
• Very small q => too much overhead! Context switches dominate, slowing execution
• q must be large with respect to the context-switch time, otherwise the overhead is too high
• q usually 10 ms to 100 ms, context switch < 10 microsec
Effect on Turnaround Time
• TT depends on the time quantum and the CPU burst times
• Better if most processes complete their next CPU burst in a single quantum q
• Large q => processes in the ready queue suffer (long waits)
• Small q => completion takes more time (more context switches)
Example with q = 6:
P1 P2 P3 P4 P4
0 6 9 10 16 17
• Average turnaround time = (6 + 9 + 10 + 17)/4 = 10.5
Process classification
• Foreground process
– Interactive
– Frequent I/O request
– Requires low response time
• Background Process
– Less interactive
– Like batch process
– Allows high response time
• Can we use different scheduling algorithms for the two types of processes?
Multilevel Queue
• Ready queue is partitioned into separate queues, eg:
– foreground (interactive)
– background (batch)
• Processes are permanently assigned to a given queue
– Based on process type, priority, memory requirements
• Each queue has its own scheduling algorithm:
– foreground – RR
– background – FCFS
• Scheduling must be done between the queues:
– Fixed priority scheduling; (i.e., serve all from foreground
then from background).
– Possibility of starvation.
Multilevel Queue Scheduling
[Figure: multilevel queues in priority order, with the batch queue at the bottom]
• No process in the batch queue can run unless the upper queues are empty
Another possibility
• Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR and 20% to background in FCFS
Multilevel Feedback Queue
• So far, a process is permanently assigned to a queue when it enters the system
– It does not move
• Flexibility!
– Multilevel-feedback-queue scheduling
• A process can move between the various queues
• Separates processes based on their CPU bursts
– A process using too much CPU time can be moved to a lower-priority queue
– Interactive process => higher priority
• Move processes from low to high priority
– Implements aging
Example of Multilevel Feedback Queue
• Three queues:
– Q0 – RR with time quantum 8 milliseconds
– Q1 – RR with time quantum 16 milliseconds
– Q2 – FCFS
• Scheduling
– A new job enters queue Q0
• When it gains the CPU, the job receives 8 milliseconds
• If it does not finish in 8 milliseconds, the job is moved to queue Q1
– At Q1 the job again receives up to 16 milliseconds
• If it still does not complete, it is preempted and moved to queue Q2
Multilevel Feedback Queues
Problem 1
Combine round-robin and priority scheduling in such a way that the system executes the highest-priority process and runs processes with the same priority using round-robin scheduling (q = 2).
Solution 1
Problem 2
Consider three processes (process id 0, 1, 2 respectively) with compute time bursts 2, 4 and 8 time units. All processes arrive at time zero. Consider the longest remaining time first (LRTF) scheduling algorithm. In LRTF, ties are broken by giving priority to the process with the lowest process id. Compute the average turnaround time.
Process | AT | BT | TAT
P0 | 0 | 2 |
P1 | 0 | 4 |
P2 | 0 | 8 |
Solution 2
Gantt chart (each slot is one time unit):
P2 P2 P2 P2 P1 P2 P1 P2 P0 P1 P2 P0 P1 P2
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
• Completion times: P0 = 12, P1 = 13, P2 = 14; arrival is 0, so TAT = completion time
• Average turnaround time = (12 + 13 + 14)/3 = 13
Bounded Buffer
[Figure: circular buffer with producer index in and consumer index out; the producer inserts at in, the consumer removes at out]
• Buffer empty => in == out
• Buffer full => (in + 1) % size == out
• Shared data
#define BUFFER_SIZE 10
typedef struct {
...
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
Bounded-Buffer – Producer
while (true) {
/* produce an item in next_produced */
while (((in + 1) % BUFFER_SIZE) == out)
; /* do nothing -- no free buffers */
buffer[in] = next_produced;
in = (in + 1) % BUFFER_SIZE;
}
Bounded Buffer – Consumer
while (true) {
while (in == out)
; // do nothing -- nothing to consume
next_consumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
}
Race condition example: print spooler
[Figure: two processes insert file names into a spooler directory; both read next_free_slot = 7, Process B writes File 2 at slot 7, the other process then writes File 1 at slot 7 and sets next_free_slot = 8; one entry is lost]
– Race conditions like this are hard to debug
Critical Section Problem
• Critical region
– Part of the program where the shared memory is
accessed
• Mutual exclusion
– Prohibit more than one process from reading and
writing the shared data at the same time
Critical Section Problem
• Consider system of n processes {p0, p1, … pn-1}
• Each process has critical section segment of code
– Process may be changing common variables, updating
table, writing file, etc
– When one process in critical section, no other may be
in its critical section
• Critical section problem is to design protocol to
solve this
• Each process must ask permission to enter critical
section in entry section, may follow critical
section with exit section, then remainder section
Critical Section Problem
do {
entry section
critical section
exit section
remainder section
} while (TRUE);
2. Progress –
• If no process is executing in its critical section
• and there exist some processes that wish to enter their critical sections,
• then only the processes not in their remainder sections (i.e., the processes competing for the critical section, or in their exit sections) can participate in deciding which process will enter its CS next
3. Bounded Waiting – A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted
• Assume that each process executes at a nonzero speed
• No assumption concerning the relative speed of the n processes
Critical Section Problem
• Disable interrupt
– After entering critical region, disable all interrupts
– Since the clock tick is just an interrupt, no CPU preemption can occur
– Disabling interrupts is useful for the OS itself, but not suitable for user processes…
Mutual Exclusion with busy waiting
• Lock variable
– A software solution
– A single, shared variable (lock)
– Before entering the critical region, a process tests the variable:
– if 0, set it to 1 and enter the CS;
– if 1, the critical region is occupied, so wait
– What is the problem?

while (true)
{
    while (lock != 0);   /* busy-wait until the lock is free */
    lock = 1;            /* take the lock */
    CS();
    lock = 0;            /* release the lock */
    Non_CS();
}
Concepts
• Busy waiting
– Continuously testing a variable until some value
appears
• Spin lock
– A lock that uses busy waiting is called a spin lock