Module-2 Process Scheduling
Process Scheduling
Basic concepts
Scheduling Criteria
Scheduling Algorithms
Thread scheduling
Multiple-processor scheduling
In a single-processor system, only one process can run at a time; any others must wait until the CPU is free.
‣ To maximize CPU utilization, some process should be running at all times.
‣ Several processes are kept in memory at one time.
‣ Every time a running process has to wait, another process can take over use of the CPU.
CPU scheduling is therefore fundamental to operating-system design.
The dispatcher module gives control of the CPU to the process selected by the short-term
scheduler; this involves:
‣ switching context
‣ switching to user mode
‣ jumping to the proper location in the user program to restart that program
The dispatcher needs to run as fast as possible, since it is invoked during every process context switch.
The time it takes for the dispatcher to stop one process and start another process is called
dispatch latency
It is desirable to
‣ Maximize CPU utilization and throughput
‣ Minimize turnaround time, waiting time, and response time
Most often the average measure is optimized; in other cases, it is more important to optimize the minimum or maximum values rather than the average (for example, minimizing the maximum response time in an interactive system).
Gantt Chart (FCFS example: three processes arriving at time 0, with burst times P1 = 24, P2 = 3, P3 = 3, as the charts imply)
Case #1 (arrival order P1, P2, P3):
| P1 | P2 | P3 |
0    24   27   30
Case #2 (arrival order P2, P3, P1):
| P2 | P3 | P1 |
0    3    6    30
Case #2 shows that performance improves markedly when the short processes are allowed to run first; short processes stuck behind a long one suffer what is called the convoy effect.
– Once the CPU has been allocated to a process, that process keeps the CPU until it releases it, either by terminating or by requesting I/O.
When the CPU becomes available, it is assigned to the process that has the smallest next
CPU burst (in the case of matching bursts, FCFS is used)
Two schemes:
Nonpreemptive – once the CPU is given to the process, it cannot be preempted until it completes its CPU burst.
Preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).
| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26
The CPU is allocated to the process with the highest priority (smallest integer =
highest priority)
A preemptive approach will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process
A non-preemptive approach will simply put the new process (with the highest
priority) at the head of the ready queue
SJF is a priority scheduling algorithm where the priority is the predicted next CPU burst time
A major problem is indefinite blocking, or starvation: a low-priority process may never execute. A solution is aging; as time progresses, the priority of a process waiting in the ready queue is increased
| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19
Each process gets a small unit of CPU time (time quantum q), usually 10-100
milliseconds. After this time has elapsed, the process is preempted and added to the
end of the ready queue.
Process  Burst Time
P1       24
P2       3
P3       3
The Gantt chart (time quantum q = 4, as the slice widths imply) is:
| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30
In multi-level feedback queue scheduling, a process can move between the various
queues; aging can be implemented this way
A multi-level feedback queue scheduler is defined by parameters such as:
– Number of queues
– Scheduling algorithm for each queue
– Method used to determine when to upgrade or demote a process
– Method used to determine which queue a process will enter when that process needs service
Thread scheduling refers to the process of determining the execution order and time
allocation for threads in a multithreaded environment.
Distinction between user-level and kernel-level threads
In the many-to-one and many-to-many models, the thread library schedules user-level threads to run on an available LWP (lightweight process)
– Known as process-contention scope (PCS) since scheduling competition is
within the process
Kernel thread scheduled onto available CPU is system-contention scope (SCS) –
competition among all threads in system
The Pthread API allows specifying either PCS or SCS during thread creation.
Pthreads identifies the following contention scope values:
1. PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling.
2. PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling.
The Pthread API provides the following two functions for setting and getting the contention scope:
1. pthread_attr_setscope(pthread_attr_t *attr, int scope)
2. pthread_attr_getscope(pthread_attr_t *attr, int *scope)
In SMP systems,
1. Migration of processes from one processor to another is avoided (migration throws away the cache contents the process has built up), and
2. Instead, processes are kept running on the same processor. This is known as processor affinity.
Two forms:
1. Soft affinity
The OS tries to keep a process on one processor as a matter of policy, but cannot guarantee it.
It is still possible for a process to migrate between processors.
2. Hard affinity
The OS allows a process to specify that it is not to migrate to other processors. Eg: Solaris OS
Load balancing attempts to keep the workload evenly distributed across all
processors in an SMP system.
There are two general approaches to load balancing: push migration and pull
migration.
With push migration, a specific task periodically checks the load on each
processor and, if it finds an imbalance, evenly distributes the load by moving (or
pushing) processes from overloaded to idle or less-busy processors.
Pull migration occurs when an idle processor pulls a waiting task from a busy
processor.