CPU SCHEDULING

CPU Scheduling

• processes are managed through the use of multiple queues (or lists) of PCBs; the
word queue (in an OS context) has a loose interpretation
• the job queue contains all jobs submitted to the system, but not yet in main memory
• the ready queue contains all jobs in main memory ready to execute
• each I/O device has a queue of jobs waiting for various I/O operations
• a process is dispatched from the ready queue to the CPU; its processing may cause it
to be put on a device queue
• all of these events are signaled by interrupts
• job scheduling versus process scheduling (or CPU scheduling)
• here we are primarily discussing process scheduling
Process scheduling

• allocating the CPU to a different process to reduce idle time


• each process change requires a context switch
• a context switch is pure overhead (i.e., involves no useful work)
CPU and I/O Bursts
• a process cycles between CPU processing and I/O activity
• a process generally has many short CPU bursts or a few long CPU bursts

• I/O bound processes have many short CPU bursts


• CPU bound processes have few long CPU bursts

• this can affect the choice of CPU scheduling algorithm used in an OS

Preemptive scheduling

• CPU scheduling decisions may take place when a process


1. switches from the running to the waiting state
2. switches from the running to the ready state
3. switches from the waiting to the ready state
4. terminates
• scheduling under conditions 1 and 4 is called non-preemptive (context switch is
caused by the running program)
• scheduling under conditions 2 and 3 is preemptive (context switch caused by external
reasons)
Scheduling Criteria

Each scheduling algorithm favors particular criteria:


• CPU utilization (maximize)
• throughput: number of processes which complete execution per time unit (maximize)
• turnaround time (TA): total amount of time to execute a particular process
(minimize)

Jyoti Verma 1
CPU SCHEDULING

• waiting time: amount of time a process has been waiting in the ready queue
(minimize)
• response time: amount of time it takes from when a request is submitted to when the
response is produced (minimize); does not include the time for a response to be
output
• Some work is being done to minimize response time variance, to promote
predictability.
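
As a quick illustration of how these criteria are computed, here is a minimal Python sketch (the process data is hypothetical, for illustration only) that derives turnaround and waiting time from arrival, burst, and completion times:

```python
# Minimal sketch: turnaround time = completion - arrival,
# waiting time = turnaround - burst (all times in the same unit, e.g. ms).
# The process data below is hypothetical, for illustration only.
processes = [
    {"pid": "P1", "arrival": 0, "burst": 24, "completion": 24},
    {"pid": "P2", "arrival": 0, "burst": 3,  "completion": 27},
    {"pid": "P3", "arrival": 0, "burst": 3,  "completion": 30},
]

for p in processes:
    p["turnaround"] = p["completion"] - p["arrival"]
    p["waiting"] = p["turnaround"] - p["burst"]

print(sum(p["waiting"] for p in processes) / len(processes))      # average waiting time
print(sum(p["turnaround"] for p in processes) / len(processes))   # average turnaround time
```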
CPU Scheduling Algorithms

1. First-Come, First Serve (FCFS or FIFO) (non-preemptive)


2. Priority, e.g., Shortest Job First (SJF) (non-preemptive)
3. Shortest Remaining Time First (SRTF) (preemptive)
4. Round Robin (preemptive)
5. Multi-level Queue
6. Multi-level Feedback Queue

First-Come, First Serve


• non-preemptive scheduling management
• ready queue is managed as a FIFO queue
• example: 3 jobs arrive at time 0 in the following order (batch processing):

Process   Burst Time   Arrival   CT   TA   WT
P1        24           0         24   24   0
P2        3            0         27   27   24
P3        3            0         30   30   27
(CT = completion time, TA = turnaround time, WT = waiting time)
Gantt chart: | P1 0-24 | P2 24-27 | P3 27-30 |

average waiting time: (0+24+27)/3 = 17

average turnaround time: (24+27+30)/3 = 27

consider arrival order: 2, 3, 1

Process   Burst Time   Arrival   CT   TA   WT
P2        3            0         3    3    0
P3        3            0         6    6    3
P1        24           0         30   30   6
Gantt chart: | P2 0-3 | P3 3-6 | P1 6-30 |

average waiting time: (0+3+6)/3 = 3

average turnaround time: (3+6+30)/3 = 13

another example:

Process   Burst Time   Arrival   CT   TA   WT
P1        12           0         12   12   0
P2        6            1         18   17   11
P3        9            4         27   23   14
Gantt chart: | P1 0-12 | P2 12-18 | P3 18-27 |

average waiting time: (0+11+14)/3 = 8.33


average turnaround time: (12+17+23)/3 = 52/3 = 17.33
another example:

Process   Burst Time   Arrival   CT   TA   WT
P1        10           0         10   10   0
P2        29           0         39   39   10
P3        3            0         42   42   39
P4        7            0         49   49   42
P5        12           0         61   61   49
Gantt chart: | P1 0-10 | P2 10-39 | P3 39-42 | P4 42-49 | P5 49-61 |

• average waiting time: (0+10+39+42+49)/5 = 28


• average turnaround time: (10+39+42+49+61)/5 = 40.2
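
The FCFS figures above can be reproduced with a short simulation. Below is a minimal Python sketch (not part of the original notes) that walks the ready queue in arrival order and computes CT, TA, and WT:

```python
# FCFS sketch: run processes strictly in arrival order, one after another.
# Each process is (pid, burst, arrival); the list is assumed sorted by arrival.
def fcfs(processes):
    time, results = 0, []
    for pid, burst, arrival in processes:
        start = max(time, arrival)          # CPU may idle until the job arrives
        completion = start + burst
        turnaround = completion - arrival
        waiting = turnaround - burst
        results.append((pid, completion, turnaround, waiting))
        time = completion
    return results

# Third example above: bursts 12, 6, 9 arriving at 0, 1, 4.
for row in fcfs([("P1", 12, 0), ("P2", 6, 1), ("P3", 9, 4)]):
    print(row)                              # waits 0, 11, 14 -> average 8.33
```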

Priority Scheduling

• associate a priority with each process, allocate the CPU to the process with the highest
priority
• any 2 processes with the same priority are handled FCFS
• SJF is a version of priority scheduling where the priority is defined using the predicted
CPU burst length
• priorities are usually numeric over a range
• high numbers may indicate low priority (system dependent)
• internal (process-based) priorities: time limits, memory requirements, resources
needed, burst ratio
• external (often political) priorities: importance, source (e.g., faculty, student)
• priority scheduling can be non-preemptive or preemptive
• problem: starvation --- low priority processes may never execute because they are
waiting indefinitely for the CPU
• a solution: aging --- increase the priority of a process as time progresses
Priority Scheduling example

Gantt chart: | P2 0-1 | P5 1-6 | P1 6-16 | P3 16-18 | P4 18-19 |

Process   Burst Time   Priority   Arrival   CT   TA   WT
P1        10           3          0         16   16   6
P2        1            1          0         1    1    0
P3        2            4          0         18   18   16
P4        1            5          0         19   19   18
P5        5            2          0         6    6    1

• average waiting time: (6+0+16+18+1)/5 = 8.2


• average turnaround time: (1+6+16+18+19)/5 = 12
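
A minimal sketch of non-preemptive priority scheduling for the example above (Python; assumes all processes arrive at time 0 and that a lower number means higher priority):

```python
# Non-preemptive priority sketch: all processes arrive at time 0,
# lower number = higher priority; ties are handled FCFS because sort is stable.
def priority_schedule(processes):            # processes: (pid, burst, priority)
    time, results = 0, []
    for pid, burst, prio in sorted(processes, key=lambda p: p[2]):
        time += burst                        # CT for this process
        results.append((pid, time, time, time - burst))   # CT, TA = CT, WT = TA - burst
    return results

procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
for row in priority_schedule(procs):
    print(row)                               # waits: P2=0, P5=1, P1=6, P3=16, P4=18
```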

Shortest Job First (SJF)

• associate with each process the length of its next CPU burst
• schedule the process with the shortest time
• two schemes
• non-preemptive: once scheduled, a process continues until the end of its CPU burst
• preemptive: preempt if a new process arrives with a CPU burst of less length than the
remaining time of the currently executing process; known as the Shortest Remaining
Time First (SRTF) algorithm
• SJF is provably optimal: it yields the minimum average waiting time for a given set
of processes; however, the length of the next CPU burst cannot be known exactly and
must be predicted

SJF (non-preemptive) examples

example 1:

Gantt chart: | P4 0-3 | P1 3-9 | P3 9-16 | P2 16-24 |

Process   Burst Time   Arrival   CT   TA   WT
P1        6            0         9    9    3
P2        8            0         24   24   16
P3        7            0         16   16   9
P4        3            0         3    3    0

average waiting time: (3+16+9+0)/4 = 7

average turnaround time: (9+24+16+3)/4 = 13

example 2:

Process   Burst Time   Arrival   CT   TA   WT
P1        7            0         7    7    0
P2        4            2         12   10   6
P3        1            4         8    4    3
P4        4            5         16   11   7
Gantt chart: | P1 0-7 | P3 7-8 | P2 8-12 | P4 12-16 |

average waiting time: (0+6+3+7)/4 = 4


average turnaround time: (7+4+10+11)/4 = 8
example 3:

Process   Burst Time   Arrival   CT   TA   WT
P1        10           0         20   20   10
P2        29           0         61   61   32
P3        3            0         3    3    0
P4        7            0         10   10   3
P5        12           0         32   32   20

Gantt chart: | P3 0-3 | P4 3-10 | P1 10-20 | P5 20-32 | P2 32-61 |

average waiting time: (10+32+0+3+20)/5 = 13


average turnaround time: (20+61+3+10+32)/5 = 25.2
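
The non-preemptive SJF schedules above can be generated with a small sketch (Python, written for these notes rather than taken from them): whenever the CPU is free it picks the shortest burst among the processes that have already arrived.

```python
# Non-preemptive SJF sketch: whenever the CPU is free, run the arrived
# process with the shortest burst (ties broken by earlier arrival).
def sjf(processes):                          # processes: (pid, burst, arrival)
    time, done = 0, []
    pending = sorted(processes, key=lambda p: p[2])
    while pending:
        ready = [p for p in pending if p[2] <= time]
        if not ready:                        # CPU idle until the next arrival
            time = pending[0][2]
            continue
        job = min(ready, key=lambda p: (p[1], p[2]))
        pending.remove(job)
        pid, burst, arrival = job
        time += burst
        done.append((pid, time, time - arrival, time - arrival - burst))  # CT, TA, WT
    return done

# Second example above: expected waits 0, 6, 3, 7 -> average 4.
for row in sjf([("P1", 7, 0), ("P2", 4, 2), ("P3", 1, 4), ("P4", 4, 5)]):
    print(row)
```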

SRTF (preemptive) examples


example 1:
Gantt chart: | P1 0-1 | P2 1-5 | P4 5-10 | P1 10-17 | P3 17-26 |

Process   Burst Time   Arrival   CT   TA   WT
P1        8            0         17   17   9
P2        4            1         5    4    0
P3        9            2         26   24   15
P4        5            3         10   7    2

average waiting time: (9+0+15+2)/4 = 6.5


average turnaround time: (17+4+24+7)/4 = 13
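
The same schedule can be reproduced with a unit-time simulation of SRTF; the sketch below (Python, illustrative only) always runs the arrived process with the least remaining burst:

```python
# SRTF sketch: advance time one unit at a time, always running the
# arrived process with the smallest remaining burst (preemptive SJF).
def srtf(processes):                         # processes: (pid, burst, arrival)
    remaining = {pid: burst for pid, burst, _ in processes}
    arrival = {pid: arr for pid, _, arr in processes}
    time, completion = 0, {}
    while remaining:
        ready = [p for p in remaining if arrival[p] <= time]
        if not ready:                        # nothing has arrived yet
            time += 1
            continue
        pid = min(ready, key=lambda p: remaining[p])
        remaining[pid] -= 1
        time += 1
        if remaining[pid] == 0:
            del remaining[pid]
            completion[pid] = time
    return completion

# Example above: completions should be P1=17, P2=5, P3=26, P4=10.
print(srtf([("P1", 8, 0), ("P2", 4, 1), ("P3", 9, 2), ("P4", 5, 3)]))
```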

Round Robin

• time-sharing (preemptive) scheduler where each process is given access to the CPU
for at most one time quantum (time slice), e.g., 20 milliseconds
• a process may block itself before its time slice expires
• if it uses its entire time slice, it is then preempted and put at the end of the ready
queue
• the ready queue is managed as a FIFO queue and treated as circular
• if there are n processes on the ready queue and the time quantum is q, then each
process gets 1/n of the CPU time in chunks of at most q time units
• no process waits for more than (n-1)q time units
• the choice of how big to make the time slice (q) is extremely important
• if q is very large, Round Robin degenerates into FCFS
• if q is very small, the context switch overhead defeats the benefits

example 1 (q = 20):

Gantt chart: | P1 0-20 | P2 20-37 | P3 37-57 | P4 57-77 | P1 77-97 | P3 97-117 | P4 117-121 | P1 121-134 | P3 134-162 |

Process   Burst Time   Arrival   Start   Wait   Finish   TA
P1        53           0         0       81     134      134
P2        17           0         20      20     37       37
P3        68           0         37      94     162      162
P4        24           0         57      97     121      121

waiting times:

p1: (77-20) + (121-97) = 81

p2: (20-0) = 20

p3: (37-0) + (97-57) + (134-117) = 94

p4: (57-0) + (117-77) = 97

average waiting time: (81+20+94+97)/4 = 73
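
A minimal Round Robin sketch in Python (all processes arrive at time 0, matching the example above; the ready queue is a plain FIFO):

```python
from collections import deque

# Round Robin sketch with time quantum q; processes are (pid, burst)
# and all arrive at time 0, so the ready queue starts full.
def round_robin(processes, q):
    queue, time, finish = deque(processes), 0, {}
    while queue:
        pid, remaining = queue.popleft()
        run = min(q, remaining)              # at most one quantum
        time += run
        if remaining > run:
            queue.append((pid, remaining - run))   # preempted: back to the tail
        else:
            finish[pid] = time
    return finish

# q = 20: finish times should be P1=134, P2=37, P3=162, P4=121.
print(round_robin([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], 20))
```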

Multilevel Queue scheduling

• Used where processes are easily classified into different groups.


• For example, a common division is made between foreground (interactive)
processes and background (batch) processes.
• partitions the ready queue into several separate queues, depending on the process
type.
• The processes are permanently assigned to one queue, generally based on some
property of the process, such as memory size, process priority, or process type.
• Each queue has its own scheduling algorithm.
• Separate queues might be used for foreground and background processes.
• There must be scheduling among the queues, which is commonly implemented as
fixed-priority preemptive scheduling.
• The foreground queue may have absolute priority over the background queue.

Example

Multilevel queue scheduling algorithm with five queues, listed below in order of
priority:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes

• Each queue has absolute priority over lower-priority queues.


• No process in the batch queue, for example, could run unless the queues for system
processes, interactive processes, and interactive editing processes were all empty.
• If an interactive editing process entered the ready queue while a batch process
was running, the batch process would be preempted.
• Another possibility is to time-slice among the queues.
• Here, each queue gets a certain portion of the CPU time, which it can then schedule
among its various processes.
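
A minimal sketch of the dispatch rule under fixed-priority multilevel queue scheduling (Python; the five queue names follow the classes listed above, and the dispatch helper is purely illustrative):

```python
from collections import deque

# Fixed-priority multilevel queue sketch: always dispatch from the
# highest-priority non-empty queue; each queue could internally use
# its own scheduling algorithm (RR, FCFS, ...).
priority_order = ["system", "interactive", "editing", "batch", "student"]
queues = {name: deque() for name in priority_order}

def dispatch():
    for name in priority_order:
        if queues[name]:
            return queues[name].popleft()    # process to run next
    return None                              # all queues empty -> CPU idle
```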

Multi-level feedback queue scheduling

• When the multilevel queue scheduling algorithm is used, processes are permanently
assigned to a queue when they enter the system.
• If there are separate queues for foreground and background processes, for
example, processes do not move from one queue to the other, since processes do
not change their foreground or background nature.
• This setup has the advantage of low scheduling overhead, but it is inflexible.
• The multilevel feedback-queue scheduling algorithm, in contrast, allows a process
to move between queues.
• The processes are separated according to the characteristics of their CPU bursts.
• If a process uses too much CPU time, it will be moved to a lower-priority queue.
• This scheme leaves I/O-bound and interactive processes in the higher-priority
queues.
• Also, a process that waits too long in a lower-priority queue may be moved to a
higher-priority queue. This form of aging prevents starvation.

Example

• Consider a multilevel feedback-queue scheduler with three queues, numbered
from 0 to 2:

Queue-0, Queue-1, Queue-2 (in decreasing order of priority)

• The scheduler first executes all processes in queue 0.


• Only when queue 0 is empty will it execute processes in queue 1. Similarly,
processes in queue 2 will only be executed if queues 0 and 1 are empty.
• A process that arrives for queue 1 will pre-empt a process in queue 2. A process in
queue 1 will in turn be pre-empted by a process arriving for queue 0.
• A process entering the ready queue is put in queue 0.
• A process in queue 0 is given a time quantum of 8 milliseconds. If it does not finish
within this time, it is moved to the tail of queue 1.
• If queue 0 is empty, the process at the head of queue 1 is given a quantum of 16
milliseconds. If it does not complete, it is pre-empted and is put into queue 2.
• Processes in queue 2 are run on an FCFS basis but are run only when queues 0 and
1 are empty.
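
A simplified sketch of this three-queue MLFQ (Python; it assumes all processes arrive at time 0, so the rule that a new arrival pre-empts a lower queue is not modelled, and the burst values in the demo call are hypothetical):

```python
from collections import deque

# MLFQ sketch matching the example: quantum 8 in queue 0, quantum 16 in
# queue 1, FCFS in queue 2; a process that uses its whole quantum is demoted.
def mlfq(processes):                         # processes: (pid, burst)
    q0, q1, q2 = deque(processes), deque(), deque()
    quanta = [8, 16, None]                   # None -> run to completion (FCFS)
    time, finish = 0, {}
    while q0 or q1 or q2:
        level, queue = (0, q0) if q0 else (1, q1) if q1 else (2, q2)
        pid, remaining = queue.popleft()
        slice_ = remaining if quanta[level] is None else min(quanta[level], remaining)
        time += slice_
        if remaining > slice_:               # quantum exhausted: demote
            (q1 if level == 0 else q2).append((pid, remaining - slice_))
        else:
            finish[pid] = time
    return finish

print(mlfq([("P1", 30), ("P2", 5)]))         # hypothetical bursts: {'P2': 13, 'P1': 35}
```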
