
4 CPU SCHEDULING
 CPU Scheduling is the process of switching the CPU among various processes.
 CPU Scheduling is the basis of multiprogrammed operating systems. By switching the
CPU among processes, the operating system can make the computer more productive.
4.1 Basic Concepts
 Scheduling refers to arranging the order in which programs are to be run. This is
done by the computer itself.
 Scheduling is a fundamental operating-system function. Almost all computer resources
are scheduled before use. The CPU is, of course, one of the primary computer
resources.
 Scheduling is central to operating-system design.
1) CPU – I/O Burst Cycle
 A process consists of both CPU-bound and I/O-bound instructions. Process execution is
a cycle of CPU execution and I/O wait, and a process alternates between these two states.
A CPU-bound process generates I/O requests infrequently, spending more of its time
doing computation; an I/O-bound process spends more of its time doing I/O than it
spends doing computation.
 A process begins with a CPU burst, followed by an I/O burst, then another CPU burst,
and so on. Figure 4.1 shows this alternating sequence of CPU and I/O bursts. An I/O-bound
program typically has many very short CPU bursts, while a CPU-bound program might
have a few very long CPU bursts.

Figure 4.1: Alternating sequence of CPU and I/O bursts

2) CPU Scheduler
 The Short-term Scheduler is called the CPU Scheduler. When the CPU becomes
idle, the operating system must select one of the processes in the ready queue to be
executed. The selection process is carried out by the short-term scheduler (or CPU
scheduler).
 The scheduler selects from among the processes in memory that are ready to execute,
and allocates the CPU to one of them.
 The ready queue is not necessarily a first-in, first-out (FIFO) queue. A ready queue
may be implemented as a FIFO queue, priority queue, a tree, or simply an unordered
linked list.
 Conceptually, however, all the processes in the ready queue are lined up waiting for a
chance to run on the CPU. The records in the queues are generally process control
blocks (PCBs) of the processes.
Preemptive vs. Nonpreemptive Scheduling
 Scheduling algorithms can be divided into two categories with respect to how they
deal with clock interrupts.
i) Preemptive Scheduling
 A scheduling discipline is preemptive if, once a process has been given the CPU,
the CPU can be taken away from that process.
 The strategy of allowing processes that are logically runnable to be temporarily
suspended is called Preemptive Scheduling; it is in contrast to the "run to completion"
method.

ii) Nonpreemptive Scheduling


 A scheduling discipline is nonpreemptive if, once a process has been given the
CPU, the CPU cannot be taken away from that process.
 Following are some characteristics of nonpreemptive scheduling:
 In a nonpreemptive system, short jobs are made to wait by longer jobs, but the overall
treatment of all processes is fair.
 In a nonpreemptive system, response times are more predictable because incoming high-
priority jobs cannot displace waiting jobs.
 In nonpreemptive scheduling, the scheduler selects a new job to run only in the
following two situations:
a. When a process switches from the running state to the waiting state.
b. When a process terminates.
Differences between Preemptive and Non-Preemptive Scheduling
 In preemptive scheduling, the CPU can be taken away from a running process; in
non-preemptive scheduling, once the CPU has been allocated, it cannot be taken away
until the process terminates or switches to the waiting state.
 In preemptive scheduling, an incoming high-priority job can displace waiting jobs, so
response times are less predictable; in non-preemptive scheduling it cannot, so response
times are more predictable.
 Preemptive scheduling relies on clock interrupts to temporarily suspend logically
runnable processes; non-preemptive scheduling follows the "run to completion" method.

3) Dispatcher
 Another component involved in the CPU-scheduling function is the dispatcher. The
dispatcher is the module that gives control of the CPU to the process selected by the
short-term scheduler. This function involves the following:
 Switching context
 Switching to user mode
 Jumping to the proper location in the user program to restart that program
 The dispatcher should be as fast as possible, since it is invoked during every process
switch.
 The time it takes for the dispatcher to stop one process and start another running
is known as the dispatch latency.
4.2 Scheduling Criteria
 There are several different criteria to consider when trying to select the "best"
scheduling algorithm for a particular situation and environment, including:
1) CPU Utilization: Keep the CPU as busy as possible. CPU utilization may range from
0 to 100 percent. In a real system, it should range from 40 percent (for a lightly
loaded system) to 90 percent (for a heavily loaded system).
2) Throughput: The number of processes completed per time unit is called Throughput.
3) Turnaround Time: The interval from the time of submission of a process to the time
of completion is the turnaround time. Turnaround time is the sum of the periods spent
waiting to get into memory, waiting in the ready queue, executing on the CPU, and
doing I/O.
4) Waiting Time: Waiting time is the sum of the periods spent waiting in the ready
queue.
5) Response Time: Response time is the time from the submission of a request until the
first response is produced, that is, the time it takes to start responding, not the time
it takes to output the response. The turnaround time is generally limited by the speed
of the output device.
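The following short Python sketch (not part of the text; the function names and the
example timestamps are hypothetical) shows how turnaround time and waiting time are
derived from a process's arrival, completion, and burst times:

def turnaround_time(arrival, completion):
    # Turnaround time: interval from submission to completion.
    return completion - arrival

def waiting_time(arrival, completion, burst):
    # Waiting time: turnaround time minus the time actually spent executing
    # (I/O time is ignored in this simplified, CPU-only example).
    return turnaround_time(arrival, completion) - burst

# Example: a process arrives at t = 0, needs 3 ms of CPU, and finishes at t = 9.
print(turnaround_time(0, 9))   # 9 ms
print(waiting_time(0, 9, 3))   # 6 ms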
4.3 Scheduling Algorithms
 CPU Scheduling deals with the problem of deciding which of the processes in the
ready queue is to be allocated the CPU. The various scheduling algorithms are as
follows.

1) FCFS Scheduling
2) SJF Scheduling
3) Priority Scheduling
4) Round Robin Scheduling
5) Multilevel Queue Scheduling
6) Multilevel Feedback Queue Scheduling
1) First-Come, First-Served (FCFS) Scheduling
 FCFS is the simplest scheduling algorithm. The CPU is allocated to processes in the
order of their arrival (Figure 4.2).
 The implementation of the FCFS policy is easily managed with a FIFO queue.
 When a process enters the ready queue, its process control block (PCB) is linked onto
the tail of the queue.
 When the CPU is free, it is allocated to the process at the head of the queue. The
running process is then removed from the queue.
 The code for FCFS scheduling is simple to write and understand.
 The average waiting time under the FCFS policy is often quite long.
 Consider the following set of processes that arrive at time 0, with the length of the
CPU-burst time given in milliseconds.
Process Burst Time
P1 3
P2 6
P3 4
P4 2
 If the processes arrive in the order of P1, P2, P3, P4 and are served in FCFS order, the
Gantt chart is given below:

i) Gantt Chart

|  P1  |  P2  |  P3  |  P4  |
0      3      9      13     15
4.6 CPU Scheduling

ii) Waiting Time

Process Waiting Time

P1 0
P2 3
P3 9
P4 13

iii) Average Waiting Time

Average Waiting Time = (Sum of waiting times of all processes) / (Number of processes)
                     = (0 + 3 + 9 + 13) / 4
                     = 25 / 4
                     = 6.25 ms

iv) Turnaround Time

 It is computed by subtracting the time the process entered the system from the time it
terminated. Since every process entered at time 0, the turnaround time here is simply
burst time plus waiting time.

Process Turnaround Time


(Burst Time + Waiting Time)
P1 3+0=3
P2 6+3=9
P3 4+9=13
P4 2+13=15

v) Average Turnaround Time:

Average Turnaround Time = (3 + 9 + 13 + 15) / 4 = 40 / 4 = 10 ms
 The FCFS scheduling algorithm is nonpreemptive.

 Once the CPU has been allocated to a process, that process keeps the CPU until it
releases the CPU either by terminating or by requesting I/O.
 The FCFS algorithm is particularly troublesome for time-sharing systems, where each
user needs to get a share of the CPU at regular intervals.

Figure 4.2: First-in-first-out Scheduling
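As a supplement to the worked example above, the following Python sketch simulates
FCFS scheduling under the same assumption that all processes arrive at time 0; the
function name and output format are illustrative only.

def fcfs(bursts):
    """bursts: list of (name, burst_time) pairs, in arrival order."""
    clock, results = 0, []
    for name, burst in bursts:
        waiting = clock                        # time spent in the ready queue
        clock += burst                         # runs to completion (nonpreemptive)
        results.append((name, waiting, waiting + burst))   # (name, wait, turnaround)
    return results

processes = [("P1", 3), ("P2", 6), ("P3", 4), ("P4", 2)]
for name, wait, turnaround in fcfs(processes):
    print(name, wait, turnaround)              # P1 0 3, P2 3 9, P3 9 13, P4 13 15

print(sum(w for _, w, _ in fcfs(processes)) / len(processes))   # 6.25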

2) Shortest-Job-First Scheduling (SJF)


 Shortest-Job-First (SJF) or Shortest-Job-Next (SJN) is a scheduling policy that
selects the waiting process with the smallest execution time.
 If two processes have the same length, FCFS scheduling is used to break the tie.
 Shortest Job first has the advantage of having minimum average waiting time among
all scheduling algorithms.
 SJF algorithm may be either preemptive or nonpreemptive.
 A preemptive SJF algorithm will preempt the currently executing process, whereas a
nonpreemptive SJF algorithm will allow the currently running process to finish its
CPU burst.
 Preemptive SJF scheduling is sometimes called shortest-remaining-time-first
scheduling.
 As an example, consider the following set of processes, with the length of the CPU-
burst time given in milliseconds.
Process Burst Time

P1 3
P2 6
P3 4
P4 2

 The arrival time of each process is 0, and the processes arrive in the order P1, P2, P3,
P4. The Gantt chart, waiting times, and turnaround times are given below:

i) Gantt Chart:

|  P4  |  P1  |  P3  |  P2  |
0      2      5      9      15

ii) Waiting Time:

Process Waiting Time

P1 2

P2 9

P3 5

P4 0

iii) Average Waiting Time:

Average Waiting Time = (2 + 9 + 5 + 0) / 4 = 16 / 4 = 4 ms

iv) Turnaround Time:

It is the sum of Burst Time plus Waiting Time of each process.

Process Turnaround Time

P1 3+2=5
P2 6+9=15
P3 4+5=9
P4 2+0=2
Average Turnaround Time = (5 + 15 + 9 + 2) / 4 = 31 / 4 = 7.75 ms
 The SJF algorithm is optimal: it gives the minimum average waiting time for a given
set of processes. However, SJF cannot be implemented exactly at the level of short-term
CPU scheduling, because there is no way to know the length of the next CPU burst.
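As an illustration only (assuming, as in the example above, that all processes arrive
at time 0 and that burst lengths are known in advance), nonpreemptive SJF reduces to
sorting by burst length and then running the processes back to back:

def sjf(bursts):
    """bursts: list of (name, burst_time); ties keep input (FCFS) order."""
    ordered = sorted(bursts, key=lambda p: p[1])   # shortest burst first; sort is stable
    clock, results = 0, []
    for name, burst in ordered:
        results.append((name, clock, clock + burst))   # (name, wait, turnaround)
        clock += burst
    return results

processes = [("P1", 3), ("P2", 6), ("P3", 4), ("P4", 2)]
for name, wait, turnaround in sjf(processes):
    print(name, wait, turnaround)   # P4 0 2, P1 2 5, P3 5 9, P2 9 15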

3) Priority Scheduling
 Priority Scheduling is a method of scheduling processes based on priority.
 A priority is associated with each process, and the CPU is allocated to the process with
the highest priority. Equal-priority processes are scheduled in FCFS order.
 Priorities are generally some fixed range of numbers, such as 0 to 7, or 0 to 4,095.
Some systems use low numbers to represent low priority; others use low numbers for
high priority.
 The CPU is allocated to the process with the highest priority (smallest integer =
highest priority).
 Priority of the process can be defined either internally or externally.
 Internally defined priorities use some measurable quantity or quantities to compute the
priority of a process. For example, time limits, memory requirements, the number of
open files, and the ratio of average I/O burst to average CPU burst have been used in
computing priorities.
 External priorities are set by criteria outside the operating system, such as the
importance of the process, the type and amount of funds being paid for computer use,
the department sponsoring the work, and other, often political, factors.
 Priority Scheduling can be either preemptive or nonpreemptive. When a process
arrives at the ready queue, its priority is compared with the priority of the currently
running process. A preemptive priority-scheduling algorithm will preempt the CPU if
the priority of the newly arrived process is higher than the priority of the currently
running process. A nonpreemptive priority-scheduling algorithm will simply put the
new process at the head of the ready queue.
 Let us consider the following set of processes, with burst times given in milliseconds.
The arrival time of each process is zero.

Process Burst Time Priority

P1 3 2
P2 6 4
P3 4 1
P4 2 3

The processes arrive in the order P1, P2, P3, P4. The Gantt chart, waiting times, and
turnaround times for the priority scheduling algorithm are given below.

i) Gantt Chart:

|  P3  |  P1  |  P4  |  P2  |
0      4      7      9      15

ii) Waiting Time:

Process Waiting Time

P1 4
P2 9
P3 0
P4 7
iii) Average Waiting Time:

Average Waiting Time = (4 + 9 + 0 + 7) / 4 = 20 / 4 = 5 ms

iv) Turnaround Time:

Process Turnaround Time

P1 3+4=7
P2 6+9=15
P3 4+0=4
P4 2+7=9

v) Average Turnaround Time = (7 + 15 + 4 + 9) / 4 = 35 / 4 = 8.75 ms
 A major problem with priority scheduling is indefinite blocking, or starvation. A
process that is ready to run but waiting for the CPU can be considered blocked; a
priority scheduling algorithm can leave some low-priority processes waiting indefinitely.
 A solution to the problem of indefinite blockage of the low-priority processes is aging.
Aging is a technique of gradually increasing the priority of processes that wait in
the system for a long period of time.
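A minimal Python sketch of nonpreemptive priority scheduling, under the same
assumptions as the example above (all processes arrive at time 0, and a smaller
number means a higher priority), is given below; the function name is illustrative.

def priority_schedule(procs):
    """procs: list of (name, burst_time, priority); lower number = higher priority."""
    ordered = sorted(procs, key=lambda p: p[2])    # highest priority first
    clock, results = 0, []
    for name, burst, _priority in ordered:
        results.append((name, clock, clock + burst))   # (name, wait, turnaround)
        clock += burst
    return results

processes = [("P1", 3, 2), ("P2", 6, 4), ("P3", 4, 1), ("P4", 2, 3)]
for name, wait, turnaround in priority_schedule(processes):
    print(name, wait, turnaround)   # P3 0 4, P1 4 7, P4 7 9, P2 9 15

Aging could be added to such a sketch by periodically decreasing the priority number
of processes that have been waiting for a long time.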
4) Round-Robin Scheduling
 The Round-Robin (RR) Scheduling algorithm is designed especially for time-
sharing systems. It is similar to FCFS scheduling, but preemption is added to
switch between processes (Figure 4.3).
 A small unit of time, called a time quantum (or time slice), is defined. A time
quantum is generally from 10 to 100 milliseconds.
 The ready queue is treated as a circular queue.
 The CPU scheduler goes around the ready queue, allocating the CPU to each process
for a time interval of up to 1 time quantum.
 In round-robin scheduling, processes are dispatched in a FIFO manner.
 New processes are added to the tail of the ready queue. The CPU scheduler picks the
first process from the ready queue, sets a timer to interrupt after 1 time quantum, and
dispatches the process.
 The process may have a CPU burst of less than 1 time quantum. In this case, the
process will release the CPU voluntarily. The scheduler will then proceed to the next
process in the ready queue.
 If the CPU burst of the currently running process is longer than 1 time quantum, the
timer will go off and will cause an interrupt to the operating system.
 A context switch will be executed, and the process will be put at the tail of the ready
queue.
 The average waiting time under the RR policy is often quite long.

 Let us consider the following set of processes, with burst times given in milliseconds.
All the processes arrive at time 0. We can draw the Gantt chart, calculate the waiting
times, and so on.
Process Burst Time
P1 3
P2 6
P3 4
P4 2

Time quantum is 2 milliseconds.

i) Gantt Chart:

|  P1  |  P2  |  P3  |  P4  |  P1  |  P2  |  P3  |  P2  |
0      2      4      6      8      9      11     13     15

ii) Waiting Time:

Process Waiting Time

P1 0+6=6
P2 2+5+2=9
P3 4+5=9
P4 6=6

iii) Average Waiting Time:

Average Waiting Time = (6 + 9 + 9 + 6) / 4 = 30 / 4 = 7.5 ms

iv) Turnaround Time:

Process Turnaround Time

P1 3+6=9
P2 6+9=15
P3 4+9=13
P4 2+6=8

v) Average Turnaround Time = (9 + 15 + 13 + 8) / 4 = 45 / 4 = 11.25 ms

Figure 4.3: Round Robin Scheduling

 In the RR scheduling algorithm, no process is allocated the CPU for more than one
time quantum in a row. If a process’ CPU burst exceeds 1 time quantum, that process
is preempted and is put back in the ready queue.
 The RR scheduling algorithm is preemptive.
 The performance of the RR algorithms depends heavily on the size of the time
quantum. If the time quantum is very large (infinite), the RR policy is the same as the
FCFS policy. If the time quantum is very small (say 1 microsecond), the RR approach
is called processor sharing.
 Turnaround time also depends on the size of the time quantum. In general, the average
turnaround time can be improved if most processes finish their next CPU burst in a
single time quantum.
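The round-robin behaviour in the example above can be sketched with a simple
queue-based simulation in Python; this is an illustration only, assuming all processes
arrive at time 0 and the quantum is 2 ms.

from collections import deque

def round_robin(bursts, quantum):
    """bursts: list of (name, burst_time); returns (name, completion_time) pairs."""
    ready = deque(bursts)                       # the circular ready queue
    clock, finished = 0, []
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)           # run for at most one time quantum
        clock += run
        if remaining > run:
            ready.append((name, remaining - run))   # preempted: back to the tail
        else:
            finished.append((name, clock))          # CPU burst complete
    return finished

processes = [("P1", 3), ("P2", 6), ("P3", 4), ("P4", 2)]
print(round_robin(processes, 2))
# [('P4', 8), ('P1', 9), ('P3', 13), ('P2', 15)] -- completion time = turnaround here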
5) Multilevel Queue Scheduling
 A Multilevel Queue-Scheduling algorithm partitions the ready queue into several
separate queues (Figure 4.4).

Figure 4.4: Multilevel Queue Scheduling



 The processes are permanently assigned to one queue, generally based on some
property of the processes, such as memory size, process priority, or process type.
 Each queue has its own scheduling algorithm or policy.
 For example, separate queues might be used for foreground and background processes.
 The foreground queue might be scheduled by an RR algorithm, while the
background queue is scheduled by an FCFS algorithm.
 In addition, there must be scheduling among the queues, which is commonly
implemented as fixed-priority preemptive scheduling. For example, the foreground
queue may have absolute priority over the background queue.
 Consider a multilevel queue-scheduling algorithm with five queues:
1. System Processes
2. Interactive Processes
3. Interactive Editing Processes
4. Batch Processes
5. Student Processes
 Each queue has absolute priority over lower-priority queues. No process in the batch
queue, for example, could run unless the queues for system processes, interactive
processes, and interactive editing processes were all empty.
 Another possibility is to time slice between the queues. Each queue gets a certain
portion of the CPU time, which it can then schedule among the various processes in its
queue. For instance, in the foreground-background queue example, the foreground
queue can be given 80 percent of the CPU time for RR scheduling among its
processes, whereas the background queue receives 20 percent of the CPU to give to its
processes on an FCFS basis.
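The selection rule for fixed-priority scheduling among the queues can be sketched as
follows; the queue names and their contents are purely illustrative, and each queue
would in practice apply its own internal policy (e.g. RR or FCFS).

from collections import deque

# Queues listed from highest to lowest priority (the dict preserves insertion order).
queues = {
    "foreground": deque(["P1", "P2"]),
    "background": deque(["P3"]),
}

def select_next():
    """Always serve the highest-priority non-empty queue."""
    for level, q in queues.items():
        if q:
            return level, q.popleft()
    return None

print(select_next())   # ('foreground', 'P1')
print(select_next())   # ('foreground', 'P2')
print(select_next())   # ('background', 'P3') -- only once the foreground queue is empty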
6) Multilevel Feedback Queue Scheduling
 Multilevel Feedback Queue Scheduling algorithm allows a process to move
between queues. It uses many ready queues and associates a different priority with
each queue (Figure 4.5).
 The algorithm chooses the process with the highest priority from the occupied queues
and runs that process either preemptively or nonpreemptively.

 If a process uses too much CPU time, it will be moved to a lower-priority queue.
Similarly, a process that waits too long in a lower-priority queue may be moved to a
higher-priority queue. Note that this form of aging prevents starvation.
Example:
 A process entering the ready queue is placed in queue 0.
 If it does not finish within 8 milliseconds, it is moved to the tail of queue 1.
 If it again does not complete within its time quantum there, it is preempted and
placed into queue 2.
 Processes in queue 2 run on an FCFS basis, but only when queue 0 and queue 1 are
empty.

Figure 4.5: Multilevel Feedback Queues

 This scheduling algorithm gives highest priority to any process with a CPU burst
of 8 milliseconds or less. Such a process will quickly get the CPU, finish its CPU
burst, and go off to its next I/O burst.
 In general, a multilevel feedback queue scheduler is defined by the following
parameters:
 The number of queues.
 The scheduling algorithm for each queue.
 The method used to determine when to upgrade a process to a higher-priority queue.
 The method used to determine when to demote a process to a lower-priority
queue.
 The method used to determine which queue a process will enter when that process
needs service.
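A minimal sketch of the demotion rule in the three-queue example above is given below.
The 8 ms quantum for queue 0 comes from the text; the 16 ms quantum assumed for queue 1
and the 30 ms example burst are illustrative choices, and queue 2 simply runs processes
to completion (FCFS).

QUANTA = {0: 8, 1: 16, 2: None}   # None = run to completion (FCFS)

def run_once(level, remaining):
    """Run a process from `level` for one quantum; return (next_level, remaining)."""
    quantum = QUANTA[level]
    if quantum is None or remaining <= quantum:
        return None, 0                        # CPU burst finishes at this level
    return level + 1, remaining - quantum     # used its full quantum: demote

# A 30 ms CPU burst entering queue 0 is demoted twice before it finishes.
level, burst = 0, 30
while level is not None:
    print("queue", level, "remaining burst", burst)
    level, burst = run_once(level, burst)
# queue 0 remaining burst 30 -> queue 1 remaining burst 22 -> queue 2 remaining burst 6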

Exercises:

1. What is meant by CPU Scheduling?


2. What do you mean by the term “Scheduling” in OS?
3. Write short notes on CPU and I/O Burst Cycles.
4. What are the functions of the CPU Scheduler?
5. What are Preemptive and Non-Preemptive Scheduling in OS?
6. Distinguish between Preemptive and Non-Preemptive Scheduling Algorithms in OS.
7. Define Dispatcher in OS.
8. What is meant by Dispatch Latency?
9. Discuss briefly about the Scheduling Criteria.
10. Define Throughput.
11. What do you mean by Turnaround Time?
12. Define Waiting Time.
13. Define Response Time.
14. Discuss briefly about FCFS Scheduling Algorithm.
15. Describe briefly about SJF Scheduling Algorithm.
16. Write short note on Priority Scheduling Algorithm.
17. Explain briefly about R-R Scheduling Algorithm.
18. Discuss about Multilevel Queue Scheduling Algorithm.
19. Discuss about Multilevel Feedback Queue Scheduling Algorithm.
20. Explain briefly the various CPU Scheduling Algorithms in OS.
******************************
