Chapter 4: CPU Scheduling
CPU Scheduling is the process of switching the CPU among various processes.
CPU Scheduling is the basis of multiprogrammed operating systems. By switching the
CPU among processes, the operating system can make the computer more productive.
4.1 Basic Concepts
Scheduling refers to arranging the order in which programs are run; it is carried out by the operating system itself.
Scheduling is a fundamental operating-system function. Almost all computer resources
are scheduled before use. The CPU is, of course, one of the primary computer
resources.
Scheduling is central to operating-system design.
1) CPU – I/O Burst Cycle
A process consists of both CPU-bound and I/O-bound instructions. Process execution is a cycle of
CPU execution and I/O wait, with the process alternating between these two states.
A CPU-bound process generates I/O requests infrequently, spending more of its
time doing computation than an I/O-bound process does. An I/O-bound process spends
more of its time doing I/O than it spends doing computation.
Process execution begins with a CPU burst, which is followed by an I/O burst, then another CPU burst, and so
on. Figure 4.1 shows this sequence of CPU and I/O bursts. An I/O-bound program would
typically have many very short CPU bursts. A CPU-bound program might have a few
very long CPU bursts.
2) CPU Scheduler
The Short-term Scheduler is called the CPU Scheduler. When the CPU becomes
idle, the operating system must select one of the processes in the ready queue to be
executed. The selection process is carried out by the short-term scheduler (or CPU
scheduler).
The scheduler selects from among the processes in memory that are ready to execute,
and allocates the CPU to one of them.
The ready queue is not necessarily a first-in, first-out (FIFO) queue. A ready queue
may be implemented as a FIFO queue, priority queue, a tree, or simply an unordered
linked list.
Conceptually, however, all the processes in the ready queue are lined up waiting for a
chance to run on the CPU. The records in the queues are generally process control
blocks (PCBs) of the processes.
Preemptive Vs Nonpreemptive Scheduling
The Scheduling algorithms can be divided into two categories with respect to how they
deal with clock interrupts.
i) Preemptive Scheduling
A scheduling discipline is preemptive if, once a process has been given the CPU,
the CPU can be taken away from that process.
The strategy of allowing processes that are logically runnable to be temporarily
suspended is called preemptive scheduling; it is in contrast to the "run to completion"
method.
ii) Nonpreemptive Scheduling
A scheduling discipline is nonpreemptive if, once a process has been given the CPU,
the CPU cannot be taken away from that process; the process keeps the CPU until it
releases it by terminating or by requesting I/O.
3) Dispatcher
Another component involved in the CPU-scheduling function is the dispatcher. The dispatcher is the module that
gives control of the CPU to the process selected by the short-term scheduler. This
function involves the following:
Switching context
Switching to user mode
Jumping to the proper location in the user program to restart that program
The dispatcher should be as fast as possible, since it is invoked during every process
switch.
The time it takes for the dispatcher to stop one process and start another running
is known as the dispatch latency.
4.2 Scheduling Criteria
There are several different criteria to consider when trying to select the "best"
scheduling algorithm for a particular situation and environment, including:
1) CPU Utilization: Keep the CPU as busy as possible. CPU utilization may range from
0 to 100 percent. In a real system, it should range from 40 percent (for a lightly
loaded system) to 90 percent (for a heavily loaded system).
2) Throughput: The number of processes completed per time unit is called Throughput.
3) Turnaround Time: The interval from the time of submission of a process to the time
of completion is the turnaround time. Turnaround time is the sum of the periods spent
waiting to get into memory, waiting in the ready queue, executing on the CPU, and
doing I/O.
4) Waiting Time: Waiting time is the sum of the periods spent waiting in the ready
queue.
5) Response Time: Response time is the amount of time from the submission of a request until the
first response is produced, that is, the time it takes to start responding, not the time it
takes to output the response. The turnaround time, by contrast, is generally limited by the speed of the
output device. These definitions are illustrated in the short sketch below.
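As a minimal illustration of these definitions, the Python sketch below computes the turnaround and waiting times of a single process from its arrival time, completion time, and total CPU and I/O time. The function names and the sample numbers are only illustrative and do not come from any particular system.

def turnaround_time(arrival, completion):
    # Turnaround time: the interval from submission to completion.
    return completion - arrival

def waiting_time(arrival, completion, total_cpu, total_io=0):
    # Waiting time: the part of the turnaround time spent only in the
    # ready queue, i.e. excluding CPU execution and I/O.
    return turnaround_time(arrival, completion) - total_cpu - total_io

# Example: a process submitted at time 0 that completes at time 10 after
# 7 ms of CPU bursts and 1 ms of I/O has spent 2 ms in the ready queue.
print(turnaround_time(0, 10))      # 10
print(waiting_time(0, 10, 7, 1))   # 2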
4.3 Scheduling Algorithms
CPU Scheduling deals with the problem of deciding which of the processes in the
ready queue is to be allocated the CPU. The various scheduling algorithms are as
follows.
1) FCFS Scheduling
2) SJF Scheduling
3) Priority Scheduling
4) Round Robin Scheduling
5) Multilevel Queue Scheduling
6) Multilevel Feedback Queue Scheduling
1) First-Come, First-Served (FCFS) Scheduling
FCFS is the simplest scheduling algorithm. The CPU is allocated to the processes in the
order in which they arrive (Figure 4.2).
The implementation of the FCFS policy is easily managed with a FIFO queue.
When a process enters the ready queue, its process control block (PCB) is linked onto
the tail of the queue.
When the CPU is free, it is allocated to the process at the head of the queue. The
running process is then removed from the queue.
The code for FCFS scheduling is simple to write and understand.
The average waiting time under the FCFS policy is often quite long.
Consider the following set of processes that arrive at time 0, with the length of the
CPU burst given in milliseconds:
Process    Burst Time
P1         3
P2         6
P3         4
P4         2
If the processes arrive in the order of P1, P2, P3, P4 and are served in FCFS order, the
Gantt chart is given below:
i) Gantt Chart:
|  P1  |  P2  |  P3  |  P4  |
0      3      9      13     15
ii) Waiting Time:
Process    Waiting Time
P1         0
P2         3
P3         9
P4         13

Average Waiting Time = (0 + 3 + 9 + 13) / 4 = 25 / 4 = 6.25 ms

iii) Turnaround Time:
Turnaround time is computed by subtracting the time the process entered the system from the time it
terminated. Since the arrival time is 0 for all processes, the turnaround time of each process equals its completion time.
Process    Turnaround Time
P1         3
P2         9
P3         13
P4         15

Average Turnaround Time = (3 + 9 + 13 + 15) / 4 = 40 / 4 = 10 ms
The FCFS scheduling algorithm is nonpreemptive.
Operating System 4.7
Once the CPU has been allocated to a process, that process keeps the CPU until it
releases the CPU either by terminating or by requesting I/O.
The FCFS algorithm is particularly troublesome for time-sharing systems, where each
user needs to get a share of the CPU at regular intervals.
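The waiting and turnaround times of the FCFS example above can be reproduced with the following Python sketch, which assumes, as in the example, that every process arrives at time 0:

def fcfs(bursts):
    # FCFS with all arrivals at time 0; `bursts` lists the CPU-burst
    # lengths in arrival order. Returns (waiting, turnaround) times.
    waiting, turnaround, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)        # time already spent in the ready queue
        clock += burst               # the process runs to completion
        turnaround.append(clock)     # completion time = turnaround (arrival = 0)
    return waiting, turnaround

w, t = fcfs([3, 6, 4, 2])            # P1..P4 from the example
print(w, sum(w) / len(w))            # [0, 3, 9, 13] 6.25
print(t, sum(t) / len(t))            # [3, 9, 13, 15] 10.0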
2) Shortest-Job-First (SJF) Scheduling
The SJF algorithm associates with each process the length of its next CPU burst. When the CPU is
available, it is assigned to the process that has the smallest next CPU burst; if two processes have
the same burst length, FCFS order is used.
Consider the same set of processes, with the length of the CPU burst given in milliseconds:

Process    Burst Time
P1         3
P2         6
P3         4
P4         2

The arrival time of every process is 0, and the processes arrive in the order P1, P2, P3, P4.
The Gantt chart, waiting time, and turnaround time are given below:
i) Gantt Chart:
|  P4  |  P1  |  P3  |  P2  |
0      2      5      9      15
ii) Waiting Time:
Process    Waiting Time
P1         2
P2         9
P3         5
P4         0

Average Waiting Time = (2 + 9 + 5 + 0) / 4 = 16 / 4 = 4 ms

iii) Turnaround Time (burst time + waiting time):
Process    Turnaround Time
P1         3 + 2 = 5
P2         6 + 9 = 15
P3         4 + 5 = 9
P4         2 + 0 = 2

Average Turnaround Time = (5 + 15 + 9 + 2) / 4 = 31 / 4 = 7.75 ms
The SJF algorithm is optimal: it gives the minimum average waiting time
for a given set of processes. However, the SJF algorithm cannot be implemented at the level of short-term
CPU scheduling, because there is no way to know the length of the next CPU burst.
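For processes that all arrive at time 0, a nonpreemptive SJF schedule simply runs the bursts in increasing order of length. The following sketch is a simplification that ignores later arrivals (ties are broken in FCFS order); it reproduces the figures of the example above:

def sjf(bursts):
    # Nonpreemptive SJF with all arrivals at time 0.
    # Returns waiting and turnaround times indexed like `bursts`.
    n = len(bursts)
    order = sorted(range(n), key=lambda i: bursts[i])   # shortest burst first
    waiting, turnaround, clock = [0] * n, [0] * n, 0
    for i in order:
        waiting[i] = clock
        clock += bursts[i]
        turnaround[i] = clock
    return waiting, turnaround

w, t = sjf([3, 6, 4, 2])             # P1..P4 from the example
print(w, sum(w) / len(w))            # [2, 9, 5, 0] 4.0
print(t, sum(t) / len(t))            # [5, 15, 9, 2] 7.75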
3) Priority Scheduling
Priority Scheduling is a method of scheduling processes based on priority.
A priority is associated with each process, and the CPU is allocated to the process with
the highest priority. Equal-priority processes are scheduled in FCFS order.
Priorities are generally some fixed range of numbers, such as 0 to 7, or 0 to 4,095.
Some systems use low numbers to represent low priority; others use low numbers for
high priority.
The CPU is allocated to the process with the highest priority (smallest integer =
highest priority).
Priority of the process can be defined either internally or externally.
Internally defined priorities use some measurable quantity or quantities to compute the
priority of a process. For example, time limits, memory requirements, the number of
open files, and the ratio of average I/O burst to average CPU burst have been used in
computing priorities.
External priorities are set by criteria outside the operating system, such as the
importance of the process, the type and amount of funds being paid for computer use,
the department sponsoring the work, and other, often political, factors.
Priority Scheduling can be either preemptive or nonpreemptive. When a process
arrives at the ready queue, its priority is compared with the priority of the currently
running process. A preemptive priority-scheduling algorithm will preempt the CPU if
the priority of the newly arrived process is higher than the priority of the currently
running process. A nonpreemptive priority-scheduling algorithm will simply put the
new process at the head of the ready queue.
Let us consider the following set of processes, with the CPU-burst times given in milliseconds and a
priority assigned to each process. The arrival time of every process is zero.
Process    Burst Time    Priority
P1         3             2
P2         6             4
P3         4             1
P4         2             3
The processes arrive in the order P1, P2, P3, P4. The Gantt chart, waiting time, and
turnaround time for the priority-scheduling algorithm are given below.
i) Gantt Chart:
|  P3  |  P1  |  P4  |  P2  |
0      4      7      9      15
ii) Waiting Time:
Process    Waiting Time
P1         4
P2         9
P3         0
P4         7

Average Waiting Time = (4 + 9 + 0 + 7) / 4 = 20 / 4 = 5 ms

iii) Turnaround Time (burst time + waiting time):
Process    Turnaround Time
P1         3 + 4 = 7
P2         6 + 9 = 15
P3         4 + 0 = 4
P4         2 + 7 = 9

Average Turnaround Time = (7 + 15 + 4 + 9) / 4 = 35 / 4 = 8.75 ms
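Nonpreemptive priority scheduling with all arrivals at time 0 can be sketched in the same way, ordering the processes by priority number instead of burst length. The sketch below reproduces the figures of the example above (smaller number = higher priority):

def priority_schedule(bursts, priorities):
    # Nonpreemptive priority scheduling, all arrivals at time 0.
    # A smaller priority number means a higher priority.
    n = len(bursts)
    order = sorted(range(n), key=lambda i: priorities[i])
    waiting, turnaround, clock = [0] * n, [0] * n, 0
    for i in order:
        waiting[i] = clock
        clock += bursts[i]
        turnaround[i] = clock
    return waiting, turnaround

w, t = priority_schedule([3, 6, 4, 2], [2, 4, 1, 3])   # P1..P4 from the example
print(w, sum(w) / len(w))            # [4, 9, 0, 7] 5.0
print(t, sum(t) / len(t))            # [7, 15, 4, 9] 8.75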
4) Round Robin (RR) Scheduling
The Round Robin (RR) scheduling algorithm is designed especially for time-sharing systems. A small
unit of time, called a time quantum (or time slice), is defined, and the ready queue is treated as a
circular queue: the CPU scheduler goes around the ready queue, allocating the CPU to each process
for an interval of up to one time quantum.
Let us consider the same set of processes, with burst times given in milliseconds. All processes
arrive at time 0, and the time quantum is 2 milliseconds. The Gantt chart, waiting time, and
turnaround time are given below.

Process    Burst Time
P1         3
P2         6
P3         4
P4         2
i) Gantt Chart:
|  P1  |  P2  |  P3  |  P4  |  P1  |  P2  |  P3  |  P2  |
0      2      4      6      8      9      11     13     15
ii) Waiting Time:
Process    Waiting Time
P1         0 + 6 = 6
P2         2 + 5 + 2 = 9
P3         4 + 5 = 9
P4         6

Average Waiting Time = (6 + 9 + 9 + 6) / 4 = 30 / 4 = 7.5 ms

iii) Turnaround Time (burst time + waiting time):
Process    Turnaround Time
P1         3 + 6 = 9
P2         6 + 9 = 15
P3         4 + 9 = 13
P4         2 + 6 = 8

Average Turnaround Time = (9 + 15 + 13 + 8) / 4 = 45 / 4 = 11.25 ms
In the RR scheduling algorithm, no process is allocated the CPU for more than one
time quantum in a row. If a process’ CPU burst exceeds 1 time quantum, that process
is preempted and is put back in the ready queue.
The RR scheduling algorithm is preemptive.
The performance of the RR algorithm depends heavily on the size of the time
quantum. If the time quantum is very large (infinite), the RR policy is the same as the
FCFS policy. If the time quantum is very small (say 1 microsecond), the RR approach
is called processor sharing.
Turnaround time also depends on the size of the time quantum. In general, the average
turnaround time can be improved if most processes finish their next CPU burst in a single
time quantum.
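A small simulation makes this behaviour concrete. The sketch below assumes that all processes arrive at time 0; running it with a quantum of 2 reproduces the waiting and turnaround times of the earlier example, while a very large quantum gives the same result as FCFS.

from collections import deque

def round_robin(bursts, quantum):
    # Round Robin with all arrivals at time 0.
    # Returns waiting and turnaround times indexed like `bursts`.
    n = len(bursts)
    remaining = list(bursts)
    turnaround = [0] * n
    ready = deque(range(n))
    clock = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])      # run for at most one quantum
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)                   # preempted: back to the tail
        else:
            turnaround[i] = clock             # finished (arrival = 0)
    waiting = [turnaround[i] - bursts[i] for i in range(n)]
    return waiting, turnaround

w, t = round_robin([3, 6, 4, 2], quantum=2)   # P1..P4 from the example
print(w, t)                                   # [6, 9, 9, 6] [9, 15, 13, 8]
print(round_robin([3, 6, 4, 2], quantum=100)) # same result as FCFS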
5) Multilevel Queue Scheduling
A Multilevel Queue-Scheduling algorithm partitions the ready queue into several
separate queues (Figure 4.4).
The processes are permanently assigned to one queue, generally based on some
property of the processes, such as memory size, process priority, or process type.
Each queue has its own scheduling algorithm or policy.
For example, separate queues might be used for foreground and background processes.
The foreground queue might be scheduled by an RR algorithm, while the
background queue is scheduled by an FCFS algorithm.
In addition, there must be scheduling among the queues, which is commonly
implemented as fixed-priority preemptive scheduling. For example, the foreground
queue may have absolute priority over the background queue.
Consider a multilevel queue-scheduling algorithm with five queues:
1. System Processes
2. Interactive Processes
3. Interactive Editing Processes
4. Batch Processes
5. Student Processes
Each queue has absolute priority over lower-priority queues. No process in the batch
queue, for example, could run unless the queues for system processes, interactive
processes, and interactive editing processes were all empty.
Another possibility is to time slice between the queues. Each queue gets a certain
portion of the CPU time, which it can then schedule among the various processes in its
queue. For instance, in the foreground-background queue example, the foreground
queue can be given 80 percent of the CPU time for RR scheduling among its
processes, whereas the background queue receives 20 percent of the CPU to schedule its
processes in an FCFS manner.
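A minimal sketch of fixed-priority scheduling between a foreground (RR) queue and a background (FCFS) queue is shown below; the queue names, the quantum value, and the helper function are illustrative assumptions rather than any particular system's design.

from collections import deque

# Two ready queues: the foreground queue has absolute priority over the
# background queue, as in the example above.
foreground = deque()   # interactive processes, scheduled round-robin
background = deque()   # batch processes, scheduled FCFS

QUANTUM = 4            # illustrative RR time quantum for the foreground queue

def pick_next():
    # Fixed-priority selection between the two queues: a background process
    # is chosen only when the foreground queue is empty (and, in a fully
    # preemptive design, it would be preempted as soon as a foreground
    # process arrived).
    if foreground:
        return foreground.popleft(), QUANTUM        # run for at most one quantum
    if background:
        return background.popleft(), float("inf")   # run until it blocks or exits
    return None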
6) Multilevel Feedback Queue Scheduling
Multilevel Feedback Queue Scheduling algorithm allows a process to move
between queues. It uses many ready queues and associates a different priority with
each queue (Figure 4.5).
The algorithm chooses the process with the highest priority from the highest-priority occupied queue and
runs that process either preemptively or nonpreemptively.
If a process uses too much CPU time, it will be moved to a lower-priority queue.
Similarly, a process that waits too long in a lower-priority queue may be moved to a
higher-priority queue. Note that this form of aging prevents starvation.
Example:
A process entering the ready queue is placed in queue 0.
If it does not finish within 8 milliseconds, it is moved to the tail of queue 1.
If it still does not complete, it is preempted and placed into queue 2.
Processes in queue 2 run on an FCFS basis, but only when queue 0 and queue 1 are empty.
This scheduling algorithm gives highest priority to any process with a CPU burst
of 8 milliseconds or less. Such a process will quickly get the CPU, finish its CPU burst,
and go off to its next I/O burst.
In general, a multilevel feedback queue scheduler is defined by the following
parameters:
The number of queues.
The scheduling algorithm for each queue.
The method used to determine when to upgrade a process to a higher-priority queue.
The method used to determine when to demote a process to a lower-priority
queue.
The method used to determine which queue a process will enter when that process
needs service.
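These parameters can be made concrete with a small sketch of the three-queue example above. The 8 ms quantum for queue 0 and the FCFS policy for queue 2 follow the example; the 16 ms quantum for queue 1 and the process.run method are assumptions made only for this sketch.

from collections import deque

# Three ready queues; new processes enter queue 0 and CPU-hungry processes
# are demoted to lower-priority queues, as in the example above.
queues = [deque(), deque(), deque()]
quanta = [8, 16, None]   # queue 2 is FCFS (no quantum); the 16 ms quantum for
                         # queue 1 is an assumption made for this sketch

def admit(process):
    queues[0].append(process)             # every new process starts in queue 0

def run_once():
    # Serve the highest-priority non-empty queue. `process.run(quantum)` is a
    # hypothetical method that runs the process for at most `quantum` ms (or to
    # completion when quantum is None) and returns True if the process finished.
    for level, queue in enumerate(queues):
        if queue:
            process = queue.popleft()
            if not process.run(quanta[level]):
                # Used its full quantum without finishing: demote it (queue 2
                # processes simply rejoin the tail of queue 2).
                queues[min(level + 1, 2)].append(process)
            return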
Exercises: