Unit II Chapter 2
CPU SCHEDULING
Preemptive Scheduling:
Note:
o Turn Around Time = Completion Time - Arrival Time
o Waiting Time = Turn Around Time - Burst Time
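For example, process 2 in the FCFS example below arrives at time 2, has a burst of 4 ms, and completes at time 12, so its Turn Around Time = 12 - 2 = 10 ms and its Waiting Time = 10 - 4 = 6 ms.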
Scheduling Algorithms
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. There are many different CPU-scheduling algorithms.
First Come First Served (FCFS) Scheduling
Example: Consider the following set of processes, with their arrival times and the lengths of their CPU bursts given in milliseconds. If the processes are served in FCFS order, we get the result shown in the following Gantt chart, which is a bar chart that illustrates a particular schedule, including the start and finish times of each of the participating processes:
Process ID | Arrival Time | Burst Time
0 | 0 | 2
1 | 1 | 6
2 | 2 | 4
3 | 3 | 9
4 | 4 | 12
Solution:
Process ID | Arrival Time | Burst Time | Completion Time | Turn Around Time | Waiting Time
0 | 0 | 2 | 2 | 2 | 0
1 | 1 | 6 | 8 | 7 | 1
2 | 2 | 4 | 12 | 10 | 6
3 | 3 | 9 | 21 | 18 | 9
4 | 4 | 12 | 33 | 29 | 17
Gantt chart: | P0 (0-2) | P1 (2-8) | P2 (8-12) | P3 (12-21) | P4 (21-33) |
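The solution table above can be reproduced with a short simulation. The following Python sketch is illustrative only and is not part of the original notes; the function name fcfs and the (pid, arrival, burst) tuple layout are assumptions.

# Minimal sketch of non-preemptive FCFS scheduling (illustrative, not a standard API).
def fcfs(processes):
    """Return {pid: (completion, turnaround, waiting)}; processes are (pid, arrival, burst)."""
    time = 0
    results = {}
    for pid, arrival, burst in sorted(processes, key=lambda p: p[1]):  # serve in arrival order
        time = max(time, arrival)      # the CPU may sit idle until the process arrives
        time += burst                  # run the whole burst without preemption
        turnaround = time - arrival    # Turn Around Time = Completion Time - Arrival Time
        waiting = turnaround - burst   # Waiting Time = Turn Around Time - Burst Time
        results[pid] = (time, turnaround, waiting)
    return results

# Data taken from the FCFS example above; expected completion times: 2, 8, 12, 21, 33.
print(fcfs([(0, 0, 2), (1, 1, 6), (2, 2, 4), (3, 3, 9), (4, 4, 12)]))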
Shortest Job First (SJF) Scheduling
This algorithm associates with each process the length of the process's next CPU burst.
When the CPU is available, it is assigned to the process that has the
smallest next CPU burst. If the next CPU bursts of two processes are the
same, FCFS scheduling is used to break the tie.
Note that a more appropriate term for this scheduling method would be
the shortest-next- CPU-burst algorithm, because scheduling depends on the
length of the next CPU burst of a process, rather than its total length.
Advantages of SJF
o Maximum throughput
o Minimum average waiting and turnaround time
Disadvantages of SJF
o Longer processes may starve if shorter processes keep arriving.
o The length of the next CPU burst cannot be known in advance; it can only be estimated.
Example: Consider the following set of processes, with their arrival times and CPU burst times given in milliseconds:
Process ID | Arrival Time | Burst Time
1 | 1 | 7
2 | 3 | 3
3 | 6 | 2
4 | 7 | 10
5 | 9 | 8
Solution:
Process ID | Arrival Time | Burst Time | Completion Time | Turn Around Time | Waiting Time
1 | 1 | 7 | 8 | 7 | 0
2 | 3 | 3 | 13 | 10 | 7
3 | 6 | 2 | 10 | 4 | 2
4 | 7 | 10 | 31 | 24 | 14
5 | 9 | 8 | 21 | 12 | 4
Gantt chart: | idle (0-1) | P1 (1-8) | P3 (8-10) | P2 (10-13) | P5 (13-21) | P4 (21-31) |
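For reference, here is a small Python sketch of non-preemptive SJF (shortest-next-CPU-burst). It is an illustrative sketch, not part of the original notes; the function name and tuple layout are assumptions.

# Non-preemptive SJF: pick the shortest available burst and run it to completion.
def sjf_nonpreemptive(processes):
    pending = sorted(processes, key=lambda p: p[1])    # (pid, arrival, burst), by arrival
    time, results = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                                  # CPU idle until the next arrival
            time = pending[0][1]
            continue
        # Shortest burst first; ties broken by arrival time (FCFS), as stated in the notes.
        job = min(ready, key=lambda p: (p[2], p[1]))
        pid, arrival, burst = job
        time += burst
        results.append((pid, time, time - arrival, time - arrival - burst))
        pending.remove(job)
    return results        # (pid, completion, turnaround, waiting) in execution order

# Data from the example above; expected execution order: P1, P3, P2, P5, P4.
print(sjf_nonpreemptive([(1, 1, 7), (2, 3, 3), (3, 6, 2), (4, 7, 10), (5, 9, 8)]))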
Shortest Remaining Time First (SRTF) Scheduling (Preemptive SJF)
Example: Consider the following set of processes, with their arrival times and CPU burst times given in milliseconds:
Process ID | Arrival Time | Burst Time
1 | 0 | 8
2 | 1 | 4
3 | 2 | 2
4 | 3 | 1
5 | 4 | 3
6 | 5 | 2
Solution:
Process ID | Arrival Time | Burst Time | Completion Time | Turn Around Time | Waiting Time | Response Time
1 | 0 | 8 | 20 | 20 | 12 | 0
2 | 1 | 4 | 10 | 9 | 5 | 0
3 | 2 | 2 | 4 | 2 | 0 | 0
4 | 3 | 1 | 5 | 2 | 1 | 1
5 | 4 | 3 | 13 | 9 | 6 | 6
6 | 5 | 2 | 7 | 2 | 0 | 0
(Response Time = time at which the process first gets the CPU - Arrival Time.)
Gantt chart: | P1 (0-1) | P2 (1-2) | P3 (2-4) | P4 (4-5) | P6 (5-7) | P2 (7-10) | P5 (10-13) | P1 (13-20) |
1. At time 0, the only available process is P1, with a CPU burst time of 8 units. Since it is the only process in the list, it is scheduled.
2. The next process, P2, arrives at time unit 1. Since the algorithm being used is SRTF, which is preemptive, the current execution is paused and the scheduler looks for the process with the least remaining burst time. At this point there are two processes in the ready queue: P1 has executed for one unit, so its remaining burst time is 7 units, while the burst time of P2 is 4 units. Hence P2 is scheduled on the CPU.
3. The next process, P3, arrives at time unit 2. At this time, the execution of P2 is paused and the process with the least remaining burst time is selected. Since P3 has a burst time of 2 units, it is given priority over the others.
4. The next process, P4, arrives at time unit 3. On this arrival, the scheduler pauses the execution of P3 and checks which of the available processes (P1, P2, P3 and P4) has the least remaining burst time. P1 and P2 have remaining burst times of 7 units and 3 units respectively, while P3 and P4 have 1 unit each. Since these two are tied, scheduling is done according to arrival time: P3 arrived earlier than P4, so it is scheduled again.
5. The next process, P5, arrives at time unit 4. By this time, P3 has completed its execution and is no longer in the list. The scheduler compares the remaining burst times of all available processes; the remaining burst time of P4 is 1 unit, which is the least, so P4 is scheduled.
6. The next process, P6, arrives at time unit 5, by which time P4 has completed its execution. Four processes are now available: P1 (7), P2 (3), P5 (3) and P6 (2). The remaining burst time of P6 is the least of all, so P6 is scheduled. Since all the processes have now arrived, the algorithm behaves like SJF from this point on: P6 runs to completion, and then the process with the least remaining time is scheduled.
7. Once all the processes have arrived, no further preemption is done and the algorithm works as SJF. (A simulation sketch of this schedule follows.)
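The whole schedule can be checked with a unit-by-unit simulation. The Python sketch below is illustrative only; the names srtf and first_run are assumptions, not from the notes.

# SRTF (preemptive SJF): at every time unit, run the ready process with the
# least remaining burst; ties are broken by arrival time.
def srtf(processes):                      # processes are (pid, arrival, burst)
    arrival = {pid: at for pid, at, _ in processes}
    remaining = {pid: bt for pid, _, bt in processes}
    first_run, completion, time = {}, {}, 0
    while remaining:
        ready = [pid for pid in remaining if arrival[pid] <= time]
        if not ready:                     # CPU idle until the next arrival
            time += 1
            continue
        pid = min(ready, key=lambda p: (remaining[p], arrival[p]))
        first_run.setdefault(pid, time)   # remember when the process first got the CPU
        remaining[pid] -= 1
        time += 1
        if remaining[pid] == 0:
            completion[pid] = time
            del remaining[pid]
    # Per process: (completion, turnaround, waiting, response).
    return {pid: (completion[pid], completion[pid] - at,
                  completion[pid] - at - bt, first_run[pid] - at)
            for pid, at, bt in processes}

# Data from the example above; expected completion times: P1=20, P2=10, P3=4, P4=5, P5=13, P6=7.
print(srtf([(1, 0, 8), (2, 1, 4), (3, 2, 2), (4, 3, 1), (5, 4, 3), (6, 5, 2)]))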
Priority Scheduling:
The SJF algorithm is a special case of the general priority-scheduling
algorithm.
A priority is associated with each process, and the CPU is allocated to the process with the highest priority.
Equal-priority processes are scheduled in FCFS order.
In priority scheduling, a priority number is assigned to each process. In some systems, the lower the number, the higher the priority; in others, the higher the number, the higher the priority. The process with the highest priority among the available processes is given the CPU. There are two types of priority scheduling algorithms: preemptive priority scheduling and non-preemptive priority scheduling.
The priority number assigned to a process may or may not change. If the priority number does not change throughout the life of the process, it is called static priority; if it changes at regular intervals, it is called dynamic priority.
Non Preemptive Priority Scheduling:
In non-preemptive priority scheduling, processes are scheduled according to the priority number assigned to them. Once a process is scheduled, it runs to completion. Generally, the lower the priority number, the higher the priority of the process.
Example: There are 7 processes P1, P2, P3, P4, P5, P6 and P7. Their respective priorities, arrival times and burst times are given in the table below.
Process ID | Priority | Arrival Time | Burst Time
1 | 2 | 0 | 3
2 | 6 | 2 | 5
3 | 3 | 1 | 4
4 | 5 | 4 | 2
5 | 7 | 6 | 9
6 | 4 | 5 | 4
7 | 10 | 7 | 10
We can prepare the Gantt chart according to non-preemptive priority scheduling.
Process P1 arrives at time 0 with a burst time of 3 units and priority number 2. Since no other process has arrived yet, the OS schedules it immediately.
While P1 is executing, two more processes, P2 and P3, arrive. Since the priority number of P3 (3) is lower than that of P2 (6), the CPU will execute P3 next.
While P3 is executing, all the remaining processes arrive in the ready queue. The process with the lowest priority number is given the CPU next; since P6 has priority number 4, it is executed just after P3.
After P6, P4 has the lowest priority number among the available processes, so it is executed for its whole burst time.
Since all the jobs are now in the ready queue, they are executed according to their priorities. If two jobs have the same priority number, the one with the earlier arrival time is executed first.
Solution:
Process ID | Priority | Arrival Time | Burst Time | Completion Time | Turn Around Time | Waiting Time | Start Time
1 | 2 | 0 | 3 | 3 | 3 | 0 | 0
2 | 6 | 2 | 5 | 18 | 16 | 11 | 13
3 | 3 | 1 | 4 | 7 | 6 | 2 | 3
4 | 5 | 4 | 2 | 13 | 9 | 7 | 11
5 | 7 | 6 | 9 | 27 | 21 | 12 | 18
6 | 4 | 5 | 4 | 11 | 6 | 2 | 7
7 | 10 | 7 | 10 | 37 | 30 | 20 | 27
Average Waiting Time = (0 + 11 + 2 + 7 + 12 + 2 + 20) / 7 = 54/7 ≈ 7.71 units
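A non-preemptive priority scheduler like the one in this example can be sketched in a few lines of Python. This is an illustrative sketch (the function name and tuple layout are assumptions), using the convention from the notes that a lower priority number means higher priority.

# Non-preemptive priority scheduling: once picked, a process runs to completion.
def priority_nonpreemptive(processes):    # processes are (pid, priority, arrival, burst)
    pending = list(processes)
    time, results = 0, []
    while pending:
        ready = [p for p in pending if p[2] <= time]
        if not ready:                     # CPU idle until the next arrival
            time = min(p[2] for p in pending)
            continue
        # Lowest priority number wins; ties broken by arrival time.
        job = min(ready, key=lambda p: (p[1], p[2]))
        pid, prio, arrival, burst = job
        start = time
        time += burst
        results.append((pid, time, time - arrival, start - arrival))
        pending.remove(job)
    return results                        # (pid, completion, turnaround, waiting)

# Data from the example above; expected execution order: P1, P3, P6, P4, P2, P5, P7.
print(priority_nonpreemptive([(1, 2, 0, 3), (2, 6, 2, 5), (3, 3, 1, 4), (4, 5, 4, 2),
                              (5, 7, 6, 9), (6, 4, 5, 4), (7, 10, 7, 10)]))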
Round Robin Scheduling Algorithm
Round Robin (RR) scheduling is designed especially for time-sharing systems. It is similar to FCFS scheduling, but preemption is added: each process in the ready queue gets the CPU for a small fixed unit of time called the time quantum. If a process does not finish within its quantum, it is preempted and placed at the tail of the ready queue, and the CPU is given to the next process.
Advantages
o It is practical to implement, because it does not depend on knowing burst times in advance.
o It does not suffer from starvation or the convoy effect.
o All jobs get a fair allocation of CPU time.
Disadvantages
o The larger the time quantum, the higher the response time in the system.
o The smaller the time quantum, the higher the context-switching overhead.
o Deciding a good time quantum is a difficult task.
Example: Consider the following table of arrival times and burst times for four processes P1, P2, P3 and P4, with time quantum = 2 ms (a simulation sketch follows the table):
Process | Burst Time | Arrival Time
P1 | 5 ms | 0 ms
P2 | 4 ms | 1 ms
P3 | 2 ms | 2 ms
P4 | 1 ms | 4 ms
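Since no solution table is given for this example, the schedule can be worked out with a small simulation. The Python sketch below is illustrative (the names are assumptions); it also assumes the common convention that a process arriving during a quantum joins the ready queue before the preempted process re-enters it.

from collections import deque

# Round Robin with a fixed time quantum; processes are (pid, arrival, burst).
def round_robin(processes, quantum):
    arrivals = deque(sorted(processes, key=lambda p: p[1]))   # not yet arrived
    ready, time, completion = deque(), 0, {}

    def admit(up_to):
        # Move every process that has arrived by time up_to into the ready queue.
        while arrivals and arrivals[0][1] <= up_to:
            pid, _, burst = arrivals.popleft()
            ready.append([pid, burst])

    admit(time)
    while ready or arrivals:
        if not ready:                      # CPU idle until the next arrival
            time = arrivals[0][1]
            admit(time)
            continue
        pid, remaining = ready.popleft()
        run = min(quantum, remaining)
        time += run
        admit(time)                        # newcomers enter before the preempted process
        if remaining > run:
            ready.append([pid, remaining - run])   # unfinished: back of the queue
        else:
            completion[pid] = time
    return completion

# Data from the example above, with quantum = 2 ms.
# With this queueing convention the completion times come out as P3=6, P4=9, P2=11, P1=12.
print(round_robin([("P1", 0, 5), ("P2", 1, 4), ("P3", 2, 2), ("P4", 4, 1)], quantum=2))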
Multilevel Queue Scheduling
Multilevel queue scheduling is used when the processes in the ready queue can be divided into different classes, where each class has its own scheduling needs. For instance, a common division is between foreground (interactive) processes and background (batch) processes.
In addition, foreground processes may have priority (externally defined)
over background processes.
A multilevel queue scheduling algorithm partitions the ready queue into
several separate queues. The processes are permanently assigned to one
queue, generally based on some property of the process, such as memory
size, process priority, or process type. Each queue has its own scheduling
algorithm.
The foreground queue might be scheduled by an RR algorithm, while the background queue is scheduled by an FCFS algorithm.
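The idea of a foreground RR queue with strict priority over a background FCFS queue can be sketched as follows. This is a conceptual sketch only; the class name MultilevelQueue and its methods are assumptions, not an existing API.

from collections import deque

# Two-level multilevel queue: foreground (interactive) served Round Robin,
# background (batch) served FCFS; background runs only when foreground is empty.
class MultilevelQueue:
    def __init__(self, quantum):
        self.foreground = deque()   # interactive processes, RR with the given quantum
        self.background = deque()   # batch processes, FCFS
        self.quantum = quantum

    def add(self, pid, burst, interactive):
        (self.foreground if interactive else self.background).append([pid, burst])

    def pick_next(self):
        """Return the next (pid, run_time) to dispatch, or None if both queues are empty."""
        if self.foreground:
            pid, burst = self.foreground.popleft()
            run = min(self.quantum, burst)
            if burst > run:                           # unfinished RR job goes to the back
                self.foreground.append([pid, burst - run])
            return pid, run
        if self.background:
            pid, burst = self.background.popleft()    # FCFS: run the whole burst
            return pid, burst
        return None

# Hypothetical workload: an interactive editor and a batch payroll job.
mq = MultilevelQueue(quantum=2)
mq.add("editor", 5, interactive=True)
mq.add("payroll_batch", 8, interactive=False)
job = mq.pick_next()
while job is not None:
    print("run", job)
    job = mq.pick_next()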
The commonly used classes of processes in such a scheme are as follows:
o System Processes: Processes run by the operating system itself (for example, kernel tasks) are generally termed system processes.
o Interactive Processes: An interactive process is one that interacts with the user and therefore needs a quick response (for example, an editor or an online game).
o Batch Processes: Batch processing is a technique in which the operating system collects programs and data together in the form of a batch before processing starts.
Example
Let us consider the following four processes. (Use Round robin scheduling
algorithm with time quantum=2 msec)
Priority of queue: Queue1 > Queue2 > Queue3
The Gantt chart will be like this:
In this example, the processes P1, P2, and P3 arrive at t=0, but P1 runs first because it belongs to queue 1, which has the higher priority. After P1 is over, P2 runs because it has higher priority than P3, and then P3 runs. While P3 is running, process P4, which belongs to the higher-priority queue 1, arrives. So P3 is stopped and P4 is run. After P4 runs to completion, P3 is resumed.
Example 2: Let us consider the following four processes. (For queue 1, use the Round Robin scheduling algorithm with time quantum = 2 msec; for queue 2, use SJF.)
Process ID | Burst Time | Arrival Time | Queue
1 | 3 | 0 | 1
2 | 6 | 0 | 1
3 | 6 | 1 | 2
4 | 3 | 1 | 2
Multilevel Feedback Queue Scheduling
First of all, suppose that queues 1 and 2 follow Round Robin with time quanta of 8 and 16 units respectively, and queue 3 follows FCFS. One implementation of multilevel feedback queue scheduling is as follows (a code sketch follows this list):
1. When a process starts executing, it first enters queue 1.
2. In queue 1, the process executes for up to 8 units. If it completes within these 8 units, or if it releases the CPU for an I/O operation within them, its priority does not change; if it later returns to the ready queue, it again starts executing in queue 1.
3. If a process in queue 1 does not complete within 8 units, its priority is reduced and it is shifted to queue 2.
4. Points 2 and 3 also hold for processes in queue 2, but with a time quantum of 16 units. In general, if a process does not complete within its time quantum, it is shifted to the next lower-priority queue.
5. In the last queue, all processes are scheduled in an FCFS manner.
6. A process in a lower-priority queue can execute only when all higher-priority queues are empty.
7. A running process in a lower-priority queue can be preempted by a process arriving in a higher-priority queue.
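As referenced above, here is a compact Python sketch of the queue-1/queue-2/queue-3 arrangement just described (RR with quanta 8 and 16, then FCFS). It is simplified: arrival times and mid-run preemption by new arrivals (point 7) are ignored, and all names are illustrative.

from collections import deque

QUANTA = [8, 16, None]        # queue 1: RR q=8, queue 2: RR q=16, queue 3 (None): FCFS

def mlfq(jobs):               # jobs are (pid, total_burst); every job enters queue 1
    queues = [deque(jobs), deque(), deque()]
    order = []
    while any(queues):
        # A lower-priority queue is served only when all higher queues are empty.
        level = next(i for i, q in enumerate(queues) if q)
        pid, burst = queues[level].popleft()
        run = burst if QUANTA[level] is None else min(QUANTA[level], burst)
        order.append((pid, level + 1, run))            # (process, queue number, units run)
        if burst > run:
            queues[level + 1].append((pid, burst - run))   # demote the unfinished job
    return order

# Two hypothetical jobs: A (5 units) finishes in queue 1; B (30 units) is demoted twice.
print(mlfq([("A", 5), ("B", 30)]))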
Real-Time CPU Scheduling
Real-time scheduling requires keeping two kinds of latency low: interrupt latency and dispatch latency.
Interrupt latency
Interrupt latency refers to the period of time from the arrival of an interrupt at the CPU to the start of the routine that services the interrupt. When an interrupt occurs, the operating system must first complete the instruction it is executing and determine the type of interrupt that occurred. It must then save the state of the current process before servicing the interrupt using the specific interrupt service routine (ISR). The total time required to perform these tasks is the interrupt latency.
Dispatch latency
The amount of time required for the scheduling dispatcher to stop one process and start another is known as dispatch latency. (Or) The term dispatch latency describes the amount of time it takes for a system to respond to a request for a process to begin operation.
Providing real-time tasks with immediate access to the CPU mandates that real-time operating systems minimize this latency as well. The most effective technique for keeping dispatch latency low is to provide preemptive kernels.
The conflict phase of dispatch latency has two components:
1. Preemption of any process running in the kernel.
2. Release by low-priority processes of resources needed by a high-priority process.
The important features of real-time scheduling are:
Priority-Based Scheduling:
The most important feature of a real-time operating system is to respond immediately to a real-time process as soon as that process requires the CPU. As a result, the scheduler for a real-time operating system must support a priority-based algorithm with preemption.
Recall that priority-based scheduling algorithms assign each process a priority based on its importance; more important tasks are assigned higher priorities than those deemed less important. If the scheduler also supports preemption, a process currently running on the CPU will be preempted if a higher-priority process becomes available to run.
Rate-Monotonic Scheduling:
The rate-monotonic scheduling algorithm schedules periodic tasks using a static priority policy with preemption. If a lower-priority process is running and a higher-priority process becomes available to run, it will preempt the lower-priority process. Upon entering the system, each periodic task is assigned a priority inversely based on its period: the shorter the period, the higher the priority; the longer the period, the lower the priority. The rationale behind this policy is to assign a higher priority to tasks that require the CPU more often. Furthermore, rate-monotonic scheduling assumes that the processing time of a periodic process is the same for each CPU burst; that is, every time a process acquires the CPU, the duration of its CPU burst is the same.
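As a quick illustration of rate-monotonic priority assignment, the sketch below orders periodic tasks by period and also applies the well-known Liu and Layland utilization bound, a sufficient (but not necessary) schedulability test. This test is not mentioned in the notes above, and the task set and names are assumptions.

# Rate-monotonic scheduling: shorter period => higher priority.
def rate_monotonic_order(tasks):          # tasks are (name, burst_time, period)
    return sorted(tasks, key=lambda t: t[2])

def utilization_test(tasks):
    # Liu & Layland bound: the set is RM-schedulable if sum(Ci/Ti) <= n * (2^(1/n) - 1).
    n = len(tasks)
    utilization = sum(burst / period for _, burst, period in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization, bound, utilization <= bound

tasks = [("T1", 20, 50), ("T2", 35, 100)]   # hypothetical periodic tasks
print(rate_monotonic_order(tasks))          # T1 (period 50) gets the higher priority
print(utilization_test(tasks))              # U = 0.75 <= about 0.828, so schedulable under RM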
Earliest-Deadline-First Scheduling:
Earliest-deadline-first (EDF) scheduling dynamically assigns priorities according to deadline: the earlier the deadline, the higher the priority; the later the deadline, the lower the priority. Under the EDF policy, when a process becomes runnable, it must announce its deadline requirements to the system. Priorities may have to be adjusted to reflect the deadline of the newly runnable process.
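The EDF dispatch rule itself is simple: among the ready jobs, run the one with the earliest absolute deadline. The tiny sketch below is illustrative; the job tuples are assumptions.

# EDF: the ready job with the earliest absolute deadline gets the CPU.
def edf_pick(ready_jobs):                 # jobs are (name, absolute_deadline)
    return min(ready_jobs, key=lambda job: job[1]) if ready_jobs else None

print(edf_pick([("A", 100), ("B", 80), ("C", 125)]))   # -> ("B", 80), the earliest deadline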
Proportional Share Scheduling:
Proportional share scheduling preallocates a certain amount of CPU time to each process. In a proportional share algorithm, every job has a weight, and each job receives a share of the available resources proportional to its weight.
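A minimal sketch of the share computation, assuming hypothetical weights: each job's share of the CPU is its weight divided by the total weight.

# Proportional share: CPU time is divided in proportion to each job's weight.
def proportional_shares(weights, total_cpu_time):
    total = sum(weights.values())
    return {job: total_cpu_time * w / total for job, w in weights.items()}

# With weights A=50, B=30, C=20, job A receives 50% of the CPU, B 30%, and C 20%.
print(proportional_shares({"A": 50, "B": 30, "C": 20}, total_cpu_time=1000))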
Multiple-Processor Scheduling
Multiple-processor (multiprocessor) scheduling focuses on designing the scheduling function for a system that has more than one processor. In multiprocessor scheduling, multiple CPUs share the load (load sharing) so that various processes can run simultaneously.
In general, multiprocessor scheduling is more complex than single-processor scheduling. In the simplest case the processors are identical, and any process can be run on any processor at any time. The multiple CPUs in the system are in close communication and share a common bus, memory, and other peripheral devices.
Important issues in multiprocessor scheduling include:
o Processor Affinity
o Load Balancing