
UNIT II

Chapter 2
CPU SCHEDULING

Process Scheduling: Basic Concepts, Scheduling Criteria, Scheduling Algorithms,


Thread Scheduling, Multiple-Processor Scheduling, Real-Time CPU Scheduling.
Basic Concepts

 In a single-processor system, only one process can run at a time. Others


must wait until the CPU is free and can be rescheduled.
 The objective of multiprogramming is to have some process running at all
times, to maximize CPU utilization. The idea is relatively simple. A process is
executed until it must wait, typically for the completion of some I/O
request. In a simple computer system, the CPU then just sits idle. All this
waiting time is wasted; no useful work is accomplished.
 With multiprogramming, we try to use this time productively. Several
processes are kept in memory at one time. When one process has to wait,
the operating system takes the CPU away from that process and gives the
CPU to another process. This pattern continues. Every time one process has
to wait, another process can take over use of the CPU.
 CPU–I/O Burst Cycle:
 CPU burst is when the process is being executed in the CPU.
 I/O burst is when the CPU is waiting for I/O for further execution.
 After I/O burst, the process goes into the ready queue for the next CPU
burst.
Fig: Alternating sequence of CPU and I/O bursts.
 The success of CPU scheduling depends on an observed property of
processes:
 Process execution consists of a cycle of CPU execution and I/O wait.
 Processes alternate between these two states. Process execution begins
with a CPU burst. That is followed by an I/O burst, which is followed by
another CPU burst, then another I/O burst, and so on. Eventually, the final
CPU burst ends with a system request to terminate execution.
 An I/O-bound program typically has many short CPU bursts. A CPU-bound
program might have a few long CPU bursts.
 CPU Scheduler :
 Whenever the CPU becomes idle, the operating system must select one of
the processes in the ready queue to be executed. The selection process is
carried out by the short-term scheduler, or CPU scheduler. The scheduler
selects a process from the processes in memory that are ready to execute
and allocates the CPU to that process.

 Preemptive Scheduling :

 CPU-scheduling decisions may take place under the following four


circumstances:
1. When a process switches from the running state to the waiting state (for
example, as the result of an I/O request or an invocation of wait() for the
termination of a child process)
2. When a process switches from the running state to the ready state (for
example, when an interrupt occurs)
3. When a process switches from the waiting state to the ready state (for
example, at completion of I/O)
4. When a process terminates.
 For situations 1 and 4, there is no choice in terms of scheduling. A new
process (if one exists in the ready queue) must be selected for execution.
There is a choice, however, for situations 2 and 3.
 When scheduling takes place only under circumstances 1 and 4, we say that
the scheduling scheme is non-preemptive or cooperative. Otherwise, it is
preemptive.
 Dispatcher :
 Another component involved in the CPU-scheduling function is the
dispatcher.
 The dispatcher is the module that gives control of the CPU to the process
selected by the short-term scheduler.
 This function involves the following:
• Switching context
• Switching to user mode
• Jumping to the proper location in the user program to restart that
program
 The dispatcher should be as fast as possible, since it is invoked during every
process switch.
 The time it takes for the dispatcher to stop one process and start another
running is known as the dispatch latency.
Scheduling Criteria
 CPU scheduling is the process of determining which process will own the
CPU for execution while other processes are on hold. The main task of CPU
scheduling is to make sure that whenever the CPU becomes idle, the OS
selects one of the processes available in the ready queue for
execution. The selection is carried out by the CPU scheduler.
It selects one of the processes in memory that are ready for execution.
 Scheduling can be defined as a set of policies and mechanisms which
controls the order in which the work to be done is completed. The
scheduling program which is a system software concerned with scheduling
is called the scheduler and the algorithm it uses is called the scheduling
algorithm.
 Various criteria or characteristics that help in designing a good scheduling
algorithm are:
o CPU Utilization − A scheduling algorithm should be designed to keep
the CPU as busy as possible. It should make efficient use of the CPU.
o Throughput − Throughput is the amount of work completed in a
unit of time; in other words, the number of jobs completed per unit
of time. The scheduling algorithm must try to maximize the number
of jobs processed per time unit.
o Response time − Response time is the time taken to start
responding to the request. A scheduler must aim to minimize
response time for interactive users.
o Turnaround time − Turnaround time refers to the time between the
moment of submission of a job/ process and the time of its
completion. Thus how long it takes to execute a process is also an
important factor.
o Waiting time − It is the time a job waits for resource allocation
when several jobs are competing in a multiprogramming system. The
aim is to minimize the waiting time.

 A CPU scheduling algorithm tries to maximize CPU utilization and
throughput, and to minimize turnaround time, waiting time, and response
time.

 Note:
o Turn Around Time = Completion Time - Arrival Time
o Waiting Time = Turnaround time - Burst Time
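The two formulas in the note can be sketched directly in code (a minimal illustration; the function names are our own, not from the text):

```python
def turnaround_time(arrival, completion):
    # Turn Around Time = Completion Time - Arrival Time
    return completion - arrival

def waiting_time(arrival, burst, completion):
    # Waiting Time = Turnaround Time - Burst Time
    return turnaround_time(arrival, completion) - burst

# A process arriving at t = 2 with burst 4 that completes at t = 12:
print(turnaround_time(2, 12))   # -> 10
print(waiting_time(2, 4, 12))   # -> 6
```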
Scheduling Algorithms
 CPU scheduling deals with the problem of deciding which of the processes
in the ready queue is to be allocated the CPU.
 There are many different CPU-scheduling algorithms.

1. First-Come, First-Served Scheduling:


 The simplest CPU-scheduling algorithm is the first-come, first-served (FCFS)
scheduling algorithm.
 With this scheme, the process that requests the CPU first is allocated the
CPU first.
 First come first serve (FCFS) scheduling algorithm simply schedules the jobs
according to their arrival time. The job which comes first in the ready queue
will get the CPU first.
 Advantages of FCFS
o Simple
o Easy
o First come, First serve
 Disadvantages of FCFS
o The scheduling method is non-preemptive; the process will run to
completion.
o Due to the non-preemptive nature of the algorithm, the problem of
starvation may occur.
o Although it is easy to implement, it performs poorly, since the
average waiting time is higher compared with other scheduling
algorithms.
 Example 1 :

 Consider the following set of processes that arrive at time 0, with the
length of the CPU burst given in milliseconds:

 If the processes arrive in the order P1, P2, P3, and are served in FCFS order,
we get the result shown in the following Gantt chart, which is a bar chart
that illustrates a particular schedule, including the start and finish times of
each of the participating processes:

 The waiting time is :


 0 milliseconds for process P1,
 24 milliseconds for process P2, And
 27 milliseconds for process P3.
 Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds.
 Note :If the processes arrive in the order P2, P3, P1, however, the results
will be as shown in the following Gantt chart:

 The average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds. This


reduction is substantial.
 If the CPU gets processes with high burst times at the front of the ready
queue, then processes with lower burst times may get blocked; they may
never get the CPU if the job in execution has a very high burst time. This is
called the convoy effect, and it can lead to starvation.
 In other words, the convoy effect is a phenomenon associated with the
First-Come, First-Served (FCFS) algorithm in which the whole operating
system slows down due to a few slow processes.
 Example 2 :
 In the Following schedule, there are 5 processes with process ID P0, P1, P2,
P3 and P4. P0 arrives at time 0, P1 at time 1, P2 at time 2, P3 arrives at time
3 and Process P4 arrives at time 4 in the ready queue. The processes and
their respective Arrival and Burst time are given in the following table. Find
the Turnaround time and the waiting time.

Process ID   Arrival Time   Burst Time
P0           0              2
P1           1              6
P2           2              4
P3           3              9
P4           4              12

Solution:

Process ID   Arrival Time   Burst Time   Completion Time   Turnaround Time   Waiting Time
P0           0              2            2                 2                 0
P1           1              6            8                 7                 1
P2           2              4            12                10                6
P3           3              9            21                18                9
P4           4              12           33                29                17
 Gantt chart

 The Average Waiting Time = (0+1+6+9+17)/5 = 33/5 = 6.6 ms
 The Average Turnaround Time = (2+7+10+18+29)/5 = 66/5 = 13.2 ms
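The FCFS schedule from Example 2 can be reproduced with a short simulation (a sketch; the function name and `(pid, arrival, burst)` tuple layout are our own):

```python
def fcfs(processes):
    """Non-preemptive FCFS. processes: list of (pid, arrival, burst)."""
    time = 0
    results = {}
    for pid, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)   # CPU may sit idle until the job arrives
        time += burst               # run the job to completion
        tat = time - arrival
        # store (completion, turnaround, waiting) for each process
        results[pid] = (time, tat, tat - burst)
    return results

procs = [(0, 0, 2), (1, 1, 6), (2, 2, 4), (3, 3, 9), (4, 4, 12)]
res = fcfs(procs)
avg_wait = sum(w for _, _, w in res.values()) / len(res)
print(avg_wait)  # -> 6.6
```

Each tuple in `results` matches a row of the solution table above.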
2. Shortest-Job-First Scheduling

 This algorithm associates with each process the length of the process’s next
CPU burst.
 When the CPU is available, it is assigned to the process that has the
smallest next CPU burst. If the next CPU bursts of two processes are the
same, FCFS scheduling is used to break the tie.
 Note that a more appropriate term for this scheduling method would be
the shortest-next-CPU-burst algorithm, because scheduling depends on the
length of the next CPU burst of a process, rather than its total length.

Advantages of SJF

o Maximum throughput
o Minimum average waiting and turnaround time

Disadvantages of SJF

o May suffer from the problem of starvation


o It is not implementable because the exact Burst time for a process
can't be known in advance.
 Example 1:
 Consider the following set of processes, with the length of the CPU burst
given in milliseconds:

 Gantt chart

 Thus, the average Turnaround Time is (9+24+16+3)/4 = 52/4 = 13 ms


 The waiting time is:
 3 milliseconds for process P1,
 16 milliseconds for process P2,
 9 milliseconds for process P3, and
 0 milliseconds for process P4.
 Thus, the average waiting time is (3+16+9+0)/4 =28/4= 7 milliseconds.
 By comparison, if we were using the FCFS scheduling scheme, the average
waiting time would be 10.25 milliseconds.
 The SJF scheduling algorithm is provably optimal, in that it gives the
minimum average waiting time for a given set of processes.
 Example 2:
 In the following example, there are five jobs named as P1, P2, P3, P4 and
P5. Their arrival time and burst time are given in the table below.

PID   Arrival Time   Burst Time
P1    1              7
P2    3              3
P3    6              2
P4    7              10
P5    9              8

 Solution :

PID   Arrival Time   Burst Time   Completion Time   Turnaround Time   Waiting Time
P1    1              7            8                 7                 0
P2    3              3            13                10                7
P3    6              2            10                4                 2
P4    7              10           31                24                14
P5    9              8            21                12                4
 Gantt chart

 The Average Waiting Time = (0+7+2+14+4)/5 = 27/5 = 5.4 ms
 The Average Turnaround Time = (7+10+4+24+12)/5 = 57/5 = 11.4 ms

 Since no process arrives at time 0, there will be an empty slot in
the Gantt chart from time 0 to 1 (the time at which the first process
arrives).
 According to the algorithm, the OS schedules the process which is having
the lowest burst time among the available processes in the ready queue.
 Till time 1, we have only one process (P1) in the ready queue, so the
scheduler will schedule it on the processor no matter what its burst time is.
 P1 executes until time unit 8. By then, three more processes have arrived
in the ready queue, so the scheduler will choose the process with the
lowest burst time.
 Among the processes given in the table, P3 will be executed next since it
has the lowest burst time among all the available processes.
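The non-preemptive SJF rule with arrival times can be sketched as follows (our own function name and `(pid, arrival, burst)` layout; it reproduces the Example 2 table above):

```python
def sjf(processes):
    """Non-preemptive SJF. processes: list of (pid, arrival, burst)."""
    remaining = list(processes)
    time, done = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                        # CPU idles until the next arrival
            time = min(p[1] for p in remaining)
            continue
        # pick the smallest burst; break ties by arrival time (FCFS)
        pid, arrival, burst = min(ready, key=lambda p: (p[2], p[1]))
        time += burst                        # run to completion
        done[pid] = (time, time - arrival, time - arrival - burst)
        remaining.remove((pid, arrival, burst))
    return done

procs = [(1, 1, 7), (2, 3, 3), (3, 6, 2), (4, 7, 10), (5, 9, 8)]
res = sjf(procs)
print(sum(w for *_, w in res.values()) / len(res))  # -> 5.4
```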

Shortest Remaining Time First (SRTF) Scheduling Algorithm:


 This algorithm is the pre-emptive version of SJF scheduling.
 In SRTF, the execution of a process can be stopped after a certain amount
of time. At the arrival of every process, the short-term scheduler schedules
the process with the least remaining burst time among the list of available
processes and the running process.
 Once all the processes are available in the ready queue, no pre-emption
will be done and the algorithm will work as SJF scheduling. The context of
the process is saved in the Process Control Block when the process is
removed from execution and the next process is scheduled. This PCB is
accessed on the next execution of the process.
 Example 1:
 Consider the following four processes, with the length of the CPU burst
given in milliseconds:

 Solution:

 The completion times are P1=17, P2=5, P3=26, and P4=10.

 The Turnaround Times are P1=17, P2=4, P3=24 and P4=7.
 The average Turnaround Time = (17+4+24+7)/4 = 52/4 = 13.0 ms
 The Waiting Times are P1=9, P2=0, P3=15, and P4=2.
 The Average Waiting Time = (9+0+15+2)/4 = 26/4 = 6.5 ms
 Example 2:
 There are six jobs P1, P2, P3, P4, P5 and P6. Their arrival time and burst
time are given below in the table.

Process ID   Arrival Time   Burst Time
P1           0              8
P2           1              4
P3           2              2
P4           3              1
P5           4              3
P6           5              2

Solution:
Process ID   Arrival Time   Burst Time   Completion Time   Turnaround Time   Waiting Time   Response Time
P1           0              8            20                20                12             0
P2           1              4            10                9                 5              1
P3           2              2            4                 2                 0              2
P4           3              1            5                 2                 1              4
P5           4              3            13                9                 6              10
P6           5              2            7                 2                 0              5

 Gantt chart

 The Average Waiting Time = 24/6 = 4 ms
 The Average Turnaround Time = 44/6 ≈ 7.33 ms
 The Gantt chart is prepared according to the arrival and burst time given in
the table.

1. At time 0, the only available process is P1, with CPU burst time 8. Since it
is the only process in the list, it is scheduled.
2. The next process arrives at time unit 1. Since the algorithm we are using is
SRTF, which is preemptive, the current execution is stopped and the
scheduler checks for the process with the least remaining burst time.
By now there are two processes in the ready queue. The OS has executed
P1 for one unit of time, so the remaining burst time of P1 is 7 units. The
burst time of process P2 is 4 units; hence process P2 is scheduled on the
CPU according to the algorithm.
3. The next process P3 arrives at time unit 2. At this time, the execution of
process P2 is stopped and the process with the least remaining burst time
is searched for. Since process P3 has a burst time of 2 units, it is given
priority over the others.
4. The next process P4 arrives at time unit 3. At this arrival, the scheduler
pauses the execution of P3 and checks which process has the least
remaining burst time among the available processes (P1, P2, P3 and P4).
P1 and P2 have remaining burst times of 7 units and 3 units respectively.

P3 and P4 each have a remaining burst time of 1 unit. Since both are
equal, scheduling is done according to their arrival times. P3 arrived
earlier than P4 and is therefore scheduled again.
5. The next process P5 arrives at time unit 4. By this time, process P3 has
completed its execution and is no longer in the list. The scheduler
compares the remaining burst times of all the available processes. Since
the burst time of process P4 is 1, the least among all, it is scheduled.
6. The next process P6 arrives at time unit 5; by this time, process P4 has
completed its execution. We have 4 available processes now: P1 (7), P2
(3), P5 (3) and P6 (2). The burst time of P6 is the least among all, so P6 is
scheduled. Since all the processes have now arrived, the algorithm will
work the same as SJF. P6 will be executed till its completion and then the
process with the least remaining time will be scheduled.
7. Once all the processes arrive, no preemption is done and the algorithm will
work as SJF.
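The SRTF rule can be sketched as a one-tick-at-a-time simulation (the function name and `(pid, arrival, burst)` layout are our own; it reproduces the completion times in the table above):

```python
def srtf(processes):
    """Preemptive SJF (SRTF). processes: list of (pid, arrival, burst)."""
    remaining = {pid: burst for pid, _, burst in processes}
    arrival = {pid: a for pid, a, _ in processes}
    time, completion = 0, {}
    while remaining:
        ready = [p for p in remaining if arrival[p] <= time]
        if not ready:                     # nothing has arrived yet: CPU idles
            time += 1
            continue
        # least remaining time wins; ties broken by earlier arrival
        pid = min(ready, key=lambda p: (remaining[p], arrival[p]))
        remaining[pid] -= 1               # run for one time unit
        time += 1
        if remaining[pid] == 0:
            completion[pid] = time
            del remaining[pid]
    return completion

procs = [(1, 0, 8), (2, 1, 4), (3, 2, 2), (4, 3, 1), (5, 4, 3), (6, 5, 2)]
comp = srtf(procs)
waits = [comp[p] - a - b for p, a, b in procs]
print(sum(waits) / len(waits))  # -> 4.0 (= 24/6)
```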

 Priority Scheduling :
 The SJF algorithm is a special case of the general priority-scheduling
algorithm.
 A priority is associated with each process, and the CPU is allocated to the
process with the highest priority.
 Equal-priority processes are scheduled in FCFS order.
 In priority scheduling, a priority number is assigned to each process. In
some systems, the lower the number, the higher the priority; in others,
the higher the number, the higher the priority. The process with the
highest priority among the available processes is given the CPU. There are
two types of priority scheduling algorithms: pre-emptive priority
scheduling and non-pre-emptive priority scheduling.

 The priority number assigned to each process may or may not vary. If the
priority number doesn't change throughout the life of the process, it is
called static priority; if it keeps changing at regular intervals, it is called
dynamic priority.
 Non Preemptive Priority Scheduling:
 In non-preemptive priority scheduling, processes are scheduled according
to the priority number assigned to them. Once a process gets scheduled,
it will run till completion. Generally, the lower the priority number, the
higher the priority of the process.

 A major problem with priority scheduling algorithms is indefinite
blocking, or starvation.

 A solution to the problem of indefinite blockage of low-priority processes
is aging. Aging involves gradually increasing the priority of processes that
wait in the system for a long time. For example, if priorities range from 127
(low) to 0 (high), we could increase the priority of a waiting process by 1
every 15 minutes. Eventually, even a process with an initial priority of 127
would have the highest priority in the system and would be executed. In
fact, it would take no more than 32 hours for a priority-127 process to age
to a priority-0 process.
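The 32-hour figure can be checked with the arithmetic of the example (a plain calculation, no assumptions beyond the numbers in the text):

```python
# Aging arithmetic: priorities 127 (low) to 0 (high),
# priority raised by 1 every 15 minutes.
start_priority = 127
minutes_per_step = 15
minutes_to_top = start_priority * minutes_per_step  # 127 steps of 15 minutes
print(minutes_to_top / 60)  # -> 31.75 hours, i.e. no more than 32 hours
```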

 Example 1: Non-preemptive
 Consider the following set of processes, assumed to have arrived at time 0
in the order P1, P2, · · ·, P5, with the length of the CPU burst given in
milliseconds:
 Gantt chart:

 The average Turnaround Time = (16+1+18+19+6)/5 = 60/5 = 12 ms
 The average Waiting Time = (6+0+16+18+1)/5 = 41/5 = 8.2 ms

 Example 2: Non-preemptive
In this example, there are 7 processes P1, P2, P3, P4, P5, P6 and P7. Their
priorities, arrival times and burst times are given in the table.

Process ID   Priority   Arrival Time   Burst Time
P1           2          0              3
P2           6          2              5
P3           3          1              4
P4           5          4              2
P5           7          6              9
P6           4          5              4
P7           10         7              10

 We can prepare the Gantt chart according to non-preemptive priority
scheduling.

 The process P1 arrives at time 0 with a burst time of 3 units and priority
number 2. Since no other process has arrived yet, the OS schedules it
immediately.
 During the execution of P1, two more processes, P2 and P3, arrive. Since
the priority number of P3 (3) is lower than that of P2 (6), the CPU will
execute P3 over P2.
 During the execution of P3, all the remaining processes arrive in the ready
queue. The process with the lowest priority number will be given the CPU
next. Since P6 has priority number 4, it will be executed just after P3.
 After P6, P4 has the least priority number among the available processes,
so it gets executed for its whole burst time.
 Since all the jobs are now available in the ready queue, all the jobs
execute according to their priorities. If two jobs have the same priority
number, the one with the earlier arrival time is executed first.

Process ID   Priority   Arrival Time   Burst Time   Completion Time   Turnaround Time   Waiting Time   Response Time
P1           2          0              3            3                 3                 0              0
P2           6          2              5            18                16                11             13
P3           3          1              4            7                 6                 2              3
P4           5          4              2            13                9                 7              11
P5           7          6              9            27                21                12             18
P6           4          5              4            11                6                 2              7
P7           10         7              10           37                30                20             27

 Average Waiting Time = (0+11+2+7+12+2+20)/7 = 54/7 ≈ 7.71 units
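The non-preemptive priority schedule above can be reproduced with the same simulation pattern as FCFS and SJF (our own function name and `(pid, priority, arrival, burst)` layout; lower number means higher priority):

```python
def priority_np(processes):
    """Non-preemptive priority scheduling (lower number = higher priority).
    processes: list of (pid, priority, arrival, burst)."""
    remaining = list(processes)
    time, done = 0, {}
    while remaining:
        ready = [p for p in remaining if p[2] <= time]
        if not ready:                    # CPU idles until the next arrival
            time = min(p[2] for p in remaining)
            continue
        # lowest priority number wins; ties broken by earlier arrival
        pid, prio, arr, burst = min(ready, key=lambda p: (p[1], p[2]))
        time += burst                    # run to completion
        done[pid] = (time, time - arr, time - arr - burst)
        remaining.remove((pid, prio, arr, burst))
    return done

procs = [(1, 2, 0, 3), (2, 6, 2, 5), (3, 3, 1, 4), (4, 5, 4, 2),
         (5, 7, 6, 9), (6, 4, 5, 4), (7, 10, 7, 10)]
res = priority_np(procs)
print(sum(w for *_, w in res.values()))  # -> 54 (total waiting time)
```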

 Once all the jobs are available in the ready queue, the algorithm behaves
as non-preemptive priority scheduling, which means the scheduled job
runs till completion and no preemption is done.

Preemptive Priority Scheduling

 In preemptive priority scheduling, at the time of arrival of a process in the
ready queue, its priority is compared with the priorities of the other
processes present in the ready queue, as well as with that of the one being
executed by the CPU at that point in time. The one with the highest
priority among all the available processes will be given the CPU next.
 The difference between preemptive priority scheduling and non
preemptive priority scheduling is that, in the preemptive priority
scheduling, the job which is being executed can be stopped at the arrival of
a higher priority job.

Example
There are 7 processes P1, P2, P3, P4, P5, P6 and P7 given. Their respective
priorities, Arrival Times and Burst times are given in the table below.
Avg Waiting Time = (0+14+0+7+1+25+16)/7 = 63/7 = 9 units
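The preemptive variant can be sketched as a tick-by-tick simulation, like SRTF but keyed on priority number. The three-process data below is our own hypothetical example, not the (omitted) table from the text:

```python
def priority_preemptive(processes):
    """Preemptive priority, 1-unit ticks (lower number = higher priority).
    processes: list of (pid, priority, arrival, burst)."""
    prio = {p: pr for p, pr, _, _ in processes}
    arr = {p: a for p, _, a, _ in processes}
    rem = {p: b for p, _, _, b in processes}
    time, comp = 0, {}
    while rem:
        ready = [p for p in rem if arr[p] <= time]
        if not ready:
            time += 1
            continue
        # highest priority (lowest number) wins; ties broken by arrival
        p = min(ready, key=lambda q: (prio[q], arr[q]))
        rem[p] -= 1                      # run for one time unit
        time += 1
        if rem[p] == 0:
            comp[p] = time
            del rem[p]
    return comp

# Hypothetical data: P1 is preempted at t=1 by the higher-priority P2.
procs = [(1, 3, 0, 4), (2, 1, 1, 2), (3, 2, 2, 3)]
print(priority_preemptive(procs))  # -> {2: 3, 3: 6, 1: 9}
```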
Round Robin Scheduling Algorithm

 The Round Robin scheduling algorithm is one of the most popular
scheduling algorithms, and one that can actually be implemented in most
operating systems.
 This is the preemptive version of first-come, first-served scheduling.
 The Algorithm focuses on Time Sharing. In this algorithm, every process
gets executed in a cyclic way.
 A certain time slice is defined in the system which is called time quantum.
 Each process present in the ready queue is assigned the CPU for that time
quantum. If the execution of the process completes during that time, the
process terminates; otherwise the process goes back to the ready queue
and waits for its next turn to complete its execution.

Advantages
 It is actually implementable in a system because it does not depend on
the burst time.
 It doesn't suffer from the problem of starvation or the convoy effect.
 All the jobs get a fair allocation of CPU.
Disadvantages
 The higher the time quantum, the higher the response time in the system.
 The lower the time quantum, the higher the context switching overhead in
the system.
 Deciding a perfect time quantum is really a very difficult task in the system.
 Example: Consider the following set of processes that arrive at time 0, with
the length of the CPU burst given in milliseconds:

 Solution :

 If we use a time quantum of 4 milliseconds, then process P1 gets the first
4 milliseconds. Since it requires another 20 milliseconds, it is preempted
after the first time quantum, and the CPU is given to the next process in the
queue, process P2. Process P2 does not need 4 milliseconds, so it quits
before its time quantum expires. The CPU is then given to the next process,
process P3. Once each process has received 1 time quantum, the CPU is
returned to process P1 for an additional time quantum.
 Let's calculate the average waiting time for this schedule. P1 waits for
6 milliseconds (10 - 4), P2 waits for 4 milliseconds, and P3 waits for 7
milliseconds.
 Thus, the average waiting time is 17/3 = 5.66 milliseconds.
 In the RR scheduling algorithm, no process is allocated the CPU for more
than 1 time quantum in a row (unless it is the only runnable process). If a
process's CPU burst exceeds 1 time quantum, that process is preempted
and is put back in the ready queue. The RR scheduling algorithm is thus
preemptive.
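For processes that all arrive at time 0, Round Robin reduces to cycling a FIFO queue. The sketch below (our own function name and `(pid, burst)` layout) reproduces the 17/3 ms result above:

```python
from collections import deque

def round_robin(processes, quantum):
    """RR scheduling. processes: list of (pid, burst), all arriving at t=0."""
    rem = {pid: b for pid, b in processes}
    queue = deque(pid for pid, _ in processes)
    time, completion = 0, {}
    while queue:
        pid = queue.popleft()
        run = min(quantum, rem[pid])     # run for at most one quantum
        time += run
        rem[pid] -= run
        if rem[pid] == 0:
            completion[pid] = time
        else:
            queue.append(pid)            # back to the tail of the ready queue
    return completion

procs = [(1, 24), (2, 3), (3, 3)]
comp = round_robin(procs, 4)
waits = [comp[p] - b for p, b in procs]  # arrival is 0, so wait = completion - burst
print(sum(waits) / len(waits))  # -> 5.666... (= 17/3)
```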

Example:
Consider the following table of arrival time and burst time for four processes P1,
P2, P3, and P4, with Time Quantum = 2:

Process   Burst Time   Arrival Time
P1        5 ms         0 ms
P2        4 ms         1 ms
P3        2 ms         2 ms
P4        1 ms         4 ms

The Gantt chart for the Round Robin CPU scheduling algorithm will be as
shown below:

Gantt chart for Round Robin Scheduling Algorithm

Multilevel Queue Scheduling

 Multilevel queue scheduling is used when processes in the ready queue can
be divided into different classes where each class has its own scheduling
needs. For instance, foreground or interactive processes and background or
batch processes are commonly divided.
 In addition, foreground processes may have priority (externally defined)
over background processes.
 A multilevel queue scheduling algorithm partitions the ready queue into
several separate queues. The processes are permanently assigned to one
queue, generally based on some property of the process, such as memory
size, process priority, or process type. Each queue has its own scheduling
algorithm.
 The foreground queue might be scheduled by an RR algorithm, while the
background queue is scheduled by an FCFS algorithm.
 The description of the processes in the above diagram is as follows:
o System Processes: Processes run by the operating system itself are
generally termed system processes.
o Interactive Processes: An interactive process is a type of process
that requires user interaction.
o Batch Processes: Batch processing is a technique in which the
operating system collects programs and data together in the form
of a batch before processing starts.

 In addition, there must be scheduling among the queues, which is


commonly implemented as fixed-priority preemptive scheduling.
Advantages
 It allows us to apply different scheduling algorithms for different processes.
 It offers a low scheduling overhead, i.e., the time taken by the dispatcher to
move the process from the ready state to the running state is low.
Disadvantages
 There are chances of starvation for lower priority processes. If higher
priority processes keep coming, then the lower priority processes won't get
an opportunity to go into the running state.
 Multilevel queue scheduling is inflexible.

Example

 Let us consider the following four processes. (Use Round robin scheduling
algorithm with time quantum=2 msec)
 Priority of queue: Queue1 > Queue2 > Queue3
 The Gantt chart will be like this:

 In this example, the processes P1, P2, and P3 arrive at t=0, but still, P1 runs
first as it belongs to queue number 1, which has a higher priority. After the
P1 process is over, the P2 process runs due to its higher priority than P3, and
then P3 runs. While the P3 process is running, process P4, belonging to
queue 1 of higher priority, arrives. So, the process P3 is stopped, and P4 is
run. After P4 runs to completion, P3 is resumed.
 Example 2: Let us consider the following four processes. (For queue 1,
use Round Robin scheduling with time quantum = 2 msec, and for
queue 2 use SJF.)
Process ID   Burst Time   Arrival Time   Queue
P1           3            0              1
P2           6            0              1
P3           6            1              2
P4           3            1              2

Multilevel Feedback Queue Scheduling

 In a multilevel queue-scheduling algorithm, processes are permanently


assigned to a queue on entry to the system. Processes do not move
between queues. This setup has the advantage of low scheduling overhead,
but the disadvantage of being inflexible.
 Multilevel feedback queue scheduling, however, allows a process to move
between queues. The idea is to separate processes with different CPU-burst
characteristics. If a process uses too much CPU time, it will be moved
to a lower-priority queue. Similarly, a process that waits too long
in a lower-priority queue may be moved to a higher-priority queue. This
form of aging prevents starvation.

Explanation:

 First of all, suppose that queues 1 and 2 follow round robin with time
quantum 8 and 16 respectively and queue 3 follows FCFS. One of the
implementations of Multilevel Feedback Queue Scheduling is as follows:
 If any process starts executing then firstly it enters queue 1.
 In queue 1, the process executes for 8 units. If it completes in these 8
units, or gives up the CPU for an I/O operation within these 8 units, then
the priority of this process does not change, and if for some reason it
comes back to the ready queue it again starts its execution in queue 1.
 If a process that is in queue 1 does not complete in 8 units then its priority
gets reduced and it gets shifted to queue 2.
 Above points 2 and 3 are also true for processes in queue 2 but the time
quantum is 16 units. Generally, if any process does not complete in a given
time quantum then it gets shifted to the lower priority queue.
 After that in the last queue, all processes are scheduled in an FCFS manner.
 It is important to note that a process in a lower-priority queue can
execute only when the higher-priority queues are empty.
 Any running process in the lower priority queue can be interrupted by a
process arriving in the higher priority queue.
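The demotion rule described above can be sketched for a simplified case in which all processes arrive at time 0 (so no mid-run interruption occurs). The function name, tuple layout, and two-process workload are our own; the quanta of 8 and 16 follow the text:

```python
from collections import deque

def mlfq(processes, quanta=(8, 16)):
    """Multilevel feedback queue sketch: two RR levels with the given
    quanta, plus a final FCFS level. processes: list of (pid, burst),
    all arriving at t=0."""
    levels = [deque(), deque(), deque()]
    rem = {pid: b for pid, b in processes}
    for pid, _ in processes:
        levels[0].append(pid)            # every new process enters queue 1
    time, completion = 0, {}
    while any(levels):
        lvl = next(i for i, q in enumerate(levels) if q)  # highest non-empty
        pid = levels[lvl].popleft()
        slice_ = rem[pid] if lvl == 2 else min(quanta[lvl], rem[pid])
        time += slice_
        rem[pid] -= slice_
        if rem[pid] == 0:
            completion[pid] = time
        else:
            levels[lvl + 1].append(pid)  # used its whole quantum: demote
    return completion

# P1 (burst 30) is demoted twice; P2 (burst 5) finishes within its quantum.
print(mlfq([(1, 30), (2, 5)]))  # -> {2: 13, 1: 35}
```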

Real-Time CPU Scheduling


 CPU scheduling for real-time operating systems involves special issues.
 In general, we can distinguish between soft real-time systems and hard
real-time systems.
 Soft real-time systems provide no guarantee as to when a critical real-time
process will be scheduled. They guarantee only that the process will be
given preference over noncritical processes.
 Hard real-time systems have stricter requirements. A task must be serviced
by its deadline; service after the deadline has expired is the same as no
service at all.
 In this section, we explore several issues related to process scheduling in
both soft and hard real-time operating systems.
Minimizing Latency
 The combined delay between an input or command and the desired
output is known as Latency time.
 Consider the event-driven nature of a real-time system. The system is
typically waiting for an event in real time to occur. Events may arise either
in software (as when a timer expires) or in hardware (as when a
remote-controlled vehicle detects that it is approaching an obstruction).
When an event occurs, the system must respond to and service it as
quickly as possible.
 We refer to event latency as the amount of time that elapses from when an
event occurs to when it is serviced.
 Two types of latencies affect the performance of real-time systems:
1. Interrupt latency
2. Dispatch latency
Interrupt latency: the length of time that it takes for a computer interrupt to be
acted on after it has been generated.

 It refers to the period of time from the arrival of an interrupt at the CPU to
the start of the routine that services the interrupt. When an interrupt
occurs, the operating system must first complete the instruction it is
executing and determine the type of interrupt that occurred. It must then
save the state of the current process before servicing the interrupt using
the specific interrupt service routine (ISR). The total time required to
perform these tasks is the interrupt latency.
Dispatch latency

 The amount of time required for the scheduling dispatcher to stop one
process and start another is known as dispatch latency. (Or) The term
dispatch latency describes the amount of time it takes for a system to
respond to a request for a process to begin operation.
 Providing real-time tasks with immediate access to the CPU mandates that
real-time operating systems minimize this latency as well. The most
effective technique for keeping dispatch latency low is to provide
preemptive kernels.
 The conflict phase of dispatch latency has two components:
1. Preemption of any process running in the kernel.
2. Release by low-priority processes of resources needed by a high-priority
process.
 The important features of real-time scheduling are:
 Priority-Based Scheduling:
 The most important feature of a real-time operating system is to respond
immediately to a real-time process as soon as that process requires the
CPU.
 As a result, the scheduler for a real-time operating system must support a
priority-based algorithm with preemption.
 Recall that priority-based scheduling algorithms assign each process a
priority based on its importance; more important tasks are assigned higher
priorities than those deemed less important. If the scheduler also supports
preemption, a process currently running on the CPU will be preempted if a
higher-priority process becomes available to run.
 Rate-Monotonic Scheduling:
 The rate-monotonic scheduling algorithm schedules periodic tasks using a
static priority policy with preemption. If a lower-priority process is running
and a higher-priority process becomes available to run, it will preempt the
lower-priority process. Upon entering the system, each periodic task is
assigned a priority inversely based on its period: the shorter the period, the
higher the priority; the longer the period, the lower the priority. The
rationale behind this policy is to assign a higher priority to tasks that require
the CPU more often. Furthermore, rate-monotonic scheduling assumes that
the processing time of a periodic process is the same for each CPU burst.
That is, every time a process acquires the CPU, the duration of its CPU burst
is the same.
 Earliest-Deadline-First Scheduling:
 Earliest-deadline-first (EDF) scheduling dynamically assigns priorities
according to deadline. The earlier the deadline, the higher the priority; the
later the deadline, the lower the priority. Under the EDF policy, when a
process becomes runnable, it must announce its deadline requirements to
the system. Priorities may have to be adjusted to reflect the deadline of the
newly runnable process.
 Proportional Share Scheduling:
 Proportional share scheduling preallocates a certain amount of CPU time to
each process. In a proportional share algorithm, every job has a weight,
and each job receives a share of the available resources proportional to its
weight.
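The arithmetic behind proportional share is simple: a job with weight w out of a total weight W receives the fraction w/W of each scheduling interval. A minimal sketch (the function name `pshare_slice_ms` is our own, and integer division rounds each slice down):

```c
#include <stdio.h>

/* Slice of a scheduling interval granted to a job:
   interval * (weight / total_weight), in whole milliseconds. */
long pshare_slice_ms(long interval_ms, long weight, long total_weight) {
    return interval_ms * weight / total_weight;
}

int main(void) {
    long weights[] = { 1, 2, 5 };      /* total weight = 8 */
    long total = 8, interval = 100;    /* a 100 ms scheduling interval */
    for (int i = 0; i < 3; i++)
        printf("job %d gets %ld ms\n", i,
               pshare_slice_ms(interval, weights[i], total));
    return 0;   /* slices: 12, 25, and 62 ms */
}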
Multiple-Processor Scheduling
 Multiple processor scheduling or multiprocessor scheduling focuses on
designing the system's scheduling function, which consists of more than
one processor. Multiple CPUs share the load (load sharing) in
multiprocessor scheduling so that various processes run simultaneously.
 In general, multiprocessor scheduling is more complex than single-processor
scheduling. In multiprocessor scheduling there are many
processors, and because they are identical, any process can run on any of them at any time.
 The multiple CPUs in the system are in close communication, which shares
a common bus, memory, and other peripheral devices.

Characteristics of Multiple-Processor Scheduling

1. Approaches to Multiple-Processor Scheduling

 There are two approaches to multiple-processor scheduling in the
operating system: symmetric multiprocessing and asymmetric
multiprocessing.
1. Symmetric Multiprocessing: It is used where each processor is self-scheduling.
All processes may be in a common ready queue, or each
processor may have its own private queue of ready processes. Scheduling
proceeds by having the scheduler for each processor examine the
ready queue and select a process to execute.
2. Asymmetric Multiprocessing: It is used when all the scheduling decisions
and I/O processing are handled by a single processor called the master
server. The other processors execute only user code. This approach is simple and
reduces the need for data sharing.

2. Processor Affinity

 Processor affinity means a process has an affinity for the processor on
which it is currently running.
 Most SMP (symmetric multiprocessing) systems try to avoid migrating
processes from one processor to another and keep a process running on the
same processor. This is known as processor affinity.
 There are two types of processor affinity:
1. Soft Affinity: When an operating system has a policy of keeping a
process running on the same processor but does not guarantee that it will do
so, this situation is called soft affinity.
2. Hard Affinity: Hard affinity allows a process to specify a subset of
processors on which it may run. Some Linux systems implement soft
affinity and also provide system calls like sched_setaffinity() that
support hard affinity.

3. Load Balancing

 Load balancing is the phenomenon that keeps the workload evenly
distributed across all processors in an SMP system. Load balancing is
necessary only on systems where each processor has its own private queue
of processes eligible to execute.

There are two general approaches to load balancing:

1. Push Migration: In push migration, a specific task routinely checks the load
on each processor. If it finds an imbalance, it evenly distributes the
load by moving processes from overloaded processors to
idle or less busy ones.
2. Pull Migration: Pull migration occurs when an idle processor pulls a
waiting task from a busy processor for its execution.
Thread Scheduling
 Thread scheduling involves two levels (boundaries) of scheduling:
o Scheduling of user-level threads (ULTs) onto kernel-level threads
(KLTs) by means of a lightweight process (LWP).
o Scheduling of kernel-level threads by the system scheduler.
 Lightweight Process (LWP):
o Lightweight processes are threads in the user space that act as an
interface for the ULT to access the physical CPU resources. The thread
library schedules which thread of a process runs on which LWP, and
for how long. The number of LWPs created by the thread library depends
on the type of application. In the case of an I/O-bound application,
the number of LWPs depends on the number of user-level threads:
when an LWP is blocked on an I/O operation, the thread library
must create and schedule another LWP in order to run another ULT.
Thus, in an I/O-bound application, the number of LWPs
is equal to the number of ULTs. In the case of a CPU-bound
application, it depends only on the application. Each LWP is attached
to a separate kernel-level thread.
 In real time, the first boundary of thread scheduling goes beyond specifying
the scheduling policy and the priority. It requires two controls to be
specified for user-level threads: contention scope and allocation
domain. These are explained below.
o 1. Contention Scope:
The word contention here refers to the competition or fight among
the User level threads to access the kernel resources. Thus, this
control defines the extent to which contention takes place. It is
defined by the application developer using the thread library.
Depending upon the extent of contention it is classified as Process
Contention Scope and System Contention Scope.
o Process Contention Scope (PCS) –
The contention takes place among threads within the same process.
The thread library schedules the highest-priority PCS thread to access
the resources via the available LWPs (priority as specified by the
application developer during thread creation).
o System Contention Scope (SCS) –
The contention takes place among all threads in the system. In this
case, every SCS thread is associated with a separate LWP by the thread
library and is scheduled by the system scheduler to access the
kernel resources.
o In Linux and UNIX operating systems, the POSIX Pthread library
provides the function pthread_attr_setscope() to define the type of
contention scope for a thread during its creation:
int pthread_attr_setscope(pthread_attr_t *attr, int scope);
o The first parameter is a pointer to the attribute object that will be
used when creating the thread.
o The second parameter defines the scope of contention for threads
created with that attribute object. It takes one of two values:
PTHREAD_SCOPE_SYSTEM
PTHREAD_SCOPE_PROCESS
o If the scope value specified is not supported by the system, then the
function returns ENOTSUP.
o 2. Allocation Domain:
The allocation domain is a set of one or more resources for which a
thread is competing. In a multicore system, there may be one or
more allocation domains, each consisting of one or more cores.
One ULT can be a part of one or more allocation domains. Due to the
high complexity of the hardware and software architectural
interfaces involved, this control is usually not specified. But by default, the
multicore system will have an interface that affects the allocation
domain of a thread.
Pthread Scheduling
 POSIX Threads, usually referred to as pthreads, is an execution model that
exists independently from a language, as well as a parallel execution model.
 Pthreads gives you a simple and portable way of expressing multithreading
in your programs.
 It allows a program to control multiple different flows of work that overlap
in time.
 You use the Pthreads scheduling features to set up a policy that determines
which thread the system first selects to run when CPU cycles become
available, and how long each thread can run once it is given the CPU.
 The pthreads run-time library usually lives in /lib, while the development
library usually lives in /usr/lib.