
Chapter: CPU/ Process Scheduling

(Unit-2)
Basic Concept

 CPU scheduling is the basis of multiprogrammed operating systems.

 The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization.
Sequence of CPU and I/O Bursts
 CPU Burst: the interval during which a process executes on the CPU.
 CPU Burst Time: the amount of time the process uses the processor.
 I/O Burst: the interval during which the CPU waits for I/O to complete before further execution.
Scheduling

 Non-Preemptive

 Preemptive
Non-Preemptive
 Once the CPU is allocated to a process, the process keeps the CPU until:
  it releases the CPU when it completes, or
  it switches to the waiting state
 The process runs until completion (or until it blocks); it cannot be interrupted.
E.g.: 1. Windows 3.x and early Apple Macintosh operating systems used non-preemptive scheduling.
2. Modern Windows (including Windows 10) uses a round-robin technique with a multilevel feedback queue for priority scheduling.
E.g.: First-In, First-Out (FIFO)
Preemptive Scheduling
 The running process is interrupted for some time and resumed later, after the higher-priority task has finished its execution.

 The CPU (and other resources) is taken away from the running process when a higher-priority process needs to execute.
Scheduling Criteria
 Criteria for deciding which algorithm to use in a particular situation:

1. CPU Utilization: keep the CPU as busy as possible.

2. Throughput: number of processes completed per unit of time.

3. Turnaround Time: the interval from submission of a process to its completion.

Turnaround Time = time spent getting into memory + waiting in the ready queue + doing I/O + executing on the CPU
(It is the total amount of time taken to execute a particular process.)
Scheduling Criteria
4. Waiting Time: the time a process spends in the ready queue, i.e., the amount of time it waits to acquire control of the CPU.

5. Response Time: the time from the submission of a request to the first response (not the final output).

6. Load Average: the average number of processes residing in the ready queue, waiting for their turn on the CPU.
Scheduling Algorithm Optimization Criteria

 Max CPU utilization


 Max throughput
 Min turnaround time
 Min waiting time
 Min response time
Formula

Turnaround Time = Completion Time - Arrival Time

Waiting Time = Turnaround Time - Burst Time

OR equivalently:
Turnaround Time = Burst Time + Waiting Time
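The formulas above can be sketched as two small helpers (Python is used here purely for illustration):

```python
def turnaround_time(completion, arrival):
    """Turnaround Time = Completion Time - Arrival Time."""
    return completion - arrival

def waiting_time(turnaround, burst):
    """Waiting Time = Turnaround Time - Burst Time."""
    return turnaround - burst

# Example: a process arrives at t=2, needs 5 units of CPU, completes at t=12.
tat = turnaround_time(12, 2)   # 10
wt = waiting_time(tat, 5)      # 5
print(tat, wt)
```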
First-Come, First-Served (FCFS)

 The process that requests the CPU first is allocated the CPU first.
 It is a non-preemptive scheduling algorithm.
 FCFS is implemented with a FIFO queue.
 Processes are allocated the CPU according to their arrival times.
 When a process enters the ready queue, its PCB is attached to the tail of the queue. When the CPU is free, it is allocated to the process at the head/front of the queue.
First-Come, First-Served (FCFS)
 "Run until completed": FIFO algorithm
 Example: Consider three processes arriving in the order P1, P2, P3.
  P1 burst time: 24
  P2 burst time: 3
  P3 burst time: 3
 Draw the Gantt chart and compute the average waiting time and average turnaround time.

Sol: As arrival times are not given, assume the order of arrival is P1, P2, P3 (all at time 0).
First-Come, First-Served (FCFS)

 Example: Three processes arrive in order P1, P2, P3.
  P1 burst time: 24
  P2 burst time: 3
  P3 burst time: 3

 Gantt chart:  | P1 | P2 | P3 |
               0    24   27   30

 Waiting Time
  P1: 0
  P2: 24
  P3: 27
 Turnaround Time / Completion Time:
  P1: 24
  P2: 27
  P3: 30

 Average Waiting Time: (0 + 24 + 27) / 3 = 17 milliseconds

 Average Turnaround Time: (24 + 27 + 30) / 3 = 27 milliseconds
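A minimal FCFS simulation (a Python sketch, assuming all processes arrive at time 0 as in the example above) reproduces these numbers:

```python
def fcfs(bursts):
    """FCFS with all arrivals at t=0: returns (waiting, turnaround) per process."""
    waiting, turnaround, t = [], [], 0
    for b in bursts:
        waiting.append(t)          # time already spent in the ready queue
        t += b                     # process runs to completion, non-preemptive
        turnaround.append(t)       # completion time == turnaround when arrival = 0
    return waiting, turnaround

w, tat = fcfs([24, 3, 3])           # bursts of P1, P2, P3
print(w, tat)                       # [0, 24, 27] [24, 27, 30]
print(sum(w) / 3, sum(tat) / 3)     # 17.0 27.0
```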
Example: First-Come, First-Served (FCFS)
(2) Shortest Job First (SJF)
 The process with the least execution time is selected first.
 The CPU is assigned to the process with the smallest CPU burst time.
 SJF variants:
  Non-Preemptive: the CPU is allocated to the process with the least burst time, and that process keeps the CPU until it completes.

  Preemptive (SRTF): when a new process enters the queue, the scheduler compares its burst time with the remaining time of the currently running process.
  If the remaining time of the running process is greater, the CPU is taken from it and given to the new process.
Shortest Job First (Preemptive)
Q1. Consider the following processes with arrival time (AT) and burst time (BT):

Process   AT   BT
P1        0    4
P2        0    6
P3        0    4

Calculate the completion time, turnaround time, and average waiting time.
SJF (Preemptive) -> SRTF (Shortest Remaining Time First)
Shortest Job First (Preemptive)
Q2. Consider the following processes with arrival time (AT) and burst time (BT):

Process   AT   BT
P1        0    9
P2        1    4
P3        2    9

Calculate the completion time, turnaround time, and average waiting time.
SJF (Preemptive) -> SRTF
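A compact SRTF simulation (a Python sketch): each time unit it runs the arrived process with the least remaining time, which solves the exercise above.

```python
def srtf(procs):
    """procs: list of (pid, arrival, burst). Returns {pid: completion_time}."""
    rem = {pid: bt for pid, at, bt in procs}
    done, t = {}, 0
    while rem:
        ready = [(rem[pid], at, pid) for pid, at, bt in procs
                 if pid in rem and at <= t]
        if not ready:               # CPU idle until the next arrival
            t += 1
            continue
        _, _, pid = min(ready)      # least remaining time; arrival breaks ties
        rem[pid] -= 1               # run for one time unit, then re-check
        t += 1
        if rem[pid] == 0:
            del rem[pid]
            done[pid] = t
    return done

# P1(AT=0, BT=9), P2(1, 4), P3(2, 9)
print(srtf([("P1", 0, 9), ("P2", 1, 4), ("P3", 2, 9)]))
# P2 completes at 5, P1 at 13, P3 at 22
```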
Shortest Job First (Preemptive)
Q3. Consider the following processes with arrival time (AT) and burst time (BT):

Process   AT   BT
P1        0    5
P2        1    7
P3        3    4

Calculate the completion time, turnaround time, and average waiting time.
SJF (Preemptive) -> SRTF
H.W. Practice: Shortest Job First (Non-Preemptive)
Q1. Consider the following processes with arrival time (AT) and burst time (BT):

Process   AT   BT
P1        1    7
P2        2    5
P3        3    1
P4        4    2
P5        5    8

Calculate the completion time, turnaround time, and average waiting time.

Answer (Gantt order, with start times): P1 (1-8) -> P3 (8-9) -> P4 (9-11) -> P2 (11-16) -> P5 (16-24)
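The homework answer can be checked with a sketch of non-preemptive SJF in Python: at each completion it picks the arrived process with the smallest burst.

```python
def sjf_nonpreemptive(procs):
    """procs: list of (pid, arrival, burst). Returns (execution order, completion times)."""
    pending = list(procs)
    order, ct, t = [], {}, 0
    while pending:
        ready = [p for p in pending if p[1] <= t]
        if not ready:                       # idle until the next arrival
            t = min(p[1] for p in pending)
            ready = [p for p in pending if p[1] <= t]
        pid, at, bt = min(ready, key=lambda p: (p[2], p[1]))  # smallest burst wins
        t += bt                             # runs to completion, no preemption
        order.append(pid)
        ct[pid] = t
        pending.remove((pid, at, bt))
    return order, ct

order, ct = sjf_nonpreemptive(
    [("P1", 1, 7), ("P2", 2, 5), ("P3", 3, 1), ("P4", 4, 2), ("P5", 5, 8)])
print(order)   # ['P1', 'P3', 'P4', 'P2', 'P5']
print(ct)      # completion times: P1=8, P3=9, P4=11, P2=16, P5=24
```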
Priority Scheduling
 A priority is associated with each process.
 The CPU is allocated to the process with the highest priority.
 If two processes have the same priority  FCFS

Disadvantage: Starvation (low-priority processes may wait indefinitely)

Solution to Starvation: Aging
Aging: the priority of a waiting process is increased gradually (e.g., after every 5 minutes its priority is incremented by 1).
Priority Scheduling (Preemptive)
Note: Consider 4 as the highest and 7 as the lowest priority.

Process   Arrival Time   Priority   Burst Time   Completion Time
P1        1              5          4            -
P2        2              7          2            -
P3        3              4          3            -
Priority Scheduling (Preemptive)
Note: Consider 0 as the lowest and 3 as the highest priority.

Process   Arrival Time   Priority   Burst Time   Completion Time
P1        0              2          10           -
P2        2              1          5            -
P3        3              0          2            -
P4        5              3          20           -
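One way to solve the second exercise is a per-tick simulation of preemptive priority scheduling (a Python sketch; it assumes, as stated above, that a larger priority number means higher priority):

```python
def priority_preemptive(procs):
    """procs: (pid, arrival, priority, burst); larger priority number runs first."""
    rem = {pid: bt for pid, at, pr, bt in procs}
    ct, t = {}, 0
    while rem:
        ready = [(pr, -at, pid) for pid, at, pr, bt in procs
                 if pid in rem and at <= t]
        if not ready:                # CPU idle until the next arrival
            t += 1
            continue
        _, _, pid = max(ready)       # highest priority; earlier arrival breaks ties
        rem[pid] -= 1                # run one time unit, then re-check for preemption
        t += 1
        if rem[pid] == 0:
            del rem[pid]
            ct[pid] = t
    return ct

# Second exercise above (0 = lowest, 3 = highest priority):
ct = priority_preemptive(
    [("P1", 0, 2, 10), ("P2", 2, 1, 5), ("P3", 3, 0, 2), ("P4", 5, 3, 20)])
print(ct)   # P4=25, P1=30, P2=35, P3=37
```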
Round Robin Scheduling
 A time quantum is associated with all processes.

 Time Quantum: the maximum amount of time for which a process can run once it is scheduled.

 RR scheduling is always preemptive.
Round Robin

Example (TQ = 2):
Process   Arrival Time   Burst Time   Completion Time
P1        0              5            10
P2        1              7            13
P3        2              1            5

Practice:
Process   Arrival Time   Burst Time   Completion Time
P1        0              3            -
P2        3              4            -
P3        4              6            -
Round Robin

Example (TQ = 2):
Process   Arrival Time   Burst Time   Completion Time
P1        0              4            8
P2        1              5            18
P3        2              2            6
P4        3              1            9
P5        4              6            21
P6        6              3            19

Ready-queue order: P1 P2 P3 P1 P4 P5 P2 P6 P5 P2 P6 P5
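The schedule above can be reproduced with a short round-robin simulation (a Python sketch; it assumes the common convention that processes arriving during or at the end of a slice are enqueued before the preempted process is re-queued):

```python
from collections import deque

def round_robin(procs, tq):
    """procs: list of (pid, arrival, burst), sorted by arrival. Returns {pid: completion}."""
    rem = {pid: bt for pid, at, bt in procs}
    arrivals = deque(procs)
    queue, ct, t = deque(), {}, 0

    def admit(now):
        while arrivals and arrivals[0][1] <= now:
            pid, _, _ = arrivals.popleft()
            queue.append(pid)

    admit(0)
    while queue or arrivals:
        if not queue:                       # CPU idle until the next arrival
            t = arrivals[0][1]
            admit(t)
        pid = queue.popleft()
        run = min(tq, rem[pid])             # run for one quantum or until done
        t += run
        rem[pid] -= run
        admit(t)                            # arrivals during the slice go in first
        if rem[pid] == 0:
            ct[pid] = t
        else:
            queue.append(pid)               # preempted process re-queued at the tail
    return ct

ct = round_robin([("P1", 0, 4), ("P2", 1, 5), ("P3", 2, 2),
                  ("P4", 3, 1), ("P5", 4, 6), ("P6", 6, 3)], tq=2)
print(ct)   # P1=8, P2=18, P3=6, P4=9, P5=21, P6=19
```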
Multilevel Queue
A multilevel queue scheduling algorithm partitions the ready queue into several separate queues.

For example, a multilevel queue scheduling algorithm with five queues, listed below in order of priority:

1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student/user processes
Multilevel Queue
 Processes can be:
  Foreground processes: interactive processes currently running in the foreground  RR scheduling is applied
  Background processes: processes running in the background whose effects are not directly visible to the user  FCFS
 Multilevel queue scheduling divides the ready queue into several queues.
 Processes are permanently assigned to one queue based on some property such as memory size, process priority, or process type.
 Each queue has its own scheduling algorithm.
Practice: Multilevel Queue

Process   Arrival Time   Burst Time   Queue
P1        0              4            1
P2        0              3            1
P3        0              8            2
P4        10             5            1

The priority of queue 1 is greater than that of queue 2. Queue 1 uses Round Robin (Time Quantum = 2) and queue 2 uses FCFS.
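This exercise can be simulated per time unit (a Python sketch; it assumes queue 1 preempts queue 2 immediately, RR with TQ = 2 inside queue 1, FCFS inside queue 2):

```python
from collections import deque

def mlq(procs, tq=2):
    """procs: (pid, arrival, burst, queue). Queue 1 (RR) preempts queue 2 (FCFS)."""
    rem = {p[0]: p[2] for p in procs}
    ct, t = {}, 0
    q1, q2 = deque(), deque()
    admitted = set()
    slice_used = 0
    while len(ct) < len(procs):
        for pid, at, bt, q in procs:            # admit newly arrived processes
            if at <= t and pid not in admitted:
                admitted.add(pid)
                (q1 if q == 1 else q2).append(pid)
        if q1:                                  # queue 1 always runs first
            pid = q1[0]
            rem[pid] -= 1
            slice_used += 1
            if rem[pid] == 0:
                q1.popleft()
                ct[pid] = t + 1
                slice_used = 0
            elif slice_used == tq:              # quantum expired: rotate within queue 1
                q1.rotate(-1)
                slice_used = 0
        elif q2:                                # queue 2 runs only when queue 1 is empty
            pid = q2[0]
            rem[pid] -= 1
            if rem[pid] == 0:
                q2.popleft()
                ct[pid] = t + 1
        t += 1
    return ct

ct = mlq([("P1", 0, 4, 1), ("P2", 0, 3, 1), ("P3", 0, 8, 2), ("P4", 10, 5, 1)])
print(ct)   # P1=6, P2=7, P4=15, P3=20
```

Note how P3 (queue 2) starts only at t = 7, is preempted at t = 10 by P4's arrival in queue 1, and resumes at t = 15.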
Multilevel Queue
 As there are different types of processes, they cannot all be put into the same queue with the same scheduling algorithm.

Disadvantages:
1. Until the higher-priority queues are empty, no process from the lower-priority queues will be selected.
2. Starvation of lower-priority processes.

Advantage:
A separate scheduling algorithm can be applied to each queue.
Multilevel Feedback Queue
 The solution is the Multilevel Feedback Queue:
 If a process is taking too long to execute, preempt it and send it to a lower-priority queue.
 Do not allow a low-priority process to wait for too long.
 After some time, move a long-waiting low-priority process to a higher-priority queue  Aging
Multilevel Feedback Queue
 Allows a process to move between queues.

 The idea is to separate processes according to the characteristics of their CPU bursts.

 If a process uses too much CPU time, it will be moved to a lower-priority queue. This scheme leaves I/O-bound and interactive processes in the higher-priority queues.

 In addition, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.
Multi-processor Scheduling
Concerns:
 If multiple CPUs are available, load sharing becomes possible.
 We concentrate on systems in which the processors are identical (homogeneous) in terms of their functionality.
 Any available processor can then be used to run any process in the queue.
There are also limitations:
 Consider a system with an I/O device attached to the private bus of one processor. Processes that wish to use that device must be scheduled to run on that processor.
Approaches to Multiple-Processor Scheduling

 1. Asymmetric multiprocessing
 All scheduling decisions, I/O processing, and other system activities are handled by a single processor, the master server. The other processors execute only user code.

 Only one processor accesses the system data structures, reducing the need for data sharing.
Approaches to Multiple-Processor Scheduling

 2. Symmetric multiprocessing (SMP)

• Each processor is self-scheduling.

• All processes may be in a common ready queue, or each processor may have its own private queue of ready processes.

• Scheduling proceeds by having the scheduler for each processor examine the ready queue and select a process to execute.

Virtually all modern operating systems support SMP, including Windows, Linux, and Mac OS X.
Issues concerning SMP systems

 1. Processor Affinity
 (A process has an affinity for the processor on which it is currently running.)
 Consider what happens to cache memory when a process has been running on a specific processor:

 The data most recently accessed by the process populate the cache of that processor. As a result, successive memory accesses by the process are often satisfied from cache memory.
Issues concerning SMP systems

1. Processor Affinity

 If the process migrates to another processor, the contents of cache memory must be invalidated on the first processor, and the cache of the second processor must be repopulated.
Issues concerning SMP systems

Forms of Processor Affinity

1. Soft affinity
When an operating system has a policy of attempting to keep a process running on the same processor, but not guaranteeing that it will do so, we have a situation known as soft affinity. It is still possible for the process to migrate between processors.

2. Hard affinity
Some systems provide system calls supporting hard affinity, allowing a process to specify a subset of processors on which it may run; the operating system guarantees that the process will not migrate to a processor outside that set.
Issues concerning SMP systems

2. Load Balancing

 On SMP systems, it is important to keep the workload balanced among all processors to fully utilize the benefits of having more than one processor.

 Need for load balancing: otherwise, one or more processors may sit idle while other processors have high workloads, with lists of processes awaiting the CPU.
Issues concerning SMP systems

2. Load Balancing
Two approaches to load balancing: push migration and pull migration.

1. Push migration: a specific task periodically checks the load on each processor and, if it finds an imbalance, evenly distributes the load by moving (or pushing) processes from overloaded to idle or less-busy processors.

2. Pull migration: occurs when an idle processor pulls a waiting task from a busy processor.
Issues concerning SMP systems

3. Multicore Processors

 A multicore processor is a single computing component with two or more independent processing units, called cores, which read and execute program instructions.
 A processor (more commonly, a CPU) is an individual processing device; it may contain multiple cores.
 A core contains its own registers, ALU, and typically a dedicated cache; it performs all of a processor's computational tasks but is not an entire processor package.
Issues concerning SMP systems

3. Multicore Processors

 Multicore processors may complicate scheduling issues:
 When a processor accesses memory, it may spend a significant amount of time waiting for the data to become available. This situation is known as a memory stall.

 A memory stall may occur due to a cache miss (accessing data that are not in cache memory).
Real-Time Scheduling
Scheduling for real-time applications distinguishes two classes of systems:

a) soft real-time systems   b) hard real-time systems

a) Soft real-time systems provide no guarantee as to when a critical real-time process will be scheduled. They guarantee only that the process will be given preference over noncritical processes.
b) Hard real-time systems have stricter requirements: a task must be serviced by its deadline; service after the deadline has expired is the same as no service at all.
Rate-Monotonic Scheduling
 The rate-monotonic scheduling algorithm schedules periodic tasks using a static priority policy with preemption.

 Rate-monotonic scheduling (RMS) is a priority assignment algorithm used in real-time operating systems (RTOS) with a static-priority scheduling class.

 Static priorities are assigned according to the cycle duration (period) of the job: a shorter period results in a higher job priority.
Rate-Monotonic Scheduling

 If a lower-priority process is running and a higher-priority process becomes available to run, it preempts the lower-priority process.

 Upon entering the system, each periodic task is assigned a priority inversely proportional to its period: the shorter the period, the higher the priority; the longer the period, the lower the priority.

 The rationale is to assign a higher priority to tasks that require the CPU more often (i.e., those with shorter periods).
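The RMS priority rule and the classic Liu-Layland schedulability test can be sketched as follows (Python; note that the utilization bound n(2^(1/n) - 1) is a sufficient, not necessary, condition):

```python
def rms_priorities(tasks):
    """tasks: list of (name, execution_time, period). Shorter period = higher priority."""
    return sorted(tasks, key=lambda t: t[2])      # highest priority first

def rms_schedulable(tasks):
    """Liu-Layland sufficient test: total utilization <= n * (2^(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(c / p for _, c, p in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound, utilization, bound

# Two hypothetical periodic tasks: T1 runs 1 unit every 4, T2 runs 2 units every 10.
ok, u, bound = rms_schedulable([("T1", 1, 4), ("T2", 2, 10)])
print(rms_priorities([("T1", 1, 4), ("T2", 2, 10)]))   # T1 ranks above T2
print(ok, round(u, 3), round(bound, 3))                # True 0.45 0.828
```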
Earliest-Deadline-First Scheduling
 Earliest-deadline-first (EDF) scheduling dynamically assigns priorities according to deadline.

 The earlier the deadline, the higher the priority; the later the deadline, the lower the priority.

 Under the EDF policy, when a process becomes runnable, it must announce its deadline requirements to the system. Priorities may have to be adjusted to reflect the deadline of the newly runnable process.
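A minimal EDF sketch (Python, with hypothetical jobs J1 and J2 for illustration): each time unit it runs the released, unfinished job with the earliest deadline.

```python
def edf(jobs):
    """jobs: list of (name, release, exec_time, deadline). Returns {name: finish_time}."""
    rem = {name: c for name, r, c, d in jobs}
    finish, t = {}, 0
    while rem:
        ready = [(d, name) for name, r, c, d in jobs
                 if name in rem and r <= t]
        if not ready:                 # CPU idle until a job is released
            t += 1
            continue
        _, name = min(ready)          # earliest deadline runs first
        rem[name] -= 1                # run one time unit, then re-evaluate
        t += 1
        if rem[name] == 0:
            del rem[name]
            finish[name] = t
    return finish

# J1 released at 0 (3 units, deadline 6); J2 released at 1 (2 units, deadline 4).
f = edf([("J1", 0, 3, 6), ("J2", 1, 2, 4)])
print(f)   # J2 finishes at 3, J1 at 5 -- both meet their deadlines
```

Note how J2 preempts J1 at t = 1 because its deadline (4) is earlier than J1's (6).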
