
CPU Scheduling

By
Noordiana binti Kassim @ Kasim / Mohd Hanif bin Jofri / CeDS UTHM
Topics covered:
 Basic Concepts
 Scheduling Criteria
 Scheduling Algorithms
 Multiple-Processor Scheduling
 Real-Time CPU Scheduling
 Operating Systems Examples
OBJECTIVES
1. To introduce CPU scheduling, which is the basis for multiprogrammed operating systems.
2. To describe various CPU-scheduling algorithms.
3. To examine the scheduling algorithms of several operating systems.
WHAT IS CPU SCHEDULING?

CPU scheduling is a process that allows one process to use the CPU while the execution of another process is on hold (in the waiting state) due to the unavailability of a resource such as I/O, thereby making full use of the CPU. The aim of CPU scheduling is to make the system efficient, fast, and fair.
Basic Concepts

 Maximum CPU utilization is obtained with multiprogramming.
 CPU–I/O Burst Cycle – process execution consists of a cycle of CPU execution and I/O wait.
 A CPU burst is followed by an I/O burst.
 The CPU burst distribution is of main concern.
PROCESS SCHEDULING
 The process manager in an operating system is responsible for process scheduling.
 It is an operating system activity in which a running process is removed from the CPU and replaced by another, higher-priority process following a certain set of rules.
 Process scheduling is an important part of a multiprogramming system, where multiple processes need execution.
 Scheduling can be categorised into two types: pre-emptive and non-pre-emptive.
PRE-EMPTIVE SCHEDULING
 Scheduling in which a process is switched from the running state to the ready state, and from the waiting state to the ready state.
 In this approach, the CPU is allocated to a process for a limited time and is taken back when that time is over.
 The current state of the program is saved, and execution is resumed when the process is next assigned to the CPU.
NON-PRE-EMPTIVE SCHEDULING
 Once a process is allocated the CPU, the CPU is released only when the processing is over.
 The only state change a process makes is from the running state to the waiting state.
 It is an approach in which a process is not interrupted while it is being executed by the CPU.
 Windows® used non-preemptive scheduling until Windows 3.x, after which it changed to preemptive scheduling from Windows 95 onwards.
 Non-preemptive scheduling is the only method that can be used on certain hardware platforms, because it does not require the special hardware (for example, a timer) that preemptive scheduling needs.
PRE-EMPTIVE vs NON-PRE-EMPTIVE
Pre-emptive: CPU allocation is for a limited time.
Non-pre-emptive: The CPU is allocated until the process is complete.

Pre-emptive: Execution of the process may be interrupted in the middle.
Non-pre-emptive: Execution of the process remains uninterrupted until it is completed.

Pre-emptive: Bears the overhead of switching between tasks.
Non-pre-emptive: No such overhead of switching between tasks.

Pre-emptive: If the CPU keeps receiving high-priority tasks, a process may remain in the waiting state indefinitely.
Non-pre-emptive: If the CPU is processing a program with the largest burst time, even a program with the smallest burst time may starve.

Pre-emptive: Allows flexibility, letting high-priority tasks in the waiting state be executed first.
Non-pre-emptive: Also known as rigid scheduling, as it offers no flexibility to processes regardless of their urgency.

Pre-emptive: Must maintain the integrity of shared data and ensure no data loss occurs when processes are swapped from the waiting state to the ready state.
Non-pre-emptive: Does not need to maintain data integrity, as no processes are swapped from the waiting state to the ready state.
WHAT DO WE WANT TO ACHIEVE FROM SCHEDULING (OBJECTIVES):
1. Max CPU Utilization – keep the CPU as busy as possible.
2. Fair Allocation – fair allocation of the CPU.
3. Max Throughput – number of processes that complete their execution per time unit.
4. Min Turnaround Time – time taken by a process to finish execution.
5. Min Waiting Time – time a process waits in the ready queue.
6. Min Response Time – time until a process produces its first response.
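As a small illustration of how these timing metrics relate to one another, here is a hedged sketch in Python. The process timings used below are made up for the example, not taken from the slides; `metrics` is an illustrative helper name.

```python
# Illustrative only: hypothetical process timings to show how the
# scheduling metrics relate to one another.

def metrics(arrival, first_run, burst, completion):
    turnaround = completion - arrival   # submission -> completion
    waiting = turnaround - burst        # time spent in the ready queue
    response = first_run - arrival      # submission -> first CPU time
    return turnaround, waiting, response

# A process that arrives at t=0, first runs at t=2, needs 5 units
# of CPU, and finishes at t=9:
t, w, r = metrics(arrival=0, first_run=2, burst=5, completion=9)
print(t, w, r)  # 9 4 2
```

Note that waiting time is simply turnaround time minus the CPU burst, and response time depends only on when the process first gets the CPU, not on when it finishes.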
Scheduling Criteria
 Max CPU utilization
 Keep the CPU as busy as possible.
 To make the best use of the CPU and not waste any CPU cycle, the CPU should be working most of the time (ideally 100% of the time). In a real system, CPU usage should range from 40% (lightly loaded) to 90% (heavily loaded).
Scheduling Criteria
 Max Throughput
 The number of processes that complete their execution per time unit – in other words, the total amount of work done in a unit of time. This may range from 10/second to 1/hour, depending on the specific processes.
Scheduling Criteria
 Min Turnaround time
 The amount of time taken to execute a particular process, i.e. the interval from the time of submission of the process to the time of its completion (wall-clock time).
Scheduling Criteria
 Min Waiting time
 The amount of time a process has been waiting in the ready queue to acquire control of the CPU – the sum of the periods spent waiting in the ready queue.
Scheduling Criteria
 Min Response time
 The amount of time from when a request was submitted until the first response is produced, not the final output (for a time-sharing environment).
 It is the time until the first response, not the completion of process execution (the final response).
Queuing Diagram

Processes are moved by the dispatcher of the OS to the CPU, then back to the queue, until the task is completed.
CPU SCHEDULING

SCHEDULING ALGORITHMS
To decide which process to execute first and which to execute last, so as to achieve maximum CPU utilisation, computer scientists have defined several algorithms:

1. First Come First Serve (FCFS) Scheduling
2. Shortest-Job-First (SJF) Scheduling
3. Priority Scheduling
4. Round Robin (RR) Scheduling
5. Multilevel Queue Scheduling
6. Multilevel Feedback Queue Scheduling
FIRST COME FIRST SERVE (FCFS)
Process Burst Time
P1 24
P2 3
P3 3
 Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:

P1 P2 P3
0 24 27 30

 Waiting time for P1 = 0; P2 = 24; P3 = 27
 Average waiting time: (0 + 24 + 27)/3 = 17
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order:
P2 , P3 , P1
 The Gantt chart for the schedule is:

P2 P3 P1
0 3 6 30

 Waiting time for P1 = 6; P2 = 0; P3 = 3
 Average waiting time: (6 + 0 + 3)/3 = 3
 Much better than the previous case
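The two orderings above can be checked with a short sketch. Python is used here purely for illustration; `fcfs_waiting_times` is a hypothetical helper, not part of any OS API.

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS: each process waits
    for the sum of the bursts of everything queued ahead of it."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

# Arrival order P1, P2, P3 (bursts 24, 3, 3), as on the slide:
w1 = fcfs_waiting_times([24, 3, 3])
print(w1, sum(w1) / 3)   # [0, 24, 27] 17.0

# Arrival order P2, P3, P1:
w2 = fcfs_waiting_times([3, 3, 24])
print(w2, sum(w2) / 3)   # [0, 3, 6] 3.0
```

This makes the "convoy effect" visible: putting the long burst first triples the average waiting time for the same three processes.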
FIRST COME FIRST SERVE (FCFS)

 In the First Come First Serve scheduling algorithm, as the name suggests, the process that arrives first gets executed first – that is, the process that requests the CPU first gets the CPU allocated first.
 First Come First Serve behaves just like a FIFO (First In First Out) queue data structure, where the data element added to the queue first is the one that leaves the queue first.
 It is easy to understand and implement programmatically using a queue data structure, where a new process enters through the tail of the queue and the scheduler selects a process from the head of the queue.
 A perfect real-life example of FCFS scheduling is buying tickets at a ticket counter.
FIRST COME FIRST SERVE (FCFS)

Advantages:
• The FCFS algorithm doesn't involve any complex logic; it simply puts the process requests in a queue and executes them one by one.
• Hence, FCFS is simple and easy to implement.
• Eventually, every process gets a chance to run, so starvation doesn't occur.
Disadvantages:
• There is no option for pre-emption of a process: once a process is started, the CPU executes it until it ends.
• Because there is no pre-emption, if a process executes for a long time, the processes at the back of the queue have to wait a long time before they get a chance to be executed.
Shortest-Job-First (SJF) Scheduling

 Associate with each process the length of its next CPU burst
 Use these lengths to schedule the process with the
shortest time
 SJF is optimal – gives minimum average waiting time for a
given set of processes
 The difficulty is knowing the length of the next CPU
request
Example of SJF

 Process  Arrival Time  Burst Time
 P1       0.0           6
 P2       2.0           8
 P3       4.0           7
 P4       5.0           3

 SJF scheduling chart:

 P4 | P1 | P3 | P2
 0    3    9    16   24

 Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
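The chart above can be reproduced with a small sketch. Note a hedged assumption: like the slide's Gantt chart, this sketch ignores the listed arrival times and treats all four jobs as available at time 0; `sjf_schedule` is an illustrative name, not a standard API.

```python
def sjf_schedule(procs):
    """Non-preemptive SJF with all jobs available at time 0:
    run jobs in ascending burst order, accumulating waiting times.
    procs: list of (name, burst) pairs."""
    order = sorted(procs, key=lambda p: p[1])
    waits, elapsed = {}, 0
    for name, burst in order:
        waits[name] = elapsed   # time spent waiting before first run
        elapsed += burst
    return order, waits

order, waits = sjf_schedule([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)])
print([name for name, _ in order])     # ['P4', 'P1', 'P3', 'P2']
print(waits, sum(waits.values()) / 4)  # P4=0, P1=3, P3=9, P2=16 -> 7.0
```

Sorting by burst length is exactly what makes SJF optimal for average waiting time when all jobs are present: every swap toward shorter-first reduces the total wait.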


SHORTEST JOB FIRST (SJF)

 Shortest Job First scheduling runs the process with the shortest burst time or duration first.
 This is the best approach to minimize waiting time.
 To implement it successfully, the burst time of each process must be known to the processor in advance, which is not always practically feasible.
 This scheduling algorithm is optimal if all the jobs/processes are available at the same time.
SHORTEST JOB FIRST (SJF)

Advantages:
• By definition, short processes are executed first, followed by longer processes.
• Throughput is increased because more processes can be executed in less time.

Disadvantages:
• The time taken by a process must be known to the CPU beforehand, which is often not possible.
• Longer processes will have more waiting time; eventually they may suffer starvation.
ROUND ROBIN SCHEDULING (RR)

 Each process gets a small unit of CPU time (time quantum q), usually 10-100 milliseconds.
 After this time has elapsed, the process is preempted and added to the end of the ready queue.
 If there are n processes in the ready queue and the time quantum is q:
 each process gets 1/n of the CPU time,
 no process waits more than (n-1)q time units.
 A timer interrupts every quantum to schedule the next process.

(The period of time for which a process is allowed to run in a preemptive multitasking system is generally called the time slice or quantum.)
Example of RR scheduling with Time Quantum = 4
 Process  Burst Time
 P1       24
 P2       3
 P3       3
 The Gantt chart is:

 P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1
 0    4    7    10   14   18   22   26   30

 Better response than SJF
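The Gantt chart above can be reproduced by simulating the ready queue. This is a sketch under the assumption that all three processes arrive at time 0; `round_robin` is an illustrative name, not an OS API.

```python
from collections import deque

def round_robin(procs, quantum):
    """Simulate RR for processes all arriving at t=0.
    procs: list of (name, burst). Returns (name, start, end) slices."""
    queue = deque(procs)
    gantt, t = [], 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)   # run one quantum, or less if done
        gantt.append((name, t, t + run))
        t += run
        if remaining > run:             # not finished: back of the queue
            queue.append((name, remaining - run))
    return gantt

gantt = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
print([slice[0] for slice in gantt])
# ['P1', 'P2', 'P3', 'P1', 'P1', 'P1', 'P1', 'P1'] -- matches the chart
```

The simulation also makes the response-time benefit visible: P2 and P3 get the CPU by t=4 and t=7, long before P1's 24-unit burst completes.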


ROUND ROBIN

 A fixed time, called the quantum, is allotted to each process for execution.
 Once a process has executed for the given time period, it is preempted and another process executes for its time period.
 Context switching is used to save the states of preempted processes.
ROUND ROBIN (RR)

Advantages:
• Each process is served by the CPU for a fixed time quantum, so all
processes are given the same priority.
• Starvation doesn't occur because for each round robin cycle, every
process is given a fixed time to execute. No process is left behind.
Disadvantages:
• The throughput in RR depends largely on the choice of the length of the time quantum. If the quantum is longer than needed, RR tends to exhibit the same behavior as FCFS.
• If the quantum is shorter than needed, the number of context switches between processes increases, which decreases CPU efficiency.
Multiple-Processor Scheduling

 CPU scheduling is more complex when multiple CPUs are available.
 Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing.
 Symmetric multiprocessing (SMP) – each processor is self-scheduling; all processes are in a common ready queue, or each processor has its own private queue of ready processes.
 SMP is currently the most common approach; virtually all modern OSes support it, including Windows XP, Windows 2000, Solaris, Linux, and Mac OS X.
 Load balancing attempts to keep the workload evenly distributed across processors.
Thank You
