Scheduling

This document discusses CPU scheduling and the scheduling algorithms used by operating systems. It begins by explaining the functionalities of an OS, including resource management, process management, and memory management. It then discusses why CPU scheduling is needed and introduces common scheduling algorithms such as FIFO, shortest job first, round robin, and priority scheduling. The goal is to understand how programs execute together on a machine.

Uploaded by bkt18

Scheduling

Content of This Lecture
 Functionalities of OS
 Why CPU scheduling?
 Basic scheduling algorithms
 FIFO (FCFS)
 Shortest job first
 Round Robin
 Priority Scheduling
 Goals:
 Understand how your program is executed on
the machine together with other programs
Functionalities of OS
 Resource Management: When multiple users access the system concurrently,
the OS acts as a resource manager. Its responsibility is to share the hardware
among the users, which reduces the load on the system.
 Process Management: Includes tasks such as scheduling and
termination of processes. It is done with the help of CPU scheduling
algorithms.
 Memory Management: Refers to the management of primary memory. The
operating system keeps track of how much memory is in use and by whom.
It decides which process needs memory space and how much. The OS also
allocates and deallocates memory space.
 Security/Privacy Management: The operating system also provides privacy,
for example through passwords, so that unauthorized applications cannot
access programs or data.

Kernel
The kernel is the core component of the operating system. The other
components depend on the kernel for the essential services that the
operating system provides. The kernel is the primary interface between
the operating system and the hardware.

Functions of Kernel
The kernel performs the following functions:

•It handles system calls.

•It manages I/O.

•It manages applications, memory, and other resources.

Processor Management
 In a multi-programming environment, the OS
decides the order in which processes have access
to the processor, and how much processing time
each process gets. This function of the OS is called
process scheduling.

Processor Management
 An operating system manages the processor’s work by
allocating various jobs to it and ensuring that each process
receives enough time from the processor to function
properly.

 Keeps track of the status of processes. The program which
performs this task is known as a traffic controller.
 Allocates the CPU (that is, the processor) to a process, and
de-allocates the processor when a process no longer requires it.

Five - State Process Model
 This model has five states: new, ready, running,
blocked, and exit. When a new job/process arrives,
it is first admitted to the queue and then moves to
the ready state. A process in the ready state moves
to the running state when the scheduler dispatches it.

Five - State Process Model
1. Running: A process that is currently being executed. Assuming a
single-processor system, at most one process can be in the
running state at any time.
2. Ready: A process that is prepared to execute when given the opportunity by
the OS.
3. Blocked/Waiting: A process that cannot continue executing until some
event occurs, for example the completion of an I/O operation.
4. New: A new process that has been created but has not yet been admitted by
the OS for its execution. A new process is not loaded into the main memory,
but its process control block (PCB) has been created.
5. Exit/Terminate: A process or job that has been released by the OS, either
because it completed or was aborted due to some error.
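The five states and their legal transitions can be sketched as follows (a minimal Python illustration; the state names follow the slide, while the function and table names are purely illustrative):

```python
from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    BLOCKED = auto()
    EXIT = auto()

# Legal transitions in the five-state model.
TRANSITIONS = {
    (State.NEW, State.READY),        # admit: OS accepts the new process
    (State.READY, State.RUNNING),    # dispatch: scheduler picks it
    (State.RUNNING, State.READY),    # timeout / preemption
    (State.RUNNING, State.BLOCKED),  # wait for an event (e.g. I/O)
    (State.BLOCKED, State.READY),    # the awaited event occurs
    (State.RUNNING, State.EXIT),     # completion or abort
}

def can_move(src, dst):
    """Return True if the model allows the transition src -> dst."""
    return (src, dst) in TRANSITIONS
```

Note that there is no direct blocked-to-running transition: a blocked process must re-enter the ready queue and be dispatched again.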

Process Control Block (PCB)

Five - State Process Model

Process Scheduling
 Deciding which process/thread should
occupy a resource (CPU, disk, etc.)

(Cartoon: Processes 1, 2, and 3 each want the CPU, which asks, "Whose turn is it?")
Context Switch
 Switch CPU from one process to another
 Performed by scheduler
 It includes:
 Save the PCB state of the old process
 Load the PCB state of the new process
 Flush the memory cache
 Change the memory mapping (TLB)
 Context switch is expensive (1-1000 microseconds)
 No useful work is done (pure overhead)
 Can become a bottleneck
 Need hardware support
When should the scheduler be
called?
 A new process is admitted
 The running process exits
 The running process is blocked
 I/O interrupt (some processes will be ready)
 Clock interrupt (every 10 milliseconds)

Preemptive vs. Non-preemptive Scheduling
 Non-preemptive scheduling:
 The running process keeps the CPU until it
voluntarily gives up the CPU
 process exits
 switches to blocked state
 Preemptive scheduling:
 The running process can be interrupted and must
release the CPU (can be forced to give up the CPU)

(Diagram: transitions among the Ready, Running, Blocked, and Terminated states.)
Scheduling Objectives
 Fairness (equitable shares of CPU)
 Priority (highest priority first)
 Efficiency (make best use of equipment)
 Encouraging good behavior (can’t take
advantage of the system)
 Support for heavy loads (degrade gracefully)
 Adapting to different environments (interactive,
real-time, multi-media)
Performance Criteria

 Efficiency: keep resources as busy as possible
 Throughput: number of processes that complete per unit time
 Waiting Time
 Total amount of time the process spends waiting in the ready queue
 Initial Waiting Time
 Amount of time the process spends waiting in the ready queue before it
starts executing
 Response Time
 Amount of time from when a job is admitted until it completes
 Proportionality:
 Assign the CPU proportionally to each application's weight

First Come First Serve (FCFS)
 Process that requests the CPU FIRST is allocated the CPU
FIRST.
 Also called FIFO
 FCFS is considered the simplest of all operating system
scheduling algorithms.
 Used in Batch Systems
 Implementation
 FIFO queue
 A new process enters the tail of the queue
 The scheduler selects the process at the head of the queue.

First Come First Serve (FCFS)
• FCFS supports non-preemptive and preemptive
CPU scheduling algorithms.
• Tasks are always executed on a first-come, first-
served basis.
• FCFS is easy to implement and use.
• The algorithm is not very efficient, and the wait
time is quite high.

First Come First Serve (FCFS)
 Advantages of FCFS:
• Easy to implement
• First come, first serve method

 Disadvantages of FCFS:
• FCFS suffers from Convoy effect.
• The average waiting time is much higher than the other algorithms.
• FCFS is very simple and easy to implement, but this simplicity
comes at the cost of efficiency

1. Completion Time: Time at which the process completes its execution.
2. Turn Around Time: Time Difference between completion time and
arrival time.

Turn Around Time = Completion Time – Arrival Time

3. Waiting Time (WT): Time difference between turnaround time and
burst time.

Waiting Time = Turn Around Time – Burst Time
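The two formulas above can be expressed directly (a minimal sketch; the example values below are taken from the FCFS example later in the deck):

```python
def turnaround_time(completion, arrival):
    """Turn Around Time = Completion Time - Arrival Time."""
    return completion - arrival

def waiting_time(turnaround, burst):
    """Waiting Time = Turn Around Time - Burst Time."""
    return turnaround - burst

# Example: a process arrives at time 0, completes at 27, with burst 3.
tat = turnaround_time(27, 0)   # 27
wt = waiting_time(tat, 3)      # 24
```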

Non-preemptive FCFS

Preemptive FCFS

FCFS Example

Process Duration Order Arrival Time


P1 24 1 0
P2 3 2 0
P3 4 3 0

P1 (24) P2 (3) P3 (4)

0 24 27 31

P1 waiting time: 0
P2 waiting time: 24
P3 waiting time: 27
The average waiting time: (0+24+27)/3 = 17
Does the execution order affect the average waiting time?
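The example can be reproduced with a short simulation (a sketch assuming all processes arrive at time 0 and run to completion; the function name is illustrative):

```python
def fcfs_waiting_times(bursts):
    """Return per-process waiting times for FCFS, in submission order."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)  # each job waits for all earlier jobs to finish
        clock += burst
    return waits

waits = fcfs_waiting_times([24, 3, 4])   # [0, 24, 27]
awt = sum(waits) / len(waits)            # 17.0
```

As for the question above: yes, order matters. Running the same jobs as [3, 4, 24] yields waits [0, 3, 7], an AWT of about 3.33, which previews the shortest-job-first idea.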
Problems with FCFS
 Does not minimize AWT
 Cannot utilize resources in parallel:
 Assume one CPU-bound process and many I/O-
bound processes
 Result: convoy effect, with low CPU and I/O device
utilization

Why Convoy Effects with FCFS?

 Consider n-1 jobs in the system that are I/O bound
and 1 job that is CPU bound:
1. I/O bound jobs pass quickly through the ready queue
and suspend themselves waiting for I/O.
2. CPU bound job arrives at head of queue and executes
until completion.
3. I/O bound jobs re-join ready queue and wait for CPU
bound job to complete.
4. I/O devices idle until CPU bound job completes.
5. When the CPU-bound job completes, the ready I/O-
bound processes quickly move through the running
state and become blocked on I/O events again. The
CPU becomes idle.
Interactive Scheduling
Algorithms
 Usually preemptive
 Time is sliced into quanta (time intervals)
 Scheduling decision is also made at the beginning of each quantum
 Performance Criteria
 Average response time
 Average initial waiting time
 Average waiting time
 Fairness (or proportional resource allocation)
 Representative algorithms:
 Round-robin
 Priority-based
 …

Round-robin
 One of the oldest, simplest, and most commonly
used scheduling algorithms
 Select process/thread from ready queue in a
round-robin fashion (take turns)

Problems:
• Does not consider priority

• Context switch overhead

(Diagram: a running process is preempted after its quantum expires.)
Round-robin: Example
Process Duration Order Arrival Time

P1 3 1 0
P2 4 2 0
P3 3 3 0
Suppose time quantum is: 1 unit, P1, P2 & P3 never block

P1 P2 P3 P1 P2 P3 P1 P2 P3 P2

0 10
P1 waiting time: 4
P2 waiting time: 6
P3 waiting time: 6
The average waiting time (AWT): (4+6+6)/3 = 5.33

Note that Initial Waiting Time of P1 is zero, P2’s IWT is 1, and P3’s IWT is 2
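The round-robin example can be sketched as a small queue-based simulation (assuming, as above, that all processes arrive at time 0 and never block; names are illustrative):

```python
from collections import deque

def round_robin(bursts, quantum=1):
    """Return per-process waiting times (turnaround minus burst)."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))      # ready queue of process indices
    clock = 0
    completion = [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])   # run for one quantum (or less)
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)                # preempted: back to tail of queue
        else:
            completion[i] = clock          # finished
    return [completion[i] - bursts[i] for i in range(len(bursts))]

waits = round_robin([3, 4, 3], quantum=1)   # [4, 6, 6], AWT = 5.33
```

Note the preemption step: a process that still has work left goes to the tail of the queue, which is exactly what produces the P1 P2 P3 P1 P2 P3 ... interleaving above.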
Choosing the Time Quantum
 Time slice too large
 FIFO behavior
 Poor initial waiting time
 Time slice too small
 Too many context switches (overheads)
 Inefficient CPU utilization
 Heuristic:
 70-80% of jobs block within time-slice
 Typical time-slice
 10-100 ms (depends on job priority)
Shortest Job First (SJF)
 Schedule the job with the shortest
computation time first
 Scheduling in Batch Systems
 Two types:
 Non-preemptive
 Preemptive
 Optimal if all jobs are available
simultaneously: gives the best possible AWT
(average waiting time)
Non-preemptive SJF: Example
Process Duration Order Arrival Time
P1 6 1 0
P2 8 2 0
P3 7 3 0
P4 3 4 0
P4 (3) P1 (6) P3 (7) P2 (8)

0 3 9 16 24

P4 waiting time: 0
P1 waiting time: 3
P3 waiting time: 9
P2 waiting time: 16
The total time is: 24
The average waiting time (AWT): (0+3+9+16)/4 = 7
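Since all four jobs arrive at time 0, non-preemptive SJF amounts to running them in order of increasing burst time. A minimal sketch (the function name is illustrative; waits are returned in original submission order, P1..P4):

```python
def sjf_waiting_times(bursts):
    """Non-preemptive SJF with all arrivals at time 0.

    Returns waiting times indexed by original submission order.
    """
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits = [0] * len(bursts)
    clock = 0
    for i in order:            # run shortest remaining job next
        waits[i] = clock
        clock += bursts[i]
    return waits

waits = sjf_waiting_times([6, 8, 7, 3])   # [3, 16, 9, 0] for P1..P4
awt = sum(waits) / len(waits)             # 7.0
```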
Comparing to FCFS
Process Duration Order Arrival Time
P1 6 1 0
P2 8 2 0
P3 7 3 0
P4 3 4 0

Comparing to FCFS
Process Duration Order Arrival Time
P1 6 1 0
P2 8 2 0
P3 7 3 0
P4 3 4 0

P1 (6) P2 (8) P3 (7) P4 (3)

0 6 14 21 24

The total time is the same.

P1 waiting time: 0
P2 waiting time: 6
P3 waiting time: 14
P4 waiting time: 21
The average waiting time (AWT): (0+6+14+21)/4 = 10.25 (compared to 7)
Preemptive SJF
 Shortest job runs first.
 A job that arrives and is shorter than the
running job will preempt it.
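Preemptive SJF (also called shortest-remaining-time-first) can be sketched as follows, with hypothetical arrival times and bursts: at each time unit, the scheduler runs whichever arrived process has the least remaining work, so a newly arrived shorter job preempts the current one.

```python
def srtf_completion_times(jobs):
    """jobs: list of (arrival, burst). Returns per-job completion times."""
    remaining = [burst for _, burst in jobs]
    completion = [None] * len(jobs)
    clock = 0
    while any(r > 0 for r in remaining):
        ready = [i for i, (arrival, _) in enumerate(jobs)
                 if arrival <= clock and remaining[i] > 0]
        if not ready:
            clock += 1                 # CPU idle until the next arrival
            continue
        i = min(ready, key=lambda j: remaining[j])  # shortest remaining work
        remaining[i] -= 1              # run it for one time unit
        clock += 1
        if remaining[i] == 0:
            completion[i] = clock
    return completion

# P1 arrives at 0 with burst 5; P2 arrives at 1 with burst 2 and preempts P1.
srtf_completion_times([(0, 5), (1, 2)])   # [7, 3]
```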

A Problem with Preemptive SJF
 Starvation
 A job may keep getting preempted by shorter ones
 Example
 Process A with computation time of 1 hour arrives at time 0
 But every 1 minute, a short process with computation time of 2
minutes arrives
 Result of SJF: A never gets to run
 What’s the difference between starvation and a
deadlock?

Deadlock

 Deadlock occurs when each process holds a resource and waits for another
resource held by some other process. For example, suppose Process 1 holds
Resource 1 and is waiting for Resource 2, which is held by Process 2, while
Process 2 is waiting for Resource 1. Hence both Process 1 and Process 2 are
deadlocked.

Starvation

Starvation is the problem that occurs when high-priority processes keep executing and
low-priority processes remain blocked for an indefinite time.

In a heavily loaded computer system, a steady stream of higher-priority processes can
prevent a low-priority process from ever getting the CPU.

In starvation, resources are continuously utilized by high-priority processes. The
problem of starvation can be resolved using aging: the priority of long-waiting
processes is gradually increased.

Priority Scheduling
 The priority CPU scheduling algorithm is a method that works based
on the priority of a process.

 Each process is assigned a priority, and the most important (highest-
priority) process must be executed first.

 In case of a conflict, that is, when more than one process has
equal priority, the algorithm falls back on the FCFS (First
Come First Serve) algorithm.

Priority Scheduling
• Schedules tasks based on priority.
• When a higher-priority task arrives while a lower-
priority task is executing, the higher-priority task takes
the place of the lower-priority one, and
• the latter is suspended until the higher-priority task
completes.
• The lower the number assigned, the higher the priority
level of a process.

Priority Scheduling
 Advantages of Priority Scheduling:
• The average waiting time is less than FCFS
• Less complex

 Disadvantages of Priority Scheduling:
• The most common demerit of the preemptive priority CPU
scheduling algorithm is the starvation problem: a low-priority
process may have to wait an indefinitely long time to be
scheduled onto the CPU.
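A non-preemptive priority scheduler with aging can be sketched as follows (all jobs assumed to arrive at time 0; lower number means higher priority, as on the slide; the aging step is a hypothetical parameter). Aging bumps the priority of every waiting job each time a job finishes, which bounds how long a low-priority job can starve:

```python
def priority_schedule(jobs, aging_step=1):
    """jobs: list of (name, burst, priority). Returns the run order."""
    pending = [[name, burst, prio] for name, burst, prio in jobs]
    order = []
    while pending:
        # Lowest priority number runs first; the stable sort preserves
        # submission order, so FCFS breaks ties as the slides describe.
        pending.sort(key=lambda job: job[2])
        job = pending.pop(0)
        order.append(job[0])
        for other in pending:                    # aging: waiters gain priority
            other[2] = max(0, other[2] - aging_step)
    return order

priority_schedule([("A", 4, 3), ("B", 2, 1), ("C", 3, 2)])
# -> ["B", "C", "A"]
```

With a steady stream of high-priority arrivals a real scheduler would re-sort on every arrival, but the aging idea is the same: waiting long enough eventually makes any job the highest-priority one.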


You might also like