
Module-2

Process Scheduling

Contents

 Basic concepts

 Scheduling Criteria

 Scheduling Algorithms

 Thread scheduling

 Multiple-processor scheduling

Objectives

 To introduce CPU scheduling, which is the basis for multiprogrammed operating systems.
 To describe various CPU-scheduling algorithms.
 To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a particular system.

Basic Concepts

 In a single-processor system,

‣ Only one process may run at a time.


‣ Other processes must wait until the CPU is rescheduled.
 Objective of multiprogramming:

‣ To have some process running at all times, in order to maximize CPU utilization.
‣ Several processes are kept in memory at one time

‣ Every time a running process has to wait, another process can take over use of the CPU
 Scheduling of the CPU is fundamental to operating-system design

CPU-I/O Burst Cycle
 Process execution consists of a cycle of CPU execution (a CPU burst) and I/O wait (an I/O burst)
 Processes alternate between these two states
 Process execution begins with a CPU burst, followed by an I/O burst, then another CPU burst, and so on
 Eventually, the final CPU burst ends with a system
request to terminate execution
 An I/O-bound program typically has many short CPU
bursts.
 A CPU-bound program might have a few long CPU
bursts.

CPU Scheduler
 The CPU scheduler selects from among the processes in memory that are ready to
execute and allocates the CPU to one of them
– Ready Queue may be ordered in various ways
 CPU-scheduling decisions may take place under the following four circumstances:

1. A process switches from running to waiting state

2. A process switches from running to ready state

3. A process switches from waiting to ready state

4. A process switches from running to terminated state


 Under circumstances 1 and 4 there is no choice in scheduling: a new process must be selected, so the scheduling is non-preemptive
 Under circumstances 2 and 3 there is a choice, so the scheduling is pre-emptive

 Under nonpreemptive (cooperative) scheduling, once the CPU has been allocated to a
process, the process keeps the CPU until it releases the CPU either by terminating or by
switching to the waiting state.
 This scheduling method was used by Microsoft Windows 3.x
 Windows 95 introduced preemptive scheduling, and all subsequent versions of Windows
operating systems have used preemptive scheduling.

Dispatcher

 The dispatcher module gives control of the CPU to the process selected by the short-term
scheduler; this involves:
‣ switching context
‣ switching to user mode
‣ jumping to the proper location in the user program to restart that program
 The dispatcher needs to be as fast as possible, since it is invoked during every process context switch
 The time it takes for the dispatcher to stop one process and start another process is called
dispatch latency

Scheduling Criteria
 Different CPU scheduling algorithms have different properties. In choosing
which algorithm to use, the properties of the various algorithms should be
considered
 Criteria for comparing CPU scheduling algorithms may include the following:
1. CPU utilization – percent of time that the CPU is busy executing a process
2. Throughput – number of processes that are completed per time unit
3. Response time – amount of time it takes from when a request was
submitted until the first response occurs (but not the time it takes to output
the entire response)
4. Waiting time – the amount of time before a process starts after first
entering the ready queue (or the sum of the amount of time a process has
spent waiting in the ready queue)
5. Turnaround time – amount of time to execute a particular process from the
time of submission through the time of completion
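These criteria are related by two standard identities (implied by the definitions above, not stated on the slide): turnaround time = completion time − arrival time, and waiting time = turnaround time − total CPU burst time. For example, a process that arrives at time 0, needs a 3-unit burst, and completes at time 27 has a turnaround time of 27 and a waiting time of 24.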
Optimization Criteria

 It is desirable to

‣ Maximize CPU utilization

‣ Maximize throughput

‣ Minimize turnaround time

‣ Minimize start time

‣ Minimize waiting time

‣ Minimize response time

 In most cases, we strive to optimize the average measure of each metric

 In other cases, it is more important to optimize the minimum or maximum values rather
than the average
Gantt Chart

 The chart is named after Henry Gantt (1861–1919).


 A Gantt chart is a type of bar chart that illustrates a project schedule
 It is a useful graphical tool that shows activities or tasks performed against time; the activities are broken down and displayed on a chart, which makes them easy to understand and interpret.

Scheduling Algorithms

1. First-Come, First-Served (FCFS) Scheduling


2. Shortest-Job-First (SJF) Scheduling
3. Priority Scheduling
4. Round Robin (RR) Scheduling
5. Multilevel Queue Scheduling
6. Multilevel Feedback Queue Scheduling

First-Come, First-Served (FCFS) Scheduling

 First-Come, First-Served is the simplest CPU-scheduling algorithm.


 Whichever process requests the CPU first gets it first.
 It is implemented using a single FIFO queue.
 Waiting time can be long, and it depends heavily on the order in which processes request CPU time
 Example:
Process Burst Time
P1 24
P2 3
P3 3

Case #1: Suppose that the processes arrive in the order: P1, P2, P3

 The Gantt Chart for the schedule is:

P1 P2 P3

0 24 27 30

 Waiting time for P1 = 0; P2 = 24; P3 = 27


 Average waiting time = (0 + 24 + 27)/3 = 17
 Average turn-around time= (24 + 27 + 30)/3 = 27

Case #2: Suppose that the processes arrive in the order: P2, P3, P1

 The Gantt chart for the schedule is:

P2 P3 P1

0 3 6 30

 Waiting time for P1 = 6; P2 = 0; P3 = 3

 Average waiting time: (6 + 0 + 3)/3 = 3 (Much better than Case #1)

 Average turn-around time: (3 + 6 + 30)/3 = 13
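As a concrete check, here is a minimal C sketch (illustrative, not from the slides) that computes the FCFS averages, assuming all processes arrive at time 0 in the listed order:

#include <stdio.h>

#define N 3

int main(void)
{
    int burst[N] = {24, 3, 3};   /* Case #1 order: P1, P2, P3 */
    int start = 0, total_wait = 0, total_turnaround = 0, i;

    for (i = 0; i < N; i++) {
        total_wait += start;         /* waiting time = start time here */
        start += burst[i];           /* completion time of this process */
        total_turnaround += start;   /* turnaround = completion (arrival 0) */
    }
    printf("average waiting = %.2f\n", (double)total_wait / N);
    printf("average turnaround = %.2f\n", (double)total_turnaround / N);
    return 0;
}

With the Case #1 order it prints 17.00 and 27.00; reordering the array to {3, 3, 24} reproduces the Case #2 averages of 3.00 and 13.00.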

 Case #1 is an example of the convoy effect: all the other processes wait for one long-running process to finish using the CPU

– This problem results in lower CPU and device utilization;

 Case #2 shows that higher utilization might be possible if the short processes were
allowed to run first

 The FCFS scheduling algorithm is non-preemptive

– Once the CPU has been allocated to a process, that process keeps the CPU until it
releases it either by terminating or by requesting I/O

– It is particularly troublesome for time-sharing systems, where each process needs a regular share of the CPU

Shortest Job First (SJF) Scheduling
 The SJF algorithm associates with each process the length of its next CPU burst

 When the CPU becomes available, it is assigned to the process that has the smallest next
CPU burst (in the case of matching bursts, FCFS is used)

 Two schemes:

 Nonpreemptive – once the CPU is given to the process, it cannot be preempted until
it completes its CPU burst

 Preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF)

Example #1: Non-Preemptive SJF (simultaneous arrival)
Process Burst Time
P1 6
P2 4
P3 1
P4 5
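Since all four processes arrive at the same time, non-preemptive SJF runs them shortest-first. The Gantt chart (reconstructed from the burst times above) is:

P3 P2 P4 P1

0 1 5 10 16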

• Average waiting time = (0 + 1 + 5 + 10)/4 = 4


• Average turn-around time = (1 + 5 + 10 + 16)/4 = 8

Example of Shortest-remaining-time-first
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
• Preemptive SJF Gantt Chart

P1 P2 P4 P1 P3

0 1 5 10 17 26

• Average waiting time = [(10-1) + (1-1) + (17-2) + (5-3)]/4 = 26/4 = 6.5 msec

Example of Shortest-remaining-time-first
Process Arrival Time Burst Time
P1 0 4
P2 1 2
P3 2 1
P4 4 3
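This exercise is left unsolved on the slide; one worked schedule, assuming ties are broken in favor of the earlier arrival (as in FCFS), is: P1 runs 0-1, is preempted by P2 (1-3), then P3 runs (3-4), P1 resumes (4-7), and P4 finishes (7-10).

P1 P2 P3 P1 P4

0 1 3 4 7 10

• Average waiting time = [(4-1) + (1-1) + (3-2) + (7-4)]/4 = 7/4 = 1.75 msec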

Priority Scheduling

 A priority number (integer) is associated with each process

 The CPU is allocated to the process with the highest priority (smallest integer =
highest priority)

 Priority scheduling can be either preemptive or non-preemptive

 A preemptive approach will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process

 A non-preemptive approach will simply put the new process (with the highest
priority) at the head of the ready queue

 SJF is a priority scheduling algorithm where priority is the predicted next CPU burst
time

 The main problem with priority scheduling is starvation, that is, low priority
processes may never execute

 A solution is aging; as time progresses, the priority of a process in the ready queue is
increased
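As an illustration, here is a hypothetical aging sketch in C (not from the slides): every AGING_INTERVAL ticks, the priority number of each waiting process is decreased (smaller number = higher priority), so a low-priority process cannot starve indefinitely.

#define AGING_INTERVAL 100

struct pcb {
    int priority;   /* current priority number */
    int waited;     /* ticks spent in the ready queue */
};

void age_ready_queue(struct pcb *queue, int n)
{
    for (int i = 0; i < n; i++) {
        queue[i].waited++;
        if (queue[i].waited % AGING_INTERVAL == 0 && queue[i].priority > 0)
            queue[i].priority--;   /* a long wait raises priority */
    }
}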

Example of Priority Scheduling
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
• Priority scheduling Gantt Chart
P2 P5 P1 P3 P4

0 1 6 16 18 19

• Average waiting time = 8.2 msec

Round Robin (RR)

 Each process gets a small unit of CPU time (time quantum q), usually 10-100
milliseconds. After this time has elapsed, the process is preempted and added to the
end of the ready queue.
Process Burst Time
P1 24
P2 3
P3 3
The Gantt chart, with a time quantum of q = 4, is:

P1 P2 P3 P1 P1 P1 P1 P1

0 4 7 10 14 18 22 26 30
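Reading the chart with q = 4: P1 waits 6 in total (it runs 0-4 and resumes at 10), P2 waits 4, and P3 waits 7, so the average waiting time is 17/3 ≈ 5.67. A minimal C simulation of this round-robin schedule (an illustrative sketch, not from the slides):

#include <stdio.h>

#define N 3
#define Q 4   /* time quantum */

int main(void)
{
    int burst[N] = {24, 3, 3};   /* P1, P2, P3, all arriving at time 0 */
    int rem[N], finish[N];
    int queue[32], head = 0, tail = 0;
    int time = 0, done = 0, i;

    for (i = 0; i < N; i++) {
        rem[i] = burst[i];
        queue[tail++] = i;       /* initial FIFO order: P1, P2, P3 */
    }
    while (done < N) {
        int p = queue[head++];
        int slice = rem[p] < Q ? rem[p] : Q;
        time += slice;
        rem[p] -= slice;
        if (rem[p] > 0)
            queue[tail++] = p;   /* quantum expired: back of the queue */
        else {
            finish[p] = time;    /* burst complete */
            done++;
        }
    }
    for (i = 0; i < N; i++)
        printf("P%d waiting time = %d\n", i + 1, finish[i] - burst[i]);
    return 0;
}

Run as-is, it prints the three waiting times computed above.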

Multilevel Queue Scheduling
 Multi-level queue scheduling is used when processes can be classified into groups
 For example, foreground (interactive) processes and background (batch) processes
– The two types of processes have different response-time requirements and so may
have different scheduling needs
– Also, foreground processes may have priority (externally defined) over
background processes
 A multi-level queue scheduling algorithm partitions the ready queue into several
separate queues
 The processes are permanently assigned to one queue, generally based on some
property of the process such as memory size, process priority, or process type
 Each queue has its own scheduling algorithm
– The foreground queue might be scheduled using an RR algorithm
– The background queue might be scheduled using an FCFS algorithm
 In addition, there needs to be scheduling among the queues, which is commonly
implemented as fixed-priority pre-emptive scheduling
– The foreground queue may have absolute priority over the background queue

 Each queue has absolute priority over lower priority queues
 For example, no process in the batch queue can run unless the queues above it are
empty
 However, this can result in starvation for the processes in the lower priority queues

 Another possibility is to time slice among the queues
 Each queue gets a certain portion of the CPU time, which it can then schedule
among its various processes
– The foreground queue can be given 80% of the CPU time for RR scheduling
– The background queue can be given 20% of the CPU time for FCFS scheduling

Multilevel Feedback Queue Scheduling

 In multi-level feedback queue scheduling, a process can move between the various
queues; aging can be implemented this way

 A multilevel-feedback-queue scheduler is defined by the following parameters:

– Number of queues

– Scheduling algorithms for each queue

– Method used to determine when to promote a process

– Method used to determine when to demote a process

– Method used to determine which queue a process will enter when that process
needs service

 Scheduling
– A new job enters queue Q0 (RR) and is placed at the end. When it gains the
CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds,
the job is moved to the end of queue Q1.
– A Q1 (RR) job receives 16 milliseconds. If it still does not complete, it is
preempted and moved to queue Q2 (FCFS).
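A compact C sketch of this demotion policy (a hypothetical structure; the slides describe only the behavior): Q0 and Q1 are round-robin with 8 ms and 16 ms quanta, and Q2 is FCFS, modeled here as an unbounded quantum.

#include <limits.h>

#define LEVELS 3
static const int quantum[LEVELS] = {8, 16, INT_MAX};

struct job {
    int remaining;   /* CPU time still needed, in ms */
    int level;       /* current queue: 0, 1, or 2 */
};

/* Give the job one scheduling turn; returns the CPU time consumed. */
int run_once(struct job *j)
{
    int q = quantum[j->level];
    int slice = j->remaining < q ? j->remaining : q;

    j->remaining -= slice;
    if (j->remaining > 0 && j->level < LEVELS - 1)
        j->level++;   /* used its whole quantum without finishing: demote */
    return slice;
}

Promotion (aging) would move a long-waiting job back toward Q0; the slide leaves that method as a design parameter.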

Thread Scheduling

 Thread scheduling refers to the process of determining the execution order and time
allocation for threads in a multithreaded environment.
 Distinction between user-level and kernel-level threads
 In the many-to-one and many-to-many models, the thread library schedules user-level threads to run on an available LWP (lightweight process)
– Known as process-contention scope (PCS) since scheduling competition is
within the process
 Kernel thread scheduled onto available CPU is system-contention scope (SCS) –
competition among all threads in system

Contention Scope
 Two approaches:
1. Process-Contention scope
 On systems implementing the many-to-one and many-to-many models, the thread
library schedules user-level threads to run on an available LWP.
 Competition for the CPU takes place among threads belonging to the same
process.
2. System-Contention scope
 The process of deciding which kernel thread to schedule on the CPU.
 Competition for the CPU takes place among all threads in the system.
 Systems using the one-to-one model schedule threads using only SCS.

Pthread Scheduling

 The Pthread API allows specifying either PCS or SCS during thread creation.
 Pthreads identifies the following contention-scope values:
1. PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling.
2. PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling.
 The Pthread API provides the following two functions for getting and setting the contention-scope policy:
1. pthread_attr_setscope(pthread_attr_t *attr, int scope)
2. pthread_attr_getscope(pthread_attr_t *attr, int *scope)

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 5

/* Each thread will begin control in this function */
void *runner(void *param)
{
   printf("I am a thread\n");
   pthread_exit(0);
}

int main(int argc, char *argv[])
{
   int i;
   pthread_t tid[NUM_THREADS];
   pthread_attr_t attr;

   /* get the default attributes */
   pthread_attr_init(&attr);

   /* set the contention scope to PROCESS or SYSTEM */
   pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

   /* set the scheduling policy - FIFO, RR, or OTHER */
   pthread_attr_setschedpolicy(&attr, SCHED_OTHER);

   /* create the threads */
   for (i = 0; i < NUM_THREADS; i++)
      pthread_create(&tid[i], &attr, runner, NULL);

   /* now join on each thread */
   for (i = 0; i < NUM_THREADS; i++)
      pthread_join(tid[i], NULL);

   return 0;
}
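The program links with the pthread library (for example, gcc -pthread). A caveat based on documented Linux behavior rather than the slide: the NPTL implementation supports only PTHREAD_SCOPE_SYSTEM, so requesting PTHREAD_SCOPE_PROCESS typically fails with ENOTSUP; process scope is meaningful only on systems implementing a many-to-one or many-to-many model.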
Multiple-Processor Scheduling
 If multiple CPUs are available, the scheduling problem becomes more
complex.
Approaches to Multiple-Processor Scheduling
1. Asymmetric Multiprocessing: The basic idea is:
 A master server is a single processor responsible for all scheduling decisions,
I/O processing and other system activities.
 The other processors execute only user code.
 Advantage: This is simple because only one processor accesses the system
data structures, reducing the need for data sharing
2. Symmetric Multiprocessing: The basic idea is:
 Each processor is self-scheduling.
 To do scheduling, the scheduler for each processor examines the ready queue and selects a process to execute.
• Restriction: we must ensure that two processors do not choose the same process and that processes are not lost from the queue.
Processor Affinity

 In SMP systems,
1. migration of processes from one processor to another is avoided, and
2. instead, processes are kept running on the same processor. This is known as processor affinity.
 Two forms:
1. Soft Affinity
 The OS tries to keep a process on one processor as a matter of policy, but cannot guarantee it will happen.
 It is possible for a process to migrate between processors.
2. Hard Affinity
 The OS allows a process to specify that it is not to migrate to other processors. E.g., Solaris
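The slide cites Solaris; on Linux, hard affinity can be requested with the sched_setaffinity system call. A minimal sketch pinning the calling process to CPU 0:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(0, &set);   /* allow this process to run on CPU 0 only */
    if (sched_setaffinity(0, sizeof(set), &set) != 0)
        perror("sched_setaffinity");
    return 0;
}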

Load Balancing

 Load balancing attempts to keep the workload evenly distributed across all processors in an SMP system.
 There are two general approaches to load balancing: push migration and pull
migration.
 With push migration, a specific task periodically checks the load on each processor and, if it finds an imbalance, evenly distributes the load by moving (or pushing) processes from overloaded to idle or less-busy processors.
 Pull migration occurs when an idle processor pulls a waiting task from a busy
processor.

Multicore Processors
• A recent trend in computer hardware has been to place multiple processor cores on the same physical chip, resulting in a multicore processor.
• Each core has a register set to maintain its architectural state and appears to
the operating system to be a separate physical processor.
• Multicore processors may complicate scheduling issues.
• Let's consider how this can happen. Researchers have discovered that when a
processor accesses memory, it spends a significant amount of time waiting for
the data to become available.
• This situation, known as a memory stall, may occur for various reasons, such as
a cache miss (accessing data that is not in cache memory).

 In a memory stall, the processor can spend up to 50 percent of its time waiting for data to become available from memory.

 To remedy this situation, many recent hardware designs have implemented multithreaded processor cores, in which two (or more) hardware threads are assigned to each core. That way, if one thread stalls while waiting for memory, the core can switch to another thread.

Virtualization and Scheduling
 A system with virtualization, even a single-CPU system, frequently acts like a
multiprocessor system.
 The virtualization software presents one or more virtual CPUs to each of the virtual
machines running on the system and then schedules the use of the physical CPUs among
the virtual machines.
 In general, though, most virtualized environments have one host operating system and
many guest operating systems.
 The host operating system creates and manages the virtual machines, and each virtual
machine has a guest operating system installed and applications running within that
guest.
 Each guest operating system may be fine-tuned for specific use cases, applications, and
users, including time sharing or even real-time operation.