Lecture 4 - Process - CPU Scheduling


Process Scheduling / CPU Scheduling
Process Scheduling
• The act of determining which process in the ready state should be moved to the
running state is known as Process Scheduling.
• The prime aim of the process scheduling system is to keep the CPU busy all the time
and to deliver minimum response time for all programs.
• CPU scheduling allows one process to use the CPU while the execution of another
process is on hold (in the waiting state) due to the unavailability of a resource such as
I/O, thereby making full use of the CPU.
• The aim of CPU scheduling is to make the system efficient, fast and fair.
• Process scheduling is an essential part of multiprogramming operating systems.
Such operating systems allow more than one process to be loaded into executable
memory at a time, and the loaded processes share the CPU using time multiplexing.
• To achieve this, the scheduler must apply appropriate rules for swapping processes
in and out of the CPU.
Process Scheduling Queues
• The OS maintains all PCBs in Process Scheduling Queues.
• The OS maintains a separate queue for each of the process states and PCBs of all
processes in the same execution state are placed in the same queue.
• When the state of a process is changed, its PCB is unlinked from its current queue and
moved to its new state queue.
• The Operating System maintains the following important process scheduling queues:
o All processes, on entering the system, are stored in the job queue.
o Processes in the Ready state are placed in the ready queue. This queue keeps a set of
all processes residing in main memory, ready and waiting to execute. A new process is
always put in this queue.
o Processes waiting for a device to become available are placed in device queues. There
is a separate device queue for each available I/O device.
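As an illustration (not from the slides), here is a minimal Python sketch of these queues; the PCB fields, queue names and devices are simplified assumptions:

```python
from collections import deque

class PCB:
    """Minimal process control block: just a pid and a state label."""
    def __init__(self, pid):
        self.pid = pid
        self.state = "new"

# One queue per state / device (names are illustrative).
job_queue = deque()                       # every process that has entered the system
ready_queue = deque()                     # processes in main memory, ready to run
device_queues = {"disk": deque()}         # one queue per I/O device

def admit(pcb):
    """A new process enters the system and, once loaded, joins the ready queue."""
    job_queue.append(pcb)
    pcb.state = "ready"
    ready_queue.append(pcb)

def wait_for(pcb, device):
    """On an I/O request the PCB is unlinked from ready and linked to a device queue."""
    pcb.state = "waiting"
    device_queues[device].append(pcb)

admit(PCB(1)); admit(PCB(2))
wait_for(ready_queue.popleft(), "disk")
print([p.pid for p in ready_queue], [p.pid for p in device_queues["disk"]])  # [2] [1]
```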
Process Scheduling Queues
Two-State Process Model
• The two-state process model refers to the running and non-running states: a process is either on the CPU (running) or it is not.
Schedulers
• Schedulers are special system software which handle process scheduling in various ways.
• Their main task is to select the jobs to be submitted into the system and to decide which
process to run.
• Schedulers fall into one of two general categories:
o Non-pre-emptive scheduling: the currently executing process gives up the CPU
voluntarily.
o Pre-emptive scheduling: the operating system decides to favor another process,
pre-empting the currently executing process.
• Schedulers are of three types:
o Long-Term Scheduler
o Short-Term Scheduler
o Medium-Term Scheduler
Long-Term Scheduler
• It is also called a job scheduler.
• A long-term scheduler determines which programs are admitted to the system for
processing. It selects processes from the queue and loads them into memory for
execution.
• It loads processes into memory for CPU scheduling.
• The primary objective of the job scheduler is to provide a balanced mix of jobs, such as
I/O bound and processor bound.
• It also controls the degree of multiprogramming. If the degree of multiprogramming is
stable, then the average rate of process creation must be equal to the average departure
rate of processes leaving the system.
• On some systems, the long-term scheduler may be absent or minimal. Time-sharing
operating systems typically have no long-term scheduler.
• The long-term scheduler is used when a process changes state from new to ready.
Short Term Scheduler
• It is also called the CPU scheduler and runs very frequently.
• Its main objective is to increase system performance in accordance
with the chosen set of criteria and to increase the process execution rate.
• It performs the transition of a process from the ready state to the running state.
• CPU scheduler selects a process among the processes that are ready
to execute and allocates CPU to one of them.
• The short-term scheduler makes the decision of which process to execute next;
the dispatcher then gives the selected process control of the CPU.
• Short-term schedulers are faster than long-term schedulers.
Medium Term Scheduler
• Medium-term scheduling is a part of swapping.
• It removes the processes from the memory. It reduces the degree of
multiprogramming.
• The medium-term scheduler is in charge of handling swapped-out processes.
• A running process may become suspended if it makes an I/O request.
• A suspended process cannot make any progress towards completion.
• In this condition, to remove the process from memory and make space for other
processes, the suspended process is moved to the secondary storage.
• This process is called swapping, and the process is said to be swapped out or
rolled out.
• Swapping may be necessary to improve the process mix.
Context Switch
• A context switch is the mechanism to store and restore the state or context of a
CPU in the Process Control Block (PCB) so that process execution can be resumed from the
same point at a later time.
• Using this technique, a context switcher enables multiple processes to share a
single CPU.
• Context switching is an essential feature of a multitasking operating system.
• When the scheduler switches the CPU from executing one process to executing
another, the state of the currently running process is stored in its process
control block.
• After this, the state for the process to run next is loaded from its own PCB and used
to set the PC, registers, etc.
• At that point, the second process can start executing.
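A toy Python sketch of this save/restore step (the PCB and CPU fields are assumed, simplified stand-ins for real hardware context):

```python
class PCB:
    """Holds the saved context of a process (program counter and registers)."""
    def __init__(self, pid):
        self.pid = pid
        self.saved_context = {"pc": 0, "registers": {}}

class CPU:
    def __init__(self):
        self.pc = 0
        self.registers = {}

def context_switch(cpu, old_pcb, new_pcb):
    # Save the state of the process being paused into its PCB...
    old_pcb.saved_context = {"pc": cpu.pc, "registers": dict(cpu.registers)}
    # ...then load the next process's saved state so it resumes where it left off.
    cpu.pc = new_pcb.saved_context["pc"]
    cpu.registers = dict(new_pcb.saved_context["registers"])

cpu, p1, p2 = CPU(), PCB(1), PCB(2)
cpu.pc, cpu.registers = 120, {"r0": 7}    # pretend P1 has been running for a while
context_switch(cpu, p1, p2)               # P1 is paused, P2 resumes from its saved point
print(p1.saved_context["pc"], cpu.pc)     # 120 0
```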
Context switching diagram
Scheduling Algorithms
• A Process Scheduler schedules different processes to be assigned to the
CPU based on particular scheduling algorithms.

 First-Come, First-Served (FCFS) Scheduling
 Shortest-Job-Next (SJN) Scheduling
 Priority Scheduling
 Shortest Remaining Time
 Round Robin (RR) Scheduling
 Multiple-Level Queues Scheduling
 Multilevel Feedback Queue Scheduling
Scheduling Criteria
• There are many different criteria to consider when choosing the "best"
scheduling algorithm:
• CPU utilization - To make the best use of the CPU and not waste any
CPU cycle, the CPU should be working most of the time (ideally 100% of the
time). In a real system, CPU utilization should range from 40%
(lightly loaded) to 90% (heavily loaded).
• Throughput - The total number of processes completed per unit time, or in
other words the total amount of work done in a unit of time. This may range
from 10/second to 1/hour depending on the specific processes.
• Turnaround time - The amount of time taken to execute a particular
process, i.e. the interval from the time of submission of the process to the
time of its completion (wall-clock time).
• Waiting time - The sum of the periods a process spends waiting in the
ready queue to acquire control of the CPU.
• Load average - It is the average number of processes residing in the
ready queue waiting for their turn to get into the CPU.
• Response time - Amount of time it takes from when a request was
submitted until the first response is produced. Remember, it is the time
till the first response and not the completion of process execution(final
response).

• In general, CPU utilization and throughput are maximized, while the other
factors are minimized, for proper optimization.
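As a quick worked example with made-up numbers, turnaround time = completion time - arrival time and waiting time = turnaround time - CPU burst time:

```python
# (arrival, burst, completion) per process -- illustrative values only
processes = {"P1": (0, 5, 5), "P2": (1, 3, 8), "P3": (2, 4, 12)}

for name, (arrival, burst, completion) in processes.items():
    turnaround = completion - arrival     # submission to completion (wall-clock time)
    waiting = turnaround - burst          # time spent waiting in the ready queue
    print(name, "turnaround =", turnaround, "waiting =", waiting)
```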
The Algorithms
First Come, First Served (FCFS)
• Jobs are executed on first come, first served basis.
• It is a non-preemptive scheduling algorithm.
• Easy to understand and implement.
• Its implementation is based on FIFO queue.
• Poor in performance, as average wait time is high.
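A minimal FCFS sketch in Python (burst times are assumed example values, all jobs arriving at time 0):

```python
def fcfs(bursts):
    """Serve jobs in arrival (FIFO) order; return each job's waiting time."""
    clock, waits = 0, []
    for burst in bursts:           # all jobs assumed to arrive at time 0
        waits.append(clock)        # a job waits until every earlier job has finished
        clock += burst
    return waits

waits = fcfs([24, 3, 3])           # example burst times
print(waits, "average wait =", sum(waits) / len(waits))   # [0, 24, 27] average 17.0
```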
Round Robin Scheduling
• Round Robin is a preemptive process scheduling algorithm.
• Each process is provided a fixed time to execute, called a
quantum (or time slice).
• Once a process has executed for its quantum, it is
preempted and another process executes for its quantum.
• Context switching is used to save states of preempted
processes.
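A small Round Robin sketch (quantum and burst times are assumed values); each process runs for at most one quantum before being preempted and re-queued:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return (pid, start, end) time slices showing how the CPU is time-multiplexed."""
    queue = deque(bursts.items())
    clock, timeline = 0, []
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)
        timeline.append((pid, clock, clock + run))
        clock += run
        if remaining > run:                    # not finished: preempt and re-queue
            queue.append((pid, remaining - run))
    return timeline

print(round_robin({"P1": 5, "P2": 3, "P3": 2}, quantum=2))
```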
Shortest Job Next (SJN)
• This is also known as shortest job first, or SJF.
• This is a non-preemptive scheduling algorithm.
• Best approach to minimize waiting time.
• Easy to implement in Batch systems where required CPU time is known in
advance.
• Impossible to implement in interactive systems where the required CPU
time is not known.
• The processor must know in advance how much time a process will take.
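A non-preemptive SJF sketch (all jobs assumed to arrive at time 0, with burst times known in advance):

```python
def sjf(bursts):
    """Run the shortest job first; return waiting time per job (all arrive at t=0)."""
    clock, waits = 0, {}
    for pid, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        waits[pid] = clock
        clock += burst
    return waits

waits = sjf({"P1": 6, "P2": 8, "P3": 7, "P4": 3})      # example burst times
print(waits, "average wait =", sum(waits.values()) / len(waits))   # average 7.0
```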
Shortest Remaining Time
• Shortest remaining time (SRT) is the preemptive version of the SJN
algorithm.
• The processor is allocated to the job closest to completion but it can
be preempted by a newer ready job with shorter time to completion.
• Impossible to implement in interactive systems where required CPU
time is not known.
• It is often used in batch environments where short jobs need to be
given preference.
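A rough SRT sketch, simulated one time unit at a time; at every step the arrived job closest to completion gets the CPU (arrival and burst values are assumptions):

```python
def srt(jobs):
    """jobs: {pid: (arrival, burst)}. Simulate one time unit at a time."""
    remaining = {pid: burst for pid, (arrival, burst) in jobs.items()}
    clock, schedule = 0, []
    while remaining:
        ready = [pid for pid in remaining if jobs[pid][0] <= clock]
        if not ready:                  # nothing has arrived yet: CPU idles one tick
            clock += 1
            continue
        pid = min(ready, key=lambda p: remaining[p])   # job closest to completion wins
        schedule.append((clock, pid))
        remaining[pid] -= 1
        if remaining[pid] == 0:
            del remaining[pid]
        clock += 1
    return schedule

print(srt({"P1": (0, 7), "P2": (2, 4), "P3": (4, 1)}))  # P1 is preempted by P2, then P3
```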
Priority Based Scheduling Algorithm

• A priority is assigned to each process.
• The process with the highest priority is executed first, and so on.
• Processes with the same priority are executed in FCFS order.
• Priority can be decided based on memory requirements, time
requirements or any other resource requirement.
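A non-preemptive priority scheduling sketch (here a lower number is assumed to mean higher priority, and ties fall back to FCFS order):

```python
def priority_schedule(jobs):
    """jobs: list of (pid, priority, burst) in arrival order, all arriving at t=0."""
    # Sort by priority; the original index breaks ties, preserving FCFS order.
    order = sorted(enumerate(jobs), key=lambda iv: (iv[1][1], iv[0]))
    clock, waits = 0, {}
    for _, (pid, _, burst) in order:
        waits[pid] = clock
        clock += burst
    return waits

jobs = [("P1", 3, 10), ("P2", 1, 1), ("P3", 4, 2), ("P4", 1, 5)]  # example values
print(priority_schedule(jobs))   # P2 and P4 (priority 1) run first, in FCFS order
```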
Multilevel Queue Scheduling
• Multiple-level queues are not an independent scheduling algorithm.
• They make use of other existing algorithms to group and schedule jobs
with common characteristics.
 Multiple queues are maintained for processes with common
characteristics.
 Each queue can have its own scheduling algorithms.
 Priorities are assigned to each queue.
• For example, CPU-bound jobs can be scheduled in one queue and all
I/O-bound jobs in another queue.
• The Process Scheduler then alternately selects jobs from each.
• NB: under this algorithm jobs cannot switch from queue to queue;
once assigned to a queue, they remain in that queue until they finish.
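A rough sketch of two such queues with the scheduler alternating between them (queue names, contents and the fixed alternation order are assumptions); note that jobs never move between queues:

```python
from collections import deque
from itertools import cycle

# Each queue could run its own algorithm; both are plain FIFO here for brevity.
queues = {"io_bound": deque(["P2", "P4"]), "cpu_bound": deque(["P1", "P3"])}
turn = cycle(queues)                      # alternate between the queues

def pick_next():
    """Take the next job from the queue whose turn it is, skipping empty queues."""
    for _ in range(len(queues)):
        name = next(turn)
        if queues[name]:
            return name, queues[name].popleft()
    return None

while (choice := pick_next()) is not None:
    print("run", choice[1], "from the", choice[0], "queue")   # jobs never change queues
```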
Multilevel feedback queue scheduling
• Multilevel feedback queue scheduling is similar to the ordinary multilevel queue
scheduling described above, except jobs may be moved from one queue to another for a
variety of reasons:
o If the characteristics of a job change between CPU-intensive and I/O intensive, then it may be
appropriate to switch a job from one queue to another.
o Aging can also be incorporated, so that a job that has waited for a long time can get bumped up
into a higher priority queue for a while.
• Multilevel feedback queue scheduling is the most flexible, because it can be tuned for
any situation. But it is also the most complex to implement because of all the adjustable
parameters. Some of the parameters which define one of these systems include:
o The number of queues.
o The scheduling algorithm for each queue.
o The methods used to upgrade or demote processes from one queue to another (which may
differ).
o The method used to determine which queue a process enters initially.
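A very small feedback-queue sketch (the per-level quanta and the demotion rule are assumed parameters): a job that uses up its full quantum is demoted to the next lower-priority queue; promotion/aging is omitted for brevity.

```python
from collections import deque

def mlfq(bursts, quanta=(2, 4, 8)):
    """bursts: {pid: cpu_time}. Level i grants a quantum of quanta[i]; a job that
    uses it all up is demoted one level. Returns the resulting run order."""
    levels = [deque() for _ in quanta]
    for pid, burst in bursts.items():
        levels[0].append((pid, burst))            # every job starts in the top queue
    order = []
    while any(levels):
        lvl = next(i for i, q in enumerate(levels) if q)    # highest non-empty queue
        pid, remaining = levels[lvl].popleft()
        run = min(quanta[lvl], remaining)
        order.append((pid, lvl, run))
        if remaining > run:                       # used the full quantum: demote
            levels[min(lvl + 1, len(levels) - 1)].append((pid, remaining - run))
    return order

print(mlfq({"P1": 10, "P2": 3}))   # P1 drifts down the queues; P2 finishes at level 1
```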
• ANY question, from ANY point of ignorance, that might need ANY clarification?
