
CPU Scheduling: UNIT-3, Amrita Bhatnagar

This document covers CPU scheduling concepts, including process states, scheduling algorithms, and the structure of processes in memory. It discusses various scheduling methods such as First-Come, First-Served, Shortest Job First, and Round Robin, along with their advantages and disadvantages. Additionally, it addresses the importance of CPU utilization, throughput, turnaround time, and the role of process control blocks in managing process information.


CPU SCHEDULING

UNIT-3
AMRITA BHATNAGAR
UNIT-3
Scheduling Concept
Performance Criteria
Process states
Process Transition Diagram
Schedulers
Process Control Block
Process address space
Process Identification Information
Thread and their management
Scheduling Algorithms
Multiprocessor Scheduling
Deadlock: System Model
Deadlock Characterization
Prevention, Avoidance, and detection
Recovery From Deadlock
Process Concept
Process
Process States
Process Control Blocks
Threads
Process

Process – a program in execution; process execution must progress in a sequential fashion.


A program is a passive entity, but a process is an active entity.
A program becomes a process when an executable file is loaded into memory.
A system consists of a collection of processes.
These processes can execute concurrently: by switching the CPU among the
processes, CPU utilization is increased.

A process includes:
program counter
stack
data section
Heap
The Structure of Process in Memory
Process memory is divided into four sections as shown in the Figure below:
• The text section comprises the compiled program code, read in from non-volatile storage when
the program is launched. The process also includes the current activity, as represented by the value of
the program counter and the contents of the processor's registers.

• The data section stores global and static variables, allocated and initialized prior to executing
main.
• The heap is used for dynamic memory allocation, and is managed via calls to new, delete,
malloc, free, etc.
• The stack is used for local variables. Space on the stack is reserved for local variables when
they are declared ( at function entrance or elsewhere, depending on the language ), and the
space is freed up when the variables go out of scope. Note that the stack is also used for
function return values.
Process in Memory
Process States
•Processes may be in one of 5 states
• New - The process is in the stage of being created.
• Ready - The process has all the resources available that it needs to
run, but the CPU is not currently working on this process's
instructions.
• Running - The CPU is working on this process's instructions.
• Waiting - The process cannot run at the moment, because it is
waiting for some resource to become available or for some event to
occur. For example the process may be waiting for keyboard input,
disk access request, inter-process messages, a timer to go off, or a
child process to finish.
• Terminated - The process has completed.
Process States
Schedulers
Short Term Scheduler
In operating systems, the short-term scheduler, also known as the CPU scheduler, decides which
process in the ready queue to execute next and allocates the CPU to it. It makes this decision
frequently, often triggered by events like clock interrupts or I/O completion. The short-term
scheduler uses various scheduling algorithms to determine which process to execute next, such
as First-Come, First-Served (FCFS), Shortest Job First (SJF), Round Robin (RR), and
priority-based scheduling.
Long Term Schedulers
The long-term scheduler (also known as the job scheduler) decides which processes to load into
memory from secondary storage.
Medium Term Schedulers
Handles swapping processes in and out of memory to manage memory usage.
Process Control Block
A process control block (PCB) is a data structure used by computer
operating systems to store all the information about a process.
It is also known as a process descriptor.
When a process is created (initialized or installed), the operating system
creates a corresponding process control block.
For each process there is a Process Control Block, PCB, which stores the following (
types of ) process-specific information
Process State - Running, waiting, Ready, new, terminated
•Process ID, and parent process ID.
•CPU registers and Program Counter - These need to be saved and restored when
swapping processes in and out of the CPU.
•CPU-Scheduling information - Such as priority information and pointers to
scheduling queues.
•Memory-Management information - E.g. page tables or segment tables, base
register, limit register.
•Accounting information - user and kernel CPU time consumed, account numbers,
limits, etc.
•I/O Status information - Devices allocated, open file tables, etc.
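The fields listed above can be sketched as a simple data structure. The following is an illustrative Python sketch, not any real kernel's PCB layout; all field names are assumptions made for the example.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PCB:
    """Illustrative process control block; field names are invented for this sketch."""
    pid: int
    parent_pid: Optional[int]
    state: str = "new"              # new, ready, running, waiting, terminated
    program_counter: int = 0        # saved and restored on a context switch
    registers: dict = field(default_factory=dict)    # saved CPU registers
    priority: int = 0               # CPU-scheduling information
    page_table: dict = field(default_factory=dict)   # memory-management information
    cpu_time_used: float = 0.0      # accounting information
    open_files: list = field(default_factory=list)   # I/O status information

# When a process is created, the operating system builds a PCB for it:
pcb = PCB(pid=42, parent_pid=1)
pcb.state = "ready"
```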
Process Control Block
THREADS
Traditionally, a process is a program that performs a single thread of execution.
Many modern operating systems have extended the process concept to allow a process to
have multiple threads of execution and thus to perform more than one task at a time.
On a system that supports threads, the PCB is expanded to include information for threads.
Context Switching

Context switching is a process that involves switching the CPU from one process
or task to another.
In this phenomenon, the execution of the process that is present in the running
state is suspended by the kernel and another process that is present in the ready
state is executed by the CPU.
It is one of the essential features of a multitasking operating system.
Processes are switched so quickly that the user has the illusion that all
the processes are executing at the same time.
CPU Switch from process to process
Scheduling concepts
❑CPU scheduling is the basis of multiprogrammed operating systems. By switching the
CPU among processes, the operating system can make the computer more productive.
❑ In a single-processor system, only one process can run at a time; any others must wait
until the CPU is free and can be rescheduled. The objective of multiprogramming is to
have some process running at all times, to maximize CPU utilization.
❑The idea is relatively simple. A process is executed until it must wait, typically for the
completion of some I/O request. In a simple computer system, the CPU then just sits
idle. All this waiting time is wasted; no useful work is accomplished.
❑With multiprogramming, we try to use this time productively. Several processes are kept
in memory at one time. When one process has to wait, the operating system takes the
CPU away from that process and gives the CPU to another process. This pattern
continues. Every time one process has to wait, another process can take over use of the
CPU.
❑ Scheduling of this kind is a fundamental operating-system function. Almost all
computer resources are scheduled before use. The CPU is, of course, one of the primary
computer resources. Thus, its scheduling is central to operating-system design.
CPU–I/O Burst Cycle

1. Process execution consists of a cycle of CPU execution and I/O wait. Processes
alternate between these two states.

2. Process execution begins with a CPU burst that is followed by an I/O burst, which is
followed by another CPU burst, then another I/O burst, and so on. Eventually, the
final CPU burst ends with a system request to terminate execution.

3. An I/O-bound program typically has many short CPU bursts. A CPU-bound program
might have a few long CPU bursts. This distribution can be important in the selection
of an appropriate CPU scheduling algorithm
CPU Scheduler
•Whenever the CPU becomes idle, the operating system must select one of the processes
in the ready queue to be executed. The selection process is carried out by the short-term
scheduler (or CPU scheduler).

• The scheduler selects a process from the processes in memory that are ready to execute
and allocates the CPU to that process.

•The records in the queues are generally process control blocks (PCBs) of the processes.
Preemptive Scheduling
CPU-scheduling decisions may take place under the following four circumstances:

1. When a process switches from the running state to the waiting state (for example, as
the result of an I/O request or an invocation of wait for the termination of one of the
child processes)
2. When a process switches from the running state to the ready state (for example, when
an interrupt occurs)
3. When a process switches from the waiting state to the ready state (for example, at the
completion of I/O)
4. When a process terminates.
Dispatcher
Another component involved in the CPU-scheduling function is the dispatcher. The dispatcher is
the module that gives control of the CPU to the process selected by the short-term scheduler.
This function involves the following:
• Switching context
•Switching to user mode
• Jumping to the proper location in the user program to restart that program
The dispatcher should be as fast as possible, since it is invoked during every process switch. The
time it takes for the dispatcher to stop one process and start another running is known as the
dispatch latency.
Scheduling Criteria
CPU utilization:
We want to keep the CPU as busy as possible. Conceptually, CPU utilization can range from 0 to 100
percent. In a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a
heavily used system).

Throughput:
If the CPU is busy executing processes, then work is being done. One measure of work is the number of
processes that are completed per time unit, called throughput. For long processes, this rate may be one
process per hour; for short transactions, it may be ten processes per second.

Turnaround time:
From the point of view of a particular process, the important criterion is how long it takes to execute that
process. The interval from the time of submission of a process to the time of completion is the turnaround
time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready
queue, executing on the CPU, and doing I/O.
Performance Criteria (various times related to a process)
Arrival time
The time at which a process enters the ready queue.
Burst time
The CPU time required by a process to complete its execution.
Completion time
The time at which a process completes its execution.
Turnaround time
Completion time – Arrival time (the total time a process spends in the system).
Waiting time
Turnaround time – Burst time.
Response time
The time from a process's arrival until it first gets the CPU.
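These definitions can be checked with a small helper. The function and argument names below are illustrative, and the sample values are made up:

```python
def process_times(arrival, burst, completion, first_cpu):
    """Compute the standard per-process scheduling metrics."""
    turnaround = completion - arrival   # total time spent in the system
    waiting = turnaround - burst        # time spent waiting in the ready queue
    response = first_cpu - arrival      # delay until the first CPU allocation
    return turnaround, waiting, response

# Hypothetical process: arrives at 5, first scheduled at 7, needs 10 ms, finishes at 20
assert process_times(5, 10, 20, 7) == (15, 5, 2)
```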
Scheduling Algorithms
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is
to be allocated the CPU.
There are many different CPU-scheduling algorithms.
Preemptive Scheduling:
• Shortest Remaining Time First (SRTF)
• Longest Remaining Time First (LRTF)
• Round Robin (RR)
• Priority

Non-Preemptive Scheduling:
• First Come, First Served (FCFS)
• Shortest Job First (SJF)
• Longest Job First (LJF)
First-Come, First-Served Scheduling

By far the simplest CPU-scheduling algorithm is the first-come, first-served (FCFS) scheduling
algorithm. With this scheme, the process that requests the CPU first is allocated the CPU first.
Consider the following set of processes that arrive at time 0 in the order P1, P2, P3, with the length
of the CPU burst given in milliseconds: P1 – 24, P2 – 3, P3 – 3.
First-Come, First-Served Scheduling
The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27
milliseconds for process P3. Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds.
Convoy Effect in FCFS
In FCFS, if a long, CPU-intensive process enters the queue first, it will monopolize the
CPU until it completes, even if many other shorter, I/O-bound processes are
waiting. This can lead to the convoy effect, where the longer process effectively holds up
a "convoy" of shorter processes waiting for CPU time.
Convoy Effect in FCFS
Consequences:

The convoy effect can lead to several negative consequences:


Wasted CPU Time

Reduced Throughput

Starvation
Shortest Job First
•Shortest Job First (SJF) is an algorithm in which the process having the
smallest execution time is chosen for the next execution.
•This scheduling method can be preemptive or non-preemptive. It significantly
reduces the average waiting time for other processes awaiting execution.

There are basically two types of SJF methods:


•Non-Preemptive SJF
•Preemptive SJF
Non-Preemptive SJF

In non-preemptive scheduling, once the CPU cycle is allocated to the process, the
process holds it till it reaches a waiting state or is terminated.
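As a sketch of how non-preemptive SJF reduces the average waiting time, the following simulation (assuming all processes arrive at time 0, with made-up burst lengths) runs the shortest bursts first:

```python
def sjf_waiting_times(bursts):
    """Non-preemptive SJF for processes that all arrive at time 0:
    dispatching the shortest bursts first minimizes average waiting time."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed       # process i starts after all shorter bursts
        elapsed += bursts[i]
    return waits

# Hypothetical bursts for P1..P4 (in ms)
waits = sjf_waiting_times([6, 8, 7, 3])
assert waits == [3, 16, 9, 0]
assert sum(waits) / len(waits) == 7.0
```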
Pre-emptive SJF(Shortest Remaining time first)

•In Preemptive SJF Scheduling, jobs are put into the ready queue as they come. A
process with the shortest burst time begins execution.
•If a process with even a shorter burst time arrives, the current process is removed or
preempted from execution, and the shorter job is allocated a CPU cycle.
•It is also known as the Shortest remaining time first.
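A minimal sketch of SRTF, simulated in 1 ms ticks with hypothetical arrival and burst values; at every tick the ready process with the shortest remaining time runs, so a newly arrived shorter job preempts the current one:

```python
def srtf(procs):
    """Preemptive SJF (shortest remaining time first), simulated in 1 ms ticks.
    procs: list of (name, arrival, burst). Returns completion times."""
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: arr for name, arr, _ in procs}
    time, completion = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:               # CPU idle until the next arrival
            time += 1
            continue
        n = min(ready, key=lambda x: remaining[x])   # shortest remaining time wins
        remaining[n] -= 1
        time += 1
        if remaining[n] == 0:
            del remaining[n]
            completion[n] = time
    return completion

# Hypothetical: P1 arrives at 0 (burst 8), P2 at 1 (burst 4); P2 preempts P1
done = srtf([("P1", 0, 8), ("P2", 1, 4)])
assert done == {"P2": 5, "P1": 12}
```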
Round–Robin Scheduling Algorithm
•The name of this algorithm comes from the round-robin principle, where each
person gets an equal share of something in turns.
• It is one of the oldest and simplest scheduling algorithms, and is widely used for multitasking.
•In Round-robin scheduling, each ready task runs turn by turn only in a cyclic
queue for a limited time slice.
•This algorithm also offers starvation-free execution of processes.
Example of RR with Time Quantum = 20
Process Burst Time
P1 53
P2 17
P3 68
P4 24
The Gantt chart is:

P1 (0–20) | P2 (20–37) | P3 (37–57) | P4 (57–77) | P1 (77–97) | P3 (97–117) |
P4 (117–121) | P1 (121–134) | P3 (134–154) | P3 (154–162)


Typically, higher average turnaround than SJF, but better response.
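The Gantt chart above can be reproduced by simulating the cyclic ready queue. A sketch, assuming all four processes arrive at time 0:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round-robin simulation. bursts: list of (name, burst).
    Returns the Gantt chart as (name, start, end) slices and the
    completion time of each process; all processes arrive at time 0."""
    remaining = dict(bursts)
    queue = deque(name for name, _ in bursts)
    time, chart, completion = 0, [], {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])   # run one quantum or until done
        chart.append((name, time, time + run))
        time += run
        remaining[name] -= run
        if remaining[name] > 0:
            queue.append(name)                # back to the end of the ready queue
        else:
            completion[name] = time
    return chart, completion

chart, done = round_robin([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], 20)
assert chart[0] == ("P1", 0, 20)
assert done == {"P2": 37, "P4": 121, "P1": 134, "P3": 162}
```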
Priority Scheduling
•Priority Scheduling is a method of scheduling processes that is
based on priority. In this algorithm, the scheduler selects the tasks to
work as per priority.
•The processes with higher priority should be carried out first, whereas
jobs with equal priorities are carried out on a round-robin or FCFS
basis. Priority depends upon memory requirements, time
requirements, etc.
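A minimal sketch of non-preemptive priority scheduling with hypothetical processes; here a lower number means higher priority, which is a common but not universal convention:

```python
def priority_schedule(procs):
    """Non-preemptive priority scheduling, all processes arriving at time 0.
    procs: list of (name, burst, priority); lower number = higher priority."""
    order = sorted(procs, key=lambda p: p[2])   # highest priority first
    time, waits = 0, {}
    for name, burst, _ in order:
        waits[name] = time        # waiting time before this process starts
        time += burst
    return waits

# Hypothetical processes: (name, burst, priority)
waits = priority_schedule([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4)])
assert waits == {"P2": 0, "P1": 1, "P3": 11}
```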
Types of Priority Scheduling
Preemptive Scheduling
In Preemptive Scheduling, the tasks are mostly assigned with their priorities. Sometimes
it is important to run a task with a higher priority before another lower priority task, even
if the lower priority task is still running. The lower priority task holds for some time and
resumes when the higher priority task finishes its execution.
Non-Preemptive Scheduling
In this type of scheduling method, once the CPU has been allocated to a process, the process
keeps the CPU until it releases it, either by terminating or by switching to a waiting state. This
method can be used on a wide range of hardware platforms, because it does not need special
hardware (for example, a timer) the way preemptive scheduling does.
Multilevel Queue
❑It may happen that processes in the ready queue can be divided into different classes where each
class has its own scheduling needs.
❑ For example, a common division is a foreground (interactive) process and a background
(batch) process.
❑These two classes have different scheduling needs. For this kind of situation, Multilevel Queue
Scheduling is used.
❑Ready Queue is divided into separate queues for each class of processes. For example, let us take
three different types of processes System processes, Interactive processes, and Batch Processes.
❑All three processes have their own queue.
Multilevel Feedback Queue Scheduling (MLFQ) CPU
Scheduling

Multilevel Feedback Queue Scheduling (MLFQ) is similar to multilevel queue scheduling,
but here processes can move between the queues. This makes it much more flexible and
efficient than plain multilevel queue scheduling.
