Unit3_Notes
Syllabus: Scheduling Concept, Performance Criteria, Process states, process transition diagram,
schedulers, Process Control Block (PCB), Process Address Space, Process Identification
Information, Scheduling Algorithms – First Come First Serve (FCFS), Shortest Job First (SJF),
Shortest Remaining Time (SRTN), Round Robin (RR), Priority Scheduling, Multilevel Queue
Scheduling
Concept of Process:
A process is a program in execution and forms the basis of all computation. A process is not
the same as the program code; it is much more than that. A process is an 'active' entity, as
opposed to the program, which is a 'passive' entity.
Process memory is divided into four sections for efficient working:
The Text section is made up of the compiled program code, read in from non-volatile storage
when the program is launched.
The Data section is made up of global and static variables, allocated and initialized prior to
executing main.
The Heap is used for dynamic memory allocation and is managed via calls to new, delete,
malloc, free, etc.
The Stack is used for local variables. Space on the stack is reserved for local variables when
they are declared.
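As a minimal illustrative sketch (not taken from these notes), the C fragment below indicates which section each kind of variable typically occupies:

```c
#include <stdlib.h>

int initialized_global = 42;   /* Data section: global, initialized before main runs   */
static int static_counter;     /* Data (BSS) section: static storage, zero-initialized */

int main(void)
{
    int local_value = 7;                      /* Stack: space reserved when declared     */
    int *buffer = malloc(100 * sizeof(int));  /* Heap: dynamically allocated at run time */

    buffer[0] = local_value + initialized_global + static_counter;

    free(buffer);     /* heap memory must be released explicitly              */
    return 0;         /* the compiled machine code itself is the Text section */
}
```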
Scheduling Concept:
In multiprogramming systems, the Operating System schedules processes on the CPU so as to
achieve maximum CPU utilization; this procedure is called CPU scheduling. The Operating
System uses various scheduling algorithms to schedule the processes.
The Operating System maintains the following important process scheduling queues −
Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps a set of all processes residing in main memory, ready
and waiting to execute. A new process is always put in this queue.
Device queues − The processes which are blocked due to unavailability of an I/O device
constitute this queue.
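A simplified sketch of how such queues can be represented, using a hypothetical, stripped-down PCB linked into a queue (field and function names are illustrative, not from any particular kernel):

```c
#include <stdio.h>

/* Hypothetical, simplified PCB; real kernels store far more state. */
struct pcb {
    int pid;            /* process identifier              */
    int state;          /* NEW, READY, RUNNING, ...        */
    struct pcb *next;   /* link to the next PCB in a queue */
};

/* A scheduling queue (job, ready, or device queue) is a linked list of PCBs. */
struct queue {
    struct pcb *head;
    struct pcb *tail;
};

/* Append a PCB at the tail of a queue. */
void enqueue(struct queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail)
        q->tail->next = p;
    else
        q->head = p;
    q->tail = p;
}

/* Remove and return the PCB at the head of a queue. */
struct pcb *dequeue(struct queue *q)
{
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head)
            q->tail = NULL;
    }
    return p;
}

int main(void)
{
    struct queue ready = { NULL, NULL };
    struct pcb p1 = { 1, 0, NULL }, p2 = { 2, 0, NULL };

    enqueue(&ready, &p1);   /* new processes join the ready queue */
    enqueue(&ready, &p2);
    printf("dispatching pid %d\n", dequeue(&ready)->pid);   /* scheduler picks the head */
    return 0;
}
```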
Process States:
A process passes through different states as it executes. These states may differ between
operating systems. However, the common process states are described below.
New
This is the state when the process has just been created. It is the initial state in the process life
cycle.
Ready
In the ready state, the process is waiting to be assigned the processor by the short-term
scheduler so that it can run. A process enters this state immediately after the new state.
Running
The process is said to be in a running state when the process instructions are being executed by
the processor. This is done once the process is assigned to the processor using the short-term
scheduler.
Blocked/ Waiting
The process is in the blocked state if it is waiting for some event to occur, such as the
completion of an I/O operation; such events are handled by the I/O devices and do not require
the processor. After the event is complete, the process goes back to the ready state.
Terminated
The process is terminated once it finishes its execution. In the terminated state, the process is
removed from main memory and its process control block is also deleted.
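The five states above can be written down directly as an enumeration; a minimal sketch with illustrative names:

```c
/* The common process states from the life cycle described above. */
enum process_state {
    STATE_NEW,          /* process has just been created          */
    STATE_READY,        /* in main memory, waiting for the CPU    */
    STATE_RUNNING,      /* instructions being executed on the CPU */
    STATE_BLOCKED,      /* waiting for an event such as I/O       */
    STATE_TERMINATED    /* finished; PCB about to be removed      */
};
```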
Operation on process:
There are many operations that can be performed on processes. Some of these are process
creation, process preemption, process blocking, and process termination.
Process Creation :
Processes need to be created in the system for different operations. This can be done by the
following events −
1. User request for process creation
2. System initialization
3. Execution of a process creation system call by a running process
4. Batch job initialization
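On POSIX systems, event 3 above corresponds to the fork()/exec() pair of system calls; a minimal sketch:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                       /* running process creates a child */

    if (pid < 0) {
        perror("fork failed");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL);   /* child replaces its image with a new program */
        perror("exec failed");                    /* reached only if exec fails */
        exit(EXIT_FAILURE);
    } else {
        wait(NULL);                           /* parent waits for the child to terminate */
        printf("child %d finished\n", (int)pid);
    }
    return 0;
}
```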
Process Preemption
An interrupt mechanism is used in preemption that suspends the process executing currently and
the next process to execute is determined by the short-term scheduler. Preemption makes sure that
all processes get some CPU time for execution.
Process Blocking
The process is blocked if it is waiting for some event to occur, such as the completion of an I/O
operation; such events are handled by the I/O devices and do not require the processor. After
the event is complete, the process again goes to the ready state.
Process Termination
After the process has completed the execution of its last instruction, it is terminated. The
resources held by a process are released after it is terminated.
Scheduling Criteria
Different CPU scheduling algorithms have different properties and the choice of a particular
algorithm depends on the various factors. Many criteria have been suggested for comparing
CPU scheduling algorithms.
The criteria include the following:
1. CPU utilization
The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible.
Theoretically, CPU utilization can range from 0 to 100 percent, but in a real system it typically
varies from 40 to 90 percent depending on the load on the system.
2. Throughput –
A measure of the work done by the CPU is the number of processes executed and completed
per unit time. This is called throughput. The throughput may vary depending upon the length
or duration of the processes.
3. Turnaround time
For a particular process, an important criterion is how long it takes to execute that process. The
time elapsed from the time of submission of a process to the time of completion is known as the
turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory,
waiting in the ready queue, executing on the CPU, and waiting for I/O.
4. Waiting time
A scheduling algorithm does not affect the time required to complete the process once it starts
execution. It only affects the waiting time of a process, i.e. time spent by a process waiting in the
ready queue.
5. Response time
In an interactive system, turnaround time is not the best criterion. A process may produce some
output early and continue computing new results while previous results are being output to the
user. Thus, another criterion is the time from the submission of a request until the first response
is produced. This measure is called response time.
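As a small worked sketch with hypothetical numbers (and assuming the process performs no I/O), turnaround and waiting time follow directly from the arrival, burst, and completion times:

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical process: arrives at t = 2, needs 5 units of CPU, finishes at t = 12. */
    int arrival = 2, burst = 5, completion = 12;

    int turnaround = completion - arrival;   /* time from submission to completion    */
    int waiting    = turnaround - burst;     /* time spent waiting in the ready queue */

    printf("Turnaround time = %d\n", turnaround);   /* prints 10 */
    printf("Waiting time    = %d\n", waiting);      /* prints 5  */
    return 0;
}
```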
Types of Schedulers:
Schedulers are special system software which handle process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to
run. Schedulers are of three types −
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Context Switching
A context switch is the mechanism of storing and restoring the state, or context, of the CPU in
the Process Control Block so that a process's execution can be resumed from the same point at a
later time. Using this technique, a context switcher enables multiple processes to share a single
CPU. Context switching is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to executing another, the
state of the currently running process is stored in its process control block. After this, the state
of the process to run next is loaded from its own PCB and used to set the PC, registers, and so
on. At that point, the second process can start executing.
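Conceptually, a context switch copies the CPU state into the outgoing process's PCB and reloads the incoming process's state from its PCB. The following is a highly simplified simulation in C; the structures are hypothetical, and real context switches are performed in architecture-specific kernel code:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical, much-simplified CPU state; real hardware has many more registers. */
struct cpu_context {
    unsigned long pc;        /* program counter           */
    unsigned long sp;        /* stack pointer             */
    unsigned long regs[4];   /* general-purpose registers */
};

struct pcb {
    int pid;
    struct cpu_context context;   /* saved state lives in the PCB */
};

static struct cpu_context cpu;    /* stands in for the physical CPU registers */

/* Store the outgoing process's state in its PCB, then load the incoming one's. */
void context_switch(struct pcb *current, struct pcb *next)
{
    memcpy(&current->context, &cpu, sizeof cpu);  /* save PC, SP, registers          */
    memcpy(&cpu, &next->context, sizeof cpu);     /* resume 'next' where it left off */
}

int main(void)
{
    struct pcb p1 = { 1, { 0x1000, 0x7fff0000, { 0 } } };
    struct pcb p2 = { 2, { 0x2000, 0x7ffe0000, { 0 } } };

    cpu = p1.context;             /* P1 is currently running  */
    context_switch(&p1, &p2);     /* scheduler switches to P2 */

    printf("CPU now at pc=0x%lx (process %d)\n", cpu.pc, p2.pid);
    return 0;
}
```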
Scheduling Algorithms:
A Process Scheduler schedules different processes to be assigned to the CPU based on
scheduling algorithms. Scheduling algorithms are of two types: preemptive and non-preemptive.
(a) First Come First Serve (FCFS)
In FCFS scheduling, the process that arrives first in the ready queue is executed first. It is a
non-preemptive algorithm.
| P1 | 0ms | 18ms |
| P2 | 16ms | 23ms |
| P3 | 23ms | 33ms |
Advantages of FCFS:
It is the simplest scheduling algorithm and is easy to implement.
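A small sketch of how FCFS completion, waiting, and turnaround times can be computed for processes already ordered by arrival time (the process data below is hypothetical and unrelated to the table above):

```c
#include <stdio.h>

#define N 3

int main(void)
{
    /* Hypothetical processes, already ordered by arrival time. */
    int arrival[N] = {0, 1, 2};
    int burst[N]   = {5, 3, 8};

    int time = 0;
    for (int i = 0; i < N; i++) {
        if (time < arrival[i])
            time = arrival[i];               /* CPU idles until the process arrives */
        time += burst[i];                    /* run to completion (non-preemptive)  */
        int turnaround = time - arrival[i];  /* completion time minus arrival time  */
        int waiting    = turnaround - burst[i];
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, waiting, turnaround);
    }
    return 0;
}
```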
(b) Shortest Job First (SJF)
Shortest Job First (SJF) is a CPU scheduling algorithm where the process with the shortest burst
time is executed first.
It is optimal in terms of minimizing average waiting time but suffers from starvation, where
longer processes may wait indefinitely if shorter processes keep arriving.
Types of SJF
1. Non-Preemptive SJF
○ The CPU is allocated to the process with the shortest burst time.
○ Once a process starts execution, it cannot be preempted until it finishes.
2. Preemptive SJF (Shortest Remaining Time Next, SRTN)
○ If a new process arrives with a burst time shorter than the remaining time of the currently
running process, the CPU is preempted.
Algorithm Steps for Non-Preemptive SJF
1. Sort all processes by arrival time.
2. Select the process with the shortest burst time among the available processes.
3. Execute the selected process until completion.
4. Repeat the process for remaining processes.
5. Calculate waiting time (WT) and turnaround time (TAT) for each process.
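The steps above can be sketched in C as follows, using hypothetical arrival and burst times:

```c
#include <stdio.h>

#define N 4

int main(void)
{
    /* Hypothetical processes with arrival and burst times. */
    int arrival[N] = {0, 1, 2, 3};
    int burst[N]   = {7, 4, 1, 4};
    int done[N]    = {0};

    int time = 0, completed = 0;
    while (completed < N) {
        /* Step 2: among arrived, unfinished processes pick the shortest burst. */
        int pick = -1;
        for (int i = 0; i < N; i++)
            if (!done[i] && arrival[i] <= time &&
                (pick == -1 || burst[i] < burst[pick]))
                pick = i;

        if (pick == -1) {            /* nothing has arrived yet: CPU idles */
            time++;
            continue;
        }

        time += burst[pick];         /* Step 3: run to completion (non-preemptive) */
        done[pick] = 1;
        completed++;

        /* Step 5: waiting time and turnaround time for this process. */
        int turnaround = time - arrival[pick];
        int waiting    = turnaround - burst[pick];
        printf("P%d: waiting=%d turnaround=%d\n", pick + 1, waiting, turnaround);
    }
    return 0;
}
```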
Example (Non-preemptive SJF):