Unit3_Notes

Unit 3 Process Management covers the concepts of processes, their states, and scheduling algorithms used by operating systems to manage CPU utilization. It details the structure of a Process Control Block (PCB), the types of schedulers, and various scheduling algorithms like FCFS, SJF, and Priority Scheduling. Additionally, it explains process operations such as creation, blocking, and termination, along with performance criteria for evaluating scheduling effectiveness.

Unit 3 Process Management

Syllabus: Scheduling Concept, Performance Criteria, Process States, Process Transition Diagram,
Schedulers, Process Control Block (PCB), Process Address Space, Process Identification
Information, Scheduling Algorithms – First Come First Serve (FCFS), Shortest Job First (SJF),
Shortest Remaining Time (SRTN), Round Robin (RR), Priority Scheduling, Multilevel Queue
Scheduling
Concept of Process:
A process is a program in execution and forms the basis of all computation. A process is not
the same as its program code; it is an 'active' entity, whereas the program is a 'passive'
entity.
Process memory is divided into four sections for efficient working:
 The Text section is made up of the compiled program code, read in from non-volatile storage
when the program is launched.
 The Data section is made up of global and static variables, allocated and initialized before
main() starts executing.
 The Heap is used for dynamic memory allocation and is managed via calls to new, delete,
free, etc.
 The Stack is used for local variables. Space on the stack is reserved for local variables when
they are declared.
Scheduling Concept:

In multiprogramming systems, the Operating System schedules processes on the CPU so as to
maximize CPU utilization; this procedure is called CPU scheduling. The Operating System
uses various scheduling algorithms to schedule the processes.
The Operating System maintains the following important process scheduling queues −
 Job queue − This queue keeps all the processes in the system.
 Ready queue − This queue keeps a set of all processes residing in main memory, ready
and waiting to execute. A new process is always put in this queue.
 Device queues − The processes which are blocked due to unavailability of an I/O device
constitute this queue.
Process States:
A process passes through different states as it executes. These states may be different in different
operating systems. However, the common process states are explained with the help of a
diagram

New
This is the state when the process has just been created. It is the initial state in the process life
cycle.

Ready
In the ready state, the process is waiting to be assigned the processor by the short-term
scheduler so that it can run. A process enters this state immediately after the new state.

Running
The process is said to be in a running state when the process instructions are being executed by
the processor. This is done once the process is assigned to the processor using the short-term
scheduler.

Blocked/ Waiting

The process is in the blocked state if it is waiting for some event to occur, typically the
completion of an I/O operation, which is handled by a device and does not require the processor.
After the event completes, the process moves back to the ready state.

Terminated

The process is terminated once it finishes its execution. In the terminated state, the process is
removed from main memory and its process control block is also deleted.
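The transitions described above can be sketched as a small table-driven state machine. This is an illustrative sketch, not any real kernel's implementation; the state names follow the text above:

```python
# Allowed process state transitions, as described in the notes above.
TRANSITIONS = {
    "new": {"ready"},                                   # admitted by the long-term scheduler
    "ready": {"running"},                               # dispatched by the short-term scheduler
    "running": {"ready", "waiting", "terminated"},      # preempted, blocked, or finished
    "waiting": {"ready"},                               # the awaited event (e.g. I/O) completed
    "terminated": set(),                                # final state: PCB removed
}

def can_transition(src, dst):
    """Return True if a process may move from state src to state dst."""
    return dst in TRANSITIONS.get(src, set())
```

Note that a waiting process can never go directly back to running: it must first re-enter the ready queue and be dispatched again.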
Operations on Processes:
There are many operations that can be performed on processes. Some of these are process
creation, process preemption, process blocking, and process termination.

Process Creation :

Processes need to be created in the system for different operations. This can be done by the
following events −
1. User request for process creation
2. System initialization
3. Execution of a process creation system call by a running process
4. Batch job initialization
Process Preemption

Preemption uses an interrupt mechanism to suspend the currently executing process; the
short-term scheduler then determines the next process to execute. Preemption makes sure that
all processes get some CPU time for execution.
Process Blocking

The process is blocked if it is waiting for some event to occur, typically the completion of an
I/O operation, which is handled by a device and does not require the processor. After the event
completes, the process returns to the ready state.

Process Termination

After the process has completed the execution of its last instruction, it is terminated. The
resources held by a process are released after it is terminated.

Process Control Block (PCB)


A Process Control Block (PCB) is a data structure maintained by the operating system (OS) to
store important information about a process. It helps the OS manage and track processes
efficiently.
 Process ID (PID): unique identifier for each process.
 Parent Process ID (PPID): ID of the parent process.
 Process State: tracks whether the process is new, ready, running, waiting, or terminated.
 CPU Scheduling Information: priority, scheduling queue pointers, and saved CPU registers
used to determine execution order.
 Memory Management Information: base and limit registers, page tables, and segment tables
for process memory management.
 I/O Information: list of open files and I/O device status.
 Accounting Information: CPU usage, execution time, and process execution limits.
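The fields above can be grouped into a single record. Below is a minimal sketch using a Python dataclass; the field names mirror the list above and are purely illustrative, not any real operating system's PCB layout:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative Process Control Block holding the fields listed above."""
    pid: int                                          # Process ID
    ppid: int                                         # Parent Process ID
    state: str = "new"                                # new/ready/running/waiting/terminated
    priority: int = 0                                 # CPU scheduling information
    registers: dict = field(default_factory=dict)     # saved CPU context
    base_register: int = 0                            # memory management information
    limit_register: int = 0
    open_files: list = field(default_factory=list)    # I/O information
    cpu_time_used: int = 0                            # accounting information

# The OS would create one PCB per process and update it on every state change.
pcb = PCB(pid=42, ppid=1)
pcb.state = "ready"
```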

Process Address Space


A Process Address Space is the set of memory locations allocated to a process during its
execution. It contains code, data, stack, and heap segments, ensuring that a process runs
independently without interfering with others.

A process's address space includes:


 Text Segment (Code Section): stores program instructions (executable code).
 Data Segment: stores global and static variables; divided into initialized and
uninitialized (BSS) sections.
 Heap Segment: holds dynamically allocated memory, obtained via malloc() in C or the new
operator in C++; grows upward in memory.
 Stack Segment: stores function call frames, local variables, and return addresses; grows
downward in memory.
Process Identification Information
Process Identification Information is a set of unique identifiers assigned to each process by the
Operating System (OS) to track and manage processes efficiently.
Each process is uniquely identified by:
 Process ID (PID)
 Parent Process ID (PPID)
 User ID (UID)
 Group ID (GID)
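On POSIX systems, these identifiers can be read directly from Python's standard `os` module (getuid/getgid are POSIX-only and not available on Windows):

```python
import os

pid = os.getpid()    # Process ID of the current process
ppid = os.getppid()  # Parent Process ID
uid = os.getuid()    # User ID (POSIX only)
gid = os.getgid()    # Group ID (POSIX only)

print(pid, ppid, uid, gid)
```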

Scheduling Criteria

Different CPU scheduling algorithms have different properties and the choice of a particular
algorithm depends on the various factors. Many criteria have been suggested for comparing
CPU scheduling algorithms.
The criteria include the following:
1. CPU utilization
The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible.
Theoretically, CPU utilization can range from 0 to 100 percent, but in a real system it
typically varies from 40 to 90 percent depending on the load on the system.
2. Throughput
A measure of the work done by the CPU is the number of processes completed per unit time;
this is called throughput. The throughput may vary depending upon the length or duration of
the processes.

3. Turnaround time
For a particular process, an important criterion is how long it takes to execute that process. The
time elapsed from the time of submission of a process to the time of completion is known as the
turnaround time. Turnaround time is the sum of the times spent waiting to get into memory,
waiting in the ready queue, executing on the CPU, and waiting for I/O.
4. Waiting time
A scheduling algorithm does not affect the time required to complete the process once it starts
execution. It only affects the waiting time of a process, i.e. time spent by a process waiting in the
ready queue.
5. Response time
In an interactive system, turnaround time is not the best criterion. A process may produce some
output early and continue computing new results while previous results are being output to the
user. Thus another criterion is the time from the submission of a request until the first
response is produced. This measure is called response time.
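For a single process, the last criteria reduce to simple formulas: turnaround time = completion time − arrival time, and waiting time = turnaround time − CPU burst time (ignoring I/O). A quick sketch with made-up numbers:

```python
def turnaround_time(arrival, completion):
    # Time elapsed from submission of the process to its completion.
    return completion - arrival

def waiting_time(arrival, completion, burst):
    # Time spent waiting in the ready queue = turnaround - CPU burst (no I/O assumed).
    return turnaround_time(arrival, completion) - burst

# Hypothetical process: arrives at t=2, needs 7 units of CPU, completes at t=25.
tat = turnaround_time(2, 25)   # 23
wt = waiting_time(2, 25, 7)    # 16
```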
Types of Schedulers:

Schedulers are special system software which handle process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to
run. Schedulers are of three types −
 Long-Term Scheduler

 Short-Term Scheduler

 Medium-Term Scheduler

Long Term Scheduler


 It is also called the job scheduler. A long-term scheduler determines which programs are
admitted to the system for processing. It selects processes from the job queue and loads them
into memory for execution.
 The primary objective of the job scheduler is to provide a balanced mix of jobs, such as
I/O-bound and processor-bound. It also controls the degree of multiprogramming. If the degree
of multiprogramming is stable, then the average rate of process creation must equal the
average departure rate of processes leaving the system.
 On some systems, the long-term scheduler may be absent or minimal; time-sharing operating
systems, for example, have no long-term scheduler. The long-term scheduler is used when a
process changes state from new to ready.

Short Term Scheduler


It is also called the CPU scheduler. Its main objective is to increase system performance in
accordance with the chosen set of criteria. It moves processes from the ready state to the
running state: the CPU scheduler selects a process from among those that are ready to execute
and allocates the CPU to it. Short-term schedulers, also known as dispatchers, make the
decision of which process to execute next. Short-term schedulers run far more frequently than
long-term schedulers and must therefore be faster.

Medium Term Scheduler


Medium-term scheduling is part of swapping. It removes processes from memory and thereby
reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling
swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended process
cannot make any progress towards completion. In this condition, to remove the process from
memory and make space for other processes, the suspended process is moved to secondary
storage. This is called swapping, and the process is said to be swapped out or rolled out.
Swapping may be necessary to improve the process mix.

Context Switching
A context switch is the mechanism to store and restore the state or context of a CPU in Process
Control block so that a process execution can be resumed from the same point at a later time.
Using this technique, a context switcher enables multiple processes to share a single CPU.
Context switching is an essential part of multitasking operating system features.

When the scheduler switches the CPU from executing one process to execute another, the state
from the current running process is stored into the process control block. After this, the state for
the process to run next is loaded from its own PCB and used to set the PC, registers, etc. At that
point, the second process can start executing.

Scheduling Algorithms:
A process scheduler schedules different processes to be assigned to the CPU based on
scheduling algorithms. Scheduling algorithms are of two types: preemptive and non-preemptive.

The following are the popular process scheduling algorithms:


1. First-Come, First-Served (FCFS) Scheduling
2. Shortest-Job-First (SJF) Scheduling
3. Shortest Remaining Time First (SRTF)
4. Priority Scheduling
5. Round-Robin Scheduling

First-Come, First-Served (FCFS) Scheduling


FCFS stands for First Come, First Served. It is the easiest and simplest CPU scheduling
algorithm: the process that requests the CPU first gets the CPU first. This scheduling method
can be managed with a FIFO queue. As a process enters the ready queue, its PCB (Process
Control Block) is linked to the tail of the queue, and when the CPU becomes free, it is
assigned to the process at the head of the queue.

Characteristics of FCFS method:


 It is a non-preemptive scheduling algorithm.

 Jobs are always executed on a first-come, first-serve basis


 It is easy to implement and use.
 However, this method is poor in performance, and the general wait time is quite high.
Example:
Algorithm Steps
1. Sort processes by their arrival time.
2. Start execution of the first process that arrives.
3. When a process finishes, execute the next process in the queue.
4. Continue until all processes are complete.
The example uses three processes P1, P2, and P3, entering the ready state at 0ms, 2ms, and
2ms respectively:

| Process | Arrival Time | Burst Time |

| P1 | 0ms | 18ms |

| P2 | 2ms | 7ms |

| P3 | 2ms | 10ms |

Based on arrival time, process P1 is executed first, for 18ms. After that, process P2 is
executed for 7ms and finally process P3 is executed for 10ms. Note that if the arrival times
of two processes are the same, the CPU may select either one.

| Process | Waiting Time | Turnaround Time |

| P1 | 0ms | 18ms |

| P2 | 16ms | 23ms |

| P3 | 23ms | 33ms |

Total waiting time: (0 + 16 + 23) = 39ms
Average waiting time: 39/3 = 13ms

Total turnaround time: (18 + 23 + 33) = 74ms
Average turnaround time: 74/3 ≈ 24.67ms
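The worked example can be reproduced with a short simulation of the FCFS steps. This is a sketch; the (arrival, burst) values are the ones from the example above:

```python
def fcfs(processes):
    """processes: list of (name, arrival, burst) tuples.
    Returns {name: (waiting_time, turnaround_time)}."""
    result = {}
    time = 0
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        start = max(time, arrival)          # CPU may sit idle until the process arrives
        time = start + burst                # run the process to completion
        result[name] = (start - arrival, time - arrival)  # (waiting, turnaround)
    return result

procs = [("P1", 0, 18), ("P2", 2, 7), ("P3", 2, 10)]
times = fcfs(procs)   # {'P1': (0, 18), 'P2': (16, 23), 'P3': (23, 33)}
```

Python's `sorted` is stable, so processes with equal arrival times keep their submission order, matching the FCFS tie-breaking described above.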

Advantages of FCFS:
It is the simplest scheduling algorithm and is easy to implement.
Shortest Job First (SJF)
Shortest Job First (SJF) is a CPU scheduling algorithm where the process with the shortest burst
time is executed first.
It is optimal in terms of minimizing average waiting time but suffers from starvation, where
longer processes may wait indefinitely if shorter processes keep arriving.
Types of SJF
1. Non-Preemptive SJF
○ The CPU is allocated to the process with the shortest burst time.
○ Once a process starts execution, it cannot be preempted until it finishes.
2. Preemptive SJF
○ If a newly arrived process has a burst time shorter than the remaining time of the
currently running process, the CPU is preempted.
Algorithm Steps for Non-Preemptive SJF
1. Sort all processes by arrival time.
2. Select the process with the shortest burst time among the available processes.
3. Execute the selected process until completion.
4. Repeat the process for remaining processes.
5. Calculate waiting time (WT) and turnaround time (TAT) for each process.
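A minimal sketch of these steps follows. Since the original example table is not reproduced in these notes, the workload below is hypothetical:

```python
def sjf(processes):
    """Non-preemptive SJF. processes: list of (name, arrival, burst).
    Returns {name: completion_time}."""
    pending = sorted(processes, key=lambda p: p[1])   # step 1: sort by arrival time
    time, done = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                        # CPU idle: jump to the next arrival
            time = pending[0][1]
            continue
        # Step 2: among the available processes, pick the shortest burst.
        name, arrival, burst = min(ready, key=lambda p: p[2])
        time += burst                        # step 3: run to completion (no preemption)
        done[name] = time
        pending.remove((name, arrival, burst))
    return done

# Hypothetical workload: after P1 finishes, P2 (burst 3) runs before P3 (burst 5).
done = sjf([("P1", 0, 6), ("P2", 1, 3), ("P3", 1, 5)])
```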
Example (Non-preemptive SJF):

Example of Preemptive SJF


Consider the same processes, but now with preemption.

Shortest Remaining Time First (SRTF)


Shortest Remaining Time First (SRTF), also called Shortest Remaining Time Next (SRTN), is the
preemptive version of Shortest Job First (SJF). It selects the process with the shortest
remaining burst time for execution. If a new process arrives with a smaller burst time than
the remaining time of the currently running process, the CPU preempts the current process and
executes the new one.
SRTN minimizes average waiting time but suffers from high context switching and starvation of
long processes.
Algorithm Steps
Sort processes by arrival time.
Start executing the process with the shortest burst time.
When a new process arrives, compare its burst time with the remaining time of the current
process.
If the new process has a shorter burst time, preempt the current process and run the new one.
Otherwise, continue executing the current process.
Repeat until all processes are executed.
Calculate turnaround time (TAT) and waiting time (WT) for each process.
Example

| Process | Arrival Time | Burst Time |

| P1 | 0ms | 7ms |

| P2 | 2ms | 4ms |

| P3 | 4ms | 1ms |

| P4 | 5ms | 4ms |

Step-by-Step Execution
At t = 0, only P1 (7ms) is available → Execute P1.
At t = 2, P2 (4ms) arrives → P2 has a shorter burst time, so P1 is preempted, and P2 starts
execution.
At t = 4, P3 (1ms) arrives → P3 has the shortest burst time, so P2 is preempted, and P3 executes.
At t = 5, P4 (4ms) arrives → P2 resumes execution (it has 2ms remaining, the shortest among the ready processes).
Once P2 finishes, P4 executes, followed by P1 resuming.
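The walk-through above can be reproduced with a 1ms-step simulation. This is a sketch; the process data is taken from the trace above:

```python
def srtf(processes):
    """Preemptive SJF (SRTF). processes: list of (name, arrival, burst).
    Returns {name: completion_time}. Simulates one time unit per step."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: arr for name, arr, _ in processes}
    time, done = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                       # CPU idle: no process has arrived yet
            time += 1
            continue
        current = min(ready, key=lambda n: remaining[n])   # shortest remaining time
        remaining[current] -= 1                            # run for one time unit
        time += 1
        if remaining[current] == 0:
            done[current] = time
            del remaining[current]
    return done

# Data from the step-by-step trace: P1(0,7), P2(2,4), P3(4,1), P4(5,4).
done = srtf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
```

The simulation completes P3 at t=5, P2 at t=7, P4 at t=11, and P1 at t=16, matching the trace. Re-evaluating the shortest remaining time every millisecond is what produces the preemptions; it is also why SRTF incurs high context-switching overhead in practice.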

Priority Scheduling


Priority Scheduling is a CPU scheduling algorithm where each process is assigned a priority. The
CPU executes the highest priority process first. If two processes have the same priority, they are
scheduled based on their arrival time (FCFS as tie-breaker).
Priority scheduling can be:
Preemptive → A higher-priority process can interrupt a lower-priority one.
Non-Preemptive → The CPU does not preempt the running process until it finishes.
Priority Assignment
Priorities can be static (assigned before execution) or dynamic (can change during execution).
Lower numerical values may represent higher priority (e.g., Priority 1 is higher than Priority 5).
Higher numerical values may represent higher priority (depends on the system).
Algorithm Steps
Sort processes based on their priority (lowest number or highest priority first).
If Preemptive, check if a newly arrived process has a higher priority than the currently running
process.
If Non-Preemptive, the CPU runs a process until it completes, then moves to the next highest
priority process.
Repeat until all processes are executed.
Calculate waiting time (WT) and turnaround time (TAT).
Example:

Lower value = higher priority
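Since the original example table is not reproduced here, below is a sketch of the non-preemptive variant with hypothetical data, using lower number = higher priority and arrival time (FCFS) as the tie-breaker:

```python
def priority_np(processes):
    """Non-preemptive priority scheduling.
    processes: list of (name, arrival, burst, priority); lower number = higher priority.
    Returns the order in which processes complete."""
    pending = list(processes)
    time, order = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                               # CPU idle until the next arrival
            time = min(p[1] for p in pending)
            continue
        # Highest priority (lowest number) first; arrival time breaks ties (FCFS).
        name, arrival, burst, prio = min(ready, key=lambda p: (p[3], p[1]))
        time += burst                               # run to completion, no preemption
        order.append(name)
        pending.remove((name, arrival, burst, prio))
    return order

# All arrive at t=0; P2 (priority 1) runs first, then P3 (2), then P1 (3).
order = priority_np([("P1", 0, 5, 3), ("P2", 0, 3, 1), ("P3", 0, 4, 2)])
```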


Multilevel Queue Scheduling
Multilevel Queue (MLQ) Scheduling is a CPU scheduling algorithm that divides processes into
multiple priority-based queues. Each queue has its own scheduling policy (e.g., FCFS, RR,
Priority Scheduling), and processes do not move between queues.
Key Features
Processes are classified into different queues based on characteristics such as priority, memory
usage, or execution time.
Each queue has its own scheduling algorithm, such as:
Foreground (interactive) processes → Round Robin (RR)
Background (batch) processes → FCFS or SJF
Queue scheduling follows a strict priority order (higher-priority queues are executed first).
Example
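The strict-priority rule can be sketched as: always serve the highest-priority non-empty queue. The queue names follow the foreground/background split above; the dispatch logic is a simplification (each queue's internal RR or FCFS policy is not modeled):

```python
from collections import deque

# Queues in strict priority order: foreground (interactive) before background (batch).
queues = {
    "foreground": deque(),   # would be scheduled with Round Robin (RR)
    "background": deque(),   # would be scheduled with FCFS or SJF
}

def pick_next():
    """Return the next process to run: highest-priority non-empty queue wins."""
    for name in ("foreground", "background"):   # strict priority order
        if queues[name]:
            return queues[name].popleft()
    return None                                 # no process is ready

queues["background"].extend(["batch1", "batch2"])
queues["foreground"].append("editor")

first = pick_next()    # "editor": the foreground queue outranks background
second = pick_next()   # "batch1": foreground is now empty
```

Because background processes only run when the foreground queue is empty, strict-priority MLQ can starve low-priority queues; multilevel feedback queues address this by letting processes move between queues.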

Round Robin (RR) Scheduling Algorithm


Overview
Round Robin (RR) is a preemptive scheduling algorithm that assigns a fixed time quantum (or
time slice) to each process in a cyclic order. If a process does not complete within its assigned
quantum, it is preempted and placed at the end of the ready queue. This continues until all
processes are completed.
Key Features of RR:
Time-sharing: Ensures fair CPU distribution among processes.
Preemption: A process is interrupted after the time quantum expires.
Fairness: All processes get equal CPU time in turns.
Response Time Improvement: Shortens response time for interactive users.
Algorithm Steps
Set a time quantum (e.g., 4ms).
Place processes in a ready queue in the order of arrival.
Execute the first process for the time quantum.
If the process completes within the time quantum, remove it from the queue.
If the process does not complete, move it to the back of the queue and continue with the next
process.
Repeat the process until all processes are completed.
Example of Round Robin Scheduling
Let's consider five processes with the following burst times:

Step-by-Step Execution (Quantum = 4ms)


Initial Ready Queue: P1 → P2 → P3 → P4 → P5

Final Completion Times

Average Waiting Time = (26 + 3 + 18 + 19 + 24) / 5 = 18 ms
Average Turnaround Time = (36 + 7 + 24 + 27 + 36) / 5 = 26 ms
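The example can be reproduced with a queue simulation of the steps above. The original burst-time table is not reproduced in these notes, so the (arrival, burst) data below is hypothetical, chosen to be consistent with the averages quoted above (quantum = 4ms; processes arriving during a time slice enter the queue before the preempted process is re-queued):

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, arrival, burst).
    Returns {name: (waiting_time, turnaround_time)}."""
    arrival = {name: a for name, a, _ in processes}
    burst = {name: b for name, _, b in processes}
    remaining = dict(burst)
    pending = deque(sorted(processes, key=lambda p: p[1]))
    queue, result, time = deque(), {}, 0

    def admit(upto):
        # Move processes that have arrived by `upto` into the ready queue.
        while pending and pending[0][1] <= upto:
            queue.append(pending.popleft()[0])

    admit(time)
    while queue or pending:
        if not queue:                       # CPU idle until the next arrival
            time = pending[0][1]
            admit(time)
        name = queue.popleft()
        run = min(quantum, remaining[name]) # execute for at most one quantum
        time += run
        remaining[name] -= run
        admit(time)                         # arrivals during the slice enter first
        if remaining[name] == 0:
            tat = time - arrival[name]
            result[name] = (tat - burst[name], tat)
        else:
            queue.append(name)              # preempted: back of the ready queue
    return result

procs = [("P1", 0, 10), ("P2", 1, 4), ("P3", 2, 6), ("P4", 3, 8), ("P5", 4, 12)]
times = round_robin(procs, quantum=4)
avg_wait = sum(w for w, _ in times.values()) / 5    # 18.0
avg_tat = sum(t for _, t in times.values()) / 5     # 26.0
```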
Advantages of Round Robin Scheduling
Fair: Every process gets CPU time.
Efficient for Interactive Systems: Quick response time for multiple users.
Prevents Starvation: No process waits indefinitely.
Works Well for Time-Sharing Systems
Disadvantages of Round Robin Scheduling
Increases Context Switching Overhead: Frequent switching between processes adds overhead.
Poor Performance for Long Processes: If a process requires a long time, it gets executed in
multiple small portions, increasing turnaround time.
Choosing a Bad Time Quantum Can Hurt Performance: If quantum is too small, too much CPU
time is spent on context switching. If quantum is too large, it behaves like FCFS, causing delays.
