
Module No: 2 Process and Process Scheduling

2.1 Concept of a Process, Process States, Process Description, Process Control Block
2.2 Uniprocessor Scheduling-Types: Preemptive and Non-preemptive scheduling algorithms (FCFS,
SJF, SRTN, Priority, RR)
2.3 Threads: Definition and Types, Concept of Multithreading
2.1.1 Concept of a Process
A process is a program in execution. For example, when we write a program in C or C++ and compile
it, the compiler creates binary code. The original code and binary code are both programs. When we
actually run the binary code, it becomes a process.
• A process is an 'active' entity, whereas a program is a 'passive' entity.
• A single program can create many processes when run multiple times; for example, opening a .exe or binary file several times creates several process instances.
A process is defined as an entity which represents the basic unit of work to be implemented in the
system.
When a program is loaded into memory and becomes a process, it can be divided into four sections ─ stack, heap, text and data. A simplified layout of a process inside main memory contains the following components:

1. Stack: The process stack contains temporary data such as method/function parameters, return addresses and local variables.
2. Heap: This is memory that is dynamically allocated to the process during its run time.
3. Text: This section contains the compiled program code. The current activity is represented by the value of the Program Counter and the contents of the processor's registers.
4. Data: This section contains the global and static variables.

Process Management Tasks:


Process management is a key part of operating systems that support multiprogramming or multitasking.

• Process Creation and Termination: Process creation involves creating a Process ID, setting
up Process Control Block, etc. A process can be terminated either by the operating system or
by the parent process. Process termination involves clearing all resources allocated to it.
• CPU Scheduling: In a multiprogramming system, multiple processes need to get the CPU. It
is the job of Operating System to ensure smooth and efficient execution of multiple processes.
• Deadlock Handling: Making sure that the system does not reach a state where two or more processes cannot proceed due to a cyclic dependency on each other.
• Inter-Process Communication: Operating System provides facilities such as shared memory
and message passing for cooperating processes to communicate.
• Process Synchronization: Process Synchronization is the coordination of execution of
multiple processes in a multiprogramming system to ensure that they access shared resources
(like memory) in a controlled and predictable manner.
2.1.2 Process Life Cycle:
When a process executes, it passes through different states. These stages may differ in different
operating systems, and the names of these states are also not standardized.
In general, a process can have one of the following five states at a time.

1. Start: This is the initial state when a process is first started/created.
2. Ready: The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the operating system so that they can run. A process may come into this state after the Start state, or while running, if it is interrupted by the scheduler to assign the CPU to some other process.
3. Running: Once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions.
4. Waiting: The process moves into the waiting state if it needs to wait for a resource, such as user input, or for a file to become available.
5. Terminated or Exit: Once the process finishes its execution, or is terminated by the operating system, it is moved to the terminated state, where it waits to be removed from main memory.
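The five states and the legal transitions between them can be sketched as a small lookup table. This is only an illustration of the model described above, not an actual operating system data structure; the names TRANSITIONS and move are ours:

```python
# Legal transitions in the five-state process model described above.
TRANSITIONS = {
    "start": {"ready"},
    "ready": {"running"},
    "running": {"ready", "waiting", "terminated"},  # preempted / blocked / exits
    "waiting": {"ready"},        # the awaited resource or event becomes available
    "terminated": set(),
}

def move(state, new_state):
    """Return new_state if the transition is legal, else raise."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

# A process is created, runs, blocks for I/O, resumes, and exits.
state = "start"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    state = move(state, nxt)
print(state)  # terminated
```

Note that there is no direct waiting-to-running edge: a blocked process must first become ready and be picked by the scheduler again.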
2.1.3 Process Control Block (PCB):
A Process Control Block is a data structure maintained by the Operating System for every process. The
PCB is identified by an integer process ID (PID). A PCB keeps all the information needed to keep track
of a process as listed below in the table −

1. Process State: The current state of the process, i.e., whether it is ready, running, waiting, or terminated.
2. Process privileges: Required to allow or disallow access to system resources.
3. Process ID: Unique identification for each process in the operating system.
4. Pointer: A pointer to the parent process.
5. Program Counter: A pointer to the address of the next instruction to be executed for this process.
6. CPU registers: The various CPU registers whose contents must be saved when the process leaves the running state, so that execution can resume later.
7. CPU Scheduling Information: Process priority and other scheduling information required to schedule the process.
8. Memory management information: Information such as the page table, memory limits and segment table, depending on the memory management scheme used by the operating system.
9. Accounting information: The amount of CPU time used for process execution, time limits, execution ID, etc.
10. I/O status information: The list of I/O devices allocated to the process.
The architecture of a PCB is completely dependent on the operating system and may contain different information in different operating systems.

The PCB is maintained for a process throughout its lifetime and is deleted once the process terminates.
Attributes of a Process:
A process has several important attributes that help the operating system manage and control it. These
attributes are stored in a structure called the Process Control Block (PCB) (sometimes called a task
control block). The PCB keeps all the key information about the process, including:
1. Process ID (PID): A unique number assigned to each process so the operating system can
identify it.
2. Process State: This shows the current status of the process, like whether it is running, waiting,
or ready to execute.
3. Priority and other CPU Scheduling Information: Data that helps the operating system decide
which process should run next, like priority levels and pointers to scheduling queues.
4. I/O Information: Information about input/output devices the process is using.
5. File Descriptors: Information about open files and network connections.
6. Accounting Information: Tracks how long the process has run, the amount of CPU time used,
and other resource usage data.
7. Memory Management Information: Details about the memory space allocated to the process,
including where it is loaded in memory and the structure of its memory layout (stack, heap,
etc.).
These attributes in the PCB help the operating system control, schedule, and manage each process
effectively.
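As an illustration, the PCB fields listed above can be collected into a single record. The field names below are hypothetical, chosen only to mirror the list; real kernels use C structures (for example, Linux's task_struct):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative Process Control Block; field names are hypothetical."""
    pid: int                                        # unique process ID
    state: str = "new"                              # ready / running / waiting ...
    priority: int = 0                               # CPU scheduling information
    program_counter: int = 0                        # next instruction address
    registers: dict = field(default_factory=dict)   # saved CPU registers
    open_files: list = field(default_factory=list)  # I/O status information
    memory_limits: tuple = (0, 0)                   # memory management information
    cpu_time_used: int = 0                          # accounting information

pcb = PCB(pid=42, priority=3)
pcb.state = "ready"        # the scheduler updates the state on each transition
```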
2.2.1 Uniprocessor Scheduling-Types: Preemptive and Non-preemptive scheduling
algorithms
Scheduling is the method by which processes are given access to the CPU. Efficient scheduling is essential
for optimal system performance and user experience. There are two primary types of CPU scheduling:
preemptive and non-preemptive.
2.2.1.1 Preemptive Scheduling
In preemptive scheduling, the CPU can be taken away from a running process if a higher-priority
process arrives. This allows multiple processes to share the CPU more efficiently.
Characteristics:
• A process can be interrupted and moved back to the ready queue.
• Provides better responsiveness but has more context switching overhead.
• Used in time-sharing systems.
Examples of Preemptive Scheduling Algorithms:
a) Round Robin (RR)
• Each process gets a fixed time slice (quantum).
• After the quantum expires, the process is moved back to the ready queue.
• Ensures fairness but may cause higher context switching overhead.
b) Shortest Remaining Time First (SRTF)
• A preemptive version of Shortest Job Next (SJN).
• The process with the shortest remaining execution time is executed first.
• If a new process arrives with a shorter remaining time, it preempts the current process.
c) Priority Scheduling (Preemptive)
• Each process has a priority.
• The CPU is assigned to the process with the highest priority.
• A new higher-priority process can preempt the running one.
d) Multilevel Queue Scheduling
• Processes are divided into multiple queues based on priority.
• Higher-priority queues preempt lower-priority ones.
Advantages of Preemptive Scheduling
• Because a process cannot monopolize the processor, the method is more reliable and resists denial of service by a runaway process.
• A running task can always be preempted, so no single task can block the completion of other tasks.
• The average response time is improved, which makes this method more advantageous in a multiprogramming environment.
• Most modern operating systems (Windows, Linux and macOS) implement preemptive scheduling.
Disadvantages of Preemptive Scheduling
• More complex to implement in operating systems.
• Suspending the running process, switching the context, and dispatching the new incoming process all take extra time.
• Might cause starvation: a low-priority process might be preempted again and again if multiple high-priority processes keep arriving.
• Causes concurrency problems, as processes can be stopped while they are accessing shared memory (or variables) or resources.
2.2.1.2 Non-Preemptive Scheduling
In non-preemptive scheduling, a running process cannot be interrupted by the operating system; it
voluntarily relinquishes control of the CPU. In this scheduling, once the resources (CPU cycles) are
allocated to a process, the process holds the CPU till it gets terminated or reaches a waiting state.
In non-preemptive scheduling, once a process starts executing, it cannot be interrupted until it
completes or voluntarily yields the CPU (e.g., I/O operation).
Characteristics:
• Simple to implement but can cause long waiting times (e.g., for short processes waiting behind
long ones).
• Less overhead due to no context switching during execution.
• Can lead to convoy effect (long processes delaying shorter ones).
Examples of Non-Preemptive Scheduling Algorithms:
a) First Come, First Served (FCFS)
• The first process in the ready queue gets the CPU.
• Simple but can lead to convoy effect (large jobs delaying small ones).
b) Shortest Job Next (SJN)
• The process with the shortest burst time is executed first.
• Minimizes average waiting time but not practical in real-time systems.
c) Priority Scheduling (Non-Preemptive)
• The process with the highest priority is selected first.
• If two processes have the same priority, FCFS is used as a tiebreaker.
Advantages of Non-Preemptive Scheduling
• It is easy to implement in an operating system; it was used in Windows 3.11 and classic Mac OS.
• It has a minimal scheduling burden.
• Less computational resources are used.
Disadvantages of Non-Preemptive Scheduling
• It is open to denial of service: a malicious or buggy process can hold the CPU forever.
• Since time slicing (such as Round Robin) cannot be implemented, the average response time for short and interactive processes suffers.

2.2.1.3 First Come, First Served (FCFS) Scheduling:


FCFS (First Come, First Served) is the simplest CPU scheduling algorithm. It follows the non-
preemptive scheduling approach, where the process that arrives first in the ready queue gets
executed first.
FCFS Working:
1. The CPU schedules processes in the order they arrive in the ready queue.
2. The process continues execution until it completes; no preemption occurs.
3. Once completed, the next process in the queue starts execution.

Consider a set of processes with their arrival times and burst times.

Process Arrival Time Burst Time

P1 0 5

P2 1 3

P3 2 8

P4 3 6

Solution:
Gantt Chart Representation:
P1 P2 P3 P4
0 5 8 16 22
Calculations
1. Completion Time (CT)
o P1 = 5
o P2 = 8
o P3 = 16
o P4 = 22
2. Turnaround Time (TAT) = Completion Time - Arrival Time
o P1 = 5 - 0 = 5
o P2 = 8 - 1 = 7
o P3 = 16 - 2 = 14
o P4 = 22 - 3 = 19
3. Waiting Time (WT) = Turnaround Time - Burst Time
o P1 = 5 - 5 = 0
o P2 = 7 - 3 = 4
o P3 = 14 - 8 = 6
o P4 = 19 - 6 = 13
4. Average Waiting Time (AWT)
(0+4+6+13)/4=5.75 ms
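The calculation above can be reproduced with a short simulation. This is a minimal sketch; the function name fcfs and the (name, arrival, burst) tuple format are our own conventions:

```python
def fcfs(processes):
    """Non-preemptive FCFS. processes: list of (name, arrival, burst) tuples."""
    time, results = 0, {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)      # CPU may sit idle until the next arrival
        time += burst                  # run to completion, no preemption
        tat = time - arrival           # turnaround = completion - arrival
        results[name] = (time, tat, tat - burst)   # (CT, TAT, WT)
    return results

procs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8), ("P4", 3, 6)]
res = fcfs(procs)
avg_wt = sum(r[2] for r in res.values()) / len(res)
print(res["P4"], avg_wt)   # (22, 19, 13) 5.75
```

Running it gives exactly the completion, turnaround and waiting times computed above.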

Advantages of FCFS:

• Simple to implement
• Fair scheduling (No process is starved)
Disadvantages of FCFS:

• High waiting time (if a long process arrives first, short ones must wait – called convoy effect)
• Poor response time for short processes
• Non-preemptive (cannot switch tasks if a more urgent process arrives)

2.2.1.4 Shortest Job First (SJF) Scheduling Algorithm


SJF (Shortest Job First) is a CPU scheduling algorithm where the process with the shortest burst time
is executed first. It can be:
• Non-preemptive (Once a process starts, it runs until completion).
• Preemptive (SRTF - Shortest Remaining Time First) (A new shorter process can interrupt
the current one).
SJF Working:
1. The scheduler selects the process with the smallest burst time from the ready queue.
2. If multiple processes have the same burst time, FCFS is used as a tiebreaker.
3. The selected process runs to completion (non-preemptive) or can be interrupted by a shorter
process (preemptive).
Example:
Consider a set of processes:

Process Arrival Time Burst Time

P1 0 7

P2 2 4

P3 4 1

P4 5 4

Solution:
Gantt Chart:
P1 P3 P2 P4
0 7 8 12 16
Calculations:
1. Completion Time (CT)
o P1 = 7
o P2 = 12
o P3 = 8
o P4 = 16
2. Turnaround Time (TAT) = Completion Time - Arrival Time
o P1 = 7 - 0 = 7
o P2 = 12 - 2 = 10
o P3 = 8 - 4 = 4
o P4 = 16 - 5 = 11
3. Waiting Time (WT) = Turnaround Time - Burst Time
o P1 = 7 - 7 = 0
o P2 = 10 - 4 = 6
o P3 = 4 - 1 = 3
o P4 = 11 - 4 = 7
4. Average Waiting Time (AWT)
(0+6+3+7)/ 4=4 ms
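The non-preemptive selection rule above (smallest burst among arrived processes, FCFS on ties) can be sketched in a few lines; the name sjf and the tuple layout are our own:

```python
def sjf(processes):
    """Non-preemptive SJF. processes: list of (name, arrival, burst) tuples."""
    pending = sorted(processes, key=lambda p: p[1])   # order by arrival
    time, results = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                 # CPU idle: jump to the next arrival
            time = pending[0][1]
            continue
        # pick the shortest burst; earlier arrival (FCFS) breaks ties
        job = min(ready, key=lambda p: (p[2], p[1]))
        pending.remove(job)
        name, arrival, burst = job
        time += burst
        tat = time - arrival
        results[name] = (time, tat, tat - burst)      # (CT, TAT, WT)
    return results

res = sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(res)   # P1 runs first (only arrival at t=0), then P3, P2, P4
```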

Advantages of SJF

• Minimizes average waiting time (optimal scheduling for batch systems)


• Better CPU utilization compared to FCFS
Disadvantages of SJF

• Difficult to predict burst time accurately


• Starvation (Longer processes may get delayed if shorter ones keep arriving)
• Not suitable for interactive systems

2.2.1.5 Shortest Remaining Time Next:


Shortest Remaining Time Next (SRTN) is the preemptive version of Shortest Job First (SJF). It
selects the process with the shortest remaining burst time for execution. If a new process arrives with a
shorter burst time than the currently running process, the CPU switches to the new process.
SRTN Working:
1. At every time unit, the scheduler picks the process with the smallest remaining burst time.
2. If a new process arrives with a shorter burst time than the currently running process, it
preempts the running process.
3. The preempted process is placed back in the ready queue and will continue when it has the
shortest remaining time.
4. This continues until all processes are completed.
Example of SRTN
Consider a set of processes:

Process Arrival Time Burst Time

P1 0 8

P2 1 4

P3 2 2

P4 3 1

Solution:
Gantt Chart:

P1 P2 P3 P4 P3 P2 P1
0 1 2 3 4 5 8 15
Calculations
1. Completion Time (CT)
o P1 = 15
o P2 = 8
o P3 = 5
o P4 = 4
2. Turnaround Time (TAT) = Completion Time - Arrival Time
o P1 = 15 - 0 = 15
o P2 = 8 - 1 = 7
o P3 = 5 - 2 = 3
o P4 = 4 - 3 = 1
3. Waiting Time (WT) = Turnaround Time - Burst Time
o P1 = 15 - 8 = 7
o P2 = 7 - 4 = 3
o P3 = 3 - 2 = 1
o P4 = 1 - 1 = 0
4. Average Waiting Time (AWT)
(7+3+1+0)/4=2.75 ms
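A unit-by-unit simulation is a good check here: the bursts total 8 + 4 + 2 + 1 = 15 ms with no idle time, so the last process must finish at t = 15. The sketch below uses our own srtn name and tuple layout, and breaks the t = 3 tie (P3 and P4 both have 1 ms left) in favour of the earlier arrival; the average waiting time is 2.75 ms whichever way that tie goes:

```python
def srtn(processes):
    """Preemptive SJF (SRTN), simulated one time unit at a time.
    processes: list of (name, arrival, burst) tuples."""
    arrival = {n: a for n, a, b in processes}
    burst = {n: b for n, a, b in processes}
    remaining = dict(burst)
    time, results = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                       # no process has arrived yet
            time += 1
            continue
        n = min(ready, key=lambda p: remaining[p])  # least remaining time
        remaining[n] -= 1                   # run it for one time unit
        time += 1
        if remaining[n] == 0:
            del remaining[n]
            tat = time - arrival[n]
            results[n] = (time, tat, tat - burst[n])   # (CT, TAT, WT)
    return results

res = srtn([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2), ("P4", 3, 1)])
print(res["P1"], res["P2"])   # (15, 15, 7) (8, 7, 3)
```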
Advantages of SRTN

• Minimizes average waiting time and turnaround time


• More responsive than non-preemptive SJF
• Efficient for interactive systems
Disadvantages of SRTN

• High overhead due to frequent context switching


• Starvation (longer processes may be delayed indefinitely)
• Difficult to predict burst times accurately

2.2.1.6 Priority Scheduling:


Priority Scheduling is a CPU scheduling algorithm where each process is assigned a priority, and the
CPU selects the process with the highest priority for execution.
Priority can be assigned in two ways:
• Lower number = Higher priority (e.g., 1 is the highest priority)
• Higher number = Higher priority (e.g., 10 is the highest priority)
Types of Priority Scheduling:
1. Non-Preemptive Priority Scheduling → The CPU executes a process until it completes, even
if a higher-priority process arrives.
2. Preemptive Priority Scheduling → The CPU switches to a higher-priority process
immediately if one arrives.
Example of Non-Preemptive Priority Scheduling

Consider the following processes:


Process Arrival Time Burst Time Priority
P1 0 5 2
P2 1 3 1
P3 2 8 4
P4 3 6 3

Here, lower priority number = higher priority.

Gantt Chart:
P1 P2 P4 P3
0 5 8 14 22
Calculations
1. Completion Time (CT)
o P1 = 5
o P2 = 8
o P3 = 22
o P4 = 14
2. Turnaround Time (TAT) = Completion Time - Arrival Time
o P1 = 5 - 0 = 5
o P2 = 8 - 1 = 7
o P3 = 22 - 2 = 20
o P4 = 14 - 3 = 11
3. Waiting Time (WT) = Turnaround Time - Burst Time
o P1 = 5 - 5 = 0
o P2 = 7 - 3 = 4
o P3 = 20 - 8 = 12
o P4 = 11 - 6 = 5
4. Average Waiting Time (AWT)
(0+4+12+5)/4=5.25 ms
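The same selection loop as SJF works here with the priority field as the key (lower number = higher priority, FCFS on ties). A minimal sketch with our own priority_np name and tuple layout:

```python
def priority_np(processes):
    """Non-preemptive priority scheduling; a lower number means higher priority.
    processes: list of (name, arrival, burst, priority) tuples."""
    pending = sorted(processes, key=lambda p: p[1])   # order by arrival
    time, results = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                 # CPU idle until the next arrival
            time = pending[0][1]
            continue
        # highest priority first; earlier arrival (FCFS) breaks ties
        job = min(ready, key=lambda p: (p[3], p[1]))
        pending.remove(job)
        name, arrival, burst, _ = job
        time += burst
        tat = time - arrival
        results[name] = (time, tat, tat - burst)      # (CT, TAT, WT)
    return results

res = priority_np([("P1", 0, 5, 2), ("P2", 1, 3, 1),
                   ("P3", 2, 8, 4), ("P4", 3, 6, 3)])
print(res)   # execution order P1, P2, P4, P3, as in the Gantt chart above
```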

2.2.1.7 Round Robin Scheduling Algorithm:


Round Robin is a preemptive CPU scheduling algorithm where each process is assigned a fixed time
slice (quantum). If a process does not finish within its time slice, it is preempted and moved to the
back of the queue.
Key Features:
• Time-sharing: Each process gets a fair share of CPU time.
• Preemptive: Prevents long-running processes from monopolizing the CPU.
• FIFO (First-In-First-Out) order: Processes are executed in the order they arrive.
• Ideal for multitasking & time-sharing systems.
Example of Round Robin Scheduling
Given Data:

Process Arrival Time Burst Time


P1 0 5
P2 1 3
P3 2 8
P4 3 6

Time Quantum = 3ms

Gantt Chart:
P1 P2 P3 P4 P1 P3 P4 P3
0 3 6 9 12 14 17 20 22
Calculating Waiting Time & Turnaround Time
• Turnaround Time (TAT) = Completion Time - Arrival Time
• Waiting Time (WT) = TAT - Burst Time

Process Arrival Time Burst Time Completion Time Turnaround Time (TAT) Waiting Time (WT)

P1 0 5 14 14 - 0 = 14 14 - 5 = 9

P2 1 3 6 6 - 1 = 5 5 - 3 = 2

P3 2 8 22 22 - 2 = 20 20 - 8 = 12

P4 3 6 20 20 - 3 = 17 17 - 6 = 11

Average Waiting Time:
(9+2+12+11)/4=8.5 ms
Average Turnaround Time:
(14+5+20+17)/4=14 ms
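A queue-based simulation reproduces the Gantt chart above (completions P2 = 6, P1 = 14, P4 = 20, P3 = 22). This is a minimal sketch using our own round_robin name and tuple layout; note the convention that processes arriving during a time slice join the queue ahead of the preempted process:

```python
from collections import deque

def round_robin(processes, quantum):
    """Round Robin. processes: list of (name, arrival, burst) tuples."""
    procs = sorted(processes, key=lambda p: p[1])
    arrival = {n: a for n, a, b in procs}
    burst = {n: b for n, a, b in procs}
    remaining = dict(burst)
    queue, time, results, i = deque(), 0, {}, 0
    while len(results) < len(procs):
        while i < len(procs) and procs[i][1] <= time:   # admit new arrivals
            queue.append(procs[i][0]); i += 1
        if not queue:                                    # CPU idle
            time = procs[i][1]
            continue
        n = queue.popleft()
        run = min(quantum, remaining[n])                 # one time slice
        time += run
        remaining[n] -= run
        # arrivals during this slice queue ahead of the preempted process
        while i < len(procs) and procs[i][1] <= time:
            queue.append(procs[i][0]); i += 1
        if remaining[n] == 0:
            tat = time - arrival[n]
            results[n] = (time, tat, tat - burst[n])     # (CT, TAT, WT)
        else:
            queue.append(n)                              # back of the queue
    return results

res = round_robin([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8), ("P4", 3, 6)], 3)
print(res)   # completions: P2=6, P1=14, P4=20, P3=22
```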

2.3 Threads
A thread is the smallest unit of execution within a process. A process can have multiple threads, each running independently but sharing the same resources (memory, files, data, etc.). A thread is a lightweight process created by a process: a single sequential stream of execution within that process.
Processes are used to execute large, 'heavyweight' jobs such as working in Word, while threads are used to carry out smaller, 'lightweight' jobs such as auto-saving a Word document.
Each thread has its own

• program counter that keeps track of which instruction to execute next.


• system registers which hold its current working variables.
• stack which contains the execution history.

2.3.1 Types of Threads


Threads are classified into two main types:
1. User-Level Threads (ULT)
o Managed by the application without kernel support.
o Faster and more efficient but cannot take advantage of multi-core systems.
o Example: Java threads, POSIX threads.
2. Kernel-Level Threads (KLT)
o Managed by the operating system kernel.
o Can run on multiple processors, improving performance.
o Example: Windows threads, Linux threads.
2.3.2 Single Threaded Vs. Multi-Threaded System:

• A single-threaded process is a process with a single thread.


• A multi-threaded process is a process with multiple threads.
• Each thread has its own registers, stack and program counter, but all threads share the code and data segments.
1. Single-Threaded System
A single-threaded system executes one task at a time. It processes instructions sequentially and does
not perform multiple tasks concurrently.
Characteristics:

• Only one thread executes at a time.


• Simple and easy to manage.
• No need for thread synchronization.
• Slower for multitasking.
Example:
• A program that reads a file and prints its contents sequentially.
• Simple Python script running a loop.
2. Multi-Threaded System
A multi-threaded system allows multiple threads to run concurrently within a process, improving
efficiency and performance.
Characteristics:

• Multiple threads share the same resources.


• Faster execution through parallelism.
• Requires synchronization to avoid conflicts.
• Better CPU utilization.
Example:

• Web browsers handling multiple tabs.


• Video streaming while downloading files.
• A server handling multiple client requests simultaneously.
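These characteristics show up in a small Python example: four threads update one shared variable while each keeps its own stack and locals, and a lock provides the synchronization mentioned above. A minimal sketch using the standard threading module; the names counter and worker are ours:

```python
import threading

counter = 0                     # shared data: all threads see the same variable
lock = threading.Lock()         # synchronization to avoid a race condition

def worker(n):
    """Each thread runs this function with its own stack and local variables."""
    global counter
    for _ in range(n):
        with lock:              # only one thread updates the counter at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                    # wait for all four threads to finish
print(counter)                  # 4000: shared memory updated safely
```

Without the lock, the unsynchronized read-modify-write on counter could lose updates; this is exactly the kind of conflict the synchronization requirement refers to.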

2.3.3 Multi-Threading:
Multithreading is a technique that allows a process to run multiple threads simultaneously, improving
CPU utilization and performance.
Key Features of Multithreading

• Multiple threads run within a single process.


• Threads share the same memory and resources.
• Faster than creating multiple processes.
• Used in multitasking and parallel computing.
Types of Multithreading Models
1. Many-to-One: Multiple user threads mapped to a single kernel thread.
2. One-to-One: Each user thread has a corresponding kernel thread.
3. Many-to-Many: Multiple user threads are mapped to multiple kernel threads.
Advantages of Multithreading

• Faster execution due to parallelism.


• Efficient CPU utilization.
• Allows concurrent task execution.
• Reduces resource consumption compared to multiple processes.
Disadvantages of Multithreading

• Complexity in programming (synchronization issues).


• Risk of deadlocks and race conditions.
• Increased overhead due to context switching.
