OS Module 2
2.1 Concept of a Process, Process States, Process Description, Process Control Block
2.2 Uniprocessor Scheduling-Types: Preemptive and Non-preemptive scheduling algorithms (FCFS,
SJF, SRTN, Priority, RR)
2.3 Threads: Definition and Types, Concept of Multithreading
2.1.1 Concept of a Process
A process is a program in execution. For example, when we write a program in C or C++ and compile
it, the compiler creates binary code. The original code and binary code are both programs. When we
actually run the binary code, it becomes a process.
• A process is an 'active' entity, whereas a program is a 'passive' entity.
• A single program can create many processes when run multiple times; for example, when we open a .exe or binary file multiple times, multiple instances begin.
A process is defined as an entity which represents the basic unit of work to be implemented in the
system.
When a program is loaded into memory and becomes a process, it can be divided into four sections: stack, heap, text and data.
1. Stack: contains temporary data such as function parameters, return addresses and local variables.
2. Heap: memory dynamically allocated to the process at run time.
3. Text: the compiled program code; the current activity is represented by the value of the Program Counter and the contents of the processor's registers.
4. Data: contains the global and static variables.
• Process Creation and Termination: Process creation involves creating a Process ID, setting
up Process Control Block, etc. A process can be terminated either by the operating system or
by the parent process. Process termination involves clearing all resources allocated to it.
• CPU Scheduling: In a multiprogramming system, multiple processes need to get the CPU. It
is the job of Operating System to ensure smooth and efficient execution of multiple processes.
• Deadlock Handling: Making sure that the system does not reach a state where two or more processes cannot proceed due to a cyclic dependency on each other.
• Inter-Process Communication: Operating System provides facilities such as shared memory
and message passing for cooperating processes to communicate.
• Process Synchronization: Process Synchronization is the coordination of execution of
multiple processes in a multiprogramming system to ensure that they access shared resources
(like memory) in a controlled and predictable manner.
2.1.2 Process Life Cycle:
When a process executes, it passes through different states. These stages may differ between operating systems, and the names of these states are also not standardized.
In general, a process can be in one of the following five states at a time.
1. Start (New): The initial state when the process is first created.
2. Ready: The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the operating system so that they can run. A process may come into this state after the Start state, or after being interrupted while running so that the scheduler can assign the CPU to some other process.
3. Running: Once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions.
4. Waiting: The process moves into the waiting state if it needs to wait for a resource, such as waiting for user input, or waiting for a file to become available.
5. Terminated (Exit): Once the process finishes its execution, or is terminated by the operating system, it is moved to the terminated state, where it waits to be removed from main memory.
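The five-state model above can be sketched as a small transition table. This is an illustrative Python sketch only; the state and event names below are teaching conventions, not the API of any real operating system.

```python
# Allowed transitions of the five-state process model.
# (state, event) -> next state; anything else is illegal.
TRANSITIONS = {
    ("Start", "admit"): "Ready",
    ("Ready", "dispatch"): "Running",
    ("Running", "timeout"): "Ready",      # preempted by the scheduler
    ("Running", "io_wait"): "Waiting",    # blocked on a resource or I/O
    ("Waiting", "io_done"): "Ready",
    ("Running", "exit"): "Terminated",
}

def next_state(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {state} --{event}-->")

print(next_state("Ready", "dispatch"))   # Running
```

Note that a Waiting process cannot be dispatched directly; it must first return to Ready, which is exactly what the table enforces.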
2.1.3 Process Control Block (PCB):
A Process Control Block is a data structure maintained by the operating system for every process. The PCB is identified by an integer process ID (PID). A PCB keeps all the information needed to keep track of a process, as listed below:
1. Process State: The current state of the process, i.e., whether it is ready, running, waiting, etc.
2. Process ID: A unique identifier for each process in the operating system.
3. Program Counter: A pointer to the address of the next instruction to be executed for this process.
4. CPU Registers: The contents of the various CPU registers, which must be saved when the process leaves the running state so that it can resume correctly later.
5. CPU Scheduling Information: Process priority and other scheduling information required to schedule the process.
6. Memory Management Information: Page tables, memory limits, and segment tables, depending on the memory system used by the operating system.
7. Accounting Information: The amount of CPU time used for process execution, time limits, execution ID, etc.
8. I/O Status Information: A list of the I/O devices allocated to the process.
The architecture of a PCB is completely dependent on the operating system and may contain different information in different operating systems.
The PCB is maintained for a process throughout its lifetime and is deleted once the process terminates.
Attributes of a Process:
A process has several important attributes that help the operating system manage and control it. These
attributes are stored in a structure called the Process Control Block (PCB) (sometimes called a task
control block). The PCB keeps all the key information about the process, including:
1. Process ID (PID): A unique number assigned to each process so the operating system can
identify it.
2. Process State: This shows the current status of the process, like whether it is running, waiting,
or ready to execute.
3. Priority and other CPU Scheduling Information: Data that helps the operating system decide
which process should run next, like priority levels and pointers to scheduling queues.
4. I/O Information: Information about input/output devices the process is using.
5. File Descriptors: Information about open files and network connections.
6. Accounting Information: Tracks how long the process has run, the amount of CPU time used,
and other resource usage data.
7. Memory Management Information: Details about the memory space allocated to the process,
including where it is loaded in memory and the structure of its memory layout (stack, heap,
etc.).
These attributes in the PCB help the operating system control, schedule, and manage each process
effectively.
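The attributes above can be pictured as fields of one record per process. The sketch below is illustrative only: a real PCB is a kernel data structure (for example, `task_struct` in Linux), and every field name here is an invented teaching stand-in.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Simplified, illustrative Process Control Block."""
    pid: int                                   # unique process ID
    state: str = "Start"                       # Start/Ready/Running/Waiting/Terminated
    program_counter: int = 0                   # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU register contents
    priority: int = 0                          # CPU scheduling information
    memory_limits: tuple = (0, 0)              # base and limit, simplified
    open_files: list = field(default_factory=list)  # file descriptors / I/O status
    cpu_time_used: int = 0                     # accounting information

pcb = PCB(pid=42)
pcb.state = "Ready"        # the OS updates the PCB as the process changes state
print(pcb.pid, pcb.state)  # 42 Ready
```

The operating system keeps one such block per process and consults it on every context switch.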
2.2.1 Uniprocessor Scheduling-Types: Preemptive and Non-preemptive scheduling
algorithms
Scheduling is the method by which processes are given access to the CPU. Efficient scheduling is essential
for optimal system performance and user experience. There are two primary types of CPU scheduling:
preemptive and non-preemptive.
2.2.1.1 Preemptive Scheduling
In preemptive scheduling, the CPU can be taken away from a running process if a higher-priority
process arrives. This allows multiple processes to share the CPU more efficiently.
Characteristics:
• A process can be interrupted and moved back to the ready queue.
• Provides better responsiveness but has more context switching overhead.
• Used in time-sharing systems.
Examples of Preemptive Scheduling Algorithms:
a) Round Robin (RR)
• Each process gets a fixed time slice (quantum).
• After the quantum expires, the process is moved back to the ready queue.
• Ensures fairness but may cause higher context switching overhead.
b) Shortest Remaining Time First (SRTF)
• A preemptive version of Shortest Job Next (SJN).
• The process with the shortest remaining execution time is executed first.
• If a new process arrives with a shorter remaining time, it preempts the current process.
c) Priority Scheduling (Preemptive)
• Each process has a priority.
• The CPU is assigned to the process with the highest priority.
• A new higher-priority process can preempt the running one.
d) Multilevel Queue Scheduling
• Processes are divided into multiple queues based on priority.
• Higher-priority queues preempt lower-priority ones.
Advantages of Preemptive Scheduling
• Because a process cannot monopolize the processor, it is a more reliable method and does not allow a denial-of-service by a runaway process.
• A long-running task cannot indefinitely block the completion of other tasks.
• The average response time is improved, which makes this method more advantageous in a multiprogramming environment.
• Most modern operating systems (Windows, Linux and macOS) implement preemptive scheduling.
Disadvantages of Preemptive Scheduling
• More Complex to implement in Operating Systems.
• Suspending the running process, switching the context, and dispatching the new incoming process all take time.
• Might cause starvation: a low-priority process can be preempted again and again if multiple high-priority processes arrive.
• Causes concurrency problems, as processes can be stopped while they are accessing shared memory (or variables) or resources.
2.2.1.2 Non-Preemptive Scheduling
In non-preemptive scheduling, a running process cannot be interrupted by the operating system; it voluntarily relinquishes control of the CPU. Once the CPU cycles are allocated to a process, the process holds the CPU until it terminates or reaches a waiting state (e.g., for an I/O operation).
Characteristics:
• Simple to implement but can cause long waiting times (e.g., for short processes waiting behind
long ones).
• Less overhead due to no context switching during execution.
• Can lead to convoy effect (long processes delaying shorter ones).
Examples of Non-Preemptive Scheduling Algorithms:
a) First Come, First Served (FCFS)
• The first process in the ready queue gets the CPU.
• Simple but can lead to convoy effect (large jobs delaying small ones).
b) Shortest Job Next (SJN)
• The process with the shortest burst time is executed first.
• Minimizes average waiting time but not practical in real-time systems.
c) Priority Scheduling (Non-Preemptive)
• The process with the highest priority is selected first.
• If two processes have the same priority, FCFS is used as a tiebreaker.
Advantages of Non-Preemptive Scheduling
• It is easy to implement in an operating system; it was used in Windows 3.11 and early versions of Mac OS.
• It has a minimal scheduling burden.
• Less computational resources are used.
Disadvantages of Non-Preemptive Scheduling
• It is open to denial of service: a malicious process can hold the CPU forever.
• Since round robin cannot be implemented, the average response time becomes worse.
Example: FCFS Scheduling
Consider a set of processes with their arrival times and burst times:
Process  Arrival Time  Burst Time
P1       0             5
P2       1             3
P3       2             8
P4       3             6
Solution:
Gantt Chart Representation:
P1 | P2 | P3 | P4
0    5    8    16    22
Calculations
1. Completion Time (CT)
o P1 = 5
o P2 = 8
o P3 = 16
o P4 = 22
2. Turnaround Time (TAT) = Completion Time - Arrival Time
o P1 = 5 - 0 = 5
o P2 = 8 - 1 = 7
o P3 = 16 - 2 = 14
o P4 = 22 - 3 = 19
3. Waiting Time (WT) = Turnaround Time - Burst Time
o P1 = 5 - 5 = 0
o P2 = 7 - 3 = 4
o P3 = 14 - 8 = 6
o P4 = 19 - 6 = 13
4. Average Waiting Time (AWT)
(0+4+6+13)/4=5.75 ms
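The FCFS calculation above can be checked with a short simulation. This is a minimal sketch; the function name and process tuples are illustrative, not part of any OS API.

```python
def fcfs(processes):
    """FCFS scheduling. processes: list of (name, arrival, burst).
    Returns {name: (completion, turnaround, waiting)}."""
    processes = sorted(processes, key=lambda p: p[1])  # serve in arrival order
    time = 0
    results = {}
    for name, arrival, burst in processes:
        time = max(time, arrival)   # CPU may sit idle until the process arrives
        time += burst               # run to completion (non-preemptive)
        tat = time - arrival        # turnaround = completion - arrival
        wt = tat - burst            # waiting = turnaround - burst
        results[name] = (time, tat, wt)
    return results

procs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8), ("P4", 3, 6)]
res = fcfs(procs)
avg_wt = sum(v[2] for v in res.values()) / len(res)
print(res["P3"])   # (16, 14, 6)
print(avg_wt)      # 5.75
```

Running it reproduces the hand-worked values, including the average waiting time of 5.75 ms.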
Advantages of FCFS:
• Simple to implement
• Fair scheduling (No process is starved)
Disadvantages of FCFS:
• High waiting time (if a long process arrives first, short ones must wait – called convoy effect)
• Poor response time for short processes
• Non-preemptive (cannot switch tasks if a more urgent process arrives)
Example: SJF (Shortest Job First) Scheduling
Process  Arrival Time  Burst Time
P1       0             7
P2       2             4
P3       4             1
P4       5             4
Solution:
Gantt Chart:
P1 | P3 | P2 | P4
0    7    8    12    16
Calculations:
1. Completion Time (CT)
o P1 = 7
o P2 = 12
o P3 = 8
o P4 = 16
2. Turnaround Time (TAT) = Completion Time - Arrival Time
o P1 = 7 - 0 = 7
o P2 = 12 - 2 = 10
o P3 = 8 - 4 = 4
o P4 = 16 - 5 = 11
3. Waiting Time (WT) = Turnaround Time - Burst Time
o P1 = 7 - 7 = 0
o P2 = 10 - 4 = 6
o P3 = 4 - 1 = 3
o P4 = 11 - 4 = 7
4. Average Waiting Time (AWT)
(0+6+3+7)/ 4=4 ms
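The non-preemptive SJF schedule above can be checked the same way. A minimal sketch, with illustrative names; ties on burst time are broken by arrival order (FCFS), as in the worked example.

```python
def sjf(processes):
    """Non-preemptive SJF. processes: list of (name, arrival, burst).
    Returns {name: (completion, turnaround, waiting)}."""
    remaining = list(processes)
    time, results = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                              # CPU idle until next arrival
            time = min(p[1] for p in remaining)
            continue
        job = min(ready, key=lambda p: (p[2], p[1]))  # shortest burst, then FCFS
        name, arrival, burst = job
        time += burst                              # runs to completion
        tat = time - arrival
        results[name] = (time, tat, tat - burst)
        remaining.remove(job)
    return results

procs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]
res = sjf(procs)
avg_wt = sum(v[2] for v in res.values()) / len(res)
print(avg_wt)   # 4.0
```

The output matches the hand-worked schedule P1, P3, P2, P4 and the 4 ms average waiting time.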
Advantages of SJF:
• Gives the minimum average waiting time for a given set of processes, provided burst times are known in advance.
Example: SRTN (Shortest Remaining Time Next) Scheduling
Process  Arrival Time  Burst Time
P1       0             8
P2       1             4
P3       2             2
P4       3             1
Solution:
Gantt Chart:
P1 | P2 | P3 | P4 | P2 | P1
0    1    2    4    5    8    15
(P2 preempts P1 at t = 1 and P3 preempts P2 at t = 2. At t = 3, P3 and P4 both have 1 unit remaining; P3, already running, continues to completion at t = 4, then P4 runs 4-5, P2 resumes 5-8, and P1 finishes 8-15.)
Calculations
1. Completion Time (CT)
o P1 = 15
o P2 = 8
o P3 = 4
o P4 = 5
2. Turnaround Time (TAT) = Completion Time - Arrival Time
o P1 = 15 - 0 = 15
o P2 = 8 - 1 = 7
o P3 = 4 - 2 = 2
o P4 = 5 - 3 = 2
3. Waiting Time (WT) = Turnaround Time - Burst Time
o P1 = 15 - 8 = 7
o P2 = 7 - 4 = 3
o P3 = 2 - 2 = 0
o P4 = 2 - 1 = 1
4. Average Waiting Time (AWT)
(7+3+0+1)/4=2.75 ms
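An SRTN schedule can be checked with a tick-by-tick simulation. This is an illustrative sketch; ties on remaining time are broken in favor of the earlier arrival, so the already-running process continues, which matters for the P3/P4 tie at t = 3.

```python
def srtn(processes):
    """Preemptive SRTN, simulated one time unit at a time.
    processes: {name: (arrival, burst)}.
    Returns {name: (completion, turnaround, waiting)}."""
    arrival = {n: a for n, (a, b) in processes.items()}
    rem = {n: b for n, (a, b) in processes.items()}
    time, results = 0, {}
    while rem:
        ready = [n for n in rem if arrival[n] <= time]
        if not ready:                    # CPU idle until next arrival
            time += 1
            continue
        # shortest remaining time first; ties broken by arrival time
        cur = min(ready, key=lambda n: (rem[n], arrival[n]))
        rem[cur] -= 1
        time += 1
        if rem[cur] == 0:
            del rem[cur]
            tat = time - arrival[cur]
            results[cur] = (time, tat, tat - processes[cur][1])
    return results

procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 2), "P4": (3, 1)}
res = srtn(procs)
avg_wt = sum(v[2] for v in res.values()) / len(res)
print(avg_wt)   # 2.75
```

A useful sanity check for any such schedule: the last completion time must equal the total burst time (8 + 4 + 2 + 1 = 15) when the CPU is never idle.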
Advantages of SRTN:
• Produces the minimum possible average waiting time, since the process closest to completion always runs next (ignoring context-switch overhead).
Example: Priority Scheduling (Non-Preemptive)
Uses the same arrival and burst times as the FCFS example (P1: 0, 5; P2: 1, 3; P3: 2, 8; P4: 3, 6). The priority values are not given here; the schedule implies the priority order P1 > P2 > P4 > P3.
Gantt Chart:
P1 | P2 | P4 | P3
0    5    8    14    22
Calculations
1. Completion Time (CT)
o P1 = 5
o P2 = 8
o P3 = 22
o P4 = 14
2. Turnaround Time (TAT) = Completion Time - Arrival Time
o P1 = 5 - 0 = 5
o P2 = 8 - 1 = 7
o P3 = 22 - 2 = 20
o P4 = 14 - 3 = 11
3. Waiting Time (WT) = Turnaround Time - Burst Time
o P1 = 5 - 5 = 0
o P2 = 7 - 3 = 4
o P3 = 20 - 8 = 12
o P4 = 11 - 6 = 5
4. Average Waiting Time (AWT)
(0+4+12+5)/4=5.25 ms
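The schedule above (order P1, P2, P4, P3) can be reproduced with a non-preemptive priority scheduler. This is an illustrative sketch; the numeric priority values below are assumptions chosen to yield that order (lower number = higher priority).

```python
def priority_np(processes):
    """Non-preemptive priority scheduling.
    processes: list of (name, arrival, burst, priority); lower number wins.
    Returns {name: (completion, turnaround, waiting)}."""
    remaining = list(processes)
    time, results = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                          # CPU idle until next arrival
            time = min(p[1] for p in remaining)
            continue
        job = min(ready, key=lambda p: (p[3], p[1]))  # priority, then FCFS
        name, arrival, burst, _ = job
        time += burst                          # runs to completion
        tat = time - arrival
        results[name] = (time, tat, tat - burst)
        remaining.remove(job)
    return results

# assumed priorities: chosen so the order comes out P1, P2, P4, P3
procs = [("P1", 0, 5, 1), ("P2", 1, 3, 2), ("P3", 2, 8, 4), ("P4", 3, 6, 3)]
res = priority_np(procs)
avg_wt = sum(v[2] for v in res.values()) / len(res)
print(avg_wt)   # 5.25
```

With these assumed priorities, the simulation matches the hand-worked 5.25 ms average waiting time.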
Example: Round Robin Scheduling (Time Quantum = 3)
Uses the same set of processes (P1: arrival 0, burst 5; P2: 1, 3; P3: 2, 8; P4: 3, 6).
Gantt Chart:
P1 | P2 | P3 | P4 | P1 | P3 | P4 | P3
0    3    6    9    12   14   17   20   22
Calculating Waiting Time & Turnaround Time
• Turnaround Time (TAT) = Completion Time - Arrival Time
• Waiting Time (WT) = TAT - Burst Time
Process  Arrival  Burst  Completion  TAT            WT
P1       0        5      14          14 - 0 = 14    14 - 5 = 9
P2       1        3      6           6 - 1 = 5      5 - 3 = 2
P3       2        8      22          22 - 2 = 20    20 - 8 = 12
P4       3        6      20          20 - 3 = 17    17 - 6 = 11
Average Waiting Time (AWT) = (9 + 2 + 12 + 11) / 4 = 8.5 ms
2.3 Threads
A thread is the smallest unit of execution within a process. A process can have multiple threads, each running independently but sharing the same resources (memory, files, data, etc.). A thread is a lightweight process created by a process.
Processes are used to execute large, 'heavyweight' jobs such as working in Microsoft Word, while threads are used to carry out smaller or 'lightweight' jobs such as auto-saving a Word document.
A thread is a single sequence stream within a process. Each thread has its own program counter, register set and stack, but shares the code section, data section and open files with the other threads of the same process.
2.3.3 Multi-Threading:
Multithreading is a technique that allows a process to run multiple threads simultaneously, improving
CPU utilization and performance.
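As a quick illustration of threads sharing a process's data, the sketch below runs two threads that update one shared variable. The names are illustrative; the lock is needed precisely because the threads share the same memory.

```python
import threading

# Shared data: both threads of this process see the same `counter`.
counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:              # serialize access to the shared variable
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()                    # wait for both threads to finish

print(counter)   # 20000
```

Without the lock, the two increments could interleave and lose updates, which is exactly the process-synchronization problem mentioned in section 2.1.1. (Note that in CPython the GIL limits true parallelism for CPU-bound threads; multithreading still helps I/O-bound work and responsiveness.)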
Key Features of Multithreading