
Operating Systems

202040402

ASSIGNMENT-1

Q-1) What is OS? Briefly explain types of OS.

Soln: What is an Operating System (OS)?

An Operating System (OS) is system software that acts as an interface between computer
hardware and users. It manages hardware resources, provides essential services, and
enables users to run applications efficiently.

Types of Operating Systems

1. Batch Operating System


a. Executes a series of jobs automatically without user interaction.
b. Used in early mainframes.
c. Example: IBM OS/360.
2. Time-Sharing (Multitasking) OS
a. Allows multiple users to share system resources simultaneously.
b. Uses CPU scheduling and time-slicing techniques.
c. Example: UNIX, Linux, Windows.
3. Distributed OS
a. Manages multiple computers as a single system.
b. Enhances performance and resource sharing.
c. Example: Google’s Cluster OS, Amoeba.
4. Real-Time OS (RTOS)
a. Processes tasks within strict time constraints.
b. Used in embedded systems and critical applications.
c. Example: VxWorks, RTLinux.
5. Network OS
a. Manages and supports networking capabilities like communication and file
sharing.
b. Example: Windows Server, Novell NetWare.
6. Mobile OS
a. Designed for smartphones and tablets.

b. Manages touch interface, sensors, and apps.


c. Example: Android, iOS.

Q-2) What is a system call? Explain all the steps with an example.

Soln: What is a System Call?

A System Call is a mechanism that allows user-level processes to request services from
the operating system's kernel. It acts as an interface between user applications and the
hardware by providing essential functionalities like file management, process control,
memory management, and communication.

Steps in a System Call Execution

1. User Program Requests a System Call


a. The user program invokes a system call using a library function (e.g.,
printf(), open(), read() in C).
2. System Call Interface (Wrapper Function)
a. The request is passed through a system call wrapper in the standard library
(e.g., glibc in Linux).
3. Switch to Kernel Mode (Trap Instruction)
a. The OS switches from User Mode to Kernel Mode using a software
interrupt (trap instruction).
4. Execution of System Call in Kernel
a. The OS kernel identifies the requested service and executes it.
5. Return to User Mode
a. Once completed, the kernel switches back to User Mode and returns the
result to the user program.

Example of a System Call (File Opening in Linux - open())

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main() {
    int fd;
    fd = open("example.txt", O_RDONLY); // System call to open a file in read-only mode

    if (fd == -1) {
        printf("Error opening file\n");
    } else {
        printf("File opened successfully with file descriptor: %d\n", fd);
        close(fd); // Closing the file
    }

    return 0;
}

Step-by-Step Execution

1. The user program calls open().


2. The standard library (glibc) translates open() into a system call number and
triggers a trap instruction.
3. The OS switches to Kernel Mode and executes the open() system call.
4. The OS interacts with the filesystem and opens the file.
5. The OS returns a file descriptor (FD) (a unique integer for file access) to the user
program.
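For contrast with the library wrapper above, the sketch below invokes the write system call directly through glibc's syscall() helper, which makes the trap into kernel mode explicit. This is a minimal illustration assuming a Linux system; file descriptor 1 is standard output.

#include <string.h>        /* strlen() */
#include <sys/syscall.h>   /* SYS_write */
#include <unistd.h>        /* syscall() */

int main(void) {
    const char *msg = "Hello via a raw system call\n";
    /* syscall() loads the call number and arguments into registers and
       executes the trap instruction, switching the CPU to kernel mode. */
    long written = syscall(SYS_write, 1 /* stdout */, msg, strlen(msg));
    return written < 0 ? 1 : 0;
}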

Q-3) Explain Operating System Structures in detail.

Soln: Operating System Structures


An Operating System (OS) structure defines how its components are organized and
interact to manage hardware and software resources efficiently. Different OS structures
provide various levels of flexibility, efficiency, and security.


1. Monolithic Structure

🔹 Overview

• The entire OS is a single large program running in kernel mode.


• All system services (e.g., process management, file system, memory management)
are part of a single codebase.
• Communication occurs via function calls.

🔹 Advantages

✔ Fast execution (no overhead of message passing).

✔ Efficient use of system resources.

🔹 Disadvantages

Hard to modify and debug due to large codebase.

A single error can crash the entire system.

🔹 Example OS

• UNIX
• Linux

2. Layered Structure

🔹 Overview

• OS is divided into multiple layers, each performing specific functions.


• Each layer interacts only with the layer directly above and below it.
• The lower layers handle hardware while upper layers manage user services.


🔹 Advantages

✔ Easier to modify and maintain.

✔ Higher security (each layer restricts access to lower layers).

🔹 Disadvantages

Performance overhead due to multiple layers of communication.

Designing an optimal layer structure is complex.

🔹 Example OS

• THE OS (by E. W. Dijkstra)


• Windows NT

3. Microkernel Structure

🔹 Overview

• Only essential services (e.g., memory management, process scheduling) run in


kernel mode.
• Other services (e.g., device drivers, file systems) run in user mode.
• Communication occurs via message passing.

🔹 Advantages

✔ More secure and reliable (fewer functions in the kernel).

✔ Easy to extend and modify.

🔹 Disadvantages

Slower due to message passing overhead.


Complex implementation.

🔹 Example OS

• QNX
• Minix
• macOS (partially uses Microkernel)

4. Modular Structure

🔹 Overview

• OS is divided into separate modules, each handling a specific function.


• Modules are dynamically loaded and unloaded as needed.
• Hybrid of Monolithic and Microkernel structures.

🔹 Advantages

✔ Flexibility and scalability.

✔ Better maintainability.

🔹 Disadvantages

Complexity in module management.

Performance overhead in module communication.

🔹 Example OS

• Linux (Loadable Kernel Modules - LKM)


• Windows (Hybrid Kernel)
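To make the idea of loadable kernel modules concrete, here is a minimal hello-world LKM sketch. It is built against the Linux kernel headers with a kbuild Makefile (not compiled as an ordinary user program), loaded with insmod, and unloaded with rmmod; the module and message names are illustrative.

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Hello-world example of a loadable kernel module");

static int __init hello_init(void)
{
    pr_info("hello: module loaded\n");   /* message appears in dmesg */
    return 0;
}

static void __exit hello_exit(void)
{
    pr_info("hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);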


5. Hybrid Structure

🔹 Overview

• Combines best features of Monolithic and Microkernel designs.


• Core functionalities remain in the kernel, while other services run in user space.

🔹 Advantages

✔ Efficient like Monolithic OS.

✔ Secure and modular like Microkernel OS.

🔹 Disadvantages

Complexity in design and maintenance.

🔹 Example OS

• Windows (NT, XP, 10, 11)


• macOS (XNU Kernel)

6. Virtual Machine (VM) Structure

🔹 Overview

• OS creates virtual instances of a machine, each running its own OS.


• Uses a Hypervisor (Type 1 or Type 2) to manage VMs.

🔹 Advantages

✔ Isolation between systems (ideal for cloud computing).

✔ Supports running multiple operating systems on a single hardware platform.


🔹 Disadvantages

High resource consumption.

Requires hardware virtualization support.

🔹 Example OS

• VMware ESXi
• Microsoft Hyper-V
• VirtualBox

Comparison Table
Structure       | Performance | Security | Scalability | Complexity | Example OS
----------------|-------------|----------|-------------|------------|-----------------
Monolithic      | High        | Low      | Low         | High       | Linux, UNIX
Layered         | Medium      | High     | Medium      | High       | Windows NT
Microkernel     | Low         | High     | High        | High       | Minix, QNX
Modular         | High        | Medium   | High        | Medium     | Linux, Windows
Hybrid          | High        | High     | High        | High       | macOS, Windows
Virtual Machine | Low         | High     | High        | High       | VMware, Hyper-V

Conclusion
Each OS structure has its own advantages and trade-offs. The choice depends on system
requirements, performance needs, security concerns, and flexibility.

Q-4) Explain Operating System services in detail.


Soln: Operating System Services

Operating System (OS) services are the fundamental functionalities provided by the OS to
manage hardware, software, and system resources. These services allow user
applications to run efficiently, securely, and reliably by interacting with hardware and
managing system resources.

Key OS Services

1. Process Management Services

• Process Creation and Termination: OS is responsible for creating processes


(programs in execution) and terminating them after their execution.
• Process Scheduling: OS allocates CPU time to processes based on scheduling
algorithms (e.g., First-Come, First-Served, Round-Robin, etc.).
• Context Switching: OS saves and restores the state (context) of processes when
switching between them to ensure that each process resumes execution from
where it was paused.
• Inter-Process Communication (IPC): OS allows processes to communicate and
synchronize with each other through mechanisms like pipes, message queues,
semaphores, etc.
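As a small illustration of IPC, the following sketch uses an anonymous pipe: the parent writes a message and the child reads it. It assumes a POSIX system, and error handling is trimmed for brevity.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];                              /* fds[0]: read end, fds[1]: write end */
    char buf[64];

    if (pipe(fds) == -1) return 1;

    if (fork() == 0) {                       /* child: reader */
        close(fds[1]);
        ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
        close(fds[0]);
        return 0;
    }

    close(fds[0]);                           /* parent: writer */
    const char *msg = "hello from the parent";
    write(fds[1], msg, strlen(msg));
    close(fds[1]);
    wait(NULL);                              /* reap the child */
    return 0;
}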

2. Memory Management Services

• Memory Allocation: OS allocates physical memory to processes as they run. This


includes managing free memory and assigning space for programs, variables, and
system data.
• Virtual Memory: OS provides virtual memory to give processes the illusion of
having continuous memory, even if physical memory is fragmented or full. It uses
techniques like paging and segmentation to achieve this.
• Memory Protection: OS ensures that one process cannot access the memory of
another process (protection), thus avoiding data corruption.
• Swapping: OS can move processes between main memory and secondary storage
(e.g., disk) to optimize memory usage and ensure running processes have enough
space.
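The following sketch shows a process asking the OS for memory directly with mmap(), the mechanism that large malloc() allocations themselves rely on; the first write to the page triggers a page fault that the kernel services by mapping a physical frame. Linux/POSIX assumptions apply.

#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 4096;                        /* one page */
    int *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    p[0] = 42;                                /* first touch faults; the kernel maps a frame */
    printf("value stored in mapped page: %d\n", p[0]);

    munmap(p, len);                           /* return the pages to the OS */
    return 0;
}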

3. File System Management Services

• File Creation and Deletion: OS provides services to create, read, write, and delete
files.
• File Organization: OS organizes files into directories (folders) and maintains the
structure using metadata (file names, permissions, sizes, timestamps).
• Access Control: OS enforces access control policies to determine who can read,
write, or execute specific files.
• File Storage Management: OS manages physical disk storage by organizing files
into blocks, sectors, and tracks. It handles file storage, retrieval, and caching for
efficient access.
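As a brief example of these file-system services, the sketch below lists the entries of the current directory with opendir()/readdir(); POSIX assumptions apply.

#include <stdio.h>
#include <dirent.h>

int main(void) {
    DIR *dir = opendir(".");
    if (dir == NULL) { perror("opendir"); return 1; }

    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL)
        printf("%s\n", entry->d_name);     /* file or directory name */

    closedir(dir);
    return 0;
}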

4. Device Management Services

• Device Drivers: OS provides device drivers to facilitate communication between


software and hardware devices (e.g., printers, network cards, storage devices).
• Input/Output (I/O) Operations: OS manages I/O operations, ensuring correct data
transfer between memory and external devices.
• Buffering and Spooling: OS uses buffers to temporarily store data during I/O
operations and spools output data to queues (e.g., printing tasks) for processing in
order.
• Interrupt Handling: OS manages hardware interrupts (e.g., when a keyboard or
mouse sends data to the processor) and ensures timely response to hardware
requests.

5. Security and Access Control Services

• Authentication and Authorization: OS provides security by verifying user identities


(authentication) and managing access rights (authorization) to system resources.
• Encryption: OS can encrypt data to protect it from unauthorized access, ensuring
confidentiality.
• User Permissions: OS controls file access through permissions, specifying which
users or processes can read, write, or execute files.
• Audit and Logging: OS tracks and logs security-related events to detect
unauthorized activities and support forensics.


6. Network Management Services

• Communication: OS provides network communication services, enabling


processes to send and receive data across a network (e.g., TCP/IP stack, DNS
resolution).
• Network Configuration: OS manages network configurations such as IP addresses,
subnet masks, and routing protocols.
• Network Security: OS manages firewall rules, access control lists (ACLs), and
encryption to secure communication across networks.
• Socket Management: OS supports the use of sockets for inter-process
communication over networks, handling different communication protocols (e.g.,
HTTP, FTP, TCP).
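The sketch below touches the socket services mentioned above: it creates a TCP socket, binds it to a local port, and starts listening. The port number 8080 is an arbitrary example value; no client handling is shown.

#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);      /* TCP socket */
    if (fd == -1) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) == -1 ||
        listen(fd, 8) == -1) {
        perror("bind/listen");
        close(fd);
        return 1;
    }
    printf("listening on port 8080\n");
    close(fd);
    return 0;
}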

7. System Resource Management Services

• CPU Scheduling: OS determines how and when a process gets access to the CPU.
Scheduling policies and algorithms are used to ensure fairness and efficiency.
• Resource Allocation: OS allocates system resources (CPU time, memory, I/O
devices) to different processes based on priority and availability.
• Deadlock Handling: OS prevents and resolves deadlocks (situations where two or
more processes are waiting indefinitely for each other’s resources) through
techniques like resource allocation graphs and timeout policies.

8. User Interface Services

• Command-Line Interface (CLI): OS provides a terminal or console where users can


type commands to interact with the system.
• Graphical User Interface (GUI): OS provides a visual interface with windows,
icons, buttons, and menus for users to interact with the system.
• Shell: OS provides shell environments (e.g., Bash, PowerShell) to run commands
and scripts, offering a way to interact with the system programmatically.

9. Time Management Services

• Clock Management: OS keeps track of system time, including setting and


maintaining the system’s internal clock.


• Timers: OS uses timers to schedule and manage events, like task timeouts,
process suspension, and alarms.
• Time Sharing: OS allocates CPU time to processes in a round-robin fashion,
allowing multiple processes to share the CPU in a time-sliced manner.
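A small sketch of the clock and timer services: it reads the monotonic clock before and after a 100 ms sleep and prints the elapsed time. It assumes a POSIX system providing clock_gettime() and nanosleep().

#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec start, end;
    struct timespec delay = {0, 100 * 1000 * 1000};   /* 100 ms */

    clock_gettime(CLOCK_MONOTONIC, &start);
    nanosleep(&delay, NULL);                          /* OS timer service suspends the process */
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed_ms = (end.tv_sec - start.tv_sec) * 1000.0 +
                        (end.tv_nsec - start.tv_nsec) / 1e6;
    printf("elapsed: %.2f ms\n", elapsed_ms);
    return 0;
}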

10. Error Handling and Logging Services

• Error Detection: OS detects hardware or software errors (e.g., out-of-memory


conditions, hardware failures) and handles them appropriately.
• Error Reporting: OS logs errors, sometimes providing notifications or diagnostics to
users or administrators.
• System Health Monitoring: OS continuously monitors system health, logging
events related to system performance, resource usage, and potential failures.

Summary of OS Services

Service                     | Description
----------------------------|-------------------------------------------------------------------------------------
Process Management          | Process creation, scheduling, termination, and inter-process communication.
Memory Management           | Allocation, protection, swapping, and virtual memory management.
File System Management      | File creation, deletion, access control, and storage management.
Device Management           | Interaction with hardware devices, I/O operations, buffering, and interrupt handling.
Security and Access Control | User authentication, permissions, encryption, and auditing.
Network Management          | Network communication, configuration, and security.
System Resource Management  | CPU scheduling, resource allocation, and deadlock handling.
User Interface              | CLI, GUI, and shell environments for user interaction.
Time Management             | System time tracking, timers, and time-sharing.
Error Handling and Logging  | Error detection, reporting, and system health monitoring.


Conclusion

Operating System services are crucial for the management of resources and efficient
operation of a computer system. These services ensure resource allocation, security,
and user interaction, making it possible for users and applications to interact with
hardware in a stable and controlled environment.

Q-5) For the processes given below, calculate the average waiting time and average turnaround time under FCFS, SJF, SRTN, Preemptive Priority, Non-Preemptive Priority, and Round Robin (time quantum = 1 ms) scheduling.

Soln: Let's break down the scheduling for each algorithm to compute the average waiting
time and average turnaround time.

Job Information

Process ID | Arrival Time (ms) | Execution Time (ms) | Priority
-----------|-------------------|---------------------|---------
1          | 0                 | 10                  | 2
2          | 2                 | 1                   | 1
3          | 4                 | 2                   | 5
4          | 8                 | 4                   | 4
5          | 12                | 3                   | 3

Formulas

1. Turnaround Time (TAT): TAT = Completion Time - Arrival Time
2. Waiting Time (WT): WT = Turnaround Time - Execution Time

1. FCFS (First-Come, First-Served) Scheduling

Execution Order:

• Process 1 starts at 0 ms, finishes at 10 ms.


• Process 2 starts at 10 ms, finishes at 11 ms.
• Process 3 starts at 11 ms, finishes at 13 ms.
• Process 4 starts at 13 ms, finishes at 17 ms.
• Process 5 starts at 17 ms, finishes at 20 ms.

Waiting Time Calculation:

• WT for P1 = Start Time - Arrival Time = 0 - 0 = 0 ms


• WT for P2 = Start Time - Arrival Time = 10 - 2 = 8 ms
• WT for P3 = Start Time - Arrival Time = 11 - 4 = 7 ms
• WT for P4 = Start Time - Arrival Time = 13 - 8 = 5 ms
• WT for P5 = Start Time - Arrival Time = 17 - 12 = 5 ms

Turnaround Time Calculation:

• TAT for P1 = Finish Time - Arrival Time = 10 - 0 = 10 ms


• TAT for P2 = Finish Time - Arrival Time = 11 - 2 = 9 ms
• TAT for P3 = Finish Time - Arrival Time = 13 - 4 = 9 ms
• TAT for P4 = Finish Time - Arrival Time = 17 - 8 = 9 ms
• TAT for P5 = Finish Time - Arrival Time = 20 - 12 = 8 ms

Average Waiting Time:

Average WT = (0 + 8 + 7 + 5 + 5) / 5 = 5 ms

Average Turnaround Time:

Average TAT = (10 + 9 + 9 + 9 + 8) / 5 = 9 ms
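As a quick cross-check of the FCFS numbers above, the short C sketch below simulates the schedule directly; it assumes the processes are served strictly in arrival order and that the process table is hard-coded as given.

#include <stdio.h>

int main(void) {
    int arrival[] = {0, 2, 4, 8, 12};
    int burst[]   = {10, 1, 2, 4, 3};
    int n = 5, time = 0;
    double total_wt = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        if (time < arrival[i]) time = arrival[i];   /* CPU idles until arrival */
        int start = time;
        time += burst[i];                           /* run to completion */
        total_wt  += start - arrival[i];            /* waiting time */
        total_tat += time - arrival[i];             /* turnaround time */
    }
    printf("Average WT = %.1f ms, Average TAT = %.1f ms\n",
           total_wt / n, total_tat / n);            /* prints 5.0 and 9.0 */
    return 0;
}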

2. SJF (Shortest Job First) Scheduling

Execution Order:

• Process 1 arrives first and runs to completion (0-10 ms).
• Process 2 runs next (10-11 ms).
• Process 3 runs next (11-13 ms).
• Process 5 runs next (13-16 ms); although it arrives later, its burst (3 ms) is shorter than Process 4's (4 ms).
• Process 4 runs last (16-20 ms).

Waiting Time Calculation:

• WT for P1 = 0 (as it starts at 0 ms)


• WT for P2 = Start Time - Arrival Time = 10 - 2 = 8 ms
• WT for P3 = Start Time - Arrival Time = 11 - 4 = 7 ms
• WT for P5 = Start Time - Arrival Time = 13 - 12 = 1 ms
• WT for P4 = Start Time - Arrival Time = 16 - 8 = 8 ms

Turnaround Time Calculation:

• TAT for P1 = Finish Time - Arrival Time = 10 - 0 = 10 ms


• TAT for P2 = Finish Time - Arrival Time = 11 - 2 = 9 ms
• TAT for P3 = Finish Time - Arrival Time = 13 - 4 = 9 ms
• TAT for P5 = Finish Time - Arrival Time = 16 - 12 = 4 ms
• TAT for P4 = Finish Time - Arrival Time = 20 - 8 = 12 ms

Average Waiting Time:

Average WT=0+8+7+1+85=4.8 ms\text{Average WT} = \frac{0 + 8 + 7 + 1 + 8}{5} =


4.8 \, \text{ms}

Average Turnaround Time:

Average TAT=10+9+9+4+125=8.8 ms\text{Average TAT} = \frac{10 + 9 + 9 + 4 +


12}{5} = 8.8 \, \text{ms}

3. SRTN (Shortest Remaining Time Next) Scheduling

SRTN is the preemptive version of SJF: whenever a new job arrives, the scheduler runs the job with the shortest remaining time, preempting the current job if necessary.

Execution Order:

• Process 1 runs from 0-2 ms and is preempted by Process 2 at 2 ms.
• Process 2 runs from 2-3 ms and finishes at 3 ms.
• Process 1 resumes at 3 ms and is preempted by Process 3 at 4 ms (2 ms remaining vs. 7 ms).
• Process 3 runs from 4-6 ms and finishes at 6 ms.
• Process 1 resumes at 6 ms and is preempted by Process 4 at 8 ms (4 ms remaining vs. 5 ms).
• Process 4 runs from 8-12 ms and finishes at 12 ms.
• Process 5 runs from 12-15 ms and finishes at 15 ms (3 ms remaining vs. 5 ms for Process 1).
• Process 1 resumes at 15 ms and finishes at 20 ms.

Waiting Time Calculation (WT = TAT - Execution Time):

• WT for P1 = 20 - 10 = 10 ms
• WT for P2 = 1 - 1 = 0 ms
• WT for P3 = 2 - 2 = 0 ms
• WT for P4 = 4 - 4 = 0 ms
• WT for P5 = 3 - 3 = 0 ms

Turnaround Time Calculation:

• TAT for P1 = Finish Time - Arrival Time = 20 - 0 = 20 ms
• TAT for P2 = 3 - 2 = 1 ms
• TAT for P3 = 6 - 4 = 2 ms
• TAT for P4 = 12 - 8 = 4 ms
• TAT for P5 = 15 - 12 = 3 ms

Average Waiting Time:

Average WT = (10 + 0 + 0 + 0 + 0) / 5 = 2 ms

Average Turnaround Time:

Average TAT = (20 + 1 + 2 + 4 + 3) / 5 = 6 ms

4. Preemptive Priority Scheduling

In preemptive priority scheduling, the process with the highest priority (lowest priority
number) runs first. A running process can be preempted if a new process with a higher
priority arrives.

Execution Order:

• Process 1 (priority 2) starts at 0 ms and is preempted by Process 2 (priority 1) at 2 ms.
• Process 2 runs from 2-3 ms and finishes at 3 ms.
• Process 1 resumes at 3 ms and finishes at 11 ms (no higher-priority process arrives in the meantime).
• Process 4 (priority 4) starts at 11 ms and is preempted by Process 5 (priority 3) at 12 ms.
• Process 5 runs from 12-15 ms and finishes at 15 ms.
• Process 4 resumes at 15 ms and finishes at 18 ms.
• Process 3 (priority 5) runs last, from 18-20 ms.

Waiting Time Calculation (WT = TAT - Execution Time):

• WT for P1 = 11 - 10 = 1 ms
• WT for P2 = 1 - 1 = 0 ms
• WT for P3 = 16 - 2 = 14 ms
• WT for P4 = 10 - 4 = 6 ms
• WT for P5 = 3 - 3 = 0 ms

Turnaround Time Calculation:

• TAT for P1 = 11 - 0 = 11 ms
• TAT for P2 = 3 - 2 = 1 ms
• TAT for P3 = 20 - 4 = 16 ms
• TAT for P4 = 18 - 8 = 10 ms
• TAT for P5 = 15 - 12 = 3 ms

Average Waiting Time:

Average WT = (1 + 0 + 14 + 6 + 0) / 5 = 4.2 ms

Average Turnaround Time:

Average TAT = (11 + 1 + 16 + 10 + 3) / 5 = 8.2 ms

5. Non-Preemptive Priority Scheduling

In non-preemptive priority scheduling, the process runs to completion once it starts,


even if another process with a higher priority arrives.

Execution Order:

• Process 1 is the only process at 0 ms, so it runs to completion (0-10 ms).
• Process 2 (priority 1) runs next (10-11 ms).
• Process 4 (priority 4) runs next (11-15 ms); Process 5 has not yet arrived at 11 ms, so only P3 and P4 compete.
• Process 5 (priority 3) runs next (15-18 ms).
• Process 3 (priority 5) runs last (18-20 ms).

Waiting Time Calculation:

• WT for P1 = Start Time - Arrival Time = 0 - 0 = 0 ms
• WT for P2 = 10 - 2 = 8 ms
• WT for P3 = 18 - 4 = 14 ms
• WT for P4 = 11 - 8 = 3 ms
• WT for P5 = 15 - 12 = 3 ms

Turnaround Time Calculation:

• TAT for P1 = Finish Time - Arrival Time = 10 - 0 = 10 ms
• TAT for P2 = 11 - 2 = 9 ms
• TAT for P3 = 20 - 4 = 16 ms
• TAT for P4 = 15 - 8 = 7 ms
• TAT for P5 = 18 - 12 = 6 ms

Average Waiting Time:

Average WT = (0 + 8 + 14 + 3 + 3) / 5 = 5.6 ms

Average Turnaround Time:

Average TAT = (10 + 9 + 16 + 7 + 6) / 5 = 9.6 ms

6. Round Robin (Quantum = 1 ms)

Execution Order (assuming a newly arriving process is placed in the ready queue ahead of the process whose quantum has just expired):

• Each process receives the CPU for 1 ms at a time in ready-queue order, cycling until completion.
• The resulting completion times are: P2 at 3 ms, P3 at 7 ms, P4 at 16 ms, P5 at 19 ms, and P1 at 20 ms.

Waiting Time Calculation (WT = TAT - Execution Time):

• WT for P1 = 20 - 10 = 10 ms
• WT for P2 = 1 - 1 = 0 ms
• WT for P3 = 3 - 2 = 1 ms
• WT for P4 = 8 - 4 = 4 ms
• WT for P5 = 7 - 3 = 4 ms

Turnaround Time Calculation:

• TAT for P1 = 20 - 0 = 20 ms
• TAT for P2 = 3 - 2 = 1 ms
• TAT for P3 = 7 - 4 = 3 ms
• TAT for P4 = 16 - 8 = 8 ms
• TAT for P5 = 19 - 12 = 7 ms

Average Waiting Time:

Average WT = (10 + 0 + 1 + 4 + 4) / 5 = 3.8 ms

Average Turnaround Time:

Average TAT = (20 + 1 + 3 + 8 + 7) / 5 = 7.8 ms

Summary Table:

Scheduling Algorithm       | Average Waiting Time | Average Turnaround Time
---------------------------|----------------------|------------------------
FCFS                       | 5 ms                 | 9 ms
SJF                        | 4.8 ms               | 8.8 ms
SRTN                       | 2 ms                 | 6 ms
Preemptive Priority        | 4.2 ms               | 8.2 ms
Non-Preemptive Priority    | 5.6 ms               | 9.6 ms
Round Robin (1 ms quantum) | 3.8 ms               | 7.8 ms

Q-6) Explain Process states and Process State transitions with a diagram.

Soln: Process States in Operating System

A process is an instance of a program in execution. A process goes through different


states during its life cycle. These states help the operating system manage processes and
allocate resources effectively. The primary process states are as follows:

1. New: The process is being created. The operating system is initializing the process,
and the process is not yet ready to be executed.
2. Ready: The process is waiting for CPU time to execute. It is loaded into memory and
is ready to run, but the CPU is not currently available.
3. Running: The process is currently being executed by the CPU. This is the state
where the actual execution of the program takes place.
4. Blocked/Waiting: The process cannot continue execution because it is waiting for
some event, such as I/O completion or waiting for resources (e.g., data from disk).
5. Terminated/Exit: The process has finished execution, or it has been killed by the
operating system. It has released all the resources, and the operating system
removes it from the process table.

Process State Transitions

The transitions between the process states happen due to events such as the availability of
resources, I/O completion, or the preemption of the CPU. The transitions are as follows:


1. New → Ready: When the process is initialized and is waiting for the CPU to execute.
2. Ready → Running: When the CPU becomes available, the process is scheduled to
run.
3. Running → Blocked: If the process requires I/O or some event that is not available
immediately, it transitions to the blocked state.
4. Blocked → Ready: Once the event or I/O completes, the process is moved back to
the ready state, waiting for the CPU.
5. Running → Terminated: When the process finishes execution, it moves to the
terminated state.
6. Ready → Terminated: This occurs when a process is aborted or killed by the
operating system.

Process State Diagram

Below is a diagram representing the process states and transitions:

+-----+          +-------+   dispatch   +---------+          +------------+
| New | -------> | Ready | -----------> | Running | -------> | Terminated |
+-----+  admit   +-------+ <----------- +---------+   exit   +------------+
                     ^    preempt / time slice |
                     |                         |  wait for I/O
                     |                         |  or an event
                     |    I/O or event         v
                     |    completion      +---------+
                     +------------------- | Blocked |
                                          +---------+

Explanation of Transitions:

1. New → Ready: The operating system moves the process to the ready state once it is
initialized and ready for execution.
2. Ready → Running: The process scheduler selects a process from the ready queue
based on the scheduling algorithm (e.g., FCFS, SJF, etc.), and the process starts
execution on the CPU.
3. Running → Blocked: The process enters the blocked state if it needs to wait for
some external event (e.g., waiting for I/O operations like disk read/write).
4. Blocked → Ready: After the event that the process was waiting for is completed
(e.g., the I/O operation finishes), the process is moved back to the ready queue to
wait for the CPU.
5. Running → Terminated: When the process completes its execution, it moves to the
terminated state, and its resources are released by the OS.
6. Ready → Terminated: A process may also be terminated before it can run, either
because the user aborted it or due to a system error.

This cycle continues for all processes during their lifecycle in an operating system.

Q-7) Explain User level thread and Kernel level thread.

Soln: User-Level Threads (ULT)

User-level threads are threads that are managed entirely by the user-level thread library,
and not by the operating system kernel. In this model, the operating system kernel is
unaware of the existence of threads within a process. The threads are created, scheduled,
and managed in user space by a user-level thread library (such as pthreads).

Characteristics of User-Level Threads:

1. Thread Management: Managed entirely by a user-level library (e.g., Pthreads).


2. No Kernel Involvement: The operating system is unaware of the threads; it only


knows about the process itself.
3. Faster Creation and Switching: Since thread management (e.g., creation,
destruction, context switching) is handled entirely in user space, it is usually faster
than kernel-managed threads.
4. Blocking: If one user-level thread performs a blocking system call (e.g., I/O), the
entire process may block, since the kernel sees the process as a single entity. Other
user-level threads in the process do not get CPU time while the blocked thread is
waiting.
5. Portability: As user-level threads are managed by a user-level library, they can be
implemented across different operating systems without modifying the OS.
6. No True Parallelism: In a multi-core system, all user-level threads of a process run
on a single CPU core unless the thread library maps them onto multiple kernel-level
threads (as in hybrid many-to-many models).

Example:

• An application using a library like Pthreads in which multiple threads run


concurrently within the same process, all managed by the user-level thread library.
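A minimal Pthreads sketch of the idea above: two threads run concurrently within one process. On modern Linux the Pthreads library maps each thread onto a kernel-level thread, but the API shown is the same one a purely user-level library would expose. Compile with something like gcc threads.c -lpthread.

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    int id = *(int *)arg;
    printf("thread %d running\n", id);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;

    pthread_create(&t1, NULL, worker, &id1);   /* create two threads */
    pthread_create(&t2, NULL, worker, &id2);

    pthread_join(t1, NULL);                    /* wait for both to finish */
    pthread_join(t2, NULL);
    return 0;
}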

Kernel-Level Threads (KLT)

Kernel-level threads are threads that are directly managed by the operating system
kernel. The kernel is aware of each thread and can schedule them independently, allowing
for more efficient thread management, especially in a multi-core processor environment.

Characteristics of Kernel-Level Threads:

1. Thread Management: Managed by the operating system kernel, which schedules


and controls the threads.
2. Kernel Involvement: The kernel is aware of the existence of each thread within a
process. The kernel directly handles the scheduling of these threads.
3. Blocking: If a kernel-level thread performs a blocking operation (e.g., I/O), only that
specific thread is blocked, and the other threads of the same process continue
executing. This is a major advantage over user-level threads, where the entire
process might block.


4. Multiprocessing Support: Kernel-level threads can be distributed across multiple


CPUs or cores in a multiprocessor system, improving the performance of
multithreaded applications.
5. Slower Creation and Context Switching: Because thread management is done by
the kernel, creating threads and switching between them can be slower compared
to user-level threads, as it requires more system calls and kernel involvement.
6. More System Resources: Kernel-level threads require more memory overhead
because the kernel needs to maintain additional data structures for managing each
thread.

Example:

• An operating system like Linux or Windows, where the kernel itself creates and
manages threads for applications.

Comparison: User-Level Threads vs Kernel-Level Threads

Feature            | User-Level Threads (ULT)                                                    | Kernel-Level Threads (KLT)
-------------------|-----------------------------------------------------------------------------|------------------------------------------------------------------------
Thread Management  | Managed by user-level thread libraries (e.g., Pthreads)                     | Managed by the kernel
Kernel Involvement | Kernel is unaware of the threads                                            | Kernel is aware and manages threads individually
Blocking           | If one thread blocks (e.g., on I/O), all threads in the process block       | Only the blocking thread is blocked; others can run
Creation Time      | Fast thread creation and context switching                                  | Slower thread creation and context switching due to kernel involvement
Multiprocessing    | Cannot utilize multiple processors (unless explicitly supported by the OS)  | Can utilize multiple processors or cores
System Resources   | Lower memory overhead (only the process is in the kernel)                   | Higher memory overhead (kernel must manage each thread separately)
Examples           | Pthreads (on Linux, Unix), Java threads (with user-level libraries)         | Linux threads, Windows threads, Solaris threads

Hybrid Approach

Some modern systems use a hybrid approach that combines the benefits of both user-
level threads and kernel-level threads. For example:

• Many-to-Many Model: Multiple user-level threads are mapped to many kernel-level


threads, allowing for efficient user-level thread management while still allowing the
kernel to schedule threads on multiple processors.
• One-to-One Model: Each user-level thread is mapped to a kernel-level thread,
allowing full kernel scheduling and the benefits of parallel processing.

Example of Hybrid Systems:

• Solaris and Linux use such hybrid models, which allow them to manage threads at
both the user level and kernel level.

Q-8) Explain Context switch with example.

Soln: Context Switch in Operating Systems

A context switch is the process of storing the state of a currently running process or
thread and loading the state of another process or thread. This is essential for multitasking
in a system where the CPU switches between different processes or threads, giving the
illusion of simultaneous execution.

The context of a process or thread consists of its program counter (PC), registers, and
stack pointer, along with other essential information that defines the state of the process
at any given time. When the operating system decides to switch between processes or
threads (due to time slicing, I/O operations, or preemption), it performs a context switch to
save and load the necessary information.

Steps in a Context Switch

1. Saving the Context of the Current Process:


a. The current process is paused. The operating system saves the state of the
CPU (i.e., the program counter, stack pointer, and all the CPU registers) into


a data structure called the Process Control Block (PCB) or Thread Control
Block (TCB) of the current process.
2. Selecting the Next Process to Run:
a. The operating system selects the next process to run from the ready queue
based on the scheduling algorithm (e.g., FCFS, Round Robin, SJF).
3. Loading the Context of the New Process:
a. The state of the new process is retrieved from its PCB/TCB, and the values of
its registers and program counter are loaded into the CPU. This step restores
the process to the state it was in when it was last interrupted.
4. Resuming Execution:
a. The CPU starts executing the instructions of the new process as per its saved
context.

Example of a Context Switch

Consider an operating system running with two processes, P1 and P2:

• P1 is currently running, and the operating system decides to switch to P2 for some
reason (e.g., time quantum has expired, or a higher-priority process needs to run).

Context Switch Example:

1. Process P1 (Running):
a. State of P1:
i. Program Counter (PC) = 1000 (indicating the next instruction to
execute).
ii. Registers = {R1=5, R2=10, R3=20}
iii. Stack Pointer (SP) = 0x2000
iv. PCB of P1: Contains all the state information for P1.
2. Context Switch Initiation:
a. The operating system decides to stop P1 and switch to P2.
3. Saving P1's Context:
a. The OS saves the values of PC, registers, and SP of P1 into P1's PCB.
4. Selecting Process P2:
a. The scheduler selects P2 from the ready queue.
5. Loading P2's Context:
a. The OS loads the saved state (PC, registers, SP) of P2 from its PCB:


i. Program Counter (PC) = 1500 (the next instruction for P2).


ii. Registers = {R1=2, R2=4, R3=30}
iii. Stack Pointer (SP) = 0x3000
6. Process P2 (Running):
a. Now, P2 starts executing from the point it left off (PC=1500, with its registers
and stack restored).

Time Taken for Context Switch

• Context switching introduces overhead because saving and restoring the process
context takes time, during which the CPU is not executing any application code.
• The time taken for a context switch depends on various factors, including the
number of registers, the size of the PCB, and the architecture of the CPU.
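As a rough way to observe context switching from user space, the sketch below queries getrusage() for the number of voluntary and involuntary context switches the calling process has experienced. The ru_nvcsw/ru_nivcsw fields are Linux/BSD-specific, and the values will differ from run to run.

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) != 0) return 1;
    printf("voluntary context switches:   %ld\n", ru.ru_nvcsw);   /* e.g., blocking on I/O */
    printf("involuntary context switches: %ld\n", ru.ru_nivcsw);  /* preempted by the scheduler */
    return 0;
}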

Example in Practice: Time-Sliced Multitasking

Consider the scenario where a system uses Round Robin (RR) scheduling with a time
quantum of 10ms:

1. Time 0-10ms: P1 runs and is interrupted after 10ms.


2. Context Switch: The state of P1 is saved, and P2 is selected to run.
3. Time 10-20ms: P2 runs and is interrupted after 10ms.
4. Context Switch: The state of P2 is saved, and P1 resumes execution from where it
left off.

In each time slice, a context switch occurs, and the process state is saved and loaded,
allowing the operating system to switch between processes.

Benefits and Costs of Context Switching

• Benefits:
o Multitasking: Allows multiple processes or threads to run concurrently,
making better use of CPU time.
o Preemptive Scheduling: Ensures that higher-priority processes can take
control of the CPU as needed.
• Costs:
o Performance Overhead: The time taken to save and restore context reduces
the CPU time available for actual execution.

o Increased Latency: Frequent context switches can lead to increased


latency for processes that need continuous CPU time.

Conclusion

A context switch is a crucial component of multitasking operating systems, allowing them


to switch between processes or threads, ensuring that resources are shared efficiently.
However, frequent context switches can introduce performance overhead, so efficient
scheduling algorithms are used to minimize unnecessary switches.
