OS Assignment 1
202040402
ASSIGNMENT-1
Q-1) What is an Operating System?
An Operating System (OS) is system software that acts as an interface between computer
hardware and users. It manages hardware resources, provides essential services, and
enables users to run applications efficiently.
Q-2) What is a system call? Explain all the steps with an example.
A System Call is a mechanism that allows user-level processes to request services from
the operating system's kernel. It acts as an interface between user applications and the
hardware by providing essential functionalities like file management, process control,
memory management, and communication.
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main() {
    int fd;
    fd = open("example.txt", O_RDONLY); // System call to open a file in read-only mode
    if (fd == -1) {
        printf("Error opening file\n");
    } else {
        printf("File opened successfully with file descriptor: %d\n", fd);
        close(fd); // System call to close the file
    }
    return 0;
}
Step-by-Step Execution
1. The program calls the open() library wrapper with the file name and flags.
2. The wrapper places the system-call number and arguments in registers and executes a trap instruction, switching the CPU from user mode to kernel mode.
3. The kernel uses the system-call number to index its system-call table and invokes the corresponding handler (here, the file-open routine).
4. The kernel performs the requested work and places the result (a file descriptor, or -1 on error) in the return register.
5. Control returns to user mode, and open() returns the result to the program, which can then use read() and close() on the descriptor.
Q-3) Explain the different structures of an Operating System with examples.
1. Monolithic Structure
🔹 Overview
🔹 Advantages
🔹 Disadvantages
🔹 Example OS
• UNIX
• Linux
2. Layered Structure
🔹 Overview
🔹 Advantages
🔹 Disadvantages
🔹 Example OS
• Windows NT
3. Microkernel Structure
🔹 Overview
🔹 Advantages
🔹 Disadvantages
Complex implementation.
🔹 Example OS
• QNX
• Minix
• macOS (partially uses Microkernel)
4. Modular Structure
🔹 Overview
🔹 Advantages
✔ Better maintainability.
🔹 Disadvantages
🔹 Example OS
• Linux (loadable kernel modules)
• Windows
5. Hybrid Structure
🔹 Overview
🔹 Advantages
🔹 Disadvantages
🔹 Example OS
• macOS
• Windows
6. Virtual Machine Structure
🔹 Overview
🔹 Advantages
🔹 Disadvantages
🔹 Example OS
• VMware ESXi
• Microsoft Hyper-V
• VirtualBox
Comparison Table

| Structure | Performance | Security | Scalability | Complexity | Example OS |
|---|---|---|---|---|---|
| Monolithic | High | Low | Low | High | Linux, UNIX |
| Layered | Medium | High | Medium | High | Windows NT |
| Microkernel | Low | High | High | High | Minix, QNX |
| Modular | High | Medium | High | Medium | Linux, Windows |
| Hybrid | High | High | High | High | macOS, Windows |
| Virtual Machine | Low | High | High | High | VMware, Hyper-V |
Conclusion
Each OS structure has its own advantages and trade-offs. The choice depends on system
requirements, performance needs, security concerns, and flexibility.
Q-4) Explain the various services provided by an Operating System.
Operating System (OS) services are the fundamental functionalities provided by the OS to
manage hardware, software, and system resources. These services allow user
applications to run efficiently, securely, and reliably by interacting with hardware and
managing system resources.
Key OS Services
• File Creation and Deletion: OS provides services to create, read, write, and delete
files.
• File Organization: OS organizes files into directories (folders) and maintains the
structure using metadata (file names, permissions, sizes, timestamps).
• Access Control: OS enforces access control policies to determine who can read,
write, or execute specific files.
• File Storage Management: OS manages physical disk storage by organizing files
into blocks, sectors, and tracks. It handles file storage, retrieval, and caching for
efficient access.
• CPU Scheduling: OS determines how and when a process gets access to the CPU.
Scheduling policies and algorithms are used to ensure fairness and efficiency.
• Resource Allocation: OS allocates system resources (CPU time, memory, I/O
devices) to different processes based on priority and availability.
• Deadlock Handling: OS prevents and resolves deadlocks (situations where two or
more processes are waiting indefinitely for each other’s resources) through
techniques like resource allocation graphs and timeout policies.
• Timers: OS uses timers to schedule and manage events, like task timeouts,
process suspension, and alarms.
• Time Sharing: OS allocates CPU time to processes in a round-robin fashion,
allowing multiple processes to share the CPU in a time-sliced manner.
Summary of OS Services
| Service | Description |
|---|---|
| Process Management | Process creation, scheduling, termination, and inter-process communication. |
| Memory Management | Allocation, protection, swapping, and virtual memory management. |
| File System Management | File creation, deletion, access control, and storage management. |
| Device Management | Interaction with hardware devices, I/O operations, buffering, and interrupt handling. |
| Security and Access Control | User authentication, permissions, encryption, and auditing. |
| Network Management | Network communication, configuration, and security. |
| System Resource Management | CPU scheduling, resource allocation, and deadlock handling. |
| User Interface | CLI, GUI, and shell environments for user interaction. |
| Time Management | System time tracking, timers, and time-sharing. |
| Error Handling and Logging | Error detection, reporting, and system health monitoring. |
Conclusion
Operating System services are crucial for the management of resources and efficient
operation of a computer system. These services ensure resource allocation, security,
and user interaction, making it possible for users and applications to interact with
hardware in a stable and controlled environment.
Q-5) For the given set of jobs, compute the average waiting time and average turnaround time under FCFS, SJF, SRTN, Priority, and Round Robin scheduling.
Soln: Let's break down the scheduling for each algorithm to compute the average waiting
time and average turnaround time.
Job Information
Assumed process data (inferred from the calculations below):

| Process | Arrival Time | Burst Time | Priority |
|---|---|---|---|
| P1 | 0 ms | 10 ms | 2 |
| P2 | 2 ms | 1 ms | 1 |
| P3 | 4 ms | 2 ms | 5 |
| P4 | 8 ms | 4 ms | 4 |
| P5 | 12 ms | 3 ms | 3 |
Formulas
• Turnaround Time (TAT) = Completion Time - Arrival Time
• Waiting Time (WT) = Turnaround Time - Burst Time
1. FCFS (First-Come, First-Served)
Execution Order: P1 (0-10), P2 (10-11), P3 (11-13), P4 (13-17), P5 (17-20)
2. SJF (Shortest Job First, non-preemptive)
Execution Order: P1 (0-10), P2 (10-11), P3 (11-13), P5 (13-16), P4 (16-20)
SRTN is a preemptive version of SJF. It prioritizes jobs with the shortest remaining time.
Execution Order: P1 (0-2), P2 (2-3), P1 (3-4), P3 (4-6), P1 (6-8), P4 (8-12), P5 (12-15), P1 (15-20)
• TAT for P2 = 3 - 2 = 1 ms
• TAT for P3 = 6 - 4 = 2 ms
• TAT for P4 = 12 - 8 = 4 ms
• TAT for P5 = 15 - 12 = 3 ms
• TAT for P1 = 20 - 0 = 20 ms
In preemptive priority scheduling, the process with the highest priority (lowest priority
number) runs first. A running process can be preempted if a new process with a higher
priority arrives.
Execution Order: P1 (0-2), P2 (2-3), P1 (3-11), P4 (11-12), P5 (12-15), P4 (15-18), P3 (18-20)
• P1 (priority 2) starts at t = 0 as the only process; P2 (priority 1) arrives at t = 2, preempts it, and finishes at t = 3.
• P1 resumes at t = 3 and completes at t = 11 (P3 and P4, which arrive meanwhile, have lower priority).
• P4 (priority 4) starts at t = 11 but is preempted at t = 12 by P5 (priority 3), which runs to completion at t = 15.
• P4 resumes and finishes at t = 18; P3 (priority 5) runs last, from t = 18 to t = 20.
• TAT for P1 = 11 - 0 = 11 ms
• TAT for P2 = 3 - 2 = 1 ms
• TAT for P3 = 20 - 4 = 16 ms
• TAT for P4 = 18 - 8 = 10 ms
• TAT for P5 = 15 - 12 = 3 ms
Execution Order:
• TAT for P1 = 10 ms
• TAT for P2 = 1 ms
• TAT for P3 = 12 ms
• TAT for P4 = 12 ms
• TAT for P5 = 10 ms
Round Robin (Time Quantum = 1 ms)
Execution Order:
• Process 1 (10 ms) runs for 1 ms, then Process 2 (1 ms) runs for 1 ms, then Process 3
(2 ms) runs for 1 ms, and so on, with each process getting 1 ms of CPU time in a
cyclic manner until completion.
• WT for P1 = 8 ms
• WT for P2 = 0 ms
• WT for P3 = 8 ms
• WT for P4 = 8 ms
• WT for P5 = 5 ms
• TAT for P1 = 18 ms
• TAT for P2 = 1 ms
• TAT for P3 = 10 ms
• TAT for P4 = 14 ms
• TAT for P5 = 15 ms
Summary Table:

| Scheduling Algorithm | Average Waiting Time | Average Turnaround Time |
|---|---|---|
| FCFS | 5 ms | 9 ms |
| SJF | 4.8 ms | 8.8 ms |
Q-6) Explain Process states and Process State transitions with a diagram.
1. New: The process is being created. The operating system is initializing the process,
and the process is not yet ready to be executed.
2. Ready: The process is waiting for CPU time to execute. It is loaded into memory and
is ready to run, but the CPU is not currently available.
3. Running: The process is currently being executed by the CPU. This is the state
where the actual execution of the program takes place.
4. Blocked/Waiting: The process cannot continue execution because it is waiting for
some event, such as I/O completion or waiting for resources (e.g., data from disk).
5. Terminated/Exit: The process has finished execution, or it has been killed by the
operating system. It has released all the resources, and the operating system
removes it from the process table.
The transitions between the process states happen due to events such as the availability of
resources, I/O completion, or the preemption of the CPU. The transitions are as follows:
1. New → Ready: When the process is initialized and is waiting for the CPU to execute.
2. Ready → Running: When the CPU becomes available, the process is scheduled to
run.
3. Running → Blocked: If the process requires I/O or some event that is not available
immediately, it transitions to the blocked state.
4. Blocked → Ready: Once the event or I/O completes, the process is moved back to
the ready state, waiting for the CPU.
5. Running → Terminated: When the process finishes execution, it moves to the
terminated state.
6. Ready → Terminated: This occurs when a process is aborted or killed by the
operating system.
Process State Transition Diagram:

          +----------------------------------------+
          |          (aborted / killed)            |
  New --> Ready ---------> Running ---------> Terminated
           ^  ^              |  |
           |  | (preempted)  |  |
           |  +--------------+  |
           |                    | (waits for I/O or event)
           |                    v
           +----------------- Blocked
              (I/O or event completes)
Explanation of Transitions:
1. New → Ready: The operating system moves the process to the ready state once it is
initialized and ready for execution.
2. Ready → Running: The process scheduler selects a process from the ready queue
based on the scheduling algorithm (e.g., FCFS, SJF, etc.), and the process starts
execution on the CPU.
3. Running → Blocked: The process enters the blocked state if it needs to wait for
some external event (e.g., waiting for I/O operations like disk read/write).
4. Blocked → Ready: After the event that the process was waiting for is completed
(e.g., the I/O operation finishes), the process is moved back to the ready queue to
wait for the CPU.
5. Running → Terminated: When the process completes its execution, it moves to the
terminated state, and its resources are released by the OS.
6. Ready → Terminated: A process may also be terminated before it can run, either
because the user aborted it or due to a system error.
This cycle continues for all processes during their lifecycle in an operating system.
User-level threads are threads that are managed entirely by the user-level thread library,
and not by the operating system kernel. In this model, the operating system kernel is
unaware of the existence of threads within a process. The threads are created, scheduled,
and managed in user space by a user-level thread library (such as pthreads).
Example:
• A program using a user-space threading library such as GNU Pth, or Java's early "green threads", where the kernel sees only a single process.
Kernel-level threads are threads that are directly managed by the operating system
kernel. The kernel is aware of each thread and can schedule them independently, allowing
for more efficient thread management, especially in a multi-core processor environment.
Example:
• An operating system like Linux or Windows, where the kernel itself creates and
manages threads for applications.
Hybrid Approach
Some modern systems use a hybrid approach that combines the benefits of both user-
level threads and kernel-level threads. For example:
• Solaris and Linux use such hybrid models, which allow them to manage threads at
both the user level and kernel level.
A context switch is the process of storing the state of a currently running process or
thread and loading the state of another process or thread. This is essential for multitasking
in a system where the CPU switches between different processes or threads, giving the
illusion of simultaneous execution.
The context of a process or thread consists of its program counter (PC), registers, and
stack pointer, along with other essential information that defines the state of the process
at any given time. When the operating system decides to switch between processes or
threads (due to time slicing, I/O operations, or preemption), it performs a context switch to
save and load the necessary information.
1. Saving the Context of the Current Process:
a. The CPU's program counter, registers, and stack pointer are saved into a data structure called the Process Control Block (PCB) or Thread Control Block (TCB) of the current process.
2. Selecting the Next Process to Run:
a. The operating system selects the next process to run from the ready queue
based on the scheduling algorithm (e.g., FCFS, Round Robin, SJF).
3. Loading the Context of the New Process:
a. The state of the new process is retrieved from its PCB/TCB, and the values of
its registers and program counter are loaded into the CPU. This step restores
the process to the state it was in when it was last interrupted.
4. Resuming Execution:
a. The CPU starts executing the instructions of the new process as per its saved
context.
Example: consider two processes, P1 and P2, where P2 is waiting in the ready queue.
• P1 is currently running, and the operating system decides to switch to P2 for some
reason (e.g., time quantum has expired, or a higher-priority process needs to run).
1. Process P1 (Running):
a. State of P1:
i. Program Counter (PC) = 1000 (indicating the next instruction to
execute).
ii. Registers = {R1=5, R2=10, R3=20}
iii. Stack Pointer (SP) = 0x2000
iv. PCB of P1: Contains all the state information for P1.
2. Context Switch Initiation:
a. The operating system decides to stop P1 and switch to P2.
3. Saving P1's Context:
a. The OS saves the values of PC, registers, and SP of P1 into P1's PCB.
4. Selecting Process P2:
a. The scheduler selects P2 from the ready queue.
5. Loading P2's Context:
a. The OS loads the saved state (PC, registers, SP) of P2 from its PCB, and the CPU continues execution from P2's saved program counter.
• Context switching introduces overhead because saving and restoring the process
context takes time, during which the CPU is not executing any application code.
• The time taken for a context switch depends on various factors, including the
number of registers, the size of the PCB, and the architecture of the CPU.
Consider the scenario where a system uses Round Robin (RR) scheduling with a time
quantum of 10ms:
In each time slice, a context switch occurs, and the process state is saved and loaded,
allowing the operating system to switch between processes.
• Benefits:
o Multitasking: Allows multiple processes or threads to run concurrently,
making better use of CPU time.
o Preemptive Scheduling: Ensures that higher-priority processes can take
control of the CPU as needed.
• Costs:
o Performance Overhead: The time taken to save and restore context reduces
the CPU time available for actual execution.
Aarav Saroliya
12302110501001
Operating Systems
Conclusion