Structure and Role of an Operating System in a Computing Environment
An Operating System (OS) is system software that manages hardware and software resources in
a computer. It provides essential services for computer programs and ensures efficient execution
of processes.
Structure of an OS
1. Kernel – The core of the OS, managing system resources and communication between
hardware and software.
2. Shell/User Interface – Provides interaction with the OS (CLI or GUI).
3. File System – Manages storage, organization, retrieval, and permissions of files.
4. Process Management – Handles process scheduling, execution, and termination.
5. Memory Management – Allocates and deallocates memory to processes.
6. Device Management – Controls peripheral devices (printers, keyboards, etc.).
Role of an OS
The OS sits in the middle of a layered computing environment:
1. Hardware Layer – Physical components like CPU, memory, and I/O devices.
2. Operating System Layer – Manages hardware and provides services for applications.
3. Application Layer – User programs that interact with the OS to execute tasks.
Fork and Join
1. FORK – The fork() system call creates a new child process. The child receives a copy of the parent process's address space.
2. JOIN – A process can wait for a child process to complete using wait() before continuing execution.
The following program combines fork() and wait():
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    pid_t pid = fork(); // Create a new process
    if (pid < 0) {
        perror("Fork failed");
        return 1;
    } else if (pid == 0) {
        printf("Child Process: PID = %d\n", getpid());
        exit(0); // Child process exits
    } else {
        wait(NULL); // Parent waits for child to complete (JOIN)
        printf("Parent Process: PID = %d\n", getpid());
    }
    return 0;
}
After fork(), both the parent and the child continue executing from the same point, so this program prints its message twice, once per process:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main() {
    fork(); // Parent and child both continue from here
    printf("Hello from process %d\n", getpid()); // Printed by both processes
    return 0;
}
The same fork/join pattern exists at the thread level; note that the thread's start routine must be defined and the program compiled with -pthread:
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

// Function executed by the new thread
void *thread_function(void *arg) {
    (void)arg; // Unused argument
    printf("Hello from thread\n");
    return NULL;
}

int main() {
    pthread_t thread;
    pthread_create(&thread, NULL, thread_function, NULL); // Create (fork) the thread
    pthread_join(thread, NULL); // Wait for the thread to finish (join)
    return 0;
}
Key Difference: fork() creates a new process with its own address space, while pthread_create() creates a thread that shares the address space of its process; wait() and pthread_join() are the corresponding join operations.
These concepts are fundamental in OS design and are critical for efficient multitasking.
The OS ensures proper execution of processes by maintaining data structures such as the Process
Control Block (PCB), which stores essential information about each process, including its state,
program counter, CPU registers, memory allocation, and scheduling details.
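As an illustration, a simplified PCB could be declared as follows (a minimal sketch; the field names and types are illustrative, not taken from any particular kernel):
#include <stdint.h>

typedef enum { NEW, READY, RUNNING, BLOCKED, TERMINATED } proc_state_t;

typedef struct pcb {
    int pid;                  // Process identifier
    proc_state_t state;       // Current state in the process state diagram
    uint64_t program_counter; // Saved program counter
    uint64_t registers[16];   // Saved CPU registers (illustrative count)
    void *page_table;         // Memory-management information
    int priority;             // Scheduling information
    struct pcb *next;         // Link for the scheduler's ready queue
} pcb_t;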
The OS manages the following resources for processes:
1. CPU – The OS schedules processes using algorithms like First Come First Serve (FCFS), Round Robin (RR), Shortest Job Next (SJN), and Priority Scheduling (a small FCFS sketch follows this list).
2. Memory – Each process gets its own address space for execution, which includes code,
data, stack, and heap segments.
3. I/O Devices – Processes interact with devices like keyboards, printers, and disks through
device drivers and system calls.
4. File System – Processes may need access to files; the OS provides controlled access
through permissions and file management.
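As a small illustration of item 1, the following sketch computes completion and turnaround times under FCFS; the arrival and burst values are invented for the example:
#include <stdio.h>

int main(void) {
    // Hypothetical processes, listed in arrival order (FCFS runs them in this order)
    int arrival[] = {0, 2, 3};
    int burst[] = {5, 3, 2};
    int n = 3, clock = 0;

    for (int i = 0; i < n; i++) {
        if (clock < arrival[i]) clock = arrival[i]; // CPU idles until the process arrives
        clock += burst[i];                          // Process runs to completion
        printf("P%d: completion=%d turnaround=%d\n", i + 1, clock, clock - arrival[i]);
    }
    return 0;
}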
By efficiently managing processes and their resources, the OS ensures multi-tasking, multi-user support, and system stability.
Steps Involved in Initializing the OS During Address Space Creation
The OS performs several steps to set up address spaces and initialize processes, ensuring that address spaces are initialized correctly and that processes can execute smoothly.
The Process State Diagram illustrates how a process moves through different states during
execution:
1. New – The process is created and waiting for OS approval.
2. Ready – The process is loaded into memory and waiting for CPU execution.
3. Running – The process is currently being executed by the CPU.
4. Blocked (Waiting) – The process is waiting for an event (e.g., I/O operation).
5. Terminated – The process has completed execution or was forcibly stopped.
Transitions between states occur due to scheduling decisions, resource availability, and external
events. The OS manages these transitions to ensure efficient multitasking.
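To make these transitions concrete, here is a minimal sketch representing the states and a few transitions in C (the names and the event set are illustrative, not from a real kernel):
#include <stdio.h>

typedef enum { NEW, READY, RUNNING, BLOCKED, TERMINATED } state_t;

state_t dispatch(state_t s) { return s == READY ? RUNNING : s; }   // Scheduler picks the process
state_t block(state_t s)    { return s == RUNNING ? BLOCKED : s; } // Process waits for I/O
state_t io_done(state_t s)  { return s == BLOCKED ? READY : s; }   // Awaited event completes
state_t preempt(state_t s)  { return s == RUNNING ? READY : s; }   // Time slice expires

int main(void) {
    state_t s = NEW;
    s = READY;       // Admitted by the OS
    s = dispatch(s); // READY -> RUNNING
    s = block(s);    // RUNNING -> BLOCKED (waiting for I/O)
    s = io_done(s);  // BLOCKED -> READY
    s = dispatch(s); // READY -> RUNNING
    s = preempt(s);  // RUNNING -> READY (time slice expired)
    s = dispatch(s); // READY -> RUNNING again
    s = TERMINATED;  // Execution completes
    printf("final state = %d\n", s);
    return 0;
}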
To enhance process management and ensure better resource allocation, the OS employs several strategies; by refining process management, it can improve performance, resource utilization, and system responsiveness.
CPU sharing occurs in multitasking environments where multiple processes need to use the CPU.
This can be voluntary or involuntary, depending on how control of the CPU is relinquished.
In voluntary CPU sharing, a process gives up the CPU on its own when:
• A process is waiting for I/O operations (e.g., reading a file from disk).
• The process explicitly calls a sleep or yield function.
• The process completes its execution.
Example:
A web browser waiting for a network response releases the CPU so other processes can run.
In involuntary CPU sharing, the operating system forces a process to release the CPU. This is done using preemptive scheduling, typically when a process's time slice expires or when a higher-priority process becomes ready.
Example:
A word processor running in the background is interrupted by an incoming video call
application, which takes CPU control.
Non-preemptive scheduling means a process, once given CPU time, runs until it completes or
blocks for I/O. The OS does not interrupt it.
Example: Priority Scheduling (non-preemptive)
• Description: Processes are assigned a priority. The highest-priority process runs first.
• Implementation: Uses a priority queue.
• Advantages: Ensures critical tasks get CPU time.
• Disadvantages: Starvation may occur, where low-priority processes never execute.
Example (assuming a lower priority number means higher priority):
Process  Priority  Arrival Time  Burst Time  Completion Time  Turnaround Time
P1       2         0             5           5                5
P2       1         2             3           8                6
P3       3         3             2           10               7
Higher-priority processes run first, but lower-priority ones may suffer delays.
Preemptive scheduling allows the OS to interrupt a process and switch to another. This
improves responsiveness but can increase context-switching overhead.
Example: Round Robin (RR)
• Description: Each process gets a fixed time slice (quantum) and is preempted when it
expires.
• Implementation: Uses a circular queue.
• Advantages: Ensures fairness, prevents CPU monopolization.
• Disadvantages: Too small a quantum increases context switches; too large behaves like
FCFS.
RR improves system responsiveness but increases overhead due to frequent context switching.
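A minimal Round Robin simulation under simplifying assumptions (all processes arrive at time 0; the quantum and burst values are invented for illustration):
#include <stdio.h>

int main(void) {
    int burst[] = {5, 3, 2}; // Remaining burst time per process (hypothetical)
    int n = 3, quantum = 2, clock = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (burst[i] == 0) continue;              // Already finished
            int slice = burst[i] < quantum ? burst[i] : quantum;
            clock += slice;                           // Process runs for one time slice
            burst[i] -= slice;
            if (burst[i] == 0) {                      // Process completes
                done++;
                printf("P%d completes at time %d\n", i + 1, clock);
            }
        }
    }
    return 0;
}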
To improve execution efficiency, large processes can be divided into smaller sub-processes
(threads or tasks).
Methods of Partitioning
1. Multithreading – A single process is split into multiple threads sharing the same memory (see the sketch after this list).
o Example: A web browser with a separate thread for rendering, downloading, and
UI response.
2. Pipeline Processing – A large computation is divided into stages, where each stage
executes in parallel.
o Example: Compiler stages (Lexical Analysis → Parsing → Code Generation).
3. Process Pooling – Dividing workloads among multiple worker processes.
o Example: A web server handling multiple HTTP requests.
By partitioning processes into smaller tasks, CPU utilization improves, and execution becomes
more efficient.
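As an illustration of method 1, the following sketch partitions an array sum across two POSIX threads; the data and the half-and-half split are invented for the example:
#include <stdio.h>
#include <pthread.h>

#define N 8
static int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};
static long sums[2]; // One partial sum per worker thread

static void *partial_sum(void *arg) {
    int id = *(int *)arg; // Thread 0 sums the first half, thread 1 the second
    long s = 0;
    for (int i = id * (N / 2); i < (id + 1) * (N / 2); i++)
        s += data[i];
    sums[id] = s;
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int ids[2] = {0, 1};
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, partial_sum, &ids[i]); // Fork the work
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);                          // Join the results
    printf("total = %ld\n", sums[0] + sums[1]);            // Prints 36
    return 0;
}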
A. Memory Allocation
• The OS must allocate memory efficiently to processes and reclaim it after execution.
• Techniques such as contiguous and non-contiguous allocation are used to manage memory
dynamically.
C. Address Translation
• The OS must map logical addresses (used by processes) to physical addresses (actual memory
locations).
• This is done using page tables and segmentation tables in modern memory management
systems.
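A toy illustration of page-based translation (the page size and table contents are invented; real page tables are multi-level and consulted by hardware):
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096 // 4 KiB pages: the low 12 bits of an address are the offset

int main(void) {
    // Hypothetical single-level page table: logical page number -> physical frame number
    uint32_t page_table[4] = {7, 3, 0, 9};

    uint32_t logical = 2 * PAGE_SIZE + 123;   // Page 2, offset 123
    uint32_t page = logical / PAGE_SIZE;      // Extract the page number
    uint32_t offset = logical % PAGE_SIZE;    // Extract the offset
    uint32_t physical = page_table[page] * PAGE_SIZE + offset;

    printf("logical %u -> physical %u\n", logical, physical);
    return 0;
}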
D. Swapping and Virtual Memory
• The OS should support swapping, where inactive processes are temporarily moved to disk to
free RAM.
• Virtual memory extends physical memory using paging or segmentation, allowing processes to
use more memory than available RAM.
E. Fragmentation Handling
• The OS should manage internal and external fragmentation effectively to maximize memory
utilization.
• Compaction, paging, and segmentation are used to address fragmentation issues.
F. Multi-User Memory Management
• In multi-user systems, the OS must allocate memory fairly among users and prevent starvation.
• Memory management techniques ensure smooth execution of multiple processes.
Fixed Partitioning
In this strategy, RAM is divided into fixed-sized partitions at system startup. Each partition
can hold one process at a time.
Characteristics:
• Static Allocation: Partitions are created at boot time and do not change.
• Simple Implementation: Easy to manage as partitions are pre-defined.
• Fast Access: No overhead of dynamic allocation.
Disadvantages:
• Internal Fragmentation: If a process is smaller than the partition size, unused memory is
wasted.
• Limited Flexibility: Cannot accommodate processes larger than a partition.
Example:
If a system has 4 GB of RAM divided into four 1 GB partitions, a process requiring 1.5 GB
cannot run because no partition is large enough.
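A minimal sketch of the admission check described above, using the four 1 GB partitions from the example:
#include <stdio.h>

int main(void) {
    long partition_mb[4] = {1024, 1024, 1024, 1024}; // Four fixed 1 GB partitions
    int in_use[4] = {0};
    long request_mb = 1536;                          // A 1.5 GB process

    // Find the first free partition large enough for the request
    for (int i = 0; i < 4; i++) {
        if (!in_use[i] && partition_mb[i] >= request_mb) {
            in_use[i] = 1;
            printf("Allocated partition %d (%ld MB wasted internally)\n",
                   i, partition_mb[i] - request_mb);
            return 0;
        }
    }
    printf("No partition is large enough: the process cannot run\n");
    return 0;
}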
Variable (Dynamic) Partitioning
This strategy allows dynamic allocation of memory based on process size. Partitions are
created as needed, and their sizes vary depending on process requirements.
Characteristics:
• Efficient Memory Utilization: Reduces internal fragmentation by allocating only the required
memory.
• Flexibility: Can accommodate processes of varying sizes.
Disadvantages:
• External Fragmentation: As processes terminate, free memory spaces are scattered, leading to
wasted memory.
• Compaction Overhead: The OS may need to rearrange memory to combine free spaces
(memory compaction).
Example:
If a 1 GB process terminates between two running processes, the freed 1 GB hole may be too small for a later 1.5 GB request even though total free memory is sufficient, illustrating external fragmentation.
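A simplified first-fit sketch of variable partitioning over a list of free holes (illustrative only; real allocators track holes with more elaborate structures):
#include <stdio.h>

#define MAX_HOLES 4

int main(void) {
    // Hypothetical free holes (in MB) left behind by terminated processes
    long hole_mb[MAX_HOLES] = {300, 1024, 200, 512};
    long request_mb = 700;

    // First fit: take the first hole that is big enough and shrink it
    for (int i = 0; i < MAX_HOLES; i++) {
        if (hole_mb[i] >= request_mb) {
            hole_mb[i] -= request_mb;
            printf("Allocated %ld MB from hole %d; %ld MB left over\n",
                   request_mb, i, hole_mb[i]);
            return 0;
        }
    }
    printf("External fragmentation: no single hole fits %ld MB\n", request_mb);
    return 0;
}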
Virtual Memory
Virtual memory is a memory management technique that allows processes to use more
memory than physically available RAM by storing parts of a process in disk storage.
• Logical Address Space: Virtual memory provides an abstraction of a larger memory space.
• Swapping Mechanism: The OS swaps inactive portions of a process to disk (swap space) and
loads necessary parts into RAM.
• Demand Paging: Only required pages of a process are loaded into memory, improving
efficiency.
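The following sketch simulates demand paging with a tiny FIFO-managed frame pool, counting page faults for an invented reference string:
#include <stdio.h>

#define FRAMES 3

int main(void) {
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1}; // Hypothetical page reference string
    int n = sizeof(refs) / sizeof(refs[0]);
    int frames[FRAMES] = {-1, -1, -1};     // -1 marks an empty frame
    int next = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < FRAMES; j++)
            if (frames[j] == refs[i]) hit = 1; // Page is already resident
        if (!hit) {
            frames[next] = refs[i];            // Load the page on demand (FIFO victim)
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    printf("page faults: %d of %d references\n", faults, n);
    return 0;
}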
Memory Management in Multiprocessor Systems
In multiprocessor systems, multiple CPUs share a common physical memory. The OS must
manage memory efficiently to ensure synchronization, security, and consistency among
processes.
1. Concurrency Control:
o Multiple processors may try to read/write to the same memory location.
o The OS uses locking mechanisms (mutexes, semaphores) to prevent data corruption (see the sketch after this list).
2. Cache Coherence:
o Each processor has its own cache for faster access.
o The OS ensures that all caches reflect consistent data across processors.
3. Memory Protection:
o The OS isolates memory regions of different processes to prevent conflicts.
o Techniques like Memory Access Control (MAC) ensure only authorized processes access
specific memory regions.
4. NUMA (Non-Uniform Memory Access) Optimization:
o In NUMA architectures, each processor has a local memory that is faster to access.
o The OS optimizes memory placement to minimize latency and improve performance.
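As a sketch of item 1's locking, here two threads increment a shared counter under a pthread mutex (a standard user-level pattern, shown for illustration rather than as a kernel implementation):
#include <stdio.h>
#include <pthread.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   // Only one thread updates the counter at a time
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter); // Always 200000 with the lock in place
    return 0;
}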
Consider a dual-processor system where CPU1 and CPU2 need to access shared data. If CPU1
updates a variable in cache, CPU2 must see the updated value. The OS ensures this through
cache coherence protocols like MESI (Modified, Exclusive, Shared, Invalid).