
Structure and Role of an Operating System in a Computing Environment

An Operating System (OS) is system software that manages hardware and software resources in
a computer. It provides essential services for computer programs and ensures efficient execution
of processes.

Structure of an OS

1. Kernel – The core of the OS, managing system resources and communication between
hardware and software.
2. Shell/User Interface – Provides interaction with the OS (CLI or GUI).
3. File System – Manages storage, organization, retrieval, and permissions of files.
4. Process Management – Handles process scheduling, execution, and termination.
5. Memory Management – Allocates and deallocates memory to processes.
6. Device Management – Controls peripheral devices (printers, keyboards, etc.).

Role of an OS

• Resource Allocation: Assigns CPU, memory, and I/O devices to processes.
• Process Management: Handles process creation, execution, synchronization, and termination.
• File System Management: Organizes and secures data storage.
• Security and Access Control: Protects data and system resources.

Abstract Model of Computing and Its Components

A computing system consists of three major components:

1. Hardware Layer – Physical components like CPU, memory, and I/O devices.
2. Operating System Layer – Manages hardware and provides services for applications.
3. Application Layer – User programs that interact with the OS to execute tasks.

The abstract model illustrates how these layers interact:

• Users → Interact with applications.
• Applications → Request OS services via system calls.
• Operating System → Manages hardware resources and execution.
• Hardware → Executes machine instructions.

Process Creation Using C Functions (FORK, JOIN, QUIT)

1. FORK – The fork() system call creates a new child process. The child receives a copy of the parent's address space.
2. JOIN – A process can wait for a child process to complete using wait() before continuing execution.
3. QUIT – A process terminates using exit().

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    pid_t pid = fork(); // Create a new process

    if (pid < 0) {
        perror("Fork failed");
        return 1;
    } else if (pid == 0) {
        printf("Child Process: PID = %d\n", getpid());
        exit(0); // Child process exits
    } else {
        wait(NULL); // Parent waits for child to complete (JOIN)
        printf("Parent Process: PID = %d\n", getpid());
    }

    return 0; // Parent process exits (QUIT)
}

Difference Between Processes and Threads

Feature          Process                              Thread
Definition       A program in execution               A lightweight subunit of a process
Memory           Separate memory space                Shared memory within a process
Communication    Inter-Process Communication (IPC)    Direct communication via shared memory
Creation Time    Slow (due to memory allocation)      Fast (less overhead)
Dependency       Independent execution                Dependent on parent process

Process Example (Using fork()):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main() {
    fork(); // Both parent and child execute the lines that follow
    printf("Hello from process %d\n", getpid());
    return 0;
}

Thread Example (Using pthreads):

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

void *thread_function(void *arg) {
    printf("Hello from thread!\n");
    return NULL;
}

int main() {
    pthread_t thread;
    pthread_create(&thread, NULL, thread_function, NULL);
    pthread_join(thread, NULL);
    return 0;
}

Key Difference:

• fork() creates a new process with a separate memory space.
• pthread_create() creates a new thread within the same process, sharing memory.

These concepts are fundamental in OS design and are critical for efficient multitasking.

System View of Processes and Their Associated Resources

In an operating system (OS), a process is an instance of a running program. The OS is responsible for managing processes efficiently while ensuring that system resources are allocated properly. Each process requires resources such as CPU time, memory, files, and I/O devices to execute. The OS acts as an intermediary, handling resource allocation, process scheduling, and communication between processes.

Processes can be categorized into:

• User Processes: Programs executed by users, such as applications.
• System Processes: Background tasks managed by the OS, like system daemons.

The OS ensures proper execution of processes by maintaining data structures such as the Process
Control Block (PCB), which stores essential information about each process, including its state,
program counter, CPU registers, memory allocation, and scheduling details.
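
For illustration, the PCB can be pictured as a C structure. This is a minimal sketch with hypothetical field names that mirror the information listed above; real kernels (e.g., Linux's task_struct) are far more elaborate.

typedef enum { NEW, READY, RUNNING, BLOCKED, TERMINATED } proc_state_t;

typedef struct pcb {
    int pid;                /* process identifier */
    proc_state_t state;     /* current process state */
    unsigned long pc;       /* saved program counter */
    unsigned long regs[16]; /* saved CPU registers */
    void *address_space;    /* memory-management information */
    int priority;           /* scheduling priority */
    struct pcb *next;       /* link in a scheduling queue */
} pcb_t;
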
The OS manages the following resources for processes:

1. CPU – The OS schedules processes using algorithms like First Come First Serve
(FCFS), Round Robin (RR), Shortest Job Next (SJN), and Priority Scheduling.
2. Memory – Each process gets its own address space for execution, which includes code,
data, stack, and heap segments.
3. I/O Devices – Processes interact with devices like keyboards, printers, and disks through
device drivers and system calls.
4. File System – Processes may need access to files; the OS provides controlled access
through permissions and file management.

By efficiently managing processes and their resources, the OS ensures multi-tasking, multi-user support, and system stability.

Steps Involved in Initializing the OS During Address Space Creation

The OS undergoes several steps to set up address spaces and initialize processes:

1. Bootstrapping (System Startup)
o The OS is loaded into memory when the system is powered on.
o The bootstrap loader (BIOS/UEFI) initializes the hardware and loads the OS kernel.
2. Process Creation and Address Space Allocation
o The OS allocates virtual memory for each process.
o Memory is divided into text (code), data, stack, and heap segments.
o The OS loads the executable file of a program into memory.
3. Memory Management and Address Mapping
o The OS sets up page tables and segment tables for efficient memory access.
o If the OS supports virtual memory, it initializes the page swapping mechanism.
4. Loading and Execution of the Program
o The program’s code is copied from disk to memory.
o The process is placed in the Ready Queue for execution.
o The program counter is set to the first instruction.
5. Process Scheduling and Execution
o The OS selects a process from the Ready Queue.
o CPU registers are loaded with the process's state.
o Execution begins under the supervision of the OS.

This ensures that address spaces are initialized correctly, and processes can execute smoothly.

Process Abstraction and Process State Diagram

A process is an abstraction that represents an executing program. The OS abstracts hardware complexities, allowing multiple processes to run efficiently.

The Process State Diagram illustrates how a process moves through different states during
execution:
1. New – The process has been created but not yet admitted to the ready queue.
2. Ready – The process is loaded into memory and waiting for CPU execution.
3. Running – The process is currently being executed by the CPU.
4. Blocked (Waiting) – The process is waiting for an event (e.g., I/O operation).
5. Terminated – The process has completed execution or was forcibly stopped.

Transitions between states occur due to scheduling decisions, resource availability, and external
events. The OS manages these transitions to ensure efficient multitasking.
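
The diagram can be encoded directly as a transition function. The sketch below is illustrative; the event names (ADMIT, DISPATCH, and so on) are invented labels for the transitions described above.

typedef enum { NEW, READY, RUNNING, BLOCKED, TERMINATED } state_t;
typedef enum { ADMIT, DISPATCH, TIMEOUT, IO_WAIT, IO_DONE, EXIT } event_t;

/* Return the next state for a given state/event pair. */
state_t transition(state_t s, event_t e) {
    switch (s) {
    case NEW:     if (e == ADMIT)    return READY;      break;
    case READY:   if (e == DISPATCH) return RUNNING;    break;
    case RUNNING: if (e == TIMEOUT)  return READY;
                  if (e == IO_WAIT)  return BLOCKED;
                  if (e == EXIT)     return TERMINATED; break;
    case BLOCKED: if (e == IO_DONE)  return READY;      break;
    default: break;
    }
    return s; /* undefined event: remain in the current state */
}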

Strategies for Refining the Process Manager to Improve Resource Allocation

To enhance process management and ensure better resource allocation, the OS employs several
strategies:

1. Efficient CPU Scheduling
o The OS uses advanced scheduling algorithms like Round Robin, Shortest Job First, and Multi-Level Queue Scheduling to balance resource utilization.
o This prevents CPU starvation and ensures fair allocation of processing time.
2. Dynamic Memory Management
o The OS uses paging and segmentation to manage memory efficiently.
o Dynamic allocation methods like First Fit, Best Fit, and Worst Fit help in
optimizing memory usage.
3. Virtualization and Resource Sharing
o Techniques like multi-threading and virtual machines allow better utilization
of CPU and memory.
o The OS ensures that multiple processes can share resources securely.
4. Deadlock Prevention and Avoidance
o Deadlocks occur when processes hold resources while waiting for others to be
freed.
o Strategies like Banker’s Algorithm, Resource Ordering, and Deadlock
Detection help prevent system hangs.
5. Process Prioritization and Load Balancing
o High-priority processes (e.g., real-time applications) get more CPU time.
o Load balancing ensures that no single CPU core is overburdened in multi-core
systems.

By refining process management, the OS can enhance performance, resource utilization, and
system responsiveness.

Difference Between Voluntary and Involuntary CPU Sharing

CPU sharing occurs in multitasking environments where multiple processes need to use the CPU.
This can be voluntary or involuntary, depending on how control of the CPU is relinquished.

1. Voluntary CPU Sharing

In voluntary CPU sharing, a process willingly gives up control of the CPU. This usually happens when:

• A process is waiting for I/O operations (e.g., reading a file from disk).
• The process explicitly calls a sleep or yield function.
• The process completes its execution.

Example:
A web browser waiting for a network response releases the CPU so other processes can run.
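
A minimal sketch of voluntary yielding using the POSIX sched_yield() call, which hints to the scheduler that another ready process may run:

#include <stdio.h>
#include <sched.h>

int main() {
    for (int i = 0; i < 3; i++) {
        printf("Doing some work (step %d)\n", i);
        sched_yield(); /* voluntarily relinquish the CPU */
    }
    return 0;
}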

2. Involuntary CPU Sharing

In involuntary CPU sharing, the operating system forces a process to release the CPU. This is
done using preemptive scheduling when:

• A higher-priority process needs the CPU.
• A time quantum expires in a round-robin scheduling strategy.
• The OS enforces fairness among processes.

Example:
A word processor running in the background is interrupted by an incoming video call
application, which takes CPU control.

Comparison of Non-Preemptive Scheduling Strategies

Non-preemptive scheduling means a process, once given CPU time, runs until it completes or
blocks for I/O. The OS does not interrupt it.

1. First-Come, First-Served (FCFS)

• Description: Processes are scheduled in the order they arrive.
• Implementation: A simple FIFO queue.
• Advantages: Easy to implement, fair to processes.
• Disadvantages: Can cause the convoy effect, where a long process delays shorter ones.

Example:

Process   Arrival Time   Burst Time   Completion Time   Turnaround Time
P1        0              10           10                10
P2        2              5            15                13
P3        3              2            17                14
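
The completion and turnaround times above can be reproduced with a short calculation. A minimal sketch, assuming the processes are already ordered by arrival time:

#include <stdio.h>

int main() {
    int arrival[] = {0, 2, 3};
    int burst[]   = {10, 5, 2};
    int n = 3, clock = 0;

    for (int i = 0; i < n; i++) {
        if (clock < arrival[i]) clock = arrival[i]; /* CPU idle until arrival */
        clock += burst[i];                          /* run to completion */
        printf("P%d: completion = %d, turnaround = %d\n",
               i + 1, clock, clock - arrival[i]);
    }
    return 0;
}
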
2. Priority Scheduling (Non-Preemptive)

• Description: Processes are assigned a priority. The highest-priority process runs first.
• Implementation: Uses a priority queue.
• Advantages: Ensures critical tasks get CPU time.
• Disadvantages: Starvation may occur, where low-priority processes never execute.

Example:

Process   Priority   Arrival Time   Burst Time   Completion Time   Turnaround Time
P1        2          0              5            5                 5
P2        1          2              3            8                 6
P3        3          3              2            10                7

Here a lower number means higher priority. P1 runs first because it is the only process that has arrived at time 0; after it finishes, P2 (priority 1) runs before P3 (priority 3). Lower-priority processes may suffer delays.

Preemptive Scheduling Strategies and System Performance

Preemptive scheduling allows the OS to interrupt a process and switch to another. This
improves responsiveness but can increase context switching overhead.

1. Round-Robin Scheduling (RR)

• Description: Each process gets a fixed time slice (quantum) and is preempted when it
expires.
• Implementation: Uses a circular queue.
• Advantages: Ensures fairness, prevents CPU monopolization.
• Disadvantages: Too small a quantum increases context switches; too large behaves like
FCFS.

Example (Time Quantum = 3ms):

Process   Arrival Time   Burst Time
P1        0              10
P2        1              5
P3        2              2

Completion order: P1 → P2 → P3 → P1 → P2 → P1

RR improves system responsiveness but increases overhead due to frequent context switching.
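
A compact simulation of this schedule. The sketch assumes all three processes are already in the ready queue in arrival order; once only P1 remains, it simply receives consecutive quanta:

#include <stdio.h>

int main() {
    int remaining[] = {10, 5, 2}; /* burst time left for P1, P2, P3 */
    int n = 3, quantum = 3, clock = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            clock += slice;
            remaining[i] -= slice;
            printf("P%d ran for %d ms (t = %d)\n", i + 1, slice, clock);
            if (remaining[i] == 0) done++;
        }
    }
    return 0;
}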

Partitioning Processes into Smaller Sub-Processes for Efficient Scheduling

To improve execution efficiency, large processes can be divided into smaller sub-processes
(threads or tasks).

Methods of Partitioning

1. Multithreading – A single process is split into multiple threads sharing the same
memory.
o Example: A web browser with a separate thread for rendering, downloading, and
UI response.
2. Pipeline Processing – A large computation is divided into stages, where each stage
executes in parallel.
o Example: Compiler stages (Lexical Analysis → Parsing → Code Generation).
3. Process Pooling – Dividing workloads among multiple worker processes.
o Example: A web server handling multiple HTTP requests.

By partitioning processes into smaller tasks, CPU utilization improves, and execution becomes
more efficient.
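
As a small illustration of multithreading as a partitioning method, the sketch below splits an array sum between two worker threads (the data and thread count are invented for the example):

#include <stdio.h>
#include <pthread.h>

#define N 8

int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};
long partial[2];

/* Each worker sums its half of the array. */
void *sum_half(void *arg) {
    int id = *(int *)arg;
    for (int i = id * (N / 2); i < (id + 1) * (N / 2); i++)
        partial[id] += data[i];
    return NULL;
}

int main() {
    pthread_t t[2];
    int ids[2] = {0, 1};
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, sum_half, &ids[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    printf("Total = %ld\n", partial[0] + partial[1]);
    return 0;
}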

1. Requirements on Primary Memory for an Effective Memory Management System

An effective memory management system ensures optimal utilization of primary memory (RAM) while maintaining system performance and stability. The following are key requirements:

A. Memory Allocation and Deallocation

• The OS must allocate memory efficiently to processes and reclaim it after execution.
• Techniques such as contiguous and non-contiguous allocation are used to manage memory
dynamically.

B. Protection and Isolation

• Processes should not interfere with each other’s memory.
• The OS enforces memory protection mechanisms to prevent unauthorized access.

C. Address Translation

• The OS must map logical addresses (used by processes) to physical addresses (actual memory
locations).
• This is done using page tables and segmentation tables in modern memory management
systems.
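
For illustration, with 4 KB pages a logical address splits into a page number and an offset. A minimal sketch of the lookup, where page_table is a hypothetical stand-in for the MMU's real structures:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

int main() {
    uint32_t page_table[] = {7, 3, 12, 5}; /* page number -> frame number */
    uint32_t vaddr = 2 * PAGE_SIZE + 100;  /* an address on page 2 */

    uint32_t page   = vaddr / PAGE_SIZE;
    uint32_t offset = vaddr % PAGE_SIZE;
    uint32_t paddr  = page_table[page] * PAGE_SIZE + offset;

    printf("logical 0x%x -> physical 0x%x\n", vaddr, paddr);
    return 0;
}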

D. Swapping and Virtual Memory Support

• The OS should support swapping, where inactive processes are temporarily moved to disk to
free RAM.
• Virtual memory extends physical memory using paging or segmentation, allowing processes to
use more memory than available RAM.

E. Fragmentation Handling

• The OS should manage internal and external fragmentation effectively to maximize memory
utilization.
• Compaction, paging, and segmentation are used to address fragmentation issues.

F. Multi-User and Multi-Tasking Support

• In multi-user systems, the OS must allocate memory fairly among users and prevent starvation.
• Memory management techniques ensure smooth execution of multiple processes.

2. Fixed-Partition and Variable-Partition Memory Strategies

A. Fixed-Partition Memory Management

In this strategy, RAM is divided into fixed-sized partitions at system startup. Each partition
can hold one process at a time.

Characteristics:

• Static Allocation: Partitions are created at boot time and do not change.
• Simple Implementation: Easy to manage as partitions are pre-defined.
• Fast Access: No overhead of dynamic allocation.

Disadvantages:

• Internal Fragmentation: If a process is smaller than the partition size, unused memory is
wasted.
• Limited Flexibility: Cannot accommodate processes larger than a partition.

Example:

If a system has 4 GB of RAM divided into four 1 GB partitions, a process requiring 1.5 GB
cannot run because no partition is large enough.

B. Variable-Partition Memory Management

This strategy allows dynamic allocation of memory based on process size. Partitions are
created as needed, and their sizes vary depending on process requirements.

Characteristics:

• Efficient Memory Utilization: Reduces internal fragmentation by allocating only the required
memory.
• Flexibility: Can accommodate processes of varying sizes.

Disadvantages:
• External Fragmentation: As processes terminate, free memory spaces are scattered, leading to
wasted memory.
• Compaction Overhead: The OS may need to rearrange memory to combine free spaces
(memory compaction).

Example:

If a system has 4 GB of RAM, it can allocate 1 GB to Process A, 2 GB to Process B, and 1 GB to Process C dynamically. If Process A terminates, 1 GB is freed, which another process can use.
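
The First Fit policy mentioned earlier can be sketched in a few lines: scan the list of free holes and take the first one large enough. The hole sizes below are invented for illustration:

#include <stdio.h>

/* Return the index of the first hole that can satisfy the request, or -1. */
int first_fit(int holes[], int n, int request) {
    for (int i = 0; i < n; i++)
        if (holes[i] >= request)
            return i;
    return -1;
}

int main() {
    int holes[] = {100, 500, 200, 300}; /* free block sizes in MB */
    int idx = first_fit(holes, 4, 212);
    if (idx >= 0)
        printf("212 MB request placed in hole %d (%d MB)\n", idx, holes[idx]);
    return 0;
}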

3. Concept and Working of Virtual Memory

A. Concept of Virtual Memory

Virtual memory is a memory management technique that allows processes to use more
memory than physically available RAM by storing parts of a process in disk storage.

• Logical Address Space: Virtual memory provides an abstraction of a larger memory space.
• Swapping Mechanism: The OS swaps inactive portions of a process to disk (swap space) and
loads necessary parts into RAM.
• Demand Paging: Only required pages of a process are loaded into memory, improving
efficiency.

B. Working of Virtual Memory

1. Page Table Management
o The OS divides virtual memory into pages and physical memory into frames.
o The page table maps virtual addresses to physical addresses.
2. Page Fault Handling
o When a requested page is not in memory, a page fault occurs.
o The OS loads the missing page from disk into memory.
3. Thrashing Prevention
o If too many page faults occur, the system experiences thrashing (constant swapping
between RAM and disk).
o The OS uses page replacement algorithms such as LRU (Least Recently Used) or FIFO (First-In-First-Out) to manage paging efficiently; a small LRU sketch follows below.
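
A minimal LRU sketch: each frame records when it was last used, and on a fault the oldest (or an empty) frame is replaced. The reference string and frame count are invented for illustration:

#include <stdio.h>

#define FRAMES 3

int main() {
    int refs[] = {1, 2, 3, 1, 4, 2};
    int n = 6, frames[FRAMES], last_use[FRAMES], faults = 0;

    for (int i = 0; i < FRAMES; i++) { frames[i] = -1; last_use[i] = -1; }

    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int i = 0; i < FRAMES; i++)
            if (frames[i] == refs[t]) hit = i;
        if (hit < 0) {                       /* page fault */
            int victim = 0;
            for (int i = 1; i < FRAMES; i++) /* empty or least recently used */
                if (frames[i] == -1 || last_use[i] < last_use[victim])
                    victim = i;
            frames[victim] = refs[t];
            hit = victim;
            faults++;
        }
        last_use[hit] = t; /* record the reference time */
    }
    printf("Page faults: %d\n", faults);
    return 0;
}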

C. Advantages of Virtual Memory

✔ Allows execution of large programs without needing large RAM.
✔ Improves multi-tasking by allowing multiple programs to run concurrently.
✔ Enhances security by isolating process address spaces.

D. Disadvantages of Virtual Memory

❌ Slower than RAM due to disk access latency.
❌ Increased disk wear due to frequent swapping.
❌ Thrashing can degrade performance if memory management is inefficient.

4. Memory Management in Shared-Memory Multiprocessors

A. Overview of Shared-Memory Multiprocessors

In multiprocessor systems, multiple CPUs share a common physical memory. The OS must
manage memory efficiently to ensure synchronization, security, and consistency among
processes.

B. Challenges in Shared Memory Management

1. Concurrency Control:
o Multiple processors may try to read/write to the same memory location.
o The OS uses locking mechanisms (mutexes, semaphores) to prevent data corruption.
2. Cache Coherence:
o Each processor has its cache for faster access.
o The OS ensures that all caches reflect consistent data across processors.
3. Memory Protection:
o The OS isolates memory regions of different processes to prevent conflicts.
o Access-control mechanisms ensure that only authorized processes can reach specific memory regions.
4. NUMA (Non-Uniform Memory Access) Optimization:
o In NUMA architectures, each processor has a local memory that is faster to access.
o The OS optimizes memory placement for minimized latency and improved
performance.

C. Strategies for Managing Shared Memory in Multiprocessors

✔ Interleaved Memory Allocation: Balances memory load across processors.
✔ Memory Partitioning: Divides memory into segments assigned to different CPUs.
✔ Synchronization Mechanisms: Uses semaphores, monitors, and barriers to coordinate memory access.

D. Example of Shared Memory in Multiprocessors

Consider a dual-processor system where CPU1 and CPU2 need to access shared data. If CPU1
updates a variable in cache, CPU2 must see the updated value. The OS ensures this through
cache coherence protocols like MESI (Modified, Exclusive, Shared, Invalid).
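
A minimal sketch of the concurrency-control point above: two threads increment a shared counter, and a pthread mutex serializes access so no update is lost:

#include <stdio.h>
#include <pthread.h>

long counter = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   /* enter the critical section */
        counter++;
        pthread_mutex_unlock(&lock); /* leave the critical section */
    }
    return NULL;
}

int main() {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("Final counter: %ld\n", counter); /* 200000 with the lock held */
    return 0;
}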
