OS Unit 1
An operating system (OS) is system software that manages computer hardware and
software resources and provides common services for computer programs. It acts as an
intermediary between users and the computer hardware, enabling communication and
efficient resource usage.
1. Process Management
o Schedules processes for execution.
o Manages process creation, execution, and termination.
o Provides mechanisms for process synchronization and inter-process communication.
2. Memory Management
o Allocates and deallocates memory to processes.
o Keeps track of each byte in a computer’s memory.
o Manages virtual memory and swapping.
4. Device Management
o Manages device communication via drivers.
o Controls access to input/output devices (e.g., keyboards, printers, disk drives).
o Ensures efficient utilization of peripheral devices.
5. User Interface
o Provides interfaces like Command Line Interface (CLI) or Graphical User Interface
(GUI) for user interaction.
o Allows users to issue commands or manipulate graphical elements.
What is Booting?
Booting is the process of starting up a computer or device and loading the operating system
(OS) into the system's memory (RAM) so that the device can become operational. It involves
a series of steps that prepare the computer to execute user programs and applications.
The term "booting" is short for "bootstrap", which refers to the process of starting with a
small, basic program and gradually loading more complex software until the system is fully
functional.
Types of Booting:
A cold boot starts the computer from a powered-off state, while a warm boot restarts it
without cutting power. Either way, booting involves several steps. Below is a simplified
breakdown of the booting process:
2. Bootstrap Loader:
o After the POST, the computer looks for a small program called the Bootstrap
Loader (or simply "bootloader") that is usually stored in read-only memory (ROM)
or flash memory on the motherboard. This program's role is to locate and load the
operating system.
o The bootstrap loader typically looks for a bootable device, such as a hard drive, SSD,
or USB drive, based on the boot sequence configured in the system's BIOS or UEFI.
6. System Initialization:
o The kernel initializes system services and processes, such as user interface, network
services, and background system processes.
o User-level processes, such as login screens, desktop environments, and other system
applications, are started.
Operating systems can be classified based on their functionality and the type of tasks they are
designed to handle. Below is a detailed classification:
A batch operating system processes tasks (jobs) in groups or batches without user
interaction during execution. This system was widely used in early computers and is still used
in specific applications today.
3. Automation:
o Once a batch is submitted, jobs are executed without user intervention, saving time
and effort.
4. Cost-Effective for Large Jobs:
o Ideal for repetitive, long-running tasks, such as payroll processing or large-scale data
analysis.
5. Prioritization:
o Jobs can be scheduled based on priority, ensuring critical tasks are processed first.
6. Simplified Operation:
o Users submit jobs, and the system handles execution, making it straightforward for
end-users.
1. Lack of Interaction:
o Users cannot interact with their programs while they are running, which can be
problematic if input or corrections are required.
2. Debugging Challenges:
o Errors are typically identified after the batch execution is complete, making
debugging time-consuming.
3. Idle Time:
o If a job in the batch depends on user input or additional data, the system may remain
idle, waiting for input.
5. Expensive Setup:
o Early batch systems required specialized personnel and expensive setups to manage
and process batches efficiently.
7. Dependency on Operators:
o Early batch systems relied heavily on operators to group and load jobs, which could
introduce delays or errors.
Description: Provides direct communication between the user and the system.
Features:
o Real-time feedback to user commands.
o User inputs are processed immediately.
o Example: Modern desktop operating systems like Windows and macOS.
Use Cases: Word processing, web browsing, games.
Interactive systems are operating systems that allow direct communication between the user
and the computer during program execution. Examples include modern operating systems
like Windows, macOS, and Linux.
2. User-Friendly Interfaces:
o Designed for ease of use with graphical interfaces, command-line tools, or touch-
based controls.
4. Flexibility:
o Users can modify or control processes dynamically, making these systems adaptable
for diverse tasks.
5. Enhanced Productivity:
o Real-time interaction helps users perform multiple tasks quickly and efficiently.
7. Interactive Debugging:
o Developers can debug programs during execution by observing outputs and changing
inputs on the fly.
1. Resource Intensive:
o Requires significant CPU, memory, and I/O resources to support real-time
interaction.
2. Complexity:
o Managing simultaneous user interactions and processes increases system complexity.
3. Higher Cost:
o The hardware and software requirements for interactive systems are often more
expensive compared to simpler systems.
4. Security Risks:
o Increased interaction provides more opportunities for unauthorized access or security
breaches.
7. Dependency on User:
o Execution may stall if the system requires constant user input, reducing efficiency.
Description: Shares system resources among multiple users or processes by time slicing.
Features:
o Each user gets a small time slice (quantum) for their tasks.
o Ensures responsiveness for all users.
o Example: Unix, Linux.
Use Cases: Multi-user systems like servers.
2. Interactive Environment:
o Users can interact with the system in real time, making it suitable for multitasking
and multiuser applications.
3. Reduced Idle Time:
o The system switches between tasks quickly, ensuring minimal idle time for the CPU.
4. Quick Response:
o The system allocates time slices in such a way that users experience minimal delay.
6. Improved Productivity:
o Multiple users can work on the same system without interfering with each other,
increasing efficiency.
7. Isolation:
o Faults in one process do not typically affect other processes, ensuring system
stability.
2. Complex Implementation:
o The design of time-sharing systems involves complex scheduling and process
management.
4. Security Concerns:
o With multiple users sharing the system, ensuring data privacy and security is
challenging.
5. Potential Delays:
o A large number of processes or users can lead to delays in response time if the system
is overloaded.
7. Cost:
o The need for advanced hardware and software makes time-sharing systems more
expensive.
4. Real-Time Operating Systems (RTOS)
A Real-Time Operating System (RTOS) is designed to process data and execute tasks
within strict time constraints. It is commonly used in systems where timing is critical, such as
embedded systems, medical devices, and industrial automation.
1. Deterministic Behavior:
o RTOS ensures predictable and consistent response times, crucial for time-critical
applications.
2. High Reliability:
o Designed to function without failure under defined conditions, making them ideal for
mission-critical systems.
4. Low Latency:
o Tasks are processed with minimal delay, meeting stringent deadlines.
6. Supports Multitasking:
o Can handle multiple tasks simultaneously while adhering to time constraints.
8. Scalability:
o Suitable for both small-scale embedded systems and large-scale real-time
applications.
2. High Cost:
o RTOS solutions often require specialized hardware and software, increasing costs.
5. Debugging Challenges:
o Testing and debugging real-time systems is complex due to the time-critical nature of
tasks.
6. Lack of Flexibility:
o RTOS is optimized for specific applications, making it less adaptable to general-
purpose tasks.
8. Failure Risks:
o Missing a deadline in a real-time system can lead to critical failures, especially in
safety-critical applications.
Difference between Hard Real-Time and Soft Real-Time Operating Systems:
The key difference between Hard Real-Time and Soft Real-Time operating systems lies in
the criticality of meeting deadlines for task execution and the consequences of missing those
deadlines.
1. Increased Throughput:
o Multiprocessor systems can execute multiple tasks simultaneously, which leads to
higher overall throughput. By distributing tasks across processors, the system can
handle more operations in less time.
2. Improved Performance:
o Parallel Processing: The ability to execute parallel tasks increases system
performance, especially for computationally intensive applications, such as scientific
simulations, image processing, or complex calculations.
o Faster Processing: Multiprocessor systems can handle more data and complete tasks
faster by dividing the workload among several processors.
3. Enhanced Reliability:
o Fault Tolerance: With multiple processors, if one processor fails, the other
processors can take over its workload, ensuring that the system continues to operate
smoothly. This redundancy improves system reliability and availability.
o Error Recovery: The system can detect processor failure and reassign tasks, which
ensures continuity in service.
4. Scalability:
o Multiprocessor systems can easily scale by adding more processors, allowing for
growth in system capacity and performance without a major overhaul. The ability to
scale makes it easier to adapt to increasing demands.
6. Load Balancing:
o Multiprocessor operating systems can intelligently distribute tasks among processors,
ensuring that no processor is overloaded, and work is evenly spread out, optimizing
performance.
2. Cost:
o Hardware: A multiprocessor system is typically more expensive to build and
maintain than a single-processor system. The cost of multiple processors,
interconnects, and additional hardware (like memory) can add up.
o Power Consumption: More processors mean higher power consumption, which can
lead to increased operational costs, especially in large-scale systems.
3. Software Complexity:
o Parallel Programming: Software needs to be written to take advantage of multiple
processors, which often requires specialized knowledge of parallel programming.
Programs need to be divided into smaller tasks that can be processed concurrently.
o Concurrency Issues: Managing data consistency and preventing conflicts (such as
race conditions) when processors access shared resources or memory can be difficult.
4. Overhead:
o Communication Overhead: Multiprocessor systems require communication
between processors, especially when they are sharing data. This communication can
introduce significant overhead, affecting system performance if not carefully
managed.
o Context Switching: Managing multiple processors requires frequent context
switching, which can lead to additional overhead and reduced efficiency.
5. Diminishing Returns:
o Adding more processors to a system does not always result in a linear increase in
performance. There are diminishing returns as the number of processors increases due
to overhead from managing multiple processors, coordination, and inter-process
communication.
6. Hardware Dependence:
o Multiprocessor systems are often tightly coupled, meaning that the operating system
is highly dependent on the hardware architecture. The efficiency and capabilities of
the OS are limited by the underlying hardware's design.
Feature | Symmetric Multiprocessing (SMP) | Asymmetric Multiprocessing (AMP)
Processor Role | All processors are equal, with no master-slave relation | One master processor controls the system; the others are slaves
Memory Access | All processors share common memory | Only the master processor has access to memory
Task Management | All processors can manage and execute tasks | The master processor manages tasks and assigns them to slaves
Cost and Complexity | More expensive and complex | Less expensive and simpler
A Multiuser Operating System is designed to allow multiple users to access and interact
with the computer's resources simultaneously, or at different times, with the system ensuring
that users do not interfere with each other. Examples include Unix, Linux, and Windows
Server editions. Below are the advantages and disadvantages of multiuser operating
systems:
1. Resource Sharing:
o Multiuser systems allow several users to share resources (such as CPU, memory,
storage, and peripherals) efficiently. This can help reduce costs, as one system can
serve multiple users without needing separate hardware for each user.
2. Centralized Administration:
o System administrators can manage user accounts, permissions, security settings, and
resources from a central location. This simplifies system maintenance and user
management.
3. Cost Efficiency:
o Multiple users can access a single machine and share resources like storage or
computing power, reducing the need for separate machines for each user. This makes
it more cost-effective, particularly in environments where many users need access to
computing resources but not full standalone systems.
7. Remote Access:
o Multiuser systems often support remote login, allowing users to access the system
from different locations. This is particularly useful in environments like businesses
and educational institutions, where users need access to centralized data and
applications from various locations.
8. Scalability:
o A multiuser system can easily accommodate more users by adding resources or
upgrading hardware, allowing the system to scale as more users require access.
1. Complexity in Management:
o Managing multiple users, permissions, and security settings can be complex. Admins
need to ensure that resources are allocated efficiently and that users have appropriate
access without causing conflicts.
2. Security Risks:
o With multiple users accessing the same system, the risk of unauthorized access or
malicious activities increases. If one user’s account is compromised, it may lead to a
broader security breach unless proper isolation and access controls are enforced.
3. Resource Contention:
o As multiple users share the system's resources, there can be competition for resources
like CPU time, memory, and storage. This can lead to performance degradation,
particularly if the system is not designed to handle many concurrent users or lacks
sufficient resources.
4. System Overhead:
o The operating system must manage multiple user sessions, which adds overhead to
the system. The need for context switching, maintaining user environments, and
managing access rights can consume additional resources, potentially affecting
overall performance.
5. User Conflicts:
o In a multiuser environment, users may unintentionally interfere with each other. For
example, multiple users may try to modify the same file simultaneously, leading to
conflicts, data loss, or corruption if not properly managed.
7. Dependence on Network:
o In cases where users are accessing the system remotely or via a network, the
performance and availability of the system can be affected by network latency or
downtime. Users may experience delays or downtime if the network is unreliable.
2. Increased Throughput:
o By executing multiple programs concurrently, the system can process more tasks in a
given time frame, leading to increased throughput. More work gets done because the
system makes use of idle times when one program is waiting for resources like I/O.
7. Cost-Effective:
o Multiprogramming allows for better use of expensive hardware by supporting
multiple applications simultaneously, thus making it more cost-effective compared to
having separate systems for each task.
1. Increased Complexity:
o Managing multiple programs at the same time adds complexity to the operating
system. Scheduling, memory management, and resource allocation must be handled
carefully to avoid conflicts or errors.
2. Resource Contention:
o With multiple programs running concurrently, there is potential for resource
contention, where two or more programs attempt to access the same resource at the
same time. This can lead to bottlenecks, delays, and decreased system performance if
not managed properly.
5. Possibility of Deadlock:
o Since multiple programs might require the same resources at the same time,
deadlocks (where two or more programs are stuck waiting for each other to release
resources) can occur. Deadlock prevention and resolution are challenging and require
careful design.
7. Security Risks:
o Since multiple programs share the same memory and resources, there is an increased
risk of one program corrupting another or gaining unauthorized access to sensitive
data. Proper isolation between programs must be ensured, which adds to the operating
system's complexity.
Aspect | Multiprocessing | Multiprogramming
Definition | Multiple processors run tasks concurrently | One processor runs multiple tasks sequentially
Resource Utilization | Utilizes multiple CPU resources | Maximizes the use of a single CPU resource
Description: Supports multiple threads within a single process for parallel execution.
Features:
o Threads share the same memory space.
o Improves application performance on multi-core CPUs.
o Example: Windows, Linux, macOS.
Use Cases: Web servers, database management, concurrent programming.
Benefits of Multithreading
Benefit | Description
Responsiveness | User interface remains active even when performing lengthy operations in the background.
Scalability | Efficient use of multiprocessor systems (each core can run a separate thread).

Limitations of Multithreading
Limitation | Description
Complex Debugging | Bugs like race conditions, deadlocks, and thread interference are hard to detect and fix.
Thread Management Overhead | Creating, destroying, and managing threads can become costly if not handled properly.
Scalability Limitations | Too many threads can overwhelm the system and cause performance degradation (thread thrashing).
Spooling:
1. Queueing Data: When a job (e.g., a print request) is generated, it is written to a temporary
storage area (like a disk or memory), often referred to as the "spool". This storage acts as a
buffer for the data.
2. Processing Jobs: The operating system or a dedicated spooling service fetches the jobs from
the spool in a queue and sends them to the appropriate device (e.g., a printer) when the device
is ready to process them.
3. Parallel Execution: While one job is being processed, other jobs can be spooled and stored
for future processing. This ensures that the CPU and other system resources are not idly
waiting for a slow I/O device.
Aspect | Batch Processing | Multiprogramming
Job Execution | One job is executed at a time. | Multiple jobs are executed concurrently.
User Interaction | No user interaction during execution. | Allows user interaction while tasks are running.
Task Scheduling | Jobs are processed sequentially, one at a time. | The CPU switches between tasks to keep it busy.
CPU Utilization | CPU may be idle while waiting for I/O. | CPU is kept busy by switching between tasks.
The layered structure is an operating system design approach where the system is divided
into a hierarchy of layers, each performing specific functions. Higher layers utilize the
services provided by lower layers, creating a modular and organized framework.
Features of a Layered OS
Each layer interacts only with the layer directly above or below it. Here's a typical breakdown
of layers:
1. Layer 0: Hardware
o The physical components of the computer (e.g., CPU, memory, I/O devices).
o Provides basic hardware operations.
3. Layer 2: Kernel
o Core OS functions like process scheduling, memory management, and interrupt
handling.
o Acts as the bridge between hardware and higher-level software.
Advantages of Layered OS
1. Simplicity: The system is easier to understand and implement due to modular design.
2. Isolation: Changes in one layer don’t affect other layers, improving maintainability.
3. Security: Access control is inherent because each layer communicates only with its
neighbors.
4. Reusability: Common services provided by lower layers can be reused across higher layers.
Disadvantages of Layered OS
1. Performance Overhead: A request may pass through several layers before being serviced, adding overhead.
2. Rigid Design: Deciding which functionality belongs in which layer is difficult, and the strict hierarchy limits flexibility.
The system components of an operating system are the essential modules or subsystems
that collectively manage hardware and software resources, provide user services, and execute
applications efficiently. Here’s an overview of the main components:
1. Process Management
2. Memory Management
Function: Manages files on storage devices, ensuring secure and organized data storage.
Responsibilities:
o File creation, deletion, and access.
o Directory organization.
o File permissions and security.
Example: Reading and writing data to a hard drive.
4. Device Management
5. Storage Management
Function: Manages secondary storage, such as hard drives and SSDs, for persistent data
storage.
Responsibilities:
o Disk scheduling for read/write operations.
o Space allocation and free-space management.
Example: Organizing files in a disk partition.
Function: Protects system data and resources from unauthorized access or malicious
activities.
Responsibilities:
o Authentication (e.g., user login credentials).
o Access control (e.g., permissions for files and processes).
o Encryption for secure data transmission and storage.
Example: Password authentication and firewall management.
7. Networking
8. User Interface
Function: Provides a way for users to interact with the operating system.
Types:
o Command Line Interface (CLI): Text-based interaction.
o Graphical User Interface (GUI): Visual interaction using windows, icons, and
menus.
Example: Shell (CLI) in Linux or the GUI in Windows.
Summary of Components
Component | Key Function | Example
File System Management | Organizes and secures file storage | File permissions
Networking | Enables data communication between systems | File sharing via network
Operating system services are functionalities provided by the operating system to users,
applications, and system components. These services make it easier for users and software to
interact with the hardware and manage system resources effectively.
2. I/O Operations
Purpose: Manages input/output devices and allows programs to perform I/O operations.
Features:
o Abstracts hardware details for the user.
o Handles data transfer between devices and processes.
Example: Reading a file from disk or sending output to a printer.
Purpose: Provides methods to create, delete, read, write, and manage files and directories.
Features:
o Enforces permissions and security for files.
o Organizes files in a directory hierarchy.
Example: Saving a document or retrieving a photo from a folder.
4. Communication
6. Resource Allocation
Purpose: Allocates system resources such as CPU, memory, and I/O devices to processes.
Features:
o Ensures fair resource distribution among processes.
o Tracks resource usage to prevent conflicts.
Example: Allocating CPU time to multiple processes in a multitasking system.
8. User Interface
10. Accounting
11. Protection
Service | Description | Example
File System Manipulation | File and directory management | Creating or reading a file
Error Detection | Detect and handle system errors | Recovering from a disk failure
Security and Protection | Ensure system and data security | User authentication
The kernel is the core component of an operating system (OS) that manages system
resources and provides essential services for all other parts of the system. It acts as an
intermediary between the hardware and the software, ensuring that processes have access to
the hardware resources they need to function efficiently. The kernel operates at a very low
level and directly interacts with the system’s hardware, performing tasks such as memory
management, process scheduling, and device handling.
The kernel is fundamental to the functioning of the operating system, as it provides the
necessary services that allow programs to run and interact with hardware.
Types of Kernels:
1. Monolithic Kernel:
o A monolithic kernel is a type of kernel where the entire operating system,
including device drivers, process management, and file system management, is
implemented as a single large block of code. Examples of monolithic kernels
include Linux and Unix.
o Advantages: High performance due to direct communication between different
parts of the kernel.
o Disadvantages: Complex to manage and debug due to the large codebase and
close coupling between components.
2. Microkernel:
o A microkernel is designed to run the most basic functions of an operating
system, such as communication between hardware and software, while leaving
other services (like device drivers and file systems) to be handled by user-
space programs. Examples of microkernel-based systems include Minix and
QNX.
o Advantages: More modular and easier to maintain. Crashes in user space do
not affect the kernel.
o Disadvantages: May incur a performance overhead due to frequent
communication between the kernel and user space.
3. Hybrid Kernel:
o A hybrid kernel combines elements of both monolithic and microkernel
architectures, aiming to provide the performance benefits of a monolithic
kernel while maintaining the modularity of a microkernel. Windows NT and
macOS are examples of hybrid kernels.
o Advantages: Balances performance with modularity.
o Disadvantages: Can be more complex to design and implement.
Function | Description
File System Management | Manages files and directories, handles file operations, and enforces file access permissions.
Security and Access Control | Manages user authentication, file and resource access control, and ensures process isolation.

Aspect | Kernel | Shell
Interaction with User | No direct interaction with the user. | Direct interface with the user.
A reentrant kernel is a type of operating system kernel designed to allow multiple processes
to access the kernel simultaneously without interfering with each other. This capability is
crucial in multitasking environments where multiple processes might require kernel services
concurrently.
1. Concurrency Support:
o Multiple processes can execute kernel code simultaneously without conflict.
o Achieved using synchronization mechanisms like semaphores, locks, or monitors.
3. Code Reusability:
o The same kernel code can be executed by different processes at the same time.
1. Efficient Multitasking:
o Multiple processes can use kernel services simultaneously, improving system
responsiveness.
2. Scalability:
o Ideal for multiprocessor systems where kernel code can run on multiple processors
concurrently.
3. Reduced Latency:
o Kernel reentrancy minimizes delays by allowing processes to execute kernel code
concurrently without waiting for others to finish.
4. Fault Isolation:
o Since kernel code operates independently for each process, faults in one process are
less likely to impact others.
1. Increased Complexity:
o Writing reentrant code requires careful handling of shared resources to avoid race
conditions or deadlocks.
3. Resource Management:
o Allocating separate resources for each process can consume more memory and
processing power.
Feature | Reentrant Kernel | Non-Reentrant Kernel
Concurrency | Supports multiple processes in the kernel. | Only one process can execute in the kernel at a time.
Modern Unix/Linux Kernels: Utilize reentrant design to handle multiple processes and
threads.
Windows NT Kernel: Designed with reentrancy to support concurrent execution on
multiprocessor systems.
Monolithic vs. Microkernel Systems
The architecture of an operating system kernel determines how it manages system resources
and communicates with hardware and applications. Monolithic kernels and microkernels
represent two fundamental design philosophies for OS kernels.
1. Monolithic Kernel
A monolithic kernel is a single large process running in a single address space. It includes
all the essential services like process management, memory management, file system, device
drivers, and more.
Advantages
Performance: All kernel services operate in the same memory space, reducing the overhead
of inter-process communication (IPC).
Simplicity: Easier to design and implement.
Device Driver Integration: Device drivers run in the kernel space, allowing fast interactions.
Disadvantages
Poor Stability: A bug in one component can crash the entire system.
Large Codebase: Monolithic kernels tend to be large and harder to maintain.
Security Risks: Since all components run in kernel mode, any vulnerability affects the entire
kernel.
Examples
Linux
Unix
MS-DOS
2. Microkernel
A microkernel is a minimalistic kernel that includes only essential services such as inter-
process communication (IPC), basic scheduling, and memory management. Other services
(e.g., device drivers, file systems) run in user space.
Features of Microkernel
1. Minimal Core:
o The kernel handles only basic tasks.
2. Modular Design:
o Most services run as separate user-space processes.
3. Enhanced Security and Stability:
o Crashes in user-space services don’t directly impact the kernel.
Advantages
Stability: A failure in one service does not affect the entire system.
Security: Services running in user space have limited access to the system.
Flexibility: Easier to add or update components.
Disadvantages
Performance Overhead: Frequent communication (IPC) between the kernel and user-space services slows operations compared to a monolithic design.
Examples
Minix
QNX
macOS (hybrid kernel with microkernel characteristics)
Windows NT (hybrid kernel with microkernel characteristics)
Comparison Table
Aspect | Monolithic Kernel | Microkernel
Stability | Less stable; a crash affects the whole system. | More stable; service crashes don’t impact the kernel.
Code Size | Larger and harder to maintain. | Smaller and easier to manage.
A system call is a mechanism that allows a program to request services from the operating
system's kernel. It is an essential interface between user applications and the operating
system, enabling programs to perform actions that are not directly accessible in user space,
such as interacting with hardware, managing processes, accessing files, or handling system
resources.
System calls provide a controlled interface to the kernel, allowing programs to perform tasks
that would normally require direct interaction with system resources. They are crucial for
tasks such as input/output operations, process management, memory allocation, and network
communication.
Accessing Privileged Operations: The operating system kernel runs in privileged mode,
meaning it can access system resources directly. Regular applications, on the other hand, run
in user mode with limited access. System calls act as a controlled gateway between the user
mode and the kernel mode.
Security and Stability: By using system calls, the OS ensures that applications can't directly
interfere with the system's hardware or cause instability. The kernel mediates these
interactions, providing a safe environment for both system and user programs.
1. Process Control:
o These system calls deal with the creation, termination, and control of processes.
o Examples:
fork() – Creates a new process.
exit() – Terminates a process.
wait() – Makes a process wait for a child process to finish.
exec() – Replaces the current process with a new process.
2. File Management:
o These system calls are used to manage files and directories.
o Examples:
open() – Opens a file.
read() – Reads data from a file.
write() – Writes data to a file.
close() – Closes a file.
3. Device Management:
o These system calls allow programs to interact with hardware devices.
o Examples:
ioctl() – Controls the behavior of a device.
read() and write() – Perform I/O operations on devices like disks or network
interfaces.
4. Memory Management:
o These system calls manage memory allocation and deallocation.
o Examples:
malloc() – Allocates memory (strictly a C library function rather than a system call; it
requests memory from the kernel via brk() or mmap()).
free() – Frees allocated memory (likewise a library function).
mmap() – Maps files or devices into memory.
5. Information Maintenance:
o These system calls retrieve or set system information.
o Examples:
getpid() – Returns the process ID of the calling process.
time() – Returns the current system time.
getuid() – Returns the user ID of the calling process.
6. Communication:
o These system calls are used for inter-process communication (IPC) and networking.
o Examples:
pipe() – Creates a pipe for communication between processes.
send() and recv() – Send and receive data over a network.
Here's a simple example of how a read system call works in a Unix-like operating system
(such as Linux):