PGDCA-101: Operating System
First Edition
IHRD
PREFACE
An operating system is the interface between the computer hardware and the end user. Processing of
data, running applications, file management and handling memory are all managed by the OS.
Windows, macOS, and Android are examples of operating systems in general use today.
Operating systems are an essential part of any computer system. Similarly, a course on operating
systems is an essential part of any computer-science education. This field is undergoing rapid change,
as computers are now prevalent in virtually every application, from games for children through the most
sophisticated planning tools for governments and multinational firms. Yet the fundamental concepts
remain fairly clear, and it is on these that we base this handbook.
2024-06-05
CONTENTS
I Module 1
1 Module-1
1.1 Introduction to Computer
1.2 Characteristics of Computers
1.3 E-Governance
1.4 Multimedia
1.4.1 Applications
1.5 Overview of Operating Systems
1.5.1 Functions of an Operating System
1.5.2 Types of Operating Systems
1.5.3 Popular Operating Systems
1.5.4 Importance of Operating Systems
1.5.5 Generations of Operating Systems
1.5.6 Types of Operating Systems
1.5.7 Structure of Operating Systems
1.5.8 Services Provided by Operating Systems
1.5.9 System Calls
1.5.10 System Programs
1.5.11 Protection and Security
1.6 Process Management
1.6.1 Process Concepts
1.6.2 Process States
1.6.3 Process Control Block (PCB)
1.6.4 Scheduling Algorithms
1.6.5 Threads
1.6.5.1 Threading Issues
1.6.6 Process Synchronization
1.6.7 Classic Problems of Synchronization
1.6.8 Monitors
1.6.9 Inter-Process Communication (IPC)
1.6.10 Deadlock
1.6.10.1 Deadlock Characterization
1.6.10.2 Deadlock Prevention
1.6.10.3 Recovery from Deadlock
II Module 2
2 Module-2
2.1 Memory Management
III Module 3
3 Module-3
3.1 Storage Management
3.1.1 Storage techniques and tools
3.1.1.1 Mass-Storage Structure
3.1.1.2 Disk Structure
3.1.1.3 Disk Attachment
3.1.1.4 Disk Scheduling
3.1.1.5 RAID Structure
3.1.2 File System Interface
3.1.2.1 File Concept
3.1.2.2 Access Methods
3.1.2.3 Directory Structure
3.1.2.4 File System Structure
3.1.2.5 Allocation Methods
3.1.2.6 Free-Space Management
3.1.3 System Protection
3.1.3.1 Goals
3.1.3.2 Principles
3.1.3.3 Domain of Protection
3.1.3.4 Access Matrix
3.1.3.5 Access Control
IV Module 4
4 Module-4
4.1 Introduction to Virtualization
4.1.1 Types of Virtualization
4.1.1.1 Hypervisor-Based Virtualization
4.1.1.2 Data Virtualization
4.1.1.3 Desktop Virtualization
4.1.1.4 Server Virtualization
4.1.1.5 Operating System Virtualization
4.1.1.6 Network Functions Virtualization (NFV)
4.1.2 Virtualization Components
4.1.3 Real-Time Operating Systems (RTOS)
4.1.4 Network Operating Systems (NOS)
4.1.5 Cloud Operating Systems
4.1.6 Containers: Docker and Kubernetes
4.1.6.1 Docker
4.1.6.2 Kubernetes
4.1.7 Docker and Kubernetes Together
4.2 References
PART I
MODULE 1
Introduction to Computer - Familiarity with the basic components of computers and computer
terminology, characteristics of computers, e-governance, multimedia. Overview of operating systems:
Generations, Types, Structure, Services, System Calls, System Boot, System Programs, Protection and
Security. Process management: Process Concepts, Process States, Process Control Block, Scheduling
Criteria, Scheduling Algorithms and their Evaluation, Threads, Threading Issues. Process
synchronization: Background, Critical-Section Problem, Peterson's Solution, Synchronization
Hardware, Semaphores, Classic Problems of Synchronization, Monitors, Inter-Process Communication.
Deadlock: System Model, Deadlock Characterization, Deadlock Prevention, Detection and Avoidance,
Recovery from Deadlock.
1 MODULE-1
A computer is a powerful and versatile electronic device designed to process, store, and retrieve data.
It is a fundamental tool in modern society, influencing various aspects of life, including communication,
education, business, and entertainment. This introduction aims to provide a basic understanding of what
computers are, their components, types, and their evolution over time.
Definition and Basic Functions
At its core, a computer is a machine that follows a set of instructions to perform computations and
manage data. The primary functions of a computer include:
• Processing: Performing calculations and making logical decisions based on the input.
Input Devices: Tools for user interaction, such as keyboards and mice.
Output Devices: Equipment that displays or outputs data, such as monitors and printers.
Software: The intangible instructions and programs that control hardware operations, including:
• Operating System (OS): The software that manages hardware resources and provides common services for application programs.
• Applications: Programs designed for specific tasks, like word processing, browsing the internet, or playing games.
Types of Computers
Computers come in various forms, each designed for different purposes:
• Personal Computers (PCs): General-purpose computers for individual use, including desktops and laptops.
• Servers: Powerful computers that provide services and resources to other computers over a network.
• Mainframes: Large, powerful systems used by organizations for critical applications, bulk data processing, and large-scale transaction processing.
• Supercomputers: Extremely fast computers used for complex simulations, scientific research, and high-performance tasks.
• Embedded Systems: Specialized computers integrated into other devices, such as appliances, vehicles, and industrial machines.
Evolution of Computers
The development of computers has gone through several generations, marked by significant technological advancements:
• First Generation (1940s-1950s): Vacuum tubes were used for circuitry, and magnetic drums for memory. These machines were large, expensive, and consumed a lot of power.
• Second Generation (1950s-1960s): Transistors replaced vacuum tubes, making computers smaller, faster, and more reliable.
• Third Generation (1960s-1970s): Integrated Circuits (ICs) increased processing power and efficiency.
• Fourth Generation (1970s-present): Microprocessors integrated thousands of ICs into a single chip, leading to the development of personal computers.
• Fifth Generation (present and beyond): Based on artificial intelligence and advanced parallel processing, these computers are designed to understand natural language and mimic human reasoning.
Impact of Computers
Computers have revolutionized many fields by increasing efficiency, accuracy, and accessibility. They play a crucial role in:
• Education: Enhancing learning through digital resources and online platforms.
• Healthcare: Improving diagnostics, patient records management, and treatment plans.
• Business: Streamlining operations, data management, and global communication.
• Entertainment: Providing platforms for gaming, streaming, and social media.
• Research: Facilitating complex simulations, data analysis, and collaborative projects.
In conclusion, computers are integral to modern life, constantly evolving and expanding their capabilities. Understanding the basics of computers is essential for leveraging their potential in various domains and adapting to rapid technological change.
1.2 Characteristics of Computers

Computers are defined by several key characteristics that distinguish them from other machines and
make them incredibly versatile and powerful:
• Speed: Computers can process data and perform calculations at extremely high speeds. Modern
computers execute billions of instructions per second, making them invaluable for tasks that require
rapid processing.
• Accuracy: Computers perform tasks with a high degree of accuracy, provided they are given correct
instructions. Errors in computer processing typically stem from human input or software bugs.
• Automation: Once programmed, computers can perform a series of operations automatically with-
out human intervention, increasing efficiency and reducing the potential for human error.
• Storage: Computers can store vast amounts of data in various forms, from documents and images to
software and databases. Storage devices like hard drives and SSDs offer long-term data retention.
• Versatility: Computers can perform a wide range of tasks, from word processing and gaming to
complex scientific simulations. This versatility is due to their ability to run different software
applications.
• Diligence: Unlike humans, computers do not suffer from fatigue or boredom. They can perform
repetitive tasks consistently and indefinitely without losing performance quality.
• Connectivity: Computers can connect to each other and to networks, including the internet, allow-
ing for data sharing, communication, and remote access.
1.3 E-Governance

E-governance, or electronic governance, refers to the use of digital technologies by government bodies
to enhance the access to and delivery of government services to the public. It aims to improve the
efficiency, transparency, and inclusiveness of governmental operations. Key aspects of e-governance
include:
• Online Services: Government services such as tax filing, license renewals, and social benefits can
be accessed and processed online, making them more accessible and convenient for citizens.
• Transparency: E-governance promotes transparency by making information and data readily avail-
able to the public. This reduces corruption and builds trust between the government and citizens.
• Participation: Digital platforms enable greater citizen participation in governance through e-voting,
public consultations, and feedback mechanisms.
• Efficiency: Automating processes reduces the time and cost associated with delivering services,
thereby improving governmental efficiency.
• Inclusivity: E-governance can help bridge the digital divide by providing access to services for
remote and underserved populations.
Examples of e-governance initiatives include digital ID systems, online portals for public services,
e-procurement systems, and digital communication channels between citizens and government officials.
Note: The tutor should guide learners through the e-governance websites/portals of the State as well as of India.
1.4 Multimedia

Multimedia refers to the integration of multiple forms of media, including text, graphics, audio, video,
and animation, into a single cohesive presentation or application. It is widely used in various fields, such
as education, entertainment, advertising, and communication. Key characteristics and components of
multimedia include:
• Interactivity: Multimedia applications often include interactive elements that allow users to engage
with the content, such as clickable links, quizzes, and interactive simulations.
• Integration: Multimedia combines various media types to enhance the user experience. For example,
a website might include text descriptions, images, audio clips, and video demonstrations to
provide comprehensive information.
• Digital Formats: Multimedia content is typically created, stored, and distributed in digital formats,
making it easy to access and share over the internet.
• Rich User Experience: The combination of different media types can create a more engaging and
immersive experience for users, making it effective for learning, marketing, and entertainment.
1.4.1 Applications
• Education: Multimedia enhances learning through interactive tutorials, e-books, educational games,
and virtual classrooms.
• Entertainment: It is used in video games, movies, music videos, and online streaming services to
provide rich, engaging content.
• Advertising: Advertisers use multimedia to create compelling and interactive ads that attract and
retain customer attention.
• Communication: Multimedia tools like video conferencing, webinars, and social media platforms
facilitate richer communication experiences.
In summary, the characteristics of computers make them essential tools in today’s digital world, e-
governance leverages these characteristics to improve governmental functions and public service delivery,
and multimedia utilizes the diverse capabilities of computers to create rich, interactive content across
various domains.
1.5 Overview of Operating Systems

An operating system (OS) is fundamental software that manages the hardware and software resources
of a computer. It acts as an intermediary between users and the computer hardware, providing
essential services and enabling the execution of application software.
The kernel of an operating system (OS) is its core component responsible for managing system re-
sources and enabling communication between hardware and software. The kernel operates at the lowest
level of the software hierarchy, directly interfacing with the hardware. It provides essential services
required for all other parts of the OS and applications to function.
• Monolithic kernels provide rich and powerful abstractions of the underlying hardware.
• Microkernels provide a small set of simple hardware abstractions and use applications called
servers to provide more functionality.
• Exokernels provide minimal abstractions, allowing low-level hardware access. In exokernel sys-
tems, library operating systems provide the abstractions typically present in monolithic kernels.
• Hybrid (modified microkernels) are much like pure microkernels, except that they include some
additional code in kernelspace to increase performance.
1.5.1 Functions of an Operating System

2. Memory Management:
• Allocation: Assigns memory space to processes and manages its release when no longer
needed.
• Virtual Memory: Extends physical memory by using disk space, allowing the system to run
larger applications and handle more processes simultaneously.
• Paging and Segmentation: Techniques to manage and optimize the use of memory.
3. File Management:
• Organization: Manages files and directories, providing a logical structure for data storage and
retrieval.
• Access Control: Enforces security measures to protect data from unauthorized access.
• File Operations: Facilitates actions like creation, deletion, reading, and writing of files.
4. Device Management:
• Drivers: Interfaces with hardware devices, providing necessary instructions for communica-
tion and control.
• I/O Operations: Manages input and output operations, ensuring efficient data transfer be-
tween devices and memory.
5. User Interface:
• Command Line Interface (CLI): Allows users to interact with the system through text commands.
• Graphical User Interface (GUI): Provides a visual interface with graphical elements like win-
dows, icons, and menus for user interaction.
1.5.2 Types of Operating Systems

Time-Sharing Operating Systems:
• Allow multiple users to interact with the computer simultaneously by sharing CPU time.
• Examples: UNIX.
Distributed Operating Systems:
• Manage a group of distinct computers and make them appear as a single system.
• Examples: Windows Server, Linux clusters.
Network Operating Systems:
• Provide features for networking, allowing computers to communicate and share resources.
• Examples: Novell NetWare, Windows Server.
Mobile Operating Systems:
• Optimized for mobile devices, providing touch interfaces and power management.
• Examples: Android, iOS.
1.5.3 Popular Operating Systems

1. Windows:
• Widely used on personal computers and servers.
• Known for its user-friendly GUI and extensive software compatibility.
2. MacOS:
3. Linux:
4. UNIX:
5. Android:
6. iOS:
1.5.5 Generations of Operating Systems

First Generation (1940s-1950s):
Vacuum Tubes and Plugboards: Early computers like ENIAC used vacuum tubes and required manual
wiring via plugboards.
Second Generation (1950s-1960s):
Batch Processing Systems: Utilized transistors; jobs were collected and processed in batches
without user interaction. Example: IBM 7094.
Third Generation (1960s-1980s):
Multiprogramming: Allowed multiple programs to reside in memory and be executed by sharing CPU
time. Time-Sharing Systems: Enabled interactive use by multiple users. Example: IBM System/360,
UNIX.
Fourth Generation (1980s-present):
Personal Computers: Introduction of PCs with operating systems like MS-DOS, Windows, and Ma-
cOS. Graphical User Interfaces (GUIs): Made computers more user-friendly. Example: Microsoft Win-
dows, MacOS, Linux.
Fifth Generation (Present and beyond):
Mobile and Cloud Operating Systems: Focused on mobile devices (Android, iOS) and cloud comput-
ing (Google Chrome OS).
AI Integration: Incorporating AI for smarter resource management and user interaction.
1.5.7 Structure of Operating Systems

Monolithic Structure:
All OS services run in a single address space, enhancing performance but reducing fault isolation.
Example: UNIX.
Layered Structure:
Divides the OS into layers, each built on top of the lower layers, providing modularity and ease of
debugging. Example: THE operating system.
Microkernel Structure:
Minimalist approach where the kernel includes only essential functions, with other services running
in user space. Example: Minix, QNX.
Modular Structure:
Kernel is divided into separate modules that can be loaded and unloaded dynamically. Example:
Linux.
Hybrid Structure:
4. System Initialization: Starts system processes, mounts file systems, and initializes hardware.
5. User Login: Provides the interface for user authentication and starting user sessions.
1.5.10 System Programs

Status Information Programs: Display system information and resource usage (e.g., Task Manager, top).
File Modification Programs: Text editors and file editors (e.g., Notepad, vim).
Programming Language Support: Compilers and interpreters (e.g., GCC, Python interpreter).
Communication Programs: Facilitate communication between users (e.g., email clients, chat pro-
grams).
1.6 Process Management

1.6.1 Process Concepts

• Program vs. Process: A program is passive (a file containing a list of instructions), while a process
is active (an instance of a program running in the system).
• Process Lifecycle: The stages a process goes through from creation to termination.
1.6.2 Process States

• New: The process is being created.
• Ready: The process is waiting to be assigned to the CPU.
• Running: Instructions are being executed.
• Waiting (Blocked): The process is waiting for some event (like I/O completion).
• Terminated: The process has finished execution.

Figure 1.3: Process State Diagram
CPU-Bound and I/O-Bound Processes: If a process is intensive in terms of CPU operations, it is
called a CPU-bound process. Similarly, if a process is intensive in terms of I/O operations, it is called
an I/O-bound process.
A process can move between different states in an operating system based on its execution status and
resource availability. Here are some examples of how a process can move between different states:
• New to ready: When a process is created, it is in a new state. It moves to the ready state when the
operating system has allocated resources to it and it is ready to be executed.
• Ready to running: When the CPU becomes available, the operating system selects a process from
the ready queue depending on various scheduling algorithms and moves it to the running state.
• Running to blocked: When a process needs to wait for an event to occur (I/O operation or system
call), it moves to the blocked state. For example, if a process needs to wait for user input, it moves
to the blocked state until the user provides the input.
• Running to ready: When a running process is preempted by the operating system, it moves to the
ready state. For example, if a higher-priority process becomes ready, the operating system may
preempt the running process and move it to the ready state.
• Blocked to ready: When the event a blocked process was waiting for occurs, the process moves to
the ready state. For example, if a process was waiting for user input and the input is provided, it
moves to the ready state.
• Running to terminated: When a process completes its execution or is terminated by the operating
system, it moves to the terminated state.
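The transitions listed above can be sketched as a small state machine. The following is a minimal illustrative Python sketch; the state names and the transition table simply mirror the list above, and nothing here is a real OS API:

```python
from enum import Enum

class State(Enum):
    NEW = "new"
    READY = "ready"
    RUNNING = "running"
    BLOCKED = "blocked"
    TERMINATED = "terminated"

# Legal transitions, taken directly from the examples above.
TRANSITIONS = {
    State.NEW: {State.READY},                      # admitted by the OS
    State.READY: {State.RUNNING},                  # dispatched by the scheduler
    State.RUNNING: {State.READY,                   # preempted
                    State.BLOCKED,                 # waits for I/O or an event
                    State.TERMINATED},             # completes or is killed
    State.BLOCKED: {State.READY},                  # awaited event occurs
    State.TERMINATED: set(),
}

def move(current, target):
    """Return the new state, or raise if the transition is illegal."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target

# Walk one process through a typical lifecycle.
s = State.NEW
for nxt in (State.READY, State.RUNNING, State.BLOCKED, State.READY,
            State.RUNNING, State.TERMINATED):
    s = move(s, nxt)
print(s.name)  # TERMINATED
```

Note that a blocked process cannot be dispatched directly: it must first re-enter the ready queue, which the transition table enforces.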
1.6.3 Process Control Block (PCB)

The PCB is the data structure in which the operating system stores information about each process.
Typical fields include:
• Process ID (PID)
• Process state
• Program counter
• CPU registers
• Accounting information
1.6.4 Scheduling Algorithms

• Scheduling Criteria: Common criteria for comparing algorithms include CPU utilization,
throughput, turnaround time, waiting time, and response time.
• Priority Scheduling: Processes with higher priority are executed first. This can lead to
starvation of lower-priority processes.
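As a rough illustration of non-preemptive priority scheduling, here is a minimal Python sketch. The process names, burst times, and priorities are invented for the example, and all processes are assumed to arrive at time 0:

```python
# Non-preemptive priority scheduling sketch (lower number = higher priority).
# Each process is a tuple (name, burst_time, priority).
def priority_schedule(processes):
    order = sorted(processes, key=lambda p: p[2])  # highest priority first
    time, waiting = 0, {}
    for name, burst, _prio in order:
        waiting[name] = time      # waited from t=0 until dispatched
        time += burst             # runs to completion (non-preemptive)
    return order, waiting

procs = [("A", 5, 2), ("B", 3, 1), ("C", 8, 3)]
order, waiting = priority_schedule(procs)
print([p[0] for p in order])  # ['B', 'A', 'C']
print(waiting)                # {'B': 0, 'A': 3, 'C': 8}
```

The starvation problem mentioned above is visible in the sketch: if high-priority processes kept arriving, process C's waiting time would grow without bound.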
1.6.5 Threads

Threads are the smallest unit of execution within a process. They share the same process resources but
can run independently. Single-threaded vs. Multi-threaded: Single-threaded processes have one thread
of execution, while multi-threaded processes have multiple threads.
1.6.5.1 Threading Issues
• Context Switching: Switching between threads can be less costly than switching between pro-
cesses.
• Synchronization: Ensuring that threads do not interfere with each other when accessing shared
resources.
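The synchronization issue above can be shown with a minimal Python sketch: several threads update a shared counter, and a lock keeps the updates from interfering. This is illustrative only; a real OS schedules kernel threads, but the lost-update hazard is the same:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    """Increment a shared counter n times; the lock prevents lost updates."""
    global counter
    for _ in range(n):
        with lock:            # without this, two threads can interleave mid-update
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 -- deterministic only because of the lock
```

Removing the `with lock:` line makes the final count unpredictable on many Python builds, which is exactly the interference the section warns about.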
1.6.6 Process Synchronization

Synchronization Hardware
Modern processors provide hardware support for synchronization, such as atomic instructions like
TestAndSet and CompareAndSwap.
Semaphores
A semaphore is a synchronization tool that manages concurrent processes using a simple integer value,
accessed through two atomic operations: wait (P), which decrements the value and blocks while it is
zero, and signal (V), which increments it.
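A minimal sketch of semaphore use follows. Python's `threading.Semaphore` stands in for an OS semaphore; the pool size of 2 and the sleep are invented for the example:

```python
import threading
import time

# A counting semaphore initialised to 2 admits at most two threads
# into the guarded section at once (a resource pool of size 2).
sem = threading.Semaphore(2)
in_section = []
peak = [0]
guard = threading.Lock()   # only protects the bookkeeping lists below

def use_resource(i):
    with sem:                       # wait (P): decrement, or block at zero
        with guard:
            in_section.append(i)
            peak[0] = max(peak[0], len(in_section))
        time.sleep(0.01)            # simulate holding the resource
        with guard:
            in_section.remove(i)
                                    # signal (V): released on exiting `with sem`

threads = [threading.Thread(target=use_resource, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak[0] <= 2)  # True -- never more than 2 threads inside at once
```

Five threads compete, but the recorded peak occupancy never exceeds the semaphore's initial value, which is the whole point of the counter.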
1.6.7 Classic Problems of Synchronization

• Readers-Writers Problem: Multiple readers can read simultaneously, but writers need exclusive
access.
• Dining Philosophers Problem: Philosophers need to pick up two forks to eat, leading to potential
deadlocks and starvation.
1.6.8 Monitors
A high-level synchronization construct that provides a mechanism for threads to safely access shared
resources. Monitors encapsulate the shared variables, procedures, and synchronization.
1.6.10 Deadlock
A set of processes are deadlocked when each process in the set is waiting for an event that only
another process in the set can cause.
1.6.10.1 Deadlock Characterization

A deadlock can arise only if the following conditions hold simultaneously:
• Mutual Exclusion: At least one resource is held in a non-sharable mode.
• Hold and Wait: A process holding resources can request additional resources.
• No Preemption: Resources cannot be forcibly taken away from a process.
• Circular Wait: A circular chain of processes exists, where each process holds at least one resource
needed by the next process in the chain.
• Deadlock Detection: Allow deadlocks to occur, but detect them and take action.
• Deadlock Avoidance: Ensure the system will never enter an unsafe state.
– Banker's Algorithm: An algorithm that allocates resources dynamically while avoiding unsafe
states.
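The safety check at the heart of the Banker's Algorithm can be sketched as follows. For brevity this assumes a single resource type, and the request-granting step is omitted; the allocation and maximum figures are invented for the example:

```python
# Banker's Algorithm safety check, single resource type.
# available: free instances; allocated/maximum: one entry per process.
def is_safe(available, allocated, maximum):
    need = [m - a for m, a in zip(maximum, allocated)]
    work = available
    finished = [False] * len(allocated)
    while True:
        progressed = False
        for i, done in enumerate(finished):
            if not done and need[i] <= work:
                work += allocated[i]    # process i can finish and release all
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)        # safe iff every process could finish

# 3 free units; needs are [3, 2, 6] -- a safe completion order exists (P0, P1, P2).
print(is_safe(3, [1, 4, 2], [4, 6, 8]))   # True
# 1 free unit; needs are [3, 5] -- no process can ever finish.
print(is_safe(1, [2, 3], [5, 8]))         # False
```

The avoidance policy is then: grant a request only if the state after granting it would still pass this safety check.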
1.6.10.3 Recovery from Deadlock
• Process Termination: Terminate one or more processes involved in the deadlock.
• Resource Preemption: Temporarily preempt resources from some processes and allocate them to
others.
In summary, process management involves handling various aspects of processes, including their
states, scheduling, synchronization, and communication. Deadlock management ensures the system can
avoid, detect, or recover from deadlocks to maintain smooth operation.
PART II
MODULE 2
Memory management: Main Memory, Swapping, Contiguous Memory Allo-
cation, Paging, Structure of Page Table, Segmentation, Virtual Memory, Demand
Paging, Page Replacement Algorithms, Allocation of Frames, Thrashing.
2 MODULE-2
2.1 Memory Management

Main memory, often referred to as RAM (Random Access Memory), is a crucial component in a
computer system where data and instructions are stored temporarily while a program is executing. It is
directly accessible by the CPU (Central Processing Unit), providing fast access to data and instructions
compared to secondary storage devices like hard drives or SSDs. Main memory is volatile, meaning its
contents are lost when the power is turned off.
Memory management in an operating system (OS) is essential for several reasons:
• Resource Utilization: Memory management ensures efficient utilization of available memory re-
sources. It allocates memory to processes based on their requirements and deallocates memory
when it’s no longer needed, preventing wastage of resources.
• Memory Protection: Memory management provides mechanisms to protect the memory space of
each process from unauthorized access. It ensures that processes can only access memory locations
that they have permission to access, preventing accidental or malicious interference.
• Address Space Management: Memory management organizes the address space of each process,
providing a virtual address space that simplifies programming and enables processes to run inde-
pendently of each other. This abstraction allows each process to have its own logical memory
space, unaware of the physical memory layout.
• Multi-Program Execution: Memory management enables the concurrent execution of multiple pro-
cesses by allocating memory resources to each process and managing their interactions. It ensures
that processes do not interfere with each other’s memory space, maintaining system stability and
reliability.
• Dynamic Memory Allocation: Memory management facilitates dynamic memory allocation, al-
lowing processes to request memory dynamically during runtime. This enables efficient memory
usage and supports data structures such as linked lists, trees, and dynamic arrays.
• Virtual Memory: Memory management implements virtual memory systems, which provide an
abstraction layer between physical memory and processes. Virtual memory allows processes to
access more memory than physically available by temporarily storing data on disk. This feature
enhances system performance and enables the execution of larger programs.
• Memory Sharing: Memory management enables processes to share memory segments, facilitating
communication and data exchange between processes. Shared memory segments allow processes
to collaborate efficiently without the need for expensive inter-process communication mechanisms.
• Memory Protection: Memory management ensures that each process is isolated from other processes,
preventing unauthorized access and interference. Memory protection mechanisms prevent
processes from accessing memory locations belonging to other processes, enhancing system
security and stability.
Overall, memory management plays a critical role in optimizing system performance, ensuring sta-
bility and security, and facilitating efficient utilization of memory resources in an operating system.
2.2 Memory Management Techniques

Memory management techniques refer to various strategies and mechanisms employed by operating
systems to efficiently allocate, manage, and protect memory resources. These techniques ensure
optimal utilization of memory while maintaining system stability and performance. Here are some
common memory management techniques:
2.2.1 Contiguous Memory Allocation

Key Concepts
• Memory Blocks:
– Fixed-size partitioning: The memory is divided into fixed-sized partitions. Each process is
assigned a partition, which might lead to internal fragmentation if the process doesn't use the
entire partition.
– Variable-size partitioning: Memory is divided into variable-sized partitions. Each process is
allocated exactly as much memory as it needs, which reduces internal fragmentation but can
lead to external fragmentation.
• Fragmentation:
– Internal Fragmentation: Occurs when fixed-sized partitions are used, and there is wasted
space within a partition that isn’t used by the process.
– External Fragmentation: Occurs when variable-sized partitions are used, and there are small
blocks of free memory scattered throughout the system that are too small to satisfy any re-
quest.
• Allocation Methods:
– First-fit: The allocator assigns the first block of memory that is large enough to accommodate
the process.
– Best-fit: The allocator searches for the smallest block of memory that is large enough to
accommodate the process, aiming to reduce wasted space.
– Worst-fit: The allocator assigns the largest available block of memory to the process, aiming
to leave large contiguous blocks available for future allocations.
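The three strategies can be sketched over a simple list of free-hole sizes. The hole sizes and the request below are invented for the example:

```python
# First-, best-, and worst-fit selection over a list of free block sizes.
# Returns the index of the chosen hole, or None if no hole is big enough.
def find_block(free_blocks, request, strategy):
    candidates = [(size, i) for i, size in enumerate(free_blocks) if size >= request]
    if not candidates:
        return None
    if strategy == "first":
        return min(candidates, key=lambda c: c[1])[1]   # lowest index
    if strategy == "best":
        return min(candidates, key=lambda c: c[0])[1]   # smallest fitting hole
    if strategy == "worst":
        return max(candidates, key=lambda c: c[0])[1]   # largest hole
    raise ValueError(strategy)

holes = [100, 500, 200, 300, 600]
print(find_block(holes, 212, "first"))  # 1 -> 500 is the first hole >= 212
print(find_block(holes, 212, "best"))   # 3 -> 300 wastes the least space
print(find_block(holes, 212, "worst"))  # 4 -> 600 leaves the biggest remainder
```

Running the same request against each strategy makes the trade-off concrete: best-fit minimizes the leftover sliver, while worst-fit keeps the leftover large enough to be reusable.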
• Advantages
• Disadvantages
– Fragmentation: Both internal and external fragmentation can occur, leading to inefficient
memory usage.
– Limited Flexibility: Growing a process's memory is difficult since the adjacent memory might
be occupied.
– Poor Scalability: Not suitable for systems with a large number of processes or those with
dynamic memory requirements.
Contiguous memory allocation is a basic and easy-to-implement method, but it suffers from signifi-
cant drawbacks, especially fragmentation, that can lead to inefficient memory usage. Modern operating
systems often use more sophisticated methods, such as paging and segmentation, to manage memory
more effectively.
2.2.2 Paging

Paging is a memory management scheme that breaks up logical memory into fixed-size blocks called
"pages" and physical memory into fixed-size blocks of the same size called "frames." Unlike contiguous
memory allocation, the pages of a process do not need to be stored in contiguous locations in main
memory. Paging helps reduce external fragmentation, as pages can be allocated and deallocated
independently of their physical location.
Paging eliminates the need for contiguous allocation of physical memory, addressing many of the
drawbacks associated with contiguous memory allocation, such as fragmentation. Here's a
comprehensive look at paging:
• Basic Concepts:
– Page: The logical unit of memory into which the process is divided. All pages are of equal size.
– Frame: The physical unit of memory. Physical memory is divided into fixed-size blocks
called frames, which are of the same size as pages.
• Page Table:
– A data structure used to keep track of the mapping between logical addresses (pages) and
physical addresses (frames).
– Each entry in the page table contains the frame number where the corresponding page is
stored in physical memory.
• Address Translation:
– Logical Address: The address generated by the CPU, consisting of a page number and an
offset.
– Physical Address: The actual address in physical memory. The page number is used to find
the corresponding frame number from the page table, and the offset is added to this frame
number to get the physical address.
• Page Replacement:
– When physical memory is full and a new page needs to be loaded, the operating system must
replace one of the existing pages.
– Common algorithms include FIFO (First-In-First-Out), LRU (Least Recently Used), and Op-
timal Page Replacement.
• Advantages
– No External Fragmentation: Since any free frame can be used for any page, there are no
issues with contiguous free memory blocks.
– Efficient Memory Utilization: Small chunks of memory (frames) can be allocated as needed,
minimizing wastage.
– Flexibility: Easier to manage processes that require more memory than is available contigu-
ously.
• Disadvantages
– Overhead: Maintaining the page table can consume significant memory, especially for large
processes.
– Translation Time: Each memory access requires translation from logical to physical address,
which can slow down performance. This is often mitigated by hardware such as the Transla-
tion Lookaside Buffer (TLB).
Paging is a powerful memory management scheme that addresses many limitations of contiguous mem-
ory allocation. By breaking memory into smaller, fixed-size blocks, paging can make more efficient use
of memory and eliminate problems related to fragmentation. However, it comes with its own set of chal-
lenges, such as the overhead of maintaining page tables and the potential performance impact of address
translation.
The page table is a data structure used by the operating system to map virtual addresses generated by
a program to physical addresses in the main memory. Each entry in the page table contains information
about the status (valid/invalid) and location (frame number) of each page in memory. Page tables are
essential for translating virtual addresses to physical addresses efficiently during program execution.
Page Table Entries (PTEs): Each entry in the page table corresponds to a single page in the logical
address space and contains the physical frame number where the page is stored, along with other
status information.
• Page Table Entry Fields:
– Frame Number: The physical frame number where the page is stored.
– Valid/Invalid Bit: Indicates whether the page is currently in physical memory.
– Protection Bits: Specify read, write, and execute permissions for the page.
– Referenced Bit: Indicates if the page has been accessed recently.
– Modified (Dirty) Bit: Indicates if the page has been modified.
– Cache Control Bits: Used to control caching policies for the page.
Logical Address Structure: A logical address consists of a page number and an offset within the
page.
Page Number: Used to index into the page table to retrieve the frame number.
Offset: Added to the frame number to get the exact physical address.
Translation Process:
1. The CPU generates a logical address, which is divided into a page number and an offset.
2. The page number is used to look up the page table entry.
3. The frame number from the page table entry is combined with the offset to form the physical address.
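The three translation steps above can be sketched directly. This is an illustrative toy model: the 4 KB page size and the page table contents are invented for the example, and a missing table entry plays the role of a page fault.

```python
PAGE_SIZE = 4096  # assumed 4 KB pages

# page number -> frame number (contents invented for the example)
page_table = {0: 5, 1: 9, 2: 1}

def translate(logical_addr):
    # Step 1: split the logical address into page number and offset.
    page, offset = divmod(logical_addr, PAGE_SIZE)
    # Step 2: look up the page table entry (a miss here would be a page fault).
    frame = page_table[page]
    # Step 3: combine frame number with the offset.
    return frame * PAGE_SIZE + offset

print(translate(8200))  # page 2, offset 8 -> frame 1 -> 1*4096 + 8 = 4104
```

Because the page size is a power of two, real hardware performs the split by simply taking the high and low bits of the address rather than dividing.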
2.2.5 Segmentation
Segmentation is another memory management technique used by operating systems to divide a pro-
gram’s memory into different segments based on the logical divisions of the program. Unlike paging,
which divides memory into fixed-size blocks, segmentation divides memory into variable-sized segments,
each representing a different logical unit of the program, such as functions, data structures, or arrays.
Segmentation allows for more flexible memory allocation compared to paging but can also lead to
fragmentation issues.
Segments:
Segments are variable-sized parts of a program. Each segment represents a logical unit, such as code,
data, stack, heap, etc. Each segment is defined by a base address and a limit (length).
Segment Table:
A table maintained by the operating system with one entry per segment; each entry stores the
segment's base address and limit. A logical address consists of two parts: a segment number and an
offset within the segment. The segment number is used to index into the segment table and retrieve the
base address and limit.
Address Translation:
The logical address is converted to a physical address using the segment table.
The offset is added to the base address to get the physical address.
The system checks if the offset is within the limit of the segment to ensure it’s a valid address.
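This base-plus-limit translation, including the validity check, is compact enough to sketch directly. The segment table contents below are invented for the example.

```python
# segment number -> (base address, limit); contents invented for the example
segment_table = {0: (1000, 400), 1: (6000, 1200)}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:  # the validity check described above
        raise MemoryError("segmentation fault: offset outside segment limit")
    return base + offset

print(translate(1, 53))  # base 6000 + offset 53 = 6053
```

An out-of-range offset, such as offset 400 in segment 0 (limit 400), fails the check; this is essentially what the hardware reports as a segmentation fault.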
Advantages
• Logical Organization: Segmentation aligns with the logical structure of programs, making it easier
to manage and protect different parts of a program.
• Protection and Sharing: Different segments can have different protection levels and can be shared
among processes. For example, code segments can be shared while data segments are kept private.
• Efficient Memory Use: Since segments are variable-sized, they can be allocated just enough mem-
ory to fit the segment, reducing internal fragmentation.
Disadvantages
• External Fragmentation: Because segments are variable-sized, free memory can become broken
into small, non-contiguous holes as segments are allocated and freed.
• Allocation Complexity: The operating system must search for a suitably sized hole for each
segment, using placement strategies such as first-fit or best-fit.
2.2.6 Virtual Memory
Virtual memory is a memory management technique that allows a process to use more memory than is
physically installed by keeping only the active parts of the process in RAM and the rest on disk. Virtual
memory provides several benefits, including efficient memory utilization, simplified memory manage-
ment, and the ability to run multiple processes simultaneously.
Virtual Address Space:
Each process is given a virtual address space, which is an abstraction of the physical memory. The
virtual address space is divided into pages, similar to paging, but these pages can reside either in physical
memory (RAM) or on disk.
Page Table:
A data structure used to keep track of the mapping between virtual addresses and physical addresses.
Each entry in the page table indicates whether the corresponding page is in RAM or on disk and, if in
RAM, its frame number.
Paging:
Virtual memory often uses paging, where both the virtual memory and physical memory are divided
into fixed-size pages and frames, respectively. When a process needs to access data, the system checks
the page table to see if the data is in RAM. If not, it triggers a page fault.
Page Fault:
Page Fault occurs when a program tries to access a page that is not currently in physical memory.
The operating system then fetches the page from disk and loads it into RAM, updating the page table
accordingly.
Swapping:
Swapping is the process of moving pages between disk and RAM. When physical memory is full, the
OS may use a page replacement algorithm to decide which page to swap out to disk to make room for the
needed page.
2.2.7 Advantages of Virtual Memory
• Larger Address Space:
Virtual memory allows systems to run larger applications than what could fit in physical RAM
alone. It enables more efficient use of physical memory by only loading the necessary parts of
programs into RAM.
• Isolation and Protection:
Each process has its own virtual address space, which provides isolation and protection, preventing
processes from interfering with each other's memory.
• Simplified Memory Management:
Programmers can write programs without worrying about the physical memory limitations, as the
operating system manages memory allocation and swapping.
• Multitasking:
Virtual memory facilitates multitasking by allowing multiple processes to share the physical mem-
ory without conflict.
2.2.8 Disadvantages of Virtual Memory
• Performance Overhead:
Swapping pages between RAM and disk can introduce significant performance overhead, espe-
cially if disk access is frequent (thrashing).
• Complexity:
Managing virtual memory and handling page faults adds complexity to the operating system.
Page Replacement Algorithms: When memory is full and a new page must be brought in, a page
replacement algorithm selects the victim page to evict:
• FIFO (First-In-First-Out): Replaces the page that has been in memory the longest.
• LRU (Least Recently Used): Replaces the page that has not been used for the longest time.
• Optimal: Replaces the page that will not be used for the longest time in the future (theoretical and
not practical).
• Clock Algorithm: A more efficient version of LRU that uses a circular buffer and a "use" bit to
approximate LRU behavior.
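The fault counts these policies produce can be compared with a short simulation. The sketch below is illustrative (the reference string and frame count are arbitrary example values, not from the text):

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    """Count page faults with FIFO replacement."""
    mem, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:           # memory full: evict oldest arrival
                mem.discard(queue.popleft())
            mem.add(p)
            queue.append(p)
    return faults

def lru_faults(refs, frames):
    """Count page faults with LRU replacement (OrderedDict keeps recency order)."""
    mem, faults = OrderedDict(), 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)               # mark as most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)      # evict least recently used
            mem[p] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4]
print(fifo_faults(refs, 3), lru_faults(refs, 3))  # 7 6
```

On this reference string LRU saves one fault over FIFO because it keeps the recently reused page 0 resident, which is why LRU usually approximates the (unimplementable) Optimal policy better than FIFO.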
• Lazy Loading:
Pages are loaded into physical memory only when a page fault occurs, meaning that the process
tries to access a page that is not currently in memory. This contrasts with prepaging, where a set of
pages is loaded into memory before they are accessed.
• Page Faults:
When a process accesses a page that is not in physical memory, a page fault is triggered. The oper-
ating system then loads the required page from secondary storage (disk) into physical memory.
• Page Table:
The page table keeps track of all pages, indicating whether each page is in physical memory or on
disk. Each entry includes a present/absent bit that signifies whether the page is currently in RAM.
• Swapping:
If physical memory is full, the operating system must swap out a page to make room for the new
page being brought in. Page replacement algorithms (e.g., FIFO, LRU) determine which page to
swap out.
• Process Start:
When a process starts, its page table is created, but no pages are loaded into physical memory
initially (all pages are marked as not present).
• Page Fault Handling:
When the CPU accesses a page that is not in memory, a page fault occurs. The OS pauses the
process, finds the required page on disk, loads it into a free frame in physical memory, and updates
the page table. If no free frame is available, the OS uses a page replacement algorithm to select a
victim page to swap out.
• Execution Resumes:
Once the required page is loaded into memory, the process is resumed from the point where it was
interrupted.
• Reduced Memory Footprint:
Only the pages that are actually needed are loaded into memory, reducing the overall memory
footprint of processes. This allows more processes to be loaded simultaneously, improving multi-
tasking.
• Faster Startup:
Initial load time for a process is reduced since only necessary pages are loaded on demand.
• Flexibility:
Demand paging adapts dynamically to the actual memory needs of processes, providing flexibility
in memory management.
• Performance Overhead:
Frequent page faults can introduce significant overhead, as each page fault requires disk I/O and
context switching. This can degrade performance, especially if the page fault rate is high (known
as thrashing).
• Complexity:
Implementing demand paging requires complex management of page tables, disk I/O, and page
replacement algorithms.
Segmentation with Paging:
The memory is divided into segments, each representing a logical unit of the program, such as code,
data, stack, etc. Each segment has a variable size and its own segment table entry containing the base
address and limit (size) of the segment.
Paging within Segments:
Each segment is further divided into fixed-size pages. This means that within each segment, demand
paging is applied, allowing pages to be loaded into memory only when needed.
Segment Table:
Contains entries for each segment, with information about the segment's base address, limit, and a
pointer to the page table for that segment.
Page Table:
Each segment has its own page table that keeps track of the pages within that segment. Each entry in
the page table indicates whether the page is in memory, and if so, its frame number.
Address Translation:
1. A logical address is divided into three parts: segment number, page number, and offset within the
page.
2. The segment number is used to index into the segment table to get the base address of the segment
and the pointer to the segment's page table.
3. The page number within the segment is then used to index into the page table to get the frame
number, if the page is in memory.
4. The offset is added to the frame number to get the physical address.
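The four-step translation above can be sketched as a two-level lookup. This is an illustrative toy model: the 1 KB page size and the table contents are invented for the example.

```python
PAGE_SIZE = 1024  # assumed 1 KB pages

# segment -> (limit in bytes, page table for that segment); contents invented
segment_table = {0: (3000, {0: 7, 1: 2, 2: 9})}

def translate(segment, address):
    limit, page_table = segment_table[segment]    # step 2: segment table lookup
    if address >= limit:                          # limit check on the segment
        raise MemoryError("offset beyond segment limit")
    page, offset = divmod(address, PAGE_SIZE)     # step 1: split into page, offset
    return page_table[page] * PAGE_SIZE + offset  # steps 3-4: frame number + offset

print(translate(0, 2100))  # page 2, offset 52 -> frame 9 -> 9*1024 + 52 = 9268
```

The segment lookup happens first so that an address past the segment limit is rejected before any page table is consulted.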
When a process starts, its segment table is created, but no pages are loaded into memory initially.
When a process accesses a page that is not in memory (within a segment), a page fault occurs. The
operating system loads the required page from disk into physical memory, updates the page table for that
segment, and resumes the process.
Swapping:
If physical memory is full, the OS may need to swap out a page using a page replacement algorithm
to make room for the new page.
Advantages:
Combines the benefits of segmentation (logical division of the program) with the efficiency of demand
paging.
Efficient Memory Use:
Only the necessary pages within each segment are loaded into memory, reducing the overall memory
footprint.
Reduced Fragmentation:
Paging within segments helps in reducing external fragmentation, a common issue with pure segmen-
tation.
Flexible and Scalable:
Supports large and complex programs by managing memory efficiently and allowing processes to
grow dynamically.
• Allocation of Frames:
Allocation of frames refers to the assignment of physical memory (frames) to processes in the
system. Various allocation strategies can be employed, such as equal allocation (each process gets
the same number of frames), proportional allocation (frames allocated based on process size or
priority), or priority-based allocation (frames allocated based on process priority).
• Swapping:
Swapping is a memory management technique used by operating systems to temporarily move
entire processes or parts of processes between main memory (RAM) and secondary storage (usu-
ally disk). When the system runs out of available physical memory, it swaps out less frequently
used parts of the memory to disk, freeing up space for more critical processes. Swapping can sig-
nificantly slow down the system if used excessively, as disk access is much slower compared to
memory access.
2.3 Issues of Memory Management
Memory management in operating systems is a complex task that comes with several challenges and
issues. Here are some common issues associated with memory management:
• Fragmentation:
Fragmentation occurs when memory becomes divided into small, non-contiguous blocks over time
due to repeated allocation and deallocation of memory. External fragmentation occurs when free
memory is broken into small, non-contiguous blocks, making it challenging to allocate memory
to processes, even if sufficient memory is available. Internal fragmentation occurs when allocated
memory is larger than required, leading to wasted memory space within allocated blocks.
• Memory Leaks:
Memory leaks occur when a program fails to release memory that it no longer needs, resulting
in a gradual depletion of available memory over time. Memory leaks can lead to degradation in
system performance and eventually cause the system to run out of memory, resulting in crashes or
instability.
• Thrashing:
Thrashing occurs when the system spends a significant amount of time swapping pages between
main memory and disk due to excessive paging activity. It typically happens when the system is
overcommitted with too many processes competing for limited physical memory resources, leading
to a decrease in overall system performance.
• Concurrency Issues:
Managing memory in a multi-threaded or multi-process environment introduces concurrency is-
sues, such as race conditions and deadlocks. Synchronization mechanisms must be employed to
ensure that memory operations are performed atomically and that access to shared memory regions
is properly coordinated among multiple threads or processes.
• Security Vulnerabilities:
Inadequate memory protection mechanisms can lead to security vulnerabilities, such as buffer over-
flows, which can be exploited by attackers to execute arbitrary code or gain unauthorized access
to sensitive information. Memory management techniques must ensure that each process has iso-
lated memory space and that access to memory regions is properly restricted based on process
permissions.
• Complexity:
Memory management involves implementing and maintaining a variety of complex algorithms
and data structures, such as paging, segmentation, and page replacement algorithms. Managing
memory efficiently requires careful optimization and tuning to balance competing objectives such
as performance, reliability, and resource utilization.
Addressing these issues requires careful design and implementation of memory management tech-
niques, as well as ongoing monitoring and optimization to ensure optimal performance and stability of
the operating system.
PART III
MODULE 3
Storage management: Mass-Storage Structure, Disk Structure, Disk Attach-
ment, Disk Scheduling, RAID Structure. File system interface: File Concept, Ac-
cess Methods, Directory Structure, File System Structure, Allocation Methods, and
Free-Space Management. System Protection: Goals, Principles, Domain of Pro-
tection, Access Matrix, Access Control.
3.1 Storage Management
Storage management in an operating system (OS) refers to the systematic control and coordination
of computer storage resources. It involves managing how data is stored, accessed, and organized within
a computer system. This management is crucial for ensuring efficient and reliable storage and retrieval
of data. Key components and responsibilities of storage management include:
1. File Systems
A file system is a method and data structure that the OS uses to manage files on a disk or partition.
It provides a way to store, organize, and retrieve files on a storage device. Examples include NTFS,
FAT32, ext4, and APFS. File systems handle the following:
• File Creation and Deletion: Enabling users and applications to create and delete files.
• Directory Management: Organizing files into directories (or folders) for easier navigation and
management.
• File Access Permissions: Controlling who can read, write, or execute a file.
2. Disk Management
Disk management involves overseeing the physical storage devices, such as hard drives, SSDs, and
optical drives. Key tasks include:
• Partitioning: Dividing a physical disk into multiple logical partitions to separate system, user,
and application data.
• Formatting: Preparing a partition to hold data by setting up a file system.
• Disk Scheduling: Determining the order in which disk I/O operations are executed to improve
performance.
3. Virtual Memory
Virtual memory is a memory management technique that gives an application the impression it has
contiguous working memory while physically it may be fragmented and can exceed the size of
available physical memory. Key aspects include:
• Paging: Dividing virtual memory into fixed-size pages and physical memory into frames.
The OS maintains a page table to keep track of where each virtual page is stored in physical
memory.
• Swapping: Moving data between physical memory and disk storage to ensure that the most
actively used pages remain in RAM.
4. Storage Allocation
Storage allocation involves deciding where and how data will be stored in memory or on disk.
Types of allocation include:
• Contiguous Allocation: Storing files in contiguous blocks of memory, which simplifies access
but can lead to fragmentation.
• Linked Allocation: Storing files in blocks that are linked together, which eliminates fragmen-
tation but can complicate access.
• Indexed Allocation: Using an index block to keep track of file block locations, combining
benefits of both contiguous and linked allocation.
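The difference between linked and indexed allocation is easiest to see in a toy model. The sketch below is illustrative (block numbers invented): the linked scheme must walk the chain block by block, while the index block gives direct access to any position in the file.

```python
# A toy disk of numbered blocks; -1 marks end-of-file in the linked scheme.
# Linked allocation: each data block records the index of the next block.
next_block = {9: 16, 16: 1, 1: 10, 10: -1}  # file starts at block 9

def linked_blocks(start):
    """Walk the chain of linked blocks from the starting block."""
    blocks, b = [], start
    while b != -1:
        blocks.append(b)
        b = next_block[b]
    return blocks

# Indexed allocation: one index block lists all the file's blocks directly.
index_block = [9, 16, 1, 10]

print(linked_blocks(9))  # [9, 16, 1, 10] -- a sequential walk is required
print(index_block[2])    # 1 -- direct access to the file's third block
```

This is why linked allocation suits sequential access while indexed allocation also supports efficient random access, at the cost of storing the index block itself.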
5. Free-Space Management
Managing free space involves keeping track of available storage and allocating it efficiently. Tech-
niques include:
• Bitmaps: Using a bitmap or bit vector where each bit represents a block of storage. A bit
value of 0 indicates a free block, while 1 indicates a used block.
• Free Lists: Maintaining a linked list of free blocks, where each block points to the next free
block.
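Under the convention above (0 = free, 1 = used), a bitmap allocator takes only a few lines. This is a toy sketch; real file systems pack the bits into machine words and scan a word at a time.

```python
bitmap = [1, 1, 0, 0, 1, 0, 1, 0]  # one bit per disk block: 0 = free, 1 = used

def allocate_block(bitmap):
    """Find the first free block, mark it used, and return its index."""
    for i, bit in enumerate(bitmap):
        if bit == 0:
            bitmap[i] = 1
            return i
    return None  # no free block: the disk is full

print(allocate_block(bitmap))  # 2  (first 0 bit)
print(allocate_block(bitmap))  # 3  (next 0 bit)
```

A free list would instead chain blocks 2, 3, 5, and 7 together, trading the fixed-size bitmap for space inside the free blocks themselves.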
6. Backup and Recovery
Ensuring data integrity and availability involves regular backups and the ability to recover data
after a failure. This includes:
• Backup Procedures: Regularly copying data to a secondary storage medium to prevent data
loss.
• Recovery Mechanisms: Restoring data from backups in case of data corruption or loss.
7. Compression and Encryption
To optimize storage space and secure data, operating systems may provide built-in support for file
compression and encryption.
3.1.1.1 Mass-Storage Structure
Involves the arrangement and management of storage devices capable of holding large volumes of
data persistently, addressing concerns like reliability, scalability, and performance across various storage
mediums like hard disk drives (HDDs), solid-state drives (SSDs), and network-attached storage (NAS).
1. Disk Geometry
Disk geometry describes the physical and logical layout of a storage device, typically a hard disk
drive (HDD). It includes:
• Tracks: Concentric circles on the surface of the disk where data is recorded.
• Sectors: The smallest unit of data storage on a disk, typically 512 bytes or 4KB. A track is
divided into sectors.
• Cylinders: A cylinder is a collection of tracks located at the same position on each disk platter.
It is a vertical stack of tracks through all platters.
2. Disk Partitions
A partition is a logical division of a disk. Each partition can be managed separately by the operating
system and can have its own file system. Types of partitions include:
• Primary Partition: A primary division of the disk that can contain an operating system.
• Extended Partition: A type of partition that can contain multiple logical partitions, allowing
for more than the limit of primary partitions.
• Logical Partition: A subdivision within an extended partition, each treated as a separate drive.
3. File Systems
A file system manages how data is stored and retrieved. It organizes files into directories for easy
navigation and access. Common file systems include:
• NTFS (New Technology File System): Used by Windows.
• FAT32 (File Allocation Table): An older file system, still used for compatibility.
• ext4 (Fourth Extended Filesystem): Used by Linux.
• APFS (Apple File System): Used by macOS.
File systems use data structures to keep track of files and directories:
• Inodes: Data structures used by many file systems to store information about a file or direc-
tory, including attributes like file size, ownership, and pointers to data blocks.
• Metadata: Information about files and directories, such as creation date, modification date,
and permissions.
• Disk Blocks: The smallest unit of data storage in a file system. Blocks are typically 512 bytes
or 4KB.
• Clusters: A group of disk blocks that are treated as a single unit by the file system to improve
efficiency.
• Bitmap or Bit Vector: A bitmap is used where each bit represents a disk block. A bit value of
0 indicates a free block, while 1 indicates a used block.
• Free List: A list of free disk blocks that are available for allocation.
To improve disk performance, disk scheduling algorithms are used to determine the order in which
disk I/O operations are executed:
• SCAN (Elevator Algorithm): The disk arm moves in one direction, servicing requests, and
then reverses direction.
• C-SCAN (Circular SCAN): Similar to SCAN but only services requests in one direction, then
quickly returns to the start.
The disk structure is a critical aspect of how data is stored, managed, and retrieved on a storage
device. Understanding the various components, such as disk geometry, partitions, file systems, inodes,
allocation methods, and disk scheduling algorithms, is essential for optimizing disk performance and
ensuring efficient data management.
3.1.1.2 Disk Attachment
Describes the methods by which storage devices are connected to a computer system, encompassing
interfaces like Serial ATA (SATA), Small Computer System Interface (SCSI), and Peripheral Compo-
nent Interconnect Express (PCIe), which influence data transfer rates, compatibility, and hot-swapping
capabilities.
Disk attachment refers to the various methods and technologies used to connect storage devices to a
computer system. The type of disk attachment impacts the performance, scalability, and manageability
of storage. Here are the key types and technologies related to disk attachment:
Internal disk attachment involves connecting storage devices directly to the motherboard or a dedi-
cated storage controller within the computer case. Common types include:
(a) PATA (Parallel ATA)
Also known as IDE (Integrated Drive Electronics). Older interface standard for connecting
storage devices. Uses parallel data transmission, typically with a 40- or 80-wire ribbon cable.
Limited to two devices per channel (master and slave).
(b) SATA (Serial ATA)
Modern interface standard replacing PATA. Uses serial data transmission, resulting in faster
data transfer rates. Easier cable management with thinner cables. Supports features like hot
swapping (removing and adding drives without shutting down the system).
(c) SCSI (Small Computer System Interface)
A parallel interface standard historically used for server and workstation storage. Supports
multiple devices per bus and command queuing.
(d) NVMe (Non-Volatile Memory Express)
Interface designed specifically for SSDs. Connects directly to the PCIe (Peripheral Compo-
nent Interconnect Express) bus. Provides significantly faster data transfer rates compared to
SATA.
(a) USB (Universal Serial Bus)
The most common external interface, widely used for portable drives. Supports hot-swapping
and supplies power over the same cable.
(b) Thunderbolt
High-speed interface developed by Intel and Apple. Combines PCIe and DisplayPort proto-
cols in a single connection. Supports daisy-chaining multiple devices. Thunderbolt 3 and 4
use the USB-C connector and offer high data transfer rates and versatility.
(c) eSATA (External SATA)
External version of the SATA interface. Provides similar performance to internal SATA con-
nections. Requires an external power source or combined with USB for power (eSATAp).
(d) FireWire (IEEE 1394)
Used primarily in multimedia devices for high-speed data transfer. Supports daisy-chaining
and hot-swapping. Less common in modern systems, largely replaced by USB and Thunder-
bolt.
NAS is a dedicated file storage device connected to a network, allowing multiple users and devices
to access the storage over the network.
(a) File-Level Access: NAS devices operate at the file level, serving files over protocols like
SMB/CIFS (Windows), NFS (Unix/Linux), and AFP (Apple).
(b) Ease of Access: Accessible via a network, making it easy to share files across multiple de-
vices.
(c) Scalability: Can be easily expanded with additional drives or storage units.
SAN is a high-speed network of storage devices that provides block-level storage to servers.
(a) Block-Level Access: Operates at the block level, providing raw storage blocks to servers.
(b) Protocols: Uses protocols like Fibre Channel (FC), iSCSI, and FCoE (Fibre Channel over
Ethernet).
(c) Performance and Scalability: High performance and scalable storage solutions typically used
in enterprise environments.
Disk attachment methods are crucial for determining the performance, scalability, and manageability
of storage solutions. Internal attachments like SATA and NVMe provide high-speed access suitable for
desktop and laptop systems, while external attachments like USB and Thunderbolt offer portability and
ease of use. Network-based solutions like NAS and SAN offer advanced storage capabilities for multi-
user environments and enterprise applications. Understanding these attachment types helps in selecting
the appropriate storage solution for specific needs and applications.
Disk scheduling algorithms determine the order in which pending disk I/O requests are serviced, with
the goal of minimizing seek time.
1. FCFS (First-Come, First-Served)
• Description: Services requests in the order in which they arrive.
• Advantages: Simple to implement and free of starvation.
• Disadvantages: Can produce long total seek times, since the head may swing back and forth
across the disk.
2. SSTF (Shortest Seek Time First)
• Description: Selects the request closest to the current head position, minimizing the seek
time.
• Advantages: Reduces the total seek time compared to FCFS.
• Disadvantages: Can cause starvation of requests that are far from the current head position.
Not always optimal for overall performance due to the possibility of request clustering.
3. SCAN (Elevator Algorithm)
• Description: The disk arm moves in one direction (e.g., towards the end of the disk) servicing
all requests until it reaches the end, then reverses direction.
• Advantages: Provides a more uniform wait time for requests compared to SSTF. Avoids
starvation.
• Disadvantages: May still cause longer wait times for requests just missed during the sweep.
4. C-SCAN (Circular SCAN)
• Description: Similar to SCAN, but the disk arm only services requests in one direction. When
it reaches the end, it quickly returns to the beginning without servicing any requests during
the return trip.
• Advantages: Provides a more uniform wait time for requests across the entire disk. Reduces
the maximum wait time compared to SCAN.
5. LOOK and C-LOOK
• LOOK: Similar to SCAN, but the arm only goes as far as the last request in each direction
before reversing.
• C-LOOK: Similar to C-SCAN, but the arm only goes as far as the last request in one direction,
then quickly returns to the beginning.
• Advantages:
Reduces unnecessary arm movement by not going all the way to the end if there are no
requests. More efficient than SCAN and C-SCAN in terms of seek time.
• Disadvantages:
Can still have some disadvantages similar to SCAN and C-SCAN, depending on the request
pattern.
6. Variants of SCAN and LOOK
Several variants of the basic SCAN and LOOK algorithms can further optimize performance:
• N-Step-SCAN: Divides requests into sub-queues and applies the SCAN algorithm to each
sub-queue.
• FSCAN: Uses two queues, one for incoming requests and one for processing, to avoid starva-
tion and balance load.
7. Priority Scheduling
• Description: Assigns priority levels to requests and services higher-priority requests first.
• Advantages: Can ensure that critical tasks are completed promptly.
• Disadvantages: Lower-priority requests can suffer from starvation.
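The seek-distance differences between these algorithms are easy to compute directly. The sketch below is illustrative Python (the request queue, starting cylinder, and helper names are example values of our own, following the common textbook style), comparing SSTF with a SCAN sweep that runs upward to the last cylinder before reversing:

```python
def sstf_order(start, requests):
    """Service requests nearest-first; return (service order, total head movement)."""
    pending, order, head, moved = list(requests), [], start, 0
    while pending:
        nxt = min(pending, key=lambda c: abs(c - head))  # closest request wins
        moved += abs(nxt - head)
        head = nxt
        order.append(nxt)
        pending.remove(nxt)
    return order, moved

def scan_order(start, requests, max_cyl):
    """Elevator: sweep upward to the disk's end, then service the rest downward."""
    up = sorted(c for c in requests if c >= start)
    down = sorted((c for c in requests if c < start), reverse=True)
    moved = (max_cyl - start) + ((max_cyl - min(down)) if down else 0)
    return up + down, moved

reqs = [98, 183, 37, 122, 14, 124, 65, 67]
print(sstf_order(53, reqs))        # ([65, 67, 37, 14, 98, 122, 124, 183], 236)
print(scan_order(53, reqs, 199))   # ([65, 67, 98, 122, 124, 183, 37, 14], 331)
```

Note how SSTF gives the smaller total here but services 98 only after four nearer requests, illustrating the starvation risk described above, while SCAN's sweep gives every request a bounded wait.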
RAID (Redundant Array of Independent Disks) encompasses various techniques for combining multiple
disk drives into logical units to improve data redundancy, performance, or both. RAID configurations
range from RAID 0 (striping) for performance to RAID 1 (mirroring) for redundancy, with more complex
setups like RAID 5 and RAID 6 offering a balance of both.
Data redundancy, although taking up extra space, adds to disk reliability. This means, that in case of
disk failure, if the same data is also backed up onto another disk, we can retrieve the data and go on with
the operation. On the other hand, if the data is spread across multiple disks without the RAID technique,
the loss of a single disk can affect the entire data.
RAID is very transparent to the underlying system. This means, that to the host system, it appears
as a single big disk presenting itself as a linear array of blocks. This allows older technologies to be
replaced by RAID without making too many changes to the existing code.
Different RAID Levels
• RAID-0 (Striping)
• RAID-1 (Mirroring)
• RAID-2 (Bit-Level Striping with Dedicated Parity)
• RAID-3 (Byte-Level Striping with Dedicated Parity)
• RAID-4 (Block-Level Striping with Dedicated Parity)
• RAID-5 (Block-Level Striping with Distributed Parity)
• RAID-6 (Block-Level Striping with Two Parity Blocks)
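The parity used by RAID-4 and RAID-5 is a byte-wise XOR of the data blocks in a stripe: because XOR is its own inverse, any single lost block can be rebuilt from the surviving blocks and the parity. A minimal sketch (toy two-byte blocks with invented values):

```python
from functools import reduce

def parity(blocks):
    """Byte-wise XOR of equal-length blocks (the RAID parity calculation)."""
    return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*blocks))

# Toy two-byte "blocks" forming one stripe (values invented for the example)
d0, d1, d2 = b"\x0f\x10", b"\xf0\x01", b"\x33\x44"
p = parity([d0, d1, d2])

# If the disk holding d1 fails, XOR-ing the survivors with the parity rebuilds it:
recovered = parity([d0, d2, p])
print(recovered == d1)  # True
```

RAID-6 extends this idea with a second, independently computed parity block so that two simultaneous disk failures can be survived.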
3.1.3.1 Goals
Include ensuring the confidentiality (preventing unauthorized access), integrity (maintaining data ac-
curacy and consistency), availability (ensuring system uptime and responsiveness), and accountability
(tracking and auditing user actions) of computer systems and their resources.
3.1.3.2 Principles
Govern the design and implementation of system protection mechanisms, emphasizing principles
such as least privilege (granting minimal necessary access rights), separation of duties (dividing respon-
sibilities to prevent abuse), and defense in depth (layered security measures).
3.1.3.5 Access Control
Access control enforces the policies and restrictions defined in the access matrix, regulating user interactions with system resources through authentication, authorization, and auditing mechanisms such as access control lists (ACLs), capabilities, and role-based access control (RBAC).
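An ACL stores, per resource, which rights each user holds, in effect one column of the access matrix. A toy Python sketch of the check described above (all resource and user names here are hypothetical examples, not from any real system):

```python
# Toy access-control-list (ACL) check. Each resource maps users to
# the set of rights they hold. All names below are illustrative.

acl = {
    "payroll.db": {"alice": {"read", "write"}, "bob": {"read"}},
    "boot.cfg":   {"root": {"read", "write"}},
}

def is_allowed(user, resource, right):
    """Grant access only if the right appears in the user's ACL entry."""
    return right in acl.get(resource, {}).get(user, set())

assert is_allowed("alice", "payroll.db", "write")       # permitted
assert not is_allowed("bob", "payroll.db", "write")     # least privilege: bob is read-only
assert not is_allowed("mallory", "payroll.db", "read")  # no entry, no access
```

Note the default-deny behaviour: any user or resource without an explicit entry is refused, which is how least privilege is enforced in practice.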
PART IV
MODULE 4
Concepts of Virtualization: types, hypervisors, concept of host and guest VMs; concepts of Data Virtualization, Desktop Virtualization, Server Virtualization, Operating System Virtualization, Network Functions Virtualization. RTOS, Network OS, Cloud Operating Systems: advantages and disadvantages. Containers: Docker, Kubernetes. Introduction to GUI-based operating systems, File Management, Elements of Word Processing. Awareness of the Cyber Security Act and the IT Act.
4 Module -4
4.1 Introduction to Virtualization
Virtualization is the process of creating a virtual version of something, such as hardware, storage, or
network resources. It allows multiple virtual instances to run on a single physical system, enabling better
resource utilization, scalability, and flexibility.
A hypervisor is the software layer that creates and runs virtual machines. Hypervisors come in two types:
• Type 1 (Bare-Metal) Hypervisors: Run directly on the physical hardware, providing better performance and efficiency. Examples: VMware ESXi, Microsoft Hyper-V, Xen.
• Type 2 (Hosted) Hypervisors: Run on top of a host operating system, which manages hardware resources. Examples: VMware Workstation, Oracle VirtualBox.
• Centralized management
• Enhanced security
4.1.1.4 Server Virtualization
Server virtualization partitions a physical server into multiple virtual servers, each running its own
operating system and applications. This improves server utilization and reduces hardware costs.
The host is the physical machine on which the virtualization software (hypervisor) is installed. It provides resources such as CPU, memory, and storage to guest VMs.
Guest VMs are the virtual instances that run on the host. Each guest VM operates as if it were a separate physical machine, with its own OS and applications.
Hypervisors manage the creation, execution, and resource allocation of guest VMs. They ensure isolation and efficient utilization of the host's resources.
Advantages
• Scalability
• Cost-efficiency
• Flexibility and agility
• Enhanced collaboration
Disadvantages
• Security concerns
• Dependence on internet connectivity
• Potential for vendor lock-in
4.1.6.1 Docker
Containers are a form of virtualization that allow developers to package applications with all the
dependencies they need to run consistently across different computing environments. This technology
ensures that the application runs the same way regardless of where it is deployed.
Docker is a platform and tool designed to create, deploy, and manage containers. It provides a
lightweight alternative to traditional virtual machines by using the host operating system’s kernel, but
with isolated user spaces.
Key Components
• Docker Image: A lightweight, stand-alone, executable package that includes everything needed to run a piece of software, including the code, runtime, libraries, and system tools.
• Docker Container: A running instance of an image, isolated from other containers and from the host.
• Dockerfile: A text file of build instructions from which Docker images are created.
Benefits
• Efficiency: Containers use fewer resources than traditional VMs because they share the host OS kernel.
• Portability: An image built once runs the same way on any machine with a container runtime.
Basic Commands
docker build -t myapp .      # build an image from the Dockerfile in the current directory
docker run -p 80:80 myapp    # start a container, mapping host port 80 to container port 80
docker ps                    # list running containers
docker stop <container-id>   # stop a running container
An example Dockerfile for a small Python application:
# Use an official Python runtime as a parent image
FROM python:3-slim
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install any packages listed in requirements.txt
RUN pip install -r requirements.txt
# Make port 80 available outside this container
EXPOSE 80
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python", "app.py"]
4.1.6.2 Kubernetes
Kubernetes is an open-source container orchestration platform designed to automate deploying, scal-
ing, and managing containerized applications. It handles the complex tasks of container deployment,
scaling, and operations, freeing developers from managing underlying infrastructure.
Key Components
• Pod: The smallest deployable unit in Kubernetes, which can hold one or more containers.
• Service: An abstraction that defines a logical set of Pods and a policy by which to access them.
• Deployment: Declares the desired state for a set of Pods, such as which image to run and how many replicas to keep.
• Node: A worker machine, physical or virtual, on which Pods are scheduled.
Benefits
• Self-Healing: Restarts failed containers, replaces containers, and kills containers that don’t respond
to health checks.
• Service Discovery and Load Balancing: Automatically exposes containers using DNS names or
their own IP addresses, and distributes network traffic.
• Storage Orchestration: Mounts storage systems like local storage, public cloud providers, and
more.
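The self-healing behaviour above follows a reconcile loop: a controller repeatedly compares the desired state with the observed state and acts to close the gap. A toy Python sketch of that pattern (this is an illustration only, not Kubernetes code; the pod names are invented):

```python
# Toy reconcile loop in the style of a Kubernetes controller:
# compare the desired replica count with the pods actually running,
# and create or delete pods until they match.

def reconcile(desired_replicas, running_pods):
    """Mutate running_pods toward the desired state; return actions taken."""
    actions = []
    while len(running_pods) < desired_replicas:
        running_pods.append(f"pod-{len(running_pods)}")
        actions.append(("create", running_pods[-1]))
    while len(running_pods) > desired_replicas:
        actions.append(("delete", running_pods.pop()))
    return actions

# A pod has crashed: only 2 of the 3 desired replicas are running.
pods = ["pod-0", "pod-1"]
actions = reconcile(3, pods)
assert actions == [("create", "pod-2")]
assert len(pods) == 3
```

Because the loop runs continuously, any drift from the declared state, such as a crashed container or a manually deleted pod, is corrected automatically.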
Basic Commands
kubectl apply -f deployment.yaml              # create or update the objects defined in a manifest
kubectl get pods                              # list running Pods
kubectl scale deployment my-app --replicas=5  # change the replica count
kubectl delete -f deployment.yaml             # remove the objects defined in a manifest
An example Deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app-image:latest
        ports:
        - containerPort: 80
Develop with Docker: Developers use Docker to containerize applications, ensuring they run consis-
tently in any environment.
Deploy with Kubernetes: Once containerized, Kubernetes is used to manage the deployment, scaling,
and operations of these containers in a production environment.
Integration
Docker Desktop: Provides an easy-to-use development environment for Docker and Kubernetes on
a local machine.
Minikube: A tool that runs a single-node Kubernetes cluster on a local machine for development and
testing.
Docker and Kubernetes are complementary technologies that provide robust solutions for container-
ized applications. Docker excels in creating and managing containers, while Kubernetes provides pow-
erful orchestration capabilities for deploying and managing containers at scale. Together, they enable
developers to build, ship, and run applications efficiently and reliably.
4.2 References
2. W. Stallings, Operating Systems: Internals and Design Principles, Seventh Edition, Pearson.