End Sem OS
Unit 1
Introduction
Simple Batch Systems:
Multiprogrammed Batch Systems:
Time Sharing Systems:
Personal-Computer Systems:
Parallel Systems:
Distributed Systems:
Real-Time Systems:
Operating Systems as Resource Managers:
Processes
Introduction to Processes:
Process States:
Process Management:
Interrupts
Interprocess Communication (IPC):
Threads: Introduction and Thread States
Thread Operation:
Threading Models:
Processor Scheduling:
Scheduling Levels:
Preemptive Scheduling:
Non-Preemptive Scheduling:
Priorities in Scheduling:
Scheduling Objectives:
Scheduling Criteria:
Scheduling algorithms
Demand Scheduling:
Real-Time Scheduling:
Unit 2
Process Synchronization: Mutual Exclusion
Software Solutions to the Mutual Exclusion Problem:
Hardware Solutions to the Mutual Exclusion Problem:
Semaphores
Critical Section Problems
Case Study: The Dining Philosophers Problem
Case Study: The Barber Shop Problem
Memory Organization
Memory Hierarchy
Memory Management Strategies
Contiguous Memory Allocation vs. Non-Contiguous Memory Allocation
Partition Management Techniques
Logical Address Space vs. Physical Address Space
Swapping
Paging
Segmentation
Segmentation with Paging Virtual Memory: Demand Paging
Page Replacement and Page-Replacement Algorithms
Performance of Demand Paging
Thrashing
Demand Segmentation and Overlay Concepts
Unit 3
Deadlocks
Deadlock Solution
Deadlock Prevention:
Deadlock Avoidance with Banker's Algorithm:
Banker's Algorithm Steps:
Deadlock Detection:
Resource Allocation Graph (RAG):
Deadlock Recovery:
Recovery Methods:
Resource concepts
Hardware Resources:
Software Resources:
Resource Allocation and Management:
Device Management
Device Management Components:
Disk Scheduling Strategies:
Common Disk Scheduling Algorithms:
Rotational Optimization:
Techniques for Rotational Optimization:
System Consideration:
Key System Considerations:
Caching and Buffering:
Significance and Benefits:
Unit 4
File System Introduction:
Components of a File System:
File Organization:
Common File Organization Techniques:
Logical File System:
Characteristics and Functions:
Physical File System:
Key Aspects and Functions:
Relationship between Logical and Physical File Systems:
File allocation strategies
Common File Allocation Strategies:
Factors influencing File Allocation Strategies:
Free Space Management
Common Free Space Management Techniques:
File Access Control
Data Access Techniques:
Considerations in File Access Control and Data Access:
Data Integrity Protection
Techniques and Measures for Data Integrity Protection:
Importance and Benefits:
Challenges:
File systems
FAT32 (File Allocation Table 32):
NTFS (New Technology File System):
Ext2/Ext3 (Second Extended File System/Third Extended File System):
APFS (Apple File System):
ReFS (Resilient File System):
Unit 1
Introduction
An Operating System (OS) is a fundamental component of a computer system that
acts as an intermediary between the hardware and the user or application software.
It serves several crucial functions:
connectivity and networked applications.
Here are some examples of popular operating systems:
1. Windows: Microsoft Windows is a widely used OS known for its graphical user
interface and compatibility with a variety of software applications.
4. Unix: Unix is an older, robust OS that has influenced many other operating
systems, including Linux.
6. iOS: iOS is Apple's mobile operating system used in iPhones and iPads.
Simple Batch Systems:
Batch Jobs: In a simple batch system, users submit their jobs to the system as
batch jobs. A batch job typically consists of one or more programs or tasks that
need to be executed sequentially.
Job Scheduling: The OS's primary responsibility is to schedule and manage
the execution of batch jobs. It maintains a job queue and selects the next job to
run based on criteria like job priority.
Job Control Language (JCL): Users provide job control language statements
in their batch job submissions. JCL specifies details like the input and output
files, resource requirements, and other job-specific information.
Job Spooling: Jobs are often spooled (spooling stands for Simultaneous
Peripheral Operations On-line) before execution. This means they are placed in
a queue and stored on secondary storage, making it easier for the system to
retrieve and execute them.
Efficiency: Simple batch systems are efficient for processing large volumes of
similar tasks without the overhead of user interaction.
Lack of Interactivity: They are not suitable for tasks that require user
interaction, making them unsuitable for real-time or interactive applications.
Limited Flexibility: Users need to submit jobs in advance, which may lead to
delays if a high-priority task suddenly arises.
Multiprogrammed Batch Systems:
Job Pool: In a multiprogrammed batch system, there is a job pool that contains
a collection of batch jobs. These jobs are ready to run and are loaded into
memory as space becomes available.
I/O Overlap: These systems aim to overlap I/O operations with CPU
processing. While one job is waiting for I/O, another job can utilize the CPU,
enhancing overall system performance.
priority ones.
Resource Utilization: Resources are used efficiently as they are not wasted on
idle time. This leads to better CPU and I/O device utilization.
Increased Overhead: The need to load and swap jobs in and out of memory
introduces some overhead in the system.
Time Sharing Systems:
A Time-Sharing System, also known as a multi-user operating system, allows
multiple users to interact with the computer simultaneously. Here are the key
aspects of time-sharing systems:
Time Slicing: The CPU's time is divided into small time slices, and each user or
process is allocated a time slice to execute their tasks. This provides the illusion
of concurrent execution for multiple users.
Resource Sharing: Resources like CPU, memory, and I/O devices are shared
among users or processes. The system ensures fair access to resources.
Response Time: They are designed for fast response times to ensure that
users can interact with the system in real-time.
Personal-Computer Systems:
Personal Computer (PC) Systems are designed for individual users and small-scale
computing needs. Here are the key characteristics:
Single User: PC systems are typically single-user systems, designed for use
by a single individual.
User-Friendly GUI: They often have a graphical user interface (GUI) that
makes it easy for users to interact with the system.
Limited Resource Sharing: PC systems are not designed for heavy multi-user
interaction or resource sharing. They focus on providing resources to a single
user's tasks.
Broad Application: PC systems are used for a wide range of applications, from
word processing and web browsing to gaming and multimedia.
Parallel Systems:
Parallel Systems are designed to execute tasks concurrently by using multiple
processors or cores. Here are the key aspects of parallel systems:
High Performance: Parallel systems offer high computing power and are used
for scientific computing, simulations, and tasks that can be divided into parallel
threads.
Each of these types of systems serves different purposes and has its unique
characteristics, catering to specific user needs and computing requirements.
Distributed Systems:
Distributed Systems are a collection of interconnected computers that work together
as a single, unified system. Here are the key aspects of distributed systems:
Resource Sharing: Resources like processing power, memory, and data can
be shared across the network, allowing for more efficient use of resources.
Scalability: Distributed systems can be easily scaled by adding more machines
to the network.
Fault Tolerance: They are designed to handle failures gracefully, ensuring that
the system continues to function even if some nodes fail.
Real-Time Systems:
Real-Time Systems are designed to respond to events or input within a predefined
time constraint. They are used in applications where timing and predictability are
critical. Here are the key characteristics of real-time systems:
Hard and Soft Real-Time: Real-time systems can be classified as hard real-
time (where missing a deadline is catastrophic) or soft real-time (where
occasional missed deadlines are acceptable).
Applications: Real-time systems are used in areas like aviation (flight control
systems), automotive (engine control units), and industrial automation
(robotics).
Both distributed systems and real-time systems are specialized types of computer
systems, each with its unique requirements and applications. Distributed systems
focus on resource sharing and scalability across multiple machines, while real-time
systems prioritize time-bound responses and determinism.
Operating Systems as Resource Managers:
Operating Systems (OS) act as resource managers that oversee and control the
allocation and utilization of a computer system's hardware and software resources.
Here's how an OS functions as a resource manager:
9. Security: OSs implement security measures like encryption, firewalls, and
access controls to protect the system from unauthorized access and data
breaches.
Processes
Introduction to Processes:
In the context of operating systems, a process is a fundamental concept that
represents the execution of a program. It's a unit of work in a computer system that
can be managed and scheduled by the operating system. Here's an overview of
processes:
A process consists of the program's code, its data, and the execution context,
including the program counter, registers, and the stack.
Each process operates in its own isolated memory space, which ensures that
one process cannot directly interfere with or access the memory of another
process.
Processes can communicate and share data through inter-process
communication mechanisms provided by the operating system.
Process States:
Processes go through different states during their lifecycle. These states represent
the different stages a process can be in. The typical process states are:
1. New: In this state, a process is being created but has not yet started execution.
2. Ready: A process in the ready state is prepared to run and is waiting for its turn
to be executed. It's typically waiting in a queue.
3. Running: A process in the running state is actively executing its code on the
CPU.
4. Blocked (or Waiting): When a process is unable to continue its execution due
to the need for some external event (e.g., I/O operation, user input), it enters
the blocked state and is put on hold until the event occurs.
Process Management:
Process management is a critical aspect of an operating system's responsibilities. It
involves various tasks related to process creation, scheduling, and termination.
Here's an overview of process management:
1. Process Creation: When a user or system request initiates a new process, the
OS is responsible for creating the process. This includes allocating memory,
initializing data structures, and setting up the execution environment.
6. Process Priority and Control: The OS allows users to set process priorities,
which influence their order of execution. It also provides mechanisms to control
and monitor processes.
7. Process State Transitions: The OS manages the transitions between different
process states, ensuring that processes move between states as required.
Effective process management is essential for the efficient and stable operation of a
computer system, enabling multiple programs to run simultaneously, share
resources, and respond to user and system needs.
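As an illustration of process creation on a POSIX system, here is a minimal sketch in C using fork() and exec(); the program launched (ls) and the printed messages are only examples:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                      /* create a new (child) process */
    if (pid < 0) {
        perror("fork");                      /* process creation failed */
        exit(EXIT_FAILURE);
    } else if (pid == 0) {
        /* child: replace its memory image with a new program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("exec");                      /* reached only if exec fails */
        exit(EXIT_FAILURE);
    } else {
        int status;
        waitpid(pid, &status, 0);            /* parent: wait for the child to terminate */
        printf("child %d finished\n", (int)pid);
    }
    return 0;
}

The waitpid() call in the parent is what lets the OS clean up the terminated child, mirroring the creation and termination steps described above.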
Interrupts
In the context of operating systems and computer architecture, an interrupt is a
signal or event that halts the normal execution of a program to transfer control to a
specific routine, often called an interrupt service routine (ISR) or interrupt handler.
Interrupts play a crucial role in modern computing systems by allowing the operating
system to respond to events and requests in a timely and efficient manner. Here are
the key aspects of interrupts:
1. Types of Interrupts:
3. Interrupt Prioritization: In systems with multiple interrupts, prioritization
mechanisms ensure that the CPU services higher-priority interrupts first. This is
essential for handling critical events promptly.
5. Context Switching: Interrupts often involve context switching, where the CPU
switches from one program's context to another. This allows the operating
system to maintain the illusion of concurrent execution, even on single-core
processors.
Interprocess Communication (IPC):
Interprocess communication allows processes to exchange data and coordinate their activities. The mechanism used depends on the needs and requirements of the processes involved. Here are some of the key methods of IPC:
1. Shared Memory:
Shared memory is a fast and efficient method of IPC since it doesn't involve
the overhead of copying data between processes.
2. Message Passing:
Pipes are a one-way communication channel that allows data to flow in one
direction between processes.
Named pipes (FIFOs) are similar but have a well-defined name in the file
system, allowing unrelated processes to communicate using a common
pipe.
4. Sockets:
Sockets are a network-based IPC mechanism used for communication
between processes on different machines over a network.
5. Signals:
Signals are often used for simple forms of IPC and for handling events like
process termination.
7. Message Queues:
IPC is fundamental for modern operating systems and plays a crucial role in
enabling processes to work together, share data, and synchronize their actions. The
choice of IPC method depends on factors such as the nature of the communication,
performance requirements, and security considerations.
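For instance, two related processes on a POSIX system can exchange data through an anonymous pipe; the following is a minimal sketch in C (error handling abbreviated, and the message text is arbitrary):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                               /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                       /* child: writes a message into the pipe */
        close(fd[0]);
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        return 0;
    }

    close(fd[1]);                            /* parent: reads the message from the pipe */
    char buf[64];
    read(fd[0], buf, sizeof(buf));
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}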
Threads: Introduction and Thread States
Introduction to Threads:
1. Thread vs. Process: A process is a separate program execution with its own
memory space, file handles, and system resources. Threads, on the other hand,
share the same memory space as the process and have their own execution
context, such as program counter and registers.
2. Benefits of Threads:
3. Types of Threads:
Thread States:
Threads go through different states during their lifecycle, just like processes. The
typical thread states are:
1. New: In this state, a thread is created but has not yet started execution.
2. Runnable: A thread in the runnable state is ready to execute and waiting for the
CPU. It is typically waiting in a queue and is eligible for execution.
3. Running: A thread in the running state is actively executing its code on the
CPU.
4. Blocked (or Waiting): When a thread cannot continue its execution due to the
need for some external event (e.g., I/O operation), it enters the blocked state
and is put on hold until the event occurs.
Thread Transitions:
Threads transition between these states based on various factors, including their
priority, the availability of CPU time, and external events. Thread scheduling
algorithms determine which thread runs next and aim to provide fair execution and
efficient resource utilization.
Thread Management:
Operating systems provide APIs and libraries to create, manage, and synchronize
threads. Popular programming languages like C, C++, Java, and Python have built-
in support for threading. Threads can communicate and synchronize their activities
using synchronization primitives like semaphores, mutexes, and condition variables.
Effective thread management is crucial for achieving concurrent execution in
applications, improving performance, and making efficient use of modern multicore
processors. However, it also introduces challenges related to synchronization, data
sharing, and avoiding race conditions.
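As a small illustration of thread creation and joining with the POSIX threads API (a sketch only; compile with -pthread):

#include <stdio.h>
#include <pthread.h>

/* function executed by each worker thread */
void *worker(void *arg) {
    int id = *(int *)arg;
    printf("thread %d running\n", id);
    return NULL;
}

int main(void) {
    pthread_t tids[3];
    int ids[3] = {0, 1, 2};

    for (int i = 0; i < 3; i++)              /* create: each thread moves from New to Runnable */
        pthread_create(&tids[i], NULL, worker, &ids[i]);

    for (int i = 0; i < 3; i++)              /* join: wait for each thread to terminate */
        pthread_join(tids[i], NULL);

    printf("all threads finished\n");
    return 0;
}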
Thread Operation:
Thread operations are fundamental for creating, managing, and controlling threads
within a program or process. Here are the key thread operations:
1. Thread Creation:
2. Thread Termination:
Threads can terminate for various reasons, such as completing their tasks,
receiving a termination signal, or encountering an error. Proper thread
termination is essential to release resources and avoid memory leaks.
3. Thread Synchronization:
Synchronization mechanisms such as locks (mutexes), semaphores, and condition variables are used to prevent race conditions and ensure orderly access to shared resources.
4. Thread Joining:
A thread can wait for another thread to complete its execution by using a
thread join operation. This is often used to wait for the results of a thread's
work before continuing with the main thread.
5. Thread Detachment:
Threads can be detached from the calling thread, which allows them to
continue running independently. Detached threads automatically release
their resources when they terminate, without requiring the main thread to
join them.
6. Thread Prioritization:
Some threading models or libraries allow you to set thread priorities, which
influence the order in which threads are scheduled to run by the operating
system.
7. Thread Communication:
Threading Models:
Threading models define how threads are created, scheduled, and managed within
a program or an operating system. Different threading models offer various
advantages and trade-offs, depending on the application's requirements. Here are
common threading models:
1. Many-to-One Model:
Many user-level threads are mapped onto a single kernel thread, which keeps thread management lightweight. However, it doesn't fully utilize multiprocessor systems, since only one thread can run at a time.
2. One-to-One Model:
It offers fine-grained control but may have higher overhead due to the
increased number of kernel threads.
3. Many-to-Many Model:
This model seeks to balance control and efficiency by allowing both user-
level and kernel-level threads.
4. Hybrid Model:
Processor Scheduling:
Processor scheduling is a core component of operating systems that manages the
execution of processes and threads on a CPU. It aims to allocate CPU time
efficiently and fairly to multiple competing processes. Below, we'll explore various
aspects of processor scheduling in detail.
Scheduling Levels:
Scheduling levels, also known as scheduling domains, represent the different
stages at which scheduling decisions are made within an operating system. These
levels help determine which process or thread gets access to the CPU at any given
time. There are typically three primary scheduling levels:
1. Long-Term Scheduling (Job Scheduling):
Role: Long-term scheduling selects processes from the job pool, which is a
queue of new processes waiting to enter the system.
Characteristics:
2. Medium-Term Scheduling:
Role: This level of scheduling determines which processes that are already in
memory should be suspended (swapped out) to secondary storage or moved
back into memory (swapped in).
Characteristics:
3. Short-Term Scheduling (CPU Scheduling):
Role: Its primary goal is to optimize CPU utilization, response time, and overall
system throughput.
Characteristics:
It runs very frequently and must make decisions quickly, ensuring that every ready process gets an opportunity to run.
Preemptive Scheduling:
Preemptive scheduling is a scheduling policy where the operating system has the
authority to interrupt a running process and allocate the CPU to another process if a
higher-priority process becomes available or if a process exceeds its allocated time
slice (quantum). Preemptive scheduling ensures fairness, responsiveness, and
prioritization of tasks.
Context Switching: Preemption requires a context switch, which involves saving the state of the currently executing process and restoring the state of the newly scheduled process.
Round Robin: A process is allocated a fixed time slice (quantum) of CPU time.
When the quantum expires, the process is preempted, and another process is
given the CPU.
Non-Preemptive Scheduling:
Non-preemptive scheduling, also known as cooperative scheduling, allows a
process to continue running until it voluntarily releases the CPU by either blocking
(e.g., for I/O) or completing its execution. The operating system does not forcibly
interrupt a running process to allocate the CPU to another process. Instead, it relies
on the cooperation of processes.
Characteristics of Non-Preemptive Scheduling:
4. Potential Responsiveness Issues: In non-preemptive scheduling, if a process
does not voluntarily yield the CPU, it can monopolize it, potentially causing
unresponsiveness in the system for other processes or tasks.
Shortest Job Next (SJN) or Shortest Job First (SJF): The process with the
shortest burst time is executed without preemption, allowing it to complete
before other processes are given the CPU.
Priorities in Scheduling:
In the context of processor scheduling, priorities play a crucial role in determining
the order in which processes or threads are granted access to the CPU.
Prioritization is used to manage the execution of processes based on their relative
importance or urgency. Let's delve into the concept of priorities in scheduling:
1. Importance of Priorities:
Priorities are assigned to processes or threads to reflect their significance within the
system. High-priority processes are given preference in CPU allocation, ensuring
that critical tasks are executed promptly. Here's how priorities are used and their
significance:
Responsiveness: High-priority processes are scheduled more frequently,
ensuring that tasks with immediate user interaction or real-time requirements
receive timely CPU attention. This enhances system responsiveness and user
experience.
2. Priority Levels:
Priority levels can vary from system to system, with different operating systems
using distinct scales to represent priorities. Common approaches include:
Dynamic Priorities: Dynamic priority scheduling allows priorities to change
during the execution of a process based on factors like aging, process behavior,
and resource usage. This approach adapts to the system's current workload
and requirements.
4. Priority Inversion: Priority inversion occurs when a high-priority process is forced to wait because a lower-priority process holds a resource it needs; techniques such as priority inheritance are used to mitigate it.
In summary, priorities are vital for managing process execution in scheduling. They
determine the order in which processes or threads are granted access to the CPU
and play a significant role in ensuring system responsiveness, resource allocation,
and fairness. Assigning and managing priorities effectively is crucial in optimizing
system performance and meeting specific application requirements.
Scheduling Objectives and Scheduling Criteria:
Scheduling Objectives:
Scheduling objectives specify the high-level goals that a scheduling algorithm aims
to achieve. These objectives guide the scheduler in making decisions about which
process or thread to execute next. Common scheduling objectives include:
7. Response Time: Response time is the time taken for a process to start
executing after it enters the ready queue. Minimizing response time is essential
for ensuring rapid task initiation.
Scheduling Criteria:
Scheduling criteria are specific parameters and attributes used to make scheduling
decisions. These criteria help the scheduler compare and prioritize processes
based on measurable factors. Common scheduling criteria include:
1. Burst Time: Burst time is the time a process spends running on the CPU
before it either blocks (for I/O or other reasons) or terminates. Shorter burst
times often indicate processes that can complete quickly.
3. Waiting Time: Waiting time is the total time a process has spent waiting in the
ready queue before gaining access to the CPU. Reducing waiting time is often
a scheduling goal to improve system responsiveness.
6. Quantum (Time Slice): The quantum is the maximum amount of CPU time
allocated to a process in round-robin or time-sharing scheduling. Setting an
appropriate quantum helps balance fairness and system responsiveness.
7. I/O and CPU Burst Times: Different processes may have varying I/O burst
times and CPU burst times. Schedulers may prioritize I/O-bound processes to
improve overall system efficiency.
8. Process Age and Aging: Aging is a dynamic criterion that increases the priority
of processes in the ready queue if they have been waiting for a long time. Aging
helps prevent processes from being indefinitely starved.
Scheduling algorithms
https://www.youtube.com/watch?v=zFnrUVqtiOY&list=PLxCzCOWd7aiGz9donHRrE9I3Mwn6XdP8p&index=14&pp=iAQB
1. First-Come, First-Served (FCFS):
Description: FCFS executes processes in the order in which they arrive in the ready queue.
Characteristics:
Easy to implement.
May lead to poor CPU utilization if long processes arrive first (the "convoy
effect").
Example: Imagine three processes arriving in the order P1, P2, P3. They
execute sequentially, with P1 running to completion before P2 and P3 start.
2. Shortest Job Next (SJN) / Shortest Job First (SJF):
Description: SJN or SJF scheduling selects the process with the shortest burst
time for execution. This minimizes average waiting time.
Characteristics:
3. Round Robin (RR):
Description: Round Robin allocates each process a fixed time quantum and cycles through the ready queue.
Characteristics:
Example: If processes P1, P2, and P3 each get a time quantum of 2, they take
turns executing in a cyclic manner, like P1 -> P2 -> P3 -> P1 -> P2 -> ...
4. Priority Scheduling:
Description: Priority scheduling executes higher-priority processes before lower-priority ones.
Characteristics:
Example: If P1 has higher priority than P2, the scheduler executes P1 before
P2.
Characteristics:
Characteristics:
7. Lottery Scheduling:
Description: In lottery scheduling, each process is assigned a number of
lottery tickets. The scheduler selects a ticket at random, and the process
holding that ticket is granted access to the CPU.
Characteristics:
Example: If a process holds 10 out of 100 total tickets, it has a 10% chance of
being selected.
8. Real-Time Scheduling:
Characteristics:
These are some of the most common scheduling algorithms. The choice of
scheduling algorithm depends on the specific requirements of the system, the
nature of the workloads, and the desired system behavior. The selection of an
appropriate scheduling algorithm is crucial for optimizing system performance and
meeting the objectives and criteria set for scheduling.
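As a worked example of the criteria above, the sketch below computes average waiting and turnaround times under FCFS for three processes assumed to arrive at time 0 with burst times 24, 3, and 3 (the classic convoy-effect example):

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};                /* burst times of P1, P2, P3 (all arrive at t = 0) */
    int n = 3, wait = 0, turnaround = 0, elapsed = 0;

    for (int i = 0; i < n; i++) {
        wait += elapsed;                     /* a process waits for everything scheduled before it */
        elapsed += burst[i];
        turnaround += elapsed;               /* completion time equals turnaround when arrival is 0 */
    }

    printf("average waiting time    = %.2f\n", (double)wait / n);        /* (0+24+27)/3 = 17 */
    printf("average turnaround time = %.2f\n", (double)turnaround / n);  /* (24+27+30)/3 = 27 */
    return 0;
}

Running the same numbers in SJF order (3, 3, 24) drops the average waiting time to 3, which is why a long first arrival causes the convoy effect under FCFS.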
Demand Scheduling:
Demand scheduling, also known as event-driven scheduling or on-demand
scheduling, is a scheduling mechanism where a process requests CPU time when it
needs it, rather than being allocated a fixed time slice or being scheduled by a pre-defined policy. This approach is often used in interactive and event-driven systems.
Here's how demand scheduling works:
Examples: User interactions with graphical user interfaces (GUIs) often trigger
demand scheduling. When a user clicks a button or enters text, the associated
event handler is executed immediately.
Real-Time Scheduling:
Real-time scheduling is used in systems with time-critical tasks where meeting
specific deadlines is crucial. These systems include applications like avionics,
industrial control systems, medical devices, and telecommunications. Real-time
scheduling is classified into two categories: hard real-time and soft real-time.
In hard real-time systems, missing a deadline is unacceptable and can lead to system failure, so every deadline must be met.
The scheduler prioritizes tasks based on their importance and ensures that
high-priority tasks are executed before lower-priority ones. This may involve
preemptive scheduling.
Examples include flight control systems, medical equipment, and
automotive safety systems.
In soft real-time systems, occasional deadline misses are tolerable, and the
system can recover. While meeting deadlines is still a priority, there is some
flexibility.
The scheduler aims to maximize the number of deadlines met and minimize
the number of missed deadlines. Tasks are often assigned priorities based
on their timing constraints.
Real-time scheduling is challenging due to the need for precise timing and meeting
stringent deadlines. Schedulers in real-time systems often employ priority-based
algorithms, rate-monotonic scheduling, earliest deadline first (EDF), and other
techniques to ensure that critical tasks are executed on time and that system
performance is predictable and reliable.
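For example, the core decision in earliest deadline first (EDF) is simply to run the ready task whose absolute deadline is closest; a minimal sketch in C with a made-up task set:

#include <stdio.h>

struct task { const char *name; int deadline; };   /* absolute deadline, e.g. in ms */

/* return the index of the ready task with the earliest deadline */
int edf_pick(const struct task *tasks, int n) {
    int best = 0;
    for (int i = 1; i < n; i++)
        if (tasks[i].deadline < tasks[best].deadline)
            best = i;
    return best;
}

int main(void) {
    struct task ready[] = { {"sensor", 40}, {"control", 15}, {"logger", 90} };
    int next = edf_pick(ready, 3);
    printf("run task: %s\n", ready[next].name);     /* prints "control" */
    return 0;
}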
Unit 2
Process Synchronization: Mutual Exclusion
Critical Section: It is a part of a program where shared resources can be accessed by multiple processes.
Mutual Exclusion:
Definition: Mutual exclusion is a fundamental concept in process
synchronization that ensures that only one process or thread can access a
critical section of code or a shared resource at a time, preventing concurrent
access and potential data corruption or race conditions.
Key Considerations:
If another process holds the lock, the requesting process must wait until the
lock is released.
Once a process exits the critical section, it releases the lock, allowing
another process to enter.
1. Locks:
A process or thread acquires the lock before entering a critical section and
releases it upon exiting.
Locks ensure that only one thread can hold the lock at a time.
2. Semaphores:
Semaphores are more versatile synchronization objects, but they can also
be used to implement mutual exclusion.
3. Atomic Operations:
Prevents race conditions: Ensures that only one process can access critical
sections, preventing conflicts and data corruption.
In summary, mutual exclusion is a fundamental concept in process synchronization
that ensures that only one process can access a critical section or shared resource
at any given time. Achieving mutual exclusion is crucial to prevent race conditions,
maintain data consistency, and promote orderly access to shared resources.
Various synchronization mechanisms, including locks, semaphores, and atomic
operations, are used to implement mutual exclusion in concurrent programs.
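As an illustration, here is a minimal sketch of lock-based mutual exclusion using a POSIX mutex; the shared counter is just an example resource (compile with -pthread):

#include <stdio.h>
#include <pthread.h>

long counter = 0;                                   /* shared resource */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *increment(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);                  /* enter critical section */
        counter++;                                  /* only one thread at a time executes this */
        pthread_mutex_unlock(&lock);                /* exit critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);             /* always 200000 with the lock in place */
    return 0;
}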
Software Solutions to the Mutual Exclusion Problem:
1. Locks (Mutexes):
Description: Locks, also known as mutexes (short for mutual exclusion), are
one of the most widely used software solutions for achieving mutual exclusion.
2. Semaphores:
Implementation: Semaphores can be found in many programming languages
and libraries, such as sem_init in C/C++ or Semaphore objects in Java.
4. Peterson's Algorithm:
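Peterson's algorithm achieves mutual exclusion for two processes using only shared variables: a per-process "interested" flag and a turn variable. A minimal sketch in C (process i is 0 or 1; on modern hardware, atomic operations or memory barriers are additionally needed for this to be reliable):

#include <stdbool.h>

volatile bool flag[2] = {false, false};   /* flag[i]: process i wants to enter */
volatile int turn = 0;                    /* whose turn it is to yield */

void enter_critical_section(int i) {
    int other = 1 - i;
    flag[i] = true;                       /* announce interest */
    turn = other;                         /* give priority to the other process */
    while (flag[other] && turn == other)
        ;                                 /* busy-wait while the other is interested and has the turn */
}

void leave_critical_section(int i) {
    flag[i] = false;                      /* no longer interested */
}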
5. Lamport's Bakery Algorithm:
Implementation: Lamport's Bakery algorithm is typically implemented in shared-memory systems with proper atomic operations.
These software solutions to the mutual exclusion problem provide mechanisms for
controlling access to critical sections of code and shared resources in concurrent
programs. The choice of which solution to use depends on the programming
language, the platform, and the specific requirements of the application.
Hardware Solutions to the Mutual Exclusion Problem:
1. Test-and-Set (TAS):
How It Works: To achieve mutual exclusion, processes or threads can use TAS
to acquire a lock. If the previous value is 0, it means the lock was successfully
acquired, and the process can enter the critical section. If the previous value is
1, the process is blocked until the lock is released.
2. Compare-and-Swap (CAS):
How It Works: To achieve mutual exclusion, CAS is used to attempt to update
a lock variable. If the current value matches the expected value, CAS sets the
new value and returns whether the operation was successful. Processes can
use CAS to compete for and acquire locks.
Drawbacks: The availability and behavior of CAS may vary across different
hardware architectures.
4. Locking Instructions:
Benefits: Locking instructions are highly efficient and eliminate the need for
busy-waiting or spinlocks.
Drawbacks: The availability of locking instructions may be limited to specific
CPU architectures and may not be portable.
Semaphores
Semaphores are a synchronization mechanism used in concurrent programming to
control access to shared resources or coordinate the execution of multiple
processes or threads. They were introduced by Edsger Dijkstra in 1965 and have
become an essential tool in managing mutual exclusion and synchronization.
Semaphores are particularly valuable in situations where mutual exclusion and
coordination are required. Here's an overview of semaphores:
1. Binary Semaphores:
2. Counting Semaphores:
Wait (P) Operation: Decrements the semaphore value. If the value is
already 0, the calling process or thread is blocked until another process
increments the semaphore (release operation).
3. Semaphore Operations:
Wait (P) Operation: The "P" operation (short for "proberen," which means "to
test" in Dutch) is used to request access to a semaphore. If the semaphore's
value is greater than zero, it is decremented, and the process continues
execution. If the value is zero, the process is blocked.
Signal (V) Operation: The "V" operation (short for "verhogen," which means "to
increment" in Dutch) is used to release a semaphore. It increments the
semaphore's value. If there are any blocked processes or threads waiting for
the semaphore, one of them is unblocked.
Mutual Exclusion: Binary semaphores can be used to ensure that only one
process or thread accesses a critical section at a time.
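A minimal sketch of the wait (P) and signal (V) operations using POSIX semaphores in C, with a binary semaphore guarding a critical section (compile with -pthread):

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

sem_t mutex;                              /* binary semaphore guarding the critical section */
int shared = 0;

void *worker(void *arg) {
    sem_wait(&mutex);                     /* P operation: decrement, block if the value is 0 */
    shared++;                             /* critical section */
    printf("shared = %d\n", shared);
    sem_post(&mutex);                     /* V operation: increment, wake a waiting thread */
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);               /* initial value 1 makes it behave as a binary semaphore */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&mutex);
    return 0;
}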
5. Benefits of Semaphores:
6. Drawbacks of Semaphores:
Critical Section Problems
Critical section problems arise when multiple processes or threads share resources, data, or code, and conflicts can occur. The goal is to ensure that only one process at a time can execute a critical section of code or access a shared resource to prevent data corruption, race conditions, and other synchronization problems. Here are some key aspects of critical section problems:
1. Critical Section:
Mutual Exclusion: Only one process can execute the critical section at any
given time.
Bounded Waiting: There exists a bound on the number of times that other
processes are allowed to enter their critical sections after a process has
made a request to enter its critical section and before that request is
granted.
Locks: Mutexes (mutual exclusion locks) are used to ensure that only one
process or thread can hold the lock at a time, providing mutual exclusion.
Semaphores: Binary semaphores or counting semaphores can be used to
coordinate access to critical sections by allowing or blocking processes
based on the semaphore's value.
Condition Variables: Condition variables are used to signal and wait for
specific conditions to be met before accessing critical sections, often in
combination with locks.
Starvation can also occur when processes or threads are unfairly treated in
resource allocation, resulting in some processes being delayed indefinitely in
entering their critical sections.
In summary, critical section problems are central to ensuring the orderly execution
of concurrent programs. By applying synchronization mechanisms and algorithms
that satisfy the requirements of mutual exclusion, progress, and bounded waiting,
developers can address these issues and prevent synchronization problems such
as data corruption, race conditions, and deadlocks.
Case Study: The Dining Philosophers Problem
Problem Description:
The Dining Philosophers problem is a classic synchronization problem that
illustrates the challenges of resource allocation and concurrency in a multi-process
or multi-threaded environment. The problem is often framed as follows:
Five philosophers sit around a circular table, with one fork placed between each pair of adjacent philosophers.
Each philosopher thinks and eats. To eat, a philosopher must pick up both the
left and right forks.
Philosophers can only pick up one fork at a time, and they can only eat when
they have both forks.
The goal is to design a solution that allows the philosophers to eat without
leading to deadlocks or other synchronization issues.
Solution:
Several solutions can address the Dining Philosophers problem, ensuring that all
philosophers can eat while avoiding deadlocks. One common solution involves
using semaphores and mutex locks:
2. Pick up forks: To eat, a philosopher must pick up both the left and right
forks. To do this, they acquire the semaphores (forks). If both forks are
available, the philosopher picks them up.
4. Put down forks: After eating, the philosopher releases the forks (releases
the semaphores) for others to use.
To avoid deadlock, an asymmetric ordering can be used (as in the sketch below): instead of having all philosophers pick up the left fork first and then the right fork, one philosopher can pick up the right fork before the left, breaking the circular dependency.
Care must be taken to ensure that the critical section (the process of picking up
and putting down forks) is properly synchronized with mutex locks to avoid race
conditions.
Balancing the use of forks is important to ensure that all philosophers get a
chance to eat.
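A compact sketch of the asymmetric-ordering solution, using one POSIX semaphore per fork; the last philosopher picks up the right fork first, which breaks the circular wait (the number of eating rounds is arbitrary):

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define N 5
sem_t fork_sem[N];                            /* one semaphore (fork) between each pair of philosophers */

void *philosopher(void *arg) {
    int i = (int)(long)arg;
    int left = i, right = (i + 1) % N;
    /* break the circular wait: the last philosopher reverses the pickup order */
    int first  = (i == N - 1) ? right : left;
    int second = (i == N - 1) ? left  : right;

    for (int round = 0; round < 3; round++) {
        sem_wait(&fork_sem[first]);           /* pick up first fork */
        sem_wait(&fork_sem[second]);          /* pick up second fork */
        printf("philosopher %d is eating\n", i);
        sem_post(&fork_sem[second]);          /* put down both forks */
        sem_post(&fork_sem[first]);
    }
    return NULL;
}

int main(void) {
    pthread_t p[N];
    for (int i = 0; i < N; i++) sem_init(&fork_sem[i], 0, 1);
    for (int i = 0; i < N; i++) pthread_create(&p[i], NULL, philosopher, (void *)(long)i);
    for (int i = 0; i < N; i++) pthread_join(p[i], NULL);
    return 0;
}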
Case Study: The Barber Shop Problem
There is one barber and a waiting room with limited capacity for customers.
Customers enter the barber shop and either find an available seat in the waiting
room or leave if it's full.
The barber serves one customer at a time. If there are no customers, the barber
sleeps. When a customer arrives, they wake the barber.
The goal is to design a solution that simulates this behavior while ensuring that
customers are served in an orderly manner.
Solution:
Solving the Barber Shop problem requires coordination and synchronization to
manage customers and the barber's activities. One common solution involves using
semaphores and mutex locks:
A mutex lock is used to control access to shared resources, such as the waiting
room and the barber's chair.
1. If a seat is available in the waiting room, they take a seat. If all seats are
occupied, they leave.
Memory Organization
Memory organization is a fundamental concept in computer systems and computer
architecture. It refers to how a computer's memory is structured, managed, and
used to store and retrieve data and instructions. Memory organization plays a
critical role in the performance and functionality of a computer system. Here's an
overview of memory organization:
1. Memory Hierarchy:
Registers: These are the smallest, fastest, and most closely located to the
CPU. Registers store data that the CPU is actively processing.
Main Memory (RAM): RAM is the primary volatile memory used to store
data and instructions that the CPU needs to access quickly during program
execution.
2. Address Space:
The address space is divided into various regions for different purposes,
including program memory, data storage, and system memory.
3. Memory Units:
Memory is typically organized into smaller units, such as bytes. Each memory
unit is identified by a unique address. In modern systems, the basic unit of data
storage is the byte, which is composed of 8 bits.
4. Memory Types:
Memory can be categorized into different types based on its characteristics and
usage. Common memory types include:
RAM (Random Access Memory): RAM is used for temporary data storage
during program execution. It provides fast read and write access.
Cache Memory: Cache memory is a small but extremely fast memory used
to store frequently accessed data.
5. Memory Management:
6. Memory Access:
Memory access involves reading data from or writing data to memory. The
speed and efficiency of memory access are critical for the overall performance
of a computer system.
7. Address Mapping:
Address mapping translates the logical addresses generated by programs into physical addresses so that the CPU can access the correct memory locations.
8. Memory Protection:
Memory Hierarchy
The memory hierarchy is a key concept in computer architecture and design,
outlining the various levels of memory in a computer system, each with distinct
characteristics and purposes. The memory hierarchy is structured in a way that
optimizes data access speed and storage capacity while managing costs. Here's an
overview of the memory hierarchy:
1. Registers:
Description: Registers are the smallest and fastest storage units in the
memory hierarchy. They are located within the CPU itself.
Purpose: Registers store data and instructions that the CPU is actively
processing. They are used for rapid data access and temporary storage during
computation.
Characteristics: Registers have very fast access times, but their capacity is
extremely limited. They typically store small amounts of data, such as CPU
registers like the program counter and general-purpose registers.
2. Cache Memory:
Description: Cache memory is a small, high-speed memory located between the CPU and main memory.
Purpose: Cache memory serves as a buffer for frequently accessed data and
instructions. It helps improve the CPU's speed and performance by reducing the
time it takes to access data.
Characteristics: Cache memory is faster than main memory but has limited
capacity. It operates on the principle of temporal and spatial locality, storing
frequently accessed data to reduce the need for slower access to main memory.
3. Main Memory (RAM):
Purpose: RAM stores data and instructions that the CPU needs to access
quickly during program execution. It serves as a bridge between the high-speed
cache and the long-term storage of secondary storage devices.
4. Secondary Storage:
Description: Secondary storage devices include hard disk drives (HDDs) and
solid-state drives (SSDs).
5. Tertiary Storage:
Description: Tertiary storage typically includes slower and higher-capacity
storage solutions like optical discs (e.g., CDs, DVDs) and tape drives.
Purpose: Tertiary storage is used for archival purposes and long-term data
backup. It is slower to access than secondary storage.
6. Remote Storage:
Purpose: Remote storage is used for data backup, sharing, and access from
multiple devices. It provides redundancy and data availability from various
locations.
Memory Management Strategies
Memory management is a critical aspect of computer systems, responsible for
organizing and allocating memory efficiently. It ensures that programs and data are
loaded into the computer's memory and that they are accessible to the CPU when
needed. Here are some key memory management strategies:
2. Partitioned Allocation:
Advantages: Efficient use of memory, and multiple programs can be run
simultaneously.
3. Paging:
Description: Paging divides virtual memory (and the programs in it) into fixed-size blocks called pages, and physical memory into blocks of the same size called frames. The OS manages a page table to map virtual pages to physical frames.
4. Segmentation:
Disadvantages: May still suffer from fragmentation, both internal and external.
Segment management can be complex.
5. Virtual Memory:
Disadvantages: Disk access is slower than physical memory, which can lead to
performance issues when swapping data between memory and disk.
6. Demand Paging:
7. Thrashing:
Description: Thrashing occurs when the system spends more time swapping
pages between memory and disk than executing processes. It is a performance
bottleneck caused by excessive page faults.
8. Swapping:
9. Compaction:
Advantages: Reduces external fragmentation, making more memory available
for new processes.
5. Limited Address Space: Contiguous allocation may limit the address space
available for a single process, as the size of a process cannot exceed the size
of the largest available contiguous block.
Comparison:
Complexity: Non-contiguous allocation can be more complex to manage,
particularly in terms of memory protection and the management of separate
segments.
Partition Management Techniques
1. Fixed Partitioning:
2. Dynamic Partitioning:
How It Works: When a process is loaded, it is allocated the exact amount of
memory it needs, and the partition size can vary. As processes enter and exit
the system, partitions are created and destroyed as needed.
3. Buddy System:
How It Works: Memory is managed in blocks whose sizes are powers of two (1 KB, 2 KB, 4 KB, and so on). When a process requests memory, a free block of at least the requested size is located and split recursively into two equal "buddies" until the smallest block that can satisfy the request is obtained.
4. Paging:
Description: Paging divides virtual memory into fixed-size blocks called pages and physical memory into blocks of the same size called frames.
How It Works: Each process's address space is split into pages, and the operating system maintains a page table to map virtual pages to physical frames.
5. Segmentation:
Description: Segmentation divides memory into logical segments based on a
program's structure. Each segment is assigned specific permissions and can
grow or shrink as needed.
How It Works: Memory is divided into segments like code, data, and stack.
Processes can request additional memory segments as needed.
Disadvantages: May still suffer from fragmentation, both internal and external.
Segment management can be complex.
6. Swapping:
Logical Address Space vs. Physical Address Space
Here's a breakdown of the differences between logical and physical address
spaces:
Logical Address Space:
1. Definition: The logical address space, also known as virtual address space, is
the set of addresses generated by a program during its execution. These
addresses are typically generated by the CPU as the program runs and are
used by the program to reference memory locations.
3. Size: The logical address space is often larger than the physical address
space. It can be as large as the range of values that can be held by the data
type used to represent addresses (e.g., 32-bit or 64-bit).
Physical Address Space:
1. Definition: The physical address space is the set of addresses that directly
correspond to locations in the computer's physical memory hardware (RAM).
These addresses are used by the memory management unit (MMU) to fetch
data from or store data to physical memory chips.
2. Visibility: Physical addresses are not visible to the application or program. The
program running on the computer deals with logical addresses, and the
translation to physical addresses is handled by the operating system and
hardware.
3. Size: The physical address space is limited by the amount of physical RAM
installed in the computer. It typically ranges from a few gigabytes to terabytes,
depending on the system's hardware.
5. Static: Physical addresses are static and do not change during program
execution. They map directly to physical memory locations.
Address Translation:
The relationship between logical and physical addresses is managed through
address translation. The MMU, which is part of the CPU, translates logical
addresses generated by the program into physical addresses that are used to
access memory. This translation is a crucial part of virtual memory systems,
allowing programs to operate with a larger logical address space than the available
physical memory.
Swapping
Swapping is a memory management technique used in computer systems to
efficiently utilize physical memory (RAM) and provide the illusion of having more
memory than is physically available. It involves moving data between the RAM and
secondary storage (usually a hard disk or SSD) to ensure that the most actively
used processes and data are in RAM, while less active or unused portions are
temporarily stored on the disk. Here's how swapping works and its key aspects:
How Swapping Works:
4. Disk Swap Space: The portion of secondary storage used to temporarily store
swapped-out pages is called the "swap space." This space must be large
enough to accommodate the pages that are not in physical memory.
2. Page Faults: The frequency of page faults is a key factor in determining when
and how often swapping occurs. A system with a high rate of page faults may
experience more swapping.
3. Optimizing Swapping: Properly configuring the system's virtual memory
settings, such as the size of physical memory, the size of swap space, and the
page replacement algorithm, is crucial for optimizing swapping and avoiding
performance bottlenecks.
Paging
Paging is a memory management technique used in computer systems, particularly
in virtual memory systems, to efficiently manage and access memory. It involves
dividing both physical and virtual memory into fixed-size blocks called pages.
Paging allows for several benefits, including efficient memory allocation, memory
protection, and the illusion of a larger address space. Here's an overview of how
paging works and its key aspects:
1. Page Size: Paging divides memory into fixed-size blocks known as pages. The
page size is typically a power of 2, like 4 KB or 4 MB. Both the physical memory
(RAM) and virtual memory (address space) are divided into pages.
3. Page Table: The page table is a data structure maintained by the operating
system. It contains entries that map virtual page numbers to physical frame
numbers. Each process has its own page table, which is used for address
translation. The operating system is responsible for creating and managing
these page tables.
4. Page Faults: If a program tries to access a page that is not in physical memory
(a page fault occurs), the operating system must fetch the required page from
secondary storage (usually a hard disk) into an available physical frame. This
process is known as page swapping.
2. Address Space: Paging provides the illusion of a large address space for each
process. The actual physical memory can be smaller than the virtual address
space, allowing programs to access more memory than is physically available.
3. Page Replacement: When physical memory is full, the operating system uses
a page replacement algorithm (e.g., LRU, FIFO) to determine which page to
swap out to secondary storage to make room for a new page. This is crucial for
maintaining the illusion of a larger address space.
4. Security and Isolation: Paging allows for memory protection and isolation.
Each process has its own address space, and the page table ensures that
processes cannot access each other's memory.
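The translation itself is simple arithmetic once the page table is known; a small sketch in C with a 4 KB page size (the page-table contents are made up for illustration):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u                       /* 4 KB pages: the low 12 bits are the offset */

int main(void) {
    /* hypothetical page table: page_table[virtual page number] = physical frame number */
    uint32_t page_table[] = {5, 9, 7, 3};

    uint32_t vaddr  = 0x2ABC;                 /* a virtual address generated by the program */
    uint32_t page   = vaddr / PAGE_SIZE;      /* virtual page number = 2 */
    uint32_t offset = vaddr % PAGE_SIZE;      /* offset within the page = 0xABC */
    uint32_t frame  = page_table[page];       /* page 2 maps to frame 7 */
    uint32_t paddr  = frame * PAGE_SIZE + offset;

    printf("virtual 0x%x -> page %u, offset 0x%x -> physical 0x%x\n",
           (unsigned)vaddr, (unsigned)page, (unsigned)offset, (unsigned)paddr);
    return 0;
}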
Segmentation
Segmentation is a memory management technique used in computer systems to
divide the memory into logical segments, each of which can be assigned specific
attributes and permissions. Segmentation provides a more flexible and structured
approach to memory organization compared to other techniques like contiguous
memory allocation. Here's an overview of how segmentation works and its key
aspects:
2. Segment Attributes: Each segment can be assigned specific attributes, such
as read-only, read-write, or execute permissions. These attributes dictate how
the segment can be accessed and modified.
3. Segment Limits: Segments have defined limits, indicating the size or extent of
each segment. The limits specify the range of addresses that belong to the
segment.
5. Complexity: Managing segments and their attributes can be more complex
than traditional memory allocation methods. The operating system must
maintain segment tables and perform address translations.
Segmentation with Paging Virtual Memory: Demand
Paging
Segmentation and paging are two distinct memory management techniques.
However, in some modern computer systems, they are combined to create a more
flexible and efficient memory management scheme. When segmentation and
paging are used together with demand paging, it provides benefits in terms of
memory allocation, protection, and the illusion of a larger address space. Here's an
explanation of how these techniques work together with a focus on demand paging:
1. Segmentation: Memory is divided into logical segments such as code, data, and stack. Each segment may be used for different types of data or code, making memory management more structured and secure.
2. Paging: Paging divides both physical and virtual memory into fixed-size blocks
called pages. Paging eliminates external fragmentation and allows for the
efficient allocation and deallocation of memory.
Demand Paging:
3. Page Table: The operating system maintains page tables for each segment.
These page tables map virtual page numbers to physical frame numbers. Each
segment has its page table.
4. Demand Paging: Pages are not loaded into physical memory at the start of
program execution. Instead, they are loaded into memory on-demand when
they are accessed by the program. When a page fault occurs (i.e., a requested
page is not in physical memory), the page is brought from secondary storage
(e.g., a hard disk) to physical memory.
6. Security and Protection: Segmentation and paging together allow for strong
memory protection and isolation. Each segment/page can have specific
attributes and permissions, and unauthorized memory access results in
segmentation and paging faults.
8. Operating System Control: The operating system manages the page tables
for each segment, performs address translations, and handles page faults. It is
responsible for setting up and managing segments and pages.
The combination of segmentation and paging with demand paging allows for a more
flexible, efficient, and secure memory management scheme. It's suitable for
multitasking environments where multiple processes run concurrently, and memory
protection is crucial. This approach ensures that processes can have a structured
and isolated memory layout while benefiting from the efficient use of physical
memory through demand paging.
Page Replacement:
1. Page Table: In a virtual memory system, a page table is used to map logical
pages in a program's address space to physical frames in RAM.
2. Page Fault: When a program accesses a page that is not currently in physical
memory, a page fault occurs. This means the required page must be brought in
from secondary storage (e.g., a hard disk) into an available physical frame.
3. Full Memory: When physical memory is fully occupied, and a new page needs
to be brought in, an existing page must be evicted (swapped out) to make room
for the new page.
Page-Replacement Algorithms:
1. FIFO (First-In, First-Out): FIFO replaces the page that has been in physical memory the longest. It is simple to implement with a queue, but it may evict pages that are still heavily used.
2. LRU (Least Recently Used): LRU replaces the page that has not been used
for the longest time. It requires maintaining a linked list or a counter for each
page to track their usage history. While it provides better performance, it can be
complex to implement efficiently.
3. Optimal: The optimal algorithm selects the page that will not be used for the
longest time in the future. This algorithm provides the best possible
performance but is not practical in real systems because it requires knowledge
of future page accesses, which is impossible to predict.
4. LFU (Least Frequently Used): LFU replaces the page that has been used the
least number of times. It tracks the usage count of each page. LFU can be
efficient in some scenarios but may not always provide the best results.
are replaced. If a page is accessed again before replacement, it is kept in
memory.
7. NRU (Not Recently Used): NRU divides pages into four categories (based on
the recent usage and modification bits), and it replaces a page randomly from
the lowest non-empty category.
8. LFU (Least Frequently Used): This algorithm replaces the page with the
lowest usage count. It aims to keep the pages that are accessed most
frequently in memory.
9. MFU (Most Frequently Used): MFU replaces the page that has the highest
usage count. It attempts to keep the pages that are most heavily used in
memory.
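To make the FIFO policy concrete, here is a small simulation in C that counts page faults for a short reference string with 3 frames (the reference string is just an example):

#include <stdio.h>

int main(void) {
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3};   /* page reference string */
    int n = 10, nframes = 3;
    int frames[3] = {-1, -1, -1};
    int next = 0, faults = 0;                      /* next = index of the oldest frame (FIFO order) */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < nframes; f++)
            if (frames[f] == refs[i]) { hit = 1; break; }
        if (!hit) {                                /* page fault: evict the oldest page */
            frames[next] = refs[i];
            next = (next + 1) % nframes;
            faults++;
        }
    }
    printf("page faults = %d\n", faults);          /* 9 faults for this reference string */
    return 0;
}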
Performance of Demand Paging
Advantages:
2. Reduced Memory Footprint: Only the pages that a process actually needs are loaded into memory. This initial loading minimizes the memory footprint and allows the system to run more processes simultaneously.
3. Quick Program Start: Processes can start running quickly because they don't
need to load their entire code and data into memory before execution. Only the
initially needed pages are loaded on-demand.
Challenges:
1. Page Faults: A major factor affecting the performance of demand paging is the
frequency of page faults. Frequent page faults, especially when combined with
slow disk I/O operations, can degrade system performance.
3. Disk I/O Latency: The performance of demand paging is highly sensitive to the
speed of secondary storage (e.g., hard disks or SSDs). Slow disk I/O can lead
to significant delays in page retrieval and swapping, negatively impacting
performance.
Factors that influence the performance of demand paging include:
1. Page Replacement Algorithms: Selecting an efficient page replacement
algorithm can significantly impact page fault rates. Algorithms like LRU (Least
Recently Used) and LFU (Least Frequently Used) aim to reduce the number of
page faults.
2. Page Size: The choice of page size can affect the granularity of page swaps.
Smaller pages reduce internal fragmentation but may lead to more frequent
page faults.
4. Memory Size: Increasing physical memory can reduce the frequency of page
faults because more pages can be kept in RAM. Adequate RAM is essential for
efficient demand paging.
Thrashing
Thrashing is a term used in the context of computer systems and memory
management to describe a situation where the system is excessively swapping data
between physical memory (RAM) and secondary storage (e.g., hard disk or SSD). It
occurs when the demand for memory by processes exceeds the available physical
memory, leading to a constant cycle of page swapping. Thrashing significantly
degrades system performance and can lead to a "death spiral" where the system
becomes virtually unresponsive. Here's a more detailed explanation of thrashing:
Causes of Thrashing:
1. Insufficient Physical Memory: The primary cause of thrashing is an
inadequate amount of physical memory (RAM) to meet the memory
requirements of the running processes. When RAM is fully occupied, the
operating system starts swapping pages to secondary storage to make room for
new pages. This page swapping can become a vicious cycle if physical memory
remains overwhelmed.
Characteristics of Thrashing:
Solutions to Thrashing:
Solutions to Thrashing:
1. Add More Physical Memory: The most straightforward solution to thrashing is
to increase the amount of physical memory in the system, ensuring that there is
enough RAM to accommodate the demands of running processes.
Demand Segmentation and Overlay Concepts
Demand segmentation and overlay are memory management techniques that were
primarily used in early computer systems to optimize memory utilization in situations
where physical memory (RAM) was limited. These techniques allowed programs to
execute efficiently despite the constraints of limited memory resources. Here's an
explanation of demand segmentation and overlay concepts:
Demand Segmentation:
3. On-Demand Loading: Demand segmentation takes the concept of "on-
demand loading" to the segment level. Initially, only a small portion of each
segment is loaded into physical memory. As the program runs, additional pages
are loaded into memory only when needed. This approach minimizes the initial
memory requirements for a program and allows it to execute with limited RAM.
Overlay Concepts:
Comparison:
Demand segmentation relies on the operating system and hardware to bring in
pieces of each segment automatically as they are referenced. Overlay
programming, on the other hand, splits a program into separate overlays, with
an overlay manager responsible for swapping them in and out of physical
memory.
Unit 3
Deadlocks
Deadlock in operating systems occurs when two or more processes are unable to
proceed because each is waiting for the other to release resources. This situation
leads to a standstill where none of the processes can continue their execution.
Four conditions must hold simultaneously for a deadlock to occur:
1. Mutual Exclusion: At least one resource must be held in a non-sharable mode,
so only one process can use it at a time.
2. Hold and Wait: Processes already holding resources may request new
resources while waiting for others to be released.
3. No Preemption: Resources cannot be forcibly taken away from a process; they
are released only voluntarily by the process holding them.
4. Circular Wait: There must exist a circular chain of two or more processes, each
waiting for a resource held by the next one in the chain.
Examples of Deadlock:
For instance, Process A holds Resource 1 and requests Resource 2, while
Process B holds Resource 2 and requests Resource 1. Neither process can
proceed as each is holding a resource the other needs.
If a cycle exists in the graph where each process is waiting for a resource
held by another process in the cycle, it indicates a deadlock.
Deadlocks are handled through prevention, avoidance, detection, and recovery
mechanisms. These approaches aim to either eliminate one of the deadlock
conditions or manage resources in a way that avoids the possibility of a deadlock
occurring.
Deadlock Solution
There are several approaches to solve or prevent deadlocks in operating systems.
Each method aims to eliminate one or more of the necessary conditions for a
deadlock to occur: mutual exclusion, hold and wait, no preemption, and circular
wait.
Deadlock Prevention:
1. Mutual Exclusion Avoidance:
Where possible, make resources sharable (for example, read-only files) so
that mutual exclusion is not required; this is not feasible for inherently
non-sharable resources such as printers.
2. Hold and Wait Prevention:
One way to prevent hold and wait is to require processes to request all their
required resources simultaneously, rather than one at a time.
Alternatively, a process can release all its currently held resources before
requesting new ones, reducing the likelihood of hold and wait scenarios.
3. No Preemption:
If a process holding resources requests another resource that cannot be
granted immediately, its currently held resources can be preempted
(released) and requested again later along with the new one.
4. Circular Wait Prevention:
Impose a total ordering on resource types and require every process to
request resources in increasing order, so a circular chain of waits cannot
form (see the lock-ordering sketch after this list).
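One common application-level way to enforce the resource-ordering rule is to always acquire locks in the same global order. A minimal sketch, assuming two hypothetical mutex-protected resources and POSIX threads:

```c
#include <pthread.h>
#include <stdio.h>

/* Two resources protected by mutexes. Every thread acquires them in the
 * same global order (A before B), so a circular wait can never form. */
static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    pthread_mutex_lock(&lock_a);   /* always first  */
    pthread_mutex_lock(&lock_b);   /* always second */
    printf("thread %ld holds both resources\n", (long)arg);
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```

Because every thread takes lock_a before lock_b, no circular wait can arise between them; this is a sketch of the ordering idea, not a general deadlock-prevention facility.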
Deadlock Avoidance with Banker's Algorithm:
The Banker's Algorithm is used to avoid deadlock by ensuring that the system
never enters an unsafe state in which it might be unable to satisfy the resource
requests of all processes. It works by analyzing the maximum, allocated, and
available resources (a minimal safety-check sketch follows the steps below).
Banker's Algorithm Steps:
1. Data Structures: The algorithm maintains the Available vector and, for each
process, the Max and Allocation matrices, from which the remaining Need is
computed as Need = Max - Allocation.
2. Resource Requests:
If the request doesn't push the system into an unsafe state, the resources
are allocated. Otherwise, the process is made to wait until the state
becomes safe again.
3. Safety Check:
It simulates the allocation to check if the system will remain in a safe state
after granting the resources.
If the system can satisfy all pending requests, it's considered safe to
allocate resources; otherwise, the process must wait.
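A minimal sketch of the safety check at the heart of the Banker's Algorithm, using small hard-coded example matrices (the process count, resource count, and all numbers are illustrative assumptions):

```c
#include <stdio.h>
#include <stdbool.h>

#define P 3  /* processes */
#define R 3  /* resource types */

/* Returns true if the system is in a safe state: some ordering exists in
 * which every process can obtain its remaining need and then finish. */
bool is_safe(int avail[R], int alloc[P][R], int need[P][R]) {
    int work[R];
    bool finish[P] = {false};

    for (int r = 0; r < R; r++) work[r] = avail[r];

    for (int done = 0; done < P; ) {
        bool progressed = false;
        for (int p = 0; p < P; p++) {
            if (finish[p]) continue;
            bool can_run = true;
            for (int r = 0; r < R; r++)
                if (need[p][r] > work[r]) { can_run = false; break; }
            if (can_run) {                       /* pretend p runs and releases everything */
                for (int r = 0; r < R; r++) work[r] += alloc[p][r];
                finish[p] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed) return false;           /* no process can proceed: unsafe */
    }
    return true;
}

int main(void) {
    int avail[R] = {3, 3, 2};                    /* assumed example values */
    int alloc[P][R] = {{0, 1, 0}, {2, 0, 0}, {3, 0, 2}};
    int need[P][R]  = {{1, 2, 2}, {0, 1, 1}, {4, 0, 0}};  /* Need = Max - Allocation */

    printf("system is %s\n", is_safe(avail, alloc, need) ? "safe" : "unsafe");
    return 0;
}
```

To check a new request, the OS would tentatively subtract it from Available and Need, add it to Allocation, and grant it only if this safety check still succeeds.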
Deadlock Detection:
Resource Allocation Graph (RAG):
1. Constructing the RAG:
Nodes represent processes and resources; request edges point from a
process to a resource, and assignment edges point from a resource to the
process holding it.
2. Cycle Detection: The graph is searched for cycles. With single-instance
resources, a cycle implies deadlock; with multiple instances, a cycle only
indicates a possible deadlock.
3. Algorithm Execution: The detection algorithm (typically a DFS-based cycle
search on the wait-for graph) runs periodically or whenever a resource request
cannot be granted immediately (a minimal sketch follows).
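For single-instance resources, the RAG reduces to a wait-for graph between processes, and deadlock detection becomes cycle detection. A minimal DFS-based sketch with an assumed set of wait-for edges:

```c
#include <stdio.h>
#include <stdbool.h>

#define N 4  /* processes */

/* wait_for[i][j] = true means process i is waiting for a resource held by j. */
static bool wait_for[N][N];

static bool dfs(int p, bool visited[N], bool on_stack[N]) {
    visited[p] = true;
    on_stack[p] = true;
    for (int q = 0; q < N; q++) {
        if (!wait_for[p][q]) continue;
        if (on_stack[q]) return true;                 /* back edge: cycle found */
        if (!visited[q] && dfs(q, visited, on_stack)) return true;
    }
    on_stack[p] = false;
    return false;
}

static bool has_deadlock(void) {
    bool visited[N] = {false}, on_stack[N] = {false};
    for (int p = 0; p < N; p++)
        if (!visited[p] && dfs(p, visited, on_stack)) return true;
    return false;
}

int main(void) {
    wait_for[0][1] = true;   /* P0 waits for P1 */
    wait_for[1][2] = true;   /* P1 waits for P2 */
    wait_for[2][0] = true;   /* P2 waits for P0: closes a cycle */
    printf("deadlock detected: %s\n", has_deadlock() ? "yes" : "no");
    return 0;
}
```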
Deadlock Recovery:
Recovery Methods:
1. Process Termination: Abort one or more of the deadlocked processes, either
all at once or one at a time until the deadlock cycle is broken.
2. Resource Preemption: Take resources away from some processes and give
them to others until the deadlock is resolved, choosing victims to minimize cost.
3. Rollback:
Revert the state of the system to a point where the deadlock didn't exist.
Conclusion:
The choice among prevention, avoidance, detection, and recovery depends on
system requirements and the criticality of the processes involved. Employing a
combination of these techniques helps ensure system stability and prevents the
detrimental effects of deadlocks on system performance.
Resource concepts
In operating systems, resources refer to any entity that is used by a process and
can be requested, allocated, or manipulated during program execution. These
resources can be categorized into two main types: hardware resources and
software resources.
Hardware Resources:
1. CPU (Central Processing Unit):
Executes process instructions; CPU time is shared among processes by the
scheduler.
2. Memory (RAM):
Holds the code and data of running processes. It's divided into various
segments like stack, heap, and code segment.
3. Input/Output Devices:
Keyboards, mice, displays, printers, and similar peripherals. These devices
allow interaction between the computer and the external world.
4. Storage Devices:
Hard disks, solid-state drives (SSDs), and other storage mediums store
data persistently even when the computer is turned off.
5. Network Resources:
Network interfaces, connections, and bandwidth that processes use to
communicate with other systems.
Software Resources:
1. Files and Databases:
Files store data and programs persistently; databases organize and store
data for efficient retrieval and manipulation.
2. Programs and Applications:
Executable files and programs are software resources that execute specific
tasks on a computer system.
3. Memory Space:
The logical address space (code, data, heap, and stack regions) allocated to
each process.
4. Synchronization Objects:
Semaphores, mutexes, locks, and similar objects used to coordinate access
to shared resources.
Resource Allocation and Management:
2. Resource Management:
Techniques include deadlock avoidance, detection, and recovery
mechanisms.
Device Management
Device Management is a fundamental aspect of operating systems that
involves efficiently controlling and coordinating input/output (I/O) devices like disks,
printers, keyboards, etc. It encompasses various strategies to optimize device
performance, ensure data integrity, and manage the interaction between the
operating system and the devices.
2. I/O Scheduling:
Mechanisms to handle device errors, recover from failures, and ensure data
integrity.
5. Interrupt Handling:
Common Disk Scheduling Algorithms:
1. FCFS (First-Come, First-Served): Requests are served in the order they arrive,
irrespective of their location on the disk. Simple, but can lead to high seek
times and poor performance.
2. SSTF (Shortest Seek Time First): Services the request with the shortest seek
time from the current head position, minimizing disk arm movement. However,
it may lead to starvation of requests further from the disk's current position.
3. SCAN (Elevator Algorithm): The disk arm moves in one direction, serving
requests along the way until it reaches the end, then reverses direction.
Reduces the average seek time but may cause requests at the extremes to
wait for a long time.
4. C-SCAN (Circular SCAN): Similar to SCAN, but the arm services requests in
only one direction and then returns to the beginning, treating the cylinders as
a circular list. This keeps waiting times more uniform across the disk.
5. LOOK and C-LOOK: Variants of SCAN and C-SCAN in which the arm travels
only as far as the last pending request in each direction rather than to the
physical end of the disk, improving performance for some workloads.
A short SSTF simulation is sketched below.
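This is a minimal SSTF simulation, assuming a made-up request queue and starting head position; it prints the order in which requests are serviced and the total head movement:

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define NREQ 6

int main(void) {
    int requests[NREQ] = {98, 183, 37, 122, 14, 65};  /* assumed pending cylinder requests */
    bool served[NREQ] = {false};
    int head = 53;                                    /* assumed starting head position */
    int total_movement = 0;

    printf("service order:");
    for (int n = 0; n < NREQ; n++) {
        int best = -1;
        for (int i = 0; i < NREQ; i++) {              /* pick the closest unserved request */
            if (served[i]) continue;
            if (best == -1 || abs(requests[i] - head) < abs(requests[best] - head))
                best = i;
        }
        total_movement += abs(requests[best] - head);
        head = requests[best];
        served[best] = true;
        printf(" %d", head);
    }
    printf("\ntotal head movement: %d cylinders\n", total_movement);
    return 0;
}
```

Replacing the greedy "closest request" rule with FIFO order or a sweep in one direction would turn the same harness into an FCFS or SCAN simulation.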
Rotational Optimization:
Rotational optimization techniques aim to minimize the rotational latency associated
with accessing data on a disk. This latency occurs due to the time it takes for the
disk's read/write head to reach the desired sector as the disk platter rotates.
Techniques for Rotational Optimization:
ZBR (Zoned Bit Recording) divides the disk into zones, with each zone having
a different number of sectors per track.
Outer tracks have more sectors than inner tracks because they cover a
larger circumference. This helps in storing more data in outer tracks where
the linear velocity of the disk is higher, thus improving transfer rates for
these sectors.
3. Multiple Disks and Redundancy:
Techniques like striping data across multiple disks or using parity for
redundancy and performance can reduce access times by distributing I/O
operations.
System Consideration:
System considerations in device management involve various factors that impact
the effective management and optimization of devices within an operating system
environment.
Key System Considerations:
1. Performance: Device management should minimize I/O latency and maximize
throughput so that devices do not become system bottlenecks.
2. Reliability: Robust error handling and recovery so that device failures do not
corrupt data or destabilize the system.
3. Scalability: The ability to manage a growing number and variety of devices
without degrading performance.
4. Compatibility:
Ensuring compatibility with different types of devices and adherence to
industry standards.
Caching and Buffering:
Significance and Benefits:
2. Reduced Latency: Frequently accessed data is served from fast memory rather
than from the slower device, cutting response times.
3. Enhanced Performance: Buffering smooths out speed mismatches between
devices and the CPU, allowing I/O and computation to overlap and improving
overall throughput.
Conclusion:
System considerations encompass various aspects like performance, reliability,
scalability, and compatibility to ensure effective device management within an
operating system.
Caching and buffering techniques enhance I/O performance by storing frequently
accessed data in fast-access memory, reducing latency and improving overall
system efficiency. These components collectively contribute to efficient device
management and optimal utilization of I/O resources in operating systems.
Unit 4
File System Introduction:
A file system is a method used by an operating system to manage and organize
files and directories on storage devices like hard drives, SSDs, optical drives, etc. It
provides a structured way to store, retrieve, and manage data efficiently.
Components of a File System:
1. Files: Have names, types, sizes, and attributes (e.g., permissions, creation date,
etc.).
2. Directories: Organize files into a hierarchy of folders, providing naming, paths,
and a way to group related files.
3. File Operations:
Include basic operations like create, read, write, delete, rename, etc.
The file system provides interfaces for applications and users to perform
these operations.
File Organization:
File organization refers to the way files are physically stored and structured on a
storage medium. Various file organization methods exist, each offering different
advantages and trade-offs in terms of access speed, storage efficiency, and ease of
file manipulation.
Common File Organization Techniques:
1. Sequential Organization: Data is accessed sequentially from the start to the end
of the file. Suitable for applications that access data in a linear manner, like
tape storage.
2. Direct (Random) Organization: Files are divided into fixed-size blocks or records
that can be accessed directly using an index or address. Allows immediate
access to any part of the file without reading the preceding data, common in
disk-based storage.
File systems manage the storage and retrieval of data by organizing files and
directories on storage devices. Understanding different file organization techniques
is crucial as it impacts the speed of access, storage efficiency, and overall
performance of file systems. The selection of a file organization method often
depends on the specific requirements of the applications and the characteristics of
the storage medium being used. Efficient file organization contributes significantly to
the effective management and utilization of storage resources within an operating
system.
Logical File System:
The logical file system is the layer seen by users and applications: it deals with
file names, directories, metadata, and the operations and permissions applied to
files.
2. File Operations: Provides the interfaces (create, open, read, write, close, delete)
that programs use, independent of how data is physically stored.
4. Access Control: Enforces permissions on files and directories before requests
are passed down to lower layers.
Physical File System:
Key Aspects and Functions:
Organizes files and data on physical storage media, such as hard drives,
SSDs, or tapes, using specific data structures and algorithms.
Manages space allocation, disk partitioning, and data structures for efficient
storage and retrieval.
Handles the details of reading from and writing to storage devices, including
disk I/O operations and device drivers.
Includes structures like File Allocation Tables (FAT), Master File Table
(MFT), inode tables, or other data structures to map logical file names to
physical disk addresses (a simplified inode sketch follows this section).
The Logical File System uses the services provided by the Physical File System to
implement and manage files on the actual storage medium according to the logical
structure.
Both the Logical and Physical File Systems work in tandem to manage files and
directories in an organized and efficient manner, providing users and applications
with a standardized and user-friendly interface while efficiently managing storage
resources at the hardware level.
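As a rough illustration of how a physical file system maps file contents to disk blocks, here is a simplified, hypothetical inode-style structure (real layouts such as ext2's inodes differ in detail and also include double- and triple-indirect pointers):

```c
#include <stdint.h>
#include <stdio.h>

#define DIRECT_BLOCKS 12

/* Simplified inode: per-file metadata plus block pointers. Small files use
 * the direct pointers; larger files go through the indirect block, which
 * holds further block numbers. */
struct inode {
    uint32_t size_bytes;                 /* file length */
    uint16_t mode;                       /* type and permission bits */
    uint16_t link_count;                 /* number of directory entries referring to it */
    uint32_t direct[DIRECT_BLOCKS];      /* block numbers of the first data blocks */
    uint32_t single_indirect;            /* block holding an array of further block numbers */
};

/* Map a byte offset within the file to the disk block that holds it
 * (direct pointers only, for brevity). block_size is in bytes. */
static uint32_t block_for_offset(const struct inode *ino, uint32_t offset,
                                 uint32_t block_size) {
    uint32_t index = offset / block_size;
    return (index < DIRECT_BLOCKS) ? ino->direct[index] : 0; /* 0 = not handled here */
}

int main(void) {
    struct inode ino = { .size_bytes = 5000, .direct = {120, 121} }; /* assumed block numbers */
    printf("offset 4096 lives in block %u\n",
           (unsigned)block_for_offset(&ino, 4096, 4096));
    return 0;
}
```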
Common File Allocation Strategies:
1. Contiguous Allocation:
Each file occupies a set of consecutive disk blocks; fast for both sequential
and direct access, but suffers from external fragmentation and makes
growing files difficult.
2. Linked Allocation:
Each file is a linked list of disk blocks that may be scattered across the disk,
with each block holding a pointer to the next.
Doesn't suffer from external fragmentation but can be inefficient for random
access due to traversal of linked blocks.
3. Indexed Allocation:
Uses an index block that contains pointers to all the blocks of a file
scattered across the disk.
Allows direct access to any block in the file without traversing pointers.
4. File Allocation Table (FAT):
A variation of indexed allocation with a table that maps file blocks to their
respective locations on the disk (see the traversal sketch after this list).
Provides efficient access and reduces fragmentation but can be less space-
efficient due to the overhead of maintaining the table.
5. Multi-level Indexing:
Breaks down the index structure into several layers to manage large files
and disks effectively.
6. Extent-Based Allocation:
Allocates files as one or more extents (contiguous runs of blocks), recording
only each extent's starting block and length, which reduces metadata
overhead and fragmentation.
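A minimal sketch of following a FAT-style chain: the table maps each block of a file to the next one, with a sentinel marking the end of the chain (the table contents below are an assumed example):

```c
#include <stdio.h>

#define FAT_SIZE 16
#define EOC      -1            /* end-of-chain marker */

int main(void) {
    int fat[FAT_SIZE];
    for (int i = 0; i < FAT_SIZE; i++) fat[i] = EOC;

    /* Assume a file occupies blocks 2 -> 9 -> 5, in that order. */
    fat[2] = 9;
    fat[9] = 5;
    fat[5] = EOC;

    printf("file blocks:");
    for (int block = 2; block != EOC; block = fat[block])  /* follow the chain */
        printf(" %d", block);
    printf("\n");
    return 0;
}
```

The directory entry only needs to store the starting block (2 here); everything else comes from the table, which is why losing the FAT effectively loses the files.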
Conclusion:
File allocation strategies play a crucial role in file system design, impacting space
utilization, access speed, and fragmentation levels. The choice of allocation method
depends on the characteristics of the file system, the nature of the data being
stored, and the trade-offs between efficient space utilization and access speed. A
well-designed file allocation strategy ensures optimal utilization of storage resources
while providing efficient access to files.
Common Free Space Management Techniques:
1. Bitmap (Bit Vector)
One bit per block records whether it is free or allocated.
Simple and efficient for determining free blocks but can be space-inefficient
for large disks (a small search sketch follows this list).
2. Linked List (Free List)
Free blocks are chained together, each pointing to the next free block.
Efficient for small file systems but can be slow for large storage systems
due to traversal.
3. Grouping or Segmentation
Divides the disk into segments or groups and maintains free lists for each
segment.
Helps in reducing search time for free blocks by narrowing down the search
space.
4. Counting Methods
Tracks the total number of free blocks instead of individual block status.
Allows quick determination of total free space but requires scanning to find
specific free blocks.
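A minimal sketch of scanning a bit-vector free list for the first free block (the bitmap contents are an assumed example; 1 = allocated, 0 = free):

```c
#include <stdio.h>
#include <stdint.h>

#define NBLOCKS 64

/* Return the index of the first free block, or -1 if none.
 * Each bit of the bitmap describes one block: 1 = in use, 0 = free. */
static int find_free_block(const uint8_t *bitmap) {
    for (int b = 0; b < NBLOCKS; b++) {
        uint8_t byte = bitmap[b / 8];
        if (!(byte & (1u << (b % 8))))
            return b;
    }
    return -1;
}

int main(void) {
    uint8_t bitmap[NBLOCKS / 8] = {0xFF, 0xFF, 0x0F};  /* assumed: blocks 0-19 in use */
    int block = find_free_block(bitmap);
    if (block >= 0)
        printf("first free block: %d\n", block);
    return 0;
}
```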
1. Initial Allocation:
Occurs during disk formatting or file system creation, where the entire disk
space is marked as free.
2. File Creation:
When a new file is created, the file system allocates space by selecting free
blocks/clusters.
1. Fragmentation:
3. Dynamic Allocation:
Conclusion:
Free space management is a critical aspect of file system design, ensuring efficient
utilization of storage and facilitating optimal allocation of space for new files. The
choice of a particular free space management technique depends on factors like
disk size, access patterns, performance considerations, and trade-offs between
storage efficiency and access speed. A well-designed free space management
mechanism contributes to maintaining an organized file system, reducing
fragmentation, and optimizing storage utilization within the operating system.
1. Owner/Group/Other Permissions (Discretionary Access Control):
Permissions are assigned to the owner, the group, and other users, defining
what actions each category of users can perform on the file (a permission-
check sketch follows this list).
2. Role-Based Access Control (RBAC):
Users are assigned to specific roles, and each role has predefined access
permissions.
3. Mandatory Access Control (MAC):
MAC defines access permissions based on security labels and policies set
by the system administrator or security policy.
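This is a minimal sketch of owner/group/other permission checking in the Unix style; the UID/GID values are hypothetical, and a real system would take the mode and ownership from stat(2):

```c
#include <stdio.h>
#include <stdbool.h>
#include <sys/stat.h>   /* S_IRUSR, S_IRGRP, S_IROTH permission bit macros */

/* Decide whether a user may read a file, given its mode bits and ownership.
 * Checks the owner, group, and other permission classes in that order. */
static bool may_read(mode_t mode, int file_uid, int file_gid,
                     int uid, int gid) {
    if (uid == file_uid) return (mode & S_IRUSR) != 0;
    if (gid == file_gid) return (mode & S_IRGRP) != 0;
    return (mode & S_IROTH) != 0;
}

int main(void) {
    mode_t mode = S_IRUSR | S_IWUSR | S_IRGRP;     /* rw-r----- : assumed example */
    printf("owner can read: %d\n", may_read(mode, 1000, 100, 1000, 100));
    printf("other can read: %d\n", may_read(mode, 1000, 100, 2000, 200));
    return 0;
}
```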
Data Access Techniques:
1. Sequential Access: Data is read or written in order from the beginning of the file
onward.
2. Random Access:
Allows direct access to any part of a file without having to read through the
preceding data.
4. File Mapping:
Maps a file directly into memory, enabling direct manipulation of the file's
contents in memory without explicit read/write operations (see the mmap
sketch below).
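On POSIX systems, file mapping is done with mmap(2). A minimal sketch that maps an assumed file read-only and reads its first byte from memory (error handling kept short):

```c
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void) {
    int fd = open("example.txt", O_RDONLY);        /* assumed existing file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return 1; }

    /* Map the whole file into the address space; reads now hit memory,
     * and the kernel pages data in from disk on demand. */
    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    printf("first byte: %c\n", data[0]);

    munmap(data, st.st_size);
    close(fd);
    return 0;
}
```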
Considerations in File Access Control and Data Access:
1. Security: File access control mechanisms ensure that sensitive data is protected
from unauthorized access or modification.
2. Performance: Permission checks and access methods should add minimal
overhead to file operations.
4. Resource Utilization: Access techniques such as mapping and buffering affect
how memory and I/O bandwidth are consumed.
Conclusion:
File access control mechanisms define and enforce permissions and restrictions on
file access and modifications. Data access techniques determine how efficiently and
effectively data is retrieved, processed, and manipulated within the file system.
Balancing security requirements with performance considerations is crucial in
designing file access control and data access techniques to ensure data integrity,
confidentiality, and efficient file management within an operating system.
1. Checksums and Hash Values: These values are computed over the data and
compared before and after data transmission or storage to verify data integrity.
Any changes in the data would result in different checksums or hashes (a small
checksum sketch follows this list).
3. Digital Signatures:
They involve the use of public and private key pairs to sign and verify the
data, ensuring that it has not been altered or tampered with.
4. Error Detection and Correction Codes: In addition to CRC, error detection and
correction codes are used in storage systems to detect and fix data errors or
corruption.
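A minimal sketch of the compare-before-and-after idea using a toy additive checksum (real systems typically use CRC-32 or cryptographic hashes such as SHA-256; this weak sum is only for illustration):

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Toy checksum: sum of all bytes modulo 2^32. Weak, but enough to show
 * how a stored checksum is recomputed and compared to detect changes. */
static uint32_t checksum(const unsigned char *data, size_t len) {
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += data[i];
    return sum;
}

int main(void) {
    unsigned char record[] = "important payroll record";
    uint32_t stored = checksum(record, strlen((char *)record));  /* saved at write time */

    record[0] = 'X';                                             /* simulate corruption */
    uint32_t recomputed = checksum(record, strlen((char *)record));

    printf("integrity %s\n", recomputed == stored ? "ok" : "violated");
    return 0;
}
```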
Importance and Benefits:
4. Business Continuity:
Ensures that in case of data corruption or loss, there are measures in place
for data recovery, preserving business continuity.
Challenges:
1. Overhead and Performance Impact: Computing checksums and hashes,
verifying signatures, and maintaining redundant copies consume CPU time and
storage, which can slow down I/O-intensive workloads.
Conclusion:
Data integrity protection combines checksums, digital signatures, error-correcting
codes, and backups to keep stored and transmitted data accurate and trustworthy,
weighed against the processing and storage overhead these mechanisms introduce.
File systems
File systems like FAT32, NTFS, and Ext2/Ext3 are used by various operating
systems to organize and manage data on storage devices like hard drives, SSDs,
USB drives, and memory cards. Each file system has its characteristics, features,
and limitations, catering to different needs and scenarios.
FAT32 (File Allocation Table 32):
2. Limitations: Maximum file size of 4 GB and no built-in support for file
permissions, journaling, or encryption.
3. Usage:
Commonly used for USB drives, memory cards, and external storage due to
its cross-platform compatibility.
NTFS (New Technology File System):
3. Usage: The default file system for modern Windows installations and internal
drives.
Ext2/Ext3 (Second Extended/Third Extended File System):
2. Ext3 Characteristics: Adds journaling on top of Ext2, improving crash recovery
while remaining backward compatible.
3. Usage: Historically the default file systems for many Linux distributions.
APFS (Apple File System):
1. Characteristics: Optimized for SSDs and flash storage, offering improved
performance and efficiency. It is the primary file system for macOS and iOS
devices, replacing HFS+.
ReFS (Resilient File System):
2. Usage:
Mainly used for high-capacity storage systems and servers requiring robust
data protection.
Conclusion:
Different file systems offer various features, performance levels, and compatibility
with different operating systems. The choice of a file system depends on the
specific requirements, platform compatibility, security needs, and intended use
(such as internal storage, external drives, or specialized environments). Each file
system has its advantages and limitations, and selecting the right file system is
crucial for optimal data management and storage within an operating system or
computing environment.