Final Notes
Operating System
Chapter 1 - Overview of Operating System (8 Marks)
Operating System 🔴
An operating system (OS) is software that manages all the hardware and software on a computer
or device.
It acts as a bridge between the user and the computer's hardware, making it easier for users
to interact with the system.
The OS controls tasks like running applications, managing files, and handling input/output
operations.
User Interface: It provides an easy way for users to interact with the computer through
graphical interfaces or commands.
Program Execution: It helps programs run by managing their tasks and giving them the
resources they need.
File Management: It organizes and stores files in a structured way, allowing easy access and
retrieval.
Security and Protection: It keeps data safe from unauthorized access and prevents the system
from crashing.
1. Process Management: It handles the creation, scheduling, and termination of processes.
2. Memory Management: It allocates and tracks the memory used by programs and makes sure they don't use more memory than allowed.
3. File Management: It handles the storage, organization, and retrieval of files on the
computer.
4. Device Management: It controls input/output devices like the keyboard, mouse, printer, etc.,
ensuring they work smoothly.
5. Security and Access Control: Protects system resources and data, allowing access only to
authorized users and programs.
6. User Interface: Provides a way for users to interact with the computer, like using a GUI or
command line.
Since resources are limited, multiple users or programs may need the same resources, such as
memory and CPU, at the same time.
The operating system makes sure that all processes get the resources they need without
problems like deadlocks.
The operating system also manages memory efficiently through virtual memory techniques.
It uses file system management to create, delete, and modify files and directories on storage
devices.
Additionally, network management techniques are used by the operating system to manage
network bandwidth efficiently.
Batch Operating System 🔴
A batch operating system is an operating system that processes jobs in groups, called
batches, without requiring user interaction during execution.
A batch is a collection of similar jobs grouped together to be processed without any user
input.
In a batch operating system, jobs are lined up in a queue and processed one by one
automatically by the OS.
Once a job is submitted, the user does not interact with it until it is finished. The OS
takes care of executing the tasks in the background.
Batch OS is useful for handling large volumes of repetitive tasks, reducing the need for user
involvement.
It is commonly used for tasks like payroll processing, report generation, or data analysis,
where automatic execution of large jobs is needed.
Multiprogramming Operating System 🔴
In multiprogramming, several programs are kept in main memory at the same time so that the CPU always has something to execute.
The scheduler decides which programs to move into the ready queue.
The ready queue stores multiple programs in main memory, waiting to be executed.
Since there is only one processor, only one program is executed at a time.
The CPU switches between these programs to ensure it is always busy, which increases overall
efficiency.
Example: A user can run multiple applications like Word, Excel, and Access on a computer at
the same time.
Advantages:
1. Better CPU Utilization
2. Higher Throughput
This switching happens so fast that users can interact with each program as if it’s running
continuously.
Time Sharing Operating System 🔴
Time-sharing systems provide direct communication between users and the computer, giving each user the feeling that they have their own CPU.
The system lets multiple users share computer resources at the same time.
The operating system gives each user a small time slice of CPU time. Once a user's time slice
ends, the CPU moves to the next user.
The time slice is so short that users experience minimal delay, giving the impression of
exclusive CPU use.
The main goal of a time-sharing system is to reduce response time and provide fast, efficient
user interaction.
In the above figure, user 5 is active while users 1, 2, 3, and 4 are in the waiting state and user 6 is in the ready state.
Multiprocessor Operating System 🔴
A multiprocessor operating system runs on a system with two or more processors working in parallel.
These processors share common computer resources like the bus, clock, memory, and peripheral devices.
The operating system manages tasks in a multiprocessor system by assigning different tasks to
each processor.
Programs for multiprocessor systems are often threaded, meaning they are divided into smaller
parts that can run separately.
Multiple CPUs work together to divide a task, speeding up its completion. Once all parts are
finished, the results are combined to produce the final output.
1. Improved Reliability
2. Increased throughput
3. Cost Efficiency
4. Scalability
5. Faster Execution
Real Time Operating System 🔴
A real-time operating system is designed to process tasks within strict deadlines.
Types:
1. Hard:
Missing a deadline can cause the whole system to fail.
For example, in medical devices like pacemakers, it’s crucial to meet deadlines to
ensure patient safety.
2. Soft:
Missing a deadline may affect performance but won’t cause the system to fail.
For example, in video streaming, slight delays may reduce quality but won’t stop the
stream.
Applications:
Simulations
Military applications
Distributed Operating System 🔴
A distributed operating system manages a group of independent computers connected by a network and makes them appear as a single system.
Each computer has its own processor, memory, and storage, but they are linked through a network.
The operating system manages tasks and resources across all the computers, making sure they
share data and work together smoothly.
Users interact with the system as if it were a single machine, even though the work is being
done on different computers.
The system is fault-tolerant, meaning if one computer stops working, others can take over its
tasks without causing any issues.
Advantages include better performance, sharing resources, and flexibility, as many computers
can contribute to completing big tasks.
In summary, a distributed operating system helps multiple computers work together, making
tasks faster and more reliable.
Multiprogramming | Multitasking
Allows multiple programs to use the CPU at once. | Allows user interaction while running multiple tasks.
Reduces CPU idle time and increases throughput. | Runs multiple processes at the same time to enhance CPU and system efficiency.
The CPU quickly switches between different programs or processes in a multiuser environment. | In a single-user environment, the CPU switches between processes of various programs.
Difference between Time sharing system and Real time system 🟠 (2M - S-23)
Real-Time Operating System (RTOS) | Time-Sharing Operating System (TSOS)
Built for tasks that need instant responses | Built to share resources among many users or tasks
Gives quick and reliable responses | Responses can take longer and vary in time
Used in critical systems like medical devices and robots | Used in general computing like desktops and servers
Difference between CLI based OS and GUI based OS 🟢 (4M - W-22, W-23, S-22)
Command-Line OS (CLI) | Graphical User Interface OS (GUI)
Uses less memory and power | Uses more memory and power
Provides full control over the system | Limited control compared to CLI
Efficient for repetitive tasks | Best for visual tasks and navigation
Operating System Services 🔴
1. User Interface
2. Program Execution
3. I/O Operations
4. File System Manipulation
5. Communication
6. Error Detection
7. Resource Allocation
8. Accounting
System Calls 🔴
A system call is the way a program requests a service from the operating system.
System calls act as an interface between a program and the OS, allowing programs to request services or access hardware resources that are controlled by the OS.
System calls handle various functions, including file operations, process control, and
communication.
Types:
1. Process Control:
Create/Terminate process
Load/Execute process
End/Abort process
2. File Management:
Create/Delete file
Open/Close file
Read/Write file
3. Device Management:
Request/Release device
Read/Write device
4. Information Maintenance:
Get/Set time or date
5. Communication:
Create/Delete communication connection
Send/Receive messages
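A hedged sketch of how a few of these calls surface through Python's os wrappers; fork/wait/write are POSIX calls, so this runs only on Unix-like systems:

```python
# Illustrative only: process-control and I/O system calls via Python's os module.
import os
import time

pid = os.fork()                  # process control: create a child process
if pid == 0:
    os.write(1, b"child writing via the write() system call\n")  # file/device I/O
    os._exit(0)                  # process control: end the child immediately
else:
    os.waitpid(pid, 0)           # process control: wait for the child to finish
    print("parent resumed at", time.ctime())  # information maintenance: get time
```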
Components of Operating System 🟢 (4M - W-19, W-22, W-23, S-22, S-23, S-24)
1. Process Management:
A process requires resources like CPU time, memory, files, and I/O devices to execute.
These resources can be allocated when the process is created or during its execution.
The operating system handles process creation and deletion, suspension and resumption, and provides mechanisms for process synchronization and communication.
2. Memory Management:
Main memory is a large, fast storage space that holds data and programs.
It consists of smaller units called words or bytes, each with a unique address.
The CPU accesses main memory directly to fetch instructions and read/write data.
3. File Management:
Files are stored on secondary storage like disks or tapes for long-term use.
Examples of storage media include magnetic disks and optical disks; each has unique features like speed, capacity, and data transfer rate.
Files are organized into directories to make them easier to find and manage.
4. Device Management:
It manages input and output devices like printers, scanners, and drives.
The operating system uses device drivers to communicate with these devices.
Device drivers convert OS data into a format that devices can process, like laser pulses
for a printer.
5. Secondary Storage Management:
Secondary storage is used to store data and programs that don't fit in main memory.
It is also necessary because main memory loses data when the power is off.
Common secondary storage devices include disks and tapes, which store programs and files
until they are needed.
Operating System Tools 🟢 (6M - W-19, W-22, W-23, S-22, S-23, S-24)
1. User Management:
The operating system creates, modifies, and deletes user accounts on the system.
The system tracks user actions, such as login/logout, file access, and command execution,
for auditing or security purposes.
Users are organized into groups, making it easier to assign permissions to multiple users
at once.
2. Device Management:
The operating system manages the allocation of input/output devices, such as printers and
scanners.
It ensures smooth data transfer between the device and the operating system.
3. Performance Monitor:
The performance monitor tracks CPU usage, memory usage, disk activity, and other system
resources.
It notifies users about potential problems, such as high CPU usage or memory leaks.
4. Task Scheduler:
The task scheduler decides which process gets CPU time and when.
The scheduler ensures that all processes get a fair share of the CPU, preventing any one
process from monopolizing resources.
5. Security Policy:
The security policy specifies rules and guidelines for protecting system data and
resources.
The system ensures that users authenticate themselves using passwords, biometrics, or
other methods before gaining access.
Data is protected through encryption, ensuring it remains secure even in the event of a
breach.
Process 🔴
A process is a program in execution.
Process States:
1. New:
The process is being created and initialized. It is not yet ready to run.
Moves to the Ready state when it has been set up and is ready to be scheduled.
2. Ready:
The process is waiting in memory for CPU time. It is ready to run but not currently
executing.
Moves to the Running state when the CPU scheduler selects it for execution.
3. Running:
The CPU is currently executing the process's instructions.
Can move to the Waiting state if it needs to wait for an I/O operation to complete, or to
the Ready state if it is preempted by the scheduler. Moves to the Terminated state when it
finishes execution.
4. Waiting:
The process is waiting for an event or resource, such as an I/O operation, to complete.
Moves to the Ready state once the event or resource becomes available and the process is
ready to resume execution.
5. Terminated:
The process has finished execution and is removed from the system.
Process Control Block (PCB) 🔴
Process State: It indicates the current state of a process. The state can be new, ready, running, waiting, or terminated.
Process Number: Each process is associated with a unique number, known as the process identification number (PID).
Program Counter: It indicates the address of the next instruction that needs to be executed
for the process.
CPU Registers: Includes information about various registers used by the CPU, such as
accumulators, stack pointers, and general-purpose registers.
Memory Management Information: Contains details about memory allocation, like base and limit
registers, page tables, or segment tables.
Accounting Information: Records data about CPU usage and time limits.
I/O Status Information: Includes the I/O devices allocated to the process and a list of open files.
Scheduling Queues 🔴 (4M - S-22)
Job Queue: Stores all processes that are waiting to be processed by the operating system.
Ready Queue: Contains processes that are ready to run and waiting for CPU time.
Device Queue: Contains processes that are waiting for an input/output device, such as a
printer or disk, to become available.
Types of Schedulers 🔴
1. Long Term Scheduler:
The long term scheduler selects programs from the job pool and loads them into the main memory.
It controls the degree of multiprogramming, which refers to the number of processes loaded
into memory at one time.
I/O Bound Processes: These processes spend more time performing input/output
operations.
CPU Bound Processes: These processes spend more time doing computations with the CPU.
The long term scheduler balances the system by loading both I/O bound and CPU bound
processes into the main memory.
When it selects a process, the process's state changes from new to ready.
2. Short Term Scheduler:
Also known as the CPU scheduler, it selects processes that are ready for execution from the ready queue.
The short term scheduler executes more frequently than the long term scheduler.
When it selects a process, the process's state changes from ready to running.
3. Medium Term Scheduler:
The medium term scheduler comes into play when a running process is blocked due to an interrupt.
It swaps out the blocked process and stores it in a queue for blocked and swapped-out
processes.
When there is space available in the main memory, the medium term scheduler looks at the
list of swapped-out but ready processes.
It selects one process from that list and loads it into the ready queue.
The job of the medium term scheduler is to select a process from the swapped-out process queue and load it into the main memory.
The medium term scheduler works closely with the long term scheduler to manage which
processes are loaded into the main memory.
Differences between Long term, Medium term and Short term Scheduling 🟠 (4M - W-22)
Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
Controls the degree of multiprogramming | Provides less control over multiprogramming | Reduces the degree of multiprogramming
Selects processes from the pool and loads them into memory for execution | Chooses processes ready for CPU execution | Swaps processes in and out of memory, continuing execution later
Manages process transition from "new" to "ready" state | Manages process transition from "ready" to "executing" state | No specific process state transition management
Context Switch 🔴
A context switch is a mechanism that saves and restores the CPU's state (or context) in a Process Control Block (PCB), allowing the process to resume execution later.
State Save: The CPU's current state (register values, program counter, etc.) of the
process being removed is saved into its PCB.
State Restore: The CPU's state is restored from the PCB of the process that is about to
execute.
The context switch ensures that when a process is paused and then resumed, it can continue
from where it left off.
This process helps in managing multiple processes, allowing the CPU to switch between them
efficiently.
Inter-Process Communication (IPC) 🔴
There are two main models of IPC: shared memory and message passing.
1. Shared Memory:
The Shared Memory Communication Model allows multiple processes to exchange data
through a common memory space.
All processes involved can read from or write to this shared memory area.
Since processes directly access the memory, this method is very fast.
Without synchronization, processes might read incorrect data if they access shared
memory at the same time.
It is commonly used when processes are running on the same computer, especially in
systems with multiple processors.
Examples:
Client and server applications within the same machine sharing data.
Video games where multiple components (like graphics and physics engines) need to
share information quickly.
The shared memory model is great for fast communication but requires careful
synchronization to ensure data integrity.
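A minimal sketch of the shared memory model using Python's multiprocessing module; the counter task and names are illustrative, not from the notes:

```python
# Illustrative only: two processes updating one shared counter.
from multiprocessing import Process, Value, Lock

def worker(counter, lock):
    for _ in range(1000):
        with lock:               # synchronization protects the shared data
            counter.value += 1   # both processes write the same memory word

if __name__ == "__main__":
    lock = Lock()
    counter = Value("i", 0)      # an integer placed in shared memory
    procs = [Process(target=worker, args=(counter, lock)) for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)         # 2000; without the lock, updates could be lost
```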
2. Message Passing: (4M - S-23)
The Message Passing Communication Model allows processes to communicate by sending and
receiving messages.
Messages can contain data or information, and are exchanged through the operating
system.
The operating system is involved in delivering messages between processes, which makes
it slower compared to shared memory.
Kernel intervention ensures that the communication is managed properly and securely.
The message passing model is easier to synchronize because each process communicates
explicitly.
Examples:
Chat applications where each message is sent from one user to another.
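A matching sketch of message passing using a multiprocessing Queue, where the kernel carries each message between the two processes (names are illustrative):

```python
# Illustrative only: one process sends, the other receives.
from multiprocessing import Process, Queue

def sender(q):
    q.put("hello from the sender process")   # message handed to the kernel

def receiver(q):
    print("received:", q.get())              # blocks until a message arrives

if __name__ == "__main__":
    q = Queue()
    s = Process(target=sender, args=(q,))
    r = Process(target=receiver, args=(q,))
    s.start(); r.start()
    s.join(); r.join()
```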
Thread 🔴
A thread is the smallest unit of execution within a process. It is a sequence of instructions
that can be scheduled and executed by the CPU.
A process can have multiple threads, and they share the same memory space and resources of
the process but execute different tasks.
Advantages of Threads:
Faster Execution: Threads in the same process share resources, making them faster and more
efficient than separate processes.
Resource Sharing: Threads share memory, so they use fewer resources compared to separate
processes that need their own memory.
Improved Responsiveness: Multiple threads can work on different tasks at the same time,
making the application more responsive.
Better CPU Utilization: Threads can run on different CPU cores, boosting performance and
CPU use.
Simplified Communication: Threads in the same process can easily share data, making
communication simpler than between separate processes.
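A small illustrative sketch of several threads inside one process sharing the same data, with a lock guarding the shared list (the task itself is invented for the example):

```python
import threading

results = []                     # shared by every thread of this process
lock = threading.Lock()

def task(name):
    with lock:                   # threads share memory, so guard the list
        results.append(name + " done")

threads = [threading.Thread(target=task, args=("thread-" + str(i),)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)                   # all three threads wrote to the same list
```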
Types of Threads:
1. User-Level Threads:
User-level threads are managed by user libraries and the application, with no direct involvement from the operating system (OS).
The OS is unaware of these threads, so all management like creation and scheduling is handled
by the user program.
Advantages:
Thread creation and switching are fast because no kernel intervention is required.
They can be implemented even on an operating system without thread support.
Disadvantages:
If one thread blocks, the entire process is blocked because the OS sees it as a single
thread.
Not ideal for multi-core systems, as the OS can't use multiple cores for user-level
threads.
2. Kernel-Level Threads:
Kernel-level threads are managed by the OS, which is responsible for their creation, scheduling, and management.
The OS has full knowledge of the threads and can schedule them on different CPU cores.
Advantages:
If one thread blocks, the OS can still run other threads in the same process.
Disadvantages:
Higher overhead due to the need for system calls for management.
Multithreading 🔴
Multithreading is a technique where a single process is divided into multiple threads that
run at the same time (concurrently).
Each thread performs a specific task, allowing the process to carry out multiple operations
simultaneously, which boosts efficiency and performance.
Since threads share the same memory and resources of the process, communication between them
is faster and easier.
1. Responsiveness:
Multithreading keeps programs smooth and responsive, even during long tasks.
Example: A browser lets you use it while loading images in the background.
2. Resource Sharing:
Threads share memory and resources within the same process, making communication
easier.
3. Economy:
Threads are cheaper and faster to create and manage than processes.
Switching between threads is quicker than switching between processes.
4. Scalability:
Threads can run on multiple processors at the same time, improving speed.
Multithreading Models:
1. Many-to-One Model:
Many user threads are mapped to a single kernel thread.
If the kernel thread is blocked, all user threads are also blocked.
Even with multiple processors, only one processor will be used since there is only one
kernel thread.
Advantages:
1. Easier to implement because all user-level threads are mapped to a single kernel
thread.
2. Switching between threads is faster since it does not involve kernel-level context
switching.
Disadvantages:
1. Only one thread can execute at a time, so it does not take full advantage of
multiprocessor systems.
2. One-to-One Model:
Each time a user thread is created, a corresponding kernel thread must be created.
Since each user thread is mapped to a different kernel thread, if one thread is blocked,
the others continue running.
Each kernel thread can run on different processors, allowing better use of multiple
processors.
Advantages:
1. Each user-level thread is paired with a kernel thread, allowing true parallel execution
on multiprocessor systems.
2. If one thread blocks, others can continue executing because they are independent.
Disadvantages:
1. More overhead for managing multiple kernel threads, including context switching.
2. Each thread requires its own kernel resources, which can strain system resources.
3. Many-to-Many Model:
Many user threads are mapped to an equal or smaller number of kernel threads.
If a user thread makes a blocking system call, other threads are not blocked.
Advantages:
1. Many user threads can be created, and their kernel threads can run in parallel on multiprocessors.
2. Can adapt to the number of processors available, making better use of system resources.
Disadvantages:
1. It is more complex to implement than the other models.
2. Managing the mapping of user threads to kernel threads can create overhead, potentially slowing down performance.
Process Commands:
1. ps
The ps command displays information about the currently running processes.
Example: $ ps – shows all running processes for the current user.
-e : This option displays all processes, including both user and system processes.
2. wait
The wait command pauses the execution of a script or process until the specified process
finishes.
Example: wait 1234 – waits for the process with id 1234 to complete.
3. sleep
The sleep command pauses execution for a specified amount of time.
Syntax: sleep [number][suffix]
4. kill
The kill command sends a signal to a process, usually to terminate it.
Example: kill 1234 – terminates the process with the PID 1234.
5. exit
The exit command ends the current shell session or script.
Syntax: exit
Objectives of CPU Scheduling:
Fair Allocation: Scheduling ensures that all processes are given a fair amount of CPU time, preventing one process from monopolizing the CPU.
Maximize Throughput: It aims to increase the number of processes completed in a given time
period, improving system performance.
Minimize Waiting Time: Scheduling helps reduce the time processes spend waiting for CPU
access, leading to faster execution.
Minimize Response Time: In interactive systems, scheduling aims to respond to user inputs as quickly as possible.
Prioritize Critical Tasks: It ensures that important or high-priority processes get CPU time
before less critical ones.
Maintain System Stability: Good scheduling prevents overloads and maintains a balanced
system.
Preemptive Scheduling | Non-Preemptive Scheduling
Resources are allocated for a limited time. | Resources are held until the process completes or waits.
Examples: Round Robin, Shortest Remaining Time First. | Examples: First Come First Serve, Shortest Job First.
Scheduling Criteria:
1. CPU Utilization:
The CPU should be kept as busy as possible.
2. Throughput:
Throughput is the number of processes completed per unit of time.
For long processes, throughput might be one process per time unit, while for short processes, it could be ten or more.
3. Turnaround Time:
The time interval from the submission of a process to its completion is called the turnaround time.
It includes waiting to enter memory, time spent in the ready queue, CPU execution time,
and I/O operations.
4. Waiting Time:
Waiting time is the total time a process spends in the ready queue before it gets CPU
time.
If it needs resources while executing, it may go into a waiting state until those
resources are ready.
5. Response Time:
Response time is the time from when a request is submitted until the first response is
received.
It focuses on how quickly the system responds, not on the completion of the entire
process.
A process can produce early output while continuing to compute new results.
Describe I/O burst and CPU burst cycle with neat diagram 🟠 (4M - W-19)
CPU burst cycle: The time when a process is actively using the CPU to execute instructions.
I/O burst cycle: The time when a process is busy performing input/output (I/O) operations.
A process alternates between CPU execution and I/O operations during its execution.
It starts with a CPU burst cycle when the CPU is assigned to the process.
After the CPU burst, it enters an I/O burst cycle to perform I/O tasks.
The process keeps switching between CPU burst cycles and I/O burst cycles repeatedly.
The complete execution of a process starts with CPU burst cycle, followed by I/O burst cycle,
then followed by another CPU burst cycle, then followed by another I/O burst cycle and so on.
The process ends with a final CPU burst cycle that completes its execution and sends a
request to terminate.
First Come First Serve (FCFS) 🔴
FCFS is a scheduling algorithm where processes are executed in the order they arrive in the ready queue.
How it works:
Each process runs until it is completed before the next one starts.
Advantages:
Fair: Each process is treated equally, and no process is skipped.
Disadvantages:
Convoy Effect: If a long process arrives first, it delays all subsequent processes,
leading to inefficient CPU utilization.
No Prioritization: FCFS doesn't consider the priority or burst time of processes, which
can lead to poor performance for short tasks.
Example:
Process | Arrival Time | Burst Time
P1 | 0 | 7
P2 | 1 | 4
P3 | 2 | 10
P4 | 3 | 6
P5 | 4 | 8
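A short sketch that computes waiting and turnaround times for the FCFS table above:

```python
# Runs the FCFS example: (name, arrival time, burst time).
procs = [("P1", 0, 7), ("P2", 1, 4), ("P3", 2, 10), ("P4", 3, 6), ("P5", 4, 8)]

time = 0
for name, arrival, burst in sorted(procs, key=lambda p: p[1]):  # order of arrival
    time = max(time, arrival)      # CPU idles until the process arrives, if needed
    waiting = time - arrival       # time spent in the ready queue
    time += burst                  # run to completion (non-preemptive)
    print(name, "waiting =", waiting, "turnaround =", time - arrival)
# Average waiting time = (0 + 6 + 9 + 18 + 23) / 5 = 11.2
```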
Shortest Job First (SJF) (4M - W-22, 6M - W-19, W-22, W-23, S-22, S-23, S-24)
SJF is a scheduling algorithm that selects the process with the shortest burst time
(execution time) to execute next.
How it works:
The process that has the smallest CPU burst time is selected for execution first.
Once a process completes, the scheduler selects the next shortest process.
Types:
Non-Preemptive SJF: Once a process starts executing, it runs to completion even if a shorter process arrives.
Preemptive SJF (Shortest Remaining Time First): If a new process with a shorter burst time arrives, it preempts the current process.
Advantages:
Minimizes Average Waiting Time: By executing shorter processes first, it minimizes the
time processes spend waiting in the queue.
Efficient for Batch Systems: Works well when processes have known burst times ahead of
time.
Disadvantages:
Starvation: Long processes might never get executed if there are always shorter
processes arriving.
Difficult to Predict: In practice, it's hard to know the exact burst time of a process
in advance.
Example:
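A minimal non-preemptive SJF sketch; the four processes and their burst times (6, 8, 7, and 3, all arriving together) are assumed values for illustration:

```python
procs = [("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)]   # (name, burst time)

time = 0
for name, burst in sorted(procs, key=lambda p: p[1]):  # shortest burst first
    print(name, "waiting =", time)
    time += burst
# Order: P4, P1, P3, P2; average waiting time = (0 + 3 + 9 + 16) / 4 = 7
```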
Round Robin (RR) (4M - W-19, 6M - W-22, S-23, S-24)
A small unit of time called a time quantum or time slice is used for pre-emption of a
currently running process.
The ready queue is implemented as a circular queue, where the CPU is given to each process in a first-come, first-served manner for a specific time period (time quantum).
When a process enters the system, it is added to the end of the queue. The CPU scheduler
selects the first process at the head of the queue and assigns the CPU to it for the
duration of the time quantum.
If a process finishes before the time quantum ends, the CPU is released and the next
process in the queue is given the CPU.
If a process doesn’t finish within the time quantum, it is preempted (paused), moved to
the end of the queue, and the CPU is given to the next process.
Newly created processes are added to the end of the queue, ensuring they get their
turn.
Disadvantages:
Longer Wait Times: Processes may wait longer to get CPU time.
Frequent Context Switches: Switching between processes takes time and resources.
Large Gantt Chart: If the time slice is too short (like 1 ms), the Gantt chart can
become unwieldy.
Example:
Process | Burst Time
P1 | 24
P2 | 3
P3 | 3
Time quantum: 4 ms
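A sketch simulating Round Robin on the example above (all three processes assumed to arrive at time 0):

```python
from collections import deque

quantum = 4
queue = deque([("P1", 24), ("P2", 3), ("P3", 3)])   # (name, remaining burst)

time, completion = 0, {}
while queue:
    name, remaining = queue.popleft()
    run = min(quantum, remaining)               # run for at most one time slice
    time += run
    if remaining > run:
        queue.append((name, remaining - run))   # preempted: back of the queue
    else:
        completion[name] = time
print(completion)   # {'P2': 7, 'P3': 10, 'P1': 30}
# Waiting time = completion - burst: P1 = 6, P2 = 4, P3 = 7 (average about 5.67)
```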
Priority Scheduling 🔴
Priority Scheduling is a scheduling algorithm where each process is assigned a priority.
The process with the highest priority is executed first.
How it works:
Each process is given a priority value, either by the system or the user.
Processes with higher priority values are scheduled before those with lower priority
values.
If two processes have the same priority, they are scheduled based on their arrival time
(FCFS can be used as a tie-breaker).
Types:
Preemptive Priority Scheduling: If a new process arrives with a higher priority than the currently running process, the current process is preempted, and the new process is executed.
Non-Preemptive Priority Scheduling: The running process keeps the CPU until it finishes, even if a higher-priority process arrives.
Advantages:
Efficient: Can give priority to important tasks, improving system responsiveness for
critical tasks.
Disadvantages:
Starvation: Low-priority processes may never get executed if there are always higher-
priority processes.
Example:
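A minimal non-preemptive priority scheduling sketch; the burst times and priorities are assumed values, taking a lower number to mean higher priority:

```python
procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
# (name, burst time, priority) -- illustrative values only

time = 0
for name, burst, priority in sorted(procs, key=lambda p: p[2]):
    # Python's sort is stable, so equal priorities keep FCFS order
    print(name, "waiting =", time)
    time += burst
# Order: P2, P5, P1, P3, P4; average waiting time = 8.2
```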
Multilevel Queue Scheduling 🔴
The system maintains multiple queues, each with a specific priority level.
Each queue can use different scheduling algorithms like FCFS, SJF, or Priority Scheduling.
Once a process enters a queue, it remains in that queue for the entire duration of its
execution.
Advantages:
Fairness: Important processes can be given higher priority with dedicated queues.
Disadvantages:
Inflexible: Once a process is assigned to a queue, it cannot move between queues, which
may not be ideal in some cases.
Example:
Deadlock 🔴
When a process requests resources and they are unavailable, it enters a waiting state.
Sometimes, a waiting process cannot proceed because the resources it needs are held by other
waiting processes, leading to a situation called deadlock.
Deadlock occurs when a process requests resources held by another waiting process, which, in
turn, is waiting for resources held by yet another process. As a result, no process can
execute its task.
Example:
Consider a system with three disk drives and three processes. If each process is allocated
one disk drive, there are no drives left. If all three processes then request an additional
disk drive, they will enter a waiting state, causing a deadlock. None of the processes can
continue until one releases the disk drive it is holding.
Conditions for Deadlock:
1. Mutual Exclusion:
At least one resource is held in a non-sharable mode, meaning only one process can use it at a time.
If another process requests the same resource, it must wait until the resource is released.
2. Hold and Wait:
A process holds at least one resource while waiting for additional resources held by other processes.
3. No Preemption:
A resource can only be released voluntarily by the process holding it after completing its
task.
4. Circular Wait:
There must be a circular chain of processes, where each process is waiting for a resource
held by the next process in the chain.
Deadlock Prevention 🔴
1. Eliminate Mutual Exclusion:
Mutual exclusion occurs when a resource can only be used by one process at a time.
Deadlocks can happen due to this condition, as processes wait for the resource to
become free.
Sharable resources do not require mutual exclusion, so they cannot cause deadlocks.
2. Eliminate Hold and Wait:
A process should not hold some resources while waiting for additional resources.
Require a process to request all needed resources at the start before execution
begins.
A process must release all currently allocated resources before requesting new ones.
3. Eliminate No Preemption:
If a process holding resources requests another resource that is unavailable, all its
currently held resources are released (preempted).
The preempted resources are added to the list of resources the process is waiting for.
The process restarts only when all required resources (old and new) are available.
Preemption ensures resources are efficiently utilized and prevents deadlocks caused by
hold-and-wait conditions.
4. Eliminate Circular Wait:
Impose a fixed ordering on all resource types.
Each process must request resources in increasing order based on this sequence.
This prevents processes from holding resources in a circular chain, breaking the
deadlock cycle.
Example:
If a process needs two resources, A and B, it has to request both at once rather than holding one (A) and waiting for the other (B).
This prevents a situation where one process holds a resource and waits for another, which
could lead to a deadlock.
Deadlock Avoidance 🔴
Deadlock Avoidance is a method used by the operating system to prevent deadlock by managing
how resources are allocated to processes.
How it works:
The system carefully checks each resource request before granting it, ensuring the request
won’t cause a deadlock situation.
Resources are only given if they won't lead to a state where processes are stuck waiting
for each other (deadlock).
Safe State: Before granting a resource request, the system checks if the resources can
still be allocated safely, meaning all processes can eventually complete without deadlock.
Banker's Algorithm: This algorithm helps determine safe resource allocation. It checks the
available resources, each process's maximum resource needs, and what is currently
allocated. If granting a request leads to an unsafe state, it is not allowed.
Example:
Suppose a process P1 requests a resource R that is currently free.
The operating system checks if granting R to P1 will allow all processes to eventually complete (safe state). If not, it delays granting R to P1 to avoid a potential deadlock.
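A compact safe-state check in the spirit of the Banker's Algorithm; the matrices are a common textbook instance (5 processes, 3 resource types), not data from these notes:

```python
available = [3, 3, 2]
maximum   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocated = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
n = len(maximum)
need = [[m - a for m, a in zip(maximum[i], allocated[i])] for i in range(n)]

work, finished, order = available[:], [False] * n, []
progress = True
while progress:
    progress = False
    for i in range(n):
        # a process can finish if its remaining need fits in the free resources
        if not finished[i] and all(x <= w for x, w in zip(need[i], work)):
            work = [w + a for w, a in zip(work, allocated[i])]  # it releases all
            finished[i] = progress = True
            order.append("P" + str(i))
print("safe" if all(finished) else "unsafe", order)
# Prints: safe ['P1', 'P3', 'P4', 'P0', 'P2']
```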
Memory Partitioning 🔴
1. Fixed Partitioning:
Equal Size Partitioning: Main memory is divided into equal-size partitions. Any process that fits within this size can be loaded into any available partition.
Unequal Size Partitioning: Main memory is divided into partitions of different sizes. Each
process is loaded into the smallest partition that can accommodate it.
2. Variable Partitioning:
In this method, when a process enters main memory, it is allocated exactly the amount of memory it needs.
Therefore, the size of partitions can vary based on the requirements of each process.
The operating system maintains a table that indicates which parts of memory are available
and which are occupied.
When a new process arrives, the system searches for available memory space and allocates
it by creating a partition if there is enough space.
Example:
Consider the following table with processes and their required memory space:
Process | Memory Required
P1 | 20 MB
P2 | 14 MB
P3 | 18 MB
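A rough first-fit sketch of placing these processes into free holes; the hole sizes themselves are assumed for illustration:

```python
holes = [25, 15, 30]                       # free memory regions, in MB (assumed)
requests = [("P1", 20), ("P2", 14), ("P3", 18)]

for name, size in requests:
    for i, hole in enumerate(holes):
        if hole >= size:                   # first hole big enough wins
            holes[i] = hole - size         # the leftover stays as a smaller hole
            print(name, "-> hole", i, "| leftover:", holes[i], "MB")
            break
    else:
        print(name, "must wait (no hole is large enough)")
```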
Free Space Management Techniques 🟢 (4M - W-22, S-22, 6M - W-19, W-23, S-24)
1. Bitmap method: (4M - W-23)
The bitmap method, also known as the bit vector method, is a commonly used way to manage
free space.
In this method, each block on the hard disk is represented by a single bit (0 or 1).
For example, consider a disk having 16 blocks where block numbers 2, 3, 4, 5, 8, 9, 10,
11, 12, and 13 are free, and the rest of the blocks, i.e., block numbers 0, 1, 6, 7, 14
and 15 are allocated to some files.
The bit vector for this disk will look like this (taking 1 = free and 0 = allocated): 0011110011111100
The main advantage of this approach is that it is simple and efficient at finding the
first available block or a group of consecutive free blocks on the disk.
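A tiny sketch of scanning that bit vector for the first free block, using the same 1 = free convention:

```python
# The bit vector from the example above, for blocks 0 to 15.
bitmap = [0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0]

def first_free(bits):
    for i, b in enumerate(bits):
        if b == 1:
            return i        # index of the first available block
    return -1               # no free block on the disk

print(first_free(bitmap))   # 2, matching the free blocks 2-5 and 8-13
```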
2. Linked List:
The linked list method is another way to manage free space on a disk.
In this approach, all the free blocks are linked together in a chain.
The last free block points to null, indicating the end of the list.
For example, consider a disk having 16 blocks where block numbers 3, 4, 5, 6, 9, 10, 11,
12, 13, and 14 are free, and the rest of the blocks, i.e., block numbers 1, 2, 7, 8, 15
and 16 are allocated to some files.
If we maintain a linked list, then Block 3 will contain a pointer to Block 4, and Block 4
will contain a pointer to Block 5.
Virtual Memory 🟠 (2M - W-19, S-23)
Virtual Memory is a feature of the operating system that helps manage memory when there is
not enough physical memory (RAM) available.
It allows the system to temporarily move data from RAM to disk storage, creating more space
for active processes.
This separation between logical memory (what programs think they have) and physical memory
(actual RAM) lets programs use more memory than is physically available.
Programs are divided into fixed-size pages, which are retrieved from secondary storage (like a hard drive) and loaded into the main memory as needed.
Difference between Paging and Segmentation:
Paging | Segmentation
Process address space is divided into blocks called pages. | Process address space is divided into blocks called segments.
May cause internal fragmentation (wasting memory inside pages). | May cause external fragmentation (wasting memory between segments).
Logical address is divided into page number and page offset. | Logical address is divided into segment number and segment offset.
Data is stored using a page table. | Data is stored using a segmentation table.
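A worked sketch of the page number / page offset split named in the table; the page size and logical address are assumed values:

```python
PAGE_SIZE = 1024                             # assumed: 1 KB pages
logical_address = 5000                       # assumed example address

page_number = logical_address // PAGE_SIZE   # which page table entry to consult
offset = logical_address % PAGE_SIZE         # position inside that page
print(page_number, offset)                   # 4 and 904, since 5000 = 4*1024 + 904
```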
Fragmentation 🔴
Fragmentation occurs when free memory gets divided into small, non-contiguous blocks, making it hard to allocate large blocks to processes.
Types of Fragmentation:
1. Internal Fragmentation:
This happens when a process is given a block of memory that is larger than what it
actually needs.
2. External Fragmentation:
External fragmentation occurs when there is enough free memory to fulfill a process's
request, but the memory is spread across different non-contiguous blocks.
Page Fault 🔴
A page fault occurs when a program tries to access a page in memory that is not currently
loaded into the computer's main memory (RAM).
Page replacement algorithms 🟢 (6M - S-23)
FIFO (6M - S-22, S-24)
If there is no free space, replace the oldest page (the one that has been in the frame the longest) with the new page.
Keep count of how many times a page needs to be replaced (page faults).
LRU (Least Recently Used)
If there is no free space, find the page that hasn't been used for the longest time.
Replace the least recently used page with the new page.
Keep count of how many times a page needs to be replaced (page faults).
Optimal Page Replacement
If there is no free space, find the page that will not be used for the longest time in the future.
Replace the page that will not be needed for the longest time with the new page.
Keep count of how many times a page needs to be replaced (page faults).
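A short FIFO page replacement sketch that counts page faults; the reference string and frame count are assumed values for illustration:

```python
from collections import deque

reference = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3]   # assumed reference string
frames, fifo, faults = set(), deque(), 0

for page in reference:
    if page not in frames:          # page fault: the page is not in memory
        faults += 1
        if len(fifo) == 3:          # all 3 frames full: evict the oldest page
            frames.remove(fifo.popleft())
        frames.add(page)
        fifo.append(page)
print("page faults:", faults)       # 9 for this reference string
```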
File Attributes 🔴
1. Name: The symbolic file name, kept in human-readable form.
2. Identifier: The file system assigns a unique number or tag to each file, which helps identify it within the system.
3. Type: This shows the file type, which is important for systems that handle different types of
files.
4. Location: A pointer to the device and the position of the file on that device.
5. Size: This indicates the current size of the file (in bytes, words, or blocks) and may also include the maximum allowed size.
6. Protection: This information controls who can read, write, or execute the file.
7. Time, Date, and User Identification: This records when the file was created, last modified,
and last used. This data is useful for protection, security, and monitoring usage.
File Operations:
1. Creating a file
2. Writing to a file
3. Reading a file
4. Renaming a file
5. Deleting a file
6. Repositioning within a file
File Access Methods:
1. Sequential Access:
This is the most common way to access files, used by programs like text editors and compilers.
Read Operation: Reads data in sequence, one part after the other, automatically moving the
file pointer to the next part.
Write Operation: Writes data in sequence, adding new information to the end of the file and
moving the pointer to the end of the written data.
A sequential file can be reset to the beginning, and in some systems, programs can skip
forward or backward through records.
As shown in the diagram above, a file can be rewound (moved backward) from the current position to the beginning of the file, or it can be read or written in the forward direction.
2. Direct Access:
In direct access, files consist of fixed-length records that allow quick reading and writing in any order.
This method is based on the disk model and allows random access to file blocks.
File Structure: A file is viewed as a sequence of numbered blocks or records. For example,
you can directly read block 14, then block 53, etc.
Read/Write Operations:
The block numbers provided by the user are relative block numbers, starting from 0 for the
first block.
This system prevents users from accessing areas outside their files and lets the OS manage
where files are stored on disk.
Example:
Databases often use direct access for quick retrieval of specific records.
If a query requests certain information, the system calculates which block holds the data
and reads it directly.
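A small sketch of the block calculation hinted at above; the record and block sizes are assumed for illustration:

```python
BLOCK_SIZE = 4096                                 # assumed block size in bytes
RECORD_SIZE = 128                                 # assumed fixed record length
records_per_block = BLOCK_SIZE // RECORD_SIZE     # 32 records fit in one block

record_number = 1000                              # the record a query asks for
relative_block = record_number // records_per_block           # block 31
offset_in_block = (record_number % records_per_block) * RECORD_SIZE
print(relative_block, offset_in_block)            # read block 31, byte 1024
```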
File Allocation Methods:
1. Contiguous Allocation:
In the contiguous allocation method, each file occupies a set of contiguous (continuous) blocks of disk space.
This means that all parts of a file are stored one after another on the disk.
A file's location on the disk is defined by the starting address of its first block and
the length of the file (i.e., how many blocks it uses).
If a file starts at block b and has a length of n blocks, it will occupy blocks b, b+1,
b+2,..., b+n-1.
The directory entry for each file stores the starting address and the number of blocks
allocated to that file.
Advantages:
It supports both sequential and direct access to data, meaning files can be read in
order or accessed directly at any block.
Provides good performance because the file blocks are stored together.
Disadvantages:
It is difficult to find enough contiguous blocks of space for new or growing files.
2. Linked Allocation:
In this allocation method, each file is stored as a linked list of blocks. Each block has a pointer to the next block in the sequence.
The blocks do not need to be contiguous on the disk; they can be scattered anywhere.
The file directory holds pointers to the first and last blocks of each file, making it
easy to create new files by adding a directory entry.
When writing to a file, the system takes the first free block from the free space list and
writes to it. This block is then linked to the end of the file's chain of blocks.
To read a file, the system follows the chain by reading each block in sequence.
There is no external fragmentation with this method; however, around 1.5% of disk space is
used to store pointers instead of actual data.
If a pointer gets lost or damaged (e.g., due to bugs or hardware issues), it may lead to
accessing the wrong part of the file.
Linked allocation does not support direct access; it only allows sequential access,
meaning you have to read blocks in order.
This method requires more space for pointers, so "clusters" (groups of blocks) are
sometimes used to reduce the number of pointers, but this can lead to internal
fragmentation within clusters.
3. Indexed Allocation:
Indexed Allocation is a method of storing files where a special block, called an index
block, keeps track of the addresses of all the file’s data blocks.
Each file has its own index block that lists where all its parts are stored, making it
easy to find and access any part of the file directly.
The operating system can quickly locate any part of the file without scanning through the
entire file.
Advantages:
1. Direct Access: Any part of the file can be accessed quickly using the index block.
2. Efficient for Large Files: It works well for large files since data blocks don’t need
to be stored together.
Disadvantages:
1. Extra Storage for Index Blocks: Additional storage is needed to store the index blocks
for each file.
2. Limited Index Size: If the index block is small, it limits how many data blocks a file
can have, which may be an issue for very large files.
3. More Complexity: Managing and updating the index blocks adds complexity to the file
management system.
Directory Structures:
1. Single-Level Directory
2. Two-Level Directory
3. Tree-Structured Directory
1. Single-Level Directory:
All files are stored in a single directory, making the structure simple to implement.
Advantages:
Simple operations like creating, searching, deleting, and updating files are possible.
Disadvantages:
All files must have unique names. If two users try to name their files the same, it
causes a conflict.
If the number of files increases, searching for a specific file becomes slow and
inefficient.
It is not suitable for multi-user systems, as users cannot have their own directories.
2. Two-Level Directory:
This structure solves the problem of file name conflicts in single-level directories.
In this system, each user has their own User File Directory (UFD).
The system maintains a Master File Directory (MFD), which keeps track of all users and
their directories.
Advantages:
Two files can have the same name if they are in different user directories.
One user cannot access or modify another user’s directory without permission.
Disadvantages:
Users cannot easily share files located in another user's directory.
3. Tree-Structured Directory:
This is the most commonly used directory structure, especially in personal computers.
The structure looks like an upside-down tree, where the topmost directory is the root directory.
Each user has their own directory under the root, and they can further create
subdirectories.
Advantages:
The root directory is highly secure and accessible only by the system administrator.
It supports grouping and allows users to separate important files from unimportant
ones.
Disadvantages:
Searching for a file may require following a long path through several subdirectories.